Machine Learning Meets Economics

January 27, 2016 – Machine Learning

The business world is full of streams of items that need to be filtered or evaluated: parts on an assembly line, resumés in an application pile, emails in a delivery queue, transactions awaiting processing. Machine learning techniques are increasingly being used to make such processes more efficient: image processing to flag bad parts, text analysis to surface good candidates, spam filtering to sort email, fraud detection to lower transaction costs, etc. In this article, I show how you can take business factors into account when using machine learning to solve these kinds of problems with binary classifiers. Specifically, I show how the concept of expected utility from the field of economics maps onto the Receiver Operating Characteristic (ROC) space often used by machine learning practitioners to compare and evaluate models for binary classification.

I begin with a parable illustrating the dangers of not taking such factors into account. This concrete story is followed by a more formal mathematical look at the use of indifference curves in ROC space to avoid this kind of problem and guide model development. I wrap up with some recommendations for successfully using binary classifiers to solve business problems.

The Case of the Useless Model

Fiona the Factory Manager had a quality problem. Her widget factory was throwing away too many widgets that were overheating due to bad gearboxes. She hadn't been able to work with her gearbox supplier to increase their quality control, and she didn't have a non-destructive way to check whether a given gearbox would cause problems, so she asked Danielle the Data Scientist to build her a device which could evaluate the quality of a gearbox, to avoid using bad ones to build overheating widgets.

Danielle was happy to help, as she would be able to use some fancy machine learning algorithms to solve this problem, and within a month she proudly presented her results. "This is a quality scanner. Just point the camera at the gearbox and the readout will tell you if it is safe to use the gearbox in a widget," she told Fiona. "This little knob on the side can be used to tune it if you get too many false positives (bad gearboxes being marked as good) or false negatives (good gearboxes being marked as bad), although I've preset it to what should be the most accurate setting."

A few weeks later, Danielle dropped by to check up on the scanner. "Oh, we stopped using it," responded Fiona, "it gave us too many false negatives, so we were throwing away too many good gearboxes to make it worthwhile. I lose money when we throw away widgets, but gearboxes aren't cheap, and I can't afford to throw away working ones! We tried playing with the little tuning knob but in the end we had to turn it all the way down to stop losing money, and it never tells us the gearbox is bad, which is the same as not using the scanner."

Danielle was taken aback, as she had tested her scanner very carefully and it had achieved quite good results in the lab. Danielle started thinking that she herself had a quality problem on her hands. While she was digging into the problem with Fiona, Emily the Economist poked her head in the door. "I couldn't help but overhear and I feel like my skills might be applicable to your problem," she told them, "Mind if I try to help?" Danielle, stumped so far, accepted.

"So, how does your scanner work, and what does the knob do?" asked Emily.
"It's pretty simple, actually," explained Danielle, "I used a machine learning algorithm to train a model, which is a function that takes an image of a gearbox and assigns it a quality score: the higher the likelihood of the gearbox being good (i.e. a positive result), the higher the score. When you combine a model with a threshold value above which you say the gearbox is good, you get a classifier, so the model defines a family of possible classifiers. The knob on the scanner allows you to choose which classifier you want to use. Like this!" she walked up to the whiteboard and wrote out a simple equation. \[class(input) = \begin{cases} positive & \text{if $score(input) > threshold$} \\ negative & \text{otherwise} \end{cases}\] "I took all the photos of good and bad gearboxes and split them into two piles, one which I used to train models, and one which I used to test them. I built and tested a whole series of models and used the one with the highest AUC for the scanner." Emily, who had been nodding until this point, jumped in, "Wait, what does AUC mean?" "Oh, right," replied Danielle, "it's a measure of how good the model is. Let me explain… Whenever I build a model, I run it on the testing data for each setting of the threshold, and I measure the true positive rate (the percentage of good gearboxes correctly identified as positive) and the false positive rate (the percentage of bad gearboxes incorrectly identified as positive). I'll draw out a graph for you. Here you can see that when the threshold is low, the resulting classifier correctly identifies most of the good gearboxes, but tends to miss the bad ones, and vice versa when the threshold is high." [Author's note on the charts in this story: the model performance data was generated by sampling from realistic simulated class distributions, and the thresholds were scaled to lie between 0 and 100.] "We data scientists prefer to look at the same data a bit differently, though, so we can see the tradeoffs between the two measures more easily. What we do is we plot the true positive rate on the Y axis versus the false positive rate on the X axis. The resulting curve is called a Receiver Operating Characteristic, or ROC, curve, and this X-Y space is called ROC space. Each point on the ROC curve corresponds to the performance of the classifier at a given threshold. Here's the curve for the model in the scanner right now." "The ideal classifier is one that makes no mistakes, so the ideal model's 'curve' goes through the point the top left corner, representing a 100% true positive rate with a 0% false positive rate. A model producing classifiers which just randomly assign good and bad labels has a curve that just follows the diagonal line up and to the right." "So to get back to your question, we can boil down model goodness to one number with the AUC, which stands for the Area Under the Curve. The closer it is to 1, the better the model is, and anything below 0.5 is worse than random guessing. The model I used in the scanner had an AUC of 0.85, so I'm really puzzled that it didn't help Fiona! I mean, turning the knob all the way down means that she's setting the scanner all the way into the upper right corner of the space!" Fiona was pretty bored by this point in the conversation, so she was glad when Emily turned to her and asked, "What happens when you try to use the scanner with the threshold knob set just a little higher than the minimum?" 
"Hmm, at the lowest setting we tried before giving up, it tagged 15 gearboxes as bad, out of a batch of 5000, and when we tested them, only 1 of them actually was," answered Fiona after glancing at some records. "One overheating widget costs me $300 in wasted parts and labour, but using the scanner I threw away a bunch of gearboxes at $50 each. Unfortunately, testing destroys the gearbox so I can't use it after testing, which is why I had such high hopes for this scanner's ability to solve this problem for me." Emily thought about this for a minute and finally asked, "OK, a couple more questions: how much do you make per working widget, and on average how many widgets do you have to throw away per batch of gearboxes due to overheating problems right now?" "We sell widgets for $320, so net $20 of profit per working widget. As to how many bad gearboxes per batch, I'd say 250 is about right," said Fiona. Emily went to the whiteboard. "We now have enough information to figure out why this model is useless to Fiona even though it scores high marks in Danielle's book! What matters to Fiona is money lost or gained, so let's see if we can work out an equation we can plot that relates Fiona's utility (the economist's term for how much money she is making or losing) to the position of the knob in Danielle's scanner. Fiona told us that 5% of the gearboxes are bad, so we can figure out how many true and false positives and negatives are likely per batch of gearboxes for each point on the ROC curve. True and false negatives are worth -$50, since we throw away the gearbox when the scanner says it's bad. False positives are worth -$300 each, since we have to throw away the whole widget if we build one with a bad gearbox that the scanner missed. True positives are worth $20 each because we can sell a working widget and make back the cost of the parts with a bit of margin. The overall utility is the sum of the utilities of the various cases, scaled by the rate at which they occur." She fluidly wrote out the following equation, table and graph on the board as she spoke. \[utility(t) = ($20 \cdot TPR(t) \cdot 95\%) - ($50\cdot(1-TPR(t)) \cdot 95\%) - \\ ($300 \cdot FPR(t) \cdot 5\%) - ($50 \cdot (1-FPR(t)) \cdot 5\%)\] Actual condition of gearbox output at threshold = @t@ True Positive @utility = +$20@ @rate(t) = TPR(t) \cdot 95\%@ False Positive @utility = -$300@ @rate(t) = FPR(t) \cdot 5\%@ @utility =-$50@ @rate(t) = (1-TPR(t)) \cdot 95\%@ True Negative @utility = -$50@ @rate(t) = (1-FPR(t)) \cdot 5\%@ "As you can see, Fiona makes the most money when she sets the threshold knob all the way down to the minimum, which works out to using all the gearboxes and throwing some widgets away every day. Her profit per gearbox is $4 due to all the discarded widgets. Now let's see if we can relate this to your ROC curve, Danielle. Every point on the utility curve corresponds to a point on your ROC curve like this." Emily then demonstrated her unbelievable perspective drawing skills and produced the following plot on the whiteboard: "You can think of utility as being a z-axis coming out of your ROC-space graph. You can see that utility is a linear function of true positive rate and false positive rate, because the utilities of all the classifiers sit on an angled plane above the ROC space. I added a useful line for you on that plane in blue, representing the intersection of the angled plane with the utility associated with simply using every gearbox to build widgets. 
Let's see where that line lands in 2-dimensional ROC-space." Emily added a line to Danielle's original ROC curve.

Danielle, who had been furrowing her brow until that point, lit up. "I get it! You're saying that every classifier whose performance sits along that blue line in ROC space will give Fiona $4 per gearbox, which is the same utility as just using every gearbox, which is the same as setting the threshold to its minimum value. So what we need is a model with a ROC curve that crosses above that line."

"Luckily, I have just the one! While I was working on this project, I kept records of every experiment I did. I generated a model with a curve that looks like this, but I didn't use it because its AUC wasn't the highest." She added a green curve for the new model to the now-crowded ROC graph.

"That's great!" said Emily, adding a green line to her original utility curve graph. "Because your second model's ROC curve rises above the indifference curve, its utility curve also rises above the utility of setting the knob to zero." She punched numbers into her calculator, "With a model like that, you could set the knob to around 45 such that out of a batch of 5000 gearboxes, 171 would be flagged as being bad, of which 62 would be false negatives. That works out to more than twice the profits for Fiona."

"Hmm," said Fiona thoughtfully, "So you're saying that even though I'd be throwing away many more good gearboxes, using this scanner would more than make up for it by finding almost half of the bad ones… That sounds reasonable. I don't care if this model has a lower area under some curve if it helps me make more money! Danielle, if you can update your scanner, I'll give it another shot."

"Of course," replied Danielle, "and the next time we work together, let's do this analysis first, so we can avoid this kind of problem in the future."

A week later Fiona e-mailed Danielle and Emily with the news: the updated scanner had indeed more than doubled her profits for the widget production line!

(Update: Danielle and Emily come back in Machine Learning Meets Economics, Part 2 with The Case of the Augmented Humans!)

Analysis: Indifference Curves in ROC Space

As the story above shows, if the utilities of true and false positives and negatives are constants, then the expected utility (or business value) of a binary classifier is a linear function of its true positive rate (@TPR@) and its false positive rate (@FPR@). These four utility values and the class distribution (the proportion of positive and negative classes) together comprise the operating context for a classifier. If the operating context is known, a family of straight lines along which all points have the same expected utility can be plotted in ROC space, which is a plot of @TPR@ vs @FPR@. Such lines are known as indifference curves, because someone trying to maximize utility is indifferent to the choice between points on the same curve: they all have equal utility. Expected utility is the average of the utilities of the four possible outcomes, weighted by the probability of that outcome occurring.
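Before writing that average out symbolically (the general equation follows just below), here is a minimal sketch of the calculation in code, using the utilities and the 95%/5% class split from the story above as an assumed operating context. The function name, its signature and the default argument values are mine, chosen for illustration; the two calls at the end evaluate the "trivial" classifiers discussed in the next few paragraphs.

```python
def expected_utility(tpr, fpr, r_p=0.95, r_n=0.05,
                     u_tp=20.0, u_fn=-50.0, u_fp=-300.0, u_tn=-50.0):
    """Per-gearbox expected utility: the four outcome utilities, each weighted
    by how often that outcome occurs given the classifier's TPR/FPR and the
    class distribution (r_p positives, r_n negatives)."""
    return (u_tp * tpr * r_p            # true positives: sell a working widget
            + u_fn * (1 - tpr) * r_p    # false negatives: discard a good gearbox
            + u_fp * fpr * r_n          # false positives: build a widget doomed to overheat
            + u_tn * (1 - fpr) * r_n)   # true negatives: discard a bad gearbox

# The two "trivial" classifiers sit at the ends of the main diagonal of ROC space:
print(expected_utility(tpr=1.0, fpr=1.0))  # always-positive (use every gearbox):  4.0
print(expected_utility(tpr=0.0, fpr=0.0))  # always-negative (discard everything): -50.0
```

In the story, turning the knob all the way down corresponds to the always-positive corner of ROC space, which is where the $4-per-gearbox baseline that any useful classifier has to beat comes from.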
In the equation below, @r_p@ and @r_n@ are the fractions of positive and negative cases, respectively, in the business context:

\[E[u] = u_{tp} r_p {TPR} + u_{fn} r_p (1-{TPR}) + u_{fp} r_n {FPR} + u_{tn} r_n (1-{FPR})\]

Solving for a line in ROC space that goes through the point @(d,d)@ on the main diagonal yields the following relationship between @TPR@ and @FPR@, where @s@ is the slope of the line:

\[({TPR} - d) = ({FPR} - d) \cdot s \\ s = \frac{r_n(u_{tn}-u_{fp})}{r_p(u_{tp}-u_{fn})}\]

The equation for @s@ can be re-expressed to make it easier to reason about. The quantity @-(u_{tn}-u_{fp})@ can be called @c_n@, as it equals the cost of misclassifying a negative example compared to the utility of correctly classifying it. Similarly, @-(u_{tp}-u_{fn})@ can be called @c_p@. The slope @s@ then has the following compact form:

\[s = \frac{r_n c_n}{r_p c_p}\]

All lines with slope @s@ are indifference curves in ROC space, so all points along such a line have equal expected utility. This utility is computable by substituting @d@ for @TPR@ and @FPR@ in the utility equation. Every point in ROC space is intersected by one indifference curve, and because every classifier occupies a point in ROC space, every classifier is part of a set of classifiers which all have the same utility. For all values of @d@ from 0 to 1, the indifference curve traces out the classifiers which have the same utility as a positive-with-probability-@d@ classifier. All classifiers along the @d=0@ and @d=1@ curves have the same utility as the "trivial" always-negative and always-positive classifiers, respectively. When @s>1@, an always-negative classifier has higher utility than an always-positive one, and vice versa when @s<1@.

In order for a classifier to have an expected utility above and beyond that of a trivial one (i.e. for it to have any real business value), it must lie above both the @d=0@ and @d=1@ indifference curves in ROC space. Classifiers lying below both of these curves are somewhat peculiar in that they are so bad at classifying that the inverse of their output has a higher utility than either trivial classifier. Such a situation merits further investigation, as it suggests a problem with either the training or testing procedure (e.g. inverted labels in either the training or testing set) which, once corrected, could yield a valuable classifier.

The partition of ROC space above runs counter to an oft-repeated notion about ROC space whereby classifiers above the main diagonal have "good" performance and those below have "poor" performance. From an expected-utility perspective, and assuming no problems with the training and testing procedures, this is only true in the critical case where @s = 1@. This occurs when the ratio of the class proportions is the inverse of the ratio of the misclassification costs:

\[s = 1 \Longleftrightarrow \frac{r_p}{r_n} = \frac{c_n}{c_p}\]

For example, when there is 1 positive example for every 5 negative ones, then @s=1@ if and only if the misclassification of a positive example is 5 times more costly than the misclassification of a negative one. This relationship holds in the textbook case where the classes are balanced and have equal misclassification costs, but is not generally true of all business problems which can be addressed with binary classifiers. I have never seen a business process which included a step that randomly assigned labels to items in a stream. In most real-world situations, the baseline is either always-positive (e.g.
"there are costs but it's worth it overall") or always-negative (i.e. "it's too risky so we don't do this"), as one of these always has equal or better utility than randomly assigning labels. The good news is that class-proportion and misclassification-cost ratios should be reasonably easy to estimate for a given business context, giving at least some indication of how close @s@ is to 1. If @s@ is far from 1, a data science process of model development and selection which blindly maximizes the AUC (or many other general-purpose machine learning metrics) and then proceeds to threshold selection runs a real risk of producing a model with an optimal threshold that cannot beat a trivial classifier, and thus has no real business value. If @s@ can be estimated ahead of time, a data science project can increase its chances of business success by focusing on models whose ROC curves rise into the value-added zone of ROC space. Furthermore, estimating @s@ can be useful even before model selection, during problem selection. The further @s@ is from 1 for a given problem context, the smaller the value-added region in ROC space, and the harder it will be to develop a classifier able to beat a trivial baseline. This type of insight can help prioritize data science projects to focus resources on the projects most likely to succeed. Of course, such a prioritization approach would necessarily involve an investigation into the existing business-as-usual case: is the incumbent policy in fact a trivial always-positive or always-negative one? Or does the process under consideration already use some form of classifier like a set of hand-maintained rules which outperform trivial classifiers on utility and therefore set the baseline indifference curve even higher? The benefits of using machine learning to solve a problem can be quantified using the expected-utility equation by comparing the utility of the incumbent policy with that of a reasonably-achievable classifier and that of a perfect classifier. Prioritization can then focus on the tradeoff between the magnitude of the payoff and the estimated likelihood of success. Conclusions & Recommendations In this article I detailed a fictional situation in which the use of a general-purpose metric during model selection could lead to the deployment of a classifier without any business value. I then showed how the economic concept of expected utility could explain this problem, and in fact could have avoided this problem had it been incorporated into a model selection process. Finally, I showed that by estimating a single quantity (the slope of the indifference curve in ROC space) you can evaluate the difficulty of solving a business problem using a classifier, even before starting model development. My recommendations, if you are considering the use of a binary classifier to solve a business problem, are as follows: When evaluating a potential project, do a basic economic analysis of the problem-space: ask about the relative frequencies and misclassification costs of the various outcomes. This will help you figure out how difficult it is likely to be to beat the current baseline and if you do, how valueable that is likely to be. Potential leading questions are: "What does your process do right now and how much are you making or losing?" "How much would you make or lose with a perfect classifier that never made a mistake?" "How often does your process make a mistake right now, and do they tend to be false negatives or false positives?" 
When executing on a project, if you are able to compute expected utility, consider maximizing that directly instead of AUC or another general-purpose metric. If you are not confident about computing expected utility, at least estimate the slope of the indifference curve and focus on developing models whose ROC curve rises above the trivial indifference curves.

I have written a follow-up to this post in a similar format, if you enjoyed this one: Machine Learning Meets Economics, Part 2

Much has been written by others on evaluating machine learning models, using various metrics like overall accuracy (ACC), the Area Under the Curve (AUC), the Matthews Correlation Coefficient (MCC) and others. These are general-purpose metrics, and I have found very few articles written for practitioners which focus on the business value of such metrics, or on finding metrics designed to correlate with business value. I found the following works relevant while researching this article, although none of them quite present the same arguments I do here with respect to problem- and model-selection:

An Introduction to ROC Analysis - this very solid introduction to ROC analysis covers some of the same ground as the present article when discussing ROC convex hulls.

The Geometry of ROC Space: Understanding Machine Learning Metrics Through Isometrics - this paper explores families of linear curves which can be drawn through ROC space, without covering indifference curves.

A principled approach to setting optimal diagnostic thresholds: where ROC and indifference curves meet - this paper discusses the application of indifference curves in ROC space specifically in the context of threshold selection, without reference to model selection or problem selection.

A Unified View of Performance Metrics: Translating Threshold Choice into Expected Classification Loss - this long and thorough paper explores the relationship between the metric being optimized in model selection (i.e. AUC or others) and the threshold-setting policy, with a focus on situations where the operating context is unknown or unknowable during model development.

Illustrated Guide to ROC and AUC - this blog post is a gentle introduction to ROC space and mentions utility curves without linking the two with indifference curves.

Visualizing Machine Learning Thresholds to Make Better Business Decisions - this blog post also touches on some of the same issues without fully exploring the concept of indifference curves.

Evaluating Machine Learning Models - this free ebook mentions the problem of classifiers with high AUC being unable to beat baselines, but only in the context of unbalanced classes, without reference to misclassification costs.
Biogenic fabrication and characterization of silver nanoparticles using aqueous-ethanolic extract of lichen (Usnea longissima) and their antimicrobial activity Khwaja Salahuddin Siddiqi1, M. Rashid2, A. Rahman2, Tajuddin2, Azamal Husen ORCID: orcid.org/0000-0002-9120-55403 & Sumbul Rehman4 Biogenic fabrication of silver nanoparticles from naturally occurring biomaterials provides an alternative, eco-friendly and cost-effective means of obtaining nanoparticles. It is a favourite pursuit of all scientists and has gained popularity because it prevents the environment from pollution. Our main objective to take up this project is to fabricate silver nanoparticles from lichen, Usnea longissima and explore their properties. In the present study, we report a benign method of biosynthesis of silver nanoparticles from aqueous-ethanolic extract of Usnea longissima and their characterization by ultraviolet–visible (UV-vis), Fourier transform infrared (FTIR) spectroscopy, transmission electron microscopy (TEM) and scanning electron microscopy (SEM) analyses. Silver nanoparticles thus obtained were tested for antimicrobial activity against gram positive bacteria and gram negative bacteria. Formation of silver nanoparticles was confirmed by the appearance of an absorption band at 400 nm in the UV-vis spectrum of the colloidal solution containing both the nanoparticles and U. longissima extract. Poly(ethylene glycol) coated silver nanoparticles showed additional absorption peaks at 424 and 450 nm. FTIR spectrum showed the involvement of amines, usnic acids, phenols, aldehydes and ketones in the reduction of silver ions to silver nanoparticles. Morphological studies showed three types of nanoparticles with an abundance of spherical shaped silver nanoparticles of 9.40–11.23 nm. Their average hydrodynamic diameter is 437.1 nm. Results of in vitro antibacterial activity of silver nanoparticles against Staphylococcus aureus, Streptococcus mutans, Streptococcus pyrogenes, Streptococcus viridans, Corynebacterium xerosis, Corynebacterium diphtheriae (gram positive bacteria) and Escherichia coli, Klebsiella pneuomoniae and Pseudomonas aeruginosa (gram negative bacteria) showed that it was effective against tested bacterial strains. However, S. mutans, C. diphtheriae and P. aeruginosa were resistant to silver nanoparticles. Lichens are rarely exploited for the fabrication of silver nanoparticles. In the present work the lichen acts as reducing as well as capping agent. They can therefore, be used to synthesize metal nanoparticles and their size may be controlled by monitoring the concentration of extract and metal ions. Since they are antibacterial they may be used for the treatment of bacterial infections in man and animal. They can also be used in purification of water, in soaps and medicine. Their sustained release may be achieved by coating them with a suitable polymer. Silver nanoparticles fabricated from edible U. longissima are free from toxic chemicals and therefore they can be safely used in medicine and medical devices. These silver nanoparticles were stable for weeks therefore they can be stored for longer duration of time without decomposition. Metal nanoparticles (NPs) have attracted much attention during recent years owing to their unique properties which are different from bulk material. 
These particles gained importance during recent years owing to their broad-spectrum application in a number of processes such as agriculture, cosmetics, healthcare, drug or gene delivery, medical devices, biosensor and catalysis [1,2,3,4,5,6,7,8,9] besides their antimicrobial properties [2, 10, 11]. Many metal NPs are essential nutrients to living system while some are toxic [12]. Their efficiency depends on their shape and size. Among the coinage metals silver has highest thermal and electrical conductivity. They may have multidimensional structure such as nanotubes and nanowires. A variety of methods for the fabrication of NPs have been developed but reduction reaction, photochemical reaction, thermal decomposition, electrochemical, sono-chemical and microwave assisted methods are prevalent these days. Although, these synthetic procedures are effective and high yielding they require chemicals which are often toxic and pollute the environment. However, these methods are not economical and sometime require expensive and hazardous chemicals which are difficult to handle. Green method of NP synthesis using plant extracts, bacteria, actinomycetes, fungi and enzymes are therefore, frequently used because of their environment friendly nature and bio compatibility [2, 13,14,15,16]. Major compounds found in plant extracts are generally glycosides, alkaloids, phenols, quinines, amines and terpenoids which convert silver ions to silver nanoparticles (Ag NPs) [2, 11]. Thus leaves, bark, flowers and seed extract of plants containing above chemicals are used as a source of reducing agents. For instance, Dhand et al. [17] have reported green synthesis of Ag NPs from roasted Coffea arabica seed extract. They were found to be highly crystalline with spherical and ellipsoidal shape. Average particle size ranged between 10 and 150 nm. It was observed that the particle size increased with decreasing concentration of AgNO3 solution. They were also effective against Escherichia coli and Streptococcus aureus. It was noted that smaller Ag NPs were more effective than the larger ones. In another study biosynthesis, biocompatibility and antibacterial activity of Adathoda vasica extract mediated Ag NPs have been thoroughly studied [18]. They showed significant antibacterial activity against Vibrio parahaemolyticus but were non-toxic to Artemia naupli. Since Vibrio parahaemolyticus causes vibriosis in shrimps (early mortality syndrome) biosynthesized Ag NPs have been used to protect them from this disease [19]. Vibrio infection also causes high mortality in Siberian tooth carps, milk fish, abalone and shrimps [20,21,22]. Overuse of vaccines and antibiotics have made them resistant. Since Ag NPs are known antibacterial substance they have been green synthesized from plant material and used frequently to prevent bacterial infections which are resistance to trivial drugs [2, 11]. The lichen, Usnea longissima belonging to Usneaceae family grows as moss on trees in temperate climate. They are slowest growing plants living in symbiosis with algae, fungi and perennial trees. Different genera of lichens are used in curing dyspepsia, amenorrhea and vomiting. Lichens produce secondary metabolites which are used as crude drugs. It contains mainly usnic acid and its derivatives called usenamines, usone and iso-usone [23]. Three compounds containing OH and NH2 groups have been shown to inhibit the growth of human hepatoma, HepG2 cells with significant IC50 values between 6.0–53.3 μM. 
These values are lower than that found for methotrexate (IC50 value of 15.8 μM) under the same conditions. U. longissima exhibits myriad biological properties such as antitumor, antiviral, antimicrobial, anti-inflammatory and insecticidal activities. Since it is known to damage the liver, its application in the human system is limited [24,25,26] even though it is used to treat ascariasis [27] and fractured bones. Its extract is known to contain monosubstituted phenyls, depsides, anthraquinones, dibenzofurans and terpenoids which have been shown to exhibit insecticidal and antioxidant activities [23, 28,29,30]. A number of bacteria and fungi (E. coli, Candida albicans, Bacillus subtilis, Mycobacterium smegmatis, Trichophyton rubrum and Aspergillus niger) have been used to investigate the in vitro activity of usnic acid derivatives [23, 31, 32]. Usnic acid derivatives are cytotoxic and antimicrobial; they are also used as an expectorant and in the treatment of ulcers. It has been shown by Nishitoba et al. [28] that all depsides and orcinol derivatives of U. longissima act as growth inhibitors of lettuce seedlings. Inhibition of tumour promoter induced Epstein-Barr virus by U. longissima extract has been shown to exhibit the highest inhibition activity [33]. Methanol extract of U. longissima has also exhibited antioxidant activity [34]. The antiulcerogenic effect of U. longissima water extract against indomethacin induced ulcer in rats has been investigated [29]. The extract showed moderate antioxidant activity when compared with trolox and ascorbic acid as positive antioxidants [29]. Biosorption of trace amounts of Au(III) and Cu(II) by U. longissima biomass has been investigated [35]. It is surprising that effective absorption of both the metals occurs either at pH 2 or pH 8 within 75 min. It has been found that 1 g of dry lichen absorbed 9.4 mg Au(III) and 24.0 mg Cu(II). The recovery of metals is nearly quantitative (> 90%).

Several compounds from U. longissima have been isolated and identified but no effort seems to have been made to synthesize Ag NPs. Although Ag NPs alone have numerous qualities, bio-functionalized NPs are more effective against pathogenic microbes such as bacteria, viruses and fungi. Biosynthesized Ag NPs using edible U. longissima are free from toxic chemicals and hence they can be safely used in medicine and medical devices. We are, therefore, reporting, for the first time, the biosynthesis of Ag NPs from U. longissima in 50:50 water-ethanol extract and their characterization by ultraviolet–visible (UV-vis), Fourier transform infrared (FTIR) spectroscopy, size distribution, transmission electron microscopy (TEM) and scanning electron microscopy (SEM) analyses. Their antibacterial activity against some clinical isolates of bacterial strains (six gram positive and three gram negative) has also been investigated.

Chemicals, plant material and instrumentation

AgNO3 (Merck, India Ltd.), ethanol (AR grade) and double distilled water were used. Aqueous solution of poly(ethylene glycol) (Merck, India Ltd.) was used. Usnea longissima was procured from the pharmacy unit of Aligarh Muslim University, Aligarh, India (Fig. 1). UV-vis spectral measurements were done with an Elico spectrophotometer between 200 and 500 nm. Size distribution was determined with a Malvern Instruments Ltd. Zetasizer Ver. 7.11. FTIR spectra were recorded with a Perkin-Elmer spectrometer (FTIR Spectrum ONE) in the 4000–400 cm−1 region as KBr discs.
TEM images of Ag NPs were obtained using a JEOL JEM 2100 transmission electron microscope at 190 kV. Samples were prepared by placing a drop of colloidal solution of Ag NPs on a carbon coated copper grid and allowing the sample to completely dry in a vacuum desiccator. The sediment particles obtained were used for scanning FTIR spectra. SEM images were obtained with a JEOL JSM 6510LV scanning electron microscope.

Usnea longissima

Synthesis of Ag NPs

Ag NPs were prepared from aqueous-ethanolic extract of U. longissima. Lichen was gently washed with distilled water to remove dust. It was subsequently dried at 60 °C and powdered. Ten g of this dry powder was refluxed in 100 ml ethanol-distilled water (50:50) mixture for 3 h, cooled to room temperature and centrifuged at 10,000 rpm to remove the solid mass. Ten ml of this extract at pH 7 was taken in an Erlenmeyer flask and 1 ml of 0.01 M solution of AgNO3 was added to start the reduction of silver ions to Ag NPs. The mixture was vigorously stirred on a magnetic stirrer for ten to fifteen min and incubated in the dark to protect the contents from sunlight. Colour change was regularly monitored. The reaction was completed only after 72 h, showing a purple colour. The reaction mixture was then centrifuged at 10,000 rpm to separate NPs from the liquid. It was decanted and the supernatant was further centrifuged to isolate any NP left in the solvent. The sample thus obtained was stable for weeks although the yield was very low (35%). All manipulations were done at ambient temperature.

Evaluation of antibacterial activity

Ag NPs thus obtained were tested for antimicrobial activity against Staphylococcus aureus, Streptococcus mutans, Streptococcus pyrogenes, Streptococcus viridans, Corynebacterium diphtheriae and Corynebacterium xerosis (six gram positive bacteria) and Escherichia coli, Klebsiella pneuomoniae and Pseudomonas aeruginosa (three gram negative bacteria) obtained from Department of Microbiology, Jawaharlal Nehru Medical College & Hospital, Aligarh Muslim University, Aligarh, India. The solid medium Nutrient Agar No.2 (NA) (M 1269S-500G, Himedia Labs Pvt. Ltd., Bombay, India) was used for preparing nutrient plates, while Nutrient Broth (NB) (M002-500G, Himedia Labs Pvt. Ltd., Bombay, India) was used for the liquid culture media. Antibacterial activity was evaluated by the agar well diffusion method. All the microbial cultures were adjusted to 0.5 McFarland standards, which is visually comparable to a microbial suspension of 1.5 × 10^8 cfu/ml. Agar medium (20 ml) was poured into each petri plate; the plates were swabbed with a colony from the inoculum of the test microorganisms and kept for 15 min for adsorption. Using a sterile cork borer of 6 mm diameter, wells were bored into the seeded agar plates. They were loaded with 100 μl of the test sample in dimethylsulphoxide (DMSO) at 2 mg/ml. All the plates were incubated at 37 °C for 24 h. Antimicrobial activity was evaluated by measuring the zone of growth inhibition against the tested gram positive and gram negative bacteria with an Antibiotic Zone Scale (PW297, Himedia Labs Pvt. Ltd., Mumbai, India), which was held over the back of the inverted plate. It was held a few inches above a black, non-reflecting background and illuminated with reflected light. The medium with DMSO as solvent was used as a negative control, whereas media with Ciprofloxacin (5 μg/disk as standard antibiotic for gram positive) and Gentamicin (10 μg/disk as standard antibiotic for gram negative) were used as positive controls.
The experiments were performed in triplicate.

UV-vis spectra

It is known that when Ag NPs are formed, the colour of the solution containing both the NPs and plant extract turns dark brown or purple depending on the presence of organic molecules in the extract. In our case, AgNO3 was mixed with aqueous-ethanolic extract of U. longissima and incubated at room temperature; its colour turned purple after 72 h. It did not show any significant change thereafter, which confirms the formation of Ag NPs according to the following equation.

$$ \mathrm{AgNO_3} + \mathrm{NR_3} = \mathrm{Ag^0} + \mathrm{NR_3^{+}} + \mathrm{H^{+}} + \mathrm{NO_3^{-}} $$

The UV-vis spectrum of this colloidal solution was run from 200 to 500 nm at room temperature and displayed peaks at 350 and 400 nm. The highest peak at 400 nm has been attributed to the excitation of surface plasmon resonance (SPR) of Ag NPs (Fig. 2a). Photo-oxidation of chemical constituents present in the extract may also have occurred [36]. The profile of the UV-vis spectrum depends on the concentration of substrate and silver ions. However, when an aqueous solution of poly(ethylene glycol) (PEG) was added to the above solution, new peaks at 424 and 450 nm were observed (Fig. 2b). This is probably due to coating of Ag NPs with the polymer, as shown in Fig. 3. The colour of these NPs remained unchanged even after several weeks [37]. It has been observed that when the colloidal solution containing Ag NPs is slowly heated up to 60 °C, the colour intensity increases with increasing temperature and NPs are quickly formed. This demonstrates the effect of temperature on the biosynthesis of Ag NPs. Absorption peaks in the UV-vis spectrum are related to the shape of Ag NPs. According to the criterion of Zhang and Nogues [38], the peaks at 385, 435, 465 and 515 nm correspond to cubical Ag NPs, those at 462 nm to truncated cubes, at 430 nm to cuboctahedral NPs, and the 400 nm peak to spherical NPs. Since we have observed the major peak at 400 nm, the Ag NPs are supposed to be mainly spherical, though the presence of a small amount of other types of NPs cannot be ignored.

UV-vis spectrum of (a) silver nanoparticles and (b) poly(ethylene glycol) coated silver nanoparticles at 25 °C

Poly(ethylene glycol) coated silver nanoparticles

Size distribution was performed using water as dispersant at a count rate of 271.6 k cps. There are three types of Ag NPs present in the colloidal solution. Their hydrodynamic diameters are very large, as shown in Fig. 4. However, peak 1 shows the abundance of particles with an average hydrodynamic diameter of 184.5 nm with an intensity of 59.4%, but the overall average is 437.1 nm. This may be due to aggregation of the particles in the solvent. The hydrated NPs are always larger than the isolated ones because of the aggregation of water molecules around them. Since the surface area of aggregated NPs is decreased, they would not be in direct contact with microbes and their antibacterial efficiency will obviously decrease.

Particle size distribution of Usnea longissima mediated silver nanoparticles

TEM and SEM

TEM images (Fig. 5) showed that Ag NPs in our case are mainly spherical in shape. There is a very narrow range in the size of the NPs. Average particle size varies between 9.4 and 11.83 nm. TEM images show spherical morphology with an average size of 10.49 nm (Fig. 5a-c). It has also been observed that these are much smaller in size than those recently reported [39].
TEM images of silver nanoparticles; (a) under 80,000 magnification (average size, 11.83 nm), (b) under 100,000 magnification (average size, 10.20 nm) and (c) under 80,000 magnification (average size, 9.44 nm)

The topology and size were also confirmed by SEM images (Fig. 6a-d) showing the presence of small and uniformly spherical shaped Ag NPs with a smooth surface and a very narrow distribution range of 9.4–11.83 nm [40]. The larger particles are formed due to aggregation of Ag NPs; otherwise they appear to be segregated. The clusters must have been formed due to evaporation of the solvent during sample preparation. The scattered shiny dots appearing in the SEM images are due to uncoated free Ag NPs, which look like shining stars in the Milky Way on a dark night (Fig. 6b) [41].

SEM images of silver nanoparticles; (a) under 10,000 magnification, (b) under 1500 magnification, (c) under 20,000 magnification and (d) under 7000 magnification.

IR spectrum

The FTIR spectrum was run to identify the involvement of biomolecules present in U. longissima extract in the reduction of AgNO3 to Ag NPs. It is known to contain phenols, amines, aldehydes and ketones besides many other compounds in traces. However, usnic acid and usenamine (Fig. 7) are the dominant compounds in the aqueous-ethanolic extract of U. longissima which interact with AgNO3. Since all these compounds are excellent reducing agents, they undergo changes in the stretching frequencies of their functional groups as a consequence of the reduction of AgNO3 to Ag NPs. The IR spectrum (Fig. 8) is very complicated because of the overlap of frequencies in the same region. However, we have attempted to identify the shifts in stretching vibrations after the formation of Ag NPs. Primary amines exhibit two N-H stretching frequencies in the 3500–3300 cm−1 region, which have been found to appear at 3400 and 3455 cm−1 in the NPs containing U. longissima extract (Fig. 8). The bands in the 1600–1500 cm−1 region are due to CO stretching, but amide bands also appear in the same region of the spectrum. We have observed amide I and amide II bands at 1650 and 1540 cm−1. A band at 1560 cm−1 has been assigned to the (C=O) stretching frequency. The COO− group generally appears above 1600 cm−1 but overlaps with the amide II band [42]. These spectral results indicate the involvement of organic molecules in the reduction of AgNO3 leading to the formation of Ag NPs.

Structures of usenamine and usnic acid

FTIR spectra of (a) aqueous-alcoholic extract and (b) silver nanoparticles

Antibacterial screening

Results of in vitro antibacterial activity of Ag NPs against Staphylococcus aureus, Streptococcus mutans, Streptococcus pyrogenes, Streptococcus viridans, Corynebacterium diphtheriae and Corynebacterium xerosis (gram positive bacteria) and Escherichia coli, Klebsiella pneuomoniae and Pseudomonas aeruginosa (gram negative bacteria) are presented in Table 1. The zone of inhibition suggests that Ag NPs are weakly toxic to both gram positive and gram negative bacteria. Ag NPs with a larger surface area provide better contact with microorganisms [2, 6, 11]. Thus, these particles may penetrate the bacterial cell membrane or attach to the bacterial surface and inhibit their replication [43, 44]. In our experiment, Ag NPs have been found to be most effective against E. coli. It has been reported that antibacterial efficiency is increased by lowering the particle size [45]. Usually NPs attach to the cell wall of bacteria and damage the membrane and respiration system, leading to cell death [11, 43].
Toxicity of smaller NPs was greater than those of larger ones because the smaller ones can easily adhere to bacterial cell wall [11, 46]. Table 1 Mean zone of inhibition (in mm) Silver ions penetrate into cytoplasm; denature the ribosome leading to the suppression of enzymes and proteins which eventually arrest their metabolic function resulting in apoptosis of bacteria. Bactericidal activity is due to silver ions released from Ag NPs as a consequence of their interaction with microbes [11]. However, four possible mechanisms of antibacterial activity of Ag NPs have been proposed (i) interference during cell wall synthesis (ii) suppression during protein biosynthesis (iii) disruption of transcription process and (iv) disruption of primary metabolic pathways [17]. Each mechanism involves structural changes, biochemical changes and charges on both the silver ions and bio molecules in the microbial cells. Ag NPs also inhibit the proliferation of cancer cell lines by different modes of action [47]. They mediate and amplify the death signal by triggering the activation of Caspase-3 molecule. The DNA splits into fragments by Caspase-3. Ag NPs may interfere with the proper functioning of cellular proteins and induce subsequent changes in cellular chemistry. Sometimes Ag NPs alter the function of mitochondria by inhibiting the catalytic activity of lactate dehydrogenase. Ag NPs may also cause proliferation of cancer cells by generating ROS which ultimately leads to DNA damage. Few species of lichens have rarely been exploited in the production of NPs; in the present investigation we successfully fabricated Ag NPs by bio-reduction of silver nitrate from aqueous-alcoholic extract of U. longissima at room temperature. Size distribution shows the presence of three types of Ag NPs, the average diameter of which is 437.1 nm. However, NPs with hydrodynamic diameter of 184.5 nm are in abundance. It has been observed from SEM images that NPs are mainly spherical in shape. There is not very large variation in their size (9.4–11.3 nm). Ag NPs are antibacterial and their sustained release may be achieved by coating them with a suitable polymer. They are highly effective against E. coli and K. pneuomoniae, although S. mutans, C. diphtheriae and P. aeruginosa are resistant to it. In the present work, the lichen acts as reducing as well as capping agent. These Ag NPs were stable for weeks therefore they can be stored for longer duration of time. However, their slow oxidation to silver ions cannot be prevented. Siddiqi KS, Husen A, Sohrab SS, Osman M. Recent status of nanomaterials fabrication and their potential applications in neurological disease management. Nano Res Lett. 2018;13:231. Husen A, Siddiqi KS. Phytosynthesis of nanoparticles: concept, controversy and application. Nano Res Lett. 2014;9:229. Husen A, Siddiqi KS. Plants and microbes assisted selenium nanoparticles: characterization and application. J Nanobiotechnol. 2014;12:28. Husen A, Siddiqi KS. Carbon and fullerene nanomaterials in plant system. J Nanobiotechnol. 2014;12:16. Siddiqi KS, Husen A. Fabrication of metal nanoparticles from fungi and metal salts: scope and application. Nano Res Lett. 2016;11:98. Siddiqi KS, Husen A. Fabrication of metal and metal oxide nanoparticles by algae and their toxic effects. Nano Res Lett. 2016;11:363. Siddiqi, Husen A. Engineered gold nanoparticles and plant adaptation potential. Nano Res Lett. 2016;11:400. Siddiqi KS, Husen A. Green synthesis, characterization and uses of palladium/platinum nanoparticles. 
Quantifying Social Media's Political Space: Estimating Ideology from Publicly Revealed Preferences on Facebook
ROBERT BOND and SOLOMON MESSING
Robert Bond is Assistant Professor, School of Communication, Ohio State University, 3072 Derby Hall, 154 North Oval Mall, Columbus, OH 43210 ([email protected]). Solomon Messing is Research Scientist, Facebook Data Science, 1601 Willow Rd, Menlo Park, CA 94025 ([email protected]).
American Political Science Review, Volume 109, Issue 1, February 2015, pp. 62-78. Published online by Cambridge University Press: 03 March 2015. Copyright © American Political Science Association 2015.
Abstract: We demonstrate that social media data represent a useful resource for testing models of legislative and individual-level political behavior and attitudes. First, we develop a model to estimate the ideology of politicians and their supporters using social media data on individual citizens' endorsements of political figures. Our measure allows us to place politicians and more than 6 million citizens who are active in social media on the same metric. We validate the ideological estimates that result from the scaling process by showing they correlate highly with existing measures of ideology from Congress, and with individual-level self-reported political views. Finally, we use these measures to study the relationship between ideology and age, social relationships and ideology, and the relationship between friend ideology and turnout.
Many theories in political science rely on ideology at their core, whether they are explanations for individual behavior and preferences, governmental relations, or links between them. However, ideology has proven difficult to explicate and measure, in large part because it is impossible to directly observe: we can only examine indicators such as responses to survey questions, political donations, votes, and judicial decisions. One problem with this patchwork of indicator measures is the difficulty of studying ideology across domains. Although we have established reliable techniques for measuring ideology among individuals and legislators, such as survey measures and roll-call vote analysis, methods for jointly estimating the ideologies of ordinary citizens and elite actors have only recently been developed. To understand the relationship between elite ideology and beliefs of ordinary citizens, we need measures of ideology that allow us to place ordinary citizens and elites in the same ideological space. For example, a longstanding debate in political science concerns whether the American public has become more ideologically polarized in the last 40 years (e.g., Abramowitz and Saunders Reference Abramowitz and Saunders2008; Fiorina, Abrams, and Pope Reference Fiorina, Abrams and Pope2006). If so, does mass polarization drive elite polarization, or vice versa?
To test these theories, we need joint ideology measures that put elite actors and ordinary citizens on the same scale. The lack of comparable ideological estimates for individuals and elites can be attributed in part to a paucity of suitable data. Although scholars continue to develop methods that use text as data to measure ideology (Laver, Benoit, and Garry Reference Laver, Benoit and Garry2003; Monroe, Colaresi, and Quinn Reference Monroe, Colaresi and Quinn2009; Monroe and Maeda Reference Monroe and Maeda2004), text data have problems that are yet unsolved—in addition to problems related to human language's high dimensionality, more fundamental problems can confound such efforts: not only are data on how ordinary citizens talk about issues related to ideology sparse, but also the context of communication for elites and ordinary citizens often differs, and each may use different language to describe the same underlying ideological phenomena (polysemy), or use the same language to describe different things (synonymy). While complex data such as human language may one day provide a superior measure of something as complex as ideology, further tools need to be developed to address these issues. The most promising avenues for estimating ideology comprise discrete behavioral data that reveal preferences, including roll-call votes (Poole and Rosenthal Reference Poole and Rosenthal1997), cosponsorship records (Aleman et al. Reference Aleman, Calvo, Jones and Kaplan2009), campaign finance contributions (Bonica Reference Bonica2013), and—as in this article—expressions of support for elites by ordinary citizens. Previously, obtaining sufficiently large behavioral datasets about ordinary citizens' political preferences has been cost prohibitive. Using large data sources, such as campaign contributions and online data, we are now able to use methods similar to those used on political elites to estimate the ideology of a broader set of political actors. These large-scale data sources allow researchers to view ideology at a more "macro" level, with the ability to couple the ideological estimates produced with other data sources. Using vast, emerging data sets like these allows political scientists to study ideology from new perspectives (Lazer et al. Reference Lazer, Pentland, Adamic, Aral, Barabasi, Brewer, Christakis, Contractor, Fowler, Gutmann, Jebara, King, Macy, Roy and Alstyne2009). Although methods for jointly scaling elites and masses using campaign finance records are promising (Bonica Reference Bonica2013), they should constitute the beginning of our efforts to produce measures that allow us to place elites and ordinary citizens on the same scale. While estimates based on campaign finance (or CFscores) extend available data beyond law-makers, they are restricted to a particular type of ideological expression—giving money to campaigns, a behavior limited to class of individuals who have substantial financial resources and who choose to express their ideological preferences with money. Only donors who give at least $250 to a particular campaign are named in campaign finance reports, though many give to campaigns in much smaller amounts—46% of Barack Obama's 2008 presidential campaign donations were $200 or less (Malbin Reference Malbin2009). While CFscores extends our ability to estimate ideological positions from politicians to PACs, the citizen-level estimates it produces cover an elite class of private donors who give large amounts to campaigns. 
Our ability to use such estimates to draw inferences about the ideological preferences of the voting public at large is therefore limited. In this article, we contribute to the effort to jointly measure elite and mass ideology using social media data. In contrast to donations, candidate endorsements in social media do not entail any financial outlay; any individual with an account may endorse any political entity she wishes. Furthermore, individuals can endorse any political entity that has a presence on the platform, meaning that such data allow for joint estimation of ideology for individuals and a broad range of political actors. This ideology measure is not fully representative: individuals who endorse political entities in social media have higher levels of political interest than average citizens, while individuals who are active in social media generally have higher levels of education and income. Nonetheless, this approach offers significant advantages. For instance, individuals in this data set need not overcome costly structural barriers to express their support of political candidates, nor do they comprise a limited, homogeneous group of donors; instead they are drawn from a much broader swath of society. The potential generalizability of these data provides a compelling reason to examine and validate the ideological estimates they produce. We proceed as follows: first we review the extant literature on methods for measuring ideology, both in populations of elites and for ordinary citizens. Following that, we explain how our social media endorsement data are structured and how they are applicable to estimating ideology. Next, we describe our estimation technique and the resulting estimates. We then validate our estimates against other measures of ideology for both the elite and mass populations. The next sections apply the measure, investigating the relationship between age and ideology, ideology's structure in social networks, and whether having friends with more distant ideologies decreases the likelihood of voting. Last, we conclude with a discussion of how this technique opens new directions in the study of ideology and suggest future work using these data. Political scientists have generally measured ideology by asking individuals to place themselves on a 7-point liberal-conservative scale.Footnote 1 Although this measure's ubiquity allows us to compare ideology across studies and across time, recent work has shown that it has a number of methodological problems. It cannot account for the multidimensionality of ideology (Feldman Reference Feldman1998; Treier and Hillygus Reference Treier and Hillygus2009): respondents might place themselves on a social, political, or economic ideology scale, or perhaps some combination thereof. The measure is recorded as an interval rather than a continuous measure (Jackson Reference Jackson1983; Kinder Reference Kinder1983), making it difficult to analyze with conventional statistical techniques. The measure also lacks reliability. Survey respondents may interpret the question differently, particularly respondents with liberal ideologies who are hesitant to be labeled "liberal" (Ansolabehere, Rodden, and Snyder Reference Ansolabehere, Rodden and Snyder2008; Schiffer Reference Schiffer2000). Although these issues are not so problematic that the measure should be abandoned, and researchers have proposed methods for addressing these concerns (Wood and Oliver Reference Wood and Oliver2012), they suggest that other measures should be explored.
Techniques to measure the ideology of political elites frequently use the choices that elites make as data from which an ideology measure may be deduced. The most well known of these techniques is the NOMINATE method of ideal point estimation using roll-call votes for members of Congress (Poole and Rosenthal Reference Poole and Rosenthal1997). Methods of ideal point estimation have been examined further in Congress (Clinton, Jackman, and Rivers Reference Clinton, Jackman and Rivers2004), and extended to other types of elite actors, such as members of the Supreme Court (Martin and Quinn Reference Martin and Quinn2002) and European Parliament (Hix, Noury, and Roland Reference Hix, Noury and Roland2006). These techniques have provided researchers with estimates of the ideal points of actors within a given institution, but have largely left researchers without tools to compare the ideology of actors across them. Typically, one cannot estimate the ideologies of a diverse set of political actors because they make disjoint choices. In order to jointly estimate the ideologies of actors with (primarily) disjoint choice sets one must have some set of choices that "bridges" choice divides (Bafumi and Herron Reference Bafumi and Herron2010; Bailey Reference Bailey2007; Gerber and Lewis Reference Gerber and Lewis2004; Poole and Rosenthal Reference Poole and Rosenthal1997). As with campaign contributions, Facebook's data on which users support which candidates represent the bridge actors necessary to jointly estimate the ideology of politicians and ordinary citizens. Users are able to "like" pages regardless of their political institution, meaning that all pages are part of the same choice set. Bridge actors, such as Facebook users or donors, serve two important purposes. First, they bridge actors that otherwise would not be connected, such as members of Congress and mayors of cities. Second, they span the divide between politicians and individuals. To put elite political actors and ordinary citizens on the same ideological scale we use data from Facebook that mitigate limitations that previous measures of ideology faced in terms of bridging diverse sets of elite actors and including measures of the ideology of ordinary citizens. For Facebook users, showing support for political figures is simple, relatively costless, and requires little cognitive effort. Our approach uses singular value decomposition (SVD) on the transformed matrix of user to political page connections on Facebook to estimate the ideological positions of Facebook users and the political pages they support. These estimates are consistent with the first ideological dimension recovered from roll-call data (Clinton, Jackman, and Rivers Reference Clinton, Jackman and Rivers2004; Poole and Rosenthal Reference Poole and Rosenthal1997) and with individuals' self-reported political views indicated on the user's Facebook profile page. Social media serves not only as a platform to communicate with social contacts and share digital media, but also as a forum for political communication. As more individuals utilize this forum, they create rich data sources that can be used to understand political phenomena.Footnote 2 Political scientists have begun to study how people engage and express their political viewpoints online using this rich data source (e.g., Barberá 2014, Bond et al. 
Reference Bond, Fariss, Jones, Kramer, Marlow, Settle and Fowler2012; Butler and Broockman Reference Butler and Broockman2011; Butler, Karpowitz, and Pope Reference Butler, Karpowitz and Pope2012; Grimmer, Messing, and Westwood Reference Grimmer, Messing and Westwood2012; Messing and Westwood Reference Messing and Westwood2012; Mutz and Young Reference Mutz and Young2011; Wojcieszak and Mutz Reference Wojcieszak and Mutz2009). Social media enable ordinary citizens to endorse and communicate with political figures and elites. On Facebook, in addition to displaying demographic, educational, and professional information in their profile, users can "like" pages associated with figures such as politicians, celebrities, musicians, television shows, books, movies, etc. Users do so by listing such entities on their profile, by visiting a politician's page on Facebook, or they may be prompted to like pages by friends. Liking political figures may be communicated to the user's social community via the political figure's page, the user's page, and the "news feed" homepage that friends of the user see. Liking pages makes the user a "fan" of that page, meaning that the individual may see content published on the page in their news feed. Facebook also maintains an accounting of pages that are "official."Footnote 3 We scale ideology for both users and political figures using endorsements of official pages. Facebook is an especially appropriate platform to examine the suitability of social media data to generate joint ideological estimates. On Facebook, users self-report demographic and political data, making it possible to cross-reference these estimates with self-reported ideological measures, and examine variation by demographic category. We can validate legislators' ideological estimates based on traditional measures such as DW-NOMINATE. Hence, our method can be extended to other social networking sites where public data on the politicians individuals endorse and follow are readily available to researchers, including Twitter. However, because Facebook collects demographic data, including self-reported political affiliation, it serves as a more appropriate platform to validate this approach. For most results below, we used data collected on March 1, 2011 from all U.S. users of Facebook over age 18 who had publicly liked at least two of the official political pages on Facebook. This constituted 6.2 million individuals and 1,223 pages. Only official pages identified by Facebook were included. We also collected data from March 1, 2012 to see for whom ideology changed year over year. Finally, we collected data from November 2, 2010, the day of the Congressional election that year, in order to calculate an ideology score that we use to study its relationship to voting, as explained below. All data were de-identified. Unlike roll-call data that are ideal for scaling, social media endorsements, much like campaign contribution data, require some processing prior to scaling. Roll-call data are well suited for scaling because votes are coded as either "yea" or "nay," and abstentions may be simply treated as missing data. For both Facebook and campaign contribution data, the presence of relationships is clear, but the absence of a relationship is ambiguous.Footnote 4 The lack of a supporting relationship may be related to ideological considerations, lack of knowledge about the candidate, or an unwillingness to make their supportive relationships public. 
As with campaign contribution data, Facebook users may choose to support any combination of political figures. Although this is also the case for campaign contributions, giving to candidates is influenced by an individual's budget, which may preclude an individual from giving to the full set of candidates she supports, or from giving at all. Nothing precludes a Facebook user from supporting a candidate and her opponent. One approach to simplify the analysis treats the data as choices between incumbent-challenger pairs. However, many of the political figures we wish to scale run for office against minimal opposition or do not run for office at all, meaning that their opposition has few or no supporters.
Model of Endorsement
Suppose $n$ users choose whether to like $m$ candidates. Each user $i = 1, \ldots, n$ chooses whether to like candidate $j = 1, \ldots, m$ by comparing the candidate's position at $\bm{\zeta}_j$ and the status quo (in this case, not publicly supporting the candidate) located at $\bm{\psi}_j$, both in $\mathbb{R}^d$, where $d$ is the number of dimensions of the ideology space. Let
(1) \begin{eqnarray} y_{ij} = \left\lbrace \begin{array}{ll} 1 & \hbox{if user $i$ likes candidate $j$}, \\ 0 & \hbox{otherwise.} \end{array} \right. \end{eqnarray}
User $i$ receives utility for supporting candidates close to her own ideal point $\bm{x}_i$ in the $\mathbb{R}^d$ policy space. We can specify this ideological utility as a quadratic loss function that depends on the location of the candidate and the status quo:
(2) \begin{eqnarray} U_{ij}^{candidate} &=& -\Vert \bm{x}_i - \bm{\zeta}_j\Vert^2, \nonumber \\ U_{ij}^{status\ quo} &=& -\Vert \bm{x}_i - \bm{\psi}_j\Vert^2. \end{eqnarray}
The net benefit of liking is then the difference in these two utilities,
(3) \begin{eqnarray} U_{ij}^{like} &=& U_{ij}^{candidate} - U_{ij}^{status\ quo} \nonumber \\ &=& -\Vert \bm{x}_i - \bm{\zeta}_j\Vert^2 + \Vert \bm{x}_i - \bm{\psi}_j\Vert^2. \end{eqnarray}
Notice that the utility of liking is decreasing in the distance between the candidate and the user, but increasing in the distance between the status quo and the user. Finally, suppose also that the utility of liking is increased by candidate-specific factors $\phi_j$ that govern how desirable each political page is (some pages are more popular and, perhaps, easier to find on the site) and user-specific factors $\eta_i$ that govern each user's propensity to support candidates (some users get greater utility from the act of liking, and are thus more likely to engage in the activity than others). Putting these together with the liking utility yields
(4) \begin{eqnarray} U_{ij} = -\Vert \bm{x}_i - \bm{\zeta}_j\Vert^2 + \Vert \bm{x}_i - \bm{\psi}_j\Vert^2 + \eta_i + \phi_j. \end{eqnarray}
To group row and column terms into new variables, we let $\bm{\beta}_j = \bm{\psi}_j - \bm{\zeta}_j$ and $\theta_j = \Vert\bm{\psi}_j\Vert^2 - \Vert\bm{\zeta}_j\Vert^2 + \phi_j$. Thus (4) simplifies to
(5) \begin{eqnarray} U_{ij} = -2\bm{x}_i \bm{\beta}_j + \eta_i + \theta_j. \end{eqnarray}
We do not observe utilities directly, but we do observe likes. Suppose that observing a like means that the true utility of endorsing is high, while not observing one means that the true utility is low (without loss of generality, suppose the utilities are 1 and 0, respectively). Not all likes yield exactly the same utility, so we can think of the true utility as being equal to a function of the observed like ($y_{ij}$) minus an error term ($\nu_{ij}$):
(6) \begin{eqnarray} U_{ij} = y_{ij} - \nu_{ij}. \end{eqnarray}
Substituting, we get
(7) \begin{eqnarray} y_{ij} = -2\bm{x}_i \bm{\beta}_j + \eta_i + \theta_j + \nu_{ij}. \end{eqnarray}
To further simplify the model, we factor out the $\eta$ and $\theta$ terms by employing the double-center operator $D(\cdot)$, defined for a matrix $\bm{Z}$ as each element minus its row and column means, plus its grand mean, all divided by $-2$:
(8) \begin{eqnarray} D(z_{ij}) = (z_{ij} - \bar{z}_{i.} - \bar{z}_{.j} + \bar{z}_{..})/(-2). \end{eqnarray}
In the literature that utilizes roll-call votes to estimate ideology, Poole (Reference Poole2005) and Clinton, Jackman, and Rivers (Reference Clinton, Jackman and Rivers2004) discuss the use of the double-center operator on a squared distance matrix, not on the roll-call matrix itself. The effect of this operator is to generate a new matrix with all row and column means equal to zero. As a result, any term that does not interact with both a row and a column variable will factor out of the matrix. Suppose $\nu_{ij}$ is an independent and identically distributed random variable drawn from a stable density. Suppose further, without loss of generality, that the dimension-by-dimension means of $\bm{x}$ and $\bm{\beta}$ equal 0. If so, then applying the double-center operator in equation (8) to both sides of equation (7) yields
(9) \begin{eqnarray} D(y_{ij}) = \bm{x}_i \bm{\beta}_j + \epsilon_{ij}, \end{eqnarray}
where the new error term $\epsilon_{ij}$ is also drawn from a stable density and defines the stochastic component of the identity. We can now use singular value decomposition (SVD) of the double-centered matrix of likes to find the best $d$-dimensional approximation of $\bm{x}_i$ and $\bm{\beta}_j$ (Eckart and Young Reference Eckart and Young1936):
(10) \begin{eqnarray} D(\bm{Y}) = \bm{X} \bm{\Sigma} \bm{B}, \end{eqnarray}
where $\bm{Y}$ is the observed matrix of likes, $\bm{X}$ is an $n \times n$ matrix of user ideology locations, $\bm{\Sigma}$ is an $n \times m$ matrix with a diagonal of singular values, and $\bm{B}$ is an $m \times m$ matrix of $\beta$s. The $d$ largest singular values correspond to the $d$ columns of $\bm{X}$ and $d$ rows of $\bm{B}$ that generate the best-fitting estimates of $\bm{x}_i$ and $\bm{\beta}_j$ (Eckart and Young Reference Eckart and Young1936). Although it is possible to analyze the full matrix of likes from users to political pages, the candidate-specific ($\phi_j$) and user-specific ($\eta_i$) factors mentioned above bias the estimation. Because some pages are so much more popular than others, and to a lesser extent because some users like many more politicians than average, the estimation yields ideological estimates that are weighted by the relative popularity of the candidates. For instance, Obama is to the extreme left of the distribution and Romney and Palin are to the extreme right, while candidates who have few likes are in the middle of the distribution regardless of their ideological views. To account for political page- and user-specific factors, we compute a political page adjacency matrix, $\bm{A} = \bm{X}\bm{X}'$ (where $\bm{X}$ here denotes the page-by-user matrix of likes), in which the rows and columns correspond to political pages and entries consist of the number of times an individual user likes both pages, and then compute the ratio of common users, corresponding to an agreement matrix $\bm{G}$ with entries $g_{ij} = a_{ij}/a_{ii}$, described in further detail below. We then apply SVD to the $\bm{G}$ matrix to estimate $\bm{x}_i$.
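To make the algebra concrete, the following is a minimal sketch (ours, not the authors' code) of the double-centering in equation (8) and the rank-d SVD approximation in equation (10), written in Python with NumPy on a tiny toy like-matrix; as described above and in the next section, the scaling used in the article is ultimately applied to the agreement matrix rather than to the raw like matrix.

```python
import numpy as np

def double_center(Z):
    """Equation (8): each element minus its row and column means, plus the
    grand mean, all divided by -2; the result has zero row and column means."""
    row_means = Z.mean(axis=1, keepdims=True)
    col_means = Z.mean(axis=0, keepdims=True)
    return (Z - row_means - col_means + Z.mean()) / -2.0

# Toy 0/1 like matrix Y: rows are users, columns are political pages.
Y = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
], dtype=float)

# Equation (10): SVD of the double-centered like matrix. Columns of U scaled
# by the singular values give user positions x_i; rows of Vt give the page
# parameters beta_j. Keeping the d largest singular values yields the best
# rank-d approximation (Eckart-Young).
U, s, Vt = np.linalg.svd(double_center(Y), full_matrices=False)
d = 1  # a single ideological dimension
user_positions = U[:, :d] * s[:d]
page_positions = Vt[:d, :].T
print(user_positions.ravel(), page_positions.ravel())
```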
Estimation of Ideology from Endorsements We begin by creating a matrix in which each column represents a user and each row represents an official Facebook page about politics. We limit our data to Facebook users in the United States who are over age 18 who endorse, or "like," at least two political pages.Footnote 5 This leaves us with approximately 6.2 million users and 1,223 pages and 18 million like actions from users to pages. An example of the first ten rows and first ten columns of the bipartite matrix is in Table 1. TABLE 1. The First Ten Rows of the User by Political Page Matrix Note: Entries in the matrix are dichotomous, where 1 means that the user has liked the page and . means that the user has not. A few things should be clear from this example and the summary statistics described here. First, there are few likes relative to the size of the matrix overall, making the matrix sparse. Second, users vary in the number of candidates they support. Although we limit the data to include users who like at a minimum two candidates' pages, the average number of likes is 3.04 and the maximum number of pages liked is 625. Figure 1 shows the full distribution of the number of likes for users and the number of fans per page. Most users like only a few pages, but a few like many. Similarly, political pages vary in the amount of support from users they attract. The maximum number of fans of a page is 3.67 million (Barack Obama), with an average of 15,422.5 fans per page. Most pages have a few thousand fans.Footnote 6 FIGURE 1. The Left Panel Shows the Distribution of the Number of Pages that Each User Likes; the Right Panel Shows the Distribution of the Number of Fans that Each Page Has To estimate ideology our approach is similar to the approach used by Aleman et al. (Reference Aleman, Calvo, Jones and Kaplan2009) to estimate the ideal points of legislators in the United States and Argentina using cosponsorship data, and is similar in principle to the use of Relational Class Analysis by Goldberg (Reference Goldberg2011) and Baldassarri and Goldberg (Reference Baldassarri and GoldbergForthcoming). Estimating separate parameters for users and pages, as is typical of estimation techniques like DW-NOMINATE or Bayesian analysis (Clinton, Jackman, and Rivers Reference Clinton, Jackman and Rivers2004), would be difficult on a large, sparse matrix. Instead, we construct an affiliation matrix between the political pages in which each cell indicates the number of users that like both pages. We do not use the original (two-mode) dataset of connections between users and political figures, which is organized as an X = r × c matrix, with r = 1, 2. . .R users and c = 1, 2. . .C pages, but instead use an affiliation matrix (Table 2), A = XX′. In this affiliation matrix, the diagonal entries are the total number of users that like each page and the off-diagonal entries are the number of times an individual user likes both pages. Table 2 shows the first ten rows and ten columns of the affiliation matrix, A. The table shows that there are very significant differences in the total number of users that like each page, as well as notable differences in the number of users that like each pair of pages. TABLE 2. The First Ten Rows of the Affiliation Matrix Note: Diagonal entries are the number of fans of the page. Off-diagonal entries are the number of fans of both pages. 
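The construction just described can be sketched as follows. This is an illustrative Python example, not the authors' pipeline: the input rows, identifiers, and column names are hypothetical, and only the at-least-two-likes filter is shown (the age and country restrictions would be applied on fields not included here).

```python
import numpy as np
import pandas as pd
from scipy import sparse

# Hypothetical de-identified like events: one row per (user, page) like.
likes = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u2", "u3", "u3", "u3"],
    "page_id": ["obama", "pelosi", "obama", "romney", "romney", "palin", "obama"],
})

# Keep only users who like at least two political pages, as in the article.
likes = likes[likes.groupby("user_id")["page_id"].transform("size") >= 2]

pages = likes["page_id"].astype("category")
users = likes["user_id"].astype("category")

# Sparse 0/1 matrix X with pages on the rows and users on the columns.
X = sparse.csr_matrix(
    (np.ones(len(likes)), (pages.cat.codes.to_numpy(), users.cat.codes.to_numpy())),
    shape=(len(pages.cat.categories), len(users.cat.categories)),
)

# Affiliation matrix A = X X': diagonal entries count each page's fans,
# off-diagonal entries count users who like both pages.
A = (X @ X.T).toarray()
print(pd.DataFrame(A, index=pages.cat.categories, columns=pages.cat.categories))
```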
Next, we calculate the ratio of shared users by dividing the number of users that like both pages by the total number of users that like each page independently, which produces an agreement matrix, $\bm{G}$, with entries $g_{ij} = a_{ij}/a_{ii}$, as depicted in Table 3. Because each page has a different number of fans, the denominator changes and the upper and lower triangles of the new square matrix, $\bm{G}$, are not identical. For instance, Barack Obama has many fans on Facebook (the most of any page in the data set), so the values in the first row are small, as they are all divided by the number of fans that Obama has. However, many of the fans of other candidates also like Obama, so those values are relatively high. TABLE 3. The First Ten Rows of the Ratio of Affiliation Matrix. Note: Because each page has a different number of fans, the denominator changes and the off-diagonal entries are not symmetric. It is notable that there is overlap in fans across partisan lines. Take, for instance, the Barack Obama column in Table 3: this column represents the proportion of the other politicians' fans who are also Obama's fans. Although it is not surprising that the other Democratic politicians have fans that are also Obama's fans, more than 7% of both Romney's and Palin's fans are also fans of Obama. Thus, for Facebook users who are fans of at least two candidates, there is not complete polarization among Facebook users. This may owe to centrist users who are interested in following members of both parties, or to users who are interested enough in politics to follow a large set of popular politicians regardless of their ideological affiliation. The agreement matrix provides the information required for estimation. From this stage, a number of methods to scale the data may be employed. For simplicity, we use SVD on the centered matrix, $\bm{G}$. Because we normalize the agreement matrix, which makes it asymmetric, the results from the left and right singular vectors are not similar. The right singular vector is still highly related to the page's popularity, as its denominator is the number of fans of the page. The left singular vector is unrelated to popularity, as its denominator is unrelated to the popularity of the page. Therefore, we retrieved the first rotated left singular vector as the ideology measure for the pages. We rescaled the values to have mean 0 and standard deviation 1 for ease of interpretation. We were next interested in estimating ideology scores for the users. If we were to scale the entire user-by-page matrix, we would be able to estimate separate parameters for the users and the pages that should be measures of ideology. Another approach would be to replicate what we have done with the pages and to create a matrix of connections between users based on shared political pages that they both like. This would create a matrix of approximately 6.2 million × 6.2 million entries. However, decomposing a 6.2-million × 6.2-million matrix generally requires more computational resources than are available to political scientists running R on a normal computer. Instead, for each user we take the average of the scores for the political pages that user endorsed. We begin our exploration of our estimates by examining the distributions of the ideology scores of politicians and individuals, as shown in Figure 2. The figure shows that both individuals and politicians are bimodally distributed, which is similar to Poole and Rosenthal's (Reference Poole and Rosenthal1997) results for the U.S. Congress, and Bonica's (Reference Bonica2013) results for candidates for office.
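Before turning to the substantive patterns in these distributions, the scaling steps of this section can be pulled together in a short, hedged sketch: normalize the affiliation matrix into the agreement matrix, apply SVD to a centered version of it, keep the first left singular vector as the page score, and average page scores to score users. The toy data below are random, and because the article does not spell out the exact centering, subtracting column means here is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pages, n_users = 6, 200
X = (rng.random((n_pages, n_users)) < 0.25).astype(float)  # toy page-by-user likes
A = X @ X.T                                                 # affiliation matrix

# Agreement matrix: each row divided by that page's fan count, g_ij = a_ij / a_ii.
G = A / np.diag(A)[:, None]

# SVD of a centered agreement matrix; the first left singular vector is taken
# as the page-level ideology score and rescaled to mean 0, standard deviation 1.
U, s, Vt = np.linalg.svd(G - G.mean(axis=0), full_matrices=False)
page_scores = U[:, 0]
page_scores = (page_scores - page_scores.mean()) / page_scores.std()

# Users are scored as the average of the pages they like; keep users with at
# least two likes, as in the article.
liked = X.T > 0
keep = liked.sum(axis=1) >= 2
user_scores = (liked[keep] @ page_scores) / liked[keep].sum(axis=1)
print(np.round(page_scores, 2), np.round(user_scores[:5], 2))
```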
The bimodal distribution of individuals is consistent with a polarized American public (Abramowitz Reference Abramowitz2010; Abramowitz and Saunders Reference Abramowitz and Saunders2008; Levendusky Reference Levendusky2009), in contrast to those who have argued that polarization in the electorate has not increased (Dimaggio, Evans, and Bryson Reference Dimaggio, Evans and Bryson1996; Fiorina, Abrams, and Pope Reference Fiorina, Abrams and Pope2006; McCarty, Poole, and Rosenthal Reference McCarty, Poole and Rosenthal2006). However, our sample is not fully representative of the population. Indeed, because those in our sample are likely to be more politically engaged than average, our data may reflect that more politically engaged citizens are also more polarized (Abramowitz and Jacobson Reference Abramowitz and Jacobson2006; Fiorina and Levendusky Reference Fiorina and Levendusky2006). Finally, while the distributions of both sets of actors show evidence of polarization, politicians are more dispersed than individuals. FIGURE 2. Density Plots of Ideological Estimates of 1,223 Politicians and 6.2 million Individuals. In order to validate the measures that we estimate, we begin by showing the correlation between our measures and other commonly used measures. For political elites, we rely on measures of ideology that come from their voting records. For individuals, we use self-reported indicators of ideology. Validation of Measures for Political Elites To examine the similarity between legislators' positions as derived from roll-call vote data and those from Facebook liking data, we matched Facebook pages to their corresponding DW-NOMINATE first dimension scores. We matched 465 pages to members of the 111th Congress. The Pearson correlation between the two measures is 0.94, and the within-party correlation for Democrats is 0.47 and for Republicans is 0.42. This correlation is quite high given that Aleman et al. (Reference Aleman, Calvo, Jones and Kaplan2009) find that the ideological estimates from roll-call data and ideological estimates from cosponsorship data in the U.S. Congress correlate between 0.85 and 0.94. Figure 3 provides a visual representation of the relationship between the two ideology scores. The figure shows that the measures cluster legislators into two parties, and that the correlation within parties is quite high as well. For comparison, Bonica (Reference Bonica2013) finds that the overall correlation between DW-NOMINATE and CFscores scores among incumbents is 0.89, with a Democratic correlation of 0.62 and a Republican correlation of 0.53. FIGURE 3. Scatter Plot Showing the Relationship between the Facebook Based Ideology Measure and DW-NOMINATE We have labeled some of the members of Congress in the figure to illustrate where some of the most extreme members and some of the more moderate members lie on both measures. There are two points for Jeff Flake and Ron Paul in Figure 3. Although most of the political figures in the dataset had only one page, some had more than one. Ron Paul maintained two official pages, one for his presidential candidacy and one for his work in Congress (this page explicitly asked for all discussion related to his presidential candidacy to move to the other page). Ethics rules state that members need to separate official business and campaign activity, and these separate pages may reflect an effort to comply. 
While it is unfortunate, with respect to reliably estimating ideology, to have multiple pages representing the same individual, it does allow us to see whether we get consistent ideological estimates across multiple pages. The similarity in ideological estimates for Flake's and Paul's pages suggests that our measurement strategy is reliable. Validation of Measures for Ordinary Citizens We next turn to the validation of the individual-level ideological estimates. First, we computed the average ideology score of individuals based on their stated political views. On Facebook, users may fill out a free-response profile field called "political views." Many users type in the same things, such as "Democrat," "Republican," "Liberal," or "Conservative." We took all labels that more than 20,000 users had used and calculated the average ideology score for each group, as well as the 95% confidence interval for that estimate. The results are shown in Figure 4. FIGURE 4. Average Facebook Ideology Score of Users Grouped by the Users' Stated Political Views. Notes: The category labeled "none" is the group of users that actually wrote the word "none" as their political views. The point labeled "(blank)" is the group of users that has not entered anything as their political views. The 95% confidence interval for each estimate is smaller than the point. The color of the points is on a scale from blue to red that is proportional to each group's average ideology score. Figure 4 shows that the ideology score predicts users' stated political views well. There appear to be at least three clear groups: those who state their political views and are liberal, those who do not state a clearly liberal or conservative ideology, and those who state their ideology and are conservative. There is also substantial variation in the middle group, those that do not state a liberal or conservative ideology. The groups represented are in approximately the order one would expect based on their average ideology, save for the fact that those who self-identify as "very conservative" are slightly to the left of those who self-identify as "conservative." While the above analysis suggests that the estimates we make for users are valid, stated political views on Facebook are a measure of political orientation that has not been previously analyzed. Stated political views of an individual on Facebook are difficult to interpret because we do not know how users understand the question. Therefore, we conducted a survey using a more standard ideology survey question to further understand whether the estimates correlate with this commonly used ideology metric. We use survey data from 20,027 individuals for whom we were also able to estimate ideology to further validate our ideology measure. These individuals consist of the intersection between individuals who took the survey and those who liked two or more pages. The survey was issued to a convenience sample of U.S. Facebook users who were logged in on September 6, 2012. 282 thousand individuals started the survey and 78 thousand completed it; the approximate AAPOR response rate was 2.6 percent. Respondents' demographics reflect a departure from U.S. Census demographics with respect to age (54% 18–34, 34% 35–54, and 11% 55+); gender (57% female); ethnicity (83% white); education (19% high school or less; 41% some college; 28% college graduate; 12% postgraduate); and, to a lesser extent, ideology (12% very liberal; 22% liberal; 42% moderate; 17% conservative; 6% very conservative). We asked individuals to place themselves on a five-point ideological scale and also for their party identification as either a Democrat, Republican, Independent, or Other. We then computed the average Facebook ideology score for each group. The results are shown in Figure 5. FIGURE 5. Average Facebook Ideology Score of Users Grouped by the Users' Ideology (left panel) and Party Identification (right panel) from a Survey Conducted Through the Facebook Website. Note: The 95% confidence interval for each estimate is smaller than the point. Figure 5 shows that the Facebook ideology score is strongly associated with traditional survey ideology and partisanship measures. The results show there are three clear groups: Liberals, Moderates, and Conservatives. Although the measure can statistically differentiate those who answered "Liberal" from those who answered "Very liberal," and those who answered "Conservative" from those who answered "Very conservative," the differences between those groups are small. Further, the ideology measure from Facebook is a good predictor of partisanship, with Democrats occupying the scale's left end, Republicans the right, and Independents the middle. We do not expect the respondents to our survey to be representative of the population overall or of the population for which we measure ideology, but we also do not expect that either method will show bias in measurement for either population. Therefore, we use these data to validate the Facebook measure, not to make arguments about the overall population. Example 1: Age and Ideology A substantial literature has found a relationship between age and political ideology (Cornelis et al. Reference Cornelis, Hiel, Roets and Kossowska2009; Glenn Reference Glenn1974; Krosnick and Alwin Reference Krosnick and Alwin1989; Ray Reference Ray1985). Most studies have found that as people age they become more conservative and less susceptible to attitude change (Jennings and Niemi Reference Jennings and Niemi1978; Krosnick and Alwin Reference Krosnick and Alwin1989). However, less is known about how that relationship varies across individual characteristics. Recent studies have begun to investigate how personality plays a role in mediating the relationship between age and conservatism (Cornelis et al. Reference Cornelis, Hiel, Roets and Kossowska2009). Here we investigate how the relationship between age and ideology varies by characteristics such as gender, marital status, and educational attainment. A key advantage of using the Facebook ideology data is the large number of observations. By looking at patterns in the raw data we can understand phenomena that are not as easily understood using standard techniques, such as regression. In order to study ideology across characteristics, we took all individuals for whom we calculated an ideology score and matched each individual to the characteristics they listed on their profile. Figure 6 shows the average ideology of users age 18 through 80, then separates the estimates by gender, marital status, and college attendance. The figure shows that older people are more conservative than younger people.
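The grouped averages behind Figures 4 through 6 amount to a simple group-by computation. A small illustrative sketch with synthetic data follows; the normal-approximation confidence interval is our assumption about how the plotted intervals were formed.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic stand-in for the user-level data: an age and an ideology score.
df = pd.DataFrame({
    "age": rng.integers(18, 81, size=10_000),
    "ideology": rng.normal(0, 1, size=10_000),
})

summary = df.groupby("age")["ideology"].agg(mean="mean", sd="std", n="size")
summary["half_width"] = 1.96 * summary["sd"] / np.sqrt(summary["n"])
summary["ci_low"] = summary["mean"] - summary["half_width"]
summary["ci_high"] = summary["mean"] + summary["half_width"]
print(summary.head())
```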
Women are more liberal than men, but a similar pattern emerges in which older women and older men are more conservative than their younger counterparts. While the overall pattern is similar for those who get married and those who do not, young people who are married are more conservative than their unmarried counterparts, and young people who are not married are more liberal than their married counterparts. After the age of approximately 35, the pattern of increasing conservatism with age is similar across the groups. Finally, college attendance is not predictive of ideology among the young, but among older groups having attended college is related to being more liberal. FIGURE 6. In Each Panel the Points Show the Average Ideology of the Age Group for Individuals Age 18 through 80, and the Lines Represent the 95% Confidence Interval of the Estimate. Notes: The upper left panel shows the average ideology of all users in our sample. The upper right panel shows the average ideology of men and women by age. The lower left panel shows the average ideology of married and unmarried individuals by age. The lower right panel shows the average ideology of college attendees and those who have not attended college by age. While these results are consistent with previous research, we have not studied change in ideology over time. The finding that older people are more conservative than younger people is consistent with a population that becomes more conservative over time. However, it is also consistent with recent surveys that have shown that the most recent generation is among the most liberal in recent memory (Kohut et al. Reference Kohut, Parker, Keeter, Doherty and Dimock2007). We do have some evidence of how an individual's ideology changes over time: the correlation in ideology from March 2011 to March 2012 is 0.99. With more time we should be able to better discern whether the patterns we found are due to cohort effects or to change in an individual's ideology over time. Example 2: Ideology in Social Networks Another advantage of using the Facebook ideology data is the abundance of data about the social networks of its users. Previous work has shown that ideology clusters in social networks (Huckfeldt, Johnson, and Sprague Reference Huckfeldt, Johnson and Sprague2004). Here, we wish to characterize the extent to which the clustering varies based on the strength of the relationship between two individuals. We consider three general types of relationships: friendships, family ties, and romantic partnerships. Clustering in the network may be due to some combination of three possibilities. First, clustering may be due to exposure to a shared environment. Both friends may be exposed to some external factor that influences them to change their ideology in a similar way (e.g., attending the same college (Newcomb Reference Newcomb1943)). Second, clustering may be due to homophily. That is, people may choose friends based on ideology (Heider Reference Heider1944; Festinger Reference Festinger1957). Third, clustering may be due to influence. That is, one friend may argue in favor of an ideological position and change their friend's views (Lazer et al. Reference Lazer, Rubineau, Chetkovich, Katz and Neblo2010). These processes are more likely to occur between close friends than socially distant ones: closer friends are more likely to be physically proximate and hence more likely to experience the same external stimuli.
Friends who share an ideology are more likely to become close friends as they will have more in common. Finally, close friends will have more opportunities to influence one another's ideological views, which should also increase ideological similarity. As social relationships become stronger, friends should be increasingly likely to hold similar ideological views. And, this pattern should be strongest for the most intimate relationships. Recent work has shown that while people do not usually select romantic partners based on ideology, they do base such decisions on factors related to ideology, which leads to highly correlated ideological views (Klofstad, McDermott, and Hatemi Reference Klofstad, McDermott and Hatemi2013). Furthermore, this pattern should grow stronger over time as partners are exposed to the same factors, influence each other, and as more similar partners should be more likely to stay together. Similarly, we expect that familial relationships will show evidence of these processes. Members of a nuclear family are more likely to experience the same external stimuli, due to being more likely to live near one another. While parents cannot select their children (or vice versa) based on ideological views (or other characteristics related to it), recent work shows a genetic component to ideology (Hatemi et al. Reference Hatemi, Gillespie, Eaves, Maher, Webb, Heath, Medland, Smyth, Beeby, Gordon, Mongomery, Zhu, Byrne and Martin2011). This genetic component coupled with socialization and similar exposure to factors that influence ideology mean that ideology is likely to be correlated within family units. Since Facebook allows users to identify their familial and romantic relationships, we were able to test for ideological similarity across these social links. We began by pairing all individuals with their siblings, parents, or romantic partners. We then calculated the Pearson correlation for each group. The results are shown in Figure 7. The figure shows that married couples have the highest correlation in ideology, while engaged couples have the second highest value. Correlations within the nuclear family have lower values, with parent-child relationships being stronger than sibling relationships. The fact that the correlation between siblings is lower than the correlation between parent-child pairs may be due to the fact that our sample skews toward younger users and that the set of parents on Facebook are more likely to be similar to their children across a range of factors than parents who have not yet joined the website. FIGURE 7. The Correlation in Ideology for Familial and Romantic Relationships. The 95% Confidence Intervals for Each of the Estimates is Smaller than the Point Next, we were interested in the correlation of ideology between friends. First, we paired all 6.2 million users for whom we estimated ideology in 2012 with every Facebook friend for which we also had an estimate for ideology, for a total of 327 million friendship dyads and an average of 53 friends per user. The overall correlation in 2012 was 0.69, which approximates other measures of ideological correlation among friends (Huckfeldt, Johnson, and Sprague Reference Huckfeldt, Johnson and Sprague2004). We repeated this procedure for 2011, with 6.1 million users who had 238 million friendship connections to other users for whom we estimate ideology.Footnote 7 The overall correlation between ideology among these friends in 2011 was 0.67. 
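Dyad-level correlations of this kind can be computed as in the following sketch; the ideology scores and friendship edges here are synthetic, whereas the article pairs each user's score with those of their actual Facebook friends.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_users = 1_000
ideology = rng.normal(0, 1, size=n_users)  # synthetic per-user ideology scores

# Synthetic friendship dyads between user indices (self-ties dropped).
edges = pd.DataFrame({
    "ego": rng.integers(0, n_users, size=5_000),
    "alter": rng.integers(0, n_users, size=5_000),
})
edges = edges[edges["ego"] != edges["alter"]]

ego_scores = ideology[edges["ego"].to_numpy()]
alter_scores = ideology[edges["alter"].to_numpy()]
print(np.corrcoef(ego_scores, alter_scores)[0, 1])  # near 0 for random data
```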
The slight increase in correlation of friends' ideology from 2011 to 2012 is suggestive that there is greater polarization among friends in 2012. Additionally, we categorized all friendships in each year of our sample by decile, ranking them from lowest to highest percent of interactions. Each decile is a separate sample of friendship dyads. We validated this measure of tie strength with a survey (see Jones, Settle, Bond, Fariss, Marlow, and Fowler (Reference Jones, Settle, Bond, Fariss, Marlow and Fowler2012) for more detail) in which we asked Facebook users to identify their closest friends (either 1, 3, 5, or 10). We then measured the percentile of interaction between friends in the same way and predicted survey response based on interaction between Facebook friends. The results show that as the decile of interaction increases the probability that a friendship is the user's closest friend increases. This finding is consistent with the hypothesis that the closer a social tie between two people, the more frequently they will interact, regardless of medium. In this case, frequency of Facebook interaction is a good predictor of being named a close friend. Using the decile measure of tie strength, we then calculated the correlation between user and friend ideology on each set of dyads for both 2011 and 2012 (see Figure 8). For both years the correlation in friends' ideology increases as tie strength increases. The proportion of interaction between friends is a better predictor of similarity of ideology between friends in 2012 than in 2011, again suggesting that in 2012 friendships are more politically polarized than in 2011. We caution that this correlation may increase for other reasons, such as better measures of ideology in 2012 owing to users liking more political pages, or perhaps to changes of the makeup of the set of individuals for whom we can estimate ideology. Although we can only estimate ideology for about 2% more people in 2012 than 2011, the change in the sample could be enough to account for the difference in 2012. Notes: Each decile represents a separate set of friendship dyads. Decile of interaction is based on the proportion of interaction between the pair during the three months before ideology was scaled. The 95% confidence intervals for each of the estimates is smaller than the point. FIGURE 8. The Correlation in Ideology for Friendship Relationships Example 3: Friend Ideology and Turnout While the composition of ideology in social networks is important on its own, considerable research has been applied to understanding how the makeup of an individual's social network affects political participation (Eveland and Hively Reference Eveland and Hively2009; Huckfeldt, Johnson, and Sprague Reference Huckfeldt, Johnson and Sprague2004; Huckfeldt, Mendez, and Osborn Reference Huckfeldt, Mendez and Osborn2004; McClurg Reference McClurg2006; Mutz Reference Mutz2002; Scheufele et al. Reference Scheufele, Nisbet, Brossard and Nisbet2004). Scholars have long theorized about how cross-cutting pressures in an individual's social environment may cause an individual to become less interested in politics and to disengage (Campbell et al. Reference Campbell, Converse, Miller and Stokes1960; Ithel de Sola Pool and Popkin Reference de Sola Pool, Abelson and Popkin1956). 
Recent work has tested theories about whether disagreement in an individual's social network affects political participation (Mutz Reference Mutz2002; Huckfeldt, Johnson, and Sprague Reference Huckfeldt, Johnson and Sprague2004). This work has consistently found that exposure to disagreement depresses engagement and participation. Previous studies have relied on snowball samples in order to construct social network measures, and survey respondents may be biased in recalling their discussion partners. Social network sites, such as Facebook and Twitter, allow us to observe friendships without asking individuals about their political discussion partners. If bias in recalling discussion partners is associated with the difference in ideology between a pair of individuals, which certainly may be the case if such discussions are more likely to be memorable, estimates of its effect on participation may be biased as well. We add to this literature by studying the relationship between exposure to disagreement and validated public voting records. We matched records for all individuals from 13 states (see Jones, Bond, Fariss, Settle, Kramer, Marlow, and Fowler (Reference Jones, Bond, Fariss, Settle, Kramer, Marlow and Fowler2012) for more information) to the Facebook data. Among the 6.2 million individuals for whom we calculated an ideology score, we matched a voting record for 397,815,Footnote 8 resulting in a total of 2,410,097 friendship pairs where the individual and the friend had both an ideology score and a public record of whether each person voted. Here we test the relationship between an individual's (in social network terms, the "ego") turnout and the difference between the ego's ideology and that of a friend (the "alter"). Our primary variable of interest is the Mean |Ideology Difference|, as it measures the average difference in ideology between the ego and all connected alters. As Table 4 shows, an increase in the ideological distance between friends is associated with lower rates of turnout by the ego. A one standard deviation increase in the Mean |Ideology Difference| measure corresponds to a 1.3 [95% CI 1.1, 1.4] percentage point decrease in the likelihood of voting (95% CI estimated based on King, Tomz, and Wittenberg Reference King, Tomz and Wittenberg2000). TABLE 4. Logistic Regression of Ego Validated Voting in 2010 on Ego Covariates and Alter Characteristics of Ideology and Turnout. Note: Robust standard errors clustered by common friend (e.g., Williams Reference Williams2000; Wooldridge Reference Wooldridge2002). This result is consistent with previous work showing that disagreement in an individual's social network is associated with lower turnout rates (Huckfeldt, Johnson, and Sprague Reference Huckfeldt, Johnson and Sprague2004; Mutz Reference Mutz2002). However, the present work has the advantage of gathering behavioral data. That is, we use validated public voting records and a behavioral ideology measure, and we observe friendships, which helps us avoid an array of issues including survey response bias and recall bias for friendships. This article makes several important contributions: first, it presents a method for measuring ideology using large-scale data from social media, and second, it uses that measure to examine political polarization, the ideological structure that characterizes relationships in society, and how that structure is associated with participation rates.
We show that the method produces reliable ideological estimates that are predictive of other measures of elite ideology (DW-NOMINATE) and of individual-level ideology (self-reported liberal-conservative ideology). Placement of elites and masses on the same scale is an important step in the study of electoral politics and political communication. For instance, a longstanding debate in the literature concerns whether ordinary citizens are polarizing (Abramowitz and Saunders 2008; Fiorina, Abrams, and Pope 2006) and, if so, whether this divide is driven by elite or mass preferences. Data that put elites and ordinary citizens on the same scale are necessary to the study of these types of phenomena because they allow for reliable comparison of the ideology of both groups. We then investigate the ideological polarization of social relationships by examining how ideology maps onto the structure of our social relationships, coupling our ideological measure with extensive data about social ties and interaction online. We not only confirm previous work finding that friendships cluster by ideological preference, but extend this work to show that ideological correlation is stronger among close friends and family, and especially among romantic partners. Further, we show that from 2011 to 2012 there is an increase in the extent to which ideology is associated with the closeness of a friendship, suggesting polarization over the one-year period. This work contributes to the debate about polarization with new evidence not only about the distribution of ideologies in the electorate, but also about the distribution of social relationships among those with similar and different ideological preferences. A better understanding of this type of polarization is critical, as the ideologies of our social contacts can affect the likelihood that we are exposed to new ideas, a key component of democracy (Huckfeldt, Johnson, and Sprague 2004). Future work on polarization among friends will be important for understanding the evolution of ideological polarization in ordinary citizens. Finally, we study the association between disagreement in an individual's social network and decreased rates of turnout. This result is consistent with previous work (Huckfeldt, Johnson, and Sprague 2004; Mutz 2002), but has the advantage of using behavioral measures of ideology, turnout, and friendships. This helps to avoid the biases that individuals have in answering survey questions about ideology and turnout and in recalling friends with whom they have discussed politics. While our results show that individuals in networks with disagreement are less likely to participate in politics, further work should investigate whether this relationship is causal. It is important not to lose sight of the potential drawbacks of the approach we use and the weaknesses of this particular study. First, the population we study here is not representative of the U.S. population overall. For instance, Facebook users are on average younger than the population as a whole. However, more than 100 million American adults are Facebook users, out of a population of 230 million. Furthermore, the particular sample we use here—those who have liked two or more political pages—differs from the overall population of Facebook users.
This subset is likely older, more politically engaged, and more engaged with Facebook than the typical user. However, our sample is not limited to political elites or donors to political causes. Rather, these users constitute a subset of the population who are, on average, simultaneously more interested in politics and more engaged on Facebook. Early in the article, we discussed several problems with traditional measures of individual ideology. First, traditional survey measures do not account for the multiple dimensions of ideology. While we have focused here on the first dimension of ideology, our method of measurement recovers many dimensions related to ideology. In future work, scholars should investigate these dimensions and how they can help us to better understand ideology outside of the traditional left-right paradigm—for example, can this measure help us understand libertarian-authoritarian and "intervention-free market" dimensions (Hix 1999)? Second, traditional survey measures rely on a scale that is treated as interval, assuming that differences between adjacent responses are equal (that is, the difference between a response of "Very Liberal" and "Liberal" is assumed to be the same as the difference between responses of "Liberal" and "Somewhat Liberal"). While our measure is continuous by construction, we find that there are three clustered groupings of individuals: those who are liberal, those who are conservative, and those who are somewhere in the middle. While we find few differences between those who label themselves "Liberal" or "Very Liberal" and "Conservative" or "Very Conservative," future work should investigate whether the lack of meaningful difference is due to our measurement strategy or to other factors related to how individuals present themselves politically online. Finally, traditional survey measures of ideology are unreliable, particularly among liberals who are unwilling to label themselves as such. The construction of our measure should help to make it more reliable. Still, we cannot say that our measure is strictly better than others. Whereas survey respondents may give misleading answers to questions about ideology, in our case individuals may like political pages to acquire information about a candidate in addition to signaling support. As scholars investigate how people make decisions about presenting themselves politically online, we will be better able to understand how such choices affect the reliability of our measure. Our hope is that the method we have described here will allow for tests of theories from spatial models of politics that require data on the relative positioning of political actors, as have previous methods that put a diverse set of political actors on a common scale. Our method has the potential to recover ideological estimates for any entity present in social media—including legislators, the candidates for office they have defeated, bureaucrats, ballot measures, and political issues. This type of data will be important for our ability to study political phenomena concerning the interaction between legislator and constituent ideology, such as representation or vote choice. The possibilities for future research on large data sets, such as those from Facebook and Twitter, that contain previously unstudied types of information about people should not be underestimated.
The increased power associated with having a large number of individuals affords researchers the opportunity to unobtrusively test theories that we previously could not; furthermore, it increases precision when we do so (footnote 9). While in many cases such fine-grained estimates may not be necessary and conventional survey sampling works well, there is an array of applications for which such estimates are desirable: estimates of ideology or of support for candidates in small districts or for local officials; estimates of support among minority populations; estimates of change over short periods of time; and ideological estimates in non-U.S. political settings where reliable survey, roll-call, and/or fund-raising data might not be available. With measures such as the one described in this article, we may be able to detect changes in the public perception of political officials and/or candidates simply by examining the public's changing preferences online. This research is part of a growing literature in the social sciences in which large sources of data are used to conduct research that was previously not possible (Lazer et al. 2009). We hope that the ideology measure we use in this article, and others that measure the ideology of large numbers of the mass public (Bonica 2013; Tausanovitch and Warshaw 2012), will contribute to our understanding of ideology in new ways. While there has been a long tradition of research into ideology and its structure, this article should form a starting point for future research into how our social networks are critical to our understanding of society's ideological makeup.

Due to privacy considerations, data must be kept on site at Facebook. However, Facebook is willing to give access to de-identified data on site to researchers wishing to replicate these results.

1. Although question wording varies, a common version is, "We hear a lot of talk these days about liberals and conservatives. Here is a 7-point scale on which the political views that people might hold are arranged from extremely liberal to extremely conservative. Where would you place yourself on this scale, or haven't you thought much about this?" Respondents are then given the choice to identify themselves as liberal or conservative, "extremely" or "slightly" liberal or conservative, "moderate/middle of the road," or "don't know/haven't thought about it much."
2. More than half of Americans use social media on a regular basis (Facebook 2011), and American Internet users spent more time on Facebook than on any other single Internet destination in 2011 by nearly two orders of magnitude according to Nielsen, with the average person spending 7–8 hours per month (Nielsen 2011). Over 42 percent of Americans reported learning something about the 2012 campaign on Facebook, according to Pew (2012).
3. For instance, there are many pages that are about President Obama in one way or another, but only one page (www.facebook.com/barackobama) is official and is ostensibly maintained by the President.
4. The fact that the data include only the presence of a relationship is not unlike other data sources that have been used to scale the ideology of political actors, such as legislative cosponsorships
(Aleman et al. 2009; Talbert and Potoski 2009) and campaign finance records (Bonica 2013).
5. We exclude users who like only one page and pages with only one supporter, as they do not add any additional information to the matrix. These users would only be counted in the diagonals of the affiliation matrix described below.
6. These figures are current as of March 2012.
7. The correlation over time of ideology within an individual from 2011 to 2012 is 0.99. While this is further evidence of ideological constraint, it is important to consider the measure's construction, as it can impact the measure of change over time (Achen 1975; Converse 2006). As few users changed the politicians they liked over the period, such a high correlation in ideology is not entirely surprising. The correlation is not due only to the fact that few users changed the pages they like; the changes that did occur also did not substantially change the ordering of the ideologies of the pages.
8. The low match rate of about 6% owes to the fact that we only have voting information from 13 states, while the individuals for whom we calculate ideology scores may reside in any state.
9. We caution that with this increase in statistical power, we should be careful not to confuse statistical significance with practical significance.

Abramowitz, Alan. 2010. The Disappearing Center: Engaged Citizens, Polarization and American Democracy. New Haven: Yale University Press. Abramowitz, Alan, and Jacobson, Gary C. 2006. Red and Blue Nation? Characteristics and Causes of America's Polarized Politics. Washington, DC: Brookings Institution Press, chapter "Disconnected or Joined at the Hip?" Abramowitz, Alan, and Saunders, Kyle. 2008. "Is Polarization a Myth?" Journal of Politics 70 (2): 542–55. Achen, Christopher. 1975. "Mass Political Attitudes and the Survey Response." American Political Science Review 69 (4): 1218–31. Aleman, Eduardo, Calvo, Ernesto, Jones, Mark P., and Kaplan, Noah. 2009. "Comparing Cosponsorship and Roll-Call Ideal Points." Legislative Studies Quarterly 34 (1): 87–116. Ansolabehere, Stephen, Rodden, Jonathan, and Snyder, James M. 2008. "The Strength of Issues: Using Multiple Measures to Gauge Preference Stability, Ideological Constraint, and Issue Voting." American Political Science Review 102 (2): 215–32. Bafumi, Joseph, and Herron, Michael. 2010. "Leapfrog Representation and Extremism: A Study of American Voters and their Members in Congress." The American Political Science Review 104: 519–42. Bailey, Michael. 2007. "Comparable Preference Estimates Across Time and Institutions for the Court, Congress, and Presidency." American Journal of Political Science 51: 433–48. Baldassarri, Delia, and Goldberg, Amir. 2014. "Neither Ideologues, nor Agnostics: Alternative Voters' Belief System in an Age of Partisan Politics." American Journal of Sociology (forthcoming). Barberá, Pablo. 2014. "Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data." Political Analysis (forthcoming). Bond, Robert M., Fariss, Christopher J., Jones, Jason J., Kramer, Adam D. I., Marlow, Cameron, Settle, Jaime E., and Fowler, James H. 2012. "A 61-million-person Experiment in Social Influence and Political Mobilization."
Nature 489 (7415): 295–8. Bonica, Adam. 2013. "Ideology and Interests in the Political Marketplace." American Journal of Political Science 57 (2): 294–311. Butler, Daniel M., and Broockman, David E. 2011. "Do Politicians Racially Discriminate Against Constituents? A Field Experiment on State Legislators." American Journal of Political Science 55 (3): 463–77. Butler, Daniel M., Karpowitz, Christopher F., and Pope, Jeremy C. 2012. "A Field Experiment on Legislators' Home Styles: Service versus Policy." The Journal of Politics 74 (2): 474–86. Campbell, Angus, Converse, Philip E., Miller, Warren E., and Stokes, Donald E. 1960. The American Voter. Chicago and London: University of Chicago Press. Clinton, Joshua, Jackman, Simon, and Rivers, Douglas. 2004. "The Statistical Analysis of Roll Call Data." The American Political Science Review 98 (2): 355–70. Converse, Philip. 2006. "The Nature of Belief Systems in Mass Publics." Critical Review 18 (1–3): 1–74. Cornelis, Ilse, Hiel, Alain Van, Roets, Arne, and Kossowska, Malgorzata. 2009. "Age Differences in Conservatism: Evidence on the Mediating Effects of Personality and Cognitive Style." Journal of Personality 77 (1): 51–88. de Sola Pool, Ithel, Abelson, Robert P., and Popkin, Samuel. 1956. Candidates, Issues and Strategies. Cambridge, MA: MIT Press. Dimaggio, Paul, Evans, John, and Bryson, Bethany. 1996. "Have Americans' Social Attitudes Become More Polarized?" American Journal of Sociology 102: 690–755. Eckart, Carl, and Young, Gale. 1936. "The Approximation of One Matrix by Another of Lower Rank." Psychometrika 1 (3): 211–8. Eveland, William P. Jr., and Hively, Myiah Hutchens. 2009. "Political Discussion Frequency, Network Size, and 'Heterogeneity' of Discussion as Predictors of Political Knowledge and Participation." Journal of Communication 59 (2): 204–24. Facebook. 2011. "Statistics." http://www.facebook.com/press/info.php?statistics Feldman, Stanley. 1998. "Structure and Consistency in Public Opinion: The Role of Core Beliefs and Values." American Journal of Political Science 32 (2): 416–40. Festinger, Leon. 1957. A Theory of Cognitive Dissonance. Evanston, IL: Row Peterson. Fiorina, Morris, Abrams, Samuel, and Pope, Jeremy. 2006. Culture War? The Myth of a Polarized America. New York: Pearson Longman. Fiorina, Morris, and Levendusky, Matthew. 2006. Red and Blue Nation? Characteristics and Causes of America's Polarized Politics. Washington, DC: Brookings Institution Press, chapter "Disconnected: The Political Class versus the People." Gerber, Elisabeth R., and Lewis, Jeffrey B. 2004. "Beyond the Median: Voter Preferences, District Heterogeneity, and Political Representation." Journal of Political Economy 112 (6): 1364–83. Glenn, Norval D. 1974. "Aging and Conservatism." The ANNALS of the American Academy of Political and Social Science 415 (1): 176–86. Goldberg, Amir. 2011. "Mapping Shared Understandings Using Relational Class Analysis: The Case of the Cultural Omnivore Reexamined." American Journal of Sociology 116 (5): 1397–436. Grimmer, Justin, Messing, Solomon, and Westwood, Sean J. 2012.
"How Words and Money Cultivate a Personal Vote: The Effect of Legislator Credit Claiming on Constituent Credit Allocation." American Political Science Review 106 (04): 703–19.CrossRefGoogle Scholar Hatemi, Peter, Gillespie, Nathan, Eaves, Lindon, Maher, Brion, Webb, Bradley, Heath, Andrew, Medland, Sarah, Smyth, David, Beeby, Harry, Gordon, Scott, Mongomery, Grant, Zhu, Ghu, Byrne, Enda, and Martin, Nicholas. 2011. "A Genome-Wide Analysis of Liberal and Conservative Political Attitudes." Journal of Politics 73 (1): 1–15.CrossRefGoogle Scholar Heider, F. 1944. "Social Perception and Phenomenal Causality." Psychological Review 51 (6): 358–74.CrossRefGoogle Scholar Hix, Simon. 1999. "Dimensions and Alignments in European Union Politics: Cognitive Constraints and Partisan Responses." European Journal of Political Research 35 (1): 69–106.CrossRefGoogle Scholar Hix, Simon, Noury, Abdul, and Roland, Gerard. 2006. "Dimensions of Politics in the European Parliament." American Journal of Political Science 50 (2): 494–520.CrossRefGoogle Scholar Huckfeldt, Robert, Johnson, Paul E., and Sprague, John. 2004. Political Disagreement: The Survival of Diverse Opinions within Communication Networks. New York: Cambridge University Press.CrossRefGoogle Scholar Huckfeldt, Robert, Mendez, Jeanette M., and Osborn, Tracy L.. 2004. "Disagreement, Ambivalence, and Engagement: The Political Consequences of Heterogeneous Networks." Political Psychology 25 (1): 65–95.CrossRefGoogle Scholar Jackson, John E. 1983. "The Systematic Beliefs of the Mass Public: Estimating Policy Preferences with Survey Data." Journal of Politics 45 (4): 840–65.CrossRefGoogle Scholar Jennings, M. Kent, and Niemi, Richard G.. 1978. "The Persistence of Political Orientations: An Over-Time Analysis of Two Generations." British Journal of Political Science 8 (3): 333–63.CrossRefGoogle Scholar Jones, Jason J., Bond, Robert M., Fariss, Christopher J., Settle, Jaime E., Kramer, Adam D. I., Marlow, Cameron, and Fowler, James H.. 2012. "Yahtzee: An Anonymized Group Level Matching Procedure." PLoS ONE 8 (2): 55760.CrossRefGoogle ScholarPubMed Jones, Jason J., Settle, Jaime E., Bond, Robert M., Fariss, Christopher J., Marlow, Cameron, and Fowler, James H.. 2012. "Inferring Tie Strength from Online Directed Behavior." PLoS ONE 8 (1): e52168.CrossRefGoogle ScholarPubMed Kinder, Donald. 1983. Diversity and Complexity in American Public Opinion. Washington, DC: APSA Press, 389–425.Google Scholar King, Gary, Tomz, Michael, and Wittenberg, Jason. 2000. "Making the Most of Statistical Analyses: Improving Interpretation and Presentation." American Journal of Political Science 44 (2): 347–361.CrossRefGoogle Scholar Klofstad, Casey, McDermott, Rose, and Hatemi, Peter. 2013. "The Dating Preferences of Liberals and Conservatives." Political Behavior 35: 519–38.CrossRefGoogle Scholar Kohut, Andrew, Parker, Kim, Keeter, Scott, Doherty, Carroll, and Dimock, Michael. 2007. "How Young People View Their Lives, Futures and Politics: A Portrait of 'Generation Next'." http://www.people-press.org/files/legacy-pdf/300.pdf Google Scholar Krosnick, Jon A., and Alwin, Duane F.. 1989. "Aging and Susceptibility to Attitude Change." Journal of Personality and Social Psychology 57: 416–25.CrossRefGoogle ScholarPubMed Laver, Michael, Benoit, Kenneth, and Garry, John. 2003. "Extracting Policy Positions from Political Texts Using Words as Data." 
The American Political Science Review 97 (2): 311–32. Lazer, David, Pentland, Alex, Adamic, Lada, Aral, Sinan, Barabasi, Albert Laszlo, Brewer, Devon, Christakis, Nicholas, Contractor, Noshir, Fowler, James, Gutmann, Myron, Jebara, Tony, King, Gary, Macy, Michael, Roy, Deb, and Alstyne, Marshall Van. 2009. "Computational Social Science." Science 323: 721–23. Lazer, David, Rubineau, Brian, Chetkovich, Carol, Katz, Nancy, and Neblo, Michael. 2010. "The Coevolution of Networks and Political Attitudes." Political Communication 27 (3): 248–74. http://www.tandfonline.com/doi/abs/10.1080/10584609.2010.500187 Levendusky, Matthew. 2009. The Partisan Sort: How Liberals Became Democrats and Conservatives Became Republicans. Chicago: The University of Chicago Press. Malbin, Michael J. 2009. "Small Donors, Large Donors and the Internet: The Case for Public Financing after Obama." Unpublished manuscript. Martin, Andrew D., and Quinn, Kevin M. 2002. "Dynamic Ideal Point Estimation via Markov Chain Monte Carlo for the U.S. Supreme Court, 1953–1999." Political Analysis 10 (2): 134–53. McCarty, Nolan, Poole, Keith T., and Rosenthal, Howard. 2006. Polarized America: The Dance of Ideology and Unequal Riches. Cambridge, MA: MIT Press. McClurg, Scott. 2006. "The Electoral Relevance of Political Talk: Examining Disagreement and Expertise Effects in Social Networks on Political Participation." American Journal of Political Science 50 (3): 737–54. Messing, Solomon, and Westwood, Sean. 2012. "Selective Exposure in the Age of Social Media: Endorsements Trump Partisan Source Affiliation when Selecting Online News Media." Communication Research 41 (8): 1042–63. Monroe, Burt L., Colaresi, Michael, and Quinn, Kevin. 2009. "Fightin' Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict." Political Analysis 15 (4): 372–403. Monroe, Burt L., and Maeda, Ko. 2004. Rhetorical Ideal Point Estimation: Mapping Legislative Speech. Palo Alto: Stanford University. Mutz, Diana. 2002. "The Consequences of Cross-Cutting Networks for Political Participation." American Journal of Political Science 46 (4): 838–55. Mutz, Diana C., and Young, Lori. 2011. "Communication and Public Opinion: Plus Ça Change?" Public Opinion Quarterly 75 (5): 1018–44. Newcomb, Theodore Mead. 1943. Personality & Social Change; Attitude Formation in a Student Community. New York: Dryden Press. Nielsen. 2011. "State of the Media: Social Media Report Q3." http://www.nielsen.com/content/corporate/us/en/insights/reports-downloads/2011/social-media-report-q3.html Pew. 2012. "Internet Gains Most as Campaign News Source but Cable TV Still Leads: Social Media Doubles, but Remains Limited." http://www.journalism.org/commentary_backgrounder/social_media_doubles_remains_limited Poole, Keith. 2005. Spatial Models of Parliamentary Voting. New York: Cambridge University Press. Poole, Keith, and Rosenthal, Howard. 1997. Ideology and Congress. New Brunswick, NJ: Transaction Publishers. Ray, John J. 1985. "What Old People Believe: Age, Sex, and Conservatism." Political Psychology 6 (3): 525–28. Scheufele, Dietram A., Nisbet, Matthew C., Brossard, Dominique, and Nisbet, Erik C. 2004.
"Social Structure and Citizenship: Examining the Impacts of Social Setting, Network Heterogeneity, and Informational Variables on Political Participation." Political Communication 21 (3): 315–38.CrossRefGoogle Scholar Schiffer, Adam J. 2000. "I'm Not That Liberal: Explaining Conservative Democratic Identification." Political Behavior 22 (4): 293–310.CrossRefGoogle Scholar Talbert, Jeffery C., and Potoski, Matthew. 2009. "Setting the Legislative Agenda: The Dimensional Structure of Bill Cosponsoring and Floor Voting." Journal of Politics 64 (3): 864–91.CrossRefGoogle Scholar Tausanovitch, Chris, and Warshaw, Christopher. 2012. "Representation in Congress, State Legislatures and Cities." Unpublished manuscript.Google Scholar Treier, Shawn, and Hillygus, D. Sunshine. 2009. "The Nature of Political Ideology in the Contemporary Electorate." Public Opinion Quarterly 73 (4): 679–703.CrossRefGoogle Scholar Williams, Rick L. 2000. "A Note on Robust Variance Estimation for Cluster-Correlated Data." Biometrics 56 (2): 645–6.CrossRefGoogle ScholarPubMed Wojcieszak, Magdalena E., and Mutz, Diana C.. 2009. "Online Groups and Political Discourse: Do Online Discussion Spaces Facilitate Exposure to Political Disagreement?" Journal of Communication 59 (1): 40–56.CrossRefGoogle Scholar Wood, Thomas, and Oliver, Eric. 2012. "Toward a More Reliable Implementation of Ideology in Measures of Public Opinion." Public Opinion Quarterly 76 (4): 636–62.CrossRefGoogle Scholar Wooldridge, Jeffrey M. 2002. Econometric Analysis of Cross Section and Panel Data. Boston: The MIT Press.Google Scholar Fariss, Christopher J. 2014. Human Rights Treaty Compliance and the Changing Standard of Accountability. SSRN Electronic Journal, Bonica, Adam 2015. Inferring Roll-Call Scores from Campaign Contributions Using Supervised Machine Learning. SSRN Electronic Journal, Barberá, Pablo Jost, John T. Nagler, Jonathan Tucker, Joshua A. and Bonneau, Richard 2015. Tweeting From Left to Right. Psychological Science, Vol. 26, Issue. 10, p. 1531. Bakshy, Eytan Messing, Solomon and Adamic, Lada A. 2015. Exposure to ideologically diverse news and opinion on Facebook. Science, Vol. 348, Issue. 6239, p. 1130. Mellon, Jonathan and Prosser, Christopher 2016. Twitter and Facebook are Not Representative of the General Population: Political Attitudes and Demographics of Social Media Users. SSRN Electronic Journal, IMAI, KOSUKE LO, JAMES and OLMSTED, JONATHAN 2016. Fast Estimation of Ideal Points with Massive Data. American Political Science Review, Vol. 110, Issue. 4, p. 631. Broockman, David E. 2016. Approaches to Studying Policy Representation. Legislative Studies Quarterly, Vol. 41, Issue. 1, p. 181. Shi, Yongren and Macy, Michael 2016. Measuring structural similarity in large online networks. Social Science Research, Vol. 59, Issue. , p. 97. Mitchell, Charles L. 2016. Does Changing Media Reality Likely Affect the Election of 2016?. SSRN Electronic Journal, Ceron, Andrea 2017. Social Media and Political Accountability. p. 45. Ceron, Andrea 2017. Social Media and Political Accountability. p. 1. Ceron, Andrea 2017. Social Media and Political Accountability. p. 197. Anastasopoulos, Lefteris Jason Moldogaziev, Tima T. and Scott, Tyler 2017. Computational Text Analysis for Public Management Research. SSRN Electronic Journal , Bond, Robert M. Settle, Jaime E. Fariss, Christopher J. Jones, Jason J. and Fowler, James H. 2017. Social Endorsement Cues and Political Participation. Political Communication, Vol. 34, Issue. 2, p. 261. Ceron, Andrea 2017. 
Does a Buckyball spin like an electron or like a baseball?

We are often told that an electron does not really spin like a baseball. Only one (or two, if you count up and down) spin states, for example. How about a Buckyball? Does it spin more like an electron, or more like a baseball? Where is the dividing line? How can you measure the difference?

Tags: quantum-mechanics, classical-mechanics, angular-momentum, quantum-spin. Asked by Jim Graber.

Comment: A buckyball is a fairly large molecule. Why couldn't it spin like a baseball? – Asher Aug 27 '15 at 21:49

Answer (Stephen Powell): The fundamental difference between an electron's spin and that of a baseball is that the electron is (as far as we know) a point particle. It therefore cannot rotate in the usual sense, where individual parts move relative to the center of mass; we say that its angular momentum is intrinsic. The magnitude $\lvert\vec{S}\rvert^2$ of a particle's intrinsic angular momentum $\vec{S}$ is fixed, which is the sense in which an electron has "only one" spin state. (The direction is not fixed, so, as you say, the spin can be up or down.) A buckyball, like a baseball, has internal structure; the carbon atoms can be set in motion around the center of mass, giving it angular momentum. (The fermions inside the carbon atoms have intrinsic angular momentum, but in the ground state of the molecule these cancel out.) The magnitude of the buckyball's angular momentum $\vec{J}$ is not fixed, so in this sense it is more like a baseball. But angular momentum is quantized, and while this is utterly irrelevant for a baseball, it has measurable consequences for even large molecules like fullerenes. The total angular momentum obeys $\lvert\vec{J}\rvert^2 = \hbar^2 J(J+1)$, where $J$ is an integer (assuming your buckyball has an even number of fermions, e.g., 60 ${}^{12}$C atoms), while the projection on any axis is restricted to integers from $-J$ to $+J$. The kinetic energy associated with rotation is $\frac{\lvert\vec{J}\rvert^2}{2I}$, where $I$ is the moment of inertia, so this implies (unequally spaced) steps in the allowed energy. For C$_{60}$, $I$ is small enough that these steps have been measured using Raman spectroscopy.

Comment: I have accepted this answer, but I am still thinking about the boundary. A proton and a neutron are not point particles, so I would say they also spin like baseballs. Besides the electron, the photon, which is not a point particle but is a zero (rest) mass "particle", would not spin like a baseball. Similarly the gluon, the quark and the graviton would be non-baseball spinners. This leaves the neutrino, somewhat more uncertain, but still probably on the not-baseball-like side. But everything else would fall on the spins-like-a-baseball side. – Jim Graber Aug 28 '15 at 14:47

Comment: All the standard-model particles are "point particles"—in the sense of having no constituents—including photons, neutrinos, and the rest. In a baryon (proton, neutron, etc.), there can be contributions from the quarks' intrinsic angular momentum and also from their orbital angular momentum (their relative motion). I believe it's the intrinsic contribution that dominates, but this is well outside my area of expertise. (In the equivalent case of an atom, the different contributions from intrinsic and orbital angular momenta depend on the type of atom and its excitation state.)
– Stephen Powell Sep 3 '15 at 13:43

Comment: Yes, one way to answer the question is to say that only point particles spin like an electron. So quarks, yes, but protons and neutrons, no. Then you have the photon. It has intrinsic spin, I think, but I hesitate to call it a point particle. Also, there is the little matter of spin one versus spin one-half. – Jim Graber Sep 4 '15 at 14:38
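As a rough illustration of the energy scale in the answer above: treating C$_{60}$ as a hollow sphere of carbon atoms (an assumed geometry) with a radius of about 0.35 nm gives a rotational constant close to the value measured spectroscopically. The sketch below is a back-of-envelope estimate only.

```python
import math

# Back-of-envelope estimate of the rotational level spacing of C60, treating
# the molecule as a hollow sphere (an assumption) with I = (2/3) m r^2.
hbar = 1.054571817e-34         # J*s
c = 2.99792458e10              # speed of light in cm/s (gives B in cm^-1)
m = 60 * 12 * 1.66053907e-27   # mass of 60 carbon-12 atoms, kg
r = 0.35e-9                    # approximate radius of C60, m
I = (2.0 / 3.0) * m * r ** 2   # hollow-sphere moment of inertia

# Rotational constant B = hbar / (4*pi*c*I); the J=0 -> J=1 gap is 2B.
B = hbar / (4 * math.pi * c * I)
print(f"I ~ {I:.2e} kg m^2, B ~ {B:.4f} cm^-1, first gap ~ {2 * B:.4f} cm^-1")
# Both numbers come out around a few thousandths of a wavenumber, i.e. tiny
# compared with thermal energies, but resolvable spectroscopically.
```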
Somalier: rapid relatedness estimation for cancer and germline studies using efficient genome sketches Brent S. Pedersen ORCID: orcid.org/0000-0003-1786-22161,2, Preetida J. Bhetariya1, Joe Brown1,2, Stephanie N. Kravitz1, Gabor Marth1, Randy L. Jensen3, Mary P. Bronner4, Hunter R. Underhill5 & Aaron R. Quinlan1,2,6 When interpreting sequencing data from multiple spatial or longitudinal biopsies, detecting sample mix-ups is essential, yet more difficult than in studies of germline variation. In most genomic studies of tumors, genetic variation is detected through pairwise comparisons of the tumor and a matched normal tissue from the sample donor. In many cases, only somatic variants are reported, which hinders the use of existing tools that detect sample swaps solely based on genotypes of inherited variants. To address this problem, we have developed Somalier, a tool that operates directly on alignments and does not require jointly called germline variants. Instead, Somalier extracts a small sketch of informative genetic variation for each sample. Sketches from hundreds of germline or somatic samples can then be compared in under a second, making Somalier a useful tool for measuring relatedness in large cohorts. Somalier produces both text output and an interactive visual report that facilitates the detection and correction of sample swaps using multiple relatedness metrics. We introduce the tool and demonstrate its utility on a cohort of five glioma samples each with a normal, tumor, and cell-free DNA sample. Applying Somalier to high-coverage sequence data from the 1000 Genomes Project also identifies several related samples. We also demonstrate that it can distinguish pairs of whole-genome and RNA-seq samples from the same individuals in the Genotype-Tissue Expression (GTEx) project. Somalier is a tool that can rapidly evaluate relatedness from sequencing data. It can be applied to diverse sequencing data types and genome builds and is available under an MIT license at github.com/brentp/somalier. DNA sequencing data from matched tumor-normal pairs are critical for the detection of somatic variation in cancer studies. However, a sample swap leads to a dramatic increase in the apparent number of somatic variants, confounds the genetic analysis of the tumor, and the probability of such a mix-up increases with the size of the study cohort. The correction for sample mix-ups, possibly a swap with another sample in the same study, requires a thorough evaluation of the coefficient of relationship (henceforth "relatedness") among the entire set of samples, as measured by the similarity of their genotypes at polymorphic loci. This is not possible directly on the somatic mutation predictions because somatic variants are typically detected from comparisons of tumor-normal pairs, and often, only somatic (not germline) variants are reported [1]. Therefore, resolution of the sample swap would require the researcher to jointly call germline variants with a tool like GATK [2] and then use methods such as peddy [3] or KING [4] to calculate relatedness across the entire set of samples. Joint variant calling is time and resource intensive, especially when all that is needed to resolve the sample swap is an accurate calculation of relatedness among the samples. 
After experiencing this inconvenience in our own research, we developed Somalier to quickly and accurately compute relatedness by extracting "sketches" of variant information directly from alignments (BAM or CRAM) or from variant call format (VCF) [5] files including genomic VCFs (GVCF). Somalier extracts a sketch for each sample and the sketches are then compared to evaluate all possible pairwise relationships among the samples. This setup mitigates the "N+1 problem" by allowing users to add new sketches as needed and efficiently compare them to an existing set of background samples. The text and visual output facilitates the detection and correction of sample swaps, even in cases where there is severe loss of heterozygosity. It can be used on any organism across diverse sequencing data types and, given a set of carefully selected sites, across genome builds. We show that Somalier produces similar kinship estimates to KING [4] in much less time and that it produces reliable measures across tissue types and when comparing DNA samples, RNA-seq samples, and DNA to RNA-seq samples. Selecting and extracting informative variant sites We have previously shown that using as few as 5000 carefully chosen polymorphic loci is sufficient for relatedness estimation and that this subset of informative loci yields more accurate estimates than using all available variants [3]. A similar site-selection strategy is also used in Conpair to estimate contamination [6]. In Somalier, we utilize the observation that the optimal sites for detecting relatedness are high-quality, unlinked sites with a population allele frequency of around 0.5. A balanced allele frequency maximizes the probability that any 2 unrelated samples will differ. We distribute a set of informative sites to be queried by Somalier, though users may also create their own sites files tailored to their application. The sites are high-frequency single-nucleotide variants selected from gnomAD [7] exomes that exclude segmental duplication and low-complexity regions [8]. We also distribute a set of sites limited to exons that are frequently (> 10 reads in at least 40% of samples) expressed in GTeX for use in cohorts with RNA-seq data. To minimize genotyping error, variants with nearby insertions or deletions are excluded. In addition, we have excluded sites that are cytosines in the reference so that the tool can be used on bisulfite seq data, for example, to check the correspondence between bisulfite sequencing and RNA-seq data. The Somalier repository includes the code to create a set of sites for different organisms given a population VCF and a set of optional exclude regions. We distribute a default set of matched sites for both the GRCh37 and GRCh38 builds of the human reference genome. This allows a user to extract sites from a sample aligned to GRCh37 using our GRCh37 sites file and compare that sketch to a sketch created from a sample aligned to GRCh38 by extracting the sites in our GRCh38 file. This is convenient as labs move from GRCh37 to GRCh38 and future genome builds. The sites files include informative variants on the X and Y chromosomes so that Somalier can also estimate a sample's sex from the genotypes. However, only autosomal sites are used to estimate relatedness. With the default sites files, Somalier inspects 17,766 total sites (these are distributed with the Somalier software), all of which are chosen to be in coding sequence so that they are applicable to genome, exome, and RNA-seq datasets. 
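The actual sites files are generated by code in the Somalier repository; as a rough sketch of the kind of filtering described above, a population VCF such as gnomAD could be pre-filtered with cyvcf2 along these lines. The AF field name, thresholds, and file names are illustrative assumptions, and the indel-proximity and excluded-region checks are omitted here.

```python
from cyvcf2 import VCF, Writer

# Sketch of pre-filtering a large population VCF down to candidate sites.
vcf = VCF("gnomad.exomes.sites.vcf.gz")
out = Writer("candidate.sites.vcf.gz", vcf)

for v in vcf:
    if not v.is_snp or len(v.ALT) != 1:
        continue                 # bi-allelic single-nucleotide variants only
    if v.REF == "C":
        continue                 # skip reference cytosines (bisulfite compatibility)
    af = v.INFO.get("AF")
    if af is None or not 0.4 < af < 0.6:
        continue                 # keep sites with population frequency near 0.5
    out.write_record(v)

out.close()
```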
In order to quickly extract data from polymorphic sites into a genome sketch, Somalier uses the BAM or CRAM index to query each file at each of the informative sites. Alignments with a mapping quality of at least 1 that are not duplicates, supplementary, or failing quality control (according to the SAM flag) are used. Each passing alignment is evaluated at the requested position and the base in the alignment at that position is checked against the given reference and alternate for the query variant. This check considers the CIGAR operation [9] at that base which indicates insertions, deletions, and other events within the read. This is faster than a traditional sequence alignment "pileup" as it looks at each read only once and interrogates only the exact position in question. If a VCF (or BCF or GVCF) is provided instead of an alignment file, Somalier will extract the depth information for each sample for requested sites that are present in the VCF. The sketches extracted from a VCF are indistinguishable from those extracted from alignment files. In order to support single-sample VCFs, which do not contain calls where the individual is homozygous for the reference allele, the user may indicate that missing variants should be assumed to be homozygous for the reference allele. This also facilitates comparing multiple tumor-normal VCFs where many sites will not be shared (however, in those cases, it is preferable to extract the sketch from the alignment files rather than from the VCF). Somalier tallies reference and alternate counts for each site. Once all sites are collected, it writes a binary file containing the sample name and the allele counts collected at each of the inspected sites. For the set of sites distributed from the Somalier repository, a sketch file requires ~ 200 KB of space on disk or in memory. This sketch format and the speed of parsing and comparing sketch files are key strengths of Somalier. For example, since Somalier can complete a full analysis of 2504 sketches from the 1000 Genomes high-coverage whole-genome samples (Michael Zody, personal communication) in under 20 s, users can keep a pool of sample sketches to test against and check incoming samples against all previously sketched samples. Comparing sketches Thousands of sample sketches can be read into memory per second and compared. In order to calculate relatedness, Somalier converts the reference and alternate allele counts stored for each sample at each site into a genotype. The genotype is determined to be unknown if the depth is less than the user-specified value (default of 7), homozygous reference if the allele balance (i.e., alt-count/[ref-count + alt-count]) is less than 0.02, heterozygous if the allele balance is between 0.2 and 0.8, homozygous alternate if the allele balance is above 0.98, and unknown otherwise (Fig. 1a). A flag can amend these rules such that missing sites (with depth of 0) are treated as homozygous reference, rather than unknown. While simple, this heuristic genotyping works well in practice and is extremely fast, because Somalier looks only at single-nucleotide variants in non-repetitive regions of the genome. As the sample is processed, Somalier also collects information on depth, mean allele-balance, number of reference, heterozygous, and homozygous alternate calls for each sample, along with similar stats for the X and Y chromosomes. These data are used to calculate per-sample quality control metrics. 
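A minimal sketch of the genotyping heuristic described above, following the stated depth and allele-balance cutoffs (this mirrors the description in the text rather than the Nim source):

```python
def call_genotype(ref_count: int, alt_count: int, min_depth: int = 7) -> str:
    """Heuristic genotype call from per-site allele counts."""
    depth = ref_count + alt_count
    if depth < min_depth:
        return "unknown"
    ab = alt_count / depth          # allele balance
    if ab < 0.02:
        return "hom_ref"
    if 0.2 <= ab <= 0.8:
        return "het"
    if ab > 0.98:
        return "hom_alt"
    return "unknown"                # ambiguous allele balance (e.g. 0.02-0.2)

# 30 reference reads and 28 alternate reads -> heterozygous
assert call_genotype(30, 28) == "het"
# only 3 reads total -> unknown at the default minimum depth of 7
assert call_genotype(2, 1) == "unknown"
```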
In order to measure relatedness, the data collected for each sample are converted into a data structure consisting of hom_ref, het, and hom_alt bit vectors (Fig. 1b). The bit vectors consist of 64-bit integers, enabling Somalier to store 64 variants per integer. There are 17,384 autosomal sites in the default sites file used by Somalier, consuming only 6519 bytes per sample (17,384 sites × 3 bit vectors ÷ 8 bits per byte). With this data layout, Somalier can represent all 2504 samples from the 1000 Genomes Project in under 17 megabytes of memory. This simple data structure also facilitates rapid pairwise comparisons (Fig. 1c); for example, we can compute IBS0 (that is, "identity-by-state 0", or sites where zero alleles are shared between two samples A and B) with the following logic, which evaluates 64 sites in parallel:

(A.hom_ref and B.hom_alt) or (B.hom_ref and A.hom_alt)

Fig. 1. Comparing genotype sketches to compute relatedness measures for pairs of samples. a Observed counts for the reference (Ref.) and alternate (Alt.) allele at each of the tested 17,766 loci are converted into genotypes (see main text for details) to create a "sketch" for each sample. b The genotypes for each sample are then converted into three bit vectors: one for homozygous reference (HOMREF) genotypes, one for heterozygous (HET) genotypes, and one for homozygous alternate (HOMALT) genotypes. The length of each vector is the total number of autosomal variants in the sketch (i.e., 17,384) divided by 64, and the value for each bit is set to 1 if the sample has the particular genotype at the given variant site. For example, four variant sites are shown in b and the hypothetical individual has a homozygous alternate genotype for the second variant (the corresponding bit is set to 1), but is not homozygous for the alternate allele at the other three variant sites (the corresponding bits are set to 0). c The bit vectors for a pair of samples can be easily compared to calculate relatedness measures such as identity-by-state zero (IBS0, where zero alleles are shared between two samples) through efficient, bitwise operations on the bit arrays for the relevant genotypes.

We repeat this for each of the 272 (17,384 autosomal sites / 64 sites per entry) entries in the array to assess all of the genome-wide sites for each pair of samples. In fact, we do not need the sites themselves, just the count of sites that are IBS0. Therefore, we use the popcount (i.e., the count of bits that are set to TRUE) hardware instruction to count the total number of bits where the expression is non-zero in order to get the total count of IBS0 sites between the two samples. In addition to IBS0, we calculate counts of IBS2, where both samples have the same genotype; shared heterozygotes (both are heterozygotes); shared homozygous alternates; and heterozygous sites for each sample. All of these operations are extremely fast, as they do not require code branching via, for example, conditional logic; instead, the calculations are all conducted with bitwise operations.
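A minimal Python sketch of this layout, packing genotype classes into integers and using a popcount to tally IBS0 and shared heterozygotes; this illustrates the idea rather than the Nim implementation, which operates on arrays of 64-bit words:

```python
def pack(genotypes):
    """Pack a list of genotype strings into three bit sets."""
    hom_ref = het = hom_alt = 0
    for i, g in enumerate(genotypes):      # g in {"hom_ref", "het", "hom_alt", "unknown"}
        if g == "hom_ref":
            hom_ref |= 1 << i
        elif g == "het":
            het |= 1 << i
        elif g == "hom_alt":
            hom_alt |= 1 << i
    return hom_ref, het, hom_alt

def ibs0(a, b):
    # sites where one sample is hom-ref and the other hom-alt
    x = (a[0] & b[2]) | (b[0] & a[2])
    return bin(x).count("1")               # popcount

def shared_hets(a, b):
    return bin(a[1] & b[1]).count("1")

a = pack(["hom_ref", "het", "hom_alt", "het"])
b = pack(["hom_alt", "het", "hom_alt", "hom_ref"])
print(ibs0(a, b), shared_hets(a, b))       # -> 1 1
```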
Once those metrics are calculated, the relatedness between sample i and sample j is calculated as:

$$ \mathrm{relatedness}(i, j) = \frac{\mathrm{shared\_hets}(i, j) - 2 \cdot \mathrm{ibs0}(i, j)}{\min\left(\mathrm{hets}(i), \mathrm{hets}(j)\right)} $$

where hets is the count of heterozygote calls per sample out of the assayed sites. This metric is derived by Manichaikul et al. [4]. In addition, the homozygous concordance rate is reported as:

$$ \mathrm{concordance}(i, j) = \frac{\mathrm{shared\_hom\_alts}(i, j) - 2 \cdot \mathrm{ibs0}(i, j)}{\min\left(\mathrm{hom\_alts}(i), \mathrm{hom\_alts}(j)\right)} $$

This measure is similar to the one described in HYSYS [10] except that the HYSYS measure is simply:

$$ \frac{\mathrm{shared\_hom\_alts}(i, j)}{\min\left(\mathrm{hom\_alts}(i), \mathrm{hom\_alts}(j)\right)} $$

Our formulation has the benefit that it matches the scale and interpretation of the relatedness estimate; unrelated individuals will have a concordance of around 0, whereas in HYSYS they will have a value around 0.5. This is a useful relatedness metric when severe loss of heterozygosity removes many heterozygous calls from the tumor sample, making the traditional relatedness calculation inaccurate. If a pedigree file is given, Wright's method of path coefficients [11] is used to calculate the expected relatedness. These values can then be compared to the relatedness observed from the genotypes. For somatic samples, the user can also specify a "groups" file where sample identifiers appearing on the same line are expected to be identical; for example, three biopsies from each of two individuals would appear as three comma-separated sample identifiers on two separate lines. Finally, the output is reported both as text and as an interactive HTML page. When using the webpage, the user can toggle which relatedness metrics (IBS0, IBS2, relatedness, homozygous concordance, shared heterozygotes, shared homozygous alternates) to plot for the X and Y coordinates and, if expected groups were given (e.g., tumor-normal pairs) on the command line, points are colored according to their expected relatedness. This setup means that points of similar colors should cluster together. In addition, Somalier plots the per-sample output in a separate plot with selectable axes; this functionality allows one to evaluate predicted vs. reported sex and depth across samples. Somalier requires htslib (https://htslib.org). It is written in the Nim programming language (https://nim-lang.org), which compiles to C, and also utilizes our hts-nim [12] library. It is distributed as a static binary, and the source code is available at https://github.com/brentp/somalier under an academic license.
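As a sanity check on the two estimators above, a minimal Python version follows; the tallies are invented values chosen only to show the expected behavior for an unrelated pair and for a parent-child pair.

```python
def relatedness(shared_hets, ibs0, hets_i, hets_j):
    # (shared-hets(i,j) - 2*ibs0(i,j)) / min(hets(i), hets(j))
    return (shared_hets - 2 * ibs0) / min(hets_i, hets_j)

def hom_concordance(shared_hom_alts, ibs0, hom_alts_i, hom_alts_j):
    # (shared-hom-alts(i,j) - 2*ibs0(i,j)) / min(hom-alts(i), hom-alts(j))
    return (shared_hom_alts - 2 * ibs0) / min(hom_alts_i, hom_alts_j)

# Unrelated pair: shared heterozygotes are roughly balanced by twice the
# IBS0 count, so the estimate lands near 0.
print(round(relatedness(shared_hets=1500, ibs0=750, hets_i=3000, hets_j=3100), 2))  # ~0.0
# Parent-child pair: IBS0 is ~0, so the estimate approaches the shared-het fraction.
print(round(relatedness(shared_hets=1600, ibs0=0, hets_i=3200, hets_j=3300), 2))    # ~0.5
```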
Glioma patients with 3 samples

We ran Somalier on BAM alignment files from five individuals, each with three assays: a normal sample, a glioma tumor sample, and cell-free DNA, for a total of 15 samples [13]. The extraction step, which creates the genome sketch and can be parallelized by sample, requires roughly three minutes per sample with a single CPU. Once extracted, the relate step, which computes the relatedness measures for each sample pair, requires less than 1 s. Somalier was able to clearly group samples using the default sites provided with the software (Fig. 2). Because the site selection is so strict, none of the sample pairs from the same individual had an IBS0 metric above 0, indicating that those sites are genotyped correctly. The user can specify expected groups of samples (e.g., from the same individual), with sample pairs expected to be identical colored orange. With this layout, which colors sample pairs by expected relatedness and positions them by observed relatedness (as computed from the genotypes estimated from the alignments), it is simple for the researcher to quickly spot problems. For example, Fig. 2a illustrates an obvious mix-up where samples expected to be unrelated have a high IBS2 and low IBS0. Since the plot is interactive, the user can then hover over points that appear out of place (in this example, the green points that cluster with the orange) to learn which samples are involved. After correcting the sample manifest based on this observation and re-running the relatedness calculation, the updated plot shows that all samples cluster as expected given their relatedness (Fig. 2b).

Fig. 2. Glioma samples before and after correction. a A comparison of the IBS0 (number of sites where one sample is homozygous reference and another is homozygous alternate) and IBS2 (count of sites where samples have the same genotype) metrics for 15 samples. Each point is a pair of samples. Points are positioned by the values calculated from the alignment files (observed relatedness) and colored by whether they are expected to be identical (expected relatedness), as indicated from the command line. In this case, sample swaps are visible as orange points that cluster with green points, and vice versa. The user is able to hover on each point to see the sample pair involved and to change the X and Y axes to any of the metrics calculated by Somalier. b An updated version of the plot in a after the sample identities have been corrected in the manifest (per the information provided by a) and Somalier has been re-run.

1000 Genomes high-coverage samples

In order to evaluate the scalability and accuracy of Somalier, we used the recently released high-coverage data from 2504 samples in the 1000 Genomes Project [14]. We extracted sites for all 2504 samples from the jointly called VCF. After extracting sketches, comparing each sample against all other samples (a total of 3,133,756 = 2504 × 2503 / 2 comparisons) required merely 6 s, following 1.1 s to parse the sketches and roughly 2 s to write the output. Although the 1000 Genomes Project provides a pedigree file, none of the samples included in the 2504 are indicated to be related by that file. However, using Somalier, we found 8 apparent parent-child pairs (NA19904-NA19913, NA20320-NA20321, NA20317-NA20318, NA20359-NA20362, NA20334-NA20335, HG03750-HG03754, NA20882-NA20900, NA20881-NA20900), 4 full-sibling pairs (HG02429-HG02479, NA19331-NA19334, HG03733-HG038899, HG03873-HG03998), and 3 second-degree relatives (NA19027-NA19042, NA19625-NA20274, NA21109-NA21135) (Fig. 3). These same sample pairs also have higher values (as expected) for homozygous concordance. In addition, there are several pairs of samples with a coefficient of relatedness between 0.1 and 0.2 that appear to be more distantly related.
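A rough rule of thumb for labeling the clusters that these two metrics separate is sketched below; the cutoffs are illustrative assumptions rather than thresholds applied by Somalier, which leaves interpretation of the plots to the user.

```python
def classify_pair(rel, ibs0):
    """Illustrative labels for the clusters seen in these data: duplicates sit
    near relatedness 1, first-degree relatives near 0.5 (parent-child pairs
    with IBS0 ~0, siblings with IBS0 > 0), second-degree near 0.25."""
    if rel > 0.8:
        return "identical/duplicate"
    if rel > 0.38:
        return "parent-child" if ibs0 < 10 else "full siblings"
    if rel > 0.18:
        return "second degree"
    if rel > 0.1:
        return "more distant"
    return "unrelated"

print(classify_pair(0.49, 2))    # parent-child
print(classify_pair(0.50, 300))  # full siblings
print(classify_pair(0.23, 600))  # second degree
```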
An earlier analysis on a different subset of the 1000 Genomes samples uncovered some of these same unreported relationships [15]. Relatedness plot for thousand genomes samples. Each dot represents a pair of samples. IBS0 on the x-axis is the number of sites where 1 sample is homozygous for the reference allele and the other is homozygous for the alternate allele. IBS2, on the y-axis, is the count of sites where a pair of samples were both homozygous or both heterozygous. Points with IBS0 of 0 are parent-child pairs. The 4 points with IBS0 > 0 and IBS0 < 450 are siblings. There are also several more distantly related sample pairs We also note that several samples indicated to be female in the manifest appear to have lost an X chromosome as they have lower depth and no heterozygous sites (Fig. 4a). However, they also lack coverage on the Y chromosome (Fig. 4); as such, we think that loss of X in these cell-line samples is more likely than a sample swap or manifest error. Finally, Somalier also provides other sample metrics including mean depth, counts of each genotype, mean allele balance, and others that are useful for sample quality control. The user can customize the visualization on the interactive web page by choosing which metrics to display on the X and Y axes. Sex quality control on thousand genomes samples. Each point is a sample colored as orange if the sample is indicated as female and green if it is indicated as male; all data is for the X chromosome. a The number of homozygous alternate sites on the x-axis and the number of heterozygous sites on the y-axis. Males and females separate with few exceptions. b The number of homozygous alternate sites on the x-axis compared to the mean depth on the Y chromosome. Males and females reported in the manifest separate perfectly, indicating that some females may have experienced a complete loss of the X chromosome Comparison to KING In order to evaluate the accuracy and speed of Somalier, we compared its performance to KING [4]. Since KING also has an extraction-like step, in converting VCF to PLINK's [16] binary format, we partition the timing into distinct steps for data extraction ("extract") and computation of relatedness ("relate"). We used KING version 2.2.4 and plink2 version 1.90p. We compared the speed and output of Somalier to that of KING on the 2504 thousand genomes, high-coverage VCF. Somalier is more flexible as it can be applied to VCF, BAMs, GVCFs, etc., but it is also faster both at extraction (which will only be done once) and at the relate step which can be repeated each time new samples are added (Table 1). Much of the speed improvement observed in Somalier comes from the sketch, which contains only a small subset of sites on the genome. Furthermore, kinship estimates from KING and Somalier are very similar (Additional file 1: Fig. S1). Table 1 Speed comparison to KING. The extract step consists of conversion to a sketch for Somalier and of conversion to a plink binary bed file for KING. The relate step is the time spent measuring kinship between all pairs of samples. Times shown reflect the wall time required for completion Evaluation on GTeX RNA-seq and whole-genome seq In order to show that Somalier can be used to find and verify sample identity across sequencing experiments and tissues, we applied it to a set of data from the GTeX project [17]. We utilized 216 samples with WGS, RNA-seq from skin (not sun exposed), and RNA-seq from blood. 
We expect each of these 648 assays to have a relatedness of 1 to the two other samples from the same GTEx individual. We used the per-exon, per-sample expression levels to create a set of sites that have relatively high allele frequency in gnomAD and are commonly expressed (> 10 reads in 40% of samples). These sites are distributed at the Somalier repository and include 16,469 autosomal sites and 794 sites on the X chromosome. We found that with a relatedness cutoff of 0.5, which enforces that pairs of samples below this threshold are not from the same individual, we are able to correctly classify every sample pair (out of 209,628 pairs). In order to show the specificity of Somalier with a smaller number of sites, we ran Somalier with 10, 20, 40, 100, 200, 400, 1000, 2000, 4000, 8000, and 16,000 of the original 16,469 autosomal sites and demonstrate that we are able to correctly classify all pairs with as few as 400 of the original sites (Additional file 1: Fig. S2). If we instead require that unrelated samples have a calculated relatedness of less than 0.2 and related samples have greater than 0.8, then at least 1000 sites are required to reduce the false-positive rate (where unrelated samples are classified as related because they have a relatedness > 0.2) to under 0.05 (Additional file 2). Further, on inspection of the interactive Somalier plots (Additional file 3), it is clear that the false positives are driven by a few low-quality samples, each of which is involved in 647 pairs. If those were removed, the false-positive rates would drop.

We have introduced Somalier to efficiently detect sample swaps and mismatched samples in diverse DNA and RNA sequencing projects. On a set of 15 samples, we were able to detect and correct sample swaps using the text and HTML output from Somalier, which ran in less than a second. In addition, Somalier can be used to provide an accurate relatedness estimate using homozygous concordance even under severe loss of heterozygosity. We have designed it to measure relatedness very quickly despite assaying the alignments directly, and we have shown that using a carefully selected set of sites facilitates accurate separation of related from unrelated samples even on a small gene panel. We have carefully selected the sites assayed by Somalier to minimize sequencing artifacts and variant calling errors. In addition, we distribute a set of sites for genome build GRCh37 that is compatible with genome build GRCh38. Because the two site sets are identical, we can compare samples aligned to either genome build. This becomes more important as research groups switch to GRCh38. In fact, in comparing the recently released high-coverage 1000 Genomes samples (aligned to GRCh38) to the Simons Genome Diversity Project samples [18] (aligned to GRCh37), we found several samples shared between these projects. To our knowledge, this has not been previously reported. These findings highlight the utility and novelty of Somalier, as it enables comparisons across large cohorts. Previous tools such as peddy provide similar functionality when a jointly called, germline VCF is provided. However, that is often not practical for cancer studies. In addition, HYSYS can detect sample swaps in cancer samples using homozygous concordance; however, it requires a custom text format which reports germline variants that have already been called across all samples. The sketch format used by Somalier is a simple binary format.
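The binary layout of those sketch files is not reproduced here. Assuming the per-sample genotype calls had already been parsed into a samples-by-sites matrix (a purely hypothetical array below), a minimal Python sketch of the kind of principal-components projection mentioned next might look like this:

    import numpy as np

    # hypothetical genotype matrix: rows = samples, columns = sites, values 0/1/2
    rng = np.random.default_rng(1)
    G = rng.integers(0, 3, size=(100, 5000)).astype(float)

    # centre each site, then project samples onto the leading principal components
    Gc = G - G.mean(axis=0)
    U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
    pcs = U[:, :2] * S[:2]   # first two PCs per sample, e.g. for an ancestry plot
    print(pcs.shape)         # (100, 2)

This is only a sketch of the downstream use case; the repository example referred to below is the authoritative reference for reading the actual format.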
We provide an example in the repository that demonstrates reading the data in a simple Python script and performing ancestry estimation using principal components analysis. While Somalier can also utilize any number of VCF files as input, we expect that the simplicity and speed of using alignment files will make that the most frequent mode of use.

We have introduced Somalier, a tool to rapidly evaluate relatedness from sequencing data in BAM, CRAM, or VCF formats. We show that it works across tissue types and to compare RNA-seq data to WGS. It is fast and simple to use, and it simplifies analyses, such as comparison across cohorts and genome builds, that were previously difficult or not feasible.

Availability and requirements
Project name: Somalier
Project home page: https://github.com/brentp/somalier
Operating systems: Linux, OSX, Windows (a static binary is provided for Linux systems; users on other OSes can build the tool)
Programming language: Nim

The 1000 Genomes data [14] was downloaded from: http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/1000G_2504_high_coverage/working/20190425_NYGC_GATK/. The data used for the GTEx analyses described in this manuscript were analyzed on the Terra platform on April 24, 2020.

All procedures related to acquisition and use of data from glioma [13] patients were approved by the University of Utah Institutional Review Board prior to study initiation (protocol #10924) and conform to the principles of the Helsinki Declaration. All glioma patients provided written informed consent. Due to the limitations imposed by the University of Utah Institutional Review Board (IRB), only a subset of the human sequencing data presented in this study can be made publicly available in a data repository. All glioma data was acquired between January 2016 and December 2016 under IRB #10924. All sequencing data acquired under IRB #10924 after January 1, 2015, and before August 23, 2017, cannot be shared in a public repository unless the patient is deceased. Bam files from three glioma patients are available in the NCBI Sequence Read Archive database under accession PRJNA641696 [13]. For those data that are unable to be shared via repository, please contact Ann Johnson ([email protected]), Director of University of Utah IRB, to request access to the data.

Cibulskis K, et al. Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nat Biotechnol. 2013;31:213–9.
McKenna A, et al. The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20:1297–303.
Pedersen BS, Quinlan AR. Who's who? Detecting and resolving sample anomalies in human DNA sequencing studies with peddy. Am J Hum Genet. 2017. https://doi.org/10.1016/j.ajhg.2017.01.017.
Manichaikul A, et al. Robust relationship inference in genome-wide association studies. Bioinformatics. 2010;26:2867–73.
Danecek P, et al. The variant call format and VCFtools. Bioinformatics. 2011;27:2156–8.
Bergmann EA, Chen B-J, Arora K, Vacic V, Zody MC. Conpair: concordance and contamination estimator for matched tumor-normal pairs. Bioinformatics. 2016;32:3196–8.
Lek M, et al. Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016;536:285–91.
Li H. Toward better understanding of artifacts in variant calling from high-coverage samples. Bioinformatics. 2014;30:2843–51.
Li H, et al. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25:2078–9.
Schröder J, Corbin V, Papenfuss AT.
HYSYS: have you swapped your samples? Bioinformatics. 2017;33:596–8.
The method of path coefficients. JSTOR. https://www.jstor.org/stable/2957502. Accessed 15 Feb 2020.
Pedersen BS, Quinlan AR. hts-nim: scripting high-performance genomic analyses. Bioinformatics. 2018;34:3387–9.
BioProject PRJNA641696. NCBI. https://www.ncbi.nlm.nih.gov/bioproject/PRJNA641696.
Personal communication, M. Z.; data at http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/1000G_2504_high_coverage/.
Roslin NM, Weili L, Paterson AD, Strug LJ. Quality control analysis of the 1000 Genomes Project Omni2.5 genotypes. bioRxiv. 2016:078600. https://doi.org/10.1101/078600.
Chang CC, et al. Second-generation PLINK: rising to the challenge of larger and richer datasets. Gigascience. 2015;4:7.
Lonsdale J, et al. The Genotype-Tissue Expression (GTEx) project. Nat Genet. 2013;45:580–5.
Mallick S, et al. The Simons Genome Diversity Project: 300 genomes from 142 diverse populations. Nature. 2016;538:201–6.

We thank Wayne Clarke for noting that the anomaly in the 1000 Genomes samples was loss of chromosome X rather than a sex swap. Thanks to several early users of Somalier for valuable feedback. Thanks to the New York Genome Center for making the high-coverage 1000 Genomes data available. The Genotype-Tissue Expression (GTEx) Project was supported by the Common Fund of the Office of the Director of the National Institutes of Health, and by NCI, NHGRI, NHLBI, NIDA, NIMH, and NINDS.

The development of Somalier was supported by an NIH award to Base2 Genomics, LLC (R41HG010126), an NIH award to ARQ (R01HG009141), an NIH award to ARQ and GTM (U24CA209999), and an NCI award to HRU (R37CA246183). This study was conducted with support from the Biorepository and Molecular Pathology Shared Resource and the Cancer Biostatistics Shared Resource supported by the Cancer Center Support Grant awarded to the Huntsman Cancer Institute by the National Cancer Institute (P30CA04014).

Department of Human Genetics, University of Utah, 15 S 2030 E, Salt Lake City, UT, 84112, USA: Brent S. Pedersen, Preetida J. Bhetariya, Joe Brown, Stephanie N. Kravitz, Gabor Marth & Aaron R. Quinlan
Base2 Genomics, LLC, Salt Lake City, UT, 84105, USA: Brent S. Pedersen, Joe Brown & Aaron R. Quinlan
Department of Neurosurgery, Radiation Oncology and Oncological Sciences, Huntsman Cancer Institute, University of Utah, 5th Floor CNC, 175 North Medical Drive, Salt Lake City, UT, 84132, USA: Randy L. Jensen
Department of Pathology, Huntsman Cancer Institute, University of Utah, 15 S 2030 E, Salt Lake City, UT, 84112, USA: Mary P. Bronner
Division of Medical Genetics, Department of Pediatrics, Department of Radiology, University of Utah, 15 S 2030 E, Salt Lake City, UT, 84112, USA: Hunter R. Underhill
Department of Biomedical Informatics, University of Utah, 421 Wakara Way #140, Salt Lake City, UT, 84108, USA: Aaron R. Quinlan

BSP conceived and wrote the software and wrote the manuscript. ARQ wrote the manuscript. PJB provided insight that led to development of the software. JB wrote visualization code used in the software. All authors read and approved the manuscript. Correspondence to Brent S. Pedersen or Aaron R. Quinlan.

All procedures related to acquisition and use of data from glioma patients were approved by the University of Utah Institutional Review Board prior to study initiation (protocol #10924) and conform to the principles of the Helsinki Declaration.
All glioma patients provided written informed consent. Brent S. Pedersen and Aaron R. Quinlan are co-founders of Base2 Genomics. The remaining authors declare that they have no competing interests.

Additional file 1: Supplementary Fig. 1. Comparison of the KING estimate of kinship to that of Somalier. Supplementary Fig. 2. Evaluation of the false-positive rate of Somalier as the number of assayed sites is varied.
Additional file 2: Subset analysis for GTEx data.
Additional file 3: HTML output for the GTEx analysis.

Pedersen, B.S., Bhetariya, P.J., Brown, J. et al. Somalier: rapid relatedness estimation for cancer and germline studies using efficient genome sketches. Genome Med 12, 62 (2020). https://doi.org/10.1186/s13073-020-00761-2
${{\widetilde{\boldsymbol q}}}$ (Squark) mass limit

For ${\mathit m}_{{{\widetilde{\mathit q}}}}$ $>$ 60$-$70 GeV, it is expected that squarks would undergo a cascade decay via a number of neutralinos and/or charginos rather than undergo a direct decay to photinos as assumed by some papers. Limits obtained when direct decay is assumed are usually higher than limits when cascade decays are included. Limits from ${{\mathit e}^{+}}{{\mathit e}^{-}}$ collisions depend on the mixing angle of the lightest mass eigenstate ${{\widetilde{\mathit q}}_{{1}}} = {{\widetilde{\mathit q}}_{{R}}}\sin\theta_{{\mathit q}} + {{\widetilde{\mathit q}}_{{L}}}\cos\theta_{{\mathit q}}$. It is usually assumed that only the sbottom and stop squarks have non-trivial mixing angles (see the stop and sbottom sections). Here, unless otherwise noted, squarks are always taken to be either left/right degenerate, or purely of left or right type. Data from ${{\mathit Z}}$ decays have set squark mass limits above 40 GeV, in the case of ${{\widetilde{\mathit q}}}$ $\rightarrow$ ${{\mathit q}}{{\widetilde{\mathit \chi}}_{{1}}}$ decays if $\Delta \mathit m = {\mathit m}_{{{\widetilde{\mathit q}}}} - {\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}} \gtrsim 5$ GeV. For smaller values of $\Delta \mathit m$, current constraints on the invisible width of the ${{\mathit Z}}$ ($\Delta \Gamma _{{\mathrm {inv}}} < 2.0$ MeV, LEP 2000) exclude ${\mathit m}_{{{\widetilde{\mathit u}}_{{L,R}}}} < 44$ GeV, ${\mathit m}_{{{\widetilde{\mathit d}}_{{R}}}} < 33$ GeV, ${\mathit m}_{{{\widetilde{\mathit d}}_{{L}}}} < 44$ GeV and, assuming all squarks degenerate, ${\mathit m}_{{{\widetilde{\mathit q}}}} < 45$ GeV. Some earlier papers are now obsolete and have been omitted. They were last listed in our PDG 2014 edition: K. Olive et al. (Particle Data Group), Chinese Physics C38 070001 (2014) (http://pdg.lbl.gov).

R-parity violating ${{\widetilde{\boldsymbol q}}}$ (Squark) mass limit

VALUE (GeV) | CL% | DOCUMENT ID | TECN | COMMENT
none 100 - 720 | 95 | 1 SIRUNYAN 2018EA | CMS | 2 large jets with four-parton substructure, ${{\widetilde{\mathit q}}}$ $\rightarrow$ 4${{\mathit q}}$
$\bf{> 1600}$ | 95 | 2 KHACHATRYAN 2016BX | CMS | ${{\widetilde{\mathit q}}}$ $\rightarrow$ ${{\mathit q}}{{\widetilde{\mathit \chi}}_{{1}}^{0}}$, ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit \ell}}{{\mathit \ell}}{{\mathit \nu}}$, ${{\mathit \lambda}_{{121}}}$ or ${{\mathit \lambda}_{{122}}}{}\not=$ 0, ${\mathit m}_{{{\widetilde{\mathit g}}}}$ = 2400 GeV
 | | 3 AAD 2015CB | ATLS | jets, ${{\widetilde{\mathit q}}}$ $\rightarrow$ ${{\mathit q}}{{\widetilde{\mathit \chi}}_{{1}}^{0}}$, ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit \ell}}{{\mathit q}}{{\mathit q}}$, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ = 108 GeV and 2.5 $<$ c$\tau _{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ $<$ 200 mm
 | | 4 AAD 2012AX | ATLS | ${{\mathit \ell}}$ + jets + $\not E_T$, CMSSM, ${\mathit m}_{{{\widetilde{\mathit q}}}}={\mathit m}_{{{\widetilde{\mathit g}}}}$
 | | 5 CHATRCHYAN 2012AL | CMS | ${}\geq{}3{{\mathit \ell}^{\pm}}$

1 SIRUNYAN 2018EA searched in 38.2 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for the pair production of resonances, each decaying to at least four quarks. Reconstructed particles are clustered into two large jets of similar mass, each consistent with four-parton substructure.
No statistically significant excess over the Standard Model expectation is observed. Limits are set on the squark and gluino mass in RPV supersymmetry models where squarks (gluinos) decay, through intermediate higgsinos, to four (five) quarks, see their Figure 4. 2 KHACHATRYAN 2016BX searched in 19.5 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV for events containing 4 leptons coming from R-parity-violating decays of ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit \ell}}{{\mathit \ell}}{{\mathit \nu}}$ with ${{\mathit \lambda}_{{121}}}{}\not=$ 0 or ${{\mathit \lambda}_{{122}}}{}\not=$ 0. No excess over the expected background is observed. Limits are derived on the gluino, squark and stop masses, see Fig. 23. 3 AAD 2015CB searched for events containing at least one long-lived particle that decays at a significant distance from its production point (displaced vertex, DV) into two leptons or into five or more charged particles in 20.3 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV. The dilepton signature is characterised by DV formed from at least two lepton candidates. Four different final states were considered for the multitrack signature, in which the DV must be accompanied by a high-transverse momentum muon or electron candidate that originates from the DV, jets or missing transverse momentum. No events were observed in any of the signal regions. Results were interpreted in SUSY scenarios involving $\mathit R$-parity violation, split supersymmetry, and gauge mediation. See their Fig. $14 - 20$. 4 AAD 2012AX searched in 1.04 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 7 TeV for supersymmetry in events containing jets, missing transverse momentum and one isolated electron or muon. No excess over the expected SM background is observed and model-independent limits are set on the cross section of new physics contributions to the signal regions. In mSUGRA/CMSSM models with tan ${{\mathit \beta}}$ = 10, ${{\mathit A}_{{0}}}$ = 0 and ${{\mathit \mu}}$ $>$ 0, squarks and gluinos of equal mass are excluded for masses below 820 GeV at 95$\%$ C.L. Limits are also set on simplified models for squark production and decay via an intermediate chargino and on supersymmetric models with bilinear R-parity violation. Supersedes AAD 2011G. 5 CHATRCHYAN 2012AL looked in 4.98 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 7 TeV for anomalous production of events with three or more isolated leptons. Limits on squark and gluino masses are set in RPV SUSY models with leptonic $\mathit LL\bar E$couplings, ${{\mathit \lambda}_{{123}}}$ $>$ 0.05, and hadronic $\bar U \bar D \bar D$ couplings, ${{\mathit \lambda}}{}^{''}_{112}$ $>$ 0.05 , see their Fig. 5. In the $\bar U \bar D \bar D$ case the leptons arise from supersymmetric cascade decays. A very specific supersymmetric spectrum is assumed. All decays are prompt. 
SIRUNYAN 2018EA PRL 121 141802 Search for pair-produced resonances each decaying into at least four quarks in proton-proton collisions at $\sqrt{s}=$ 13 TeV KHACHATRYAN 2016BX PR D94 112009 Searches for $\mathit R$-Parity-Violating Supersymmetry in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 8 TeV in Final States with $0 - 4$ Leptons AAD 2015CB PR D92 072004 Search for Massive, Long-Lived Particles using Multitrack Displaced Vertices or Displaced Lepton Pairs in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 8 TeV with the ATLAS Detector AAD 2012AX PR D85 012006 Search for Supersymmetry in Final States with Jets, Missing Transverse Momentum and One Isolated Lepton in $\sqrt {s }$ = 7 TeV ${{\mathit p}}{{\mathit p}}$ Collisions using 1 ${\mathrm {fb}}{}^{-1}$ of ATLAS data CHATRCHYAN 2012AL JHEP 1206 169 Search for Anomalous Production of Multilepton Events in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 7 TeV
Radiometric Correction and 3D Integration of Long-Range Ground-based Hyperspectral Imagery for Mineral Exploration of Vertical Outcrops
Lorenz, S.; Salehi, S.; Kirsch, M.; Zimmermann, R.; Unger, G.; Sørensen, E. V.; Gloaguen, R.
Recently, ground-based hyperspectral imaging has come to the fore, supporting the arduous task of mapping near-vertical, difficult-to-access geological outcrops. The application of outcrop sensing within a range of one to several hundred meters, including geometric corrections and integration with accurate terrestrial laser scanning models, is already developing rapidly. However, there are only very few studies dealing with ground-based imaging of distant (i.e., in the range of several kilometres) targets such as mountain ridges, cliffs, and pit walls. In particular, the extreme influence of atmospheric effects and topography-induced illumination differences on the spectral data has remained an unmet challenge. Those effects cannot be corrected by means of common correction tools for nadir satellite or airborne data. Thus, this article presents an adapted workflow to overcome the challenges of long-range outcrop sensing, including straightforward atmospheric and topographic corrections. Using two datasets with different characteristics, we demonstrate the application of the workflow and highlight the importance of the presented corrections for a reliable geological interpretation. The achieved spectral mapping products are integrated with 3D photogrammetric data to create large-scale, so-called "hyperclouds", i.e. geometrically correct representations of the hyperspectral datacube. The presented workflow opens up a new range of application possibilities of hyperspectral imagery by significantly enlarging the scale of ground-based measurements.
Keywords: hyperspectral; topographic correction; atmospheric correction; radiometric correction; long-range; long-distance; Structure from Motion (SfM); photogrammetry; mineral mapping; Minimum Wavelength mapping; Maarmorilik; Riotinto
Remote Sensing 10(2018)2, 176 DOI: 10.3390/rs10020176
Whispers Conference 9th Workshop on Hyperspectral Image and Signal Processing, 23.-26.09.2018, Amsterdam, Nederland

FLUKA simulations of the neutron flux in the Dresden Felsenkeller
Grieger, M.; Bemmerer, D.; Hensel, T.; Müller, S. E.; Zuber, K.
The Dresden Felsenkeller is a shallow-underground site featuring a rock overburden of 47 m which hosts a 5 MV Pelletron accelerator in tunnels VIII and IX. Using previous measurements in the low-background γ-counting facility in tunnel IV as a benchmark, a FLUKA simulation has been developed to predict the neutron flux in tunnels VIII and IX. The simulation results provide insight into local neutron field inhomogeneities caused by the measurement environment itself.
DPG Frühjahrstagung, 26.02.-02.03.2018, Bochum, Deutschland

Long-Wave Hyperspectral Imaging For Lithological Mapping: A Case Study
Lorenz, S.; Kirsch, M.; Zimmermann, R.; Tusa, L.; Möckel, R.; Chamberland, M.; Gloaguen, R.
Hyperspectral long-wave infrared imaging (LWIR HSI) adds a promising complement to visible, near infrared, and shortwave infrared (VNIR and SWIR) HSI data in the field of mineral mapping. It enables characterization of rock-forming minerals such as silicates and carbonates, which show no detectable or extremely weak features in VNIR and SWIR. In the last decades, there has been a steady increase of publications on satellite, aerial, and laboratory LWIR data.
However, the application of LWIR HSI for ground-based, close-range remote sensing of vertical geological outcrops is sparsely researched and will be the focus of the current study. We present a workflow for acquisition, mosaicking, and radiometric correction of LWIR HSI data. We demonstrate the applicability of this workflow using a case study from a gravel quarry in Germany. Library spectra are used for spectral unmixing and mapping of the main lithological units, which are validated using sample X-ray diffraction (XRD) and thin section analysis as well as FTIR point spectrometer data.
Keywords: long wave infrared; thermal infrared; mineral mapping; hyperspectral
International Geoscience and Remote Sensing Symposium, IGARSS 2018, 23.-27.07.2018, Valencia, España

FLUKA simulations of neutron transport in the Dresden Felsenkeller
Grieger, M.; Bemmerer, D.; Müller, S. E.; Szücs, T.; Zuber, K.
A new underground ion accelerator with 5 MV acceleration potential is currently being readied for installation in the Dresden Felsenkeller. The Felsenkeller site consists of altogether nine mutually connected tunnels. It is shielded from cosmic radiation by a 45 m thick rock overburden, enabling uniquely sensitive experiments. In order to exclude any possible effect by the new accelerator in tunnel VIII on the existing low-background gamma-counting facility in tunnel IV, Monte Carlo simulations of neutron transport are being performed. A realistic neutron source field is developed, and the resulting additional neutron flux at the gamma-counting facility is modeled by FLUKA simulations. – Supported by NAVI (HGF VH-VI-417).
DPG Frühjahrstagung, 23.-27.03.2015, Heidelberg, Deutschland

Strahlenschutzrechnungen für den Untertage-Ionenbeschleuniger am Standort Felsenkeller (Radiation protection calculations for the underground ion accelerator at the Felsenkeller site)
Grieger, M.
Low natural background radiation is of great importance for the study of burning processes in stars. Detailed FLUKA simulations were performed for several scenarios to study the additional radiation generated by the new 5 MV Pelletron accelerator, with the aim of optimizing the required shielding.
Lange Nacht der Wissenschaften Dresden, 16.06.2017, Dresden, Deutschland

Flue dust from the copper converter process - Recovery of Cu and In by solvent extraction
Rädecker, P.; Scharf, C.; Zürner, P.; Frisch, G.; Pieplow, G.; Lindner, D.; Koch, J.
Flue dusts from copper metallurgy are resources for base metals such as copper, zinc, tin or lead. However, there is also a potential for the recovery of strategic elements like indium. At the moment flue dusts are recirculated within the copper process, but residues are available from historic production processes. Hydrometallurgical processes seem to be a promising method to recover the base and strategic metals from these fine-grained flue dusts. In preparation for further processing, the secondary material was characterized by mineral liberation analysis (MLA) and electron probe micro analysis (EPMA). An iron- and zinc-rich spinel phase (Zn,Fe,Mn)(Fe,Mn)2O4 was detected as the main phase (76 wt.-%) in the flue dust. The chemical composition of the flue dust was analysed by X-ray fluorescence spectroscopy (XRF spectroscopy). Leaching by sulfuric acid leads to precipitation of lead sulphate and calcium sulphate, which remain in the residue. The level of some impurities in the solution can be controlled. This work focuses on the selective recovery of copper and indium by solvent extraction from the leaching solution.
Preliminary synthetic solutions of copper, iron(III), zinc, indium and mixtures of them were used for the investigations. The influence of pH value, concentration of acidic extractants, extraction time, and phase ratio on the extraction of copper, iron(III), zinc, and indium were studied. The results of the selective extraction of copper in the presence of iron(III) will be presented.
European Metallurgical Conference EMC2017, 25.-28.06.2017, Leipzig, Deutschland
Proceedings of the EMC2017, Clausthal-Zellerfeld: GDMB Verlag GmbH, 978-3-940276-74-2, 1121-1132
European Metallurgical Conference EMC 2017, 25.-28.06.2017, Leipzig, Deutschland

The Natural Neutron Background Underground: Measurement Using Moderated ³He Counters in Felsenkeller
A FLUKA simulation has been made to analyse the propagation of neutrons from the future Felsenkeller accelerator throughout the tunnel system of Felsenkeller. A neutron flux measurement with detectors loaned by the BELEN collaboration has been performed to put the accelerator-induced neutron flux into perspective with the natural neutron background. To deduce the neutron flux from the counting rates, FLUKA has been used to simulate the detector responses. The three measured locations in the existing γ-measurement facility in tunnel IV show different traits in their neutron spectra. Their cause has been analysed, and the results had an impact on the construction planning of the new Felsenkeller laboratory in tunnels VIII and IX.
Felsenkeller Workshop, 26.-28.06.2017, Dresden, Deutschland
NDRA 2016, 29.06.-02.07.2016, Riva del Garda, Italia
DPG Frühjahrstagung, 14.-18.03.2016, Darmstadt, Deutschland
Mentor: PD Dr. Daniel Bemmerer

Fakultät für Werkstoffwissenschaft und Werkstofftechnologie an der TU Bergakademie Freiberg
Rädecker, P.
Presentation of the degree programmes of Faculty 5 of the TU Bergakademie Freiberg.
Studieninformationstag am BSZ Konrad Zuse, 14.11.2017, Hoyerswerda, Deutschland

Construction and setup of a KÜHNI column in pilot scale
The separation of copper and iron by solvent extraction is a widely studied process in metallurgy, especially for the treatment of solutions from the leaching of oxidic copper ores. The reaction isotherm for copper, the time and concentration dependencies, and the separation of copper and iron were determined. The optimal parameters obtained are applied to transfer the process to a stirred 32-mm KÜHNI extraction column (provided by SULZER Chemtech AG). The internals are made of corrosion-resistant plastic, and the aqueous phase is run as the dispersed phase.
Keywords: KÜHNI column; solvent extraction; copper; LIX984
Neue Verfahren und Materialien für Energie- und Umwelttechnik, 09.11.2017, Zwickau, Deutschland

FLUKA Radiation Safety Calculations for the Underground Accelerator Laboratory Felsenkeller/Dresden
Grieger, M.; Bemmerer, D.
The study of stable stellar burning reactions in nuclear astrophysics requires the use of ion accelerators in a low-background setting underground. Currently, there is only one such laboratory, the LUNA 0.4 MV accelerator deep underground in Gran Sasso/Italy. Several higher-energy underground accelerators are under development worldwide, including a 5 MV Pelletron to be placed in the Felsenkeller underground laboratory in Dresden/Germany. The shielding requirements for underground accelerators, where the paramount concern is the background in neighbouring rare-event searches, are reviewed.
Detailed FLUKA simulations have been carried out to study several different operating scenarios of the new 5 MV Felsenkeller accelerator, with a focus on the side effects on an existing γ-counting facility in the same tunnel system. The results of the simulations and practical implications will be discussed.
FLUKA shielding calculations for the underground accelerator laboratory Felsenkeller/Dresden
SATIF-13, 10.-12.10.2016, Dresden, Deutschland

XPS spectra, electronic structure, and magnetic properties of RFe5Al7 intermetallics
Finkelstein, L. D.; Efremov, A. V.; Korotin, M. A.; Andreev, A. V.; Gorbunov, D. I.; Mushnikov, N. V.; Zhidkov, I. S.; Kikharenko, A. I.; Cholakh, S. O.; Kurmaev, E. Z.
The results of X-ray photoelectron spectroscopy measurements (core levels and valence bands) of RFe5Al7 (R = Lu, Tm, Er, Ho, Dy, Tb, Gd) single crystals are presented in comparison with the results of bulk magnetization studies and electronic structure calculations. It is shown that the increase of the Curie temperature in RFe5Al7 from Tm to Gd is associated with an increase of the indirect R 4f - Fe 3d exchange interaction at the expense of the multiplicity 2S + 1 (statistical weight) in the ground state ^(2S+1)L_J of R3+ ions. The nonmonotonic behavior of the ferrimagnetic compensation temperature, Tcomp, as well as the values of the spontaneous magnetic moment, Ms, and formation energy, Eform, of the 4f^n levels in R metals in a series from ErFe5Al7 to GdFe5Al7 are explained by the difference in the quantum numbers L, J and S of the ground state of R3+ ions, leading to a maximum value of Tcomp, Ms and Eform for the Dy-containing compound. The electronic structure of Gd/LuFe5Al7 is calculated using the GGA+U approach, on the basis of which the physical mechanism and relative strength of the interatomic R-Fe and Al-Fe interactions are considered, and also the difference in the magnetic moments of iron atoms in different structural positions is explained.
Journal of Alloys and Compounds 733(2018), 82-90

Process Simulation of Si Dot Fabrication for SETs by Ion Beam Mixing and Phase Separation in Nanopillars
Prüfer, T.; Heinig, K. H.; Möller, W.; Xu, X.; Hlawacek, G.; Facsko, S.; Hübner, R.; Wolf, D.; Bischoff, L.; von Borany, J.
The single electron transistor (SET) is considered a promising candidate to continue the revolution of information technology due to its very low energy consumption (~100 times less than a common FET). The big challenge is the manufacturability of SETs working at room temperature (RT). This requires the fabrication of much smaller structures (< 5 nm) than present-day and even future (multi-E-beam or EUV) lithography can provide. Here we propose an ion-beam-assisted, CMOS-compatible fabrication process for SETs. To realize the controlled tunneling of single electrons, we propose a nanopillar of a Si/SiO2/Si stack with a single Si quantum dot embedded in SiO2 and connected by tunnel junctions to Si electrodes, which form the drain and source. For RT operation the quantum dot has to be smaller than 5 nm and requires tunnel distances below 2 nm. The size of this pillar needs to be in the range of 10-20 nm. In this presentation we show the simulation of a CMOS-compatible process to fabricate this quantum dot by using ion beam mixing and self-assembly. Earlier projects have already demonstrated the reliability of dot formation using ion beam mixing technologies.
Starting with a layer stack of Si/SiO2/Si, the ion beam irradiation by high-energy Si+ ions causes mixing of the two Si/SiO2 interfaces, which transforms the SiO2 layer into metastable SiOx (Figure 1). During subsequent heat treatment the mixed region of SiOx (<10nm2) separates into Si and SiO2, which leads to the formation of one single Si nanodot in the SiO2 layer (Figure 2). The irradiation simulations are done with the TRIDYN and TRI3DYN program codes and the annealing with a self-developed Kinetic Monte Carlo program. We will present how this process can be controlled via the ion beam irradiation parameters, the geometrical dimensions, and the heat treatment, so that it yields suitable conditions for application in hybrid SET-CMOS devices operating at RT. This part of the work is being funded by the European Union's Horizon 2020 research and innovation program under Grant Agreement No 688072 (Project IONS4SET).
Keywords: SET; CMOS; Phase Separation; Ion Beam Mixing
Electron, Ion, and Photon Beam Technology and Nanofabrication, 30.05.-02.06.2017, Orlando, USA

Determining antiferromagnetic domain patterns electrically
Kosub, T.; Hübner, R.; Appel, P.; Shields, B.; Maletinsky, P.; Kopte, M.; Schmidt, O. G.; Faßbender, J.; Makarov, D.
Extrinsic effects on Cr2O3 thin films are shown. A statistical method to evaluate antiferromagnetic (AF) domain patterns electrically is also demonstrated.
AF Spintronics Workshop, 25.10.2017, Grenoble, France

Purely Antiferromagnetic Magnetoelectric RAM
Kosub, T.; Kopte, M.; Appel, P.; Shields, B.; Maletinsky, P.; Hübner, R.; Fassbender, J.; Schmidt, O. G.; Makarov, D.
MERAM based on Cr2O3/Pt is presented.
DPG-Frühjahrstagung, 19.03.2017, Dresden, Germany
CIMTEC Ceramics Congress, 18.-22.06.2018, Perugia, Italien
MMM Pittsburgh, 13.-17.11.2017, Pittsburgh, USA
EMN Meeting, 16.-20.07.2018, Berlin, Deutschland

Purely Antiferromagnetic MERAM
Kosub, T.; Kopte, M.; Appel, P.; Shields, B.; Maletinsky, P.; Hübner, R.; Schmidt, O. G.; Faßbender, J.; Makarov, D.
Magnetoelectric and purely antiferromagnetic RAM based on Cr2O3/Pt is shown.
IEEE Dublin, 24.04.2017, Dublin, Ireland

Unconventional spin dynamics in the honeycomb-lattice material α-RuCl3: High-field electron spin resonance studies
Ponomaryov, A. N.; Schulze, E.; Wosnitza, J.; Lampen-Kelley, P.; Banerjee, A.; Yan, J.-Q.; Bridges, C. A.; Mandrus, D. G.; Nagler, S. E.; Kolezhuk, A. K.; Zvyagin, S. A.
We present high-field electron spin resonance (ESR) studies of the honeycomb-lattice material α-RuCl3, a prime candidate to exhibit Kitaev physics. Two modes of antiferromagnetic resonance were detected in the zigzag ordered phase, with magnetic field applied in the ab plane. A very rich excitation spectrum was observed in the field-induced quantum paramagnetic phase. The obtained data are compared with the results of recent numerical calculations, strongly suggesting a very unconventional multiparticle character of the spin dynamics in α-RuCl3. The frequency-field diagram of the lowest-energy ESR mode is found consistent with the behavior of the field-induced energy gap, revealed by thermodynamic measurements.
Physical Review B 96(2017), 241107(R)

Optimized Synthesis of the Bismuth Subiodides BimI4 (m = 4, 14, 16, 18) and the Electronic Properties of Bi14I4 and Bi18I4
Weiz, A.; Le Anh, M.; Kaiser, M.; Rasche, B.; Herrmannsdörfer, T.; Doert, T.; Ruck, M.
We optimized the syntheses of α- and β-Bi4I4 and transferred the method to the very bismuth-rich iodides Bi14I4, Bi16I4, and Bi18I4.
Phase-pure, microcrystalline powders of BimI4 (m = 4, 14, 18) can now be synthesized on a multigram scale. Conditions for the growth of single crystals of Bi16I4 and Bi18I4 were determined. The redetermination of the crystal structure of Bi16I4 hints at a stacking disorder or the presence of one-dimensional ∞[BimI4] ribbons with m = 14 and 18 alongside the dominant m = 16 type. The electronic band structures for m = 14, 16, and 18 were calculated including spin-orbit coupling. They vary markedly with m and show numerous bands crossing the Fermi level, predicting a 3D-metallic behavior. Measurements of the electrical resistivity of a polycrystalline sample of Bi14I4 as well as polycrystalline and single-crystalline samples of Bi18I4 confirmed their metallic nature over the temperature range 300 K to 2 K. For Bi18I4, a positive and strictly linear magnetoresistance at 2 K in static magnetic fields up to 14 T was observed, which could indicate a topologically nontrivial electronic state.
European Journal of Inorganic Chemistry (2017)47, 5609-5615 DOI: 10.1002/ejic.201700999

Flat Bands, Indirect Gaps, and Unconventional Spin-Wave Behavior Induced by a Periodic Dzyaloshinskii-Moriya Interaction
Gallardo, R. A.; Cortés-Ortuno, D.; Schneider, T.; Roldán-Molina, A.; Ma, F.; Lenz, K.; Fangohr, H.; Lindner, J.; Landeros, P.
Periodically patterned metamaterials are known for exhibiting wave properties similar to the ones observed in electronic band structures in crystal lattices. In particular, periodic ferromagnetic (FM) materials, also known as magnonic crystals (MCs), are characterized by the presence of bands and bandgaps (BGs) at tunable frequencies in their spin-wave (SW) spectrum. While those frequencies typically cover the GHz range, no fundamental reason prevents one from extending this range towards THz frequencies, a regime of high importance in communication technologies. Recently, the fabrication of magnets hosting Dzyaloshinskii-Moriya interactions (DMIs) has been pursued with high interest since properties such as the stabilization of chiral spin textures and nonreciprocal SW propagation originate from this antisymmetric exchange interaction. In this context, to further engineer the band structure of MCs, we propose the implementation of MCs with periodic DMIs, which can be obtained, for instance, by patterning periodic arrays of heavy metals (HMs) on top of an ultrathin FM film. We demonstrate through theoretical calculations and micromagnetic simulations that such systems exhibit a unique evolution of the standing SWs around the BGs in areas of the FM film that are in contact with the HM wires. We also predict the emergence of flat SW bands and indirect magnonic gaps, and we show that these effects depend on the strength of the DMI. This study opens further pathways towards engineered metamaterials for SW-based devices.
Keywords: chiral; magnonics; spin waves; DMI; Dzyaloshinskii-Moriya Interaction; magnonic crystals; metamaterials; micromagnetic simulations; ferromagnetic resonance; FMR
MMM 2019 - Annual Conference on Magnetism and Magnetic Materials, 04.-08.11.2019, Las Vegas, United States of America

Transglutaminase 2 as a target for functional tumour imaging: From substrates to inhibitors to radiotracers
Löser, R.
The talk covers the efforts of our group in the development of inhibitor-based radiotracers for the imaging of tumour-associated transglutaminase 2.
After introducing the biological function of transglutaminase 2, the development of substrates for fluorimetric activity assays will be lined out. Major emphasis will be put on the synthesis, kinetic characterisation and in vitro pharmacokinetic profiling of acrylamide-based irreversible inhibitors. Finally, labelling of these compounds with fluorine-18 and initial results towards their radiopharmacological evaluation will be discussed. Pharmazeutisches Kolloquium, 03.11.2017, Bonn, Deutschland Metallic Photocathodes for Superconducting RF Photo Guns Teichert, J.; Xiang, R. Report on results and status of photocathode development and measurement in the EC project EuCARD2. Keywords: photocathode; quantum efficiency; magnesium; lead; niobium EuCARD2 WP11 Annual Meeting, 14.-15.03.2017, Warsaw/Swierk, Poland Structural characterization of (Sm,Tb)PO4 solid solutions and pressure-induced phase transitions Heuser, J. M.; Palomares, R. I.; Bauer, J. D.; Lozano Rodriguez, M. J.; Cooper, J.; Lang, M.; Scheinost, A. C.; Schlenz, H.; Winkler, B.; Bosbach, D.; Neumeier, S.; Deissmann, G. Sm1-xTbxPO4 solid solutions were synthesized and extensively characterized by powder X-ray diffraction, vibrational spectroscopy, and X-ray absorption spectroscopy. At ambient conditions solid solutions up to x = 0.75 crystallize in the monazite structure, whereas TbPO4 is isostructural to xenotime. For x = 0.8 a mixture of both polymorphs was obtained. Moreover, a phase with anhydrite structure was observed coexisting with xenotime, which was formed due to mechanical stress. Selected solid solutions were investigated at pressures up to 40 GPa using in situ high pressure synchrotron X-ray diffraction and in situ high pressure Raman spectroscopy. SmPO4 and Sm0.5Tb0.5PO4 monazites are (meta)stable up to the highest pressures studied here. TbPO4 xenotime was found to transform into the monazite structure at a pressure of about 10 GPa. The transformation of Sm0.2Tb0.8PO4 xenotime into the monazite polymorph commences already at about 3 GPa. This study describes the reversibility of the pressure-induced (Sm,Tb)PO4 xenotime-monazite transformation. Keywords: monazite; xenotime; anhydrite; solid solutions; phase transformation Journal of the European Ceramic Society 38(2018), 4070-4081 DOI: 10.1016/j.jeurceramsoc.2018.04.030 MoS₂ quantum dots as an efficient catalyst material for oxygen evolution reaction Mohanty, B.; Ghorbani-Asl, M.; Kretschmer, S.; Ghosh, A.; U. Guha, P.; Panda, S. K.; Jena, B.; Krasheninnikov, A. V.; Jena, B. K. The development of an active, earth-abundant and inexpensive catalyst for oxygen evolution reaction (OER) is highly desirable but remains a great challenge. Here, by combining experiments and first-principles calculations, we demonstrate that MoS₂ quantum dots (MSQDs) are an efficient material for OER. We use a simple route for the synthesis of MSQDs from a single precursor in aqueous medium avoiding the formation of unwanted carbon quantum dots (CQDs). The as-synthesized MSQDs exhibit higher OER activity with the lower Tafel slope as compared to that for the state-of-the-art catalyst IrO₂/C. The potential cycling of the MSQDs activates the surface and improves the OER catalytic properties. The density functional theory calculations reveal that MSQD vertices are reactive and the vacancies at the edges also promote the reaction, which indicates that the small flakes with defects at the edges are efficient for OER. 
The presence of CQDs affects the adsorption of reaction intermediates and dramatically suppresses the OER performance of the MSQDs. Our theoretical and experimental findings provide important insights into the synthesis process of MSQDs and their catalytic properties and suggest promising routes to tailoring the performance of the catalysts for OER applications. Keywords: MoS₂; Quantum Dots; Electrocatalysis; Oxygen Evolution Reaction; First-Principles Calculations; Defects ACS Catalysis 8(2018), 1683-1689 DOI: 10.1021/acscatal.7b03180 Si amorphization by focused ion beam milling: Point defect model with dynamic BCA simulation and experimental validation Huang, J.; Loeffler, M.; Muehle, U.; Moeller, W.; Mulders, J. J. L.; Kwakman, L. F. T.; van Dorp, W. F.; Zschech, E. A Ga focused ion beam (FIB) is often used in transmission electron microscopy (TEM) analysis sample preparation. In case of a crystalline Si sample, an amorphous near-surface layer is formed by the FIB process. In order to optimize the FIB recipe by minimizing the amorphization, it is important to predict the amorphous layer thickness from simulation. Molecular Dynamics (MD) simulation has been used to describe the amorphization, however, it is limited by computational power for a realistic FIB process simulation. On the other hand, Binary Collision Approximation (BCA) simulation is able and has been used to simulate ion-solid interaction process at a realistic scale. In this study, a Point Defect Density approach is introduced to a dynamic BCA simulation, considering dynamic ion-solid interactions. We used this method to predict the c-Si amorphization caused by FIB milling on Si. To validate the method, dedicated TEM studies are performed. It shows that the amorphous layer thickness predicted by the numerical simulation is consistent with the experimental data. In summary, the thickness of the near-surface Si amorphization layer caused by FIB milling can be well predicted using the Point Defect Density approach within the dynamic BCA model. Keywords: Amorphization; Beam plasma interactions; Computational chemistry; Defect density; Focused ion beams; High resolution transmission electron microscopy; Ion beams; IonsMilling (machining); Molecular dynamics; Point defects; Silicon; Surface defects; Transmission electron microscopy Ultramicroscopy 184(2018), 52-56 DOI: 10.1016/j.ultramic.2017.10.011 Addendum: Ion beam irradiation of nanostructures: sputtering, dopant incorporation, and dynamic annealing Holland-Moritz, H.; Johannes, A.; Möller, W.; Ronning, C. A previously published formalism to derive nanosphere sputtering yields is corrected and refined. Keywords: Ion Irradiation; Nanostructures; Sputtering Semiconductor Science and Technology 32(2017), 109401 DOI: 10.1088/1361-6641/aa88dc Relative biological effectiveness in proton beam therapy – current knowledge and future challenges Lühr, A.; von Neubeck, C.; Krause, M.; Troost, E. G. C. This review summarizes recent abstracts to international meetings and international peer-reviewed publications on the potential variation of the RBE and its possible side-effects, and compares these with past publication on photon beam irradiation. Moreover, recent literature on how to deal with potential RBE variations and the resulting uncertainty during treatment planning as well as solutions to correlate dose and LET distributions to subsequent (magnetic resonance) imaging changes are presented. 
Finally, the current status on RBE measured in vitro and in vivo is reviewed and input given on how to bridge the existing gap between lab and clinic. Keywords: RBE; relative biological effectiveness; clinical side-effects; in vitro and in vivo models; MRI; dose recalculation Clinical and Translational Radiation Oncology (2018)9, 35-41 Prognostic Value of Head and Neck Tumor Proliferative Sphericity From 3'-Deoxy-3'-[18F] Fluorothymidine Positron Emission Tomography Majdoub, M.; Hoeben, B.; Troost, E. G. C.; Oyen, W.; Kaanders, J.; Le Rest, C.; Visser, E.; Visvikis, D.; Hatt, M. Background: Enhanced proliferative activity in head and neck squamous cell carcinoma (HNSCC) adversely affects outcome after (chemo)radiotherapy. 3'-deoxy-3'-[18F] fluorothymidine (18F-FLT) positron emission tomography (PET) can be used for quantifying tumour proliferation and its changes during therapy. In this study, we investigated the complementary prognostic value of sphericity and standard metrics in 18F-FLT PET images before and during (chemo)radiotherapy regarding 4-year disease-free survival (DFS). Methods: 48 HNSCC patients treated with radiotherapy (n=32) or chemoradiotherapy (n=16) with curative intent, underwent 18F-FLT PET-CT scans before and in the second week of treatment. Patients were followed for a median of 52 months. Primary tumours were delineated using the Fuzzy Locally Adaptive Bayesian algorithm. The proliferative volumes were further characterized by extracting SUV, volume and sphericity. Prognostic value for disease-free survival (DFS) of features was assessed using Kaplan-Meier survival analysis. Results: In univariate analysis, the per-treatment sphericity (p=0.02, HR=4.2 [1.3– 13.9]) and SUVmean (p=0.03, HR=4.1 [1.2 – 14.2]) were prognostic factors, whereas none of the pre-treatment features were significant. Reduction in SUVmax (p=0.04, HR=4 [1.1 – 15.1]) was also a prognostic factor, but reduction of proliferative tumour volume did not reach statistical significance. The best stratification of patients for DFS was achieved with the combination of the two per-treatment features SUVmean and sphericity (p<0.001, HR=6.7 [1.8 – 25]). Conclusion: High sphericity combined with low mean 18F-FLT SUV during treatment were associated with better DFS. These results suggest the potential prognostic value of advanced tumour proliferative volume characterization from 18F-FLT PET in HNSCC that may be further explored in larger cohorts. IEEE Transactions on Radiation and Plasma Medical Sciences 2(2018)1, 33-40 DOI: 10.1109/TRPMS.2017.2777890 Strain distribution in GaAs/InxGa1-xAs core/shell nanowires grown by molecular beam epitaxy on Si(111) substrates Balaghi, L.; Hübner, R.; Bussone, G.; Grifone, R.; Hlawacek, G.; Grenzer, J.; Schneider, H.; Helm, M.; Dimakis, E. Workshop on Surface and Interface Diffraction in Condensed Matter Physics and Chemistry (CMPC), 09.-10.03.2017, DESY, Hamburg, Germany Strain distribution in highly mismatched GaAs/(In,Ga)As core/shell nanowires Balaghi, L.; Hübner, R.; Bussone, G.; Grifone, R.; Ghorbani, M.; Krasheninnikov, A.; Hlawacek, G.; Grenzer, J.; Schneider, H.; Helm, M.; Dimakis, E. The core/shell nanowire (NW) geometry is suitable for the pseudomorphic growth of highly mismatched semiconductor heterostructures, where the shell thickness can exceed significantly the critical thickness in equivalent planar heterostructures. 
We have investigated the accommodation of misfit strain in self-catalyzed GaAs/(In,Ga)As core/shell NWs grown on Si (111) substrates by molecular beam epitaxy. The NWs have their axis along the [111] crystallographic direction, six {11 ̅0} sidewalls, and their crystal structure is predominantly zinc blende. For strain analysis, we used Raman scattering spectroscopy, transmission electron microscopy, X-ray diffraction and photoluminescence spectroscopy. Within a certain range of core/shell dimensions and shell composition, our findings reveal that the elastic energy in NWs without misfit dislocations can be confined exclusively inside the core, allowing for the shell to be strain-free. The experimental results are also compared with theoretical simulations of the strain (continuum elasticity theory) and phonon energy (density functional theory). Nanoscale surface patterning by non-equilibrium self-assembly of ion-induced vacancies and ad-atoms Facsko, S.; Ou, X.; Engler, M.; Erb, D.; Skeren, T.; Bradley, R. M. Various self-organized nanoscale surface patterns can be produced by low- and medium-energy ion beam irradiation [1], depending on the irradiation conditions. Hexagonally ordered dot or pit patterns, checkerboard patterns, as well as periodic ripple patterns oriented perpendicular or parallel to the ion beam direction are formed spontaneously during the continuous surface erosion by ion sputtering. On amorphous surfaces, the formation of these patterns results from an interplay of different roughening mechanisms, e.g. curvature dependent sputtering, ballistic mass redistribution, or altered surface stoichiometry on binary materials, and smoothing mechanisms, e.g. surface diffusion or surface viscous flow. An additional surface instability arises above the recrystallization temperature of the material. In this case, ion induced bulk defects are dynamically annealed and amorphization is prevented. The diffusion of ion-induced vacancies and ad-atoms on the crystalline surface is now affected by the Ehrlich-Schwoebel (ES) barrier, i.e. an additional diffusion barrier to cross terrace steps. Vacancies and ad-atoms are trapped on terraces and can nucleate to form new extended pits or terraces, respectively [2]. For the mathematical description of the pattern formation and evolution in the reverse epitaxy regime, a continuum equation can be used which combines the ballistic effects of ion irradiation and effective diffusion currents due to the ES barrier on the crystalline surface. By comparison with experimental studies of pattern formation on Ge and GaAs surfaces at different angles and temperatures, we will show that the pattern evolution is determined by the surface instability due to the ES barrier, surface diffusion, and ballistic effects of ion irradiation. [2] X. Ou, K.-H. Heinig, R. Hübner, J. Grenzer, X. Wang, M. Helm, J. Fassbender, and S. Facsko, Nanoscale 7, 18928 (2015). Keywords: ion beam irradiation; surface patterning; reverse epitaxy 20th International Conference on Surface Modification of Materials by Ion Beams, 09.-14.07.2017, Lisbon, Portugal Constraining the economic potential of by-product recovery by using a geometallurgical approach: the example of rare earth element recovery at Catalão I, Brazil Pereira, L.; Birtel, S.; Möckel, R.; Michaux, B.; Silva, A. C.; Gutzmer, J. Located in Goiás state of Brazil, Catalão I is a carbonatite complex that is part of the Alto Paranaíba Igneous Province. 
A niobium deposit in the complex, named Boa Vista, has been exploited for more than 40 years and is currently the world's second largest niobium producer. The deposit is owned and operated by Niobrás, part of China Molybdenum Co.. Phosphates are also produced in the Catalão I complex at the Chapadão mine, an operation that is owned and operated by Copebrás, also part of China Molybdenum Co.. The phosphate production tailings are reprocessed at Boa Vista for recovering niobium as a by-product. Rare earth elements, albeit present in significant concentrations, are not recovered as by-products. This study provides quantiative mineralogical and microfabric data on the occurrence of rare earth minerals – and provides constraints for concentration of rare earth elements during current niobium beneficiation routes at the Tailings plant. Nine samples from different stages of the process were taken and characterized by Mineral Liberation Analyzer, X-ray powder diffraction and bulk rock chemistry. The recovery of rare earth elements in each of the tailing streams was quantified by mass balance. The results are used to identify the most suitable approach to recover REE as a by-product – without placing limitations on niobium production. Monazite, the most common rare earth mineral identified in the feed to the Tailings Plant, occurs as Ce-rich and La-rich varieties that can be easily distinguished by SEM-based image analysis. Quartz, FeTi-oxides and several phosphate minerals are the main gangue minerals. The highest rare earth element content concentrations (1.75 wt.% TREO) and the greatest potential for REE processing are reported for the final flotation tailings stream. To place tentative economic constraints on REE recovery from the tailings material, an analogy to the Brown's Range deposit in Australia is drawn. Its technical flow sheet was used to estimate the cost for a hypothetical REE-production at Boa Vista. Parameters derived from SEM-based image analysis were used to model possible monazite recovery and concentrate grades. This exercise illustrates that a marketable REE concentrate could be obtained at Boa Vista, if the process could recover all particles with at least 60% of monazite on their surface. Applying CAPEX and OPEX values similar to that of Brown's Range showed that such an operation would be profitable at current REE prices. Keywords: REE production; by product; geometallurgy; economic assement Economic Geology 114(2019)8, 1555-1568 DOI: 10.5382/econgeo.4637 Ion-induced patterning of Ge surfaces above the recrystallization temperature Low- and medium-energy ion beam irradiation can lead to various self-organized nanoscale surface patterns depending on the irradiation conditions [1]. If the sample temperature is below the material recrystallization temperature, the ion bombardment results in amorphization of the surface. On such amorphous surfaces, the formation of nanoscale patterns is driven by the interplay of different ion beam induced roughening and smoothing mechanisms: curvature dependent sputtering, ballistic mass redistribution or altered surface stoichiometry (on binary materials) are roughening the surface, while surface diffusion or surface viscous flow are smoothing it. An additional surface instability arises above the recrystallization temperature of the material, when the surface remains crystalline during ion irradiation. 
In this case, the diffusion of ion-induced vacancies and ad-atoms on the crystalline surface is affected by the Ehrlich-Schwoebel (ES) barrier, i.e. an additional diffusion barrier to cross terrace steps. Vacancies and ad-atoms are thereby trapped on terraces and nucleate to form new extended pits or islands, respectively [2]. In molecular beam epitaxy mounds with different facets are formed due to the ES barrier. In ion-induced reverse epitaxy the additionally diffusing vacancies lead to different morphologies, like inverse pyramid and checkerboard patterns. However, on Ge (001) surfaces irradiated at incidence angles greater than 50° mound patterns are formed and for angles greater than 75° the pattern turns into ripples. This transition from checkerboard over mound to ripple patterns in the reverse epitaxy regime can be described by a continuum equation which combines the ballistic effects of ion irradiation and the effective diffusion currents due to the ES barrier on the crystalline surface. [2] X. Ou, A. Keller, M. Helm, J. Fassbender, and S. Facsko, Phys. Rev. Lett. 111, 016101 (2013). Nanopatterning 2017, 26.-28.06.2017, Helsinki, Finland REE by-product potential at Catalão I: a geometallurgical assessment Birtel, S.; Pereira, L.; Silva, A. C.; Gutzmer, J. A geometallurgical study of the rare earth mineralogy and microfabric relations was undertaken at the Catalão I deposit, Chapadão mine, Goiás/Brazil. At Catalão I a carbonatite and its lateritic cap have been exploited for more than 40 years; being the world's second largest niobium deposit and producer. The study was undertaken to assess the technical possibility and feasibility to recover rare earth minerals as a by-product of niobium production. For this purpose, nine samples were collected from different stages of the beneficiation process in the so-called Tailings Plant. Mineral Liberation Analyzer (MLA), X-ray powder diffraction and bulk rock chemistry were used to characterize these samples for their processing properties. The recovery of REE in each of the tailing streams was quantified by mass balance. The results were used to identify the most suitable approach to recover REE as a by-product – without placing constraints on niobium production. Monazite is the dominant REE mineral in the feed to the Tailings Plant; this feed heralds from the exploitation of lateritic weathering residues. Quartz, FeTi-oxides and phosphate minerals are the main gangue minerals. The highest REE contents are reported in the final flotation tailings stream (1.75 wt.% TREO), with monazite-bearing particles in a size range suitable for further processing (10-120 µm size range). Yet, liberation of monazite in this particle size fraction is rather poor. By seeking an analogy to the Brown's Range deposit in Australia, a tentative economic assessment is attempted for REE production at Chapadão. For this purpose, parameters derived from MLA were used to model possible monazite recovery and concentrate grades. This assessment illustrates that a marketable REE concentrate may be obtained as a by-product at Chapadão even at current REE prices. 
Keywords: REE production; by-product; geometallurgy
Resources for Future Generations PREMIER CONFERENCE ON ENERGY • MINERALS • WATER • THE EARTH, 16.-21.06.2018, Vancouver, Canada

Extremely lattice mismatched GaAs/InxGa1-xAs core/shell nanowires: coherent growth and strain distribution
Compound semiconductors are versatile materials due to the possibility of tailoring their (opto)electronic properties by selecting their composition appropriately. When grown heteroepitaxially, though, this possibility is constrained by the lattice mismatch with the substrate. InxGa1-xAs is a good example because it can have, depending on x, a suitable direct optical band gap for optoelectronic applications in the infrared (e.g. telecommunication wavelengths) or high electron mobility for high-speed transistors. However, the practical choices of x are limited by the available substrates, typically GaAs for low x or InP for x≈0.53. Nanowires are a promising alternative for the realization of epitaxial heterostructures with high lattice mismatch due to their unique geometry and high surface-to-volume ratio. In addition, the possibility of monolithic integration in Si-CMOS platforms adds to their technological significance. In this work, we have investigated the growth of free-standing GaAs/InxGa1-xAs core/shell nanowires on Si(111) substrates by molecular beam epitaxy and the accommodation of lattice mismatch therein. Specifically, we have concentrated on highly lattice mismatched heterostructures (x=0.20-0.80) and very thin cores (diameter < 25 nm). Self-catalyzed growth of very thin GaAs core nanowires with a sufficiently low number density (to avoid beam shadowing during the shell growth) was possible on native-oxide/Si(111) substrates, after an in situ treatment of the latter with Ga droplets. This resulted in zinc blende nanowires with their axis along the [111] crystallographic direction and six {1-10} sidewalls. Subsequently, conformal overgrowth of the InxGa1-xAs shell was obtained only under kinetically limited growth conditions that suppressed mismatch-induced bending phenomena. The strain in the core and the shell was studied systematically as a function of the shell composition and thickness. To that end, we used Raman scattering spectroscopy, transmission electron microscopy and (synchrotron/lab source) X-ray diffraction, and compared the results with theoretical predictions based on continuum elasticity and density functional theories. All findings point to the existence of anisotropic tensile strain in the core that increases (as quantified by Raman measurements) with increasing shell thickness, whereas the corresponding compressive strain in the shell decreases to zero. Our work demonstrates the opportunity to grow not only relaxed InxGa1-xAs shells with high structural quality (as adopted from the GaAs core) in a wide, if not the whole, compositional range, but also highly strained (tensile) GaAs cores with (opto)electronic properties that remain to be explored.
Keywords: core/shell nanowire; strained core; relaxed shell
Nanowire Week 2017, 29.05.-02.06.2017, Lund, Sweden

Nanoscale surface patterning of crystalline semiconductor surfaces by broad ion beam irradiation
Various self-organized nanoscale surface patterns can be produced by low- and medium-energy ion beam irradiation [1], depending on the irradiation conditions.
Hexagonally ordered dot or pit patterns, checkerboard patterns, as well as periodic ripple patterns oriented perpendicular or parallel to the ion beam direction are formed spontaneously during the continuous surface erosion by ion sputtering. On amorphous surfaces, the formation of these patterns results from an interplay of different roughening mechanisms, e.g. curvature dependent sputtering, ballistic mass redistribution, or altered surface stoichiometry on binary materials, and smoothing mechanisms, e.g. surface diffusion or surface viscous flow. For the description of the pattern formation and evolution in the reverse epitaxy regime, a continuum equation can be used which combines the ballistic effects of ion irradiation and effective diffusion currents due to the ES barrier on the crystalline surface. By comparison with experimental studies of pattern formation on Ge and GaAs surfaces at different angles and temperatures, we will show that the pattern evolution is determined by the combined action of surface instability due to the ES barrier, surface diffusion, and ballistic effects of ion irradiation.
Keywords: ion irradiation; surface patterning; reverse epitaxy
MRS Fall Meeting, 26.11.-1.12.2017, Boston, USA

Positronium probing of pores in zirconia nanopowders
Prochazka, I.; Cizek, J.; Lukac, F.; Melikhova, O.; Hruska, P.; Anwand, W.; Liedke, M. O.; Konstantinova, T. E.; Danilenko, I. A.
Zirconium dioxide (ZrO2, zirconia) currently receives considerable attention because of a variety of advantageous properties that make zirconia-based materials useful in numerous fields of practice, in particular in the ceramics industry and other high-temperature applications. To stabilise the high-temperature phases of zirconia down to room temperature, the host lattice usually has to be doped with suitable metal cations. Nanopowders are currently in focus as starting materials for manufacturing ZrO2-based ceramics by sintering, because well-homogenised materials of low porosity can be produced more easily. Nanometer-sized defects associated with grain boundaries (GBs) then play an enhanced role in nanopowders due to the enlarged volume fraction of GBs. Positrons and positronium (Ps) atoms can serve as efficient probes of different structures encountered in particular stages of manufacturing ZrO2-based functional materials. In the present contribution, conventional positron and Ps lifetime measurements were carried out on a variety of zirconia-based nanopowders and ceramics obtained by sintering these nanopowders. The nanopowders studied were doped with various metal cations (Y3+, Cr3+, Ce4+, Mg2+) and differed also in thermal treatment prior to sintering. Lifetime experiments were conducted in air or in vacuum, combined with Doppler broadening measurements using a slow-positron beam, and supplemented with X-ray diffraction (XRD) and mass-density (MD) measurements. The Figure illustrates the variability of the observed lifetime spectra. In a range of lifetimes from a few ns to ≈ 70 ns, up to three individual lifetime components could be identified, see Figure (a) and (b). Such observations unambiguously testify to Ps formation with subsequent ortho-Ps annihilation. On the other hand, an absence of the ortho-Ps components was found in certain nanopowders, thus giving evidence of strong Ps inhibition, see Figure (c). Pore sizes were estimated using current models correlating the observed ortho-Ps lifetime with pore size.
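The conversion from an observed ortho-Ps lifetime to a pore radius is usually based on a Tao-Eldrup-type model. The sketch below numerically inverts the standard Tao-Eldrup expression with the conventional electron-layer thickness ΔR = 0.166 nm; this is a generic illustration, not necessarily the exact model variant used in the study, and it is only appropriate for small (roughly sub-nanometre) pores, with extended models required for larger ones.

```python
# Sketch: estimate a pore radius from a measured ortho-Ps lifetime using the
# Tao-Eldrup model (small pores only; larger pores need extended models).
import math
from scipy.optimize import brentq

DELTA_R = 0.166   # nm, empirical electron-layer thickness (conventional value)

def ops_lifetime_ns(radius_nm: float) -> float:
    """Tao-Eldrup ortho-Ps pick-off lifetime (ns) for a spherical pore."""
    r0 = radius_nm + DELTA_R
    x = radius_nm / r0
    rate = 2.0 * (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))  # 1/ns
    return 1.0 / rate

def radius_from_lifetime(tau_ns: float) -> float:
    """Invert the model numerically to obtain the pore radius (nm)."""
    return brentq(lambda r: ops_lifetime_ns(r) - tau_ns, 1e-3, 5.0)

for tau in (2.0, 5.0, 10.0):      # example lifetimes in ns (placeholders)
    print(f"tau = {tau:4.1f} ns  ->  R ~ {radius_from_lifetime(tau):.2f} nm")
```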
Origins of pores will be discussed on the basis of the ortho-Ps data in combination with the results of slow-positron beam, XRD and MD measurements.
Keywords: zirconia nanopowders; Positronium; grain boundaries; PAS; PALS
12th International Workshop on Positron and Positronium Chemistry, 28.08.-01.09.2017, Lublin, Poland

Positronium formation in nanostructured metals
Čížek, J.; Melikhova, O.; Hruška, P.; Vlček, M.; Anwand, W.; Liedke, M. O.; Novotný, M.; Bulíř, J.; Cheng, Y.
Nanostructured metals containing nano- and micro-cavities can be prepared by various methods. The morphology of the cavities can be controlled by varying the parameters of preparation. This enables fabrication of nanostructured metals with properties tailored for particular applications; e.g. nanostructured metals containing fractal-like cavities with a wide size distribution are used as omnidirectional absorbers of light from the visible into the infrared spectral region. Positronium (Ps) is a non-destructive probe of nanoscopic cavities capable of precise determination of their size distribution. In conventional metals Ps does not form, since any bound state of positron and electron is quickly destroyed by the screening of conduction electrons. However, a thermalized positron escaping from a metal through an inner surface into a cavity may form Ps by picking up an electron at the surface. This process was examined in the present work on nanostructured metals prepared by three different methods: (i) thin films of black metals (Au and Al) evaporated in N2 atmosphere; (ii) nano-porous bulk Pd prepared by electrochemical etching of PdCo alloy; (iii) nanostructured Gd prepared by selective evaporation of Mg from MgGd alloy. Our investigations confirmed that Ps was formed in nanostructured metals. The size distribution of nano-pores in the samples has been determined. The mechanism of Ps formation in these samples is discussed in the paper.
Keywords: Positronium; Nanostructured metals; pores; black Au; Al; Pd; Gd

Nanomembranes Modified by Highly Charged Ions
Facsko, S.; Wilhelm, R. A.; Gruber, E.; Heller, R.; Gölzhäuser, A.; Beyer, A.; Turchanin, A.; Aumayr, F.
Smart membranes play a key role in different sensor applications, e.g. for drug and explosive detection. By tailoring the structure and properties of these membranes, physical-chemical functionality can be added to the sensor. One way of modifying membranes is by particle irradiation with electrons or ions. Specifically, highly charged ions (HCI) carry a large amount of potential energy (the stored ionization energy), which is released upon interaction with the membrane, creating nanopores by a single HCI impact. In order to be able to control the ion-induced modification, e.g. defining the pore size, the energy deposition in the membranes has to be determined. For the interaction of HCI with thin membranes this is particularly interesting because the HCIs are still in a pre-equilibrium interaction regime for thicknesses below a few nm. Within 1 nm thick carbon nanomembranes (CNMs), for instance, holes are produced by the passage of highly charged Xeq+ ions only above a threshold in the potential energy of the HCI which depends on the kinetic energy [1]. In order to study the stopping force of the HCIs in the membrane, we examined the charge state and the energy loss of the Xeq+ ions after their passage through the CNM. Surprisingly, two distinct exit charge distributions were observed [2].
While some of the ions pass the membrane with almost no charge loss, other ions lose most of their charge. Apparently, the observed charge distribution reflects two different impact parameter regimes. The different impact parameter regimes are also connected to different energy losses: ions with large impact parameters are not stopped, whereas ions in close collisions exhibit a high stopping force which is strongly dependent on the incident charge state.
[1] R.A. Wilhelm, E. Gruber, R. Ritter, R. Heller, A. Beyer, A. Turchanin, N. Klingner, R. Hübner, M. Stöger-Pollach, H. Vieker, G. Hlawacek, A. Gölzhäuser, S. Facsko, and F. Aumayr, 2D Mater. 2, 1 (2015). [2] R.A. Wilhelm, E. Gruber, R. Ritter, R. Heller, S. Facsko, F. Aumayr, Phys. Rev. Lett. 112, 153201 (2014).
Keywords: highly charged ions; nanomembranes

Porosimetry of ultra-low K materials and transformed porous glass thin layers by Monoenergetic Positron Source at ELBE facility
Attallah, A. G.; Koehler, N.; Dornberg, G.; Butterling, M.; Liedke, M. O.; Wagner, A.; Schulz, S. E.; Badawi, E.; Enke, D.; Krause-Rehberg, R.
The pore size of spin-on coated ultra-low K (ULK) materials cured at 450 °C for different times was studied by the pulsed slow positron beam (MePS) at ELBE/HZDR. To investigate the pore formation in cured porous spin-on dielectrics, the pore size as a function of positron implantation energy was obtained for samples with different curing times. Such a study is performed to understand the dielectric damage behaviour of ULK dielectrics for integration in the back-end of line (BEOL). MePS results revealed that the films contain open and closed pores with diameters of ~3 nm, which was confirmed by capping the samples. The highest pore concentration is located beneath the surface in the 0.2-0.5 µm range (we plan to carry out ellipsometric porosimetry and FTIR measurements during this summer). The pseudomorphic transformation of porous glass thin layers, with pores of 40-50 nm diameter and a relatively small surface area, to MCM-41 with ~4 nm pores and a higher surface area was also studied by MePS. The small pore size of MCM-41 was successfully detected, with an intensity that grows with the degree of transformation, but the large pores were not detected at all. To understand the inability to detect the large pores by positron annihilation lifetime measurements, we plan to perform SEM measurements at the same depths as those probed by the implanted positrons (0.005-2.4 µm). Additionally, the increase in the intensity of the positronium lifetime component, which correlates with the small pores, as a function of positron implantation energy could reflect inner pore isolation or poor interconnectivity.
Keywords: ultra-low K materials; Porosimetry; MePS; ELBE; Positronium

In-situ investigations of the curing process in ultra low-k materials
Liedke, M. O.; Koehler, N.; Butterling, M.; Attallah, A. G.; Krause-Rehberg, R.; Hirschmann, E.; Schulz, S. E.; Wagner, A.
Porous spin-on glasses belong to ultra low-k (ULK) dielectrics and are promising candidates for integration in semiconductor device fabrication technology. Their microstructure usually consists of interconnected pore networks distributed across the film rather than of separated voids. The pore size and distribution are controllable to a large extent; however, the pore formation process itself is still not well understood. Dielectric damage during integration and material degradation of films with large porosity are still problematic issues.
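In the depth-resolved positron experiments described in these abstracts, the probed depth is selected via the positron implantation energy. The mean implantation depth is commonly estimated from a Makhovian profile; the sketch below uses the widely quoted empirical parameters A ≈ 4.0 µg cm⁻² and n ≈ 1.6 together with illustrative film densities, all of which are assumptions here rather than values from these studies. With these assumed parameters, a depth window of 0.005-2.4 µm in a low-density film corresponds to beam energies of very roughly 0.3-15 keV.

```python
# Sketch: mean positron implantation depth vs. beam energy (Makhovian estimate).
# z_mean [nm] ~ (A / rho) * E^n with A ~ 4.0 ug/cm^2 and n ~ 1.6 (common
# empirical values, taken as assumptions), rho in g/cm^3 and E in keV.

A_UG_PER_CM2 = 4.0
N_EXP = 1.6

def mean_depth_nm(energy_keV: float, density_g_cm3: float) -> float:
    # 1 ug/cm^2 divided by 1 g/cm^3 equals 10 nm, hence the factor 10
    return 10.0 * A_UG_PER_CM2 / density_g_cm3 * energy_keV ** N_EXP

materials = {"low-k film (~1.3 g/cm3)": 1.3, "SiO2 (~2.2 g/cm3)": 2.2}
for name, rho in materials.items():
    for e in (1.0, 5.0, 15.0, 30.0):   # positron energies in keV
        depth_um = mean_depth_nm(e, rho) / 1000.0
        print(f"{name:25s}  E = {e:4.1f} keV  ->  z ~ {depth_um:.2f} um")
```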
First results of in-situ investigations of the pore formation during the curing process – porogen removal by vacuum annealing – will be presented. The main motivation is to gain insight into the pore formation from its early stages up to its full development. The in-situ annealing and Doppler Broadening Positron Annihilation Spectroscopy (DB-PAS) measurements have been performed on our Apparatus for In-situ Defect Analysis (AIDA) [1], which is the end-station of the slow-positron beamline at HZDR. A comparison with preliminary ex-situ studies by means of DB-PAS [see Fig. 1], Positron Annihilation Lifetime Spectroscopy (PALS), and Fourier Transform Infrared Spectroscopy (FTIR) will be given. Fig. 1(a) shows that the o-Ps emission increases with curing time and can thus serve as a probe of film porosity as long as the films are capped. A curing time of 5-30 min is sufficient to fully develop the pore network [Fig. 1(b)]. Porosity development and distribution will be discussed for annealing temperatures in the 100-400°C range and varied annealing times.
[1] M.O. Liedke et al., Journal of Applied Physics 117, 163908 (2015).
Keywords: low-k materials; curing process; Porous spin-on glasses; PAS; AIDA; Positron Annihilation Spectroscopy

Ion Beam-Enabled CMOS-Compatible Manufacturing of SETs Operating at Room Temperature
Facsko, S.; Heinig, K. H.; Stegemann, K. H.; Pruefer, T.; Xu, X.; Hlawacek, G.; Huebner, R.; Wolf, D.; Bischoff, L.; Moeller, W.; von Borany, J.
Electronics has been dominated by silicon for half a century. Si will dominate electronics for another decade; however, its functionality might change from classical field-controlled currents through channels (the Field Effect Transistor, FET) to quantum mechanical effects like field-controlled hopping of single electrons from a source to a drain via a quantum dot (the Single Electron Transistor, SET). Due to single electron hopping, the SET is the champion of low-power consumption. This is very attractive for the expanding Internet of Things (IoT): more and more devices need batteries and plugs. Therefore, together with improved batteries, advanced computation and communication must be delivered at extremely low power consumption. At very low temperatures, the perfect functionality of SETs has been proven for tiny metal dots [1] and larger Si islands [2]. However, large-scale use of SETs requires Room Temperature (RT) operation, which can be achieved with tiny Si dots (<4 nm) in SiO2, exactly located between source and drain with distances of ~1…2 nm allowing quantum mechanical tunneling. Manufacturability of such nanostructures is the roadblock for large-scale use of SETs. Lithography cannot deliver the feature sizes of 1…3 nm required for RT operation. Therefore, there are currently intense studies to fulfill these requirements by self-organization processes. Convincing proofs of concept have been reported [see, e.g., 3] on room temperature operation of silicon-based SETs. However, the self-organization processes developed so far are not reliable enough for large-scale integration. The ion beam technique is a well-established technology in microelectronics used for doping and amorphization of semiconductors and even for ion beam synthesis of buried layers. The parameters of ion beam processing like ion flux, fluence and energy as well as the temperature and time of the subsequent thermal treatment are very well controllable.
Therefore, we searched for a self-organization process based on ion irradiation which overcomes the bottleneck of manufacturability of SETs working at room temperature. Thus, in the framework of an international project funded by the European Commission [4], we develop an ion-assisted, CMOS-compatible process [4] which will provide both (i) self-assembly of a single Si dot and (ii) its self-alignment with source and drain. Based on our knowledge of ion implantation [5,6] and irradiation [7] induced phase separation and Ostwald ripening processes as well as ion-assisted fabrication of non-volatile nanocluster memories [8], we concluded by computer simulations that phase separation of tiny, metastable SiOx volumes (<10³ nm³) will transiently lead to a single Si nanodot in SiO2 (see Fig. 2). The tiny, metastable SiOx volume is formed by ion beam mixing of a bulk Si/SiO2/a-Si layer stack. In order to get the very small SiOx volume necessary for single dot formation, two approaches are used: (i) point-like Ne+ irradiation for fundamental studies, and (ii) broad beam Si+ irradiation of nanopillars for the device fabrication (see Fig. 3). For both approaches, the predictive computer simulations use the recently developed program TRI3DYN [9] for the dynamical 3D ion beam mixing. TRI3DYN provides the initial conditions for the phase separation and coarsening processes simulated (see, e.g., Fig. 2) with the 3D kinetic Monte Carlo program 3DkMC [6]. First results of our studies with the Helium Ion Microscope are shown in Figs. 4 and 5. The ion beam mixing of the SiO2 layer as imaged by EFTEM agrees nicely with that predicted by TRI3DYN simulations. Using this mixing profile as input for 3DkMC simulations, a single Si nanocluster is formed (Fig. 4). Although it appears to be extremely difficult to image a single Si nanodot of 2…3 nm diameter embedded in SiO2 in a ~50 nm thick TEM lamella, Fig. 5 proves that after annealing such a single cluster can be formed. The next activities will be focused on the single Si nanodot fabrication in Si nanopillars and the optimization of this process for RT-SET fabrication. This work has been funded by the European Union's Horizon 2020 research and innovation program under grant agreement No 688072.
1. K. Maeda et al., ACS Nano (2012) 2798. 2. S. Ihara et al., Appl. Phys. Lett. 107 (2015) 13102. SET in SOI 3. V. Deshpande et al., Proc. of the IEDM12-Conf. (2012) 195. 4. Research Project IONS4SET funded by the European Commission. 5. M. Strobel et al., NIM B147 (1999) 343. 6. M. Strobel, K.-H. Heinig, W. Möller, Phys. Rev. B64 (2001) 245422. 7. K.H. Heinig, T. Müller, B. Schmidt, M. Strobel, W. Möller, Appl. Phys. A77 (2003) 17. 8. T. Mueller et al., Appl. Phys. Lett. 81 (2002) 3049; ibid 85 (2004) 2373. 9. W. Möller, NIM B322 (2014) 23.
Keywords: ion irradiation; self-assembly; Si nanocrystals; single electron transistor
Ion-Surface Interactions 2017, 21.-25.08.2017, Moscow, Russia

Hydrogen-induced defects in Ti and their thermal stability
Melikhova, O.; Čížek, J.; Hruška, P.; Lukac, F.; Knapp, J.; Havela, L.; Mašková, S.; Anwand, W.; Liedke, M. O.
Titanium readily absorbs hydrogen and undergoes a phase transition into the hydride phase (TiH2). In the hydride phase, Ti is able to absorb hydrogen concentrations as high as 1.4 wt.%. These properties make Ti and Ti-based alloys attractive for hydrogen storage applications. Hydrogen absorption in the titanium matrix may introduce open-volume defects since the volume of the TiH2 phase exceeds that of the titanium matrix.
Absorbed hydrogen may segregate at these defects, forming defect-hydrogen complexes. In the present work, positron annihilation spectroscopy was employed for the characterization of hydrogen-induced defects in titanium. Defects created by hydrogen loading from the gas phase were compared with those introduced by electrochemical hydrogen charging. In general, hydrogen loading introduces a high density of dislocations and vacancy clusters created by agglomeration of hydrogen-induced vacancies. The mean size of the vacancy clusters depends on the hydrogen absorption temperature. The thermal stability of hydrogen absorbed in titanium and the recovery of hydrogen-induced defects were studied by positron lifetime spectroscopy combined with in-situ X-ray diffraction and thermal desorption spectroscopy. Fig. 1 shows the temperature dependence of positron lifetimes and relative intensities of individual components for hydrogen gas loaded titanium. The decomposition of the TiH2 phase is accompanied by the introduction of additional vacancies agglomerating into vacancy clusters. Further annealing of the sample above 500 °C leads to recovery of dislocations.
Keywords: Ti; hydrogen; hydride phase; open volume defects; positron annihilation spectroscopy; positron lifetime
The International Workshop on Positron Studies and Defects 2017 (PSD-17), 03.-08.09.2017, Dresden, Germany

Defects in high entropy alloy HfNbTaTiZr prepared by spark plasma sintering
Lukac, F.; Dudr, M.; Cinert, J.; Vilemova, M.; Cizek, J.; Harcuba, P.; Vlasak, T.; Zyka, J.; Malek, J.; Liedke, M. O.
High entropy alloys exhibit various combinations of interesting physical properties due to the formation of a solid solution stabilized by high configurational entropy. The high entropy alloy HfNbTaTiZr exhibits a single-phase solid solution with BCC structure when prepared by arc melting [1]. Grain refinement achieved in cold-rolled samples after recrystallization remarkably enhanced the ductility of this alloy [2]. Mechanical alloying by milling followed by sintering is a common route for preparing fine-grained alloys from chemical elements with high melting temperatures. In addition, the spark plasma sintering (SPS) method with applied pressure serves as a unique tool of powder metallurgy thanks to fast heating rates and short exposure times to elevated temperatures. Therefore, the deformation energy introduced during mechanical alloying may be effectively consumed during the short sintering process and provides an additional parameter for grain refinement. The present work presents the characterization of HfNbTaTiZr alloy prepared by SPS. The microstructure of samples prepared by SPS was compared with that of as-cast ingots. The samples were characterized by X-ray diffraction and scanning electron microscopy. Positron annihilation spectroscopy was employed for the characterization of defects introduced by SPS and their thermal stability.
[1] O.N. Senkov, J.M. Scott, S.V. Senkova, D.B. Miracle, C.F. Woodward, Journal of Alloys and Compounds 509, 6043-6048 (2011). [2] O.N. Senkov, S.L. Semiatin, Journal of Alloys and Compounds 649, 1110-1123 (2015).
Keywords: High entropy alloys; HfNbTaTiZr; spark plasma sintering; Positron annihilation spectroscopy; X-ray diffraction; scanning electron microscopy

Slow positron beam spectroscopy study of PMMA nanocomposite films with ion-synthesized silver nanoparticles
Kavetskyy, T. S.; Iida, K.; Nagashima, Y.; Elsayed, M.; Liedke, M. O.; Srinivasan, N.; Wagner, A.; Krause-Rehberg, R.; Šauša, O.; Telbiz, G.; Stepanov, A. L.
Understanding how the size, shape, and aggregation state of silver nanoparticles (NPs) change after integration into a target matrix is critical to enhancing their performance in applications such as molecular diagnostics and photonic and biomedical devices, which take advantage of the novel optical properties of these nanomaterials. In particular, nanocomposites containing noble metal NPs dispersed in a polymer matrix by high-dose (> 10¹⁶ ions/cm²) implantation of low-energy ions (< 100 keV) can be used for the construction of plasmonic waveguides [1] and diffraction gratings [2]. Typically, the form and size of Ag NPs in optically transparent matrices are reflected in the appearance of a surface plasmon resonance band in the visible absorption spectra of the composite. However, the synthesis of Ag NPs by ion implantation in a transparent polymer matrix such as polymethylmethacrylate (PMMA) has been found [1] to be quite difficult and unusual. This problem can be addressed with a powerful technique for the characterization of thin films – positron annihilation spectroscopy (PAS) using a variable-energy positron beam (VEPAS) – which allows depth profiling from tens of nanometers up to several micrometers. This technique has emerged as a key experimental tool for understanding high-dose 40 keV boron-ion-implanted polymethylmethacrylate (B:PMMA) with carbon nanostructures [3] and polymer brushes loaded with Ag NPs [4]. Also, a first attempt to distinguish the effects of carbonization and of Ag NP formation in high-dose B:PMMA and Ag:PMMA nanocomposites was made in [5] using Doppler broadening slow positron beam spectroscopy (DB-SPBS). In the present work, the DB-SPBS technique was applied to further characterize the 30 keV Ag:PMMA nanocomposites fabricated by low-energy high-dose Ag-ion implantation. The depth profiles of the S(Ep) parameter in the near-surface region of the irradiated polymer were used to indirectly clarify the formation of Ag NPs in PMMA as a function of ion dose. A comparative analysis with the S(Ep) parameter trend in polymer brushes loaded with Ag NPs [4] shows that the density or mass of Ag NPs ('Ag filling') in Ag:PMMA increases as the ion dose grows. The results obtained are discussed in terms of the positronium formation fraction in the irradiated part of the polymer matrix and the model of carbon-shell Ag-core nanoparticles.
[1] A.L. Stepanov, Tech. Phys. 49, 143 (2004). [2] M.F. Galyautdinov et al., Tech. Phys. Lett. 42, 182 (2016). [3] T. Kavetskyy et al., J. Phys. Chem. B 118, 4194 (2014). [4] G. Panzarasa et al., Nanotechnology 27, 02LT03 (2016). [5] T. Kavetskyy et al., J. Phys.: Conf. Ser. 791, 012028 (2017).
Keywords: positron annihilation spectroscopy (PAS); variable-energy positron beam (VEPAS); PMMA nanocomposite films; Ag

Defects and porosity in zirconia-based nanomaterials: a study by slow-positron beam technique
Prochazka, I.; Cizek, J.; Melikhova, O.; Lukac, F.; Hruska, P.; Anwand, W.; Liedke, M. O.; Brauer, G.; Konstantinova, T. E.; Danilenko, I. A.
A variety of advantageous thermal, electrical and mechanical properties of zirconium dioxide (ZrO2, zirconia) make zirconia-based materials widely used in many industrial areas, in particular in the ceramics industry and other high-temperature applications. Doping of the ZrO2 host lattice by suitable metal cations is a prerequisite for stabilising the high-temperature cubic and tetragonal phases down to room temperature as well as for improving other functional properties.
The use of nanopowders as initial substances in manufacturing ZrO2-based nanoceramics by sintering leads to well-homogenised materials of low porosity. Due to an appreciable volume fraction of grain boundaries (GBs), pores and nanometer-sized open-volume defects associated with GBs become significant in nanopowders. Obviously, positrons as well as positronium (Ps) atoms become efficient probes of the microstructure evolution during the production of ZrO2-based functional nanomaterials by sintering. In the present contribution, an investigation of several zirconia-based nanopowders as well as of ceramics obtained by sintering these nanopowders will be reported. The nanopowders under study were doped with metal cations of various valencies (Mg2+, Y3+, Cr3+, Ce4+) and differed also in thermal treatment. Doppler broadening (DB) measurements using a slow-positron beam were conducted with positron energies E ranging from 0.03 eV to 35 keV, and the ordinary S and W shape parameters as well as the relative 3γ fractions were evaluated as functions of E. The Figure gives an example of measured S(E) curves, illustrating the sintering-induced disappearance of open-volume defects and of para-Ps formation as well as grain growth. VEPFIT models were fitted to the measured S(E) and W(E) curves. The DB experiments were supplemented with conventional positron lifetime, X-ray diffraction (XRD) and mass-density (MD) measurements. The nature and depth distributions of open-volume defects will be discussed on the basis of the slow-positron beam results correlated with the data on positron lifetimes, XRD and MD.
Keywords: positron annihilation spectroscopy; zirconia; nanomaterials; nanopowders

Slow positron annihilation studies of Pd-Mg multilayers
Hruška, P.; Čížek, J.; Bulíř, J.; Lukáč, F.; Anwand, W.; Liedke, M. O.; Fekete, L.; Melikhova, O.; Lančok, J.
Palladium is well known for its excellent hydrogen absorption kinetics. The gravimetric hydrogen absorption capacity of Pd is, however, only 0.93 wt.%. Magnesium exhibits a high hydrogen absorption capacity of up to 7.6 wt.%; however, its hydrogen absorption kinetics is slow. The aim of this work was to create thin Pd-Mg multilayered films combining the favourable hydrogen absorption properties of both elements. Pd-Mg multilayers were deposited by RF magnetron sputtering on fused silica substrates coated with a 100 nm thick Pd wetting layer. The multilayers consist of alternating Pd and Mg layers (3, 12 and 60) of the same thickness. Three types of Pd-Mg multilayers were compared: (i) as-deposited samples, (ii) samples loaded with hydrogen gas at room temperature and a H2 pressure of 4000 Pa for 2 h, and (iii) samples annealed up to 450°C under Ar atmosphere. The defect structure of the Pd-Mg multilayers was characterized using variable-energy positron annihilation spectroscopy. Doppler broadening of the annihilation photopeak was analyzed using the S and W line-shape parameters, and the measured S(E) curves were fitted using the VEPFIT code. The development of the structure during the annealing of the films was monitored by in-situ X-ray diffraction. Atomic force microscopy was employed for the study of the surface morphology. All films were characterized by a nanocrystalline structure with a high density of grain boundaries with open-volume defects capable of positron trapping. The density of grain boundaries is determined by the mean grain size, which increases with increasing thickness of a single-phase layer.
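Several of the abstracts above quantify Doppler broadening through the S and W line-shape parameters, which are simply ratios of counts in fixed energy windows of the 511 keV annihilation peak to the total peak area. A minimal sketch follows; the window limits are typical illustrative choices, not the ones used in these studies, and background subtraction is omitted for brevity.

```python
# Sketch: S and W line-shape parameters of the Doppler-broadened 511 keV peak.
# S = central-window counts / total peak counts (sensitive to valence electrons)
# W = wing-window counts   / total peak counts (sensitive to core electrons)
import numpy as np

def s_w_parameters(energy_keV, counts,
                   s_half_width=0.93,       # |E - 511| < 0.93 keV   (assumed window)
                   w_window=(2.7, 7.6)):    # 2.7 < |E - 511| < 7.6  (assumed window)
    energy_keV = np.asarray(energy_keV, dtype=float)
    counts = np.asarray(counts, dtype=float)
    de = np.abs(energy_keV - 511.0)
    total = counts.sum()
    s = counts[de < s_half_width].sum() / total
    w = counts[(de > w_window[0]) & (de < w_window[1])].sum() / total
    return s, w

# Toy spectrum (Gaussian peak plus flat background) just to exercise the function.
e = np.linspace(501.0, 521.0, 2001)
spectrum = 1e5 * np.exp(-0.5 * ((e - 511.0) / 1.2) ** 2) + 50.0
S, W = s_w_parameters(e, spectrum)
print(f"S = {S:.3f}, W = {W:.3f}")
```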
Hydrogen loading led to buckling of the film and introduced additional defects into the film. Annealing of the multilayers leads to diffusion of Mg atoms into the Pd layers, and precipitates of a Mg-Pd phase are formed.
Keywords: PAS; slow positron beam; positron annihilation spectroscopy; Pd; Mg; multilayers

Magnetic phase transitions in ns-laser irradiated FeAl systems: the role of open volume defects
Liedke, M. O.; Bali, R.; Hübner, R.; Gradauskaite, E.; Ehrler, J.; Wang, M.; Potzger, K.; Zhou, S.; Wagner, A.
Fe60Al40 alloys exhibit disorder-dependent magnetic phase transitions (MPT); e.g., a ferromagnetic disordered A2-phase turns into a paramagnetic ordered B2-phase [1]. The ordered B2-phase, formed by annealing up to 500°C in vacuum, can be reverted to the disordered A2-phase via ion irradiation [2]. It has been shown that the physical origin of the MPT is related to the so-called anti-site disorder (ASD), i.e., variations in the number of Fe-Fe nearest neighbors due to disordering of the system [3]. However, variations of the lattice parameter, secondary phases, and changes in the concentration and size of open volume defects may play an important role as well. Here, an excimer UV ns-laser has been utilized to induce defects and to examine the role of ASD and defects on the magnetic properties of Fe60Al40. Samples of 40 nm thick Fe60Al40 films with different initial order levels were exposed to a range of laser fluences: (i) Ne+-irradiated, fully disordered alloys (A2-Fe60Al40), (ii) vacuum-annealed, ordered alloys (B2-Fe60Al40), and (iii) as-grown, semi-disordered alloys (A2/B2-Fe60Al40). It is seen that laser pulses with fluences below 100 mJ·cm⁻² cause subtle changes to the magnetization depending on the Fe60Al40 initial state, whereas for fluences above 150 mJ·cm⁻² a strong increase in ferromagnetism is observed for all Fe60Al40 initial states. The laser-irradiated samples were probed with positron annihilation spectroscopy (PAS) to analyze the existence of vacancies and/or phase separation. Although the low-fluence region shows nearly no variation in vacancy defect concentration, a slight increase in the number of Al atoms around defect sites is found. For the high-fluence regime, it is seen that a large variation in vacancy defects occurs, followed by pronounced phase separation. Structural analysis of the phase-separated films shows strong migration of Al atoms, leaving behind Fe-enriched regions, consistent with the PAS spectra.
[1] M. O. Liedke et al., J. Appl. Phys. 117, 163908 (2015) [2] J. Fassbender et al., Phys. Rev. B 77, 174430 (2008) [3] R. Bali et al., Nano Lett. 14, 435 (2014)
Keywords: positron; positron annihilation spectroscopy; MOKE; ns-laser; magnetic phase transition; order; disorder

Reversible Tuning of Ferromagnetism and Resistive Switching in ZnO/Cu Thin Films
Younas, M.; Xu, C.; Arshad, M.; Ho, L.; Zhou, S.; Azad, F.; Akhtar, M.; Su, S.; Azeem, W.; Ling, F.
Systematic magnetic, electronic, and electrical studies on the Cu0.04Zn0.96O/Ga0.01Zn0.99O cell structure grown on (001) sapphire by the pulsed laser deposition technique show that multivalent Cu (CuM+) ions modulate the magnetic and resistive states of the cells. The magnetic moment is found to be reduced by ∼30% during the high resistance state (HRS) to low resistance state (LRS) switching.
X-ray photoelectron spectroscopy results reveal an increase of the Cu+/Cu2+ oxidation state ratio (which has been determined by the relative positions of the Fermi level and the Cu acceptor level) during the HRS to LRS transition. This decreases the effective spin-polarized Cu2+−Vö−Cu+ channels and thus the magnetic moment. A conduction mechanism involving the formation of conductive filaments from the coupling of the CuM+ ions and Vö has been suggested.
ACS Omega 2(2017), 8810-8817 DOI: 10.1021/acsomega.7b01192

Symmetries and localization properties of defect modes in metamaterial magnonic superlattices
Gallardo, R. A.; Schneider, T.; Roldán-Molina, A.; Langer, M.; Núñez, A. S.; Lenz, K.; Lindner, J.; Landeros, P.
Symmetries and localization properties of defect modes of a one-dimensional bi-component magnonic superlattice are theoretically studied. The magnonic superlattice can be seen as a periodic array of nanostripes, where stripes with a different width, termed defect stripes, are periodically introduced. By controlling the geometry of the defect stripes, a transition from dispersive to practically flat spin-wave defect modes can be observed inside the magnonic band gaps. It is shown that the spin-wave profile of the defect modes can be either symmetric or asymmetric, depending on the geometry of the defect. Due to the localization peculiarities of the defect modes, a particular magnonic superlattice is proposed, wherein the excitation of either symmetric or antisymmetric flat modes is enabled at the same time. Also, it is demonstrated that the relative frequency position of the asymmetric mode inside the band gap does not significantly change with the application of an external field, while the symmetric modes move to the edges of the frequency band gaps. The results are complemented by numerical simulations, where an excellent agreement is observed between both methods. The proposed theory allows exploring different ways to control the dynamic properties of the defect modes in metamaterial magnonic superlattices, which can be useful for applications in multifunctional microwave devices operating over a broad frequency range.
Keywords: ferromagnetic resonance; spin waves; magnetization dynamics; magnonic crystals; plane wave method; dispersion; superlattices; Damon-Eshbach

Towards Substitutionally-Inert Ru(II) Complexes as Photoactivatable Anticancer Agents
Joshi, T.; Pierroz, V.; Ferrari, S.; Spiccia, L.; Gasser, G.
The severe side effects encountered with platinum-based anticancer agents have driven the pursuit of new metal-based chemotherapeutics. The best examples of these are ruthenium compounds, which have shown promising potential to circumvent these side effects on account of their broad antiproliferative profile and novel mechanisms of action against cancer cells. Our work aims at the development of substitutionally inert Ru(II)-tris(diimine) complexes as new anticancer agents. Here we present the anticancer action and cytotoxicity mechanism of [Ru(dppz)2(CppH)](PF6)2 (1) (CppH = 2-(2′-pyridyl)pyrimidine-4-carboxylic acid; dppz = dipyrido[3,2-a:2′,3′-c]phenazine), a substitutionally-inert polypyridyl Ru(II) complex. Complex 1 induces inhibitory effects comparable to those of cisplatin, targets mitochondria and, by impairing the mitochondrial membrane potential, eventually leads to cell death by apoptosis [1].
Structure-activity correlation studies identify the key functional role of the carboxylate group on the CppH ligand and of the bis(dppz) framework in the cytotoxic activity of 1, with any lipophilicity-, charge-, and size-based structural and functional modifications resulting in its decreased activity [2]. Complementing these findings, we recently illustrated the first example of a substitutionally-inert metal complex-based prodrug candidate which can efficiently respond to activation by UV-A light (2.58 J cm-2) to display cytotoxicity "on demand" against cervical (HeLa) and bone cancer (U2OS) cells [3]. The reported findings represent a major advancement towards achieving site-directed, spatially and temporally controlled anticancer activity from such metallo-cytotoxics.
[1] V. Pierroz, T. Joshi, A. Leonidova, C. Mari, J. Schur, I. Ott, L. Spiccia, S. Ferrari, G. Gasser, J. Am. Chem. Soc. 134 (2012) 20376−20387. [2] T. Joshi, V. Pierroz, S. Ferrari, G. Gasser, ChemMedChem 9 (2014) 1419–1427. [3] T. Joshi, V. Pierroz, C. Mari, L. Gemperle, S. Ferrari, G. Gasser, Angew. Chem. Int. Ed. 53 (2014) 2960–2963.
II International Caparica Congress on Translational Chemistry, 04.-07.12.2017, Caparica, Lisbon, Portugal

Neomycin B–cyclen conjugates and their Zn(II) complexes as RNA-binding agents
Joshi, T.; Kong, B.; Tor, Y.; Graham, B.; Spiccia, L.
Aminoglycosides are one of the most well-studied classes of naturally occurring antibacterial agents [1]. Their antibiotic activity derives from their selective binding to the bacterial ribosomal RNA A-site, which ultimately leads to disruption of protein synthesis. Unfortunately, despite their promising antibacterial profile, the widespread use of aminoglycosides as antibiotics has been hampered by their adverse side effects and the emergence of bacterial resistance. Herein, we present the synthesis of a series of new neomycin B conjugates featuring a polyazamacrocycle, 1,4,7,10-tetraazacyclododecane (cyclen), appended to the D-ribose ring, together with an examination of their A-site binding properties, as well as those of the corresponding Zn(II) complexes (C1–C3 and Zn(II)-C1–Zn(II)-C3). Since the high affinity of aminoglycosides for RNA is associated with the formation of complementary electrostatic interactions between protonated amine groups on the aminoglycosides and the RNA, the tethered cyclen macrocycle enhances affinity for the A-site RNA motif due to the introduction of additional ionisable amino groups [2,3]. Furthermore, in agreement with previous findings that Zn(II)-cyclen complexes form reasonably strong interactions with phosphate groups as well as the deprotonated imide groups of the nucleobases uracil and thymine, complexation of Zn(II) by cyclen in the neomycin B conjugates serves to enhance the affinity further still by allowing the conjugates to form coordination bonds with the RNA target [2,3]. The conjugates are worthy of further investigation as potential new antibiotic agents.
References: [1] Y. Tor, ChemBioChem 2003, 4, 998. [2] B. Kong, T. Joshi et al., J. Inorg. Biochem. 2016, 162, 334. [3] T. Joshi et al., Acc. Chem. Res. 2015, 48, 2366.
6th Asian Conference on Coordination Chemistry, 24.-28.07.2017, Melbourne, Australia

Spectroscopic Studies on Photoinduced Reactions of the Anticancer Prodrug, trans,trans,trans-[Pt(N3)2(OH)2(py)2]
Vernooij, R. R.; Joshi, T.; Horbury, M. D.; Graham, B.; Izgorodina, E. I.; Stavros, V. G.; Sadler, P. J.; Spiccia, L.; Wood, B. R.
The photodecomposition mechanism of trans,trans,trans-[Pt(N3)2(OH)2(py)2] (1, py = pyridine), an anticancer prodrug candidate, was probed using complementary Attenuated Total Reflection Fourier Transform Infrared (ATR-FTIR), transient electronic absorption and UV-Vis spectroscopy. Data fitting using Principal Component Analysis (PCA) and multivariate curve resolution alternating least squares suggests the formation of a trans-[Pt(N3)(py)2(OH/H2O)] intermediate and trans-[Pt(py)2(OH/H2O)2] as the final product upon 420 nm irradiation of 1 in water. Rapid disappearance of the hydroxido ligand stretching vibration upon irradiation is correlated with a -10 cm⁻¹ shift of the anti-symmetric azido vibration, suggesting a possible second intermediate. Experimental proof of subsequent dissociation of azido ligands from platinum is presented, where at least one hydroxyl radical is formed in the reduction of Pt(IV) to Pt(II). Additionally, the photoinduced reaction of 1 with the nucleotide 5'-guanosine monophosphate (5'-GMP) was comprehensively studied, and the identity of key photoproducts was assigned with the help of ATR-FTIR spectroscopy, mass spectrometry and density functional theory calculations. The identification of marker bands for some of these photoproducts (e.g., trans-[Pt(N3)(py)2(5'-GMP)] and trans-[Pt(py)2(5'-GMP)2]) will aid elucidation of the chemical and biological mechanism of anticancer action of 1. In general, these studies demonstrate the potential of vibrational spectroscopic techniques as promising tools for studying such metal complexes.
Keywords: vibrational spectroscopy; Attenuated Total Reflection (ATR); Pt(IV) prodrugs; mechanism of action; anticancer
Chemistry - A European Journal 24(2018)22, 5790-5803 DOI: 10.1002/chem.201705349

A new data processing approach to study particle motion using ultrafast X-ray tomography scanner: case study of gravitational mass flow
Waktola, S.; Bieberle, A.; Barthel, F.; Bieberle, M.; Hampel, U.; Grudzień, K.; Babout, L.
In many industrial processes, granular materials are required to flow under gravity through silos of various shapes, usually through an outlet at the bottom. There are several interrelated parameters which affect the flow, such as internal friction, bulk and packing density, hopper geometry, and material type. Due to the low spatial resolution of electrical capacitance tomography and the scanning speed limitations of standard X-ray CT systems, it is extremely challenging to effectively measure the flow velocity and possible centrifugal effects of granular material flows. However, the ultrafast electron beam X-ray CT scanner (ROFEX) opens new avenues for granular flow investigation due to its very high temporal resolution. This paper aims to track particle movements and evaluate the local grain velocity during the silo discharge process in the case of mass flow. The study considered the use of Seramis material, which, due to its porous nature, can also serve as tracer particles after impregnation. The presented image processing and analysis approach allows not only satisfactory measurement of individual particle velocities but also tracking of their movements.
Keywords: mass flow; ultrafast X-ray CT; particle tracking; matrix of inertia
Experiments in Fluids 69(2018), 59-69

Production of 51Cr by proton irradiation of natV and purification by precipitation and ion exchange chromatography
Mansel, A.; Franke, K.
European demand for chromium has grown dramatically, leading to the need for a detailed understanding of the recycling of steel sludges and of separation methods. To simulate these processes, we will use the radiotracer technique. 51Cr (T1/2 = 27.7 d) was chosen as the radionuclide. The isotope can be produced by the nuclear reaction natV(p,n)51Cr at a cyclotron. We used our recently installed cyclotron Cyclone® 18/9 (IBA) for the irradiation of natV (99.75% 51V). The vanadium foil was put in an aluminium holder with a diameter of 10 mm and a depth of 100 µm. The target was covered by a 100 µm thick aluminium foil. The irradiation was done with a beam of 16 MeV protons at a current of 10 µA for 4 hours. For the separation of 51Cr, we established a multistage treatment. After cooling for 20 hours, the vanadium foil was dissolved with 2 ml conc. nitric acid. After addition of 20 mg iron(III) chloride, the hydroxide was subjected to a threefold cycle of precipitation with ammonia and dissolution with nitric acid. Vanadium(V) is soluble under these conditions. The separation of the radionuclide 51Cr and iron(III) was performed by ion exchange chromatography with AG 1-X8 (BIORAD) in conc. HCl. The 51Cr solution was loaded onto the resin, and the resin was washed six times with 2 ml conc. hydrochloric acid to remove the iron(III). The combined 51Cr solutions were evaporated to dryness and the residue was dissolved in 0.01 M sulfuric acid. The detection of 51Cr was done by gamma-counting (320 keV; 9.91%). The radiochemical yield was 66% at a production rate of 0.575 MBq/µAh.
Keywords: Radiochromium; Vanadium target; Cyclotron; Proton induced nuclear reaction; Radiochemical separation
18th Radiochemical Conference (RadChem2018), 13.-18.05.2018, Marianske Lazne (Marienbad), Czech Republic

Melanoma brain metastases: Local therapies, targeted therapies, immune checkpoint inhibitors and their combinations – chances and challenges
Kuske, M.; Rauschenberg, R.; Garzarolli, M.; Meredyth-Stewart, M.; Beissert, S.; Troost, E.; Glitza, I.; Meier, F.
Recent phase 2 trials have shown that BRAF/MEK inhibitors and immune checkpoint inhibitors are active in patients with melanoma brain metastases (MBM), reporting intracranial disease control rates of 50-75%. Furthermore, retrospective analyses suggest that combining stereotactic radiosurgery with immune checkpoint inhibitors or BRAF/MEK inhibitors prolongs overall survival. These data stress the need for inter- and multidisciplinary cooperation that takes into account the individual prognostic factors in order to establish the best treatment for each patient. Although the management of MBM has dramatically improved, a substantial number of patients still progress and die from brain metastases. Therefore, there is an urgent need for prospective studies in patients with MBM that focus on treatment combinations and sequences, new treatment strategies, and biomarkers of treatment response. Moreover, further research is needed to decipher brain-specific mechanisms of therapy resistance.
American Journal of Clinical Dermatology 19(2018)4, 529-541

Der Beitrag geowissenschaftlicher Forschung zur Erkundung von mineralischen Rohstoffen in Deutschland
Gutzmer, J.; Markl, G.
Invited keynote presentation for the 3rd BGR-Rohstoffkonferenz in Hannover.
3. BGR-Rohstoffkonferenz, 29.-30.11.2017, Hannover, Germany

HIM-Time-of-Flight-SIMS at HZDR
Klingner, N.; Heller, R.; Hlawacek, G.; Möller, W.; Facsko, S.
HIM-TOF-SIMS at HZDR
PicoFIB Meeting, 31.01.2018, Dresden, Germany

Clinical application of dual-energy computed tomography improves stopping-power prediction
Wohlfahrt, P.; Möhler, C.; Greilich, S.; Richter, C.
Purpose/Objective: Assessment of the accuracy and robustness of treatment planning on dual-energy CT (DECT) in a multi-step validation and clinical implementation scheme (Fig. 1, 2). Material/Methods: To ensure reliable translation of DECT into routine clinical application, scan settings and stopping-power prediction methods were optimized and validated using 13 different animal tissues and an anthropomorphic ground-truth phantom. The clinical relevance of DECT-based stopping-power prediction was evaluated on dual-spiral DECT scans of 102 brain-, 25 prostate- and 3 lung-tumor patients treated with protons. DECT-derived voxelwise correlations of CT number and stopping-power ratio (SPR) were used for adapting the clinically applied CT-number-to-SPR conversion (HLUT) and quantifying intra- and inter-patient variability. The accuracy of DECT-based stopping-power prediction was within 0.3% stopping power and 1 mm range uncertainty of the validation measurements. Clinically relevant mean range shifts (±1SD) of 1.2(±0.7)% for brain-, 1.7(±0.5)% for prostate- and 2.2(±1.2)% for lung-tumor patients were obtained between dose calculations using HLUT or DECT-derived SPR. These deviations were significantly reduced (p<<0.001, two-sample t-test) to below 0.3% by HLUT refinement based on patient-specific DECT information. Still, the remaining large intra-patient soft tissue diversity of approx. 6% (95% CI) and the age-dependent inter-patient bone variability of 5% cannot be considered by any HLUT-based range prediction. Additional tissue information provided by DECT allows for accurate stopping-power prediction and incorporation of patients' tissue diversity in treatment planning. These advantages can contribute to reducing CT-related range uncertainty and have been gradually translated into our clinical routine: (1) DECT-derived pseudo-monoenergetic CT dataset with generic HLUT, (2) DECT-based HLUT adaptation, and soon (3) patient-specific DECT-based stopping-power prediction.
Keywords: proton therapy; range uncertainty; dual-energy CT
57th Annual Particle Therapy Co-Operative Group (PTCOG) Meeting, 24.-26.05.2018, Cincinnati, Ohio, USA

Improved performance of laser wakefield acceleration by tailored self-truncated ionization injection
Irman, A.; Couperus, J. P.; Debus, A.; Köhler, A.; Krämer, J. M.; Pausch, R.; Zarini, O.; Schramm, U.
We report on tailoring ionization-induced injection in laser wakefield acceleration so that the electron injection process is self-truncating, following the evolution of the plasma bubble. Robust generation of high-quality electron beams with shot-to-shot fluctuations of the beam parameters better than 10% is presented in detail. As a novelty, the scheme was found to enable well-controlled yet simple tuning of the injected charge while preserving acceleration conditions and beam quality. Quasi-monoenergetic electron beams at several 100 MeV energy and 15% relative energy spread were routinely demonstrated, with the total charge of the monoenergetic feature reaching 0.5 nC. Finally, these unique beam parameters, suggesting unprecedented peak currents of several tens of kA, are systematically related to published data on alternative injection schemes.
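As a rough consistency check of the figures quoted in the last abstract, a monoenergetic charge of about 0.5 nC compressed into a few-femtosecond bunch does correspond to peak currents of several tens of kA. The bunch durations used in the sketch below are assumed placeholders, not values reported above.

```python
# Sketch: peak current of a Gaussian electron bunch,
# I_peak = Q / (sqrt(2*pi) * sigma_t).
import math

def peak_current_kA(charge_nC: float, sigma_t_fs: float) -> float:
    q = charge_nC * 1e-9          # C
    sigma_t = sigma_t_fs * 1e-15  # s (rms bunch duration)
    return q / (math.sqrt(2.0 * math.pi) * sigma_t) / 1e3   # kA

for sigma_fs in (3.0, 5.0, 10.0):   # assumed rms bunch durations in fs
    print(f"Q = 0.5 nC, sigma_t = {sigma_fs:4.1f} fs  ->  "
          f"I_peak ~ {peak_current_kA(0.5, sigma_fs):.0f} kA")
```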
Keywords: Self-Truncation Ionization Injection; beam loading; laser wakefield acceleration
Plasma Physics and Controlled Fusion 60(2018), 044015 DOI: 10.1088/1361-6587/aaaef1

Rhenium recovery from diluted solutions by solvent extraction
A total of 230,000 tons of the so-called "Theisenschlamm", a waste material of the former copper shale processing in the Mansfeld region, was deposited between the years 1978 and 1990. Besides about 20 wt.% of zinc and minor amounts of lead, copper and tin, this material also contains valuable strategic elements such as rhenium and germanium. Nowadays, rhenium is used as an important alloying element in nickel-base superalloys or in catalysts. The r4-project "Theisenschlamm" aims at recovering the valuable metal content by bioleaching, followed by element-specific separation methods. An effective method for the selective separation and concentration of metals is solvent extraction, where an organic phase is used to extract the elements of interest from an aqueous phase. Since the metal concentrations in the bioleaching solution are low, however, the processing is challenging. Nevertheless, there are multiple parameters allowing an optimization of the solvent extraction process. The findings on the selective enrichment of rhenium from synthetic (bioleaching) solutions will be presented in this talk.
67. Berg- und Hüttenmännischer Tag, 08.-10.06.2016, Freiberg, Germany

Recovery of Rhenium from Low Concentrated Bioleaching Solutions by Solvent Extraction
The r4-project "Theisenschlamm" focuses on the recovery of elements of strategic economic importance (such as rhenium or molybdenum). The first step within the research approach is bioleaching of the Theisenschlamm, which is a waste material of the former copper shale processing in the Mansfeld region (Germany). As a next process step, the project partners investigate different element-selective separation methods to recover the valuable elements from the bioleaching solution. An efficient rhenium recovery from synthetic (bioleaching) solutions is achieved using solvent extraction with tertiary amines. It could be shown that rhenium can be enriched in the organic phase and that a good selectivity over zinc, copper, cobalt, germanium and iron(III) is obtained.
"24 Stunden für Ressourceneffizienz", Ressourceneffizienz-Kongress für Nachwuchsforscherinnen und Nachwuchsforscher, 14.-15.02.2017, Pforzheim, Germany

Atomistic study of the hardening of ferritic iron by Ni-Cr decorated dislocation loops
Bonny, G.; Bakaev, A.; Terentyev, D.; Zhurkin, E.; Posselt, M.
The radiation defects causing hardening in reactor structural steels comprise several components whose exact nature is not yet clearly determined. While the hardening is generally attributed to dislocation loops, voids and secondary phases (radiation-induced precipitates), recent advanced experimental and computational studies point to the importance of solute-rich clusters (SRCs). Depending on the exact composition of the steel, SRCs may contain Mn, Ni and Cu (e.g. in reactor pressure vessel steels) or Ni, Cr, Si, Mn (e.g. in high-chromium steels for generation IV and fusion applications). One of the hypotheses currently invoked to explain their formation is the process of radiation-induced diffusion and segregation of these elements to small dislocation loops (heterogeneous nucleation), so that the distinction between SRCs and loops becomes somewhat blurred.
In this work, we perform an atomistic study to investigate the enrichment of loops by Ni and Cr solutes and their interaction with an edge dislocation. The dislocation loops decorated with Ni and Cr solutes are obtained by Monte Carlo simulations, while the effect of solute segregation on the loop's strength and interaction mechanism is then addressed by large-scale molecular dynamics simulations. The synergy of the Cr-Ni interaction and their competition to occupy positions in the dislocation loop core are specifically clarified.
Keywords: Iron; Ferritic steel; Precipitation; Dislocation; Molecular dynamics
Journal of Nuclear Materials 498(2018), 430-437

Causative treatment of acid aspiration induced acute lung injury – recent trends from animal experiments and critical perspective
Gramatté, J.; Pietzsch, J.; Bergmann, R.; Richter, T.
Aspiration of low-pH gastric fluid leads to an initial pneumonitis, which may become complicated by subsequent pneumonia or acute respiratory distress syndrome. Current treatment is at best supportive, but there is growing experimental evidence of the significant contribution of both neutrophils and platelets to the development of this inflammatory pulmonary reaction, a condition that can be attenuated by several medicinal products. This review aims to summarize novel findings in experimental models on pathomechanisms after an acid-aspiration event. Given the clinical relevance, specific emphasis is put on deduced potential experimental therapeutic approaches, which make use of the characteristic alteration of the microcirculation in the injured lung.
Keywords: Acute respiratory distress syndrome; critical care medicine; pneumonitis; pulmonary inflammation; pulmonary blood flow; targeted anti-inflammatory therapies
Clinical Hemorheology and Microcirculation 69(2018), 187-195

The Transition from a Thin Film to a Full Magnonic Crystal and the Role of the Demagnetizing Field
Langer, M.; Röder, F.; Gallardo, R. A.; Schneider, T.; Stienen, S.; Gatel, C.; Hübner, R.; Lenz, K.; Lindner, J.; Landeros, P.; Fassbender, J.
The transition from a film to a full magnonic crystal is studied by sequentially ion-milling a 40 nm Ni80Fe20 film. The spin-wave resonances of each stage are detected by ferromagnetic resonance for both in-plane field main axes. Theoretical calculations and micromagnetic simulations yield the individual mode profiles, which are analyzed in order to track changes of the mode character. The latter is strongly linked to the evolution of the internal demagnetizing field. Its role is further studied by electron holography measurements of a hybrid magnonic crystal with a 10 nm deep surface modulation. The complex effects of mode coupling, mode localization and anisotropy-like contributions by the internal field are unraveled. Simple transition rules from the n-th film mode to the m-th mode of the full magnonic crystal are formulated.
Keywords: Ferromagnetic resonance; magnonic crystals; demagnetizing fields; thin films; spin waves
81. Frühjahrstagung der Sektion Kondensierte Materie der DPG, 19.-24.03.2017, Dresden, Germany

The Role of the Demagnetizing Fields in the Transition from Thin Films to Magnonic Crystals
Lenz, K.; Langer, M.; Röder, F.; Gallardo, R. A.; Schneider, T.; Stienen, S.; Gatel, C.; Hübner, R.; Lindner, J.; Landeros, P.; Fassbender, J.
The transition from a film to a full magnonic crystal is studied by sequentially ion-milling a periodic stripe pattern into a 40 nm thick Ni80Fe20 film.
The spin-wave resonances of each milling stage are detected by ferromagnetic resonance for both in-plane main field axes, i.e. parallel and perpendicular to the stripe pattern. Theoretical calculations and micromagnetic simulations yield the individual mode profiles, which are analyzed in order to track changes of the mode character. The latter is strongly linked to the evolution of the internal demagnetizing field. Its role is further studied and imaged by electron holography measurements on a hybrid magnonic crystal, which is made with a 10 nm deep surface modulation. The complex effects of mode coupling, mode localization, and anisotropy-like contributions by the internal field are unraveled. Simple transition rules from the $n$-th film mode to the $m$-th mode of the full magnonic crystal are formulated. This work has been supported by DFG grant KL2443/5-1. Keywords: Ferromagnetic resonance; magnonic crystals; demagnetizing fields Maternal immune activation results in complex microglial transcriptome signature in the adult offspring that is reversed by minocycline treatment Mattei, D.; Ivanov, A.; Ferrai, C.; Jordan, P.; Guneykaya, D.; Buonfiglioli, A.; Schaafsma, W.; Przanowski, P.; Deuther-Conrad, W.; Brust, P.; Hesse, S.; Patt, M.; Sabri, O.; Ross, T. L.; Eggen, B. J. L.; Bodecke, E. W. G. M.; Kaminska, B.; Beule, D.; Pombo, A.; Kettenmann, H.; Wolf, S. A. Maternal immune activation (MIA) during pregnancy has been linked to an increased risk of developing psychiatric pathologies in later life. This link may be bridged by a defective microglial phenotype in the offspring induced by MIA, as microglia have key roles in the development and maintenance of neuronal signaling in the central nervous system. The beneficial effects of the immunomodulatory treatment with minocycline on schizophrenic patients are consistent with this hypothesis. Using the MIA mouse model, we found an altered microglial transcriptome and phagocytic function in the adult offspring accompanied by behavioral abnormalities. The changes in microglial phagocytosis on a functional and transcriptional level were similar to those observed in a mouse model of Alzheimer's disease, hinting at a related microglial phenotype in neurodegenerative and psychiatric disorders. Minocycline treatment of adult MIA offspring completely reversed the transcriptional, functional and behavioral deficits, highlighting the potential benefits of therapeutic targeting of microglia in psychiatric disorders. Translational Psychiatry 7(2017)e1120, 1-13 DOI: 10.1038/tp.2017.80 Fulltext from www.nature.com Prof. Markus Reuter, Director at the Helmholtz Institute for Resource Technology in Freiberg in Germany, was awarded the degree Doctor of Engineering (DEng), honoris causa, for his outstanding contributions to the science and technology of the production and recycling of metals, as well as to the integration of academic research and practice. His work on recycling, design for recycling, and resource efficiency has contributed towards the creation of processes and tools to develop a sustainable society. Awarding of Degrees, Diplomas and Certificates (including Doctoral Degrees), 04.-08.12.2017, Stellenbosch, Südafrika Digitizing the Circular Economy Metallurgy is a key enabler of a circular economy (CE); its digitization is the metallurgical Internet of Things (m-IoT). In short: Metallurgy is at the heart of a CE, as metals all have strong intrinsic recycling potentials.
Process metallurgy, as a key enabler for a CE, will help much to deliver its goals. The first-principles models of process engineering help quantify the resource efficiency (RE) of the CE system, connecting all stakeholders via digitization. This provides well-argued and first-principles environmental information to empower a tax paying consumer society, policy, legislators, and environmentalists. It provides the details of capital expenditure and operational expenditure estimates. Through this path, the opportunities and limits of a CE, recycling, and its technology can be estimated. The true boundaries of sustainability can be determined in addition to the techno-economic evaluation of RE. The integration of metallurgical reactor technology and systems digitally, not only on one site but linking different sites globally via hardware, is the basis for describing CE systems as dynamic feedback control loops, i.e., the m-IoT. It is the linkage of the global carrier metallurgical processing system infrastructure that maximizes the recovery of all minor and technology elements in its associated refining metallurgical infrastructure. This course will illustrate some of these concepts with hands-on training. Keywords: circular economy; recycling Digitizing the Circular Economy / Summer school, 17.-20.07.2017, Leuven, Belgien Diffuse interface model to simulate the rise of a fluid droplet across a cloud of particles Lecrivain, G.; Kotani, Y.; Yamamoto, R.; Hampel, U.; Taniguchi, T. A large variety of industrial and natural systems involve the adsorption of solid particles to the fluidic interface of droplets in motion. A diffuse interface model is here suggested to directly simulate the three-dimensional dynamics of a fluid droplet rising across a cloud of large particles. In this three-phase model the two solid-fluid boundaries and the fluidic boundary are replaced with smoothly spreading interfaces. A significant advantage of the method lies in the fact that the capillary effects, the three-phase flow hydrodynamics, and the inter-particle collisions are all resolved. We first report important numerical limitations associated with the inter-particle collisions in diffuse interface models. In a second stage the effect of the particle concentration on the terminal velocity of a rising fluid droplet is investigated. It is found that, in a quiescent environment, the terminal velocity of the rising the fluid droplet decreases exponentially with the particle concentration. This exponential decay is also confirmed by a simple rheological model. Keywords: Diffuse interface model; rising droplet; particles at fluidic interface; direct numerical simulation; three phase flows Physical Review Fluids 3(2018)9, 094002 DOI: 10.1103/PhysRevFluids.3.094002 Original PDF 1,2 MB Secondary publication Attachment of non-spherical particles to the fluidic surface – Experiment and direct numerical simulations Lecrivain, G.; Eckert, K.; Hampel, U.; Yamamoto, R.; Taniguchi, T. The attachment of colloidal particles to the fluidic surface of immersed fluid droplets is central to a wide variety of industrial applications, among which stand out the recovery of minerals by gas bubbles, a process known as flotation. The flotation process involves the attachment of hydrophobised colloidal particles to the surface of rising air bubbles, while the commercially valueless hydrophilic material settles down the cell. 
Experimental and numerical works dealing with the attachment of non-spherical particles to a fluidic interface are here presented. Using an optical microbubble sensor the various microprocesses associated with the colloidal attachment of elongated fibers are first investigated. In a second stage direct numerical simulations are used to predict the dynamics of such particles at a fluidic interface. Unlike spherical particles, it is found that plate-like particles attach more rapidly to a fluidic interface and are subsequently harder to dislodge when subject to an external force. Keywords: Flotation Fundamentals: Physics and Chemistry; bubble-particle interactions Flotation '17, 13.-16.11.2017, Cape Town, South Africa Structure variations within certain rare earth-disilicides Nentwich, M.; Zschornak, M.; Sonntag, M.; Gumeniuk, R.; Gemming, S.; Leisegang, T.; Meyer, D. C. The dimorphism of the RSi2 and R2TSi3 compounds is a well known phenomenon (R is an alkaline earth metal, rare earth metal or actinoide, T is a transition metal). They crystallize in structures, which derive from hexagonal AlB2 or tetragonal ThSi2 prototypes. Despite their local similarities, both prototypes do not have a common root in the Bärnighausen diagram, which summarizes the symmetry relations between the high symmetrical basic structures and their lower symmetric variations. We performed an extensive literature research based on more than 400 structure reports of the RSi2 and R2TSi3 compounds. To gain an overview of the various structure reports within these compounds we summarized composition, lattice parameters a and c, ratios c/a, formula units per unit cell, and structure types in an extensive table. We performed DFT calculations on carefully chosen compounds to evaluate the probability of a successful synthesis. Finally, we discuss peculiarities of symmetry distribution among the RSi2 and R2TSi3 compounds and several correlations related to structural parameters. We found that the thermal treatment has a massive effect to the formation of superstructures. Furthermore, there are two different kinds of hexagonal R2TSi3 compounds being ionic or metallic, depending on the R element. Additionally, the main influence to the variation of the Si-T bonds is the electronic interplay between R element and Si lattice rather than the R radii. Keywords: rare earth; silicides; density-functional IUCR 2017 - 24th Congress of the International Union of Crystallography, 21.-27.08.2017, Hyderabad, Indien THz-spectroscopic studies on electron dynamics in a GaAs single quantum well and an InAs single quantum dot Schneider, H.; Schmidt, J.; Stephan, D.; Bhattacharyya, J.; Winnerl, S.; Dimakis, E.; Helm, M. Intense, spectrally narrow terahertz fields from the free-electron laser (FEL) facility FELBE in Dresden, Germany, provide interesting opportunities for investigating the carrier dynamics in III-V semiconductor nanostructures. This talk will focus on recent FEL studies on dressing intersubband transitions in a wide GaAs single quantum well using terahertz time-domain spectroscopy, and on exciton dynamics in a single InAs/GaAs quantum dot using time-dependent photoluminescence. Keywords: terahertz free-electron laser; intersubband; exciton; quantum well; quantum dot 14-th International Conference on Intersubband Transitions in Quantum Wells (ITQW2017), 10.-15.09.2017, Singapore, Singapore Luminescence of defects in the structural transformation of layered tin dichalcogenides Sutter, P.; Komsa, H.-P.; Krasheninnikov, A. 
V.; Huang, Y.; Sutter, E. Layered tin sulfide semiconductors are both of fundamental interest and attractive for energy conversion applications. Sn sulfides crystallize in several stable bulk phases with different Sn:S ratios (SnS2, Sn2S3, and SnS), which can transform into phases with a lower sulfur concentration by introduction of sulfur vacancies (VS). How this complex behavior affects the optoelectronic properties remains largely unknown but is of key importance for understanding light-matter interactions in this family of layered materials. Here, we use the capability to induce VS and drive a transformation between few-layer SnS2 and SnS by electron beam irradiation, combined with in-situ cathodoluminescence spectroscopy and ab-initio calculations to probe the role of defects in the luminescence of these materials. In addition to the characteristic band-edge emission of the endpoint structures, our results show emerging luminescence features accompanying the SnS2 to SnS transformation. Comparison with calculations indicates that the most prominent emission in SnS2 with sulfur vacancies is not due to luminescence from a defect level but involves recombination of excitons bound to neutral VS in SnS2. These findings provide insight into the intrinsic and defect-related optoelectronic properties of Sn chalcogenide semiconductors. Keywords: 2D materials; spectroscopy; defects; first-principles calculations Terahertz dephasing of Landau level transitions in graphene Schneider, H.; König-Otto, J. C.; Pashkin, A.; Helm, M.; Winnerl, S.; Wang, Y.; Belyanin, A. Using degenerate four-wave mixing (DFWM), we have investigated the coherent polarization between the lowest Landau levels in graphene under resonant excitation with narrowband THz pulses. A pronounced DFWM signal is observed and its dependence on THz field strength and magnetic field detuning is explored and compared with theoretical expectations. Keywords: terahertz; graphene; four-wave mixing; coherent polarization; free-electron laser The 42nd International Conference on Infrared, Millimeter and Terahertz Waves (IRMMW-THz'2017), 27.08.-01.09.2017, Cancun, Mexico Proceedings of the IRMMW-THz'2017 DOI: 10.1109/IRMMW-THz.2017.8066873 Protecting Pulsed High-Power Lasers with Real-Time Image Classification Kelling, J.; Gebhardt, R.; Helbig, U.; Bock, S.; Schramm, U.; Juckeland, G. Learn how to combine computer vision techniques and deep learning to improve the sensitivity of a real-time, GPU-powered safety system. In petawatt laser systems firing at 10 Hz, suddenly appearing scatterers can damage components. Damage(-spreading) can be avoided by suspending operation immediately on occurrence of such an event. We present our approach for the automatic detection of critical failure states from intensity profiles of the laser beam. By incorporating quick feature detection and learned heuristics for feature classification, both real-time constraints and limited available training data are accommodated. Localization of the triggering feature is crucial for cases where the problem is located in non-sensitive sections and will not be removed from the beam in production. extended abstract: High-power lasers are operated at our research center for investigations of exotic states of matter and medical applications, among others.
This project to improve the automatic shutdown/interlock system of two lasers (one in operation, one currently under construction) has the goal of reducing the probability of, potentially expensive, damage-spreading scenarios, while at the same time avoiding false alarms at high sensitivity. The project to be presented is currently in a proof-of-concept phase, with workable proof existing for a specific failure mode for which data was available (the breaking of a single mirror). Next to the 100 ms real-time constraint, the lack of sufficient training data demanded a two-stage approach to solve this problem: Classical feature detection with a low threshold works as a fast anomaly detector, followed by feature classification using CNNs (mostly GoogLeNet) to identify true positive triggers. From this, the audience can learn how to design for short response times (to which end we employ Caffe, OpenCV on GPU and use C++ as main programming language). The application also demonstrates how prior domain knowledge and known algorithms can be combined with machine learning to create heuristics to fill in gaps. Keywords: Image Classification; Caffe; automatic laser-safety shutdown; GoogLeNet GTC 2018 Silicon Valley, 26.-29.03.2018, San Jose, CA, USA 1st MLC Workshop, 15.05.2018, Dresden, Deutschland The Exchange bias in oxygen-implanted Co/Au thin film heterostructures Perzanowski, M.; Gregor-Pawłowski, J.; Zarzycki, A.; Böttger, R.; Hübner, R.; Potzger, K.; Marszałek, M. Magnetic systems exhibiting the exchange bias effect are being considered as functional parts of modern data storage devices. A model system for the investigation of this effect is an antiferromagnetic-ferromagnetic CoO/Co interface. In this paper we present studies of the magnetic properties of Co-CoO/Au multilayers where the cobalt oxide was formed by oxygen ion beam implantation. Special emphasis is given to the role of the oxygen concentration profile in the magnetic properties. By properly designing the implantation conditions (ion beam energy and fluence) it is possible to fabricate a system revealing a controlled stepwise magnetization reversal process. This underlines the great potential of this approach to tailor the magnetic properties through modification of implantation profiles. This work was supported by DAAD Service with contract No. PPP-PL 57214850 "Magnetic anisotropies in cobalt heterostructures induced by oxidation". Keywords: Ion Implantation; Magnetic multilayers The European Conference Physics of Magnetism 2017, PM'17, 26.-30.06.2017, Poznan, Polen Metallurgy key enabler of the Circular Economy Metallurgy is a key enabler of a circular economy (CE); its digitalization is the metallurgical Internet of Things (m-IoT). In short: Metallurgy is at the heart of a CE, as metals all have strong intrinsic recycling potentials. Process metallurgy, as a key enabler for a CE, will help much to deliver its goals. The first-principles models of process engineering help quantify the resource efficiency (RE) of the CE system, connecting all stakeholders via digitalization. This provides well-argued and first-principles environmental information to empower a tax paying consumer society, policy, legislators, and environmentalists. It provides the details of capital expenditure and operational expenditure estimates. Through this path, the opportunities and limits of a CE, recycling, and its technology can be estimated.
Keywords: Circular Economy; Circular Economy Engineering; Fairphone; Recycling Workshop, 30.11.2017, Madrid, Spanien The Akademii Nauk ice core and solar activity Fritzsche, D.; von Albedyll, L.; Merchel, S.; Opel, T.; Rugel, G.; Scharf, A. Ice cores are well-established archives for paleo-environmental studies, but this requires a reliable ice core chronology. The concentration of cosmogenic radionuclides in ice cores reflects the solar activity in the past and, thus, can be used as a dating tool for ice cores. Accelerator mass spectrometry (AMS) allows the determination of these nuclides at high resolution. Here, we present results of a 10Be study in an ice core from Akademii Nauk (Severnaya Zemlya, Russian Arctic). AMS analyses of more than 500 samples were carried out using the 6 MV accelerator facility of the Ion Beam Center of the Helmholtz-Zentrum Dresden-Rossendorf. For the time period 400 to 2000 CE the temporal variations of 10Be reflect the centennial variations of solar activity known from similar studies of Greenlandic ice cores and from 14C production reconstructions. The 10Be peak of 775 CE, today understood as a result of the strongest known solar particle storm, was found by high-resolution core analysis. This peak is used as a tie point (in addition to volcanic reference horizons) for the development of the depth-age relationship of the Akademii Nauk ice core. Indications of the so-called "Carrington Event" of 1859 CE, 20 to 30 times weaker than 775 CE, could also be detected in the core. Keywords: AMS; climate; ice core 27th International Polar Conference, 25.-29.03.2018, Rostock, Deutschland Search for Recent 60Fe Deposition in Antarctic Snow via AMS Koll, D.; Busser, C.; Faestermann, T.; Fimiani, L.; Gomez-Guzman, J. M.; Kinast, A.; Korschinek, G.; Krieg, D.; Lebert, M.; Merchel, S.; Sterba, J.; Welch, J.; Kipfstuhl, S. 60Fe with a half-life of 2.6 Myr [1] is produced in stellar environments and ejected into space mainly by core-collapse supernovae. Due to its long half-life, traces of 60Fe were deposited and incorporated on Earth and on the Moon and have been detected there [2,3,4,5]. Here, a new possible reservoir will be presented: Antarctic snow. This time, in contrast to former investigations, any signal detected would be recent material, which might originate from the local interstellar cloud. 500 kg of Antarctic snow were chemically processed and are going to be analyzed by AMS in Munich at the 14 MV tandem. First results of the 60Fe measurements will be presented as well as the chemical extraction methods applied. [1] Rugel et al., Phys. Rev. Lett. 103, 072502 (2009) [2] Knie et al., Phys. Rev. Lett. 93, 171103 (2004) [3] Ludwig et al., PNAS 113 (33), 9232-9237 (2016) [4] Wallner et al., Nature 532, 69-72 (2016) [5] Fimiani et al., Phys. Rev. Lett. 116, 151104 (2016) Keywords: supernovae; AMS DPG Frühjahrstagung des Arbeitskreises Atome, Moleküle, Quantenoptik und Plasmen (AMOP), 04.-09.03.2018, Erlangen, Deutschland The Flux of Interplanetary Dust on Earth: Status Krieg, D.; Busser, C.; Faestermann, T.; Fimiani, L.; Gomez-Guzman, J. M.; Kinast, A.; Koll, D.; Korschinek, G.; Lebert, M.; Merchel, S.; Welch, J.; Kipfstuhl, S. Earth's accumulation rate of Interplanetary Dust Particles (IDPs) is a matter of discussion, with estimates ranging from 5 (middle atmosphere measurements) up to 300 (space-borne dust detection) tons per day.
A new approach for a more precise measurement of this accumulation rate is made by extracting manganese from 500 kg of Antarctic snow collected near the Kohnen station, and measuring the concentration of 53Mn with AMS at the MLL in Munich. This 53Mn (t1/2 = 3.7 Ma) is mostly produced by nuclear reactions of cosmic rays on the iron of the IDPs. Relating the amount of 53Mn to the precipitation rate, to a meridional transport and deposition model based on 10Be measurements, and to a chemical model of meteoritic ablation will help to reduce the uncertainty of the IDP input on Earth. The method of our measurement and the status of this study will be discussed. Keywords: IDP; AMS Phase Formation and Selectivity on Cr (co-)Doped TiO2 through Interface Engineering and Post-Deposition Flash Lamp Annealing Gago, R.; Prucnal, S.; Palomares, J.; Jiménez, I.; Hübner, R. Many applications of TiO2 partially rely on its good performance as a solvent for numerous impurities [1]. In particular, metal (cation) dopants have been used to functionalize or enhance TiO2 as catalyst [2], diluted magnetic semiconductor [3] or transparent conductor [4]. One of the most interesting properties of TiO2 is its photoactivity, exploited in many applications such as catalysis, hydrogen production, pigments or solar cells [2]. However, TiO2 is mostly active in the ultraviolet (UV) region of the solar spectrum (band-gap > 3 eV) and there is a great interest in band-gap narrowing of TiO2 to achieve visible-light (VISL) response [2]. Metal doping does so and increases VISL absorption significantly but, unfortunately, introduces structural distortions in the host matrix that result in carrier recombination centers [5]. Apart from the structural quality, another relevant consideration in the production of doped TiO2 is the particular oxide matrix phase (anatase/rutile) [6]. For example, anatase has higher photoactivity than rutile, although phase mixtures with high anatase content may present even higher photoactivity [7]. Therefore, special attention should also be devoted to the phase selectivity. Moreover, (heavily) doped TiO2 may display a completely different electronic structure than the pristine oxide material. The aim of this study is to promote customized phase formation in Cr (co-)doped TiO2 films produced by magnetron co-sputtering. Special attention is paid to the structural arrangements around host and dopant sites from the X-ray absorption near-edge structure. We report the conditions leading to single- or mixed-phase formation with the novelty of exploring film architectures based on interface engineering and/or post-deposition flash-lamp annealing (FLA) [8]. The latter is a non-contact rapid thermal processing method extensively used in microelectronics but yet to be explored in the present context. Hence, FLA can be attractive for many industrial applications dealing with the synthesis of band-gap engineered TiO2-based materials. [1] Sacerdoti et al., J. Solid State Chem. 177, 1781 (2004); [2] Henderson, Surf. Sci. Rep. 66, 185 (2011); [3] Matsumoto et al., Science 291, 854 (2001); [4] Furubayashi et al., Appl. Phys. Lett. 86, 252101 (2005); [5] Serpone et al., J. Phys. Chem. B 110, 24287 (2006); [6] Yang et al., Phys. Rev. B 76, 195201 (2007); [7] Scanlon et al., Nat. Mater. 12, 798 (2013); [8] D. Reichel et al., Phys.
Status Solidi C 9, 2045 (2012) 2017 MRS Fall Meeting & Exhibit, 26.11.-01.12.2017, Boston, MA, USA Energy-filtered TEM studies on silicon nanoparticles acting as quantum dots in single electron transistors Wolf, D.; Xu, X.; Prüfer, T.; Hlawacek, G.; Bischoff, L.; Möller, W.; Engelmann, H.-J.; Facsko, S.; von Borany, J.; Heinig, K.-H.; Hübner, R. The miniaturization of computing devices and the introduction of the internet of things generate an increasing demand for the development of low-power devices. Single electron transistors (SETs) are ideally suited for this demand, because they promise very low power dissipation. For room-temperature operation of an SET it is necessary to create a single quantum dot (QD) with a diameter below 5 nm exactly positioned between source and drain at a tunnel distance of only a few nanometers. Within the IONS4SET project [1], we aim to achieve this goal by ion irradiation induced Si-SiO2 mixing and subsequent thermally activated self-assembly of single Si nanocrystals surrounded by a thin SiO2 layer. This process is illustrated in Fig. 1 by means of simulation results. Here, we present energy-filtered (EF)TEM studies in order to monitor the influence of process parameters, such as stack geometry, ion fluence for irradiation, annealing temperature and annealing time, on the self-assembly of Si QDs. Fig. 2 shows representative EFTEM micrographs of a Si-SiO2-Si layer stack imaged using different electron energy-loss (EEL) windows. The Si plasmon-loss filtered images thereby yield the best signal-to-noise ratio for the detection of Si nanodots, because the Si plasmon peak is the most intense peak with a relatively small FWHM of 4 eV in the EEL spectrum. Moreover, since the obtained (raw) EFTEM images provide only qualitative information about the Si concentration in the oxide layer, they cannot give a clear answer as to whether, for example, the observed contrast corresponds to one or more Si nanodots (NDs) in projection. Therefore, EFTEM images are quantified further by converting them into so-called thickness over mean free path length (MFPL) t/λSi maps, in which λSi is the MFPL corresponding to the chosen energy range. The experimental t/λSi maps are then compared with simulated t/λSi maps of a single Si ND. Fig. 3 shows that our approach enables us not only to detect single Si nanodots (Fig. 3c,e) but also to count them if they are arranged along the projection direction of the electron beam (Fig. 3d,f). For these experiments, the layer stacks were irradiated with Ne+ ions within an Orion NanoFab (Zeiss). This allows controlled line or point irradiation and ensures Si QD formation within a confined region. In a next step, confined regions will be established by fabricated nanopillars, which enhances reproducibility as the volume relevant for the self-assembly of the nanoclusters will be better defined. [1] We acknowledge financial support within the European Union's Horizon 2020 research and innovation program under Grant Agreement No 688072 (Project IONS4SET). Microscopy Conference 2017, MC 2017, 21.-25.08.2017, Lausanne, Switzerland Microscopy Conference 2017, MC 2017, 21.-25.08.2017, Lausanne, Switzerland, 12-14 Insights into the 3D electric potential structure of III-V semiconductor core-multishell nanowires through combined STEM and holographic tomography Wolf, D.; Hübner, R.; Sturm, S.; Lubk, A. Off-axis electron holographic tomography (EHT) has been successfully applied to reveal the 3D structure of III-V semiconductor core-shell nanowires (NWs) [1,2].
The technique probes the phase shift of an electron wave transmitted through such a NW, which is proportional to the NW's projected electrostatic potential. Thus, a tilt series of phase images (projected potentials) can be used as input to compute a 3D tomogram of the electrostatic potential by tomographic reconstruction algorithms. Typically, the recovered 3D potential is dominated by the mean inner potential (MIP), which is related to the material composition. Consequently, space-charge potentials determining for example the electric properties, e.g., at interfaces or pn-junctions in semiconductors [2], may be superimposed by MIP variations caused by compositional changes within the heterostructures. Here, we show, using the example of a GaAs/AlGaAs core-multishell NW, how the space-charge potentials can be separated from materials contrast (MIP) by determining the latter independently: To this end, high-angle annular dark-field (HAADF) STEM tomography was applied in addition to EHT on the same NW. STEM tomograms provide solely materials contrast that depends strongly on the atomic number. Fig. 1 compares both methods in terms of the relation between reconstructed signal and projected property, exemplarily for three different tomogram regions identified as pure Au, GaAs and AlGaAs: In the case of EHT between the reconstructed potential and the MIP, and in the case of STEM tomography between the reconstructed intensity and the atomic number. The latter relation enables converting the STEM tomogram into units of (mean) atomic numbers. Tilt series were acquired from -70° to +71° with 3° tilt steps in holography mode, and from -68° to +68° with 2° tilt steps in STEM mode. Since phase images of axially scattered electrons are used for EHT, it suffers much more from diffraction contrast than STEM tomography (high-angle scattering). Consequently, only 39 projections could be used for tomographic reconstruction in the case of EHT compared to 68 in the case of STEM tomography. For this reason, resolution and contrast in the 3D potential are slightly lower than in the STEM tomogram, which can be seen in the cross-section of the NW in Fig. 2. Nevertheless, the core-shell structure, the ca. (5-10) nm thick GaAs shell acting as quantum well tube (QWT), and unintended Al segregations are clearly resolved in both cases. Last but not least, longitudinal slices (Fig. 3) exhibit clear differences between the two tomograms that strongly suggest additional local space-charge-related potentials, to be investigated in greater detail in a next step. [1] A Lubk, D Wolf, P Prete, N Lovergine, T Niermann, S Sturm and H Lichte, Phys. Rev. B 90 (2014) p. 125404. [2] D Wolf, A Lubk, P Prete, N Lovergine and H Lichte, J. Phys. D: Appl. Phys. 49 (2016) p. 364004 [3] We thank N Lovergine of University of Salento, Lecce for provision of the samples. [4] We thank the group of Michael Lehmann at TU Berlin for access to the TEM FEI Titan 80-300 Berlin Holography Special. [5] DW acknowledges financial support within the European Union's Horizon 2020 research and innovation program under Grant Agreement No 688072 (Project IONS4SET). AL has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 715620).
Microscopy Conference 2017, MC 2017, 21.-25.08.2017, Lausanne, Switzerland, 753-755 Subsurface Engineering of Silicon for 3D Devices Tokel, O.; Turnali, A.; Makey, G.; Elahi, P.; Ilday, S.; Colakoglu, T.; Yavuz, O.; Hübner, R.; Zolfaghari, M.; Pavlov, I.; Bek, A.; Turan, R.; Ilday, O. Recently we have demonstrated a new 3D laser fabrication method which enabled, for the first time, the creation of highly controlled subsurface structural modifications (structural imperfections, or defects) buried deep inside silicon (Si) wafers [1]. Characterizing the material properties of these subsurface Si structures is critical for enabling new optical and micro-mechanical applications inside chips [2,3]. Here, we present optical, chemical and microscopic analysis of these buried structures. Specifically, Transmission Electron Microscopy (TEM) studies, Optical Birefringence Analysis and Selective Chemical Etching analysis of the modifications will be presented. Infrared Transmission Microscopy will be shown to be applicable for subsurface imaging, providing a diagnostic tool without damaging the samples. Material properties of the disruptions in the crystal lattice are then exploited for fabricating various micro-devices. For instance, oxidation-reduction chemistry on laser-induced modifications enables the creation of highly controllable, uniform and large-area micropillar arrays for solar cell applications, embedded microfluidic channels for chip cooling and through-Si vias for electrical interconnects in Si. These elements, which are challenging to form with conventional methods, can find use in various MEMS and electronics applications. The optical properties (refractive index change) of the structures are used to fabricate functional components such as lenses and gratings buried in chips. Further, the birefringence effect induced in Si may lead to holograms and other photonic applications, such as creating wave plates and polarizers. These functional optical and MEMS elements created inside Si may find use in imaging and sensing in the near- and mid-infrared wavelength range, as well as in micro-devices towards micro-surgical tools, micro-motors, and micro-resonators. Thus, these capabilities are leading to a new fabrication approach in Si, which is fully CMOS compatible, rapid and mechanically robust, and builds on the optical, electrical and chemical properties of the modified volumes in Si. [1] Tokel et al., arxiv.org/abs/1409.2827 [2] Tokel et al., Direct Laser Writing of Volume Fresnel Zone Plates in Silicon, CLEO/Europe - EQEC, Munich, Germany, 2015. [3] Tokel et al., 3D Functional Elements Deep Inside Silicon with Nonlinear Laser Lithography, APS March Meeting, Baltimore, USA, 2016. 2017 MRS Spring Meeting & Exhibit, 17.-21.04.2017, Phoenix, AZ, USA Application of Ion Beams to Fabricate and Modify Properties of Dilute Ferromagnetic Semiconductors Yuan, Y.; Helm, M.; Sawicki, M.; Dietl, T.; Zhou, S. Dilute ferromagnetic semiconductors (DFS) have been investigated for more than two decades due to their potential for spintronics. Mn-doped III-V semiconductors have been regarded as the prototype of this class. In this contribution, we will show how ion beams can be utilized in fabricating and modifying DFS. First, ion implantation followed by pulsed laser melting (II-PLM) provides an alternative to low-temperature molecular beam epitaxy (LTMBE) to prepare diverse DFS.
The prepared DFSs exhibit pronounced magnetic anisotropy, large X-ray magnetic circular dichroism, anomalous Hall effect and magnetoresistance [1-9]. Going beyond LTMBE, II-PLM has succeeded in bringing two new members, GaMnP and InMnP, into the III-Mn-V family. Both GaMnP and InMnP show clear signatures of ferromagnetism and an insulating behavior. Second, helium ions can be used to precisely compensate the holes while keeping the Mn concentration constant [10-12]. For a broad range of samples including (Ga,Mn)As and (Ga,Mn)(As,P) with various Mn and P concentrations, we observe a smooth decrease of TC over a wide temperature range with carrier compensation while the conduction is changed from metallic to insulating. We can tune the uniaxial magnetic easy axis of (Ga,Mn)(As,P) from out-of-plane to in-plane with an isotropic-like intermediate state. These materials synthesized or modified by ion beams provide an alternative avenue to understand how carrier-mediated ferromagnetism is influenced by localization. Modelling of turbulence modulation in bubbly flows with the aid of DNS Ma, T.; Santarelli, C.; Ziegenhein, T.; Lucas, D.; Fröhlich, J. Modelling of turbulence modulation in bubbly flows with the aid of DNS data 15th Multiphase Flow Conference and Short Course, 14.-17.11.2017, Dresden, Deutschland A new model for bubble-induced turbulence based on direct numerical simulation data Three main issues are addressed in the present study. First, an appropriate time scale is selected with the aid of the energy spectra determined on the basis of the DNS data. Then, links between the unclosed terms in the transport equations of the turbulence quantities and the DNS data for small bubbles are established. Third, a suitably chosen iterative procedure employing the full Reynolds-averaged model provides suitable coefficients for the closure of the terms resulting from BIT while largely removing the influence of others. Here, using DNS data with iterations to obtain a term-by-term match (Figure 1b) in the model equations avoids the pitfalls of ad hoc models targeting the TKE only. At the same time these results validate the closure, exhibiting very good agreement with the DNS and better performance than the standard closures. Beyond the resulting model itself, the study also furnishes a systematic procedure which is of general use. The model is now ready for use and can be employed in practical Euler-Euler simulations. The 3rd International Conference on Numerical Methods in Multiphase Flows, 26.-29.06.2017, Tokyo, Japan DNS-based RANS closure for bubble-induced turbulence Keywords: DNS; RANS; bubble-induced turbulence ProcessNet Jahrestreffen Dresden, 14.-17.03.2017, Dresden, Deutschland A contribution to turbulence modelling in bubbly flows Ma, T. Modelling turbulence in bubbly flows that arise in engineering and the environment poses great challenges for multiphase Computational Fluid Dynamics. In the present book various turbulence modelling approaches are investigated in bubble columns and bubbly channel flows. The considered approaches comprise Scale Resolving Simulations and the traditional Reynolds-averaged Navier-Stokes closure. The focus is set on the representation of the so-called bubble-induced turbulence in the modelling framework. A major chapter addresses a complete route to construct such a model embedded in the Euler-Euler approach with the aid of Direct Numerical Simulation data. This procedure is employed to propose an improved model for bubble-induced turbulence.
Keywords: Scale Resolving Simulation; Direct Numerical Simulation data; bubble-induced turbulence Book (Authorship) Dresden: TUDpress, 2017 Simulation of Reconfigurable Field-effect Transistors: Impact of the NiSi2-Si Interfaces, Strain, and Crystal Orientation Fuchs, F.; Schuster, J.; Gemming, S. Reconfigurable transistors (RFETs) can be switched between electron and hole current by changing the polarity of the gate potential. The device performance of such a transistor is strongly dominated by the contact physics. In this work, the electron transport across the NiSi2-Si interface is studied using the NEGF formalism and density functional theory. A new model is presented which relates the electron transport through the interface to the transfer characteristic of an RFET. The model is compared to experimental data, showing good agreement. Based on the model, the influence of strain and the choice of the crystal orientation is discussed. It is demonstrated that the best symmetry between electron and hole current is achieved for the <110> orientation. Furthermore, this symmetry can be tuned by strain, which is not possible for the <100> and <112> orientations. A discussion of these differences based on band structure analysis will be given, too. Keywords: Reconfigurable field-effect transistor; silicon; interface IHRS NanoNet Annual Workshop 2017, 16.-18.08.2017, Neuklingenberg, Deutschland Injection locking of constriction based spin Hall nano-oscillators Hache, T.; Weinhold, T.; Arekapudi, S. S. P. K.; Hellwig, O.; Schultheiss, H. Spin-Hall nano-oscillators (SHNOs) are modern auto-oscillation devices. Their simple geometry allows for an optical characterization by Brillouin-Light-Scattering microscopy at room temperature. Here we report on the observation of auto-oscillations in constriction-based SHNOs under the forcing influence of an added alternating current. We show the possibility of injection locking between the applied external signal and the auto-oscillations driven by a direct current. Within the locking range the frequency of the auto-oscillations is locked to the external stimulus. In addition, the intensity of the oscillations increases strongly and the linewidth decreases. Due to the controllability of the auto-oscillations of the magnetization, injection locking can be used to influence the properties of future communication technologies, e.g. those based on synchronized arrays of constriction-based spin Hall nano-oscillators. Keywords: spin Hall; spin Hall nano-oscillators; auto-oscillations; injection locking; phase locking; Auto-Oszillationen; Spin-Hall Nanooszillatoren 2017 European School on Magnetism: Condensed Matter Magnetism : bulk meets nano, 09.-20.10.2017, Cargese, France Nano-Magnonics Workshop 2018, 19.-21.02.2018, Diemerstein, Kaiserslautern, Deutschland Intermag 2018, 23.-27.04.2018, Marina Bay Sands Convention Center, Singapore IEEE Magnetics Society Summer school, 03.-08.06.2018, Universidad San Francisco de Quito, Ecuador The Joint European Magnetic Symposia 2018, 03.-07.09.2018, Rheingoldhalle, Mainz, Deutschland Möglichkeiten der Kreislaufwirtschaft (Possibilities of the circular economy) According to the German Environment Agency (Umweltbundesamt), almost 780,000 tonnes of waste electrical and electronic equipment were collected in Germany in 2010. This corresponds to 8.8 kilograms per inhabitant per year. These devices contain many valuable metals, alloys, functional materials and plastics. How can these materials best be recovered, in the sense of a circular economy, in order to manufacture new goods?
What does recycling already achieve today? What would need to be done to improve it further? And how sensible is it in terms of energy in the first place? Prof. Markus Reuter, Director at the Helmholtz Institute Freiberg for Resource Technology (HIF) of the HZDR, addresses these questions. The metallurgist and recycling expert works on the recyclability of products and investigates innovative digital systems and processes for optimal recycling. Keywords: Kreislaufwirtschaft (circular economy) Möglichkeiten der Kreislaufwirtschaft / Wintersemester der Seniorenakademie, 06.11.2017, Dresden, Deutschland Auto-oscillations in double constriction spin Hall nano-oscillators Hache, T.; Wagner, K.; Arekapudi, S. S. P. K.; Hellwig, O.; Lindner, J.; Schultheiss, H. Spin-Hall nano-oscillators (SHNOs) are modern auto-oscillation devices. Their simple geometry allows for an optical characterization by Brillouin-Light-Scattering microscopy at room temperature. Here we report on the observation of auto-oscillations in constriction-based SHNOs. These are devices where the current density is increased locally due to lateral confinement. Hence, the spin current generated by the spin Hall effect can create well-defined hot-spots for auto-oscillations. We present BLS measurements of auto-oscillations in Co60Fe20B20(5 nm)/Pt(7 nm) based samples with two interacting, neighbouring nanoconstrictions. The precession amplitude in these samples can be driven far from equilibrium, resulting in clear nonlinear signatures in the spin-wave spectra. The spatial distributions of the observed modes and current dependencies are shown. Keywords: spin Hall; spin Hall nano-oscillators; Spin-Hall Nanooszillatoren; spin current; auto-oscillations; Autooszillationen DPG-Frühjahrstagung SKM, 19.-24.03.2017, Dresden, Deutschland Circular Economy within and beyond manufacturing processes This workshop will gather around 30 representatives of business support organisations, including established and prospective providers of resource efficiency advisory and consulting services from European countries and regions in the process of setting up resource efficiency services for small and medium-sized enterprises (SMEs). Resource Efficiency in the Manufacturing Industry - Workshop, 24.11.2017, Berlin, Deutschland Dual energy CT: Benefits for proton therapy planning and beyond Richter, C.; Wohlfahrt, P.; Möhler, C.; Greilich, S. For about a decade, dual-energy CT (DECT) has been clinically available, mainly for radiology applications. In contrast, in the field of radiotherapy DECT has gained relevant interest only over the last few years, and here clinical use is still far from being standard. In this lecture, benefits of DECT for radiotherapy applications will be discussed. The focus will be on its application for treatment planning in proton therapy, namely the individual prediction of the tissue's stopping power relative to water (SPR) as an alternative to the standard approach using a generic look-up table (HLUT). The manifold information gathered by two CT scans with different X-ray spectra allows for a patient-specific and direct calculation of relative electron density and SPR [1,2]. This enables the consideration of intra- and inter-patient variabilities in CT-based SPR prediction and ultimately a more accurate range prediction.
The talk will cover the validation of the SPR prediction accuracy in realistic ground-truth scenarios [3,4], the investigation of clinical relevant differences between the DECT-based and the standard HLUT-based SPR prediction in clinical patient data [5] as well as the status of its clinical implementation [6]. Furthermore, additional applications in radiotherapy, e.g. for photon treatment planning, delineating and material differentiating will be briefly discussed. ESTRO 37, 20.-24.04.2018, Barcelona, España Radiotherapy and Oncology 127(2018), S289-S290 Dual energy CT for range prediction in proton therapy: Validation, clinical benefit & status of implementation Richter, C. Overview of DECT project + outlook 33rd Conference on Clinical and Experimental Research in Radiation Oncology (C.E.R.R.O.), 13.-20.01.2018, Les Menuires, Frankreich Inter-centre variability of CT-based range prediction in particle therapy: Survey-based evaluation Taasti, V.; Bäumer, C.; Dahlgren, C.; Deisher, A.; Ellerbrock, M.; Free, J.; Gora, J.; Kozera, A.; Lomax, T.; de Marzi, L.; Molinelli, S.; Teo, K.; Wohlfahrt, P.; Petersen, J.; Muren, L.; Hansen, D.; Richter, C. Purpose: To assess the inter-center variability of the conversion between CT number and particle stopping power ratio (SPR), a survey-based evaluation was carried out in the framework of the European Particle Therapy Network (EPTN). The CT-to-SPR conversion (Hounsfield look-up table, HLUT) is applied to treatment planning CT scans to finally derive the particle range in patients. Currently, CT scan protocols for treatment planning are not standardized regarding image acquisition and reconstruction parameters. Hence, the HLUT depends on the selected scan settings and must be defined by each center individually. Aiming to access the current inter-center differences, this investigation is a first step towards better standardization of CT-based SPR derivation. Methods: A questionnaire was sent to particle therapy centers involved in the EPTN and two centers in the United States. The questionnaire asked for details on CT scanners, acquisition and reconstruction parameters, the calibration and definition of the HLUT, as well as body-region specific HLUT selection. It was also assessed whether the influence of beam hardening was investigated and if an experimental validation of the HLUT was performed. Furthermore, different future techniques were rated regarding their potential to improve range prediction accuracy. Results: Twelve centers completed the survey (ten in Europe, two in the US). Scan parameters, in particular reconstruction kernel and beam hardening correction, as well as the HLUT generation showed a large variation between centers. Eight of twelve centers applied a stoichiometric calibration method, three defined the HLUT entirely based on tissue substitutes while one center used a combination of both. All facilities performed a piecewise linear fit to convert CT numbers into SPRs, yet the number of line segments used varied from two to eleven. Nine centers had investigated the influence of beam hardening, and seven of them had evaluated the object size dependence of their HLUT. All except two centers had validated their HLUT experimentally, but the validation schemes varied widely. Most centers acquired CT scans at 120 kVp, all centers individually customized their HLUT, and dual-energy CT was seen as a promising technique to improve SPR calculation. 
Conclusions: In general, a large inter-center variability was found in implementation of CT scans, image reconstruction and especially in specification of the CT-to-SPR conversion. A future standardization would reduce time-intensive institution-specific efforts and variations in treatment quality. Due to the interdependency of multiple parameters, no conclusion can be drawn on the derived SPR accuracy and its inter-center variability. As a next step within the EPTN, an inter-center comparison of CT-based SPR prediction accuracy will be performed with a ground-truth phantom. Keywords: proton therapy; particle therapy; range prediction; stopping-power ratio; Hounsfield look-up-table; inter-center comparison Physics and Imaging in Radiation Oncology 6(2018), 25-30 DOI: 10.1016/j.phro.2018.04.006 Inherited control of crystal surface reactivity Fischer, C.; Kurganskaya, I.; Lüttge, A. Material and environmental sciences have a keen interest in the correct prediction of material release as a result of fluid-solid interaction. For crystalline materials, surface reactivity exerts fundamental control on dissolution reactions; however, it is continuously changing during reactions and governs the dynamics of porosity evolution. Thus, surface area and topography data are required as input parameters in reactive transport models that deal with challenges such as corrosion, CO2 sequestration, and extraction of thermal energy. Consequently, the analysis of surface reaction kinetics and material release is a key to understanding the evolution of dissolution-driven surface roughness and topography. Kinetic Monte Carlo (KMC) methods simulate such dynamic systems. Here we apply these techniques to study the evolution of reaction rates and surface topography in crystalline materials. The model system consists of domains with alternating reactivity, implemented by low vs. high defect densities. Our results indicate complex and dynamic feedbacks between domains of high versus low defect density, with the latter apparently limiting the overall dissolution rate of the former - a limitation that prevails even after their disappearance. We introduce the concept of "inherited" control, consistent with our observation that maximum dissolution rates in high defect density domains are lower than they would be in the absence of low defect density neighboring domains. The controlling factor is the spatial pattern of surface accessibility of fluids. Thus, the distribution of large etch pits centers is inherited almost independently of spatial contrasts in crystal defect density during ongoing reactions. As a critical consequence, the prediction of both the material flux from the reacting surface and the evolution of topography patterns in crystalline material is constrained by the reaction history. Important applications include the controlled inhibition of reactivity of crystalline materials as well as the quantitative evaluation and prediction of material failure in corrosive environments. Keywords: Kinetic Monte Carlo simulation; rate spectra; crystal dissolution; surface reactivity; surface topography and roughness patterns Applied Geochemistry 91(2018), 140-148 DOI: 10.1016/j.apgeochem.2018.02.003 Pulsating dissolution of crystalline matter Fischer, C.; Lüttge, A. Fluid-solid reactions result in material flux from or to the solid surface. 
The prediction of the flux, its variations and changes with time are of interest to a wide array of disciplines, ranging from the material and earth sciences to pharmaceutical sciences. Reaction rate maps that are derived from sequences of topography maps illustrate the spatial distribution of reaction rates across the crystal surface. Here we present highly spatially-resolved rate maps that reveal the existence of rhythmic pulses of the material flux from the crystal surface. This observation leads to a change in our understanding of the way crystalline matter dissolves. Rhythmic fluctuations of the reactive surface site density and potentially concomitant oscillations in the fluid saturation imply spatial and temporal variability in surface reaction rates. Knowledge of such variability could aid attempts to upscale microscopic rates and predict reactive transport through changing porous media. Keywords: surface reactivity; kinetics; dissolution; fluid-solid interaction; rate spectra Proceedings of the National Academy of Sciences of the United States of America 115(2018)5, 897-902 Vibrational properties of metal phosphorus trichalcogenides from first principles Hashemi, A.; Komsa, H.-P.; Puska, M.; Krasheninnikov, A. V. Two-dimensional (2D) sheets of transition metal phosphorus trichalcogenides (TMPTs) offer unique magnetic and optical properties that can complement those found in other 2D materials. Insights into the structure and properties of these materials can be obtained by a juxtaposition of the experimental and calculated Raman spectra, but there is very little theoretical knowledge of the vibrational properties of TMPTs. Using first-principles calculations, we study mechanical and vibrational properties of a large set of monolayer TMPTs. From the phonon dispersion curves, we assess the dynamical stabilities and general trends on the atomic character of the vibrational modes. We determine Raman active modes from group theory, calculate Raman intensities, and analyze them with the help of the corresponding atomic displacements. We evaluate how the mode frequencies shift in response to a biaxial strain. We also determine elastic properties, which show that these systems are softer than many other layered materials. In addition to shedding light on the general features of vibrational properties of these materials, our results should also prove useful for interpreting experimental Raman spectra. Keywords: 2D materials; transition metal phosphorus trichalcogenides; first-principles calculations; Raman spectra Journal of Physical Chemistry C 121(2017), 27207-27217
LuxRep: a technical replicate-aware method for bisulfite sequencing data analysis Maia H. Malonzo ORCID: orcid.org/0000-0003-1739-05031, Viivi Halla-aho1, Mikko Konki2, Riikka J. Lund2 & Harri Lähdesmäki1,2 BMC Bioinformatics volume 23, Article number: 41 (2022) DNA methylation is commonly measured using bisulfite sequencing (BS-seq). The quality of a BS-seq library is measured by its bisulfite conversion efficiency. Libraries with low conversion rates are typically excluded from analysis, resulting in reduced coverage and increased costs. We have developed a probabilistic method and software, LuxRep, that implements a general linear model and simultaneously accounts for technical replicates (libraries from the same biological sample) from different bisulfite-converted DNA libraries. Using simulations and actual DNA methylation data, we show that including technical replicates with low bisulfite conversion rates generates more accurate estimates of methylation levels and differentially methylated sites. Moreover, using variational inference speeds up the computation time necessary for whole genome analysis. In this work we show that taking into account technical replicates (i.e. libraries) of BS-seq data of varying bisulfite conversion rates, with their corresponding experimental parameters, improves methylation level estimation and differential methylation detection. DNA methylation is a form of epigenetic regulation wherein cytosine is either methylated or demethylated. It is known to both repress and promote gene expression depending on its location relative to the target gene (e.g. CpG islands, shelves, shores or open sea) and pattern (hypomethylated or hypermethylated). As such, its dysregulation is associated with many diseases, including cancer. One of the most widely used methods for measuring DNA methylation is bisulfite sequencing [1]. When single-stranded DNA reacts with bisulfite, unmethylated cytosine is converted into uracil whereas methylated cytosine is not. Subsequent sequencing generates thymine in place of the converted unmethylated cytosine. To determine methylation counts, the resulting sequences are mapped to a reference genome to identify cytosine loci and so differentiate between unmethylated cytosine and thymine loci. Several methods have been developed to estimate methylation levels and analyze differential methylation. One of the methods, Methylkit, uses two approaches: logistic regression (for samples with replicates) and Fisher's exact test [2]. Another method, BSmooth, assumes that methylation counts follow a binomial distribution and estimates methylation levels using a local likelihood smoother within a given window [3]. Many of the methods use the beta-binomial distribution to model methylation levels. RADmeth uses the beta-binomial regression model (with the logit link function) to estimate methylation levels [4]. BiSeq uses a binomial model in smoothing methylation levels within a window (cluster) with weights from a triangular kernel which is a function of the distance between CpG loci [5]. MethylSig uses a beta-binomial approach with an approximation method for estimating the beta parameters [6]. MOABS, apart from using the beta-binomial model to estimate methylation levels, estimates a credible interval for the methylation difference between single cytosines ("credible methylation difference") [7].
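To make the beta-binomial idea shared by several of these tools concrete, the sketch below evaluates a beta-binomial regression log-likelihood with a logit link, similar in spirit to the RADmeth- and DSS-style models described above. It is only an illustrative sketch, not code from any of the cited packages; the function name, the precision parameterisation and the toy numbers are assumptions of this example.

```python
# A minimal, illustrative sketch of beta-binomial regression for methylation counts
# with a logit link. NOT the implementation of any of the packages cited above.
import numpy as np
from scipy.special import expit          # logistic (inverse-logit) link
from scipy.stats import betabinom

def betabinom_loglik(beta, log_s, X, meth_counts, total_counts):
    """Log-likelihood of methylation counts under a beta-binomial regression.

    beta         : regression coefficients (one per column of X)
    log_s        : log of the precision parameter s (larger s = less overdispersion)
    X            : design matrix, shape (n_samples, n_covariates)
    meth_counts  : number of reads supporting methylation ("C" readouts)
    total_counts : total read coverage per sample
    """
    mu = expit(X @ beta)                 # mean methylation level via the logit link
    s = np.exp(log_s)
    a, b = mu * s, (1.0 - mu) * s        # beta parameters with mean mu and precision s
    return betabinom.logpmf(meth_counts, total_counts, a, b).sum()

# Toy usage: intercept plus a group indicator for four biological samples.
X = np.array([[1, 0], [1, 0], [1, 1], [1, 1]])
ll = betabinom_loglik(np.array([-1.0, 2.0]), np.log(10.0),
                      X, np.array([3, 5, 18, 20]), np.array([20, 22, 25, 24]))
print(ll)
```

In such models the design matrix encodes the covariates of interest (e.g. case/control status), and the group coefficient is what differential-methylation tests are built on.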
The MOABS paper mentions a feature for estimating the bisulfite conversion rate but does not elaborate on whether the estimate is integrated into the model estimating methylation. DSS-general also uses beta-binomial regression to model count data and it uses the arcsine link function [8]. DMRfinder clusters CpG sites into regions given a specified distance threshold and then uses a hierarchical beta-binomial model [9]. Save for MOABS, none of these methods estimate the bisulfite conversion rate and none, including MOABS, takes this rate into account when estimating methylation levels or detecting differential methylation. In the optimal case, the bisulfite conversion rate of a DNA library is high (e.g. above 99%) [10]. However, when an experiment yields a low conversion rate the common lab practice is to exclude the DNA library so as to avoid overestimation of methylation levels, resulting in additional costs or a smaller sample size depending on whether a replacement library is prepared or not. An advanced computational approach to handling poor conversion rates would render exclusion of samples unnecessary. The methylation analysis method LuxGLM [11] estimates methylation levels from bisulfite sequencing data using a probabilistic model that accounts for the bisulfite conversion rate. It showed that taking into account experimental parameters such as bisulfite conversion efficiency improves the accuracy of methylation analysis. However, though this model was able to handle biological replicates with a general linear model component, it assumed that the data from each sample consisted of only a single bisulfite-converted DNA library. In this work we propose LuxRep, an improved method and software that allow the use of replicates from different DNA libraries with varying bisulfite conversion rates. To make the LuxRep tool computationally efficient, and thus more applicable to genome-wide analysis, we also propose to use variational inference. Our software consists of two modules: (1) estimation of experimental parameters from control data ("experimental parameters") and (2) inference of methylation levels ("biological parameters") and differential methylation from DNA bisulfite sequencing data using the previously estimated experimental parameters. While LuxGLM was originally designed for analysis of both methylated (5mC) and hydroxymethylated (5hmC) cytosines, only one methylation modification (methylcytosine, 5mC) is considered in this work (although our model can also be extended to 5hmC).
Fig. 1. Plate diagram of the LuxRep model for the module analyzing experimental parameters from control data.
Fig. 2. Plate diagram of the LuxRep model for estimating the methylation level of a single cytosine with biological as well as technical replicates. The circles represent latent (white) and observed (gray) variables and the unbordered nodes represent constants.
Fig. 3. Plate diagram of the LuxRep model for generating dummy control data.
To facilitate genome-wide analysis, in our model implementation the experimental parameters are first computed from the control data since all cytosines per technical replicate have the same value for these parameters (Fig. 1). Methylation levels are then determined individually for each cytosine, and differential methylation thereafter, using the pre-computed experimental parameters as fixed input (Fig. 2). We will next describe these two models in detail in Sects. 2.1–2.2.
Experimental parameters Methylation estimates are a function of experimental parameters: bisulfite conversion rate (\(\text{BS}_{\rm eff}\)), sequencing error (\(\text{seq}_{\rm err}\)) and incorrect bisulfite conversion rate (\(\text{BS}^*_{\rm eff}\)). A BS-seq library with low \(\text{BS}_{\rm eff}\) results in overestimation of methylation levels. High \(\text{seq}_{\rm err}\), on the other hand, can lead to both over- and underestimation of methylation levels. Though typically not measured in high-throughput bisulfite sequencing experiments, high \(\text{BS}^*_{\rm eff}\) leads to underestimation of methylation levels. To demonstrate that differences in technical parameters (specifically the bisulfite conversion rate) are common, we took a real bisulfite sequencing dataset [12] and compared the bisulfite conversion efficiencies of the technical replicates (i.e. libraries) per biological replicate (Additional file 1: Fig. S1). Most samples had markedly variable conversion rates, i.e. differences in technical parameters are common. Moreover, in practice, BS-seq datasets obtained with non-optimal conversion efficiencies are commonly ignored as there currently does not exist a statistical analysis tool that would allow analyzing BS-seq datasets with different conversion efficiencies. This in turn leads to loss of data, a decrease in statistical power, loss of a biological sample, and an increase in sequencing costs. We start by briefly reviewing the underlying statistical model [11] and then introduce our extension that can handle technical replicates. Briefly, the conditional probability of a sequencing readout being "C" in BS-seq data is a function of the experimental parameters that include \(\text{seq}_{\rm err}\) and \(\text{BS}_{\rm eff}\), and depends on the methylation level \(\theta \in [0,1]\). If a read was generated from an unmethylated cytosine (C), the conditional probability \(p_{\rm{BS}}(\text{``C''|C})\) is given by $$p_{\rm{BS}}(\text{``C''|C})=(1-\text{BS}_{\rm eff})(1-\text{seq}_{\rm err}) + \text{BS}_{\rm eff} \text{seq}_{\rm err}.$$ The term \((1-\text{BS}_{\rm eff})(1-\text{seq}_{\rm err})\) refers to the condition wherein unmethylated cytosine is incorrectly not converted into uracil and correctly sequenced as "C" whereas the term \(\text{BS}_{\rm eff} \text{seq}_{\rm err}\) represents the condition wherein the unmethylated cytosine is correctly converted into uracil but incorrectly sequenced as "C". Similarly, in the case of methylated cytosine $$p_{\rm BS}(\text{``C''|5mC}) = (1-\text{BS}^*_{\rm eff})(1-\text{seq}_{\rm err}) + \text{BS}^*_{\rm eff} \text{seq}_{\rm err},$$ where \((1-\text{BS}^*_{\rm eff})(1-\text{seq}_{\rm err})\) denotes the case that methylated cytosine is correctly not converted to uracil and correctly sequenced as "C" while the term \(\text{BS}^*_{\rm eff} \text{seq}_{\rm err}\) represents the case that methylated cytosine is incorrectly converted to uracil and incorrectly sequenced as "C". In [11], the bisulfite conversion, sequencing error and incorrect bisulfite conversion rates were specific to each biological replicate, not to each technical replicate.
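As a minimal illustration of this observation model (a sketch in Python/NumPy, not the authors' implementation), the code below encodes the two conditional probabilities above and combines them with a methylation level \(\theta\) and a binomial read count, as is done later in the text; all function names are ours.

import numpy as np
from scipy.stats import binom

def p_c_given_c(bs_eff, seq_err):
    # unmethylated C read as "C": not converted and read correctly,
    # or converted and misread as "C"
    return (1 - bs_eff) * (1 - seq_err) + bs_eff * seq_err

def p_c_given_5mc(bs_star_eff, seq_err):
    # methylated C read as "C": not (incorrectly) converted and read correctly,
    # or incorrectly converted and misread as "C"
    return (1 - bs_star_eff) * (1 - seq_err) + bs_star_eff * seq_err

def p_c(theta, bs_eff, bs_star_eff, seq_err):
    # mixture over the methylation state, theta = p(5mC)
    return (theta * p_c_given_5mc(bs_star_eff, seq_err)
            + (1 - theta) * p_c_given_c(bs_eff, seq_err))

def loglik(n_c, n_total, theta, bs_eff, bs_star_eff, seq_err):
    # binomial log-likelihood of observing n_c "C" readouts out of n_total reads
    return binom.logpmf(n_c, n_total, p_c(theta, bs_eff, bs_star_eff, seq_err))

# a library with 90% conversion efficiency inflates p("C") at an unmethylated site
print(p_c(theta=0.0, bs_eff=0.90, bs_star_eff=0.001, seq_err=0.001))  # ~0.10 instead of ~0.001

The printed example shows why low conversion efficiency, if ignored, leads directly to overestimated methylation levels.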
The experimental parameters follow a logistic normal distribution, where the bisulfite conversion rate \(\text{BS}_{\rm eff}\) is given by $$\text{BS}_{\rm eff}=\text{logit}^{-1}(\mu _{\rm{BS}_{\rm eff}} + \sigma _{\rm{BS}_{\rm eff}}r_{\rm{BS}_{\rm eff}})$$ and its hyperparameters are $$\mu_{\rm{BS}_{\rm eff}} \sim {\mathcal {N}}(\psi ^{\mu ,\mu }_{\rm{BS}_{\rm eff}},\psi ^{\mu ,\sigma }_{\rm{BS}_{\rm eff}})$$ $$\text{ln}(\sigma _{{{\rm BS}_{\rm eff}}}) \sim {\mathcal {N}}(\psi ^{\sigma ,\mu }_{\rm{BS}_{\rm eff}},\psi ^{\sigma ,\sigma }_{\rm{BS}_{\rm eff}})$$ $$r_{\rm{BS}_{\rm eff}} \sim {\mathcal {N}}(0,1),$$ such that \(\text{logit}(\text{BS}_{\rm eff}) \sim {\mathcal {N}}(\mu _{\rm{BS}_{\rm eff}},\sigma _{\rm{BS}_{\rm eff}})\), where \(\mu _{\rm{BS}_{\rm eff}}\) is the mean and \(\sigma _{\rm{BS}_{\rm eff}}\) is the standard deviation (\(\psi ^{\mu ,\mu }_{\rm{BS}_{\rm eff}}=4\), \(\psi ^{\mu ,\sigma }_{\rm{BS}_{\rm eff}}=1.29\), \(\psi ^{\sigma ,\mu }_{\rm{BS}_{\rm eff}}=0.4\) and \(\psi ^{\sigma ,\sigma }_{\rm{BS}_{\rm eff}}=0.5\)). See [13] for details. The sequencing error \(\text{seq}_{\rm err}\) is modeled similarly $$\text{seq}_{\rm err} = \text{logit}^{-1}(\mu _{\rm{seq}_{\rm err}} + \sigma _{\rm{seq}_{\rm err}}r_{\rm{seq}_{\rm err}})$$ $$\mu _{\rm{seq}_{\rm err}} \sim {\mathcal {N}}(\psi ^{\mu ,\mu }_{\rm{seq}_{\rm err}},\psi ^{\mu ,\sigma }_{\rm{seq}_{\rm err}})$$ $$\text{ln}(\sigma _{\rm{seq}_{\rm err}}) \sim {\mathcal {N}}(\psi ^{\sigma ,\mu }_{\rm{seq}_{\rm err}},\psi ^{\sigma ,\sigma }_{\rm{seq}_{\rm err}})$$ $$r_{\rm{seq}_{\rm err}} \sim {\mathcal {N}}(0,1),$$ such that \(\text{logit}(\text{seq}_{\rm err}) \sim {\mathcal {N}}(\mu _{\rm{seq}_{\rm err}},\sigma _{\rm{seq}_{\rm err}})\), where \(\mu _{\rm{seq}_{\rm err}}\) is the mean and \(\sigma _{\rm{seq}_{\rm err}}\) is the standard deviation (\(\psi ^{\mu ,\mu }_{\rm{seq}_{\rm err}}=-8\), \(\psi ^{\mu ,\sigma }_{\rm{seq}_{\rm err}}=1.29\), \(\psi ^{\sigma ,\mu }_{\rm{seq}_{\rm err}}=0.4\) and \(\psi ^{\sigma ,\sigma }_{\rm{seq}_{\rm err}}=0.5\)). The hyperparameter values above were used because they worked well in previously published related work [11], although we chose a lower \(\psi ^{\mu ,\mu }_{\rm{seq}_{\rm err}}\) because it generated more robust methylation estimates for mid-range values of \(\theta\) (i.e. 0.3 and 0.7). In addition, to confirm that the results were not sensitive to the hyperparameter values, we tested values ranging from low (\(\psi ^{\mu ,\mu }_{\rm{BS}_{\rm eff}}=1\), \(\psi ^{\mu ,\sigma }_{\rm{BS}_{\rm eff}}=1\), \(\psi ^{\sigma ,\mu }_{\rm{BS}_{\rm eff}}=0.1\), \(\psi ^{\sigma ,\sigma }_{\rm{BS}_{\rm eff}}=0.1\), \(\psi ^{\mu ,\mu }_{\rm{seq}_{\rm err}}=-10\), \(\psi ^{\mu ,\sigma }_{\rm{seq}_{\rm err}}=1\), \(\psi ^{\sigma ,\mu }_{\rm{seq}_{\rm err}}=0.1\) and \(\psi ^{\sigma ,\sigma }_{\rm{seq}_{\rm err}}=0.1\)) to high (\(\psi ^{\mu ,\mu }_{\rm{BS}_{\rm eff}}=10\), \(\psi ^{\mu ,\sigma }_{\rm{BS}_{\rm eff}}=10\), \(\psi ^{\sigma ,\mu }_{\rm{BS}_{\rm eff}}=1\), \(\psi ^{\sigma ,\sigma }_{\rm{BS}_{\rm eff}}=1\), \(\psi ^{\mu ,\mu }_{\rm{seq}_{\rm err}}=-1\), \(\psi ^{\mu ,\sigma }_{\rm{seq}_{\rm err}}=10\), \(\psi ^{\sigma ,\mu }_{\rm{seq}_{\rm err}}=1\) and \(\psi ^{\sigma ,\sigma }_{\rm{seq}_{\rm err}}=1\)), relative to the values used in this paper, and the methylation estimates were indeed robust regardless of the hyperparameter values (Additional file 1: Fig. S2).
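The NumPy sketch below shows how draws from these logistic-normal priors can be generated with the stated hyperparameter values. It is only illustrative: whether \(\mu\) and \(\sigma\) are shared across technical replicates or replicate-specific is a detail of the plate structure in Fig. 1 that this sketch glosses over, and all function names are ours.

import numpy as np

rng = np.random.default_rng(1)

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_logistic_normal(psi_mu_mu, psi_mu_sigma, psi_sigma_mu, psi_sigma_sigma, n_replicates):
    # hyperpriors: mu ~ N(psi_mu_mu, psi_mu_sigma), ln(sigma) ~ N(psi_sigma_mu, psi_sigma_sigma)
    mu = rng.normal(psi_mu_mu, psi_mu_sigma)
    sigma = np.exp(rng.normal(psi_sigma_mu, psi_sigma_sigma))
    # non-centered parameterization: r ~ N(0, 1) for each technical replicate
    r = rng.normal(0.0, 1.0, size=n_replicates)
    return inv_logit(mu + sigma * r)

# prior draws for three technical replicates of one sample
bs_eff = sample_logistic_normal(4.0, 1.29, 0.4, 0.5, n_replicates=3)    # centred near ~0.98
seq_err = sample_logistic_normal(-8.0, 1.29, 0.4, 0.5, n_replicates=3)  # centred near ~3e-4
print(bs_eff, seq_err)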
BS-seq experiments typically include completely unmethylated DNA fragments as controls (such as the lambda phage genome) that allow estimation of \(\text{BS}_{\rm eff}\) and \(\text{seq}_{\rm err}\). However, as BS-seq experiments typically do not include completely methylated DNA fragments as controls that would be needed to estimate the incorrect bisulfite conversion rate \(\text{BS}^*_{\rm eff}\), it is set to a constant value (e.g. \(\text{BS}^*_{\rm eff}=0\), see Sections "Estimating experimental parameters" and "Estimating methylation levels" for specific values used in results). Note also that the bisulfite conversion rate and sequencing error parameters are specific to each biological sample and technical replicate. In Fig. 1, \(\theta ^{\text {control}}\) represents the proportions of DNA methylation modifications in the control cytosines. In this case the proportion consists of unmethylated DNA, but this can be adjusted if additional DNA methylation modifications are included. Following Eqs. 1 and 2, the observed total number of "C" readouts for a single control cytosine is binomially distributed, $$N_{\rm{BS,C}}^{\text {control}} \sim \text {Bin}(N_\text{BS}^{\text {control}},p_{\rm{BS}}\text{(``C'')}^{\text {control}}),$$ where \(N_{\rm{BS}}^{\text {control}}\) is the total number of reads and the probability of observing "C" is given by $$\begin{aligned} p_{\rm{BS}}(\text{``C''})^{\text {control}}&= p_\text{BS}(\text{``C''} | \text{5mC})\theta ^{\text {control}} + p_\text{BS}(\text{``C''} | \text{C})(1-\theta ^{\text {control}}). \end{aligned}$$ Using the sequencing read counts from the control cytosines \(N_{\rm{BS,C}}^{\text {control}}\) and \(N_{\rm{BS}}^{\text {control}}\), posterior distributions of the unknowns in this model are obtained using the inference methods described in section "Variational inference". Posterior means of \(\text{BS}_{\rm eff}\) and \(\text{seq}_{\rm err}\) (and \(\text{BS}^*_{\rm eff}\) if available) are then used in the actual methylation level analysis as described in the next section. Biological parameters For computing the biological parameters, the observed total number of "C" readouts for a single noncontrol cytosine is modeled similarly to Eq. 11, \(N_{\rm{BS,C}} \sim \text {Bin}(N_\text{BS},p_{\rm{BS}}\text{(``C'')})\), where \(N_{\rm{BS}}\) is the total number of reads and the probability of observing "C", similar to Eq. 12, is given by $$p_{\rm{BS}}(\text{``C''}) = p_\text{BS}(\text{``C''} | \text{5mC})\theta + p_\text{BS}(\text{``C''} | \text{C})(1-\theta )$$ where \(\theta =p(\text{5mC})\). LuxRep retains the general linear model with a matrix normal distribution used by LuxGLM to handle covariates; the matrix normal distribution is a generalisation of the multivariate normal distribution to matrix-valued random variables. The following section summarizes the linear model (see [11] for more details). In the general linear model component of LuxGLM (Fig. 2) $${\mathbf {Y}}={{\mathbf {D}}}{{\mathbf {B}}} + {\mathbf {E}},$$ where \({\mathbf {Y}} \in {\mathbb {R}}^{N \times 2}\) contains the unnormalized methylation fractions, \({\mathbf {D}}\) is the design matrix (size N-by-p, where p is the number of parameters), \({\mathbf {B}} \in {\mathbb {R}}^{p \times 2}\) is the parameter matrix, and \({\mathbf {E}} \in {\mathbb {R}}^{N \times 2}\) is the noise matrix.
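Below is a small numerical sketch of this linear-model component with an illustrative two-group design matrix (not necessarily the encoding used in the paper); the coefficients are drawn from a zero-mean normal prior with variance 5, the value used later in the paper, and the row-wise softmax used to map \({\mathbf {Y}}\) to proportions is the link function described next.

import numpy as np

rng = np.random.default_rng(2)

# illustrative two-group design: column 1 = intercept, column 2 = group indicator
# (four biological samples, two per group)
D = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [1, 1]], dtype=float)            # N x p
B = rng.normal(0.0, np.sqrt(5.0), size=(2, 2)) # p x 2 coefficients, prior variance sigma_B^2 = 5
E = rng.normal(0.0, 1.0, size=(4, 2))          # N x 2 noise

Y = D @ B + E                                  # unnormalized methylation fractions

# row-wise softmax (the link function described below); columns index the states (5mC, C)
theta = np.exp(Y) / np.exp(Y).sum(axis=1, keepdims=True)
print(theta[:, 0])  # estimated 5mC proportion, one value per sample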
To derive the (normalized) methylation proportions \(\varvec{\theta } = (\theta _1,\ldots ,\theta _N)^T\), LuxGLM uses the softmax link function (or transformation) $$\theta _i = \text{Softmax}(\text{row}_i({\mathbf {Y}})).$$ The softmax function is obtained when generalizing the logistic function to multiple dimensions. That is, the softmax function \(\sigma : {\mathbb {R}}^K \rightarrow [0,1]^K\) is defined by \(\sigma ({\mathbf {z}})_i = \frac{e^{z_i}}{ \sum _{j=1}^{K} e^{z_j} }\). Under the matrix normal distribution, $${\mathbf {X}} \sim {{\mathcal {M}}}{{\mathcal {N}}}({\mathbf {M}},{\mathbf {U}},{\mathbf {V}})$$ where \({\mathbf {M}}\) is the location matrix and \({\mathbf {U}}\) and \({\mathbf {V}}\) are scale matrices. Alternatively, \({\mathbf {X}}\) (in Eq. 16) can also be written as the multivariate normal distribution $$\text{vec}({\mathbf {X}}) \sim {\mathcal {N}}(\text{vec}({\mathbf {M}}),{\mathbf {U}} \otimes {\mathbf {V}}),$$ where \(\text{vec}(\cdot )\) denotes vectorization of a matrix and \(\otimes\) denotes the Kronecker product. Given Eq. 14, \({\mathbf {B}}\) and \({\mathbf {E}}\) take on the following prior distributions $${\mathbf {E}}|\mathbf {U_E},\mathbf {V_E} \sim {{\mathcal {M}}}{{\mathcal {N}}}({\mathbf {0}}, \mathbf {U_E}, \mathbf {V_E})$$ $${\mathbf {B}}|\mathbf {M_B}, \mathbf {U_B}, \mathbf {V_B} \sim {{\mathcal {M}}}{{\mathcal {N}}}(\mathbf {M_B}, \mathbf {U_B}, \mathbf {V_B}).$$ Using the vectorized multivariate normal distribution formulation of the matrix normal distribution, matrix \({\mathbf {Y}}\) then becomes $$\begin{aligned} &\text{vec}({\mathbf {Y}})|{\mathbf {D}}, {\mathbf {M_B}}, {\mathbf {U_B}}, {\mathbf {V_B}}, {\mathbf {U_E}}, {\mathbf {V_E}} \sim {\mathcal {N}}(({\mathbf {I}} \otimes {\mathbf {D}}) \text{vec} ({\mathbf {M_B}}), \\&({\mathbf {I}} \otimes {\mathbf {D}})({\mathbf {V_B}} \otimes {\mathbf {U_B}})({\mathbf {I}} \otimes {\mathbf {D}})^{\mathbf {T}} + {\mathbf {V_E}} \otimes {\mathbf {U_E}}). \end{aligned}$$ Assuming the scale matrices \({\mathbf {U_B}}\), \({\mathbf {V_B}}\), \({\mathbf {U_E}}\) and \({\mathbf {V_E}}\) are all diagonal with parameter- and noise-specific variances \(\sigma ^\text{2}_{\mathbf {B}}\) and \(\sigma ^\text{2}_{\mathbf {E}}\), the probability densities for \({\mathbf {B}}\), \({\mathbf {E}}\) and \({\mathbf {Y}}\) can be stated as $$\text{vec} ({\mathbf {B}}) \sim \ {\mathcal {N}}(\text{vec}({\mathbf {0}}), \sigma ^\text{2}_{\mathbf {B}}({\mathbf {I}} \otimes {\mathbf {I}}))$$ $$\text{vec} ({\mathbf {E}}) \sim \ {\mathcal {N}}(\text{vec}({\mathbf {0}}), \sigma ^\text{2}_{\mathbf {E}}({\mathbf {I}} \otimes {\mathbf {I}}))$$ $$\begin{aligned}&\text{vec}({\mathbf {Y}})|{\mathbf {D}},\sigma ^\text{2}_{\mathbf {B}}, \sigma ^\text{2}_{\mathbf {E}} \sim \ {\mathcal {N}}(\text{vec}({\mathbf {0}}), \nonumber \\&\sigma ^\text{2}_{\mathbf {B}} ({\mathbf {I}} \otimes {\mathbf {D}})({\mathbf {I}} \otimes {\mathbf {I}}) ({\mathbf {I}} \otimes {\mathbf {D}})^{\mathbf {T}} + \sigma ^\text{2}_{\mathbf {E}}({\mathbf {I}} \otimes {\mathbf {I}})). \end{aligned}$$
Fig. 4. Parameter estimates of \(\text{BS}_{\rm eff}\) and \(\text{seq}_{\rm err}\). The x-axis shows whether the samples were drawn from 'G' (good quality) or 'B' (bad/low quality) technical replicates, corresponding to \(\text{BS}_{\rm eff}^G\) and \(\text{BS}_{\rm eff}^B\), respectively, grouped according to the two scenarios, discussed in section "Estimating methylation levels", 'GGB' and 'GBB'. Low and high coverage refer to \(N_{\rm{BS}} = 1 \dotsc 3\) and \(N_{\rm{BS}} = 10\), respectively.
Fig. 5. Plate diagram of the LuxRep model for simulating data for estimating methylation levels.
Fig. 6. Plate diagram of the reduced LuxRep model that mimics the traditional approach of not accounting for experimental parameters.
Fig. 7. Boxplots of estimates of methylation levels. Datasets were analysed with the full and reduced LuxRep models with varying methylation levels (columns, values shown in the topmost panels), varying numbers of reads (rows, values shown in the right panels), different combinations of replicates with varying \(\text{BS}_{\rm eff}\) ('G' and 'B') (x-axis), and using either HMC or ADVI to evaluate or approximate the posterior, respectively. The boxplots show the posterior means (\(n=100\)).
Fig. 8. Plate diagram of the model for generating dummy data for differential methylation analysis.
Fig. 9. AUROCs of differential methylation calls. Accuracy in determining differential methylation was measured by generating datasets consisting of two groups (A and B) with varying \(\Delta \theta\) (\(\theta _A\) and \(\theta _B\) levels are shown in the top panels) and when one or two of three replicates have low \(\text{BS}_{\rm eff}\) ('GGB' and 'GBB', respectively). For 'GBB' (top box) and 'GGB' (bottom box), \(N_{\rm{BS}} = 10\) and \(N_{\rm{BS}} = 6\), respectively, for each technical replicate. The x-axis shows whether HMC or ADVI was used to evaluate or approximate the posteriors.
Variance \(\sigma ^2_{\mathbf {B}} = 5\) and \(\sigma ^2_{\mathbf {E}} \sim \Gamma ^{-1}(\alpha ,\beta )\), where \(\alpha = \beta = 1\), are used in this work. We chose the hyperparameter value \(\sigma _B^2=5\) because it is widely applicable and provides robust inference results. To confirm that the results are not sensitive to the particular choice of \(\sigma _B^2\), we carried out an ablation study where we repeated the methylation level estimation experiment (from Fig. 7) with three different values of \(\sigma _B^2\): 1, 5, and 10. Our results in Additional file 1: Fig. S3 confirm that the estimates show very little or no variation depending on the choice of \(\sigma _B^2\). The inverse gamma distribution was used as the prior for \(\sigma _E^2\) since (with the alpha and beta hyperparameters used) it is uninformative and makes no strong assumptions with regard to the spread of the noise term. Also, the inverse gamma distribution is a conjugate prior for the unknown variance \(\sigma ^2\) of a normal distribution with known mean \(\mu\). We extend the model to allow modelling of technical replicates wherein the methylation level \(\theta\) is the same for all the different bisulfite-converted DNA libraries from the same biological sample but the experimental parameters (\(\text{seq}_{\rm err}\) and \(\text{BS}_{\rm eff}\)) vary across both the biological replicates and the technical replicates. In the modified model (Figs. 1 and 2), \(N_{\rm{BS,C}}\) and \(N_{\rm{BS}}\) represent the observed "C" and total counts, respectively, from each of the \(M_i\) technical replicates per biological sample \(i \in \{1,\ldots ,N\}\). Note that the experimental parameters \(\text{BS}_{\rm eff}\) and \(\text{seq}_{\rm err}\), taken from the posterior means, are sample- and replicate-specific. To detect differential methylation, hypothesis testing was done using Bayes factors (via the Savage-Dickey density ratio method) as implemented in [11].
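The sketch below shows, under simplifying assumptions, how a Savage-Dickey density-ratio estimate of such a Bayes factor can be formed from posterior draws of a single scalar coefficient (e.g. one of the group-effect entries \(b_{2,\cdot}\) used later in the paper) with a zero-mean normal prior of variance \(\sigma ^2_{\mathbf {B}} = 5\); the actual test in LuxRep uses the priors and coefficient pair of [11], so the numbers here are purely illustrative.

import numpy as np
from scipy.stats import gaussian_kde, norm

def savage_dickey_bf01(posterior_draws, prior_sd):
    # Bayes factor for the point null (coefficient = 0) vs. the unrestricted model,
    # estimated as posterior density at zero divided by prior density at zero
    posterior_at_zero = gaussian_kde(posterior_draws)(0.0)[0]
    prior_at_zero = norm.pdf(0.0, loc=0.0, scale=prior_sd)
    return posterior_at_zero / prior_at_zero

# illustrative draws of a group-effect coefficient from HMC or ADVI
draws = np.random.default_rng(3).normal(0.8, 0.3, size=1000)
bf01 = savage_dickey_bf01(draws, prior_sd=np.sqrt(5.0))  # prior sd from sigma_B^2 = 5
print(bf01, 1.0 / bf01)  # a small BF01 (large BF10) suggests differential methylation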
Variational inference In [11], Hamiltonian Monte Carlo (HMC) was used for model inference (since the model is analytically intractable), whereas in variational inference (VI) the posterior \(p(\phi |\mathbf{X} )\) of a model is approximated with a simpler distribution \(q(\phi ;\rho )\), which is selected from a chosen family of distributions by minimizing the Kullback-Leibler divergence between \(p(\phi |\mathbf{X} )\) and \(q(\phi ;\rho )\). We use the automatic differentiation variational inference algorithm (ADVI) from [14], which is integrated into Stan. ADVI is used to generate samples from the approximate posterior \(q(\phi ;\rho )\). There are a few parameters which can be tuned to make the ADVI algorithm [14] fast but accurate. These parameters are the number of samples used in the Monte Carlo approximation of the evidence lower bound (ELBO), the number of samples used in the Monte Carlo approximation of the gradients of the ELBO, and the number of samples drawn from the approximate posterior distribution. The default values for the number of gradient samples \(N_G\) and ELBO samples \(N_E\) are 1 and 100, respectively. Here we compare the computation times and the precision of the Savage-Dickey estimate computed using HMC and ADVI with different \(N_E\) and \(N_G\) values. The tested values for \(N_E\) were 100, 200, 500 and 1000 and for \(N_G\) 1, 10 and 100. To make the HMC and ADVI methods comparable, the number of samples retrieved from the approximate posterior distribution is set to be the same for both methods. To choose the best number of gradient samples and ELBO samples, simulation tests on the LuxGLM model were executed. These tests were conducted in the following way. First, simulated data from the LuxGLM model were generated. The numbers of reads and replicates were varied (the tested values were 6, 12, 24 and 6, 10, 20, respectively) and for each combination data sets with and without differential methylation were generated. Bayes factors were then calculated using different \(N_E\) and \(N_G\) values. For each setting 100 data sets were simulated and Bayes factors were calculated. Using the computed Bayes factors, ROC curves and AUROC statistics were produced. The computation times for each parameter value combination were also recorded. The results of these tests for the case of 12 reads and 10 replicates are shown in Additional file 1: Figs. S4 and S5. In Additional file 1: Fig. S4 the computation times for different parameter values are shown. In Additional file 1: Fig. S5 the computation time is plotted as a function of the accuracy of the method when compared to the HMC approach. The average computation time for the HMC method is plotted in red. From the figures we can see that, with all tested parameter combinations, computing the Savage-Dickey estimate with ADVI is faster than with HMC. In Additional file 1: Fig. S5, the parameter combinations on the left side of the dashed line are those that gave better precision than the HMC approach. Estimating experimental parameters Samples prepared for BS-seq are typically spiked with unmethylated control DNA (often the lambda phage genome) that allows estimation of the bisulfite conversion efficiency \(\text{BS}_{\rm eff}\). For demonstration purposes, dummy control cytosine data were generated using the model illustrated in Fig. 3.
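For concreteness, the following sketch simulates dummy control-cytosine counts of this kind and shows how a low conversion rate surfaces in the data; the parameter values mirror those listed next, and the closing moment estimate of \(\text{BS}_{\rm eff}\) is only a sanity check, not the probabilistic estimator used by LuxRep.

import numpy as np

rng = np.random.default_rng(0)

def simulate_control_counts(n_cytosines, coverage, bs_eff, seq_err, bs_star_eff, theta_control):
    # probability of reading "C" at a (mostly unmethylated) control cytosine
    p_c_given_c = (1 - bs_eff) * (1 - seq_err) + bs_eff * seq_err
    p_c_given_5mc = (1 - bs_star_eff) * (1 - seq_err) + bs_star_eff * seq_err
    p_c = theta_control * p_c_given_5mc + (1 - theta_control) * p_c_given_c
    total = rng.integers(coverage[0], coverage[1] + 1, size=n_cytosines)  # reads per cytosine
    c_counts = rng.binomial(total, p_c)
    return c_counts, total

# 444 control cytosines, 1-3 reads each, low conversion efficiency, ~0.1% residual methylation
c, tot = simulate_control_counts(444, (1, 3), bs_eff=0.9, seq_err=0.001,
                                 bs_star_eff=0.001, theta_control=0.001)
bs_eff_hat = 1 - c.sum() / tot.sum()   # crude moment estimate ignoring seq_err
print(bs_eff_hat)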
Based on a cursory examination of an actual dataset generated from spiked-in lambda phage DNA (data not shown), bisulfite sequencing data for 444 control cytosines were simulated with the number of reads per cytosine \(N_\text{BS} \in \{1, \dots ,3\}\). For comparison, another set-up was generated with coverage \(N_\text{BS}=10\). Experimental parameters were set to fixed values while the methylation modification fractions \(\theta ^{\text{control}}\) were drawn from \(\text{Dir}(\alpha )\) (parameters listed below). $$\begin{aligned} \alpha _\text{control}&= (999,1) \\ \text{BS}^*_{\rm eff}&= 0.001\\ \text{seq}_{\rm err}&= 0.001\\ \text{BS}_{\rm eff}&\in \{0.995,0.9\}\\ K_\text{control}&= 444\\ N_\text{BS}&\in \{1 \dots 3, 10\} \end{aligned}$$ The choice of 90% as the low bisulfite conversion efficiency is based on Additional file 1: Fig. S1, which shows low conversion efficiencies to be around 90%. To test our method also with a lower conversion efficiency (<90%) we added the conversion efficiency 85% (Additional file 1: Fig. S6). As the plots show, the full model generates a more accurate median on average than the reduced model also at 85% conversion efficiency. Sequencing error and bisulfite conversion rates were estimated using the model illustrated in the plate diagram in Fig. 1 based on the dummy control cytosine data. The incorrect bisulfite conversion rate, \(\text{BS}^*_{\rm eff}\), was set to a fixed value (0.1%) (in LuxGLM it was estimated from control data) because genome-scale bisulfite sequencing experiments typically do not include methylated cytosine control data. The data consist of N biological samples (\(i \in \{1,\ldots ,N\}\)), each of which has \(M_i\) technical replicates corresponding to different bisulfite-converted DNA library preparations. The LuxGLM model [11] was modified to determine experimental parameters for each technical replicate separately (shown as the "replicates" plate in the diagram in Fig. 1). The circles represent latent (white) and observed (gray) variables and the squares/unbordered nodes represent fixed values (for parameters and hyperparameters). Figure 4 shows the estimates for the experimental parameters. LuxRep generated good estimates for \(\text{BS}_{\rm eff}\) and \(\text{seq}_{\rm err}\), particularly with technical replicates that had high \(\text{BS}_{\rm eff}\) (99.5%), even with extremely low coverage (\(N_{BS}=1 \dotsc 3\)). Estimates from technical replicates with higher coverage (\(N_{BS}=10\)), though, were more accurate, with medians closer to the actual values and lower variance. Estimating methylation levels For estimating methylation levels and analyzing differential methylation, we first simulated technical replicates with low (\(\text{BS}_{\rm eff}^B \sim \text{beta}(90,10)\)) and high (\(\text{BS}_{\rm eff}^G \sim \text{beta}(99.5,0.5)\)) BS conversion rates with varying sequencing depth \(N_{\rm{BS}}\) and methylation level (\(\theta \in [0.1,0.9]\)). The datasets were generated following the model illustrated in Fig. 5, with methylation levels and experimental parameters generated following the beta distribution with parameters set to the values listed below.
$$\begin{aligned}&\text{BS}_{\rm eff}^B \sim \text{beta}(90,10)\\&\text{BS}_{\rm eff}^G \sim \text{beta}(99.5,0.5)\\&\text{seq}_{\rm err} \sim \text{beta}(0.1,99.9)\\&\text{BS}^*_{\rm eff} \sim \text{beta}(0.1,99.9)\\&N_\text{BS} \in \{6,12,24\}\\&K_\text{cytosine}=4\\&\theta _1 \sim \text{beta}(100,900)\\&\theta _2 \sim \text{beta}(300,700)\\&\theta _3 \sim \text{beta}(700,300)\\&\theta _4 \sim \text{beta}(900,100)\\&N^\text{control}_\text{BS}=20\\&K^\text{control}_\text{cytosine}=100\\&\theta ^\text{control} \sim \text{Dir}(999,1)\\ \end{aligned}$$ where the Dirichlet distribution is denoted by Dir(\(\cdot\)). Two scenarios were simulated, each consisting of three technical replicates: (i) two replicates with high \(\text{BS}_{\rm eff}\) (i.e. good samples, 'G') and one with low \(\text{BS}_{\rm eff}\) (i.e. bad sample, 'B') ('GGB'), and (ii) one 'G' replicate and two 'B' replicates ('GBB'). Each scenario was analyzed using (i) the full LuxRep model (Fig. 2) and (ii) a reduced model with experimental parameters fixed to \(\text{BS}_{\rm eff}=1\), \(\text{seq}_{\rm err}=0\) and \(\text{BS}^*_{\rm eff}=0\), and using the "C" and "T" counts from only the 'G' samples (those with \(\text{BS}_{\rm eff}=99.5\%\) and above) to simulate the traditional approach of not accounting for experimental parameters (Fig. 6). Results from estimating the models with HMC and ADVI were also compared. Datasets (\(n=100\)) were analysed with the full and reduced LuxRep models with varying methylation levels, varying numbers of reads, different combinations of replicates with varying \(\text{BS}_{\rm eff}\) ('G' and 'B'), and using either HMC or ADVI to evaluate or approximate the posterior, respectively (Fig. 7). For each simulated data set we estimated the methylation level \(\theta\) using the posterior mean of samples (\(S=1000\)) drawn from the posterior (HMC) and approximate posterior (ADVI) distributions. The variance of the estimates using the full model was generally lower compared to the reduced model across \(\theta\) and \(N_{\rm {BS}}\) values (Fig. 7), demonstrating the utility of using LuxRep with replicates of varying \(\text{BS}_{\rm eff}\). The decrease in variance was generally greater in the second scenario ('GBB'), highlighting the capability of LuxRep to make use of samples with low \(\text{BS}_{\rm eff}\). Improvements in the estimates were comparable when using HMC and ADVI. Notable also is the comparable accuracy between the two scenarios 'GGB' and 'GBB', i.e. 'GBB' was about as accurate as 'GGB' even though it had more replicates with low \(\text{BS}_{\rm eff}\). To more directly address the question of whether the full model significantly improves accuracy compared with traditional methods, we performed methylation estimation using the full and reduced (representing traditional approaches) models with varying bisulfite conversion rates, including all samples for both the full and reduced models (Additional file 1: Fig. S6). Lower bisulfite conversion rates (85% and 90%) generated greater differences in the estimates, with the full model generally showing a more accurate median, especially with \(\theta\) values of 0.3 and 0.7. The medians were generally similar at higher bisulfite conversion rates. In terms of variance, the differences varied according to the methylation level and bisulfite conversion rate (e.g. the variance of the full model was generally slightly higher with \(\theta\) values of 0.1 and 0.3, whereas the variance of the reduced model was generally higher with \(\theta = 0.9\)).
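A hedged sketch of this data-generating scheme is shown below: technical replicates share one methylation level but differ in \(\text{BS}_{\rm eff}\) drawn from the beta distributions above, and the "reduced" treatment simply ignores the experimental parameters and keeps only a 'G' replicate; read depths and seeds are illustrative, not the exact simulation code used in the paper.

import numpy as np

rng = np.random.default_rng(4)

def simulate_replicates(theta, bs_effs, n_reads, seq_err=0.001, bs_star_eff=0.001):
    # "C"/total counts for one cytosine across technical replicates that share theta
    counts = []
    for bs_eff in bs_effs:
        p_c_given_c = (1 - bs_eff) * (1 - seq_err) + bs_eff * seq_err
        p_c_given_5mc = (1 - bs_star_eff) * (1 - seq_err) + bs_star_eff * seq_err
        p_c = theta * p_c_given_5mc + (1 - theta) * p_c_given_c
        counts.append((rng.binomial(n_reads, p_c), n_reads))
    return counts

theta = rng.beta(300, 700)                                             # true level around 0.3
bs_ggb = [rng.beta(99.5, 0.5), rng.beta(99.5, 0.5), rng.beta(90, 10)]  # 'GGB' scenario
bs_gbb = [rng.beta(99.5, 0.5), rng.beta(90, 10), rng.beta(90, 10)]     # 'GBB' scenario

reps = simulate_replicates(theta, bs_gbb, n_reads=12)
# "reduced" treatment: drop experimental parameters and keep only the 'G' replicate
naive_theta = reps[0][0] / reps[0][1]
print(theta, reps, naive_theta)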
Since most genomic regions tend to be unmethylated, we queried the estimates when the actual methylation level approaches zero (\(\theta = 0.1\)). As shown in Fig. 7 and Additional file 1: Fig. S6, at low methylation levels (e.g. 0.1) the median is below the actual value, that is, the methylation levels tend to be underestimated. It follows that, for genomic regions that are unmethylated, the method is unlikely to erroneously estimate a higher methylation level. To test the utility of LuxRep on an actual bisulfite sequencing dataset, methylation levels were estimated from an RRBS dataset [12] consisting of two individuals with three replicates each (two low and one high \(\text{BS}_{\rm eff}\); individual 1: 96.38%, 99.32% and 99.96%; individual 2: 94.59%, 98.67% and 99.98%). The replicate with high \(\text{BS}_{\rm eff}\) was analyzed with the full model while the two low \(\text{BS}_{\rm eff}\) replicates were analyzed with both the full and reduced models. The difference in the estimated methylation levels (1000 CpG sites) between the high \(\text{BS}_{\rm eff}\) replicate and the low \(\text{BS}_{\rm eff}\) replicates using the full and reduced models was measured by taking their Euclidean distance, which showed greater similarity when using the full model (individual 1: reduced: 2.29, full: 2.23; individual 2: reduced: 2.55, full: 2.49). Detecting differential methylation Accuracy in determining differential methylation was measured by generating datasets consisting of two groups (A and B) with varying methylation level difference \(\Delta \theta\) between the two groups and with one or two of three replicates having low \(\text{BS}_{\rm eff}\) ('GGB' and 'GBB', respectively). Each group consisted of four biological replicates wherein each biological replicate had three technical replicates (with different sequencing read coverage, \(N_{\rm{BS}}=10\) or \(N_{\rm{BS}}=6\); the standard threshold for total sequencing read coverage is \(N_{\rm{BS}}=10\)). The model for generating simulated data is described in Fig. 8 (where \(\theta \sim \text{Beta}(\alpha _\theta ,\beta _\theta )\), with parameters shown in Table 1).
Table 1. \({\overline{\theta }}\) parameters.
Differential methylation was analysed using the full and reduced LuxRep models (see Figs. 2 and 6, respectively, and, for additional details of the hyperpriors used, [11]), evaluated with HMC and ADVI. Eq. 26 shows the design matrix \({\mathbf {D}}\) and parameter matrix \({\mathbf {B}}\) used in the general linear model component (Bayes factors were computed using the Savage-Dickey density ratio estimator using samples of \(b_\text{2,1}\) and \(b_\text{2,2}\), \(S=1600\) and \(S=1000\) from the posterior distributions approximated with HMC and ADVI, respectively). AUROCs were calculated based on \(\sim\) \(200\) positive (\(\Delta \theta \ne 0\)) and \(\sim\) \(200\) negative (\(\Delta \theta =0\)) samples (Fig. 9). The full model consistently generated higher AUROCs compared to the reduced model, more so with the 'GBB' subsets, showing that LuxRep is able to utilize DNA libraries with low \(\text{BS}_{\rm eff}\) to improve differential methylation analysis.
Fig. 10. Select ROC curves of differential methylation calls generated from the full and reduced models (with technical replicates 'GBB' and 'G', respectively) where \(\theta _\text{A}=0.2\) and \(\theta _\text{B}\) was set to 0.3, 0.4 and 0.5 (top, middle and bottom panels, respectively). Samples were generated from the approximated posterior using variational inference.
Fig. 11. Comparison of running times using HMC and ADVI for model evaluation.
Select ROC curves generated from the full and reduced models show a notable increase in AUROCs when using the full over the reduced model (Fig. 10). Moreover, the difference in AUROCs increases with decreasing \(\Delta \theta\). In addition to AUROC, to provide empirical statistical power, we calculated the true positive rates for differential methylation (Additional file 1: Fig. S7). True positive rates were generally higher in the full model compared to the reduced model, as expected. Comparing running times Running times were measured using the Stan [15] time records and by a Python function, with or without the additional time required for post-processing the output files (i.e. parsing relevant information), with varying numbers of reads (Fig. 11). The computations were performed using a computing cluster; a single core with 2 GB memory was used for the ADVI approximation (HMC sampling could be run more efficiently with one core for each MCMC chain, hence run time was based on the slowest chain). A significant reduction in running times was observed when using ADVI instead of HMC. The LuxRep tool described in this paper allows technical replicates with varying bisulfite conversion efficiency to be included in the analysis. LuxRep improves the accuracy of methylation level estimates and differential methylation analysis and, by using ADVI, lowers the running time of model-based DNA methylation analysis. LuxRep is open source and freely available from https://github.com/tare/LuxGLM/tree/master/LuxRep. Datasets that support the findings of this study are available in [12].
ADVI: Automatic differentiation variational inference
BS-seq: Bisulfite sequencing
Frommer M, McDonald LE, Millar DS, Collis CM, Watt F, Grigg GW, Molloy PL, Paul CL. A genomic sequencing protocol that yields a positive display of 5-methylcytosine residues in individual DNA strands. Proc Natl Acad Sci USA. 1992;89:1827–31.
Akalin A, Kormaksson M, Li S. methylKit: a comprehensive R package for the analysis of genome-wide DNA methylation profiles. Genome Biol. 2012;13:1–9.
Hansen KD, Langmead B, Irizarry RA. BSmooth: from whole genome bisulfite sequencing reads to differentially methylated regions. Genome Biol. 2012;13:1–10.
Dolzhenko E, Smith AD. Using beta-binomial regression for high-precision differential methylation analysis in multifactor whole-genome bisulfite sequencing experiments. BMC Bioinform. 2014;15:1–8.
Hebestreit K, Dugas M, Klein HU. Detection of significantly differentially methylated regions in targeted bisulfite sequencing data. Bioinformatics. 2013;29:1647–53.
Park Y, Figueroa ME, Rozek LS, Sartor MA. MethylSig: a whole genome DNA methylation analysis pipeline. Bioinformatics. 2014;30:2414–22.
Sun D, Xi Y, Rodriguez B, Park HJ, Tong P, Meong M, Goodell MA, Li W. MOABS: model based analysis of bisulfite sequencing data. Genome Biol. 2014;15:1–12.
Park Y, Wu H. Differential methylation analysis for BS-seq data under general experimental design. Bioinformatics. 2016;32:1446–53.
Gaspar JM, Hart RP. DMRfinder: efficiently identifying differentially methylated regions from MethylC-seq data. BMC Bioinform. 2017;18:1–8.
Wreczycka K, Gosdschan A, Yusuf D, Grüning B, Assenov Y, Akalin A. Strategies for analyzing bisulfite sequencing data. J Biotechnol. 2017;261:105–15.
Äijö T, Huang Y, Mannerström H, Chavez L, Tsagaratou A, Rao A, Lähdesmäki H. A probabilistic generative model for quantification of DNA modifications enables analysis of demethylation pathways. Genome Biol. 2016;17:1–22.
Konki M, Malonzo M, Karlsson IK, Lindgren N, Ghimire B, Smolander J, Scheinin NM, Ollikainen M, Laiho A, Elo LL, Lönnberg T, Matias R, Pedersen NL, Kaprio J, Lähdesmäki H, Rinne JO, Lund RJ. Peripheral blood DNA methylation differences in twin pairs discordant for Alzheimer's disease. Clin Epigenet. 2019;11:1–12.
Äijö T, Yue X, Rao A, Lähdesmäki H. LuxGLM: a probabilistic covariate model for quantification of DNA methylation modifications with complex experimental design. Bioinformatics. 2016;32:511–9.
Kucukelbir A, Ranganath R, Gelman A, Blei D. Automatic variational inference in Stan. In: Cortes C, Lee DD, Sugiyama M, Garnett R, editors. Advances in Neural Information Processing Systems 28 (NIPS 2015). Neural Information Processing Systems; 2015. p. 568–576.
Carpenter B, Gelman A, Hoffman MD, Lee D, Goodrich B, Betancourt M, Brubaker M, Guo J, Li P, Riddell A. Stan: a probabilistic programming language. J Stat Softw. 2017;76:1–32.
We acknowledge the computational resources provided by the Aalto Science-IT project, the Finnish Functional Genomics Centre and Biocenter Finland. This work was supported by the Academy of Finland (292660, 311584, 335436). The funding body played no role in the design of the study, the collection, analysis and interpretation of data, or in writing the manuscript.
Department of Computer Science, Aalto University, 00076 Espoo, Finland: Maia H. Malonzo, Viivi Halla-aho & Harri Lähdesmäki. Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland: Mikko Konki, Riikka J. Lund & Harri Lähdesmäki.
MM developed the package, VH contributed to the computational analysis. RL and MK performed lab experiments. MM, VH and HL wrote the manuscript. All authors read and approved the final version of the manuscript. Correspondence to Maia H. Malonzo.
Availability and requirements: Project name: LuxRep. Project home page: https://github.com/tare/LuxGLM/tree/master/LuxRep. Operating system(s): Mac OSX and Linux. Programming language: Python, CmdStan, pystan, Numpy, Scipy. Other requirements: CmdStan (tested on version 2.18.0), Python (tested on version 3.7.5), pystan (tested on version 2.17.1.0), Numpy (tested on version 1.20.2), Scipy (tested on version 1.1.0). LuxRep is freely available at https://github.com/tare/LuxGLM/tree/master/LuxRep along with documentation. License: MIT License. Any restrictions to use by non-academics: Not applicable.
Additional file 1: Additional figures containing: (i) demonstration of differences in technical parameters, (ii) testing different hyperparameters for sequencing error and bisulfite conversion rates, (iii) testing different \(\sigma_B^2\) for methylation level estimation, (iv) choosing parameters for variational inference, (v) comparing full and reduced models in methylation level estimation, and (vi) true positive rates of differential methylation.
Malonzo, M.H., Halla-aho, V., Konki, M. et al. LuxRep: a technical replicate-aware method for bisulfite sequencing data analysis. BMC Bioinformatics 23, 41 (2022). https://doi.org/10.1186/s12859-021-04546-1
Journal of Agricultural, Biological and Environmental Statistics, pp 1–28
A Case Study Competition Among Methods for Analyzing Large Spatial Data
Matthew J. Heaton, Abhirup Datta, Andrew O. Finley, Reinhard Furrer, Joseph Guinness, Rajarshi Guhaniyogi, Florian Gerber, Robert B. Gramacy, Dorit Hammerling, Matthias Katzfuss, Finn Lindgren, Douglas W. Nychka, Furong Sun, Andrew Zammit-Mangion
First Online: 14 December 2018
The Gaussian process is an indispensable tool for spatial data analysts. The onset of the "big data" era, however, has led to the traditional Gaussian process being computationally infeasible for modern spatial data. As such, various alternatives to the full Gaussian process that are more amenable to handling big spatial data have been proposed. These modern methods often exploit low-rank structures and/or multi-core and multi-threaded computing environments to facilitate computation. This study provides, first, an introductory overview of several methods for analyzing large spatial data. Second, this study describes the results of a predictive competition among the described methods as implemented by different groups with strong expertise in the methodology. Specifically, each research group was provided with two training datasets (one simulated and one observed) along with a set of prediction locations. Each group then wrote their own implementation of their method to produce predictions at the given locations, and each implementation was subsequently run on a common computing environment. The methods were then compared in terms of various predictive diagnostics. Supplementary materials regarding implementation details of the methods and code are available for this article online.
Keywords: Big data; Gaussian process; Parallel computing; Low-rank approximation
Supplementary materials for this article are available at https://doi.org/10.1007/s13253-018-00348-w.
For decades, the Gaussian process (GP) has been the primary tool used for the analysis of geostatistical (point-referenced) spatial data (Schabenberger and Gotway 2004; Cressie 1993; Cressie and Wikle 2015; Banerjee et al. 2014). A spatial process \(Y(\varvec{s})\) for \(\varvec{s} \in \mathcal {D} \subset \mathbb {R}^2\) is said to follow a GP if any realization \(\varvec{Y} = (Y(\varvec{s}_1),\dots ,Y(\varvec{s}_N))'\) at the finite number of locations \(\varvec{s}_1,\dots ,\varvec{s}_N\) follows an N-variate Gaussian distribution. More specifically, let \(\mu (\varvec{s}): \mathcal {D} \rightarrow \mathbb {R}\) denote a mean function returning the mean at location \(\varvec{s}\) (typically assumed to be linear in covariates \(\varvec{X}(\varvec{s}) = (1,X_1(\varvec{s}),\dots ,X_P(\varvec{s}))'\)) and \(\mathbb {C}(\varvec{s}_1,\varvec{s}_2): \mathcal {D}^2 \rightarrow \mathbb {R}^+\) denote a positive-definite covariance function. Then, if \(Y(\varvec{s})\) follows a spatial Gaussian process, \(\varvec{Y}\) has the density function, $$\begin{aligned} f_{\varvec{Y}}(\varvec{y})&= \left( \frac{1}{\sqrt{2\pi }}\right) ^{N} |\varvec{\Sigma }|^{-1/2} \exp \left\{ -\frac{1}{2}(\varvec{y}-\varvec{\mu })'\varvec{\Sigma }^{-1}(\varvec{y}-\varvec{\mu })\right\} \end{aligned}$$ where \(\varvec{\mu } = (\mu (\varvec{s}_1),\dots ,\mu (\varvec{s}_N))'\) is the mean vector and \(\varvec{\Sigma } = \{\mathbb {C}(\varvec{s}_i,\varvec{s}_j)\}_{ij}\) is the \(N\times N\) covariance matrix governed by \(\mathbb {C}(\varvec{s}_i,\varvec{s}_j)\) (e.g., the Matérn covariance function).
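To make the computational problem concrete, the following Python sketch evaluates this Gaussian log-density with an exponential covariance (one member of the Matérn family) via a Cholesky factorization; the \(\mathcal {O}(N^3)\) cost of that factorization is exactly what the methods reviewed below try to avoid. All parameter values and names are illustrative assumptions.

import numpy as np
from scipy.spatial.distance import cdist

def gp_loglik(y, coords, X, beta, sigma2, phi, nugget):
    # log-density of a GP with covariance sigma2 * exp(-d / phi) + nugget on the diagonal
    n = len(y)
    d = cdist(coords, coords)                       # N x N pairwise distances
    Sigma = sigma2 * np.exp(-d / phi) + nugget * np.eye(n)
    mu = X @ beta
    # O(N^3) Cholesky factorization: the bottleneck this paper is about
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, y - mu)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + z @ z)

# tiny example; for N in the hundreds of thousands this evaluation is infeasible
rng = np.random.default_rng(5)
coords = rng.uniform(0, 1, size=(200, 2))
X = np.column_stack([np.ones(200), coords[:, 0]])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 1, 200)
print(gp_loglik(y, coords, X, np.array([1.0, 2.0]), sigma2=1.0, phi=0.2, nugget=0.1))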
From this definition, the appealing properties of the Gaussian distribution (e.g., Gaussian marginal and conditional distributions) have rendered the GP an indispensable tool for any spatial data analyst to perform such tasks as kriging (spatial prediction) and proper uncertainty quantification. With the modern onset of larger and larger spatial datasets, however, the use of Gaussian processes for scientific discovery has been hindered by computational intractability. Specifically, evaluating the density in (1) requires \(\mathcal {O}(N^3)\) operations and \(\mathcal {O}(N^2)\) memory, which can quickly overwhelm computing systems when N is only moderately large. Early solutions to this problem included factoring (1) into a series of conditional distributions (Vecchia 1988; Stein et al. 2004), the use of pseudo-likelihoods (Varin et al. 2011; Eidsvik et al. 2014), modeling in the spectral domain (Fuentes 2007) or using tapered covariance functions (Furrer et al. 2006; Kaufman et al. 2008; Stein 2013). Beginning in the late 2000s, several approaches based on low-rank approximations to Gaussian processes were developed (or became popular) including discrete process convolutions (Higdon 2002; Lemos and Sansó 2009), fixed rank kriging (Cressie and Johannesson 2008; Kang and Cressie 2011; Katzfuss and Cressie 2011), predictive processes (Banerjee et al. 2008; Finley et al. 2009), lattice kriging (Nychka et al. 2015) and stochastic partial differential equations (Lindgren et al. 2011). Sun et al. (2012), Bradley et al. (2016) and Liu et al. (2018) provide exceptional reviews of these methods and demonstrate their effectiveness for modeling spatial data. After several years of their use, however, scientists have started to observe shortcomings in many of the above methods for approximating GPs, such as the propensity to oversmooth the data (Simpson et al. 2012; Stein 2014) and even, for some of these methods, an upper limit on the size of the dataset that can be modeled. Hence, recent scientific research in this area has focused on the efficient use of modern computing platforms and the development of methods that are parallelizable. For example, Paciorek et al. (2015) show how (1) can be calculated using parallel computing, while Katzfuss and Hammerling (2017) and Katzfuss (2017) develop a basis-function approach that lends itself to distributed computing. Alternatively, Barbian and Assunção (2017) and Guhaniyogi and Banerjee (2018) propose dividing the data into a large number of subsets, drawing inference on the subsets in parallel and then combining the inferences. Datta et al. (2016a, c) build upon Vecchia (1988) by developing novel approaches to factoring (1) as a series of conditional distributions based only on nearest neighbors. Given the plethora of choices to analyze large spatially correlated data, for this paper, we seek to not only provide an overview of modern methods to analyze massive spatial datasets, but also lightly compare the methods in a unique way. Specifically, this research implements the common task framework of Wikle et al. (2017) by describing the outcome of a friendly case study competition between various research groups across the globe who each implemented their own method to analyze the same spatial datasets (see the list of participating groups in Table 1).
That is, several research groups were provided with two spatial datasets (one simulated and one real) with a portion of each dataset removed to validate predictions (research groups were not provided with the removed portion so that this study is "blinded"). The simulated data represent a scenario where the Gaussian process assumption is valid (i.e., a correctly specified model), whereas the real dataset represents a scenario in which the model is potentially mis-specified due to inherent non-stationarity or non-Gaussian errors. Each group then implemented their unique method and provided a prediction (and prediction interval or standard error) of the spatial process at the held-out locations. The predictions were compared by a third party and are summarized herein. The case study competition described herein is unique and novel in that, typically, comparisons/reviews of various methods are done by a single research group implementing each method (see Sun et al. 2012; Bradley et al. 2016). However, a single research group may be more or less acquainted with some methods, leading to a possibly unfair comparison with those methods they are less familiar with. In contrast, for the comparison/competition here, each method was implemented by a research group with strong expertise in the method and who is well-versed in any possible intricacies associated with its use. Further, unlike the previous reviews of Sun et al. (2012) and Bradley et al. (2016), we provide a comparison of each method's ability to quantify the uncertainty associated with predictions. Hence, in terms of scientific contributions, this paper (i) serves as a valuable review, (ii) discusses a unique case study comparison of spatial methods for large datasets, (iii) provides code to implement each method to practitioners (see supplementary materials), (iv) provides a comparison of the uncertainty quantification associated with each method and (v) establishes a framework for future studies to follow when comparing various analytical methods. The remainder of this paper is organized as follows. Section 2 gives a brief background on each method. Section 3 provides the setting for the comparison along with background on the datasets. Section 4 then summarizes the results of the comparison in terms of predictive accuracy, uncertainty quantification and computation time. Section 5 draws conclusions from this study and highlights future research areas for the analysis of massive spatial data. 2 Overview of Methods for Analyzing Large Spatial Data This section contains a brief overview of the competitors in this case study competition. For convenience, we group the methods into one of the following categories: (i) low rank, (ii) sparse covariance matrices, (iii) sparse precision matrices and (iv) algorithmic. The low-rank approaches are so classified because these typically involve reducing the rank of the \(N\times N\) matrix \({\varvec{\Sigma }}\). Sparse covariance methods work by introducing "0's" into \({\varvec{\Sigma }}\), allowing for sparse matrix computations. Sparse precision methods, in contrast, induce sparsity in the precision matrix to allow for efficient computation. Algorithmic approaches (perhaps the most vaguely defined category) differ from the previous approaches in that they take a more transductive approach to learning, focusing more on fitting schemes than on model building. Importantly, we introduce these categories as a subjective classification purely for clarity in exposition.
As with any subjective grouping, a single method may include pieces of various categories. As such, we strongly encourage viewing the method as a whole rather than solely through the lens of our subjective categorization. 2.1 Low-Rank Methods 2.1.1 Fixed Rank Kriging Fixed Rank Kriging (FRK, Cressie and Johannesson 2006, 2008) is built around the concept of a spatial random effects (SRE) model. In FRK, the process \(\widetilde{Y}(\varvec{s}), \varvec{s} \in \mathcal {D},\) is modeled as $$\begin{aligned} \widetilde{Y}(\varvec{s}) = \mu (\varvec{s}) + w(\varvec{s}) + \xi (\varvec{s}),\quad \varvec{s} \in \mathcal {D}, \end{aligned}$$ where \(\mu (\varvec{s})\) is the mean function that is itself modeled as a linear combination of known covariates (i.e., \(\mu (\varvec{s}) = \varvec{X}'(\varvec{s})\varvec{\beta }\) where \(\varvec{X}(\varvec{s})\) is a vector of covariates evaluated at location \(\varvec{s}\) and \(\varvec{\beta }\) are the associated coefficients), \(w(\varvec{s})\) is a smooth process, and \(\xi (\varvec{s})\) is a fine-scale process, modeled to be approximately spatially uncorrelated with variance \(\sigma ^2_\xi v(\varvec{s})\) where \(v(\varvec{s})\) is a known weighting function. The process \(\xi (\varvec{s})\) in (2) is designed to soak up variability in \(\widetilde{Y}(\varvec{s})\) not accounted for by \(w(\varvec{s})\). The primary assumption of FRK is that the spatial process \(w(\cdot )\) can be approximated by a linear combination of K basis functions \(\varvec{h}(\varvec{s}) = (h_1(\varvec{s}),\dots ,h_K(\varvec{s}))', \varvec{s} \in \mathcal {D},\) and K basis-function coefficients \(\varvec{w}^\star = (w_1^\star ,\dots ,w_K^\star )'\) such that, $$\begin{aligned} w(\varvec{s}) \approx {\widetilde{w}}(\varvec{s}) = \sum _{k=1}^K h_k(\varvec{s})w^\star _k, \quad \varvec{s} \in \mathcal {D}. \end{aligned}$$ The use of K basis functions ensures that all estimation and prediction equations only contain inverses of matrices of size \(K \times K\), where \(K \ll N\). In practice, the set \(\{h_k(\varvec{\cdot })\}\) in (3) is comprised of functions at R different resolutions such that (3) can also be written as $$\begin{aligned} {\widetilde{w}}(\varvec{s}) = \sum _{r=1}^R\sum _{k=1}^{K_r} h_{rk}(\varvec{s})w_{rk}^\star ,\quad \varvec{s} \in \mathcal {D}, \end{aligned}$$ where \(h_{rk}(\varvec{s})\) is the \(k\mathrm{th}\) spatial basis function at the \(r\mathrm{th}\) resolution with associated coefficient \(w^\star _{rk}\), and \(K_r\) is the number of basis functions at the \(r\mathrm{th}\) resolution, such that \(K=\sum _{r=1}^R K_r\) is the total number of basis functions used. In the experiments we used \(R=3\) resolutions of bisquare basis functions and a total of \(K = 475\) basis functions. The coefficients \(\varvec{w}^\star = (w^\star _{rk}: r = 1,\dots ,R;~k = 1,\dots , K_r)'\) have as covariance matrix \(\mathbb {V}\text {ar}(\varvec{w}^\star ) = \varvec{\Sigma }_{w^\star }(\varvec{\theta })\), where \(\varvec{\theta }\) are parameters that need to be estimated. 
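The sketch below illustrates a multiresolution basis expansion of the kind in (3)–(4) using bisquare local basis functions; the grids, apertures and coefficient draws are our own illustrative choices and do not reproduce the \(K = 475\) configuration used in the competition entry.

import numpy as np

def bisquare(s, centers, radius):
    # local bisquare basis functions: (1 - (d/r)^2)^2 for d < r, else 0
    d = np.linalg.norm(s[None, :] - centers, axis=1)
    return np.where(d < radius, (1.0 - (d / radius) ** 2) ** 2, 0.0)

# three resolutions of regularly spaced centres on the unit square (illustrative)
resolutions = []
for n_side, radius in [(4, 0.5), (8, 0.25), (16, 0.125)]:
    g = np.linspace(0, 1, n_side)
    centers = np.array(np.meshgrid(g, g)).reshape(2, -1).T
    resolutions.append((centers, radius))

def h(s):
    # concatenated basis vector h(s) across all resolutions (length K)
    return np.concatenate([bisquare(s, c, r) for c, r in resolutions])

K = sum(c.shape[0] for c, _ in resolutions)          # total number of basis functions
w_star = np.random.default_rng(6).normal(size=K)     # coefficients; their covariance is Sigma_w*(theta)
w_tilde = h(np.array([0.3, 0.7])) @ w_star           # low-rank approximation of w(s) at one location
print(K, w_tilde)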
In this work, \(\varvec{\Sigma }_{w^\star }(\varvec{\theta })\) is a block-diagonal matrix composed from R dense matrices, where the \(r\mathrm{th}\) block has \((i,j)\mathrm{th}\) element \(\sigma ^2_r\exp (-d_r(i,j)/\phi _r)\) and where \(d_r(i,j)\) is the distance between the centroids of the \(i\mathrm{th}\) and \(j\mathrm{th}\) basis function at the \(r\mathrm{th}\) resolution; \(\sigma ^2_r\) is the variance at the \(r\mathrm{th}\) resolution; \(\phi _r\) is the spatial correlation parameter of the exponential correlation function at the \(r\mathrm{th}\) resolution; and \(\varvec{\theta } = (\sigma ^2_1,\dots ,\sigma ^2_R,\phi _1,\dots ,\phi _R)'\). Note that \(\varvec{\Sigma }_{w^\star }(\varvec{\theta })\) can also be unstructured in which case \(K(K+1)/2\) parameters need to be estimated; however, this case is not considered here. There are several variants of FRK. In this work, we use the implementation by Zammit-Mangion and Cressie (2018) which comes in the form of the R package FRK, available from the Comprehensive R Archive Network (CRAN). In this paper we utilize v0.1.6 of that package. In FRK, the process evaluated at \(\varvec{s}_i\), \(\widetilde{Y}(\varvec{s}_i)\), is assumed to be observed with measurement error \(\varepsilon (\varvec{s}_i)\). The data model is therefore given by $$\begin{aligned} Y(\varvec{s}_i) = \mu (\varvec{s}_i) + {\widetilde{w}}(\varvec{s}_i) + \xi (\varvec{s}_i) + \varepsilon (\varvec{s}_i), \quad i = 1,\dots , N, \end{aligned}$$ where \(\varepsilon (\varvec{s}_i)\) denotes independent and identically normally distributed measurement error with mean 0 and measurement-error variance \(\sigma ^2_\varepsilon \). Under this specification, the joint model for \(Y(\cdot )\) evaluated at all N observed locations is, $$\begin{aligned} \varvec{Y} = {\varvec{X}}\varvec{\beta }+{\varvec{H}}\varvec{w}^\star + \varvec{\xi } + \varvec{\varepsilon }, \end{aligned}$$ where \({\varvec{X}}\) is the design matrix; \(\varvec{\beta }\) are the regression coefficients; \({\varvec{H}}\) is the \(N\times K\) matrix of spatial basis functions with associated random coefficients \(\varvec{w}^\star \sim \mathcal {N}(\varvec{0},\varvec{\Sigma }_{w^\star }(\varvec{\theta }))\); \(\varvec{\xi } \sim \mathcal {N}(\varvec{0},\sigma ^2_\xi {\varvec{D}})\) with \({\varvec{D}}\) being a known, diagonal weight matrix specified by the user (here we just use \({\varvec{D}} = {\varvec{I}}\) but this need not be the case); and \(\varvec{\varepsilon } \sim \mathcal {N}(\varvec{0},\sigma ^2_\varepsilon {\varvec{I}})\). The package FRK is used to first estimate \(\varvec{\theta }, \sigma ^2_\xi \) and \(\sigma ^2_\varepsilon \) using a combination of semivariogram and maximum-likelihood techniques (see Kang et al. 2009) and, subsequently, do prediction with the estimated parameters 'plugged-in.' More details on the implementation of FRK for this study are included in the supplementary materials. 2.1.2 Predictive Processes For the predictive-process (PP) approach, let \(\varvec{s}^\star _1,\dots ,\varvec{s}^\star _K\) denote a set of "knot" locations well dispersed over the spatial domain \(\mathcal {D}\). Assume that the SREs (\(w(\varvec{s})\)) in (2) follow a mean zero Gaussian process with covariance function \(\mathbb {C}(\varvec{s},\varvec{s}') = \sigma ^2_w \rho (\varvec{s},\varvec{s}')\) where \(\rho (\cdot ,\cdot )\) is a positive-definite correlation function.
2.1.2 Predictive Processes

For the predictive-process (PP) approach, let \(\varvec{s}^\star _1,\dots ,\varvec{s}^\star _K\) denote a set of "knot" locations well dispersed over the spatial domain \(\mathcal {D}\). Assume that the SREs (\(w(\varvec{s})\)) in (2) follow a mean zero Gaussian process with covariance function \(\mathbb {C}(\varvec{s},\varvec{s}') = \sigma ^2_w \rho (\varvec{s},\varvec{s}')\) where \(\rho (\cdot ,\cdot )\) is a positive-definite correlation function. Under this Gaussian process assumption, the SREs \(\varvec{w}^\star = (w(\varvec{s}^\star _1),\dots ,w(\varvec{s}^\star _K))' \sim \mathcal {N}(0,{\varvec{\Sigma }}_{w^\star })\) where \({\varvec{\Sigma }}_{w^\star }\) is a \(K\times K\) covariance matrix with \((i,j)\mathrm{th}\) element \(\mathbb {C}(\varvec{s}^\star _i,\varvec{s}_j^\star )\). The PP approach exploits the Gaussian process assumption for the SREs and replaces \(w(\varvec{s})\) in (2) with $$\begin{aligned} {\widetilde{w}}(\varvec{s}) = \mathbb {C}'(\varvec{s},\varvec{s}^\star ){\varvec{\Sigma }}_{w^\star }^{-1}\varvec{w}^\star \end{aligned}$$ where \(\mathbb {C}(\varvec{s},\varvec{s}^\star ) = (\mathbb {C}(\varvec{s},\varvec{s}_1^\star ),\dots ,\mathbb {C}(\varvec{s},\varvec{s}_K^\star ))'\). Note that (7) can be equivalently written as the basis function expression given above in (3), where the basis functions are \(\mathbb {C}'(\varvec{s},\varvec{s}^\star ){\varvec{\Sigma }}_{w^\star }^{-1}\) and \(\varvec{w}^\star \) effectively plays the role of the basis coefficients. In the subsequent analyses presented in Sect. 4, we applied a fairly coarse 14\(\times \)14 knot grid in an attempt to balance computing time with predictive performance. Increasing the number of knots beyond 196 would improve inference at the cost of a longer run time. Finley et al. (2009) noted that the basis-function expansion in (7) systematically underestimates the marginal variance \(\sigma ^2_w\) of the original process. That is, \(\mathbb {V}\text {ar}({\widetilde{w}}(\varvec{s})) = \mathbb {C}'(\varvec{s},\varvec{s}^\star ){\varvec{\Sigma }}_{w^\star }^{-1}\mathbb {C}(\varvec{s},\varvec{s}^\star ) \le \sigma ^2_w\). To counterbalance this underestimation of the variance, Finley et al. (2009) use the structure in (5), $$\begin{aligned} Y(\varvec{s}) = \mu (\varvec{s}) + {\widetilde{w}}(\varvec{s}) + \xi (\varvec{s}) + \varepsilon (\varvec{s}) \end{aligned}$$ where \(\xi (\varvec{s})\) are spatially independent with distribution \(\mathcal {N}(0,\sigma ^2_w-\mathbb {C}'(\varvec{s},\varvec{s}^\star ){\varvec{\Sigma }}_{w^\star }^{-1}\mathbb {C}(\varvec{s},\varvec{s}^\star ))\) such that \(\mathbb {V}\text {ar}({\widetilde{w}}(\varvec{s}) + \xi (\varvec{s})) = \sigma ^2_w\), as in the original parent process. This adjustment in (8) is called the "modified" predictive process and is what is used in this competition. As with FRK, the associated likelihood under (8) only requires calculating the inverse and determinant of a dense \(K\times K\) matrix and of diagonal \(N\times N\) matrices, which results in massive computational savings when \(K \ll N\). However, one advertised advantage of using the PP approach as opposed to FRK or LatticeKrig is that the PP basis functions are completely determined by the choice of covariance function \(\mathbb {C}(\cdot ,\cdot )\). Hence, the PP approach is unaltered even when considering modeling complexities such as anisotropy, non-stationarity or even multivariate processes. At the same time, however, when \(\mathbb {C}(\cdot ,\cdot )\) is governed by unknown parameters (which is nearly always the case), the PP basis functions need to be calculated iteratively rather than once as in FRK or LatticeKrig, which increases computation time.
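The following is a small illustrative sketch, assuming an exponential covariance with known parameters (which the actual implementation estimates iteratively), of how the predictive-process basis in (7) and the bias-correcting variance of \(\xi (\varvec{s})\) in (8) can be computed; it is not the code used in the competition.

```python
import numpy as np

def exp_cov(A, B, sig2_w, phi):
    """Exponential covariance sig2_w * exp(-d/phi) between rows of A and B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return sig2_w * np.exp(-d / phi)

def modified_pp(locs, knots, sig2_w, phi):
    """Predictive-process basis and the variance of the correcting term xi(s).

    Rows of `basis` are C(s, s*)' Sigma_{w*}^{-1}; `var_xi` is
    sig2_w - C(s, s*)' Sigma_{w*}^{-1} C(s, s*), the variance of xi(s) in (8).
    """
    C_knots = exp_cov(knots, knots, sig2_w, phi)      # K x K covariance of w*
    C_cross = exp_cov(locs, knots, sig2_w, phi)       # n x K cross-covariances
    basis = np.linalg.solve(C_knots, C_cross.T).T     # n x K PP basis functions
    var_wtilde = np.sum(basis * C_cross, axis=1)      # Var(w_tilde(s)) per location
    var_xi = sig2_w - var_wtilde                      # nonnegative by construction
    return basis, var_xi
```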
2.2 Sparse Covariance Methods

2.2.1 Spatial Partitioning

Let the spatial domain \(\mathcal {D} = \bigcup _{d=1}^D \mathcal {D}_d\) where \(\mathcal {D}_1,\dots ,\mathcal {D}_D\) are subregions that form a partition (i.e., \(\mathcal {D}_{d_1} \bigcap \mathcal {D}_{d_2} = \emptyset \) for all \(d_1 \ne d_2\)). The modeling approach based on spatial partitioning is to again assume the model in (6) but to additionally assume independence between observations across subregions. More specifically, if \(\varvec{Y}_d = \{Y(\varvec{s}_i): \varvec{s}_i \in \mathcal {D}_d\}\) where \(d=1,\dots ,D\), then $$\begin{aligned} \varvec{Y}_d&= {\varvec{X}}_d\varvec{\beta }+{\varvec{H}}_d\varvec{w}^\star +\varvec{\xi }_d + \varvec{\varepsilon }_d \end{aligned}$$ where \({\varvec{X}}_d\) is a design matrix containing covariates associated with \(\varvec{Y}_d\), \({\varvec{H}}_d\) is a matrix of spatial basis functions (such as those used in predictive processes, fixed rank kriging or lattice kriging mentioned above) and \(\varvec{\xi }_d\) and \(\varvec{\varepsilon }_d\) are the subvectors of \(\varvec{\xi }\) and \(\varvec{\varepsilon }\) corresponding to region d. Notice that, in (9), each subregion shares common \(\varvec{\beta }\) and \(\varvec{w}^\star \) parameters, which allows smoothing across subregions in spite of the independence assumption. Further, the assumption of independence across subregions effectively creates a block-diagonal structure for \({\varvec{\Sigma }}\) and allows the likelihood to be computed in parallel (with one node per subregion), thereby facilitating computation. By way of distinction, this approach is inherently different from the "divide and conquer" approach (Liang et al. 2013; Barbian and Assunção 2017). In the divide and conquer approach, the full dataset is subsampled, the model is fit to each subset and the results across subsamples are pooled. In contrast, the spatial partition approach uses all the data simultaneously in obtaining estimates, but the independence across regions facilitates computation. The key to implementing the spatial partitioning approach is the choice of partition, and the literature is replete with various options. A priori methods to define the spatial partitioning include partitioning the region into equal areas (Sang et al. 2011), partitioning based on centroid clustering (Knorr-Held and Raßer 2000; Kim et al. 2005) and hierarchical clustering based on spatial gradients (Anderson et al. 2014; Heaton et al. 2017). Alternatively, model-based approaches to spatial partitioning include treed regression (Konomi et al. 2014) and mixture modeling (Neelon et al. 2014), but these approaches typically require more computation. For this analysis, several partitioning schemes were considered, but each resulted in approximately equivalent model fits to the training data. Hence, based on the results from the training data, for the competition below we used an equal-area partition with approximately 6000 observations per subregion.
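As an illustration of the computational gain, a minimal sketch of the blocked Gaussian log-likelihood implied by (9) is given below; the inputs and parameter names are placeholders, and in practice each term in the sum would be evaluated on a separate node.

```python
import numpy as np
from scipy.stats import multivariate_normal

def partitioned_loglik(blocks, beta, Sigma_w, sig2_xi, sig2_eps):
    """Log-likelihood under the block-independence assumption in (9).

    `blocks` is a list of (y_d, X_d, H_d) triples, one per subregion; beta and
    the basis-coefficient covariance Sigma_w are shared across subregions.
    Each term is independent of the others, so the loop could be distributed
    with one worker per subregion.
    """
    ll = 0.0
    for y_d, X_d, H_d in blocks:
        mean_d = X_d @ beta
        cov_d = H_d @ Sigma_w @ H_d.T + (sig2_xi + sig2_eps) * np.eye(len(y_d))
        ll += multivariate_normal.logpdf(y_d, mean=mean_d, cov=cov_d)
    return ll
```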
2.2.2 Covariance Tapering

The idea of covariance tapering is based on the fact that many entries in the covariance matrix \(\varvec{\Sigma }\) in (1) are close to zero and the associated location pairs could be considered as essentially independent. Covariance tapering multiplies the covariance function \(\mathbb {C}(\varvec{s}_i,\varvec{s}_j)\) with a compactly supported covariance function, resulting in another positive-definite covariance function, but one with compact support. From a theoretical perspective, covariance tapering (in the framework of infill asymptotics) relies on the concepts of Gaussian equivalent measures and mis-specified covariance functions (see, e.g., Stein 1999 and references therein). Furrer et al. (2006) assumed a second-order stationary and isotropic Matérn covariance and showed asymptotic optimality for prediction under tapering. This idea has been extended to different covariance structures (Stein 2013), non-Gaussian responses (Hirano and Yajima 2013) and multivariate and/or spatiotemporal settings (Furrer et al. 2016). From a computational perspective, the compact support of the resulting covariance function provides the required computational savings, since sparse matrix algorithms can be employed to efficiently solve systems of linear equations. More precisely, to evaluate the density (1), a Cholesky factorization of \({\varvec{\Sigma }}\) is performed followed by two solves of triangular systems. For typical spatial data settings, the solve algorithm is effectively linear in the number of observations. For parameter estimation in the likelihood framework, one- and two-taper approaches exist (see Kaufman et al. 2008; Du et al. 2009; Wang and Loh 2011; Bevilacqua et al. 2016, for relevant literature). To distinguish the two approaches, notice that the likelihood in (1) can be rewritten as $$\begin{aligned} f_{\varvec{Y}}(\varvec{y})&= \left( \frac{1}{\sqrt{2\pi }}\right) ^{N} |\varvec{\Sigma }|^{-1/2} \text {etr}\left\{ -\frac{1}{2}(\varvec{y}-\varvec{\mu })(\varvec{y}-\varvec{\mu })'{\varvec{\Sigma }}^{-1}\right\} \end{aligned}$$ where \(\text {etr}({\varvec{A}}) = \exp (\text {trace}({\varvec{A}}))\). In the one-taper setting, only the covariance is tapered such that \({\varvec{\Sigma }}\) in (10) is replaced by \({\varvec{\Sigma }}\odot {\varvec{T}}\), where "\(\odot \)" denotes the Hadamard product and \({\varvec{T}}\) is the \(N\times N\) tapering matrix. In the two-taper approach, both the covariance and the empirical covariance are affected, such that not only is \({\varvec{\Sigma }}\) replaced by \({\varvec{\Sigma }}\odot {\varvec{T}}\) but \((\varvec{y}-\varvec{\mu })(\varvec{y}-\varvec{\mu })'\) is replaced by \((\varvec{y}-\varvec{\mu })(\varvec{y}-\varvec{\mu })' \odot {\varvec{T}}\). The one-taper approach results in biased estimates of model parameters, while the two-taper approach is based on estimating equations (and is, therefore, unbiased) but comes at the price of a severe loss of computational efficiency. If the one-taper biased estimates of model parameters are used for prediction, the biases may result in some loss of predictive accuracy (Furrer et al. 2016). Although tapering can be adapted to better take into account uneven densities of locations and complex anisotropies, we use a simple, straightforward approach for this competition. The implementation here relies almost exclusively on the R package spam (Furrer and Sain 2010; Furrer 2016). As an alternative to likelihood approaches, and in view of computational costs, we minimized the squared difference between an empirical covariance and a parameterized covariance function. The gridded structure of the data is exploited, and the empirical covariance is estimated for a specific set of locations only; this approach is thus close to classical variogram estimation and fitting (Cressie 1993).
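The sketch below illustrates the one-taper idea with an exponential covariance and a Wendland-type taper (the particular taper function and the nugget handling are assumptions for illustration); it is not the spam-based implementation used in the competition, and a sparse LU factorization stands in for the sparse Cholesky factorization described above.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu
from scipy.spatial import cKDTree

def tapered_neg2_loglik(locs, y_centered, sig2, phi, sig2_eps, taper_range):
    """-2 * Gaussian log-likelihood under a one-taper exponential covariance.

    Only location pairs within `taper_range` are stored: the exponential
    covariance is multiplied (Hadamard product) by a compactly supported
    Wendland-type taper, and the diagonal (variance plus nugget) is added
    explicitly.  Sparse LU provides both the solve and the log-determinant.
    """
    tree = cKDTree(locs)
    pairs = tree.sparse_distance_matrix(tree, taper_range, output_type="coo_matrix")
    d = pairs.data
    keep = d > 0                                              # off-diagonal pairs only
    taper = (1 - d[keep] / taper_range) ** 4 * (1 + 4 * d[keep] / taper_range)
    vals = sig2 * np.exp(-d[keep] / phi) * taper
    Sigma = sp.coo_matrix((vals, (pairs.row[keep], pairs.col[keep])), shape=pairs.shape)
    Sigma = (Sigma + (sig2 + sig2_eps) * sp.eye(len(y_centered))).tocsc()
    lu = splu(Sigma)
    logdet = np.sum(np.log(np.abs(lu.U.diagonal())))          # log |Sigma|
    quad = y_centered @ lu.solve(y_centered)                  # y' Sigma^{-1} y
    return logdet + quad + len(y_centered) * np.log(2 * np.pi)
```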
2.3 Sparse Precision Methods

2.3.1 LatticeKrig

LatticeKrig (LK, Nychka et al. 2015) uses nearly the same setup as is employed by FRK. Specifically, LK assumes the model (6) but omits the fine-scale process \(\xi (\cdot )\). LatticeKrig also follows the multiresolution approach in (4) for the matrix \({\varvec{H}}\), but uses a different structure and different constraints from FRK. First, the marginal variance of each resolution \(\varvec{h}_{r}'(\varvec{s})\varvec{w}_r^\star \), where \(\varvec{h}_r(\varvec{s}) = (h_{r1}(\varvec{s}),\dots ,h_{rK_r}(\varvec{s}))'\) are the basis functions of the \(r\mathrm{th}\) resolution with coefficients \(\varvec{w}^\star _{r} = (w^\star _{r1},\dots ,w^\star _{rK_r})'\), is constrained to be \(\sigma ^2_{w^\star }\alpha _r\), where \(\sigma ^2_{w^\star },\alpha _r>0\) and \(\sum _{r=1}^R\alpha _r = 1\). To further reduce the number of parameters, LK sets \(\alpha _r \propto r^{-\nu }\), where \(\nu \) is a single free parameter. LatticeKrig obtains multiresolution radial basis functions by translating and scaling a radial function in the following manner. Let \(\varvec{u}_{rk}\) for \(r=1,\dots ,R\) and \(k=1,\dots ,K_r\) denote a regular grid of \(K_r\) points on \(\mathcal {D}\) corresponding to resolution r. For this article, LK defines $$\begin{aligned} h_{rk}(\varvec{s}) = \psi (\Vert \varvec{s}-\varvec{u}_{rk}\Vert /\lambda _r) \end{aligned}$$ where the distance is taken to be Euclidean because the spatial region in this case is of small geographic extent, and \(\lambda _r = 2^{-r}\). Further, LK defines $$\begin{aligned} \psi (d) \propto {\left\{ \begin{array}{ll}\frac{1}{3}(1-d)^6 ( 35d^2 + 18d+3) &{} \text { if } d \le 1\\ 0 &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$ These are Wendland polynomials, which are positive definite (an attractive property when the basis is used for interpolation). Finally, the basis functions in (12) are normalized at each resolution so that the process marginal variance at every \(\varvec{s}\) is \(\sigma ^2_{w^\star } \alpha _r\). This reduces edge effects and makes for a better approximation to a stationary covariance function. LatticeKrig assumes the coefficients at each resolution \(\varvec{w}^\star _{r} = (w^\star _{r1},\dots ,w^\star _{rK_r})'\) are independent (similar to the block-diagonal structure used in FRK) and follow a multivariate normal distribution with covariance \(\varvec{Q}_r^{-1}(\phi _r)\) parameterized by a single parameter \(\phi _r\). Because the locations \(\{\varvec{u}_{rk}\}_{k=1}^{K_r}\) are prescribed to be a regular grid, LK uses a spatial autoregression/Markov random field (see Banerjee et al. 2014, Section 4.4) structure for \(\varvec{Q}_r(\phi _r)\), leading to sparsity and computational tractability. Furthermore, because \(\varvec{Q}_r(\phi _r)\) is sparse, LK can set K to be very large (as in this competition where \(K=136,000 > N\)) without much additional computational cost. The supplementary material to this article contains additional information about the implementation of LatticeKrig used in this case study.
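A minimal sketch of the multiresolution radial basis construction in (11)-(12) is given below; the node grids, unit-square domain and grid sizes are placeholders chosen for illustration, whereas the LatticeKrig package chooses these (and normalizes each resolution) internally.

```python
import numpy as np

def wendland(d):
    """The compactly supported radial function in (12), up to proportionality."""
    out = np.zeros_like(d)
    m = d <= 1
    out[m] = (1 - d[m]) ** 6 * (35 * d[m] ** 2 + 18 * d[m] + 3) / 3
    return out

def lk_basis(locs, R=3, grid_sizes=(8, 16, 32)):
    """Stack LatticeKrig-style radial basis functions over R resolutions on [0,1]^2.

    For resolution r, nodes u_rk sit on a regular grid and the scale is
    lambda_r = 2^{-r}; the returned matrix has one column per basis function.
    """
    cols = []
    for r in range(1, R + 1):
        g = np.linspace(0.0, 1.0, grid_sizes[r - 1])
        nodes = np.array([(x, y) for x in g for y in g])      # regular grid u_rk
        lam = 2.0 ** (-r)                                     # scale lambda_r
        d = np.linalg.norm(locs[:, None, :] - nodes[None, :, :], axis=-1) / lam
        cols.append(wendland(d))
    return np.hstack(cols)                                    # N x (K_1 + ... + K_R)
```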
2.3.2 Multiresolution Approximations

The multiresolution approximation (MRA) can be viewed as a combination of several previously described approaches. Similar to FRK or LatticeKrig, the MRA also uses the basis-function approach in (4), but with compactly supported basis functions at different resolutions. In contrast to FRK or LatticeKrig, the MRA basis functions and the prior distribution of the corresponding weights are chosen using the predictive-process approach to automatically adapt to any given covariance function \(\mathbb {C}(\cdot ,\cdot )\), and so the MRA can adjust flexibly to a desired spatial smoothness and dependence structure. Scalability of the MRA is ensured in that, for increasing resolution, the number of basis functions increases while the support of each function (i.e., the part of the spatial domain in which it is nonzero) decreases, allowing the total number of basis functions to be approximately the same as the number of observations. Decreasing support (and increasing sparsity of the covariance matrices of the corresponding weights) is achieved either by increasingly severe tapering of the covariance function (MRA-taper; Katzfuss and Gong 2017) or by recursively partitioning the spatial domain (MRA-block; Katzfuss 2017). This can lead to (nearly) exact approximations with quasilinear computational complexity. While the MRA-taper has some attractive smoothness properties, we focus here on the MRA-block, which is based on a recursive partitioning of the domain \(\mathcal {D}\) into smaller and smaller subregions up to some level M. Within each (sub-)region at each resolution, there is a small number, say \(r_0\), of basis functions. The resulting approximation of the process (including its variance and smoothness) in each region at resolution M is exact. In addition, it is feasible to compute and store the joint posterior covariance matrix (i.e., not just its inverse as with related approaches) for a large number of prediction locations as a product of two sparse matrices (Jurek and Katzfuss 2018). The MRA-block is designed to take full advantage of high-performance computing systems, in that inference is well suited for massively distributed computing, with limited communication overhead. The computational task is split into small parts by assigning a computational node to each region of the recursive partitioning. The nodes then deal in parallel with the basis functions corresponding to their assigned regions, leading to a polylogarithmic computational complexity. For this project, we use \(M=9\) levels, partition each domain into 2 parts and set the number of basis functions in each partition to \(r_0=64\).

2.3.3 Stochastic PDEs

The stochastic partial differential equation approach (SPDE) is based on the equivalence between Matérn covariance fields and stochastic PDEs, in combination with the Markov property that, on two-dimensional domains, holds for integer-valued smoothness parameters in the Matérn family. The starting point is a basis expansion for \(w(\varvec{s})\) of the form (3), where the basis functions \(h_k(\varvec{s})\) are chosen to be piecewise linear on a triangulation of the domain (Lindgren et al. 2011). The optimal joint distribution for the \(w_k^\star \) coefficients is obtained through a finite element construction, which leads to a sparse inverse covariance matrix (precision) \(\varvec{Q}_{w^\star }(\varvec{\phi })\). The precision matrix elements are polynomials in the precision and inverse range parameters (\(1/\phi _{\sigma }^2\) and \(1/\phi _r\)), with sparse matrix coefficients that are determined solely by the choice of triangulation.
This differs from the sequential Markov construction of the NNGP method, which instead constructs a square-root-free \(\varvec{L}\varvec{D}\varvec{L}'\) Cholesky decomposition of its resulting precision matrix (in a reverse-order permutation of the elements). The spatial process is specified through a joint Gaussian model for \(\varvec{z}=(\varvec{w}^\star ,\, \varvec{\beta })\) with prior mean \(\varvec{0}\) and block-diagonal precision \(\varvec{Q}_z=\text {diag}(\varvec{Q}_{w^\star },\varvec{Q}_\beta )\), where \(\varvec{Q}_\beta =\varvec{I}\cdot 10^{-8}\) gives a vague prior for \(\varvec{\beta }\). Introducing the sparse basis evaluation matrix \(\varvec{H}\) with elements \(H_{ij}=h_j(\varvec{s}_i)\) and the covariate matrix \(\varvec{X}\) with elements \(X_{ij}=X_j(\varvec{s}_i)\), the observation model is then \(\varvec{Y} = \varvec{X}\varvec{\beta } + \varvec{H} \varvec{w}^\star + \varvec{\varepsilon } = {\varvec{A}}\varvec{z} + \varvec{\varepsilon }\) where \(\varvec{A}=(\varvec{H},\,\varvec{X})\), and \(\varvec{\varepsilon }\) is a zero mean observation noise vector with diagonal precision \(\varvec{Q}_\varepsilon =\varvec{I}/\sigma _\varepsilon ^2\). Using the precision-based equations for multivariate normal distributions, the conditional precision and expectation for \(\varvec{z}\) are given by \(\varvec{Q}_{z|y} = \varvec{Q}_z + \varvec{A'} \varvec{Q}_\varepsilon \varvec{A}\) and \(\varvec{\mu }_{z|y} = \varvec{Q}_{z|y}^{-1} \varvec{A'} \varvec{Q}_\varepsilon \varvec{Y}\), where a sparse Cholesky factorization of \(\varvec{Q}_{z|y}\) is used for the linear solve. The elements of \(\varvec{z}\) are automatically reordered to keep the Cholesky factors as sparse as possible. The resulting computational and storage cost for the posterior predictions and the multivariate Gaussian likelihood of a spatial Gaussian Markov random field of this type with K basis functions is \(\mathcal {O}(K^{3/2})\). Since the direct solver does not take advantage of the stationarity of the model, the same prediction cost would apply to non-stationary models. For larger problems, more easily parallelizable iterative sparse solvers (e.g., multigrid) can be applied, but for the relatively small size of the problem here, the straightforward implementation of a direct solver is likely preferable. The implementation of the SPDE method used here is based on the R package INLA (Rue et al. 2017), which is aimed at Bayesian inference for latent Gaussian models (in particular Bayesian generalized linear, additive, and mixed models) using integrated nested Laplace approximations (Rue et al. 2009). The parameter optimization for \(\varvec{\phi }=(\phi _{r},\phi _{\sigma },\sigma _\varepsilon ^2)\) uses general numerical log-likelihood derivatives; the full Bayesian inference was therefore turned off, leading to an empirical Bayes estimate of the covariance parameters. Most of the running time is still spent on parameter optimization, but using the same parameter estimation technique as for LK, in combination with a purely Gaussian implementation, substantially reduces the total running time even without specialized code for the derivatives.
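The conditional precision and expectation above translate directly into sparse linear algebra. The following is a minimal sketch using a generic sparse solver rather than INLA, with \(\varvec{Q}_z\) and \(\varvec{A}\) assumed to have been built from a triangulation beforehand; the solver's internal fill-reducing ordering plays the role of the reordering described in the text.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def gmrf_posterior_mean(Q_z, A, y, sig2_eps):
    """Conditional mean of z = (w*, beta) given Y in the GMRF formulation.

    Implements mu_{z|y} = Q_{z|y}^{-1} A' Q_eps y with
    Q_{z|y} = Q_z + A' Q_eps A, using a sparse direct solve.
    """
    Q_eps = sp.eye(len(y)) / sig2_eps          # diagonal observation precision
    Q_zy = (Q_z + A.T @ Q_eps @ A).tocsc()     # sparse conditional precision
    b = A.T @ (Q_eps @ y)
    return spsolve(Q_zy, b)
```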
2.3.4 Nearest Neighbor Processes

The nearest neighbor Gaussian process (NNGP) developed in Datta et al. (2016a, b) is defined from the conditional specification of the joint distribution of the SREs in (2). Let \(w(\varvec{s})\) in (2) follow a mean zero Gaussian process with \(\mathbb {C}(\varvec{s},\varvec{s}') = \sigma ^2_w\rho (\varvec{s},\varvec{s}')\) where \(\rho (\cdot ,\cdot )\) is a positive-definite correlation function. Factoring the joint distribution of \(w(\varvec{s}_1),\dots ,w(\varvec{s}_N)\) into a series of conditional distributions yields \(w(\varvec{s}_1) = 0+\eta (\varvec{s}_1)\) and $$\begin{aligned} w(\varvec{s}_i) \mid \varvec{w}_{1:(i-1)} = \mathbb {C}'(\varvec{s}_i,\varvec{s}_{1:(i-1)}){\varvec{\Sigma }}_{1:(i-1)}^{-1} \varvec{w}_{1:(i-1)} + \eta (\varvec{s}_i) \end{aligned}$$ where \(\varvec{w}_{1:(i-1)} = (w(\varvec{s}_1),\dots ,w(\varvec{s}_{i-1}))'\), \(\mathbb {C}(\varvec{s}_i,\varvec{s}_{1:(i-1)}) = (\mathbb {C}(\varvec{s}_i,\varvec{s}_1),\dots ,\mathbb {C}(\varvec{s}_i,\varvec{s}_{i-1}))'\), \({\varvec{\Sigma }}_{1:(i-1)} = \mathbb {V}\text {ar}(\varvec{w}_{1:(i-1)})\) and the \(\eta \)'s are independent, mean zero, normally distributed random variables. More compactly, (13) is equivalent to \(\varvec{w} = \varvec{A} \varvec{w} + \varvec{\eta }\) where \(\varvec{A} = (a_{ij})\) is a lower triangular matrix with zeroes along the diagonal and \(\varvec{\eta }= (\eta (\varvec{s}_1),\dots ,\eta (\varvec{s}_N))' \sim N(0, \varvec{D})\) with diagonal entries \(\mathbb {C}(\varvec{s}_i,\varvec{s}_i) - \mathbb {C}'(\varvec{s}_i,\varvec{s}_{1:(i-1)}){\varvec{\Sigma }}_{1:(i-1)}^{-1}\mathbb {C}(\varvec{s}_i,\varvec{s}_{1:(i-1)})\). This yields a joint distribution \(\varvec{w} \sim N(0, \varvec{\Sigma })\) where \(\varvec{\Sigma }^{-1} = (\varvec{I} - \varvec{A})'\varvec{D} ^ {-1} (\varvec{I} - \varvec{A})\). Furthermore, when predicting at any \(\varvec{s} \notin \{\varvec{s}_1,\dots ,\varvec{s}_N\}\), one can define $$\begin{aligned} w(\varvec{s})\mid \varvec{w}_{1:N} = \varvec{a}'(\varvec{s})\varvec{w}_{1:N} + \eta (\varvec{s}) \end{aligned}$$ similarly to (13). A sparse formulation of \(\varvec{A}\) ensures that evaluating the likelihood of \(\varvec{w}\) (and, hence, of \(\varvec{Y}\)) will be computationally scalable because \({\varvec{\Sigma }}^{-1}\) is sparse. Because spatial covariances decrease with increasing distance, Vecchia (1988) demonstrated that replacing the conditioning set \(\varvec{w}_{1:(i-1)}\) by the smaller set of m nearest neighbors (in terms of Euclidean distance) of \(\varvec{s}_i\) provides an excellent approximation to the conditional density in (13). Datta et al. (2016a) demonstrated that this is equivalent to \(\varvec{A}\) having at most m nonzero entries in each row (in this study we take \(m=25\)) and thereby corresponds to a proper probability distribution. Similarly, for prediction at a new location \(\varvec{s}\), a sparse \(\varvec{a}(\varvec{s})\) in (14) is constructed based on the m nearest neighbors of \(\varvec{s}\) among \(\varvec{s}_1, \dots , \varvec{s}_N\). The resulting Gaussian process is referred to as the Nearest Neighbor Gaussian Process (NNGP). Generalizing the use of nearest neighbors from expedient likelihood evaluations as in Vecchia (1988) and Stein et al. (2004) to the well-defined NNGP on the entire domain enables fully Bayesian inference and coherent recovery of the latent SREs. Using an NNGP, the model can be written as \(\varvec{Y} \sim N(\varvec{X}\varvec{\beta }, \varvec{\widetilde{\Sigma }} (\varvec{\phi }) )\) where \(\varvec{\widetilde{\Sigma }}\) is the NNGP covariance matrix derived from the full GP.
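A minimal sketch of the Vecchia/NNGP construction of \(\varvec{A}\) and \(\varvec{D}\), assuming an exponential covariance with known parameters and a given ordering of the locations, is shown below; the spNNGP package implements the full Bayesian machinery described next.

```python
import numpy as np
import scipy.sparse as sp

def exp_cov(A, B, sig2, phi):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return sig2 * np.exp(-d / phi)

def vecchia_factors(locs, sig2, phi, m=25):
    """Sparse (A, D) in the NNGP factorization Sigma^{-1} = (I-A)' D^{-1} (I-A).

    Row i of A has at most m nonzeros, placed at the m nearest previously
    ordered locations; D holds the conditional variances.
    """
    N = locs.shape[0]
    Dvec = np.empty(N)
    Dvec[0] = sig2
    rows, cols, vals = [], [], []
    for i in range(1, N):
        past = np.arange(i)
        dist = np.linalg.norm(locs[past] - locs[i], axis=1)
        nb = past[np.argsort(dist)[:m]]                 # m nearest "past" neighbors
        C_nb = exp_cov(locs[nb], locs[nb], sig2, phi)
        c_i = exp_cov(locs[i:i + 1], locs[nb], sig2, phi).ravel()
        a = np.linalg.solve(C_nb, c_i)                  # conditioning weights
        Dvec[i] = sig2 - c_i @ a                        # conditional variance
        rows.extend([i] * len(nb)); cols.extend(nb); vals.extend(a)
    A = sp.coo_matrix((vals, (rows, cols)), shape=(N, N)).tocsr()
    return A, Dvec
```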
A Bayesian specification is completed by assigning priors to the parameters \(\varvec{\beta }\) and \(\varvec{\phi }\). For this application, the covariance function \(\mathbb C\) consists of a stationary exponential GP with variance \(\sigma ^2_w\) and range \(\phi \) and a nugget process with variance \(\sigma ^2_\varepsilon \) (see (5)). We assign a normal prior for \(\varvec{\beta }\), inverse gamma priors for \(\sigma ^2_w\) and \(\sigma ^2_\varepsilon \) and a uniform prior for \(\phi \). A Gibbs sampler for the model involves conjugate updates for \(\varvec{\beta }\) and Metropolis random-walk updates for \(\varvec{\phi }= (\sigma ^2_w, \sigma ^2_\varepsilon , \phi )'\). Letting \(\alpha = \sigma ^2_\varepsilon /\sigma ^2_w\), the model can also be expressed as \(\varvec{Y} \sim N(\varvec{X}\varvec{\beta }, \sigma ^2_w \varvec{\widetilde{R} } (\phi ,\alpha ) )\) where \(\varvec{\widetilde{R} }\) is the NNGP matrix derived from \(\varvec{C}(\phi ) + \alpha \varvec{I}\), \(\varvec{C}(\phi )\) being the correlation matrix of the exponential GP. Fixing \(\alpha \) and \(\phi \) gives a conjugate normal-inverse-gamma posterior distribution for \(\varvec{\beta }\) and \(\sigma ^2_w\). Predictive distributions for \(y(\varvec{s})\) at new locations can also be obtained as t-distributions. The fixed values of \(\alpha \) and \(\phi \) can be chosen via a grid search by minimizing the root-mean-square predictive error based on K-fold cross-validation. This hybrid approach departs from a fully Bayesian philosophy by using hyper-parameter tuning; however, it offers a pragmatic solution for massive spatial datasets. We refer to this model as the conjugate NNGP model, and it is the model used in this competition. Detailed algorithms for both models are provided in Finley et al. (2018). NNGP models for analyzing massive spatial data are available on CRAN as the R package spNNGP (Finley et al. 2017).

2.3.5 Periodic Embedding

When the observation locations form a regular grid, and the model is stationary, methods that make use of the discrete Fourier transform (DFT), also known as spectral methods, can be statistically and computationally beneficial, since the DFT is an approximately decorrelating transform, and it can be computed quickly and with a low memory burden using fast Fourier transform (FFT) algorithms. For spatially gridded data in two or higher dimensions (as opposed to time series data in one dimension), there are two prominent issues to be addressed. The first is edge effects, and the second is missing values. By projecting onto trigonometric bases, spectral methods essentially assume that the process is periodic on the observation domain, which leads to bias in the estimates of the spectrum (Guyon 1982; Dahlhaus and Künsch 1987). Guinness and Fuentes (2017) and Guinness (2017) propose the use of small domain expansions and imputing data in a periodic fashion on the expanded lattice. Imputation-based methods also solve the second issue of missing values, since the missing observations can be imputed as well. The method used here follows the iterative semiparametric approach in Guinness (2017); Guinness and Fuentes (2017) provide an alternative parametric approach. For this section, let \(\varvec{N} = (N_1,N_2)\) give the dimensions of the observation grid [in the case study datasets below \(\varvec{N} = (300,500)\)]. Let \(\tau \) denote an expansion factor, and let \(m = \lfloor \tau \varvec{N} \rfloor \) denote the size of the expanded lattice.
We use \(\tau = 1.2\) in all examples, so that \(m = (360,600)\) in the surface temperature dataset. Let \(\varvec{U}\) be the vector of observations, and \(\varvec{V}\) be the vector of missing values on the grid of size m, making the full vector \(\varvec{Y} = (\varvec{U}',\varvec{V}')'\). The discrete Fourier transform of the entire vector is $$\begin{aligned} J(\varvec{\omega }) = \frac{1}{\sqrt{m_1 m_2}} \sum _{\varvec{s}} Y(\varvec{s}) \exp (-i\varvec{\omega }'\varvec{s}), \end{aligned}$$ where \(\varvec{\omega } = (\omega _1,\omega _2)'\) is a spatial frequency with \(\omega _j \in [0,2\pi ]\), \(i = \sqrt{-1}\), and \(\varvec{\omega }'\varvec{s} = \omega _1 s_1 + \omega _2 s_2\). The procedure is iterative. At iteration k, the spectrum \(f_k\) is updated with $$\begin{aligned} f_{k+1}(\varvec{\omega }) = \sum _{\varvec{\nu }} E_k( |J(\varvec{\nu })|^2 \, | \, \varvec{U} ) \alpha ( \varvec{\omega } - \varvec{\nu } ), \end{aligned}$$ where \(\alpha \) is a smoothing kernel, and \(E_k\) is the expected value under the multivariate normal distribution with stationary covariance function $$\begin{aligned} R_k( \varvec{h} ) = \frac{1}{m_1 m_2} \sum _{\varvec{\omega } \in \mathbb {F}_m } f_k(\varvec{\omega }) \exp (i \varvec{\omega } ' \varvec{h}), \end{aligned}$$ where \(\mathbb {F}_m\) is the set of Fourier frequencies on a grid of size m. This is critical since it ensures that \(R_k\) is periodic on the expanded grid. In practice, the expected value in (15) is replaced with \(|J(\varvec{\nu })|^2\) computed using an imputed vector \(\varvec{V}\), a conditional simulation of the missing values given \(\varvec{U}\) under covariance function \(R_k\). This ensures that the imputed vector \(\varvec{V}\) is periodic on the expanded lattice and reduces edge effects. The iterative procedure can also be run with an intermediate parametric step in which the Whittle likelihood (Whittle 1954) is used to estimate a parametric spectral density, which is used to filter the imputed data prior to smoothing the spectrum. See Guinness (2017) for details about more elaborate averaging schemes and monitoring for convergence of the iterative method.
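The sketch below illustrates one heavily simplified iteration of the spectrum update (15) on an expanded lattice: missing values are crudely filled with the observed mean rather than being conditionally simulated, and a small fixed smoothing kernel stands in for \(\alpha \), so this is only meant to show the mechanics of the FFT-based computation.

```python
import numpy as np

def smoothed_periodogram(Y, tau=1.2):
    """One simplified iteration of the periodic-embedding spectrum update.

    Y is a 2-D image with NaNs at missing pixels.  It is placed on an expanded
    lattice of size floor(tau * N), the squared DFT modulus is computed, and
    the result is smoothed by circular convolution with a small kernel.
    """
    n1, n2 = Y.shape
    m1, m2 = int(np.floor(tau * n1)), int(np.floor(tau * n2))
    fill = np.nanmean(Y)
    Z = np.full((m1, m2), fill)
    Z[:n1, :n2] = np.where(np.isnan(Y), fill, Y)        # crude "imputation"
    J = np.fft.fft2(Z) / np.sqrt(m1 * m2)
    I = np.abs(J) ** 2                                  # periodogram |J(omega)|^2
    kern = np.zeros((m1, m2))
    kern[:3, :3] = 1.0 / 9.0                            # small smoothing kernel
    kern = np.roll(kern, (-1, -1), axis=(0, 1))         # center it at the origin
    return np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(kern)))
```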
2.4 Algorithmic Approaches

2.4.1 Metakriging

Spatial metakriging is an approximate Bayesian method that is not tied to any specific model and is partly algorithmic in nature. In particular, any spatial model described above can be used to draw inference from subsets (as described below). From (1), let the \(N\times N\) covariance matrix be determined by a set of covariance parameters \(\varvec{\phi }\) such that \({\varvec{\Sigma }}= {\varvec{\Sigma }}(\varvec{\phi })\) (e.g., \(\varvec{\phi }\) could represent decay parameters from the Matérn covariance function) and \(\mu (\varvec{s}) = \varvec{X}'(\varvec{s})\varvec{\beta }\), where \(\varvec{X}(\varvec{s})\) is a set of known covariates with unknown coefficients \({\varvec{\beta }}\). Further, let the sampled locations \(\mathcal {S}=\{{\varvec{s}}_1, \ldots ,{\varvec{s}}_N\}\) be partitioned into sets \(\{\mathcal {S}_1, \ldots ,\mathcal {S}_K\}\) such that \(\mathcal {S}_i\cap \mathcal {S}_j=\emptyset \) for \(i\ne j\) and \(\bigcup _{i=1}^K\mathcal {S}_i=\mathcal {S}\), and let the corresponding partition of the data be given by \(\{{\varvec{y}}_k, {\varvec{X}}_k\}\), for \(k=1,2,\ldots ,K\), where each \({\varvec{y}}_k\) is \(n_k\times 1\) and \({\varvec{X}}_k\) is \(n_k\times p\). Assume that we are able to obtain posterior samples for \({\varvec{\Omega }}= \{{\varvec{\beta }}, {\varvec{\phi }}\}\) from (1) applied independently to each of the K subsets of the data in parallel on different cores. To be specific, assume that \({\varvec{\Omega }}_k = \{{\varvec{\Omega }}_k^{(1)}, {\varvec{\Omega }}_k^{(2)},\ldots , {\varvec{\Omega }}_k^{(M)}\}\) is a collection of M posterior samples from \(p({\varvec{\Omega }}\,|\,{\varvec{y}}_k)\). We refer to each \(p({\varvec{\Omega }}\,|\,{\varvec{y}}_k)\) as a "subset posterior." The metakriging approach we outline below attempts to combine, optimally and meaningfully, these subset posteriors to arrive at a legitimate probability density. We refer to this as the "metaposterior." Metakriging relies upon the unique geometric median (GM) of the subset posteriors (Minsker et al. 2014; Minsker 2015). For a positive-definite kernel \(h(\cdot )\), define the distance between two distributions \(\pi _1(\cdot )\) and \(\pi _2(\cdot )\) of \({\varvec{\Omega }}\) by \(d_{h}(\pi _1(\cdot ),\pi _2(\cdot ))=\Vert \int h({\varvec{\Omega }},\cdot )d(\pi _1-\pi _2)({\varvec{\Omega }})\Vert \). We regard the individual posterior densities \(p_k \equiv p({\varvec{\Omega }}\,|\,{\varvec{y}}_k)\) as residing in a Banach space \(\mathcal{H}\) equipped with norm \(d_h(\cdot ,\cdot )\). The GM is defined as $$\begin{aligned} \pi ^*({\varvec{\Omega }}\,|\,{\varvec{y}}) = \arg \min \limits _{\pi \in \mathcal {H}}\sum _{k=1}^{K}d_{h}(p_k,\pi )\; , \end{aligned}$$ where \({\varvec{y}}= ({\varvec{y}}_1', {\varvec{y}}_2',\ldots ,{\varvec{y}}_K')'\). In what follows, we assume \(h(z_1,z_2)=\exp (-||z_1-z_2||^2)\). The GM is unique. Further, the geometric median lies in the convex hull of the individual posteriors, so \(\pi ^*({\varvec{\Omega }}\,|\,{\varvec{y}})\) is a legitimate probability density. Specifically, \(\pi ^*({\varvec{\Omega }}\,|\,{\varvec{y}})=\sum _{k=1}^{K} \xi _{h,k}({\varvec{y}})p_k\), \(\sum _{k=1}^{K}\xi _{h,k}({\varvec{y}})=1\), each \(\xi _{h,k}({\varvec{y}})\) being a function of \(h,{\varvec{y}}\), so that \(\int _{{\varvec{\Omega }}}\pi ^*({\varvec{\Omega }}\,|\,{\varvec{y}})\hbox {d}{\varvec{\Omega }}=1\). Computation of the geometric median \(\pi ^*\equiv \pi ^*({\varvec{\Omega }}\,|\,{\varvec{y}})\) proceeds by employing the popular Weiszfeld iterative algorithm, which estimates \(\xi _{h,k}({\varvec{y}})\) for every k from the subset posteriors \(p_k\). To further elucidate, we use the well-known result that the geometric median \(\pi ^*\) satisfies \(\pi ^*=\left[ \sum _{k=1}^{K}p_k/d_h(p_k,\pi ^*)\right] \left[ \sum _{k=1}^{K}1/d_h(p_k,\pi ^*)\right] ^{-1}\), so that \(\xi _{h,k}({\varvec{y}})= (1/d_h(p_k,\pi ^*))/ \sum _{j=1}^{K}(1/d_h(p_j,\pi ^*))\). Since there is no apparent closed-form solution for \(\xi _{h,k}({\varvec{y}})\) that satisfies this equation, one needs to resort to the Weiszfeld iterative algorithm outlined in Minsker et al. (2014) to produce an empirical estimate of \(\xi _{h,k}({\varvec{y}})\) for all \(k=1,\dots ,K\). Guhaniyogi and Banerjee (2018) show that, for a large sample, \(\pi ^*(\cdot \,|\,{\varvec{y}})\) provides a desirable approximation to the full posterior distribution in certain restrictive settings. It is, therefore, natural to approximate the posterior predictive distribution \(p(y(s_0)\,|\,{\varvec{y}})\) by the subset posterior predictive distributions \(p(y(s_0)\,|\,{\varvec{y}}_k)\).
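To illustrate the fixed-point update behind the weights \(\xi _{h,k}({\varvec{y}})\), the following is a minimal Weiszfeld iteration written for points in Euclidean space (e.g., vectors of subset posterior summaries); the metakriging version applies the same update with the RKHS distance \(d_h\) between subset posteriors rather than a Euclidean distance.

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-8):
    """Weiszfeld iteration for a geometric median of K points in R^d.

    Returns the geometric median and the normalized weights, which play the
    role of xi_k proportional to 1/d(p_k, pi*) in the metakriging construction.
    """
    gm = points.mean(axis=0)
    for _ in range(iters):
        dist = np.linalg.norm(points - gm, axis=1)
        w = 1.0 / np.maximum(dist, eps)        # inverse distances to current iterate
        w /= w.sum()
        gm_new = w @ points                    # fixed-point update
        if np.linalg.norm(gm_new - gm) < eps:
            break
        gm = gm_new
    return gm, w
```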
Let \(\{y(s_0)^{(j,k)}\}_{j=1}^{M}\), \(k=1,\ldots ,K\), be samples obtained from the posterior predictive distribution \(p(y(s_0)|{\varvec{y}}_k)\) of the \(k\mathrm{th}\) subset posterior. Then, $$\begin{aligned} p(y(s_0)\,|\,{\varvec{y}})\approx \sum _{k=1}^K\xi _{h,k}({\varvec{y}})p(y(s_0)\,|\,{\varvec{y}}_k)=\sum _{k=1}^K\xi _{h,k}({\varvec{y}})\int p(y(s_0)\,|\,{\varvec{\Omega }}, {\varvec{y}}_k)p({\varvec{\Omega }}\,|\,{\varvec{y}}_k)\hbox {d}{\varvec{\Omega }}\; . \end{aligned}$$ Therefore, the empirical posterior predictive distribution of the metaposterior is given by \(\sum _{k=1}^{K}\sum _{j=1}^{M}\frac{\xi _{h,k}({\varvec{y}})}{M}1_{y(s_0)^{(j,k)}}\), from which the posterior predictive median and the 95% posterior predictive interval for the unobserved \(y(s_0)\) are readily available. The spatial metakriging approach has additional advantages over the approach of Minsker et al. (2014). Minsker et al. (2014) suggest computing the stochastically approximated posterior from each subset, which prevents users from employing standard R packages to draw posterior samples. In contrast, metakriging allows subset posterior computation using popular R packages. Additionally, Minsker et al. (2014) mainly focus on prediction and restrict their attention to i.i.d. settings. In contrast, Guhaniyogi and Banerjee (2018) present a comprehensive analysis of parameter estimation, residual surface interpolation and prediction for spatial Gaussian processes. Theoretical results supporting the proposed approach under restrictive assumptions are presented in the supplementary material to Guhaniyogi and Banerjee (2018). One important ingredient of spatial metakriging (SMK) is partitioning the dataset into subsets. For this article, we adopt a random partitioning scheme that randomly divides the data into \(K=30\) exhaustive and mutually exclusive subsets. The random partitioning scheme makes each subset a reasonable representative of the entire domain, so that each subset posterior acts as a "weak learner" for the full posterior. We have explored more sophisticated partitioning schemes and found similar predictive inference. For the sake of definiteness, this article uses a stationary Gaussian process model for each subset, which may lead to a higher run time. Indeed, the version of the metakriging approach presented here yields more accurate results when a stationary Gaussian process model is fitted in each subset. However, the metakriging approach offers much greater scalability when any of the above models is employed in each subset. In fact, an extension of spatial metakriging, referred to as distributed spatial kriging (DISK) (Guhaniyogi et al. 2017), scales the non-stationary modified predictive process to millions of observations. Ongoing research on a more general extension of metakriging, coined Aggregated Monte Carlo (AMC), involves scaling spatiotemporal varying-coefficient models to big datasets.
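Given the Weiszfeld weights, the empirical metaposterior predictive above is simply a weighted mixture of the subset predictive draws; a minimal sketch of extracting its median and 95% interval follows (the array shapes are assumptions for illustration).

```python
import numpy as np

def meta_predictive(samples, xi):
    """Median and 95% interval of the weighted mixture of subset predictive draws.

    `samples` has shape (K, M): M posterior predictive draws of y(s0) from each
    of the K subset posteriors; `xi` are the Weiszfeld weights (summing to one).
    """
    K, M = samples.shape
    w = np.repeat(xi / M, M)                   # weight xi_k / M on each draw
    flat = samples.ravel()
    order = np.argsort(flat)
    cdf = np.cumsum(w[order])

    def quantile(p):
        return flat[order][np.searchsorted(cdf, p)]

    return quantile(0.5), (quantile(0.025), quantile(0.975))
```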
2.4.2 Gapfill

The gapfill method (Gerber et al. 2018) differs from the other methods presented here in that it is purely algorithmic, distribution-free and, in particular, not based on Gaussian processes. Like other prediction methods popular within the satellite imaging community (see Gerber et al. 2018; Weiss et al. 2014 for reviews), the gapfill method is attractive because of its low computational workload. A key aspect of gapfill is that it is designed for parallel processing, which allows the user to exploit computing resources at different scales, including large servers. Parallelization is enabled by predicting each missing value separately based on only a subset of the data. To predict the value \(Y(\varvec{s}_0)\) at location \(\varvec{s}_0\), gapfill first selects a suitable subset \(\varvec{A}=\{Y(\varvec{s}_i): \varvec{s}_i \in \mathcal {N}(\varvec{s}_0)\}\), where \(\mathcal {N}(\varvec{s}_0)\) defines a spatial neighborhood around \(\varvec{s}_0\). Finding \(\varvec{A}\) is formalized with rules which ensure that \(\varvec{A}\) is small but contains enough observed values to inform the prediction. In this study, we require \(\varvec{A}\) to have an extent of at least \(5\times 5\) pixels and to contain at least 25 non-missing values. Subsequently, the prediction of \(Y(\varvec{s}_0)\) is based on \(\varvec{A}\) and relies on sorting algorithms and quantile regression. Moreover, prediction intervals are constructed using permutation arguments (see Gerber et al. 2018 for more details on the prediction and uncertainty intervals). The gapfill method was originally designed for spatiotemporal data, in which case the neighborhood \(\mathcal {N}(\varvec{s}_0)\) is defined in terms of the spatial and temporal dimensions of the data. As a consequence, the implementation of gapfill in the R package gapfill (Gerber 2017) requires multiple images to work properly. To mimic this situation, we shift the given image by one, two, and three pixels in both directions along the x- and y-axes. The algorithm is then applied to these 13 images in total (the original image and 12 images obtained through shifts of it).
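The following is a highly simplified, single-image sketch of the neighborhood rules described above: the window around the target pixel is grown until it spans at least \(5\times 5\) pixels and contains at least 25 observed values, after which the prediction here is reduced to a plain median. The actual gapfill algorithm instead uses ranking across images and quantile regression, so this is only meant to show the subsetting step.

```python
import numpy as np

def gapfill_like_predict(img, i, j, min_extent=5, min_obs=25):
    """Predict a missing pixel (i, j) from a small spatial neighborhood.

    `img` is a 2-D array with NaNs at missing pixels.  The neighborhood is
    enlarged until it meets the extent and observed-count requirements.
    """
    half = min_extent // 2
    while True:
        rows = slice(max(i - half, 0), i + half + 1)
        cols = slice(max(j - half, 0), j + half + 1)
        window = img[rows, cols]
        obs = window[~np.isnan(window)]
        if obs.size >= min_obs or window.size >= img.size:
            break
        half += 1                                  # enlarge the neighborhood A
    return np.median(obs) if obs.size else np.nan
```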
2.4.3 Local Approximate Gaussian Processes

The local approximate Gaussian process (laGP, Gramacy and Apley 2015) addresses the big-N problem in GP regression by taking a so-called transductive approach to learning, where the fitting scheme is tailored to the prediction problem (Vapnik 1995), as opposed to the usual inductive approach of fitting first and predicting later conditional on the fit. A special case of laGP, based on nearest neighbors, is simple to describe. In order to predict at \(\varvec{s}\), simply train a Gaussian process predictor on the nearest m neighbors to \(\varvec{s}\); i.e., use the data subset \(\mathcal {Y}_m = \{Y(\varvec{s}_i): \varvec{s}_i \in \mathcal {N}_m(\varvec{s})\}\), where \(\mathcal {N}_m(\varvec{s})\) are the m closest observed locations to \(\varvec{s}\) in terms of Euclidean distance. If the data-generating mechanism is not at odds with the modeling assumptions (e.g., having a well-specified covariance structure), then one can choose m to be as large as possible, up to computational limitations, in order to obtain an accurate approximation. Observe that this use of nearest neighbors (NNs) for prediction is more akin to the classical statistical/machine learning variety, in contrast to their use in determining the global (inverse) covariance structure as described in Sect. 2.3. Interestingly, NNs do not comprise an optimal data subset for prediction under the usual criteria such as mean-squared error. However, finding the best m of \(N!/(m!(N-m)!)\) possible choices represents a combinatorially huge search. The laGP method generalizes this so-called nearest neighbor prediction algorithm (whose modern form in the spatial statistics literature is described by Emery 2009) by approximating that search with a greedy heuristic. First, start with a NN set \(\mathcal {Y}_{m_0}(\varvec{s})= \{Y(\varvec{s}_i): \varvec{s}_i \in \mathcal {N}_{m_0}(\varvec{s})\}\) where \(m_0 < m\), and then, for \(j=m_0+1,\dots ,m\), successively choose \(\varvec{s}_{j}\) to augment \(\mathcal {Y}_{m_0}\), building up a local design data set one point at a time according to one of several simple objective criteria related to mean-square prediction error. The idea is to repeat in this way until there are m observations in \(\mathcal {Y}_m(\varvec{s})\). Gramacy and Apley's preferred variation targets the \(\varvec{s}_{j}\) that maximizes the reduction in predictive variance at \(\varvec{s}\). In recognition of a similar global design criterion called active learning Cohn (Cohn 1996), they dubbed this criterion ALC. Qualitatively, these local ALC designs tend to have a cluster of neighbors and "satellite" points and have been shown to offer demonstrably better predictive properties than NN and even full-data alternatives, especially when the data-generating mechanism is at odds with the modeling assumptions. The reason is that local fitting offers a way to cope with a certain degree of non-stationarity, which is common in many real data settings. ALC search iterations and GP updating considerations, as designs are built up, are carefully engineered to lead to a method whose computations are of \(\mathcal {O}(m^3)\) complexity (i.e., the same as the simpler NN alternative). A relatively modest local design size of \(m=50\) typically works well. Moreover, calculations for each \(\varvec{s}\) are statistically independent of the next, which means that they can be trivially parallelized. Through a cascade of multi-core, multi-node and GPU parallelization, Gramacy et al. (2014) and Gramacy and Haaland (2016) illustrated how N in the millions, in terms of both training and testing data sizes, could be handled (and yield accurate predictors) with less than an hour of computing time. The laGP method has been packaged for R and is available on CRAN (Gramacy 2016). Symmetric multi-core parallelization (via OpenMP) and multi-node automations (via the built-in parallel package) work out of the box. GPU extensions are provided in the source code but require custom compilation. A disadvantage to local modeling in this fashion is that a global predictive covariance is unavailable. Indeed, the statistically independent nature of the calculations is what makes the procedure computationally efficient and parallelizable. In fact, the resulting global predictive surface, over a continuum of predictive \(\varvec{s}\)-locations, need not even be smooth. However, in most visual representations of predictive surfaces it can be difficult to distinguish between a genuinely smooth surface and what is plotted via the laGP predictive equations. Finally, it is worth noting that although laGP is applied here in a spatial modeling setting (i.e., with two input variables), it was designed for computer simulation modeling and has been shown to work well in input dimensions as high as ten.
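A minimal sketch of the nearest-neighbor special case described above (fixed covariance parameters and a zero or de-trended mean, both of which are assumptions for illustration) is given below; laGP instead builds the local design greedily via ALC and estimates the covariance parameters locally.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_nn_gp_predict(locs, y, s0, m=50, sig2=1.0, phi=1.0, nugget=1e-4):
    """Local GP prediction at s0 using only the m nearest observed locations.

    Assumes y has been de-trended (zero mean) and that the exponential
    covariance parameters are fixed; returns the predictive mean and variance.
    """
    tree = cKDTree(locs)
    _, idx = tree.query(s0, k=m)
    X, z = locs[idx], y[idx]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    C = sig2 * np.exp(-d / phi) + nugget * np.eye(m)
    c0 = sig2 * np.exp(-np.linalg.norm(X - s0, axis=1) / phi)
    mean = c0 @ np.linalg.solve(C, z)                      # kriging mean
    var = sig2 + nugget - c0 @ np.linalg.solve(C, c0)      # kriging variance
    return mean, var
```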
3 The Competition

At the initial planning phase of this competition, we desired to compare a broad variety of approaches: from frequentist to Bayesian and from well-established to modern developments. In accordance with this plan, efforts were made to contact a variety of research groups with strong expertise in a method to analyze the datasets. After this outreach period, the research teams listed in Table 1 agreed to participate and implement their associated method.

Table 1 Research groups participating in the competition along with their selected method (competitor):
Nearest Neighbor Processes: Abhirup Datta and Andrew Finley
Predictive Processes: Andrew Finley
Covariance Tapering
Gapfill
Metakriging: Raj Guhaniyogi
Spatial Partitioning
Fixed Rank Kriging
Multiresolution Approximations: Matthias Katzfuss and Dorit Hammerling
Stochastic Partial Differential Equations
Periodic Embedding
Lattice Kriging: Douglas Nychka
Local Approximate Gaussian Processes: Robert Gramacy and Furong Sun

Each group listed in Table 1 was provided with two training datasets: one real and one simulated. The simulated dataset represented a case where the covariance function was specified correctly, while the real dataset represented a scenario where the covariance function was mis-specified. Both datasets consisted of observations on the same 500\(\times \)300 grid spanning longitude values from \(-\,95.91153\) to \(-\,91.28381\) and latitude values from 34.29519 to 37.06811. The real dataset consisted of daytime land surface temperatures as measured by the Terra instrument onboard the MODIS satellite on August 4, 2016 (Level-3 data). The data were downloaded from the MODIS reprojection tool web interface (MRTweb). While this exact tool was discontinued soon after this project began, the data are provided on GitHub at https://github.com/finnlindgren/heatoncomparison. The latitude and longitude ranges, as well as the date, were chosen because of the sparse cloud cover over the region on this date (rather than by scientific interest in the date itself). Namely, only 1.1% of the Level-3 MODIS data were corrupted by cloud cover, leaving 148,309 of 150,000 observed values for our purposes. The simulated dataset was created by first fitting a Gaussian process model with a constant mean, an exponential covariance function and a nugget effect to a random sample of 2500 observations from the above MODIS data. The resulting parameter estimates were 4/3, 16.40, 0.05, and 44.49 for the spatial range, spatial variance, nugget variance and constant mean, respectively. The spatial range parameter of 4/3 corresponds to an effective spatial range (the distance at which the correlation equals 0.05) of approximately 210 miles (338 km). These parameters were then used to simulate 150,000 observations on the same grid as the MODIS data (a small-scale illustration of this simulation step appears below). To define test and training sets, the missing-data pattern from the same MODIS data product on August 6, 2016 was used to separate each dataset into a training and a test set. After the split, the training set for the MODIS data consisted of 105,569 observations, leaving 42,740 observations in the test set. The training set for the simulated data also consisted of 105,569 observations, but with a test set of size 44,431 (the difference in test set size is attributable to missing data due to cloud cover in the original MODIS data). Research teams were provided with the training set and the locations of the test set (but not the actual observations in the test set). Figure 1 displays the full datasets along with the corresponding training set provided to each research group.

Figure 1: The top row displays the (a) full and (b) training satellite datasets. The bottom row displays the (c) full and (d) training simulated data.

All datasets used in this article are provided on the public GitHub repository https://github.com/finnlindgren/heatoncomparison.
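For reference, the following is a small-scale sketch of the simulation step above (a dense Cholesky on a deliberately tiny grid, using the parameter estimates reported in the text and an approximate domain extent); simulating the full 500\(\times \)300 surface requires a specialized algorithm or one of the scalable methods described in Sect. 2.

```python
import numpy as np

def simulate_exp_gp(n1=30, n2=50, rng_par=4/3, sig2=16.40, nugget=0.05,
                    mean=44.49, seed=1):
    """Simulate a GP with constant mean, exponential covariance and a nugget.

    The grid is kept tiny (n1 x n2) because a dense Cholesky is used; the
    domain extents below only roughly mimic the lon/lat ranges of the data.
    """
    rng = np.random.default_rng(seed)
    xs, ys = np.meshgrid(np.linspace(0.0, 4.6, n2), np.linspace(0.0, 2.8, n1))
    locs = np.column_stack([xs.ravel(), ys.ravel()])
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    C = sig2 * np.exp(-d / rng_par) + nugget * np.eye(len(locs))
    z = np.linalg.cholesky(C) @ rng.standard_normal(len(locs))
    return locs, mean + z
```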
Each group independently wrote code (also included on the accompanying GitHub page) that provided (i) a point prediction for each location in the test set, (ii) a 95% prediction interval for each location in the test set or a corresponding standard error for the prediction, and (iii) the total clock time needed to implement the method. In order to minimize the number of confounding factors in this competition, each group was instructed to use an exponential correlation function (if applicable to their chosen method) and a nugget variance. For the simulated data, the groups were instructed to use only a constant mean (because this was how the data were originally simulated). However, for the satellite data, the groups used a linear effect for latitude and longitude so that the residual process more closely resembled the exponential correlation. The code from each team was then run on the Becker computing environment (256 GB of RAM and 2 Intel Xeon E5-2680 v4 2.40GHz CPUs with 14 cores each and 2 threads per core, totaling 56 possible threads for use in parallel computing) located at Brigham Young University (BYU). Each team's code was run individually, and no other processes were run simultaneously, so as to provide an accurate measure of computing time. Each method was compared in terms of mean absolute error (MAE), root-mean-squared error (RMSE), continuous rank probability score (CRPS; see Gneiting and Raftery 2007; Gneiting and Katzfuss 2014), interval score (INT; see Gneiting and Raftery 2007) and prediction interval coverage (CVG; the percent of intervals containing the true value). To calculate the CRPS, we assumed the associated predictive distribution was well approximated by a Gaussian distribution with mean centered at the predicted value and standard deviation equal to the predictive standard error. In cases where only a prediction interval was provided, the predictive standard error was taken as \((U-L)/(2\times \Phi ^{-1}(0.975))\) where U and L are the upper and lower ends of the interval, respectively.
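For completeness, the scoring rules described above can be computed as follows, under the stated Gaussian approximation of the predictive distribution; this is an illustrative sketch rather than the scoring code used to produce Tables 2 and 3.

```python
import numpy as np
from scipy.stats import norm

def scores(y, pred, se, alpha=0.05):
    """MAE, RMSE, Gaussian CRPS, interval score (INT) and coverage (CVG).

    The 95% interval is taken as pred +/- z_{0.975} * se, matching how standard
    errors and intervals were converted into one another in the competition.
    """
    z = (y - pred) / se
    crps = se * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    q = norm.ppf(1 - alpha / 2)
    lo, hi = pred - q * se, pred + q * se
    intscore = (hi - lo) + (2 / alpha) * ((lo - y) * (y < lo) + (y - hi) * (y > hi))
    return {
        "MAE": np.mean(np.abs(y - pred)),
        "RMSE": np.sqrt(np.mean((y - pred) ** 2)),
        "CRPS": np.mean(crps),
        "INT": np.mean(intscore),
        "CVG": np.mean((y >= lo) & (y <= hi)),
    }
```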
4 Competition Results

4.1 Results for Simulated Data

The numerical results for the simulated data competition are displayed in Table 2. First, consider the predictive accuracy as measured by the MAE and RMSE in Table 2. In terms of predictive accuracy, the best MAE was 0.61 while the worst was only 1.03 (a 68% difference). Similarly, the best RMSE was 0.83 compared to a worst RMSE of only 1.31 (a 57% difference). Yet, notably, with only a single simulated dataset these results are suggestive but not conclusive regarding which methods give consistently better predictions. Considering uncertainty quantification (UQ), some of the methods fared better than others. For example, LatticeKrig, LAGP, metakriging, MRA, periodic embedding, NNGP, and PP all achieved near the nominal 95% coverage rate. In contrast, FRK, Gapfill, and partitioning achieved lower-than-nominal coverage, while SPDE and tapering had higher-than-nominal coverage. Considering UQ further, Gapfill had a large interval score, suggesting possibly wide predictive intervals in addition to the penalty incurred from missing the true value. In this regard, it is important to keep in mind that LAGP, metakriging, MRA, NNGP and PP all can specify the "correct" exponential correlation function. Additionally, LK and SPDE have settings that can approximate the exponential correlation function well. In contrast, some methods such as FRK and Gapfill are less suited to model fields with exponential correlation functions, which may partially explain their relatively poor prediction or coverage performance in this instance.

Table 2: Numerical scoring for each competing method on the simulated data.

To explore differences among the methods further, we calculated RMSE and CRPS for predictions in 5 categories defined by distance to the nearest training point. The number of observations per class was 36,106, 5419, 1918, 729 and 259 from the shortest to the longest distance category, respectively (i.e., there were 36,106 predictions classified as "short distance"). Figure 2 displays the RMSE and CRPS of the top 5 performing methods (in terms of overall RMSE) for each prediction distance class. While there is little difference among the methods for short-distance predictions, there is more spread among the methods at longer distances. That is, MRA, SPDE and NNGP seem to be preferred over spatial partitioning and LK for the longest-distance predictions. The difference between these methods is larger when considering uncertainty (CRPS) rather than just predictive accuracy (RMSE).

Figure 2: (a) RMSE and (b) CRPS by distance to the nearest observation for the top performers on the simulated dataset.

4.2 Results for Real Data

The results for the real MODIS data are displayed in Table 3 and largely reiterate the results from the simulated data. Namely, each method performed very well in terms of predictive accuracy. The largest RMSE was only 2.52 which, relative to the data range of \(55.41-24.37=31.04\), is very small. Relative to the simulated data, the observed RMSEs were considerably higher for all methods, which we attribute to model misspecification. We note that, under the setup of the competition, some of the methods were forced to approximate a GP with an isotropic exponential covariance function, which is the true covariance function of the simulated data but almost certainly not of the real data. Thus, the scores are lowest for those approximations that happened to result in a good fit to the data and not necessarily lowest for those methods that best approximated the exponential covariance. This might also explain why MRA performed well for long-distance predictions in the simulated example but did not perform as well for long-distance predictions on the satellite data. Further, because many of the top-performing methods strive to approximate an exponential covariance, the subtle differences between the top-performing methods on simulated versus real data should not be attributed to robustness to model misspecification.

Table 3: Numerical scoring for each competing method on the satellite data.

The largest discrepancies among the competing methods are again in terms of uncertainty quantification. Lattice kriging, metakriging, MRA, NNGP and periodic embedding again achieved near-nominal coverage rates with small interval scores and CRPS. The SPDE and tapering approaches did better in terms of coverage in that the empirical rates were near nominal (recall that the corresponding coverage rates were too high for the simulated data for these methods). In contrast, the coverage rates on the MODIS data for FRK, Gapfill, LAGP, partitioning and PP were too small, resulting in larger interval scores.
Figure 3 displays the results for RMSE and CRPS as a function of distance category for the 5 top-performing methods (in terms of overall RMSE) and one low-rank method (FRK) in the satellite case study. When considering prediction distance, more noticeable differences are found between the methods in this real data application than in the simulated data application. NNGP and SPDE perform consistently well across all distance categories for both the simulated and satellite data. Further, it is apparent from this plot that the prediction performance of low-rank methods is inferior (see Table 3) because they do not perform well for short-range predictions (this was expected for FRK, where the number of basis functions used is relatively small). However, they still do well, comparatively, when predicting over large gaps.

Figure 3: (a) RMSE and (b) CRPS by distance to the nearest observation for the top performers on the satellite dataset.

5 Conclusions

The contribution of this article was fourfold: (i) provide an overview of the plethora of methods available for analyzing large spatial datasets, (ii) provide a brief comparison of the methods by implementing a case study competition among research groups, (iii) make the code used to analyze the data available to the broader scientific community, and (iv) provide an example of the common task framework for future studies to follow when comparing various analytical methods. In terms of comparison, each of the methods performed very well in terms of predictive accuracy, suggesting that any of the above methods is well suited to the task of prediction. However, the methods differed in terms of their ability to accurately quantify the uncertainty associated with the predictions. While we saw that some methods did consistently well in both predictive performance and nominal coverage on the simulated and real data, in general we can expect the performance of any method to change with the size of the dataset, the measurement-error variance and the nature of the missingness. Further, while the results in Tables 2 and 3 are suggestive, with only one simulated and one real dataset we cannot definitively claim that any one method provides consistently better predictions than any other method. However, the data scenarios considered here are relatively representative of a typical spatial analysis, so our results can be used as a guide for practitioners. Each of the above methods performed well for both scenarios considered in this paper. However, situations where each respective method does not perform well are also of interest. For example, it is known that low-rank methods such as FRK and predictive processes will struggle with a high signal-to-noise ratio and with processes that have a small spatial range (as was seen here for the simulated data; see Zammit-Mangion and Cressie 2018; Zammit-Mangion et al. 2018). The gapfill method may struggle if the data are not on a regular grid. Moreover, depending on the parameters and the pattern of missing values in the data, the predictions from gapfill, LAGP and spatial partitioning may show discontinuities. Likewise, it is known that the metakriging approach described here is less accurate when each subset uses a non-stationary GP instead of a stationary GP, but recent research seeks to remedy this issue (Guhaniyogi et al. 2017). At the outset of this study, run time and computation time for each method were of interest.
However, because many of these methods are very young in their use and implementation, the variability across run time was too great to be used as a measure to compare the methods. For example, some methods are implemented in R while others are implemented in MATLAB. Still, others use R as a front end to call C-optimized functions. Hence, while we reported the run times in the results section, we provide these as more of an "off the shelf" run time estimate rather than an optimized run time. Until time allows for each method to be further developed and software becomes available comparing run times can be misleading. Importantly, no effort was made to standardize the time spent on this project by each group. Some groups were able to quickly code up their analysis from existing R or MATLAB libraries. Others, however, had to spend more time writing code specific to this analysis. Undoubtedly, some groups likely spent more time running "in house" cross-validation studies to validate their model predictions prior to the final run on the BYU servers while others did not. Because of this difference, we note that some of the discrepancies in results seen here may be attributable to the amount of effort expended by each group. However, we still feel that the results displayed herein give valuable insight into the strengths and weaknesses of each method. This study, while thorough, is non-comprehensive in that other methods for large spatial data (e.g., Sang and Huang 2012; Stein et al. 2013; Kleiber and Nychka 2015; Castrillon-Candás et al. 2016; Sun and Stein 2016; Litvinenko et al. 2017) were not included. Additionally, methods are sure to be developed in the future which are also viable for modeling large spatial data (see Ton et al. 2017; Taylor-Rodriguez et al. 2018). We made attempts to invite as many groups as possible to participate in this case study but, due to time and other constraining factors, not all groups were able to participate. However, in our opinion, the methods compared herein are representative of the most common methods for large spatial data at the time of writing. We note that the data scenarios considered in this case study do not cover the spectrum of issues related to spatial data. That is, spatial data may exhibit anisotropy, non-stationarity, large and small range spatial dependence as well as various signal-to-noise ratios. Hence, we note that further practical distinctions between these various methods could be made depending on their applicability to these various spatial data scenarios. However, the comparison included here serves as a nice baseline case for method performance. Further research can develop case study competitions for these more complicated scenarios. Notably, each method was compared only in terms of predictive accuracy. Further comparisons could include estimation of underlying model parameters. The difficulty in comparing estimation, however, is that not all the methods use the same model structure. For example, NNGP uses an exponential covariance while Gapfill does not require a specified covariance structure. Hence, we leave the comparison of the parameter estimates to a future study. This comparison focused solely on spatial data. Hence, we stress that the results found here are applicable only to the spatial setting. However, spatiotemporal data are often considerably larger and more complex than spatial data. Many of the above methods have extensions to the space time setting (e.g., Gapfill is built directly for spatiotemporal settings). 
Further research is needed to compare these methods in the spatiotemporal setting. This material was based upon work supported by the National Science Foundation (NSF) under Grant Number DMS-1417856. Dr. Katzfuss was partially supported by NSF Grants DMS–1521676 and DMS–1654083. Dr. Gramacy and Furong Sun are partially supported by NSF Award #1621746. Dr. Finley was partially supported by NSF DMS-1513481, EF-1241874, EF-1253225, and National Aeronautics and Space Administration (NASA) Carbon Monitoring System (CMS) grants. Dr. Guhaniyogi is partially supported by ONR N00014-18-1-2741. Dr. Gerber and Dr. Furrer were partially supported by SNSF Grant 175529 and acknowledge the support by the University of Zurich Research Priority Program on Global Change and Biodiversity. Dr. Zammit-Mangion's research was supported by an Australian Research Council (ARC) Discovery Early Career Research Award, DE180100203. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the ARC, NSF or NASA. 13253_2018_348_MOESM1_ESM.pdf (165 kb) Supplementary Materials Additional details regarding the implementation of some of the methods to the training datasets. Anderson, C., Lee, D., and Dean, N. (2014), "Identifying clusters in Bayesian disease mapping," Biostatistics, 15, 457–469.CrossRefGoogle Scholar Banerjee, S., Carlin, B. P., and Gelfand, A. E. (2014), Hierarchical modeling and analysis for spatial data, Crc Press.Google Scholar Banerjee, S., Gelfand, A. E., Finley, A. O., and Sang, H. (2008), "Gaussian predictive process models for large spatial data sets," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70, 825–848.MathSciNetCrossRefzbMATHGoogle Scholar Barbian, M. H. and Assunção, R. M. (2017), "Spatial subsemble estimator for large geostatistical data," Spatial Statistics, 22, 68–88.MathSciNetCrossRefGoogle Scholar Bevilacqua, M., Faouzi, T., Furrer, R., and Porcu, E. (2016), "Estimation and Prediction using Generalized Wendland Covariance Function under Fixed Domain Asymptotics," arXiv:1607.06921v2. Bradley, J. R., Cressie, N., Shi, T., et al. (2016), "A comparison of spatial predictors when datasets could be very large," Statistics Surveys, 10, 100–131.MathSciNetCrossRefzbMATHGoogle Scholar Castrillon-Candás, J. E., Genton, M. G., and Yokota, R. (2016), "Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets," Spatial Statistics, 18, 105–124.MathSciNetCrossRefGoogle Scholar Cohn, D. A. (1996), "Neural Network Exploration Using Optimal Experimental Design," in Advances in Neural Information Processing Systems, Morgan Kaufmann Publishers, vol. 6(9), pp. 679–686.Google Scholar Cressie, N. (1993), Statistics for spatial data, John Wiley & Sons.Google Scholar Cressie, N. and Johannesson, G. (2006), "Spatial prediction for massive data sets," in Mastering the Data Explosion in the Earth and Environmental Sciences: Proceedings of the Australian Academy of Science Elizabeth and Frederick White Conference, Canberra, Australia: Australian Academy of Science, pp. 1–11.Google Scholar — (2008), "Fixed rank kriging for very large spatial data sets," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70, 209–226.MathSciNetCrossRefzbMATHGoogle Scholar Cressie, N. and Wikle, C. K. (2015), Statistics for spatio-temporal data, John Wiley & Sons.Google Scholar Dahlhaus, R. and Künsch, H. 
(1987), "Edge effects and efficient parameter estimation for stationary random fields," Biometrika, 74, 877–882.MathSciNetCrossRefzbMATHGoogle Scholar Datta, A., Banerjee, S., Finley, A. O., and Gelfand, A. E. (2016a), "Hierarchical nearest-neighbor Gaussian process models for large geostatistical datasets," Journal of the American Statistical Association, 111, 800–812.MathSciNetCrossRefGoogle Scholar — (2016b), "On nearest-neighbor Gaussian process models for massive spatial data," Wiley Interdisciplinary Reviews: Computational Statistics, 8, 162–171.MathSciNetCrossRefGoogle Scholar Datta, A., Banerjee, S., Finley, A. O., Hamm, N. A., Schaap, M., et al. (2016c), "Nonseparable dynamic nearest neighbor Gaussian process models for large spatio-temporal data with an application to particulate matter analysis," The Annals of Applied Statistics, 10, 1286–1316.MathSciNetCrossRefzbMATHGoogle Scholar Du, J., Zhang, H., and Mandrekar, V. S. (2009), "Fixed-domain asymptotic properties of tapered maximum likelihood estimators," Ann. Statist., 37, 3330–3361.MathSciNetCrossRefzbMATHGoogle Scholar Eidsvik, J., Shaby, B. A., Reich, B. J., Wheeler, M., and Niemi, J. (2014), "Estimation and prediction in spatial models with block composite likelihoods," Journal of Computational and Graphical Statistics, 23, 295–315.MathSciNetCrossRefGoogle Scholar Emery, X. (2009), "The kriging update equations and their application to the selection of neighboring data," Computational Geosciences, 13, 269–280.CrossRefGoogle Scholar Finley, A., Datta, A., and Banerjee, S. (2017), spNNGP: Spatial Regression Models for Large Datasets using Nearest Neighbor Gaussian Processes, r package version 0.1.1.Google Scholar Finley, A. O., Datta, A., Cook, B. C., Morton, D. C., Andersen, H. E., and Banerjee, S. (2018), "Efficient algorithms for Bayesian Nearest Neighbor Gaussian Processes," arXiv:1702.00434. Finley, A. O., Sang, H., Banerjee, S., and Gelfand, A. E. (2009), "Improving the performance of predictive process modeling for large datasets," Computational statistics & data analysis, 53, 2873–2884.MathSciNetCrossRefzbMATHGoogle Scholar Fuentes, M. (2007), "Approximate likelihood for large irregularly spaced spatial data," Journal of the American Statistical Association, 102, 321–331.MathSciNetCrossRefzbMATHGoogle Scholar Furrer, R. (2016), spam: SPArse Matrix, r package version 1.4-0.Google Scholar Furrer, R., Bachoc, F., and Du, J. (2016), "Asymptotic Properties of Multivariate Tapering for Estimation and Prediction," J. Multivariate Anal., 149, 177–191.MathSciNetCrossRefzbMATHGoogle Scholar Furrer, R., Genton, M. G., and Nychka, D. (2006), "Covariance tapering for interpolation of large spatial datasets," Journal of Computational and Graphical Statistics, 15, 502–523.MathSciNetCrossRefGoogle Scholar Furrer, R. and Sain, S. R. (2010), "spam: A Sparse Matrix R Package with Emphasis on MCMC Methods for Gaussian Markov Random Fields," J. Stat. Softw., 36, 1–25.CrossRefGoogle Scholar Gerber, F. (2017), gapfill: Fill Missing Values in Satellite Data, r package version 0.9.5.Google Scholar Gerber, F., Furrer, R., Schaepman-Strub, G., de Jong, R., and Schaepman, M. E. (2018), "Predicting missing values in spatio-temporal satellite data," IEEE Transactions on Geoscience and Remote Sensing, 56, 2841–2853.CrossRefGoogle Scholar Gneiting, T. and Katzfuss, M. (2014), "Probabilistic forecasting," Annual Review of Statistics and Its Application, 1, 125–151.CrossRefGoogle Scholar Gneiting, T. and Raftery, A. E. 
(2007), "Strictly proper scoring rules, prediction, and estimation," Journal of the American Statistical Association, 102, 359–378.MathSciNetCrossRefzbMATHGoogle Scholar Gramacy, R. and Apley, D. (2015), "Local Gaussian Process Approximation for Large Computer Experiments," Journal of Computational and Graphical Statistics, 24, 561–578.MathSciNetCrossRefGoogle Scholar Gramacy, R., Niemi, J., and Weiss, R. (2014), "Massively Parallel Approximate Gaussian Process Regression," Journal of Uncertainty Quantification, 2, 564–584.MathSciNetCrossRefzbMATHGoogle Scholar Gramacy, R. B. (2016), "laGP: Large-Scale Spatial Modeling via Local Approximate Gaussian Processes in R," Journal of Statistical Software, 72, 1–46.MathSciNetCrossRefGoogle Scholar Gramacy, R. B. and Haaland, B. (2016), "Speeding up neighborhood search in local Gaussian process prediction," Technometrics, 58, 294–303.MathSciNetCrossRefGoogle Scholar Guhaniyogi, R. and Banerjee, S. (2018), "Meta-kriging: Scalable Bayesian modeling and inference for massive spatial datasets," Technometrics.Google Scholar Guhaniyogi, R., Li, C., Savitsky, T. D., and Srivastava, S. (2017), "A Divide-and-Conquer Bayesian Approach to Large-Scale Kriging," arXiv preprint arXiv:1712.09767. Guinness, J. (2017), "Spectral Density Estimation for Random Fields via Periodic Embeddings," arXiv preprint arXiv:1710.08978. Guinness, J. and Fuentes, M. (2017), "Circulant embedding of approximate covariances for inference from Gaussian data on large lattices," Journal of Computational and Graphical Statistics, 26, 88–97.MathSciNetCrossRefGoogle Scholar Guyon, X. (1982), "Parameter estimation for a stationary process on a d-dimensional lattice," Biometrika, 69, 95–105.MathSciNetCrossRefzbMATHGoogle Scholar Heaton, M. J., Christensen, W. F., and Terres, M. A. (2017), "Nonstationary Gaussian process models using spatial hierarchical clustering from finite differences," Technometrics, 59, 93–101.MathSciNetCrossRefGoogle Scholar Higdon, D. (2002), "Space and space-time modeling using process convolutions," in Quantitative methods for current environmental issues, Springer, pp. 37–56.Google Scholar Hirano, T. and Yajima, Y. (2013), "Covariance tapering for prediction of large spatial data sets in transformed random fields," Annals of the Institute of Statistical Mathematics, 65, 913–939.MathSciNetCrossRefzbMATHGoogle Scholar Jurek, M. and Katzfuss, M. (2018), "Multi-resolution filters for massive spatio-temporal data," arXiv:1810.04200. Kang, E., Liu, D., and Cressie, N. (2009), "Statistical analysis of small-area data based on independence, spatial, non-hierarchical, and hierarchical models," Computational Statistics & Data Analysis, 53, 3016–3032.MathSciNetCrossRefzbMATHGoogle Scholar Kang, E. L. and Cressie, N. (2011), "Bayesian inference for the spatial random effects model," Journal of the American Statistical Association, 106, 972–983.MathSciNetCrossRefzbMATHGoogle Scholar Katzfuss, M. (2017), "A multi-resolution approximation for massive spatial datasets," Journal of the American Statistical Association, 112, 201–214.MathSciNetCrossRefGoogle Scholar Katzfuss, M. and Cressie, N. (2011), "Spatio-temporal smoothing and EM estimation for massive remote-sensing data sets," Journal of Time Series Analysis, 32, 430–446.MathSciNetCrossRefzbMATHGoogle Scholar Katzfuss, M. and Gong, W. (2017), "Multi-resolution approximations of Gaussian processes for large spatial datasets," arXiv:1710.08976. Katzfuss, M. and Hammerling, D. 
(2017), "Parallel inference for massive distributed spatial data using low-rank models," Statistics and Computing, 27, 363–375.MathSciNetCrossRefzbMATHGoogle Scholar Kaufman, C. G., Schervish, M. J., and Nychka, D. W. (2008), "Covariance tapering for likelihood-based estimation in large spatial data sets," Journal of the American Statistical Association, 103, 1545–1555.MathSciNetCrossRefzbMATHGoogle Scholar Kim, H.-M., Mallick, B. K., and Holmes, C. (2005), "Analyzing nonstationary spatial data using piecewise Gaussian processes," Journal of the American Statistical Association, 100, 653–668.MathSciNetCrossRefzbMATHGoogle Scholar Kleiber, W. and Nychka, D. W. (2015), "Equivalent kriging," Spatial Statistics, 12, 31–49.MathSciNetCrossRefGoogle Scholar Knorr-Held, L. and Raßer, G. (2000), "Bayesian detection of clusters and discontinuities in disease maps," Biometrics, 56, 13–21.CrossRefzbMATHGoogle Scholar Konomi, B. A., Sang, H., and Mallick, B. K. (2014), "Adaptive bayesian nonstationary modeling for large spatial datasets using covariance approximations," Journal of Computational and Graphical Statistics, 23, 802–829.MathSciNetCrossRefGoogle Scholar Lemos, R. T. and Sansó, B. (2009), "A spatio-temporal model for mean, anomaly, and trend fields of North Atlantic sea surface temperature," Journal of the American Statistical Association, 104, 5–18.MathSciNetCrossRefGoogle Scholar Liang, F., Cheng, Y., Song, Q., Park, J., and Yang, P. (2013), "A resampling-based stochastic approximation method for analysis of large geostatistical data," Journal of the American Statistical Association, 108, 325–339.MathSciNetCrossRefzbMATHGoogle Scholar Lindgren, F., Rue, H., and Lindström, J. (2011), "An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73, 423–498.MathSciNetCrossRefzbMATHGoogle Scholar Litvinenko, A., Sun, Y., Genton, M. G., and Keyes, D. (2017), "Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets," arXiv preprint arXiv:1709.04419. Liu, H., Ong, Y.-S., Shen, X., and Cai, J. (2018), "When Gaussian Process Meets Big Data: A Review of Scalable GPs," arXiv preprint arXiv:1807.01065. Minsker, S. (2015), "Geometric median and robust estimation in Banach spaces," Bernoulli, 21, 2308–2335.MathSciNetCrossRefzbMATHGoogle Scholar Minsker, S., Srivastava, S., Lin, L., and Dunson, D. B. (2014), "Robust and scalable Bayes via a median of subset posterior measures," arXiv preprint arXiv:1403.2660. Neelon, B., Gelfand, A. E., and Miranda, M. L. (2014), "A multivariate spatial mixture model for areal data: examining regional differences in standardized test scores," Journal of the Royal Statistical Society: Series C (Applied Statistics), 63, 737–761.MathSciNetCrossRefGoogle Scholar Nychka, D., Bandyopadhyay, S., Hammerling, D., Lindgren, F., and Sain, S. (2015), "A multiresolution Gaussian process model for the analysis of large spatial datasets," Journal of Computational and Graphical Statistics, 24, 579–599.MathSciNetCrossRefGoogle Scholar Paciorek, C. J., Lipshitz, B., Zhuo, W., Kaufman, C. G., Thomas, R. C., et al. (2015), "Parallelizing Gaussian Process Calculations In R," Journal of Statistical Software, 63, 1–23.CrossRefGoogle Scholar Rue, H., Martino, S., and Chopin, N. 
(2009), "Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71, 319–392.MathSciNetCrossRefzbMATHGoogle Scholar Rue, H., Martino, S., Lindgren, F., Simpson, D., Riebler, A., Krainski, E. T., and Fuglstad, G.-A. (2017), INLA: Bayesian Analysis of Latent Gaussian Models using Integrated Nested Laplace Approximations, r package version 17.06.20.Google Scholar Sang, H. and Huang, J. Z. (2012), "A full scale approximation of covariance functions for large spatial data sets," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74, 111–132.MathSciNetCrossRefGoogle Scholar Sang, H., Jun, M., and Huang, J. Z. (2011), "Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors," The Annals of Applied Statistics, 2519–2548.Google Scholar Schabenberger, O. and Gotway, C. A. (2004), Statistical methods for spatial data analysis, CRC press.Google Scholar Simpson, D., Lindgren, F., and Rue, H. (2012), "In order to make spatial statistics computationally feasible, we need to forget about the covariance function," Environmetrics, 23, 65–74.MathSciNetCrossRefGoogle Scholar Stein, M. L. (1999), Interpolation of Spatial Data, Springer-Verlag, some theory for Kriging.Google Scholar — (2013), "Statistical properties of covariance tapers," Journal of Computational and Graphical Statistics, 22, 866–885.MathSciNetCrossRefGoogle Scholar — (2014), "Limitations on low rank approximations for covariance matrices of spatial data," Spatial Statistics, 8, 1–19.MathSciNetCrossRefGoogle Scholar Stein, M. L., Chen, J., Anitescu, M., et al. (2013), "Stochastic approximation of score functions for Gaussian processes," The Annals of Applied Statistics, 7, 1162–1191.MathSciNetCrossRefzbMATHGoogle Scholar Stein, M. L., Chi, Z., and Welty, L. J. (2004), "Approximating likelihoods for large spatial data sets," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66, 275–296.MathSciNetCrossRefzbMATHGoogle Scholar Sun, Y., Li, B., and Genton, M. G. (2012), "Geostatistics for large datasets," in Advances and challenges in space-time modelling of natural events, Springer, pp. 55–77.Google Scholar Sun, Y. and Stein, M. L. (2016), "Statistically and computationally efficient estimating equations for large spatial datasets," Journal of Computational and Graphical Statistics, 25, 187–208.MathSciNetCrossRefGoogle Scholar Taylor-Rodriguez, D., Finley, A. O., Datta, A., Babcock, C., Andersen, H.-E., Cook, B. D., Morton, D. C., and Baneerjee, S. (2018), "Spatial Factor Models for High-Dimensional and Large Spatial Data: An Application in Forest Variable Mapping," arXiv preprint arXiv:1801.02078. Ton, J.-F., Flaxman, S., Sejdinovic, D., and Bhatt, S. (2017), "Spatial Mapping with Gaussian Processes and Nonstationary Fourier Features," arXiv preprint arXiv:1711.05615. Vapnik, V. (1995), The Nature of Statistical Learning Theory, New York: Springer Verlag.CrossRefzbMATHGoogle Scholar Varin, C., Reid, N., and Firth, D. (2011), "An overview of composite likelihood methods," Statistica Sinica, 5–42.Google Scholar Vecchia, A. V. (1988), "Estimation and model identification for continuous spatial processes," Journal of the Royal Statistical Society. Series B (Methodological), 297–312.Google Scholar Wang, D. and Loh, W.-L. 
(2011), "On fixed-domain asymptotics and covariance tapering in Gaussian random field models," Electron. J. Statist., 5, 238–269.MathSciNetCrossRefzbMATHGoogle Scholar Weiss, D. J., Atkinson, P. M., Bhatt, S., Mappin, B., Hay, S. I., and Gething, P. W. (2014), "An effective approach for gap-filling continental scale remotely sensed time-series," ISPRS J. Photogramm. Remote Sens., 98, 106–118.CrossRefGoogle Scholar Whittle, P. (1954), "On stationary processes in the plane," Biometrika, 434–449.Google Scholar Wikle, C. K., Cressie, N., Zammit-Mangion, A., and Shumack, C. (2017), "A Common Task Framework (CTF) for Objective Comparison of Spatial Prediction Methodologies," Statistics Views.Google Scholar Zammit-Mangion, A. and Cressie, N. (2018), "FRK: An R Package for Spatial and Spatio-Temporal Prediction with Large Datasets," arXiv preprint arXiv:1705.08105. Zammit-Mangion, A., Cressie, N., and Shumack, C. (2018), "On statistical approaches to generate Level 3 products from satellite remote sensing retrievals," Remote Sensing, 10, 155.CrossRefGoogle Scholar OpenAccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 1. Brigham Young UniversityProvoUSA Heaton, M.J., Datta, A., Finley, A.O. et al. JABES (2018). https://doi.org/10.1007/s13253-018-00348-w Received 30 November 2018 First Online 14 December 2018 DOI https://doi.org/10.1007/s13253-018-00348-w
Tue, 28 Jul 2015 A few months ago I wrote an article here called an ounce of theory is worth a pound of search and I have a nice followup. When I went looking for that article I couldn't find it, because I thought it was about how an ounce of search is worth a pound of theory, and that I was writing a counterexample. I am quite surprised to discover that I have several times discussed how a little theory can replace a lot of searching, and not vice versa, but perhaps that is because the search is my default. Anyway, the question came up on math StackExchange today: John has 77 boxes each having dimensions 3×3×1. Is it possible for John to build one big box with dimensions 7×9×11? OP opined no, but had no argument. The first answer that appeared was somewhat elaborate and outlined a computer search strategy which claimed to reduce the search space to only 14,553 items. (I think the analysis is wrong, but I agree that the search space is not too large.) I almost wrote the search program. I have a program around that is something like what would be needed, although it is optimized to deal with a few oddly-shaped tiles instead of many similar tiles, and would need some work. Fortunately, I paused to think a little before diving into the programming. For there is an easy answer. Suppose John solved the problem. Look at just one of the 7×11 faces of the big box. It is a 7×11 rectangle that is completely filled by 1×3 and 3×3 rectangles. But 7×11 is not a multiple of 3. So there can be no solution. Now how did I think of this? It was a very geometric line of reasoning. I imagined a 7×11×9 carton and imagined putting the small boxes into the carton. There can be no leftover space; every one of the 693 cells must be filled. So in particular, we must fill up the bottom 7×11 layer. I started considering how to pack the bottommost 7×11×1 slice with just the bottom parts of the small boxes and quickly realized it couldn't be done; there is always an empty cell left over somewhere, usually in the corner. The argument about considering just one face of the large box came later; I decided it was clearer than what I actually came up with. I think this is a nice example of the Pólya strategy "solve a simpler problem" from How to Solve It, but I was not thinking of that specifically when I came up with the solution. For a more interesting problem of the same sort, suppose you have six 2×2×1 slabs and three extra 1×1×1 cubes. Can you pack the nine pieces into a 3×3×3 box? I've seen this ad on the subway at least a hundred times, but I never noticed this oddity before: Specifically, check out the vertical alignment of those 'p's: Notice that it is not simply an unusual font. The height of the 'p' matches the other lowercase letters exactly. Here's how it ought to look: At first I thought the designer was going for a playful, informal logotype. Some of the other lawyers who advertise in the subway go for a playful, informal look. But it seemed odd in the context of the rest of the sign. As I wondered what happened here, a whole story unfolded in my mind. Here's how I imagine it went down: The 'p', in proper position, collided with the edge of the light-colored box, or overlapped it entirely, causing the serif to disappear into the black area.
The designer (Spivack's nephew) suggested enlarging the box, but there was not enough room. The sign must fit a standard subway car frame, so its size is prescribed. The designer then suggested eliminating "LAW OFFICES OF", or eliminating some of the following copy, or reducing its size, but Spivack refused to cede even a single line. "Millions for defense," cried Spivack, "but not one cent for tribute!" Spivack found the obvious solution: "Just move the 'p' up so it doesn't bump into the edge, stupid!" Spivack's nephew complied. "Looks great!" said Spivack. "Print it!" I have no real reason to believe that most of this is true, but I find it all so very plausible. [ Addendum: Noted typographic expert Jonathan Hoefler says "I'm certain you are correct." ] [Other articles in category /IT/typo] permanent link [ Notice: I originally published this report at the wrong URL. I moved it so that I could publish the June 2015 report at that URL instead. If you're seeing this for the second time, you might want to read the June article instead. ] A lot of the stuff I've written in the past couple of years has been on Mathematics StackExchange. Some of it is pretty mundane, but some is interesting. I thought I might have a little meta-discussion in the blog and see how that goes. These are the noteworthy posts I made in April 2015. Languages and their relation: help is pretty mundane, but interesting for one reason: OP was confused about a statement in a textbook, and provided a reference, which OPs don't always do. The text used the symbol !!\subset_\ne!!. OP had interpreted it as meaning !!\not\subseteq!!, but I think what was meant was !!\subsetneq!!. I dug up a copy of the text and groveled over it looking for the explanation of !!\subset_\ne!!, which is not standard. There was none that I could find. The book even had a section with a glossary of notation, which didn't mention !!\subset_\ne!!. Math professors can be assholes sometimes. Is there an operation that takes !!a^b!! and !!a^c!!, and returns !!a^{bc}!! is more interesting. First off, why is this even a reasonable question? Why should there be such an operation? But note that there is an operation that takes !!a^b!! and !!a^c!! and returns !!a^{b+c}!!, namely, multiplication, so it's plausible that the operation that OP wants might also exist. But it's easy to see that there is no operation that takes !!a^b!! and !!a^c!! and returns !!a^{bc}!!: just observe that although !!4^2=2^4!!, the putative operation (call it !!f!!) should take !!f(2^4, 2^4)!! and yield !!2^{4\cdot4} = 2^{16} = 65536!!, but it should also take !!f(4^2, 4^2)!! and yield !!4^{2\cdot2} = 4^4 = 256!!. So the operation is not well-defined. And you can take this even further: !!2^4!! can be written as !!e^{4\log 2}!!, so !!f!! should also take !!f(e^{2\log 4}, e^{2\log 4})!! and yield !!e^{4(\log 4)^2} \approx 2180.37!!. The key point is that the representation of a number, or even an integer, in the form !!a^b!! is not unique. (Jargon: "exponentiation is not injective".) You can compute !!a^b!!, but having done so you cannot look at the result and know what !!a!! and !!b!! were, which is what !!f!! needs to do. But if !!f!! can't do it, how can multiplication do it when it multiplies !!a^b!! and !!a^c!! and gets !!a^{b+c}!!? Does it somehow know what !!a!! is? No, it turns out that it doesn't need !!a!! in this case. There is something magical going on there, ultimately related to the fact that if some quantity is increasing by a factor of !!x!! every !!t!!
units of time, then there is some !!t_2!! for which it is exactly doubling every !!t_2!! units of time. Because of this there is a marvelous group homomorphism $$\log : \langle \Bbb R^+, \times\rangle \to \langle \Bbb R, +\rangle$$ which can change multiplication into addition without knowing what the base numbers are. In that thread I had a brief argument with someone who thinks that operators apply to expressions rather than to numbers. Well, you can say this, but it makes the question trivial: you can certainly have an "operator" that takes expressions !!a^b!! and !!a^c!! and yields the expression !!a^{bc}!!. You just can't expect to apply it to numbers, such as !!16!! and !!16!!, because those numbers are not expressions in the form !!a^b!!. I remembered the argument going on longer than it did; I originally ended this paragraph with a lament that I wasted more than two comments on this guy, but looking at the record, it seems that I didn't. Good work, Mr. Dominus. how 1/0.5 is equal to 2? wants a simple explanation. Very likely OP is a primary school student. The question reminds me of a similar question, asking why the long division algorithm is the way it is. Each of these is a failure of education to explain what division is actually doing. The long division answer is that long division is an optimization for repeated subtraction; to divide !!450\div 3!! you want to know how many shares of three cookies each you can get from !!450!! cookies. Long division is simply a notation for keeping track of removing !!100!! shares, leaving !!150!! cookies, then !!5\cdot 10!! further shares, leaving none. In this question there was a similar answer. !!1/0.5!! is !!2!! because if you have one cookie, and want to give each kid a share of !!0.5!! cookies, you can get out two shares. Simple enough. I like division examples that involve giving cookies to kids, because cookies are easy to focus on, and because the motivation for equal shares is intuitively understood by everyone who has kids, or who has been one. There is a general pedagogical principle that an ounce of examples is worth a pound of theory. My answer here is a good example of that. When you explain the theory, you're telling the student how to understand it. When you give an example, though, if it's the right example, the student can't help but understand it, and when they do they'll understand it in their own way, which is better than if you told them how. How to read a cycle graph? is interesting because hapless OP is asking for an explanation of a particularly strange diagram from Wikipedia. I'm familiar with the eccentric Wikipedian who drew this, and I was glad that I was around to say "The other stuff in this diagram is nonstandard stuff that the somewhat eccentric author made up. Don't worry if it's not clear; this author is notorious for that." In Expected number of die tosses to get something less than 5, OP calculated as follows: The first die roll is a winner !!\frac23!! of the time. The second roll is the first winner !!\frac13\cdot\frac23!! of the time. The third roll is the first winner !!\frac13\cdot\frac13\cdot\frac23!! of the time. Summing the series !!\sum_n \frac23\left(\frac13\right)^{n-1}n!!, we eventually obtain the answer, !!\frac32!!. The accepted answer does it this way also. But there's a much easier way to solve this problem. What we really want to know is: how many rolls before we expect to have seen one good one?
And the answer is: the expected number of winners per die roll is !!\frac23!!, expectations are additive, so the expected number of winners per !!n!! die rolls is !!\frac23n!!, and so we need !!n=\frac32!! rolls to expect one winner. Problem solved! I first discovered this when I was around fifteen, and wrote about it here a few years ago. As I've mentioned before, this is one of the best things about mathematics: not that it works, but that you can do it by whatever method that occurs to you and you get the same answer. This is where mathematics pedagogy goes wrong most often: it prescribes that you must get the answer by method X, rather than that you must get the answer by hook or by crook. If the student uses method Y, and it works (and if it is correct), that should be worth full credit. Bad instructors always say "Well, we need to test to see if the student knows method X." No, we should be testing to see if the student can solve problem P. If we are testing for method X, that is a failure of the test or of the curriculum. Because if method X is useful, it is useful because for some problems, it is the only method that works. It is the instructor's job to find one of these problems and put it on the test. If there is no such problem, then X is useless and it is the instructor's job to omit it from the curriculum. If Y always works, but X is faster, it is the instructor's job to explain this, and then to assign a problem for the test where Y would take more time than is available. I see now I wrote the same thing in 2006. It bears repeating. I also said it again a couple of years ago on math.se itself in reply to a similar comment by Brian Scott: If the goal is to teach students how to write proofs by induction, the instructor should damned well come up with problems for which induction is the best approach. And if even then a student comes up with a different approach, the instructor should be pleased. ... The directions should not begin [with "prove by induction"]. I consider it a failure on the part of the instructor if he or she has to specify a technique in order to give students practice in applying it. [Other articles in category /math/se] permanent link I presented this logic puzzle on Wednesday: There are two boxes on a table, one red and one green. One contains a treasure. The red box is labelled "exactly one of the labels is true". The green box is labelled "the treasure is in this box." Can you figure out which box contains the treasure? It's not too late to try to solve this before reading on. If you want, you can submit your answer here:
The treasure is in the red box
The treasure is in the green box
There is not enough information to determine the answer
Something else:
There were 506 total responses up to Fri Jul 3 11:09:52 2015 UTC; I kept only the first response from each IP address, leaving 451. I read all the "something else" submissions and where it seemed clear I recoded them as votes for "red", for "not enough information", or as spam. (Several people had the right answer but submitted "other" so they could explain themselves.) There was also one post that attempted to attack my (nonexistent) SQL database. Sorry, Charlie; I'm not as stupid as I look.
66.52%   300   red
25.72    116   not-enough-info
 3.55     16   green
 2.00      9   other
 1.55      7   spam
 0.44      2   red-with-qualification
 0.22      1   attack
100.00   451   TOTAL
One-quarter of respondents got the right answer, that there is not enough information given to solve the problem. Two-thirds of respondents said the treasure was in the red box.
This is wrong. The treasure is in the green box. Let me show you. I stated: The labels are as I said. Everything I told you was literally true. The treasure is definitely not in the red box. No, it is actually in the green box. (It's hard to see, but one of the items in the green box is the gold and diamond ring made in Vienna by my great-grandfather, which is unquestionably a real treasure.) So if you said the treasure must be in the red box, you were simply mistaken. If you had a logical argument why the treasure had to be in the red box, your argument was fallacious, and you should pause and try to figure out what was wrong with it. I will discuss it in detail below. The treasure is undeniably in the green box. However, correct answer to the puzzle is "no, you cannot figure out which box contains the treasure". There is not enough information given. (Notice that the question was not "Where is the treasure?" but "Can you figure out…?") (Fallacious) Argument A Many people erroneously conclude that the treasure is in the red box, using reasoning something like the following: Suppose the red label is true. Then exactly one label is true, and since the red label is true, the green label is false. Since it says that the treasure is in the green box, the treasure must really be in the red box. Now suppose that the red label is false. Then the green label must also be false. So again, the treasure is in the red box. Since both cases lead to the conclusion that the treasure is in the red box, that must be where it is. What's wrong with argument A? Here are some responses people commonly have when I tell them that argument A is fallacious: "If the treasure is in the green box, the red label is lying." Not quite, but argument A explicitly considers the possibility that the red label was false, so what's the problem? "If the treasure is in the green box, the red label is inconsistent." It could be. Nothing in the puzzle statement ruled this out. But actually it's not inconsistent, it's just irrelevant. "If the treasure is in the green box, the red label is meaningless." Nonsense. The meaning is plain: it says "exactly one of these labels is true", and the meaning is that exactly one of the labels is true. Anyone presenting argument A must have understood the label to mean that, and it is incoherent to understand it that way and then to turn around and say that it is meaningless! (I discussed this point in more detail in 2007.) "But the treasure could have been in the red box." True! But it is not, as you can see in the pictures. The puzzle does not give enough information to solve the problem. If you said that there was not enough information, then congratulations, you have the right answer. The answer produced by argument A is incontestably wrong, since it asserts that the treasure is in the red box, when it is not. "The conditions supplied by the puzzle statement are inconsistent." They certainly are not. Inconsistent systems do not have models, and in particular cannot exist in the real world. The photographs above demonstrate a real-world model that satisfies every condition posed by the puzzle, and so proves that it is consistent. "But that's not fair! You could have made up any random garbage at all, and then told me afterwards that you had been lying." Had I done that, it would have been an unfair puzzle. For example, suppose I opened the boxes at the end to reveal that there was no treasure at all. That would have directly contradicted my assertion that "One [box] contains a treasure". 
That would have been cheating, and I would deserve a kick in the ass. But I did not do that. As the photograph shows, the boxes, their colors, their labels, and the disposition of the treasure are all exactly as I said. I did not make up a lie to trick you; I described a real situation, and asked whether people they could diagnose the location of the treasure. (Two respondents accused me of making up lies. One said: There is no treasure. Both labels are lying. Look at those boxes. Do you really think someone put a treasure in one of them just for this logic puzzle? What can I say? I did put a treasure in a box just for this logic puzzle. Some of us just have higher standards.) "But what about the labels?" Indeed! What about the labels? The labels are worthless The labels are red herrings; the provide no information. Consider the following version of the puzzle: There are two boxes on a table, one red and one green. One contains a treasure. Which box contains the treasure? Obviously, the problem cannot be solved from the information given. Now consider this version: There are two boxes on a table, one red and one green. One contains a treasure. The red box is labelled "gcoadd atniy fnck z fbi c rirpx hrfyrom". The green box is labelled "ofurb rz bzbsgtuuocxl ckddwdfiwzjwe ydtd." One is similarly at a loss here. (By the way, people who said one label was meaningless: this is what a meaningless label looks like.) But then the janitor happens by. "Don't be confused by those labels," he says. "They were stuck on there by the previous owner of the boxes, who was an illiterate shoemaker who only spoke Serbian. I think he cut them out of a magazine because he liked the frilly borders." The point being that in the absence of additional information, there is no reason to believe that the labels give any information about the contents of the boxes, or about labels, or about anything at all. This should not come as a surprise to anyone. It is true not just in annoying puzzles, but in the world in general. A box labeled "fresh figs" might contain fresh figs, or spoiled figs, or angry hornets, or nothing at all. What is the Name of this Book? Why doesn't every logic puzzle fall afoul of this problem? I said as part of the puzzle conditions that there was a treasure in one box. For a fair puzzle, I am required to tell the truth about the puzzle conditions. Otherwise I'm just being a jerk. Typically the truth or falsity of the labels is part of the puzzle conditions. Here's a typical example, which I took from Raymond Smullyan's What is the name of this book? (problem 67a): … She had the following inscriptions put on the caskets: THE PORTRAIT IS IN THIS CASKET THE PORTRAIT IS NOT IN THIS CASKET THE PORTRAIT IS NOT IN THE GOLD CASKET Portia explained to the suitor that of the three statements, at most one was true. Which casket should the suitor choose [to find the portrait]? Notice that the problem condition gives the suitor a certification about the truth of the labels, on which he may rely. In the quotation above, the certification is in boldface. A well-constructed puzzle will always contain such a certification, something like "one label is true and one is false" or "on this island, each person always lies, or always tells the truth". I went to What is the Name of this Book? to get the example above, and found more than I had bargained for: problem 70 is exactly the annoying boxes problem! 
Smullyan says: Good heavens, I can take any number of caskets that I please and put an object in one of them and then write any inscriptions at all on the lids; these sentences won't convey any information whatsoever. Had I known ahead of time that Smullyan had treated the exact same topic with the exact same example, I doubt I would have written this post at all. But why is this so surprising? 16 people correctly said that the treasure was in the green box. This has to be counted as a lucky guess, unacceptable as a solution to a logic puzzle. One respondent referred me to a similar post on lesswrong. I did warn you all that the puzzle was annoying. I started writing this post in October 2007, and then it sat on the shelf until I got around to finding and photographing the boxes. A triumph of procrastination! [ Addendum 20150911: Steven Mazie has written a blog article about this topic, A Logic Puzzle That Teaches a Life Lesson. ] [Other articles in category /math/logic] permanent link Wed, 01 Jul 2015 Here is a logic puzzle. I will present the solution on Friday. Starting on 2015-07-03, the solution will be here.
Chemical reactions in white dwarfs and carbon allotropes White dwarfs consist mostly of carbon and oxygen. In my opinion, they are too hot to contain these elements in molecular form, and hence chemical reactions do not happen (I think any resulting CO2 would also decompose under such high temperature and pressure). I have two questions related to that: 1) After a few billion years, can the WD cool down enough to sustain chemical reactions which will result in the production of CO2? 2) Given the very high pressure and temperature, after a few billion years what will be the dominant allotrope of carbon? (Do we have, in the future, diamonds in the sky? ;) Given the extreme density of white dwarfs (I think the density of white dwarfs won't change over time), I strongly doubt that either scenario is possible. – Knu8 White dwarfs are objects the size of the Earth, but with a mass more similar to the Sun. Typical internal densities are $10^{9}$ to $10^{11}$ kg/m$^{3}$. White dwarfs are born as the contracting core of asymptotic giant branch stars that do not quite get hot enough to initiate carbon fusion. They have initial central temperatures of $\sim 10^{8}$ K, which swiftly (within millions of years) drop to a few $10^{7}$ K due to neutrino emission. An important point to make is that after that, the interior of a white dwarf is almost isothermal. This is because the degenerate electrons that provide the pressure support also have extremely long mean free paths for scattering interactions and thus the thermal conductivity is extremely high. The exterior of the white dwarf is a factor of 100 cooler than the interior. The temperature drop happens over a very thin shell (perhaps 1% of the outer part of the white dwarf), where the degenerate gas transitions to becoming non-degenerate at the surface. This outer layer acts like an insulating blanket and makes the cooling timescales of white dwarfs very long. From interior temperatures of say $3\times 10^{7}$ K it takes a billion years or so to cool to $5\times 10^{6}$ K and then another 10 billion years to cool to around $10^{6}$ K, and such white dwarfs, which must have arisen from the first stars that were born with progenitor masses of 5 to 8 solar masses, will be the coolest white dwarfs in the Galaxy. At these temperatures there is no possibility of the carbon undergoing chemical reactions; it is completely ionised, and the carbon and oxygen nuclei are in a crystalline lattice at these densities, surrounded by a degenerate electron gas. There is evidence that crystallisation does take place, via asteroseismology of some pulsating massive white dwarfs. The details of the crystalline structure in these objects are unknown, and the subject of theoretical investigation. However, diamond is pure carbon and white dwarfs are expected to be a carbon/oxygen mixture. A further complication is that the process of crystallisation may be accompanied by gravitational separation of the carbon and oxygen, so that the inner core is more oxygen-rich than the outer core. Original ideas were that the crystalline form would be body-centred cubic (bcc), but other more complex possibilities are opened up by the mixture of carbon and oxygen. bcc carbon would be a new allotrope of carbon and not like diamond - it is a denser way of arranging the nuclei. EDIT: To answer a point in the comments.
Even if you were to wait trillions of years and allow white dwarfs to cool to the thousands or even hundreds of degrees that you might think would allow electrons to recombine and chemistry to occur, that is not how it works. In the degenerate electron gas, the typical Fermi energy of the electrons is an MeV or so, compared with the eV-keV of bound electron states, and this is completely independent of the temperature. So the high electron number densities ensure that they will never recombine with the carbon nuclei (a theory first developed by Kothari 1938). – Rob Jeffries
"bcc carbon" - carbon never stops surprising us. IMO, it is the most fascinating element. – Knu8 May 6 '16 at 12:33
Given enough time, white dwarfs will eventually cool down to temperatures in the range of a few 100 K. Are chemical reactions feasible then? – Knu8 May 6 '16 at 13:15
@Knu8 Hypothetical eschatology is not my field. To cool that far would take more than trillions of years. However, at these pressures I do not think the electrons and ions can ever recombine to give you chemistry. – Rob Jeffries May 6 '16 at 14:02
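To put rough numbers on the Fermi-energy claim above, here is a minimal back-of-the-envelope sketch (not part of the original answer). It assumes a fully ionised carbon/oxygen mixture with about two nucleons per electron (mu_e ≈ 2); the constants are standard SI values.

```python
import numpy as np

# Physical constants (SI)
hbar = 1.055e-34      # J s
m_e = 9.109e-31       # kg (electron mass)
m_u = 1.661e-27       # kg (atomic mass unit)
c = 2.998e8           # m/s
MeV = 1.602e-13       # J per MeV

def fermi_energy_MeV(rho, mu_e=2.0):
    """Kinetic Fermi energy of a degenerate electron gas at mass density rho
    (kg/m^3), assuming mu_e nucleons per electron (~2 for ionised C/O)."""
    n_e = rho / (mu_e * m_u)                    # electron number density, m^-3
    p_F = hbar * (3 * np.pi**2 * n_e) ** (1/3)  # Fermi momentum
    # Relativistic dispersion relation; subtract rest-mass energy
    E_F = np.sqrt((p_F * c)**2 + (m_e * c**2)**2) - m_e * c**2
    return E_F / MeV

for rho in (1e9, 1e10, 1e11):
    print(f"rho = {rho:.0e} kg/m^3  ->  E_F ~ {fermi_energy_MeV(rho):.2f} MeV")
# Roughly 0.1-1.5 MeV across the quoted density range: far above the eV-keV
# scale of bound atomic states, and independent of temperature, so
# recombination (and hence chemistry) is blocked.
```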
Growth kinetics and pathogenicity of Photorhabdus luminescens subsp. akhurstii SL0708 María Teresa Orozco-Hidalgo1, Balkys Quevedo-Hidalgo2 & A. Sáenz-Aponte1 Photorhabdus luminescens subsp. akhurstii SL0708 (Enterobacteriaceae) is a symbiont of the entomopathogenic nematode (EPN) Heterorhabditis indica SL0708 (Nematoda: Rhabditida), used for insect pest biological control. In the present study, the growth kinetics of P. luminescens subsp. akhurstii SL0708 were evaluated considering growth and metabolic phases (phase I, intermediate phase, phase II), as well as pathogenicity. The study can be useful in determining bacterium feeding times in H. indica SL0708 production in liquid culture media. The logarithmic phase of growth of the bacterium lasted from 0 to 24 h, with a specific growth rate of 0.21 h⁻¹; during this phase, bacteria in metabolic phase I were detected. Maximum bioluminescence was registered at 24 h (3437 luminescence AU). Finally, it was evidenced that the bacterial metabolic phase had an effect on the mortality rate of greater wax moth, Galleria mellonella L., larvae. Moreover, biochemical tests were the same for all P. luminescens subsp. akhurstii SL0708 sampling times. This research is particularly relevant, since no reports are available on this bacterial isolate in Colombia. In the future, this will allow massive H. indica SL0708 production, because the medium, when pre-incubated with the symbiont, provides essential nutrients for EPN development and reproduction. For integrated insect pest management, several investigations have demonstrated that the use of entomopathogenic nematodes (EPNs) is an effective alternative, because they control different pest species and are safe for humans and the environment (Devi and Dhrubajyoti 2017). Heterorhabditis is one of the most studied genera for EPN-based insect biological control. In Colombia, Heterorhabditis indica SL0708 was isolated from guadua bamboo soils in Valle del Cauca and has demonstrated high efficacy in the management of various pests in crops of economic importance (San-Blas et al. 2019). Given its use in pest management, in vitro EPN production has gained importance in recent years, since it is necessary to meet the demand for the biological control agent in the field (Abd-Elgawad et al. 2017). This process consists of designing culture media to supply the EPN's nutritional requirements. The bacterial symbiont must digest complex compounds in the media so that the nematode can use them to complete its life cycle (Cho et al. 2011). This production strategy presents various advantages in comparison to traditional in vivo EPN production; it is less laborious and easier to scale up to industrial levels (Inman et al. 2012), in pursuit of infective juvenile (IJ) yields high enough to supply their use in different Colombian crops. Photorhabdus luminescens SL0708 is a symbiotic bacterium present in the H. indica SL0708 IJs. This bacterium is responsible for the death of the EPN's insect host. P. luminescens is a facultatively anaerobic, Gram-negative bacillus belonging to the luminescent Enterobacteriaceae family (Stock et al. 2017). Its life cycle can be divided into three stages: mutualistic association with the IJ, insect pathogen, and food source for the nematode (Devi and Dhrubajyoti 2017). To mass produce H. indica SL0708, its bacterial symbiont P. luminescens SL0708 must be mass produced as well. To grow P. luminescens, it must first grow in a nutrient medium up to the stationary phase.
After approximately 24 h, IJs suspended in an enriched nutrient broth are inoculated into the bacterial culture. To check the entomopathogenicity, the greater wax moth, Galleria mellonella Linnaeus (Lepidoptera: Pyralidae), is infected by the nematodes along with their symbiont; its death occurs within 48 h, and the insect pathogenicity of the H. indica-P. luminescens combination is thereby confirmed (Salazar-Gutiérrez et al. 2017). Culturing P. luminescens in vitro has allowed the identification of different phases associated with its metabolism and pathogenicity. Metabolic phase I is associated with the insect and IJ intestines. It is characterized by the production of an array of pathogenic factors: toxin complexes (Tc's), Photorhabdus insect-related (Pir) toxins, makes caterpillars floppy (Mcf) toxins, Photorhabdus virulence cassettes (PVC), and bioluminescence (Hu and Webster 2000; Lang 2010; Nouh and Hussein 2015). Phase II is the other metabolic phase related to in vitro bacterium culture. It is characterized by loss of the pathogenicity factors associated with phase I. Similarly, an intermediate phase has been identified between the two phases associated with in vitro culture, where the bacterium starts to lose its pathogenic factors (Han and Ehlers 2001). The change from phase I to the intermediate phase and phase II follows nourishment depletion and therefore entails a loss of the capability to associate with the nematode when it enters the IJ stage. The latter directly affects pathogenicity, because without bacteria the nematode cannot kill the insect host (Turlin et al. 2006). In vitro bacterium characterization has been performed on NBTA media (nutrient agar supplemented with bromothymol blue and triphenyltetrazolium chloride), where bacteria in phase I are bioluminescent in the dark. Additionally, they grow into blue-green round colonies, as they absorb the bromothymol blue stain, with a mucous or creamy texture. Phase II bacteria do not present bioluminescence in the dark and have reddish-brown colonies with a sticky texture (Nouh and Hussein 2015). Phases can also be verified by qualitative enzyme activity tests, such as lipolytic activity (tributyrin agar media), proteolytic activity (egg yolk and gelatin media), and hemolytic activity (media with sheep blood), all of which are positive for phase I and negative for phase II (Salazar-Gutiérrez et al. 2017). Accordingly, to produce nematodes in vitro, the bacterium must be in phase I. It is therefore necessary to establish when the other phases take place, thus allowing the IJ demand for use in the field to be met. Therefore, the present study aimed to evaluate P. luminescens subsp. akhurstii SL0708 growth kinetics and pathogenicity in H. indica SL0708 in vitro production media. The culture media are as follows: NBTA agar medium: 0.025 g/l bromothymol blue, 0.04 g/l 2,3,5-triphenyltetrazolium chloride, 20 g/l nutrient agar, pH 7. LB medium (Luria Bertani): 10 g/l NaCl, 5 g/l yeast extract, 10 g/l tryptone, 12 g/l agar-agar, pH 7 (Sezonov et al. 2007). H. indica SL0708 in vitro liquid production media: 10 g/l yeast extract, 10 g/l peptone, 3 g/l soy flour, 30 g/l soy oil, 6 egg yolk, 3 g/l glucose, 4 g/l NaCl, 0.3 g/l CaCl2, 0.2 g/l MgSO4, 0.05 g/l FeSO4, 0.35 g/l KCl, pH 7 (Johnigk et al. 2004). Microorganism and inoculum The strain of the bacterial symbiont, P. luminescens SL0708, was obtained from the collection of microorganisms of Pontificia Universidad Javeriana (CMPUJ), conserved in glycerol stocks at − 80 °C.
It was suspended in LB and NBTA media and incubated for 8 h at 28 °C before inoculation; 100-ml Erlenmeyer flasks were filled with 90 ml of H. indica SL0708 in vitro production medium. The medium was inoculated with 10 ml of P. luminescens subsp. akhurstii SL0708 culture containing bioluminescent colonies, corresponding to approximately 10⁸ cells per ml. Erlenmeyer flasks with inoculum were incubated at 28 °C and 150 rpm for 24 h.
Batch fermentation
Eighteen milliliters of sterile production medium were placed into a 100-ml flask and inoculated with 2 ml of P. luminescens subsp. akhurstii SL0708 inoculum. The medium was incubated at 28 °C, and flasks were shaken in an orbital shaker incubator (New Brunswick Scientific Innova 42, Thermo Fisher Scientific, Waltham, MA, USA) at 150 rpm for 36 h. The biomass content of each flask was monitored by sampling at 0, 8, 12, 24, 32 and 36 h (sampling stopped at 36 h because, by this time, the kinetics of P. luminescens subsp. akhurstii SL0708 allow the time of infective juvenile inoculation into in vitro H. indica SL0708 cultures to be established). Cells were separated from the medium by centrifugation (Sorvall RC-6 Plus, Thermo Scientific, Waltham, MA, USA) at 10,800×g for 20 min at 4 °C. The supernatant was used to evaluate pH and reducing sugar concentration, expressed as glucose equivalents. Reducing sugar concentration was quantified by the 3,5-dinitrosalicylic acid (DNS) technique (Miller 1959), using an Evolution 60 UV–VIS spectrophotometer (Thermo Scientific, Waltham, MA, USA). Values were determined from a 0.5 to 2 g/l glucose standard curve.
Growth and bioluminescence analyses
Due to the presence of microemulsion droplets in the system, biomass concentration could not be quantified by spectrophotometric methods. Microbial growth was therefore determined from colonies formed (colony-forming units, CFU) on NBTA agar plates. Colonies were described based on their morphology, color and bioluminescence for each collected sample. Each experiment was repeated three times. The bioprocess parameters doubling time (td) and specific growth rate (μ) were calculated from the experimental data (Eqs. 1–3) during the logarithmic growth phase (0 to 24 h). The specific growth rate (μ, h−1) of the microorganism was calculated from Eq. (1):
$$\frac{dX}{dt}=\mu X \qquad (1)$$
where X is the biomass (CFU/ml) at time t (h). μ was obtained by integrating Eq. (1) with the initial condition X = X0 at t = t0:
$$\mu =\frac{\ln X-\ln X_0}{t-t_0} \qquad (2)$$
In practice, μ was determined by plotting ln(X/X0) versus time. The doubling time (td), the time required for the microorganism to double its population, was calculated from Eq. (3):
$$t_d=\frac{\ln 2}{\mu} \qquad (3)$$
Bioluminescence was quantified in a FLUOstar Optima luminescence plate reader (BMG Labtech, Germany) with 355-nm excitation and 460-nm emission filters. To this end, 250 μl of fresh culture medium were placed in a 96-well Costar plate. Results are reported as luminescence arbitrary units (LAU).
Bacterium metabolic phases
Phase I was identified for the bacterium during the growth kinetics, taking into account colony morphology on NBTA medium, bioluminescence and pathogenicity (Table 1).
Table 1 Photorhabdus luminescens subsp. akhurstii SL0708 metabolic characteristics
To determine biochemical characteristics at different times of P. luminescens subsp. akhurstii SL0708 culture, an API 20NE strip (bioMérieux, Inc., Durham, NC) was used following the manufacturer's instructions.
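As a complement to Eqs. (1)–(3) above, the sketch below shows one way the specific growth rate and doubling time could be computed from CFU counts sampled during the logarithmic phase. It is a minimal illustration only; the CFU values are hypothetical placeholders rather than the measured data, and the study's own calculations may have been carried out differently.

```python
import numpy as np

# Hypothetical CFU/ml counts during the logarithmic phase (0-24 h);
# placeholders for illustration, not the values measured in this study.
t = np.array([0.0, 8.0, 12.0, 24.0])          # sampling times (h)
cfu = np.array([2.4e7, 1.2e8, 4.0e8, 5.8e9])  # biomass X (CFU/ml)

# Eq. (2): mu is the slope of ln(X/X0) versus time during the log phase.
y = np.log(cfu / cfu[0])
mu, _intercept = np.polyfit(t, y, 1)          # specific growth rate (h^-1)

# Eq. (3): doubling time.
td = np.log(2) / mu

print(f"mu = {mu:.2f} h^-1, td = {td:.1f} h")
```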
Galleria mellonella pathogenicity assays
Bacterial colonies were obtained from NBTA cultures at 24, 48, 72 and 96 h and re-suspended in 0.85% (w/v) saline solution at 1 × 10⁴ CFU/ml. G. mellonella last-instar larvae were first disinfected with 1% hypochlorite and rinsed with distilled water. Larvae were then injected with 10 μl of the culture suspension using a 1-ml BD Ultra-Fine syringe and incubated at 28 °C for 96 h. Mortality percentage, color and larval bioluminescence were recorded every 24 h. Results are presented as mean ± SD (n = 3), with three replicates per fermentation; the mortality percentage is the average of three replicates ± SD. To determine statistical differences in mortality percentages, a one-way ANOVA was performed with data from bacterial infections at 24, 48, 72 and 96 h.
Growth kinetics, bacterium metabolic phases, and bioluminescence
P. luminescens subsp. akhurstii SL0708 growth in production medium, reducing sugar consumption and pH are presented in Fig. 1. Up to 24 h, a logarithmic growth phase was observed, with a specific growth rate of 0.21 h−1 (R2 = 0.99) and a doubling time (td) of 3.3 h. The biomass in this phase ranged between 2.4 × 10⁷ CFU/ml and 5.8 × 10⁹ CFU/ml. After 24 h, the biomass remained constant up to 36 h (stationary phase), with a biomass yield on glucose (Yx/s) of 2.4 × 10¹² CFU/g. At this time, 77.4% of the reducing sugars, equivalent to the available glucose in the medium, had been consumed. The pH tended to increase toward alkalinity from the beginning to the end of the culture; possibly, the microorganism used available amino acids and proteins as an energy source and, consequently, ammonium ions were produced (Belur et al. 2013). The specific growth rate (μ) differs from previously reported values (μ = 0.36 h−1, td = 2.1 h) (Singh et al. 2012; Belur et al. 2013), which could be attributed to differences in media composition, culture conditions and bacterial isolates.
Growth kinetics of Photorhabdus luminescens subsp. akhurstii SL0708
According to the data described in Table 1, the P. luminescens subsp. akhurstii SL0708 metabolic phases were determined as follows: phase I from 0 to 36 h, intermediate phase between 36 and 72 h, and phase II from 72 to 96 h (Fig. 2). The maximum bioluminescence was reached at 24 h (3,436.7 LAU), followed by a 21% decrease to 2,709.8 LAU at 36 h, corresponding to the stationary phase. These results agree, for the growth phase, with those of Belur et al. (2013), who observed very high bioluminescence in a defined medium during the early stationary phase that was maintained throughout the stationary phase. In contrast, they differ from other reports demonstrating minimal bioluminescence until cells reach the late logarithmic or stationary phase (Schmitz et al. 1999).
Photorhabdus luminescens subsp. akhurstii SL0708 luminescence arbitrary units (LAU)
Likewise, gene expression characteristic of the intermediate phase and phase II has been described, resulting in a more active metabolism and greater production of oxidative stress and storage proteins; these mechanisms facilitate adaptation to new conditions, favoring an increase in growth rate (Turlin et al. 2006). Further, phase II shows reduced dye absorption, pigmentation, production of antibiotic substances and degradative enzymes, occurrence of crystalline inclusion proteins, and bioluminescence (Ffrench-Constant et al. 2003).
Possibly, this reflects a strategy that allows the bacteria to adapt to more than one environment, especially environments of low osmotic strength or anoxia. When glucose was added to the EPN production medium, it had a positive effect on phase I maintenance, which is of interest for production processes. Furthermore, constant supplementation with this carbohydrate has been shown to significantly increase biomass production (up to twofold) and antibiotic production (by 140%) in phase I (Jeffke et al. 2000). This not only hinders contamination of the culture media but also benefits the recovery percentage and final IJ yield, since a greater bacterial biomass in phase I results in a higher accumulation of the "feeding signal" (Gil et al. 2002). This molecule induces the opening of the IJ's mouth and anus to start the feeding process; moreover, it prompts the IJ to molt to the next stage and resume its life cycle, resulting in new IJ generations (Aumann and Ehlers 2001).
Bacterial metabolic phase I is required for EPN production, particularly because of four metabolic characteristics: (1) the capacity to degrade the media, via lytic enzymes, into assimilable compounds needed by the IJs, notably sterols; (2) the secretion of "feeding signals" that induce development and the production of new IJ generations; (3) the capacity to establish symbiosis, that is, to re-associate with and adhere to the nematode's intestine; and (4) the secretion of antibiotic substances that prevent contamination of the media (Ehlers 2001; Salazar-Gutiérrez et al. 2017). Consequently, maintaining phase I during in vitro EPN production is essential to increase the recovery percentage and to obtain a high IJ yield (Stock et al. 2017). This can be achieved with strictly controlled culture conditions, optimized media composition and knowledge of the growth kinetics of the bacterial isolate in the EPN production media. In this study, under the evaluated culture conditions (media composition, batch culture, 28 °C, 150 rpm, 20% effective working volume), glucose was observed to be a determinant factor in phase I maintenance. Thus, IJ inoculation should be performed before glucose is totally consumed, i.e., up to 36 h. Within this window, the medium has been degraded into compounds that the EPNs can assimilate, the "feeding signal" has accumulated, favoring EPN development, and the bacteria present are capable of establishing symbiosis with the new IJ generations (Cho et al. 2011).
Regarding growth kinetics and metabolic phases, previous reports placed phase I in the late exponential phase or at the start of the stationary phase, suggesting that population density was a determinant factor in the transition from phase I to the intermediate phase or phase II (Aumann and Ehlers 2001; Yoo et al. 2001; Cho et al. 2011). However, in this study, metabolic phase I was observed from the beginning of the exponential phase (Fig. 1). This suggests that, for the medium studied, the evaluated conditions and this bacterial isolate, the relationship between population density and metabolic phase was primarily influenced by the composition of the in vitro H. indica SL0708 production medium.
P. luminescens subsp. akhurstii SL0708 pathogenicity
Bacteria obtained from cultures at 24, 48, 72 and 96 h all produced 100% G. mellonella larval mortality during the first 24 h of exposure (Fig. 3).
No significant differences in pathogenicity were observed among bacteria from cultures at 24, 48, 72 and 96 h, nor across the infection time course, i.e., 24, 48, 72 and 96 h of exposure (F = 0.934; df = 15; p = 0.5339).
Galleria mellonella larvae mortality percentage over the course of 4 days after infection with bacteria obtained from cultures at 24 (black circle), 48 (black square), 72 (black triangle) and 96 (black star) hours
Likewise, the signs observed in G. mellonella were the same for all evaluated times, consistent with previous reports for this isolate (Saenz-Aponte et al. 2014). Moreover, independently of its metabolic phase in the culture medium, the bacterium was pathogenic when injected into the larva. This could be attributed to the capacity of the bacterium to revert its metabolic phase, from phase II to phase I, once inside the controlled environment of the larva, whose specific composition favors the expression of metabolites characteristic of phase I (Turlin et al. 2006). The bacterium presented the same biochemical profile regardless of culture time (24, 48, 72 and 96 h) and metabolic phase (phase I, intermediate phase or phase II), as evidenced by the API 20NE test results (Table 2). These results agree with previous reports for this isolate (Saenz-Aponte et al. 2014). Enzyme production could have been induced by the substrates present in the test, or the bacterium may simply not vary in these biochemical characteristics with culture conditions.
Table 2 Photorhabdus luminescens subsp. akhurstii SL0708 biochemical characteristics (API 20NE) for bacteria obtained at 24, 48, 72, and 96 h of culture
P. luminescens subsp. akhurstii was cultured in H. indica SL0708 in vitro liquid production medium, which allowed establishing that the growth phase was directly related to bioluminescence. On the other hand, the change in bacterial metabolic phase had no effect on pathogenicity toward G. mellonella larvae. Based on these results, inoculation at 36 h of culture would likely ensure the highest H. indica SL0708 IJ yield and recovery percentage. Therefore, given the importance of glucose as a substrate for maintaining metabolic phase I, future studies should evaluate higher glucose concentrations, or the addition of a carbohydrate source at different time points of the process, in H. indica SL0708 IJ culture media, to prolong or sustain this metabolic phase and plausibly obtain greater recovery and IJ yield at the end of the process in comparison with in vivo results.
All data of the study are presented in the manuscript, and the materials used in this study are of high quality and grade.
Abd-Elgawad MM, Askary TH, Coupland J (2017) Biocontrol agents: entomopathogenic and slug parasitic nematodes. CAB International, Wallingford Aumann J, Ehlers RU (2001) Physico-chemical properties and mode of action of a signal from the symbiotic bacterium Photorhabdus luminescens inducing dauer juvenile recovery in the entomopathogenic nematode Heterorhabditis bacteriophora. Nematology 3(8):849–853. https://doi.org/10.1163/156854101753625344 Belur PD, Floyd LI, Leonard DH (2013) Determination of specific oxygen uptake rate of Photorhabdus luminescens during submerged culture in lab scale bioreactor. Biocontrol Sci Tech 23(12):1458–1468.
https://doi.org/10.1080/09583157.2013.840361 Cho CH, Kyung SW, Gaugler R, Sun KY (2011) Submerged monoxenic culture medium development for Heterorhabditis bacteriophora and its symbiotic bacterium Photorhabdus luminescens: protein sources. J Microbiol Biotechnol 21(8):869–873. https://doi.org/10.4014/jmb.1010.10055 Devi G, Dhrubajyoti N (2017) Entomopathogenic nematodes: a tool in biocontrol of insect pests of vegetables-a review. Agric Rev 38(02). https://doi.org/10.18805/ag.v38i02.7945 Ehlers RU (2001) Mass production of entomopathogenic nematodes for plant protection. Appl Microbiol Biotechnol 56(5–6):623–633. https://doi.org/10.1007/s002530100711 Ffrench-Constant R, Waterfield N, Daborn P, Joyce S, Bennett H, Au C, Dowling D, Boundy S, Reynolds S, Clarke D. 2003. Photorhabdus: towards a functional genomic analysis of a symbiont and pathogen. FEMS Microbiology Reviews, 26 (5): 433–456, https://doi.org/10.1111/j.1574-6976.2003.tb00625.x. Gil GH, Choo HY, Gaugler BR (2002) Enhancement of entomopathogenic nematode production in in-vitro liquid culture of Heterorhabditis bacteriophora by fed-batch culture with glucose supplementation. Appl Microbiol Biotechnol 58(6):751–755. https://doi.org/10.1007/s00253-002-0956-1 Han R, Ehlers RU (2001) Effect of Photorhabdus luminescens phase variants on the in vivo and in vitro development and reproduction of the entomopathogenic nematodes Heterorhabditis bacteriophora and Steinernema carpocapsae. FEMS Microbiol Ecol 35:239–247 Hu K, Webster JM (2000) Antibiotic production in relation to bacterial growth and nematode development in Photorhabdus-Heterorhabditis infected Galleria mellonella larvae. FEMS Microbiol Lett 189(2):219–223. https://doi.org/10.1111/j.1574-6968.2000.tb09234.x Inman FL, Sunita S, Holmes LD (2012) Mass production of the beneficial nematode Heterorhabditis bacteriophora and its bacterial symbiont Photorhabdus luminescens. Indian J Microbiol 52(3):316–324. https://doi.org/10.1007/s12088-012-0270-2 Jeffke T, Jende D, Mätje C, Ehlers RU, Berthe-Corti L (2000) Growth of Photorhabdus luminescens in batch and glucose fed-batch culture. Appl Microbiol Biotechnol 54(3):326–330. https://doi.org/10.1007/s002530000399 Johnigk SA, Ecke F, Poehling M, Ehlers RU (2004) Liquid culture mass production of biocontrol nematodes, Heterorhabditis bacteriophora (Nematoda: Rhabditida): improved timing of dauer juvenile inoculation. Appl Microbiol Biotechnol 64(5):651–658. https://doi.org/10.1007/s00253-003-1519-9 Lang AE (2010) Photorhabdus luminescens toxins. Science 327(February):1139–1143. https://doi.org/10.1126/science.1184557 Miller GL (1959) Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem 31(3):426–428. https://doi.org/10.1021/ac60147a030 Nouh GM, Hussein M (2015) Pathogenicity of Photorhabdus luminescens , a symbiotic bacterium of an entomopathogenic nematode against Galleria mellonella L. and Spodoptera littoralis (Boisd.). Egypt J Zool 63:91–97. https://doi.org/10.12816/0014493 Saenz-Aponte A, Pulido OF, Jaramillo C (2014) Isolation and characterization of bacterial symbiont Photorhabdus luminescens SL0708 (Enterobacteriales: Enterobacteriaceae). Afr J Microbiol Res 8(33):3123–3130 Salazar-Gutiérrez JD, Castelblanco A, Rodríguez-Bocanegra MX, Teran W, Sáenz-Aponte A (2017) Photorhabdus luminescens subsp. akhurstii SL0708 pathogenicity in Spodoptera frugiperda (Lepidoptera: Noctuidae) and Galleria mellonella (Lepidoptera: Pyralidae). J Asia Pac Entomol 20(4):1112–1121. 
https://doi.org/10.1016/j.aspen.2017.08.001 San-Blas E, Campos-Herrera R, Dolinski C, Monteiro C, Andaló V, Leite LG, Rodríguez MG (2019) Entomopathogenic nematology in Latin America: a brief history, current research and future prospects. J Invertebr Pathol 165:22–45. https://doi.org/10.1016/J.JIP.2019.03.010 Schmitz RPH, Kretkowski C, Eisenträger A, Wolfgang D (1999) Ecotoxicological testing with new kinetic Photorhabdus luminescens growth and luminescence inhibition assays in microtitration scale. Chemosphere 38(1):67–78. https://doi.org/10.1016/S0045-6535(98)00174-X Sezonov G, Joseleau-Petit D, Richard D (2007) Escherichia coli physiology in Luria-Bertani broth. J Bacteriol 189:8746–8749. https://doi.org/10.1128/JB.01368-07 Singh S, Moreau E, Inman F, Holmes DL (2012) Characterization of Photorhabdus luminescens growth for the rearing of the beneficial nematode Heterorhabditis bacteriophora. Indian J Microbiol 52(3):325–331. https://doi.org/10.1007/s12088-011-0238-7 Stock S, Kusakabe A, Orozco RA (2017) Secondary metabolites produced by Heterorhabditis symbionts and their application in agriculture: what we know and what to do next. J Nematol 49(4):373–383. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5770283/pdf/373.pdf Turlin E, Pascal G, Rousselle JC, Lenormand P, Ngo S, Danchin A, Derzelle S (2006) Proteome analysis of the phenotypic variation process in Photorhabdus luminescens. Proteomics 6(9):2705–2725. https://doi.org/10.1002/pmic.200500646 Yoo SK, Brown I, Cohen N, Gaugler R (2001) Medium concentration influencing growth of the entomopathogenic nematode Heterorhabditis bacteriophora and its symbiotic bacterium Photorhabdus luminescens. J Microbiol Biotechnol 11(4):644–648
The authors thank Pontificia Universidad Javeriana for financing this study as part of the project "In vitro production and cryopreservation of the entomopathogenic nematode Heterorhabditis sp. SL0708 (Rhabditida: Heterorhabditidae)" (grant No. ID 6649) and grant No. 7190 for English editing and translation services. This study is part of the project "Producción in vitro y criopreservación del nematodo entomopatógeno Heterorhabditis sp. SL0708 (Rhabditida: Heterorhabditidae)".
Departamento de Biología, Laboratory for Biological Control, Biología de Plantas y Sistemas Productivos, Facultad de Ciencias, Pontificia Universidad Javeriana, Bogotá, Colombia: María Teresa Orozco-Hidalgo & A. Sáenz-Aponte
Departamento de Microbiología, Laboratory of Applied Biotechnology, Grupo de Biotecnología Ambiental e Industrial, Facultad de Ciencias, Pontificia Universidad Javeriana, Bogotá, 110561, Colombia: Balkys Quevedo-Hidalgo
MTOH conducted the research experiments. BQH and ASA supervised the project and wrote the paper. ASA submitted the paper. All the authors approved the submission of this original article, which has not been submitted and is not under consideration for publication elsewhere. All authors understood and agreed that the published article will not be published elsewhere in the same form, in any language, including electronically, without the consent of the copyright holder. All authors read and approved the final manuscript.
Correspondence to A. Sáenz-Aponte.
Orozco-Hidalgo, M.T., Quevedo-Hidalgo, B. & Sáenz-Aponte, A. Growth kinetics and pathogenicity of Photorhabdus luminescens subsp. akhurstii SL0708. Egypt J Biol Pest Control 29, 71 (2019).
https://doi.org/10.1186/s41938-019-0172-2
Keywords: Bacterial symbiont; Liquid culture media; Heterorhabditis indica
Hippocampal spatial representations exhibit a hyperbolic geometry that expands with experience
Huanqiu Zhang, P. Dylan Rich, Albert K. Lee & Tatyana O. Sharpee
Nature Neuroscience volume 26, pages 131–139 (2023)
Daily experience suggests that we perceive distances near us linearly. However, the actual geometry of spatial representation in the brain is unknown. Here we report that neurons in the CA1 region of rat hippocampus that mediate spatial perception represent space according to a non-linear hyperbolic geometry. This geometry uses an exponential scale and yields greater positional information than a linear scale. We found that the size of the representation matches the optimal predictions for the number of CA1 neurons. The representations also dynamically expanded proportional to the logarithm of time that the animal spent exploring the environment, in correspondence with the maximal mutual information that can be received. The dynamic changes tracked even small variations due to changes in the running speed of the animal. These results demonstrate how neural circuits achieve efficient representations using dynamic hyperbolic geometry.
Many aspects of our daily lives can be described using hierarchical systems. As we decide how to spend an afternoon, we may choose to read a book in a coffee shop or go to a shopping mall, which gives rise to a set of new questions: what coffee, which store and so forth, instantiating a decision tree1,2. Spatial navigation provides an example of decision-making in which hierarchical planning is useful3. Hierarchical organization in networks can provide several advantages, including achieving maximally informative representation of input signals4 and efficient routing of signals in cases where network links are subject to change5. The latter property is especially useful for neural networks where connections between neurons change over time. However, to realize these advantages, the networks should be organized hierarchically in such a way as to follow a hidden hyperbolic geometry5. Unlike Euclidean geometry, hyperbolic geometry is negatively curved.
This results in an exponential expansion of volume with the distance from the center, perceptual compression of large distances and distortions in the shortest distance paths between points, which now curve toward the representation center (Fig. 1a)6. Thus, hyperbolic organization would go against the everyday intuition that the brain represents distances around us linearly. Fig. 1: Construction of hierarchical organization of place cell responses that reflects underlying hyperbolic geometry. a, A Poincaré disk model of 2D hyperbolic geometry is shown for visualization of its similarity to a tree structure. Each curve represents the geodesic between the two connected points, and all triangles have the same size. b, Illustration of the construction of hierarchical representation from neuronal response properties. The tree structure does not have to be perfect to allow for mapping onto a hyperbolic geometry14. Some loops can be present (dashed lines) due to partial overlap between disks of neurons from different orders in the hierarchy. c, Place field size versus location of 264 place fields from 63 putative pyramidal cells from dorsal CA1 of a rat running on a 48-m-long linear track (Fig. 3b and Supplementary Fig. 1)20. d, Histogram of place field sizes shown in c. Gray line shows the maximum likelihood exponential fit. P value of χ2 GOF test is 0.851 (χ2 = 5.56 and d.f. = 10). Inset shows the same plot with log-scale on the y axis. The straight line shows the least-squares linear regression with slope forced to be the exponent of the exponential fit. We investigated whether hyperbolic geometry underlies neural networks by analyzing responses of sets of neurons from the dorsal CA1 region of the hippocampus. This region is considered essential for spatial representation7,8,9 and trajectory planning10. CA1 neurons respond at specific spatial locations, termed place fields11. We will first illustrate the main idea by describing how neurons can be organized into a hierarchical tree-like network using a simplified picture where each neuron has only one place field in an environment. We will then follow up with analyses that take into account the presence of multiple place fields per neuron12. Geometry of neural representation in CA1 The idea that hyperbolic geometry potentially underlies the neural representation of space follows from the construction illustrated in Fig. 1b. Each point in the two-dimensional (2D) plane represents an abstract neural response property, and a disk represents the collection of all the properties for one neuron in CA1. The more two disks overlap, the more similar the response properties of the corresponding two neurons are and, thus, the higher their response correlation. We now assign neurons that have larger disks in the plane to higher positions within the hierarchy (quantified by the z coordinate in the three-dimensional (3D) space) compared to neurons with smaller disks. The x,y coordinates are taken directly as the center positions of their disks. A link from a higher-to-lower-tier neuron is made if the larger disk contains (at least partially) the smaller disk. The resulting construction generates an approximate tree-like structure. The tree-like structure can, in turn, be viewed as a discrete mesh over the underlying hyperbolic geometry13,14. The leaves of the tree correspond to peripheral points, whereas roots of the tree correspond to positions closer to the origin of the hyperbolic geometry (Fig. 1a). 
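To make the construction of Fig. 1b concrete, the toy sketch below builds an approximate tree from a set of disks: larger disks sit higher in the hierarchy, and a parent link is drawn when a larger disk at least partially contains a smaller one. All disk parameters are made up for illustration, and the loose containment criterion is an assumption of this sketch; it is not the analysis pipeline used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "response property" disks: centers in a 2D plane and radii.
n = 30
centers = rng.uniform(0, 10, size=(n, 2))
radii = rng.exponential(scale=1.0, size=n) + 0.2

# Height in the hierarchy grows with disk size (cf. Fig. 1b).
height = np.log(radii)

# Link a larger disk i to a smaller disk j when j is at least partially
# contained in i; here "partial containment" is taken loosely as the
# smaller disk's center lying inside the larger disk.
edges = [
    (i, j)
    for i in range(n)
    for j in range(n)
    if radii[i] > radii[j]
    and np.linalg.norm(centers[i] - centers[j]) < radii[i]
]

print(f"{len(edges)} parent -> child links; tallest node height = {height.max():.2f}")
```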
We note that, in general, the plane can be of any dimension and is drawn here as 2D mainly for illustrative purposes. In the simplified case of each neuron having only one place field, the plane can be interpreted as physical space and the disks as being the place fields of the neurons. In this setting, the distribution of one-dimensional (1D) place field sizes should follow approximately an exponential distribution: $$p\left( s \right) = \frac{{\zeta \sinh \left( {\zeta \left( {s_{\max } - s} \right)} \right)}}{{\cosh \left( {\zeta s_{\max }} \right) - 1}} \approx \zeta e^{ - \zeta s}$$ where p(s) is the probability density of place field sizes, and smax is the maximal place field size. The exponent in this distribution is the curvature ζ of the representation (or, equivalently, the size of the hyperbolic geometry with unit curvature). In Fig. 1c,d, we indeed find that an exponential distribution fits the distribution of place field sizes well (P = 0.851, χ2 goodness-of-fit (GOF) test). A caveat with this analysis, however, is that, strictly speaking, it applies to the case where each neuron has one place field, whereas multiple place fields are observed per neuron in moderately large environments12. To quantitatively test whether hyperbolic geometry underlies neural representation in CA1, we employed a statistical tool from topology that does not directly rely on the computation of place fields. Instead, the method defines distances between neurons based on the pairwise correlations between their activities. This pairwise correlation is intuitively reflective of the degree of place field overlap between two neurons15 and is capable of accounting for neurons with multiple fields. These distances are then analyzed to determine if they produce the same topological signatures as points sampled from different geometries16,17. The topological signatures used by the methods are the so-called Betti curves16. These are obtained by varying the threshold for what constitutes a significant connection between neurons and counting cycles in the thereby generated graph (Fig. 2a–c). One of the advantages of this method is that, because it considers all possible values of the threshold, it is invariant under any non-linear monotonic transformations of the correlation values. Similar topological methods based on persistent homology have been used to study manifold structures of population responses in the head direction and grid cell systems18,19. Fig. 2: Neuronal activity in CA1 is topologically consistent with a 3D hyperbolic but not a Euclidean geometry. a–c, Illustration of the topological algorithm on an example correlation matrix. a, Example pairwise correlation matrix for six neurons. b, Correlation matrices after various thresholding. The threshold gradually decreases from top-left to bottom-right such that correlation between more pairs of neurons becomes significant. c, An edge connecting two nodes is formed when the corresponding entry in b is non-zero. Edge density measures the fraction of edges connected out of the total number possible. The numbers in each set of parentheses represent the number of 1D, 2D and 3D cycles (or holes) in the corresponding graph, respectively (note that the dimensionality of the cycles is not directly related to the dimensionality of the underlying geometry). This example demonstrates how a 1D cycle appears due to new edges being formed and then disappears because of formation of cliques. 
The number of such cycles across the entire range of edge densities (or thresholds) gives the Betti curves. d,g, Experimental Betti curves (dashed) for 1D (red), 2D (green) and 3D (blue) cycles are statistically indistinguishable from those generated by sampling (n = 300 replicates) from 3D hyperbolic geometry (solid line and shading indicate mean ± s.d.) for linear track exploration with radius = 15.5 (d) and square box exploration with radius = 11.5 (g). Insets show comparison of the curve integrals and the L1 distances from model Betti curves against those of the experimental Betti curves. Boxes show the model interquartile range. Upper and lower whiskers encompass 95% of the model range. Black lines indicate experimental values. e,h, Betti curves generated from 3D Euclidean geometry are statistically different from the experimental Betti curves in both linear (e) and square (h) environments. f,i, After shuffling spike trains, Betti curves generated from 3D hyperbolic geometry are no longer consistent with the experimental Betti curves in both linear (f) and square (i) environments. Session ID for square box: ec014.215. We first applied this method to recordings of putative active pyramidal (average firing rate between 0.1 Hz and 7 Hz) cells (n = 113, average firing rate ± s.d. = 0.42 ± 0.35 Hz) from dorsal CA1 of three rats (one session per animal) running on a novel 48-m-long linear track20. We found that experimental Betti curves were not consistent with samples from a Euclidean geometry but were consistent with a 3D hyperbolic geometry (Fig. 2d,e). Statistical analyses showed that 3D hyperbolic geometry provided the overall best fit compared to other dimensionalities of the hyperbolic geometry (Extended Data Fig. 1; two-way ANOVA across sessions: F = 7,924.61, P < 10−8; Tukey's post hoc test: P < 10−8 for all pairwise comparisons between 3D fits and fits from other dimensions). Similar low dimensionality of population activity in dorsal hippocampus has been reported with other manifold inference methods21,22. We note that three dimensions is also the expected dimensionality for a hierarchical set of 2D place fields according to the construction illustrated in Fig. 1b. The pairwise correlations were computed separately for each experimental session. Different sessions yielded latent 3D hyperbolic geometries of different radii (15.5, 15 and 13; see Methods and Extended Data Fig. 2a on how hyperbolic radius is estimated for each session). The same conclusions held for datasets of dorsal CA1 neurons recorded when rats explored 2.5-m linear tracks and 1.8 × 1.8-m square boxes for longer durations. We tested two (total number of putative active pyramidal neurons is 77, average firing rate ± s.d. = 0.92 ± 0.75 Hz) and seven (n = 274, average firing rate ± s.d. = 0.68 ± 0.61 Hz) publicly available datasets, respectively, for the above two environments, selecting recording sessions that had more than 50 neurons (not necessarily active) recorded simultaneously23,24. In each case, a 3D hyperbolic geometry produced Betti curves that matched those computed from data. In Fig. 2, we show one example analysis for each environment type (linear or square), and the rest of the datasets are shown in Extended Data Figs. 2, 3 and 4. The fitting statistics and the hyperbolic radii estimated for all sessions are summarized in Supplementary Table 1. 
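The thresholding-and-cycle-counting procedure described above can be prototyped with off-the-shelf persistent-homology tools. The sketch below assumes the `ripser` Python package and a random stand-in correlation matrix; it is not the implementation used in the study, and details such as the correlation-to-distance map and the maximum cycle dimension are assumptions made here.

```python
import numpy as np
from ripser import ripser  # assumes the `ripser` package is installed

rng = np.random.default_rng(1)

# Stand-in for a pairwise correlation matrix of n neurons.
n = 40
corr = np.corrcoef(rng.normal(size=(n, 200)))

# The analysis is invariant to monotone transforms, so any order-reversing
# map from correlation to "distance" can be used here.
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)

# Persistent homology of the thresholded graphs; maxdim=3 is costly but
# gives 1D, 2D and 3D cycles as in the Betti curves above.
dgms = ripser(dist, maxdim=3, distance_matrix=True)["dgms"]

def betti_curve(dgm, grid):
    """Number of cycles alive at each threshold value."""
    if len(dgm) == 0:
        return np.zeros_like(grid)
    return np.array([np.sum((dgm[:, 0] <= t) & (dgm[:, 1] > t)) for t in grid])

grid = np.linspace(0.0, dist.max(), 100)  # thresholds; could be mapped to edge density
betti_1d = betti_curve(dgms[1], grid)
print("peak number of 1D cycles:", int(betti_1d.max()))
```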
As a control, we verified that shuffled spike trains (spike times of individual neurons were shifted by the same amount to preserve firing statistics) produced correlation matrices that were not consistent with a 3D hyperbolic geometry (Fig. 2f,i). Also, restricting our analyses to subsets of neurons with high spatial information yields the same results that 3D hyperbolic geometry, but not Euclidean geometry, underlies the neural representation (Extended Data Fig. 5). We note that the hierarchical organization we study works in addition to those observed across larger extents of the hippocampus and the entorhinal cortex25,26,27. This is because the variation of place field sizes determined from our data was not significantly correlated with the anatomical positions of the neurons recorded within the hippocampus (Extended Data Fig. 6), which, in our experiments, are largely confined to dorsal CA1, comprising only a fraction of the span of dorsal–ventral axis that was recorded in the above references25,27. These analyses demonstrate that neural activity in CA1 consistently conforms to a 3D hyperbolic geometry, with variations in the size of the hyperbolic geometry across environments, with values that range within 10.5−15.5 (in units of inverse curvature). Hyperbolic representation expands with experience and information acquired The environments in which these hyperbolic representations were observed differed in shape (linear versus square), size and also in the amount of time that the animals spent exploring them. First, we verified that the size of the hyperbolic representation was similar for data collected from linear and square environments, as long as the animals initially spent similar amounts of time in them (Extended Data Fig. 7a). Next, we investigated the effect of exploration time on the size of the resulting hyperbolic representation of the same environment that was initially novel to the animal. We found that the size of the hyperbolic representation was larger when the animal was more familiar with the environment, using the above topological method. The size increased with the logarithm of time that the animal had to explore it (Fig. 3a and Extended Data Fig. 7b,c). This is interesting because the logarithm of time is approximately proportional to the maximal amount of information the animal can acquire from the novel environment (considering an animal receiving a distinct combination of stimuli at each time step)28, with the exact relationship given by: $$I = \log \left( {1 + \frac{T}{{t_0}}} \right) + \frac{T}{{t_0}}\log \left( {1 + \frac{{t_0}}{T}} \right)$$ where t0 is a constant that represents the product of sampling interval and the ratio between the effective signal Sn and noise Nn variances. In Fig. 3a, we show that this relationship accounts well for the expansion of the CA1 hyperbolic representations as the animal is exploring the environment. For these analyses, we sorted the datasets according to the number of times the animal had a chance to explore the square box; we also analyzed separately data from individual exposures (indicated by different colors in Fig. 3a) to the box in 20-min sections for their hyperbolic radii. We note that, although CA1 neural representations undergo substantial dynamic changes over these hour-long time scales of exposures across days29,30, the neural responses continued to be described by a 3D hyperbolic geometry with a systematic increase in its size. 
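The logarithmic growth just described can be summarized by Eq. (2). Below is a small sketch of that formula and of fitting its single time constant t0 to radius-versus-exploration-time estimates, in the spirit of the dashed fit in Fig. 3a. The (time, radius) pairs are placeholders, and the affine rescaling of the information curve onto radius units is an assumption of this sketch rather than a detail taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def acquired_info(T, t0, scale=1.0, offset=0.0):
    """Eq. (2): I = log(1 + T/t0) + (T/t0) * log(1 + t0/T),
    mapped onto radius units by an affine rescaling (an assumption here)."""
    x = T / t0
    return scale * (np.log1p(x) + x * np.log1p(1.0 / x)) + offset

# Placeholder (exploration time in minutes, estimated hyperbolic radius) pairs.
T = np.array([20.0, 40.0, 60.0, 120.0, 240.0, 480.0])
R = np.array([11.0, 12.0, 12.4, 13.1, 14.0, 14.8])

params, _ = curve_fit(acquired_info, T, R, p0=[10.0, 1.0, 10.0], bounds=(0.0, np.inf))
t0, scale, offset = params
print(f"fitted t0 = {t0:.1f} min, scale = {scale:.2f}, offset = {offset:.2f}")
```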
We verified that this was not due to changes in the animal's locomotion and behavioral states; no significant correlation between representation size and parameters, such as the average running speed and active exploration time, was observed (Extended Data Fig. 8a). Excluding time periods with speed <5 cm s−1 also do not change the results (Extended Data Fig. 8b). We also verified that time elapsed without exploration (that is, outside the environment) cannot account for the expansion of representation observed (Extended Data Fig. 8c). Fig. 3: Hyperbolic geometry of neural representation expands with the logarithm of exploration time. a, Radius of the hyperbolic representation of square box is estimated using topological analysis. Dots represent median estimates, and lines represent 95% confidence interval. Dashed black line is the least-squares regression of data with log-scale on the x axis (r = 0.85, P = 3 × 10−6). Dashed gray line shows the least-squares fit of Eq. (2). b, Scale drawing of the 48-m linear track. The start of each track section is marked by numbers. c, Schematic of the experimental paradigm. During the first epoch, only the first section of the track is available to an animal. The linear track is progressively extended after each epoch. Total track length in the four epochs was 3, 10, 22 and 48 m. Black lines indicate periods when the animal is exposed to the corresponding section for the first time. Red lines indicate non-first exposures. d, Hyperbolic radius grows with temporal familiarity with the segments. Dashed black lines show the least-squares regression of data with log-scale on the x axis (r = 0.50, P = 0.0003), and dashed gray lines show the least-squares fit of Eq. (2). e, Field sizes increase with the animal's first-pass speed through the field (r = 0.35, P = 9 × 10−21). Different symbols indicate fields of different animals. f, Animals run slower in segments of higher entropy (r = −0.56, P = 4 × 10−5). g, Radius of representation increases with entropy of the track (r = 0.42, P = 0.004). h, During subsequent exposures (red lines in c), hyperbolic radius increases with familiarity (r = 0.62, P = 0.0057). Symbols represent different animals, whose least familiar points are used as normalization. Each data point is an animal in one epoch-section period. The same relationship between the size of the hyperbolic representation and exploration time was observed on shorter time scales, including the first seconds during initial exploration of a novel environment. For these analyses, we used the dataset from the 48-m-long linear track (Fig. 3b). The track was initially novel to the animals. In each epoch, an animal traversed the current total length of the available track 3–5 times. Between epochs, it was confined to the original start location while the track was extended (Fig. 3c). We first tested the relationship during initial formation of place fields by considering only periods when the animal explored the additional novel sections of the track introduced in each epoch (Fig. 3c, black lines). We estimated hyperbolic radii for each 1-m segment of the track by examining the distribution of place field sizes within it. In this situation, we are close to the one-field-per-neuron case for each 1-m segment (93.4% ± 6.6% place fields are from different neurons) and, thus, estimated the hyperbolic radii by examining the exponent of place field size distribution in Eq. (1) (Fig. 1d, arrow). 
We did not use the topological analysis here because, with short recording durations, the method produces biased estimates of the radius, with larger bias for shorter recordings (Extended Data Fig. 9). We found that the hyperbolic radius increased with temporal familiarity with the environment (Fig. 3d). The temporal familiarity was calculated as the inverse of the mean speed (Methods). The same results were obtained when speed was computed for the initial traversal or across multiple traversals or when using a different segment length (Extended Data Fig. 10). Also, a significant positive correlation between field size and its first-pass speed was observed (Fig. 3e). Therefore, higher speed and, thus, less familiarity with the nearby environment produced larger place fields. This, in turn, led to a smaller hyperbolic radius or the exponent in Eq. (1). We also tested the relationship among the radius of the representation, the speed of the animal and the information content of the environment. The shape of the environment is a key aspect of spatial information that can be perceived by the animal. For a linear track, the shape is determined by its angles. In our experiment, some 1-m segments had more turns, whereas others were mostly straight (Fig. 3b). One can quantify the additional information provided by the changes in trajectory of the track by computing the probability distribution of the change in angle δθ between segments and using the standard formula for entropy31. With this definition, straight segments have zero additional entropy, with larger values for more curved and varied segments. We found that animal speed was inversely related to the additional information (Fig. 3f), with lower speed in more informative segments. The more informative segments also generated representations with larger hyperbolic radius (Fig. 3g). However, the differences in the hyperbolic radius can be fully explained by differences in the time that the animal spent in a given segment. This is because, once controlled for speed, representation size was no longer significantly correlated with additional information (linear partial correlation coefficient = 0.18, P = 0.236). We then focused on subsequent exposures to sections of the track (Fig. 3c, red lines). Similarly to the above observations, we found that the radius of the representation continued to increase with additional familiarity, using the same measure of hyperbolic radius (Fig. 3h). These analyses indicate that the neural representation maintains hyperbolic organization after experience-induced re-organization, with systematic increases in the size of the hyperbolic geometry.
Observed hyperbolic representation maximizes spatial information given the number of CA1 neurons
To quantify how well the CA1 neurons could support spatial localization, we computed a Bayesian maximum likelihood decoder of the animal's location based on neural responses from the square box datasets. To avoid the potential confound associated with differences in spike rates across sessions, we subsampled, for each session, subsets of neurons resulting in different overall spike rates and estimated how much the median decoding error decreases with an increasing number of spikes used for decoding. The decoding error decreased exponentially with the number of spikes (Fig. 4a). The exponent c in this case indicates how efficient an extra spike is in reducing decoding error.
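A minimal sketch of a Bayesian (maximum likelihood) position decoder of the kind described above is given below, assuming independent Poisson spiking and Gaussian tuning curves. The environment, tuning parameters and time window are stand-ins; they are not the exact settings or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D environment discretized into position bins (meters).
pos = np.linspace(0.0, 1.8, 90)
n_cells = 50
centers = rng.uniform(0.0, 1.8, n_cells)
widths = rng.exponential(0.15, n_cells) + 0.05
peaks = rng.uniform(2.0, 8.0, n_cells)          # peak rates (Hz)

# Tuning curves f_i(x): Gaussian place fields (a modeling assumption).
tuning = peaks[:, None] * np.exp(
    -0.5 * ((pos[None, :] - centers[:, None]) / widths[:, None]) ** 2
)

def decode(counts, dt=0.5):
    """Maximum likelihood position under independent Poisson spiking:
    log P(n | x) = sum_i [ n_i * log(dt * f_i(x)) - dt * f_i(x) ] + const."""
    log_like = counts @ np.log(dt * tuning + 1e-12) - dt * tuning.sum(axis=0)
    return pos[np.argmax(log_like)]

# Simulate a 500-ms window of spikes at a true position and decode it back.
true_x = 1.0
rates = peaks * np.exp(-0.5 * ((true_x - centers) / widths) ** 2)
counts = rng.poisson(0.5 * rates)
print(f"true position: {true_x:.2f} m, decoded: {decode(counts):.2f} m")
```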
Similarly to the radius of hyperbolic representation, we found that c increased consistently over time (Fig. 4b). Figure 4c shows that exponent c also correlated significantly with hyperbolic radius. To establish a causal relationship between representation radius and accuracy of spatial localization, we computed the Fisher information, a measure of read-out accuracy32,33, for populations of model neurons with place field sizes distributed according to different distributions (Fig. 4d). In the one-field-per-neuron case, we found that populations of neurons whose place field sizes were exponentially distributed, as expected for a hidden hyperbolic geometry, provided more accurate spatial decoding compared to the cases of uniformly or log-normally distributed place field sizes (Fig. 4e and Supplementary Fig. 2). Furthermore, for a given network size, there was an optimal size of the hyperbolic representation that maximizes Fisher information (Fig. 4f). The optimal value is determined as a tradeoff between two factors. On the one hand, larger representations include exponentially more small place fields and have higher capacity to represent space. However, they also require more neurons for their sampling and can become undersampled if the number of neurons is limited. The interplay between these two factors determines the optimal size of the representation for a given number of neurons. The simulations indicate that the optimal representation size increases with the logarithm of the number of neurons (Fig. 4g). This theoretical prediction can be tested against data on the number of active neurons in the CA1 region. The CA1 region of rat hippocampus contains roughly 320,000–490,000 pyramidal neurons34,35 with about 30–40% of neurons being active in any given environment29,36,37 (Extended Data Fig. 7e). Extrapolating the theoretical curve in Fig. 4g to this number of neurons, we found a close match to the representation size extracted from the analysis of correlation in neural responses in square box (Fig. 3a). In any moderately large environment, however, individual CA1 neurons can have multiple place fields12,20,38,39. Therefore, we also simulated the case where each cell can have multiple place fields according to a gamma-Poisson model20 with statistics matched to the experimental data, and we found that the same conclusions hold (Fig. 5). This match indicates that the hierarchical arrangement of neural responses is well-matched to the overall number of neurons and their propensities for forming fields. Fig. 4: Increase in size of the hyperbolic representation results in higher accuracy in spatial localization. a, A Bayesian decoder with integration time window of δt = 500 ms is used for different number of subsampled neurons for each session (Methods). An example relationship is shown here, which is fitted with an exponential function of the form indicated to determine the value of c for each session. b, Estimates of c increase over time. Error bar represents the standard error of the estimate. Dashed gray line shows the least-squares fit with log-scale on the x axis (r = 0.86, P = 10−6). c, Estimates of c correlate significantly with hyperbolic radius (r = 0.70, P = 0.0007). Black line shows the least-squares fit. Jitter has been added to x values to improve visualization. 
d, Schematic of the modeling framework for computing Fisher information for how accurately an animal's position can be decoded from neural responses: place fields are modeled as 2D Gaussian functions47, and neural response has Poisson variability48. e, Networks with exponentially distributed place field size provide more information about the animal's position than networks where place field size is uniformly distributed (with the same mean size as the exponential distribution) or log-normally distributed (with the same mean and variance as the exponential distribution). P < 0.001 for equal means between information from exponential and uniform distributions, or exponential and log-normal distributions, at all number of neurons, using an unpaired two-sample t-test. f, Fisher information per neuron for networks of different sizes and of hyperbolic representations of different radii. g, Optimal hyperbolic radius depends logarithmically on the number of neurons in a network (with dashed portion showing extrapolation of the relationship). The extrapolated value for the full CA1 circuit agrees with median values (upper and lower endpoints of the cross) of hyperbolic radius determined in the last exposure of the square box in Fig. 3a. Left and right endpoints indicate range of experimental values 320,000 × 30% and 490,000 × 40%, respectively, for the number of active CA1 neurons. Fig. 5: Theoretically optimal representation radius is consistent with the experimental value when each cell can have potentially multiple place fields. In this simulation, the statistics of number of place fields per cell are matched to the experimental values of sessions shown in Fig. 3 (mean = 0.98 and s.d. = 1.10, which determines the gamma distribution that, in general, describes number of place fields per cell20), and the size of the environment is matched to the experimental value of 1.8 × 1.8 m. a, Fisher information per neuron for networks of different sizes and of hyperbolic representations of different radii. b, The extrapolated value for the full CA1 circuit agrees with median values of radius determined in the last exposure of Fig. 3. Same notation as in Fig. 4g, except left and right endpoints of the cross indicate range of experimental values 320,000 and 490,000, respectively, for the number of all CA1 neurons, as the gamma distribution is used to capture number of place fields for all neurons as opposed to only active neurons. Exponential distribution of place field sizes So far, we focused on using an exponential function to analyze the distribution of neural place fields, because this distribution connects with hyperbolic representations and makes it possible to infer its curvature. However, other skewed distributions, in particular the log-normal distributions, have similar properties at large values and have been shown to match neural data40. Therefore, it is worthwhile to discuss factors that distinguish and relate both model distributions to each other. First, it is important to note that these two distributions differ most for small place field sizes, and these are the place field sizes that are most affected by biases in the current experimental recording and place field determination methods. Very small place fields are often discarded by smoothing procedures used to estimate neural place fields or even by default (note their absence in Fig. 1c)40. 
Figure 6 presents simulations showing that this sampling bias alone can transform an exponential distribution of place field sizes into an almost log-normal one. Therefore, previous demonstrations of log-normal distributions of place field sizes do not necessarily argue against the presence of exponential distributions. Second, as we have observed, the curvature of hyperbolic representations (or, equivalently, their radius relative to unit curvature) varies across environments depending on the animal exploration time (Fig. 3). Averaging of exponential distributions of place field sizes with different exponents will further bring the resultant distribution closer to the log-normal family of curves (Fig. 6, right panel). This effect will be even stronger when data collected from different animals and environments are pooled together as more different distributions are summed together. Fig. 6: Illustration of observation of a log-normal distribution of place field sizes. Random samples from each of eight different sinh distributions (left) are drawn. Suppressing the probability of observing small place fields according to a sigmoidal function (left inset) leads to a distribution that approximates log-normal (middle). The middle panel shows the resulting observed histogram for one sinh distribution. Inset shows log-scale on the x axis. When sample size is not too large (n = 800 in this simulation), the histogram can be fitted by a log-normal distribution with P = 0.07 (χ2 GOF test, χ2 = 11.66 and d.f. = 6). Black curve represents the theoretical density function, and gray line represents the fitted log-normal distribution. The right panel shows the results after pooling from eight distributions where an even better fit with log-normal can be obtained (P = 0.43, χ2 GOF test, χ2 = 4.87 and d.f. = 5) at a moderate sample size (n = 800 in this simulation). pdf, probability density function. Several previous studies described how place fields re-organize with experience29,30,41,42. Here we describe that these changes not only include the emergence of fine-grained place fields but also result in coordinated changes across the network. It turns out that the logarithmic growth of the hyperbolic radius R ~ log(T) that we observe here can be explained by a simple mechanistic model where place fields of a small size appear as soon as the animal spends a certain minimal required time t0 within the field (Fig. 7). Because small place fields map to the edge of the hyperbolic geometry, their number N increases exponentially with the radius R. Thus, the hyperbolic representation will approximately reach radius R once the total exploration time T reaches T = Nt0 ~ exp(R)t0, or: $$\log \left( {T/t_0} \right) = R$$ as observed experimentally (Fig. 3). The logarithmic growth of R with time in turn matches the entropy that can be acquired from the environment (Eq. (2)). These arguments demonstrate that it is possible to achieve efficient representations from the information theory point of view using simple mechanistic rules for the addition of small place fields (Eq. (3)). The constant t0 now acquires multiple interpretations. In the mechanistic model of place field addition and expansion of hyperbolic representation (Eq. (3)), t0 represents the minimal time required to form a place field. In the information acquisition formula, Eq. (2), t0 represents the temporal sampling interval. 
Being the only parameter in the information acquisition equation, t0 also determines the transition at which the initially novel environment becomes familiar: during the initial exploration, the information and the hyperbolic radius both increase approximately linearly with time. For times much longer than t0, the linear increase gives way to a logarithmic one. The increase in the relative proportion of small place fields over time (Fig. 7 and Extended Data Fig. 7d) may also be achieved as a result of the overall decrease of firing rate in CA1 with familiarity41,43. Fig. 7: Schematic illustration of how increased hyperbolic radius affects neural representation of space. As radius grows over time, the exponential distribution of place field sizes shifts toward smaller place fields (Extended Data Fig. 7d) or larger depth in a discrete tree structure (Eq. 1), to improve spatial localization. The hyperbolic representation was recently demonstrated for the olfactory system17,44. However, those studies analyzed only natural olfactory stimuli17,44 or human perceptual responses17. This left open the question of whether neural responses also have a hyperbolic representation. We address this question here, not in the same sensory modality but for the more general hippocampal representation that interacts with many sensory modalities. Hyperbolic representation describes a specific instantiation of hierarchical organization that offers many functional advantages45. One particularly important advantage for neural circuits is that it allows effective communication within the network using only local knowledge of the network and in situations where network links are changing over time5. We show here that hyperbolic representations also support more accurate spatial localization (Figs. 4 and 5). Related to this, recent literature indicates the presence of multi-scale place fields within dorsal CA1, which could improve spatial decoding especially in large environments38,39. The presence of hyperbolic geometry could be further supported by hierarchical representation across broader scales in the hippocampus and entorhinal cortex25,26,27. In addition to predictions for the spatial distribution of place field sizes, the hyperbolic representation also makes predictions for the temporal aspects of the neural code. The reason for this is that, within hyperbolic representations, addition is not commutative, meaning that the order in which vectors are added matters. For neural systems, this implies that the order in which spikes are received from different neurons should matter. A detailed investigation of these predictions is a promising direction for future research. To summarize, we have shown that organizing neural circuits to follow latent hyperbolic geometry makes it possible to implement coherent large-scale reorganization. For example, this could be achieved by adjusting neuronal thresholds to control place field sizes (Fig. 7 and Extended Data Fig. 7d). These changes allow a continuous increase in representational capacity (Figs. 4 and 5). As such, this type of organization might provide a general principle underlying large structural changes of neural representations to benefit communication between brain areas, as has been observed, for example, in the parietal cortex46. Data and processing The dataset in which rats were exposed to a 48-m linear track was recorded from five animals along the entire track (see original paper20 for detailed procedure of surgery, training, recording, etc.).
All procedures were performed according to the Janelia Research Campus Institutional Animal Care and Use Committee guidelines on animal welfare (protocol 11–73). In brief, the subjects were five adult male Long-Evans rats, 400–500 g at the time of surgery. Two rats had fewer than 30 active neurons (overall average firing rate within the range 0.1–7 Hz) recorded, and they were excluded from further analyses. The track and room were both entirely novel to the animals. Animals made 3–5 traversals of the full extent of the track during each epoch (Fig. 3b,c) while their neural activity was recorded using a 64-channel system. Spikes were detected by a 60–70-µV negative threshold on a 600–6,000-Hz filtered signal, and waveforms (32 samples at 32 kHz) were captured around each threshold crossing. Local field potential (LFP) (0.1–9,000 Hz) was recorded continuously at 32 kHz. The position of the animals was reconstructed from video taken by three wide-angle overhead cameras synchronized to the time stamp clock from the acquisition system using a time stamp video titler. Exponential fitting of the place field size distribution was done with a maximum likelihood estimate. To take into account that small place fields tend to be undersampled, we explicitly excluded the smallest place fields (<25 cm, first column of the histogram in Fig. 1d) and modified the prior accordingly. In this way, the presence or absence of these fields does not affect the estimation of curvature. For computation of the entropy of each segment of the linear track, turn angles were considered in each segment (with angles = 0 for straight segments). All turn angles were discretized into 15 bins, and the entropy for each segment was calculated as \(\mathop {\sum}\nolimits_i {p_i\log _2\frac{1}{{p_i}}}\), where i indexes bins of the turning angles. The first-pass speed for each place field was computed by dividing the field size by the duration of the animal's first pass through the place field with the cell firing at least one spike (this ≥1-spike requirement is assumed for all of the following description of first-pass duration). The average speed in each 40-inch (~1-m) segment during place field formation was calculated as follows: average segment speed = (∑i li)/(∑i first-pass durationi), where i is the index of place fields that were centered within the segment, and li is the size of each place field. We can then compute the temporal familiarity of the rat with the segment during place field formation as the time spent per unit length: temporal familiarity (s) = 1 m/average segment speed, which is used in Fig. 3d as the x coordinate. When the segment length is not 1 m (as in Extended Data Fig. 10a), a normalization for segment length is applied to make the familiarity measures comparable. For locating the neurons recorded in CA1, either immediately after the experiment or within a few days of it, 3–4 tetrodes were electrolytically lesioned (20 µA for 10 seconds) as fiducials, and animals were transcardially perfused with PBS, followed by a 4% paraformaldehyde solution. Brains were cut in 50-µm sections and stained with cresyl violet. Fiducial lesions, electrode tracks and the relative locations of the tetrode guide cannulas in each microdrive, as well as allowance for brain shrinkage, were used to estimate the anteroposterior (AP) and mediolateral (ML) coordinates of each tetrode with respect to a rat brain atlas49. Only tetrodes localized to the CA1 region were used in analysis.
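As a concrete illustration of the segment entropy and temporal-familiarity computations described above, the following minimal sketch (not the authors' code; variable names such as turn_angles, field_sizes and first_pass_durations are hypothetical) shows the two quantities for a single 1-m segment:

```python
import numpy as np

def segment_entropy(turn_angles, n_bins=15):
    """Shannon entropy (bits) of the discretized turn angles within one segment."""
    counts, _ = np.histogram(turn_angles, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                               # 0 * log(0) contributes nothing
    return np.sum(p * np.log2(1.0 / p))

def temporal_familiarity(field_sizes, first_pass_durations):
    """Time spent per unit length (s per m) during place field formation.

    field_sizes          -- sizes l_i (m) of place fields centered in the segment
    first_pass_durations -- first-pass durations (s) through those fields
    """
    avg_speed = np.sum(field_sizes) / np.sum(first_pass_durations)  # m/s
    return 1.0 / avg_speed
```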
The atlas was used to construct a 3D model of the CA1 pyramidal cell layer, allowing an estimate of the tetrode locations with respect to the septotemporal and proximodistal axes of CA1. All the other datasets were obtained from http://crcns.org/data-sets/hc/hc-3, contributed by the Buzsáki laboratory at New York University23,24. Neural activity in these datasets was recorded using either four or eight shank probes at 20 kHz. Each shank has eight recording sites, making 32 or 64 possible recording sites (channels). Spike detection was performed on the output from raw data with an 800–5,000-Hz bandpass filtering. See http://crcns.org/files/data/hc3/crcns-hc3-processing-flowchart.pdf for more details about recording and experiments. For rats exploring a 180 × 180-cm box, all sessions that have more than 50 simultaneously recorded CA1 neurons were included for analysis. We also excluded neurons that are marked as inhibitory or not identified and those that have average firing rates outside the range 0.1–7 Hz during the entire recording session. We also excluded neurons that have average firing rates between 0.1 Hz and 7 Hz but remain silent (fire zero spikes) for more than 30 min for potential death of the cell or movement of electrodes. Details of sessions used are summarized in Supplementary Table 2. Place field determination and spatial information calculation For place field determination on the linear track, animals were allowed to make 3–5 traversals of the full extent of the track in four epochs20. Rate maps were constructed by taking the number of spikes in each 1-cm spatial bin of the track divided by the occupancy in that bin, for each of the two directions of movement. Both were smoothed with a Gaussian kernel with an s.d. of 10 cm. Only periods when the animal's velocity was greater than 5 cm s−1 were included in the spatial firing rate maps. A place field was defined as at least 15 contiguous centimeters of the rate map in which the firing rate exceeded 2 Hz. Because, in linear tracks, place fields may be directional, place fields were detected independently for each directional firing rate map, outbound and inbound, and then fields in different directions were merged if either field showed at least 50% overlap with the other (for more details on experimental procedure, see ref. 20). For place field determination in Fig. 3h, we did not impose the 15-cm length requirement and did not merge place fields with different directions to have numerically more place fields and small place fields for estimating hyperbolic radius. For place field determination in the square box of size 180 × 180 cm, rate maps were constructed by taking the number of spikes in each 2 × 2-cm spatial bin of the box divided by the occupancy in that bin. The rate map is then smoothed with a 2D Gaussian kernel with an s.d. of 10 cm. Again, only periods when the animal's velocity was greater than 5 cm s−1 were included in the spatial firing rate maps. We then detected potential place fields as all contiguous regions in the rate map in which the firing rate exceeded 2 Hz, followed by a manual breakdown of some of these place fields into smaller ones when they have more than one peak in the firing rate map. 
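The rate-map construction and place field detection for the square box described above can be sketched as follows. This is a simplified illustration with assumed array layouts, not the authors' pipeline: positions are assumed to be pre-filtered to speeds above 5 cm s−1, the raw rate map is smoothed directly (occupancy-weighted smoothing is also common), and the manual splitting of multi-peaked fields is not included.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def rate_map_2d(spike_xy, traj_xy, dt, box_cm=180.0, bin_cm=2.0, sigma_cm=10.0):
    """spike_xy: (n_spikes, 2) positions (cm) at spike times;
    traj_xy: (n_samples, 2) tracked positions (cm) sampled every dt seconds."""
    edges = np.arange(0.0, box_cm + bin_cm, bin_cm)
    occ, _, _ = np.histogram2d(traj_xy[:, 0], traj_xy[:, 1], bins=[edges, edges])
    occ = occ * dt                                          # occupancy in seconds
    spk, _, _ = np.histogram2d(spike_xy[:, 0], spike_xy[:, 1], bins=[edges, edges])
    rate = np.where(occ > 0, spk / np.maximum(occ, 1e-12), 0.0)   # Hz
    return gaussian_filter(rate, sigma=sigma_cm / bin_cm)   # 10-cm s.d. smoothing

def detect_place_fields(rate, thresh_hz=2.0):
    """Return one boolean mask per contiguous region exceeding the threshold."""
    labeled, n_fields = label(rate > thresh_hz)
    return [labeled == k for k in range(1, n_fields + 1)]
```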
For spatial information calculation, the following formula was used50: $$\mathrm{Information\ per\ spike} = \sum_{i = 1}^{N} p_i \frac{\lambda_i}{\lambda}\log_2\frac{\lambda_i}{\lambda}$$ $$\mathrm{Information\ per\ second} = \sum_{i = 1}^{N} p_i \lambda_i \log_2\frac{\lambda_i}{\lambda}$$ where i = 1, …, N are pixel indices in the rate map; pi is the probability of occupancy of pixel i; λi is the mean firing rate of pixel i; and λ is the overall mean firing rate of the cell on the whole track or box. Position decoding with a maximum likelihood decoder The spatial firing rate maps are computed with the procedure described above except that we now use a 5 × 5-cm spatial bin instead of 2 × 2 cm. The time window we use is δt = 500 ms. Windows with mean speed less than 5 cm s−1 were not used for decoding. For each time window, we compute the log-likelihood of observing the spike count vector \(\{n_j\}_{j=1}^{N}\) at each pixel i assuming conditional independence among neural responses: $$\log p\left(\{n_j\}_{j=1}^{N} \mid i\right) = \log\left(\prod_{j=1}^{N} p(n_j \mid i)\right) = \log\left(\prod_{j=1}^{N} \mathrm{Poisson}(n_j;\ \lambda_{ij}\,\delta t)\right) = \sum_{j=1}^{N} \log\left(\mathrm{Poisson}(n_j;\ \lambda_{ij}\,\delta t)\right)$$ where λij is the mean firing rate of the jth neuron in pixel i. The decoded location is then taken as the pixel that maximizes \(\log p(\{n_j\}_{j=1}^{N} \mid i)\). To take into account the dependence of the decoding error on the overall firing rate of the population, which differs across sessions, we subsampled between 17 and 25 neurons from all the active pyramidal cells for each time window in a 20-min session and computed the decoding error for each number of subsampled neurons. The dependence of the median decoding error on the total number of spikes is shown in Fig. 4a. This dependence is exponential, with an exponent that characterizes the decoding accuracy per spike. Computation of the pairwise correlation matrices The cross-correlogram of spike trains of two neurons at time delay τ is computed as \(ccg_{ij}(\tau) = \frac{1}{T}\int_0^T f_i(t)\,f_j(t+\tau)\,dt\), where fi(t) is the spike train of the ith neuron, and T is the total duration of the recording considered. Then, the correlation of firing of two neurons on a time scale of τmax is computed as: \(C_{ij} = \frac{1}{\tau_{max} r_i r_j}\max\left(\int_0^{\tau_{max}} ccg_{ij}(\tau)\,d\tau,\ \int_0^{\tau_{max}} ccg_{ji}(\tau)\,d\tau\right)\), where ri is the average firing rate of the ith neuron during the recording time considered. Notice that the resulting correlation value is always non-negative. This computation replicates that of Giusti et al.16, who developed the clique topology method used for analyzing the pairwise correlation matrix. For all figures shown, we used τmax = 1 second. For all datasets, only neurons with average firing rates in the range 0.1–7 Hz were included in the pairwise correlation computation.
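A minimal discrete sketch of the cross-correlogram-based correlation \(C_{ij}\) defined above is given below (an illustration, not the authors' implementation). Treating each spike train as a sorted array of spike times, the integral of the cross-correlogram over [0, τmax] reduces to counting spike pairs whose lag falls in that window.

```python
import numpy as np

def pairwise_correlation(spikes_i, spikes_j, T, tau_max=1.0):
    """spikes_i, spikes_j: sorted spike times (s) of two neurons recorded over [0, T].
    Returns the non-negative correlation C_ij on time scale tau_max."""
    spikes_i = np.asarray(spikes_i)
    spikes_j = np.asarray(spikes_j)
    r_i = len(spikes_i) / T
    r_j = len(spikes_j) / T

    def integrated_ccg(a, b):
        # (1/T) * integral_0^tau_max ccg_ab(tau) dtau
        # = (number of spike pairs with 0 <= t_b - t_a <= tau_max) / T
        lo = np.searchsorted(b, a, side="left")
        hi = np.searchsorted(b, a + tau_max, side="right")
        return np.sum(hi - lo) / T

    return max(integrated_ccg(spikes_i, spikes_j),
               integrated_ccg(spikes_j, spikes_i)) / (tau_max * r_i * r_j)
```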
For datasets obtained from CRCNS, we used spike times from only the last two-thirds of the total duration of each session. Changing τmax to 500 ms does not change our conclusions: (1) Betti curves of all the displayed datasets can be fitted with a 3D hyperbolic geometry (P > 0.05 for both curve integral and L1 distance for all three Betti curves); (2) low-dimensional (≤10D) Euclidean geometry still cannot explain the Betti curves that are used to falsify Euclidean geometry in Fig. 2e,h and Extended Data Figs. 3 and 4; and (3) hyperbolic radius increases proportionally to the logarithm of time that the animal had to explore it. For time shifting the spike trains as controls for Betti curves, a random time lag is added to each neuron's spike train with periodic boundary condition applied so that the firing statistics of single neurons are preserved. The time lags are different across neurons to destroy the pairwise correlation statistics. Clique topology method for finding underlying hidden geometry in neural population responses A useful technique to determine the geometry in which neuronal responses reside is clique topology. This method detects structures in a pairwise similarity matrix (or negative distance matrix) of points that are (1) invariant to any monotonic linear or non-linear transformations of the entries of the similarity matrix, making it ideal to use for neuronal responses that are known to be distorted by various monotonic functions and (2) characteristic of the underlying geometry. Here, we use pairwise correlation between CA1 neuronal responses (see above) for the similarity matrix as Sij = Cij, where Sij is the similarity between responses of cells i and j, and Cij is the correlation between two cells' responses. This metric intuitively is reflective of degree of place field overlap between two neurons15. Provided with the symmetric pairwise correlation matrix, the algorithm passes the matrix through a step function of different thresholds to only keep those entries in the matrix above the threshold (Fig. 2b). Based on these values, a topological graph is created (Fig. 2c). This graph can then be characterized by its numbers of cycles (holes) in one, two or higher dimensions (called one cycle, two cycles, etc.), excluding those that arise from boundaries of higher-dimensional cliques. With high thresholds of the step function, the number of cycles will, in general, be low, as most nodes are not interconnected. With low thresholds, the number of cycles will also, in general, be low, as nodes form fully connected networks. These numbers of cycles in one, two or higher dimensions at a set threshold are called the 1st Betti number, 2nd Betti number and so on. Plotting them as a function over the entire density range ρ of links (from highest threshold to lowest threshold) gives the so-called 1st Betti curve β1 (ρ), 2nd Betti curve β2 (ρ) and so on. These Betti curves are very sensitive to similarity matrices produced from different geometries and, hence, can be used for identifying the underlying geometric structure of CA1 neural representation. For our datasets, we use the first three Betti curves to search for the underlying geometries of the data. The CliqueTop MATLAB package was obtained online from the original authors16. The pairwise correlation matrices of CA1 neuronal responses that we used for Betti curve generations are computed as described above. 
From a correlation matrix, we can also define the distance matrix between neuronal responses as Dij = −Cij, where Dij is the distance between neurons i and j, and Cij is the correlation between two neurons as defined above. This definition makes neurons that are more correlated with each other have a shorter distance between them, consistent with intuition. We note that, with the algorithm being invariant under any non-linear monotonic transformations, it can be easily shown that this definition of distance can be monotonically transformed to become a true distance metric. Notably, the relative ordering of all the entries of a certain distance matrix may not be realizable in some geometries no matter how the points are configured, but, in other geometries, it becomes possible. Thus, the tool can be used to test geometries for their ability to support a found correlation matrix by inspecting those invariant features, namely the Betti curves, βm(ρ), where ρ is the edge density from 0 to 1. To determine the geometry of the neuronal responses, we screened two kinds of geometries: Euclidean geometry of different dimensionality and hyperbolic geometry (native model) with different dimensionality and radii. In each geometry, we sampled points (the same number as the number of CA1 neurons used) uniformly according to the geometric properties. In d-dimensional Euclidean geometry, the points were sampled uniformly in the d-dimensional unit cube. In the d-dimensional hyperbolic model, ζ is set to 1 while the maximal radius Rmax of the geometry is adjusted to different values (this is equivalent to a fixed maximal radius Rmax with changing curvature ζ). The points are then sampled to have uniformly distributed angles and radii r ∈ [0, Rmax] following the distribution \(\rho(r) \sim \sinh^{d-1}(r)\), which is approximately proportional to \(e^{(d-1)r}\) when r ≫ 1. With points sampled, we then compute their pairwise distance matrices with their respective distance metric: Euclidean distance for points sampled from Euclidean geometry and hyperbolic distance for points sampled from hyperbolic geometry. The distance metric between two points in a hyperbolic geometry of constant curvature K = − ζ2 < 0, ζ > 0, is given by the hyperbolic law of cosines: $$\cosh(\zeta x) = \cosh(\zeta r)\cosh(\zeta r') - \sinh(\zeta r)\sinh(\zeta r')\cos\Delta\theta$$ where x is the hyperbolic distance; ζ is set to 1 in our model; r and r′ are the radial distances of the two points from the origin; and ∆θ is the angle between them. Considering that a small amount of noise may exist in the correlation between firing of two CA1 neurons due to the different trajectories taken by the animal, the stochastic nature of neuronal firing and other high-order cognitive processes, we also added i.i.d. multiplicative Gaussian noise to each entry of the pairwise distance matrices obtained for both the Euclidean model and the hyperbolic model before generating the Betti curves from them (no noise was added to the experimental pairwise correlation matrices). Thus, the final distance matrices to be used for Betti curve analyses have entries \(D_{ij} = D_{ij}^{geo} \cdot (1 + \epsilon \cdot N(0,1))\), where \(D_{ij}^{geo}\) is the geometric distance computed based on the coordinates of the sampled points i and j, and ϵ is the noise level set to be 0.05 for all the analyses in this paper.
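The sampling and distance computations just described can be sketched as follows (an illustration, not the authors' code; the grid-based inverse-CDF draw for the radii and the symmetric application of the multiplicative noise are assumptions about numerical details not specified in the text).

```python
import numpy as np

def sample_hyperbolic(n, d=3, r_max=10.0, rng=np.random.default_rng(0)):
    """Sample n points from a d-dimensional hyperbolic ball of radius r_max (zeta = 1):
    radii with density proportional to sinh(r)**(d-1), directions uniform on the sphere."""
    grid = np.linspace(1e-6, r_max, 20000)
    cdf = np.cumsum(np.sinh(grid) ** (d - 1))
    cdf /= cdf[-1]
    radii = np.interp(rng.random(n), cdf, grid)        # inverse-CDF draw on a grid
    dirs = rng.standard_normal((n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return radii, dirs

def noisy_hyperbolic_distances(radii, dirs, eps=0.05, rng=np.random.default_rng(1)):
    """Pairwise hyperbolic distances (law of cosines, zeta = 1) with multiplicative noise."""
    r1, r2 = radii[:, None], radii[None, :]
    cos_dtheta = np.clip(dirs @ dirs.T, -1.0, 1.0)
    cosh_x = np.cosh(r1) * np.cosh(r2) - np.sinh(r1) * np.sinh(r2) * cos_dtheta
    D = np.arccosh(np.clip(cosh_x, 1.0, None))
    noise = eps * rng.standard_normal(D.shape)
    noise = np.triu(noise, 1)
    noise = noise + noise.T                            # keep the noisy matrix symmetric
    D = D * (1.0 + noise)
    np.fill_diagonal(D, 0.0)
    return D

radii, dirs = sample_hyperbolic(n=50)
D = noisy_hyperbolic_distances(radii, dirs)
# -D then plays the role of the similarity matrix fed into the Betti-curve analysis.
```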
To summarize, sampled points from different geometries give different Betti curves through (1) different distribution of points in these geometries and (2) different distance metrics used to compute the pairwise distances. To determine whether a geometry underlies the correlation among CA1 neuronal responses, we compare the Betti curves generated from that geometry (by sampling points as described above) to the Betti curves generated from the pairwise correlation matrices of CA1 neurons (experimental Betti curves). Specifically, we uniformly sample points (same number as number of CA1 neurons) from the geometry, compute the pairwise distance matrix, use clique topology on the negative pairwise distance matrix (to correspond to correlation matrix) to generate the Betti curves and repeat 300 times. As a result, we have 300 × 3 Betti curves, as for each sampling we obtain the first three Betti curves (β1 (ρ), β2 (ρ) and β3 (ρ)), counting number of cycles in one, two and three dimensions, respectively. For each Betti curve βm (ρ), we compute its integrated Betti value, defined as \(\bar \beta _m = {\int}_0^1 {\beta _m\left( \rho \right)d\rho }\), where ρ is the edge density, and βm (ρ) denotes the m-th Betti curve. The integrated Betti values computed from the experimental Betti curves can then be compared to those generated from sampling in the given geometry (model Betti curves). We report the P values of integrated Betti values as the two-tailed percentiles for where the experimental integrated Betti values fall within the sampling-generated distributions (of 300 samples), separately for β1 (ρ), β2 (ρ) and β3 (ρ). Besides integrated Betti values, we also tested for L1 distances. Specifically, for each geometry, we calculate the average model Betti curves \(\beta _1^{{{{\mathrm{ave}}}}}\left( \rho \right)\), \(\beta _2^{{{{\mathrm{ave}}}}}\left( \rho \right)\) and \(\beta _3^{{{{\mathrm{ave}}}}}\left( \rho \right)\) by averaging over all 300 sampling-generated model Betti curves. Then, we compute the L1 distance between the experimental Betti curves (βm (ρ), m = 1, 2, 3) and the averages: \(l_m = {\int}_0^1 {\left| {\beta _m\left( \rho \right) - \beta _m^{ave}\left( \rho \right)} \right|} d\rho\). The P value of this L1 distance is the one-tailed percentile of itself in the distribution of L1 distances computed from each sampling-generated Betti curve and the average (% sampling that resulted in a larger or equal L1 distance to average than the experimental L1 distance). For visualization purposes only, the experimental Betti curves were smoothed with a kernel size of 1/50 total number of edge densities of each Betti curve (no smoothing was used for computing integrated Betti values and L1 distances). If a geometry gives statistically similar Betti curves compared to the experimental Betti curves, then the geometry is viable in explaining the neural representation. On the contrary, if the model Betti curves are dissimilar to the experimental Betti curves, then the geometry under testing is not viable. Determination of optimal R max or dimension of hyperbolic geometry with Betti curves To determine the optimal Rmax of the hyperbolic geometry that captures a correlation matrix in Figs. 2 and 3, we searched over Rmax in {5:0.5:24}. For each Rmax, we calculate the P values for integrated Betti values and L1 distances of the first two Betti curves (the 3rd Betti curve was excluded as its integrated Betti value becomes unstable when number of neurons is small). 
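Given Betti curves evaluated on a common edge-density grid, the integrated Betti values, L1 distances and their associated P values described above can be computed as in the following sketch (hypothetical inputs, not the authors' code): model_curves is an (n_samples, n_rho) array of sampling-generated Betti curves for one geometry and one Betti order, and exp_curve is the corresponding experimental Betti curve.

```python
import numpy as np

def integrated_betti(curve, rho):
    """Integrated Betti value: area under one Betti curve over edge density rho in [0, 1]."""
    return np.trapz(curve, rho)

def betti_p_values(exp_curve, model_curves, rho):
    # Two-tailed percentile of the experimental integrated Betti value
    model_int = np.array([integrated_betti(c, rho) for c in model_curves])
    exp_int = integrated_betti(exp_curve, rho)
    frac = np.mean(model_int <= exp_int)
    p_int = 2.0 * min(frac, 1.0 - frac)

    # One-tailed P value for the L1 distance to the average model curve
    avg = model_curves.mean(axis=0)
    l1 = lambda c: np.trapz(np.abs(c - avg), rho)
    model_l1 = np.array([l1(c) for c in model_curves])
    p_l1 = np.mean(model_l1 >= l1(exp_curve))
    return p_int, p_l1
```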
Multiplying the four P values together gives their product. The optimal Rmax is defined as the Rmax that results in the maximum P value product out of all the Rmax values that have been tested. For a recording session, we randomly sample 75% of active CA1 neurons, compute their pairwise correlation matrix, determine the optimal Rmax as above and repeat 100 times to obtain a distribution of Rmax for the recording session. To quantify how well the experimental Betti curves are fitted by model Betti curves from hyperbolic geometry of different dimensions, we used χ2 statistics. Specifically, we first binned the Betti curves (nbins = 10 for each curve) and then computed the square of the difference between the experimental Betti curves after binning and the mean of the model Betti curves (estimated with a bootstrap of 500 repetitions) after binning, denoted Dexpm. We also computed the square of the difference between model curves from each sampling of points and the mean of the model curves and took their average, denoted Dmodel. The χ2 statistic is defined as \(\chi^2 = \frac{1}{n_{bins}}\sum_{bins}\frac{D_{expm}}{D_{model}}\), where the division is bin-wise. Note that, when this χ2 statistic is small, it means that the deviation of model curves from the experimental curves is small, which indicates that the experimental curves are well explained by the model, and vice versa. We also tested whether the dimension of hyperbolic geometry has an effect on this χ2 statistic with two-way ANOVA, which also controls for the effect of different animals and sessions. Bayesian estimator of curvature/hyperbolic radius from place field sizes We used a Bayesian estimator on place field sizes, taking into account that we can only observe place fields within a range of sizes [sl,su] due to the size of the entire experimental environment, the threshold for determination of place fields and grid sizes for computing rate maps. The probability to observe one place field size s given curvature ζ of the representation is given by: $$P(s \mid \zeta) = \frac{1}{Z(\zeta)}\,\zeta e^{-\zeta s}$$ when sl < s < su and 0 otherwise, and $$Z(\zeta) = \int_{s_l}^{s_u} \zeta e^{-\zeta s}\,ds$$ Then, by Bayes' theorem, $$P(\zeta \mid s_1, \cdots, s_N) \propto \frac{\zeta^N}{Z(\zeta)^N}\,e^{-\zeta\sum_{n=1}^{N} s_n}\,P(\zeta)$$ where P(ζ) is the prior on ζ, for which we used a uniform distribution. Then, the estimate of ζ is the ζ that maximizes P(ζ|s1,⋯,sN). Simulation on the effect of undersampling on estimated Rmax For this simulation, we fix the radius of the hyperbolic geometry to be 10 and randomly draw 41 points from it (same as the number of neurons in Fig. 2d–f). Then, at each time step of 1 second (same time scale as τmax in the correlation calculation), we observe a noise-corrupted version of the pairwise distances (or negative pairwise correlations) among their responses. Specifically, we apply multiplicative Gaussian noise to the true pairwise distance matrix entrywise with ϵ = 0.5 in the formula above. At each readout time, we average all the observations so far to obtain one pairwise distance matrix to be used for Betti curve analysis to estimate the underlying Rmax, using the method described above.
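For the Bayesian curvature estimator described above, a minimal grid-based sketch is given below (not the authors' code). With a flat prior on ζ, the posterior mode coincides with the maximum likelihood estimate for the truncated exponential; the grid range for ζ is an assumption.

```python
import numpy as np

def estimate_zeta(sizes, s_l, s_u, zeta_grid=np.linspace(0.01, 1.0, 1000)):
    """MAP estimate of the curvature zeta from place field sizes observable in [s_l, s_u]."""
    sizes = np.asarray(sizes, dtype=float)
    n = len(sizes)
    log_post = np.empty_like(zeta_grid)
    for k, z in enumerate(zeta_grid):
        # Z(zeta) = integral of zeta * exp(-zeta * s) over [s_l, s_u]
        Z = np.exp(-z * s_l) - np.exp(-z * s_u)
        log_post[k] = n * (np.log(z) - np.log(Z)) - z * sizes.sum()
    return zeta_grid[np.argmax(log_post)]        # flat prior: posterior mode = MLE
```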
Fisher information of model neurons We modeled the neurons to have Gaussian tuning: \(f_i(s) = A_i \exp\left(-\frac{1}{2}(s-\mu_i)^T \Sigma_i^{-1}(s-\mu_i)\right)\), with stimulus s = [x, y] as a 2D random variable taking values uniformly from [0, 1] × [0, 1]. For each neuron, we sample Ai ∼ unif [5,25], μi ∼ unif([0, 1] × [0, 1]) and \(\Sigma_i = R(\theta_i)\,\mathrm{diag}(\sigma_{i1}^2,\sigma_{i2}^2)\,R(\theta_i)^T\), where R(θi) is the rotation matrix \(\begin{pmatrix} \cos\theta_i & -\sin\theta_i \\ \sin\theta_i & \cos\theta_i \end{pmatrix}\) with θi ∼ unif [0, 2π], and σi1 and σi2 are i.i.d. exponentially distributed with density ∝ exp(−ζσ), with ζ denoting the curvature of the representation geometry, which dictates the decay rate of the exponential distribution, and \(\zeta^{-1}\) is the mean of the distribution. For a uniform distribution of place field sizes, σi1 and σi2 are i.i.d. ∼ unif \([0, 2\zeta^{-1}]\) to keep the same mean as in the exponential case. We also assume the neurons to have independent Poisson variability48. Thus, given a stimulus s, the number of spikes fired by a neuron is ri|s ∼ Poisson(fi (s)). These neurons' responses then encode a posterior distribution on s: p(s|r1,⋯,rN). For each of 10,000 iterations, a stimulus s is randomly generated ∼ unif([0, 1] × [0, 1]) with Ai, μi and Σi all randomly generated again with the above distributions. The Fisher information matrix I(s) is computed by definition: $$I(s)_{i,j} = E\left[\left(\frac{\partial}{\partial s_i}\log p(r_1,\cdots,r_N \mid s)\right)\left(\frac{\partial}{\partial s_j}\log p(r_1,\cdots,r_N \mid s)\right)\Big|\,s\right]$$ which, for the independent neuron model as we assume here, simplifies to: $$I(s)_{i,j} = -\sum_{n=1}^{N} E\left[\frac{\partial^2}{\partial s_i\,\partial s_j}\log p(r_n \mid s)\,\Big|\,s\right]$$ So, with the Fisher information matrix I(s) computed each time, we can compute the average magnitude of the 10,000 Fisher information matrices, with magnitude defined as \(\det(I(s))^{0.5}\). A second-order polynomial is used for the fits, and the optimal size of the hyperbolic representation is determined as the Rmax at the peak of the fitted polynomial. For the case of each cell having potentially multiple place fields, we extracted the number of place fields from three 20-min sections in Fig. 3. Dividing them by the number of all cells gives the mean number of place fields per cell. Similarly, we can compute standard deviations of the number of place fields across cells. The number of place fields per cell is experimentally described by a gamma distribution20, whose scale and shape parameters can now be determined by the average values of the means and standard deviations computed from the three sessions. After drawing the number of place fields for each of the N cells, the number is rounded to the closest integer. Then, for each place field, Ai and Σi are randomly generated as above, and the Fisher information matrix can be computed for a random s. To also match the size of the environment to the experiment (1.8 × 1.8 m), μi is now drawn from unif([0, 1.8] × [0, 1.8]) and so is s for each of 10,000 iterations. Statistics and reproducibility No statistical method was used to predetermine sample size, but our sample sizes are similar to those reported in previous publications12,16,30,41. We analyzed data from previous publications20,23,24.
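The Fisher information computation above can be sketched in a few lines (the original simulations were written in MATLAB; the Python sketch below is a paraphrase under stated assumptions, not the authors' code, and the parameter values are illustrative). For independent Poisson neurons with mean rates f_i(s), the Fisher information matrix reduces to the standard identity I(s) = Σ_i ∇f_i(s) ∇f_i(s)^T / f_i(s), which the code uses directly; unlike the full simulation, the tuning parameters are drawn only once here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fields(n_neurons, zeta, box=1.0):
    """Random 2D Gaussian place fields with exponentially distributed widths (mean 1/zeta)."""
    A = rng.uniform(5.0, 25.0, n_neurons)
    mu = rng.uniform(0.0, box, (n_neurons, 2))
    theta = rng.uniform(0.0, 2 * np.pi, n_neurons)
    sig = rng.exponential(1.0 / zeta, (n_neurons, 2))
    covs = []
    for t, (s1, s2) in zip(theta, sig):
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        covs.append(R @ np.diag([s1 ** 2, s2 ** 2]) @ R.T)
    return A, mu, np.array(covs)

def fisher_info(s, A, mu, covs):
    """I(s) = sum_i grad f_i(s) grad f_i(s)^T / f_i(s) for independent Poisson neurons."""
    I = np.zeros((2, 2))
    for a, m, C in zip(A, mu, covs):
        Cinv = np.linalg.inv(C)
        d = s - m
        f = a * np.exp(-0.5 * d @ Cinv @ d)
        if f < 1e-12:                      # numerical guard for far-away fields
            continue
        g = -f * (Cinv @ d)                # gradient of f_i at s
        I += np.outer(g, g) / f
    return I

A, mu, covs = sample_fields(n_neurons=200, zeta=10.0)
mags = [np.sqrt(np.linalg.det(fisher_info(rng.uniform(0, 1, 2), A, mu, covs)))
        for _ in range(100)]
print("average det(I)^0.5:", np.mean(mags))
```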
For rats exploring the 48-m linear track, two rats had fewer than 30 active neurons (overall average firing rate within the range 0.1–7 Hz) recorded, and they were excluded from further analyses. For rats exploring a 180 × 180-cm box, all sessions that have more than 50 simultaneously recorded CA1 neurons were included for analysis. We also excluded neurons that are marked as inhibitory or not identified and those that have average firing rates outside the range 0.1–7 Hz during the entire recording session. We also excluded neurons that have average firing rates between 0.1 Hz and 7 Hz but remain silent (fire zero spikes) for more than 30 min for potential death of the cell or movement of electrodes. Details of sessions used are summarized in Supplementary Table 2. There was no randomization or division into experimental groups. Analyses were not performed blinded to the conditions of the experiments. Throughout the study, we employed non-parametric statistical methods. For comparisons between two groups, we used unpaired two-sample t-tests. Data distribution was assumed to be normal, but this was not formally tested. For comparisons among more than two groups, ANOVA with Tukey's post hoc test was used. For checking whether samples are likely coming from a specific theoretical distribution, χ2 GOF test was used. Correlations are reported using Pearson's correlation coefficient. More information can be found in the Nature Research Reporting Summary. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. All datasets used in this study, except rats running on the 48-m linear track, were generously contributed by György Buzsáki at http://crcns.org/data-sets/hc/hc-3 (refs. 23,24). See Supplementary Table 2 for the sessions used. The datasets in which rats ran on the 48-m linear track are made available on https://crcns.org/data-sets/hc/hc-31/. Source data are provided with this paper. Code availability Code written in MATLAB 2018b to simulate Fisher information models is publicly accessible on GitHub at https://github.com/HuanqiuZhang/Fisher_info_codes. The CliqueTop MATLAB package (2015 version) used for computing Betti curves is obtained from the original authors16 at https://github.com/nebneuron/clique-top. Other codes are available upon reasonable request. Newell, A. Unified Theories of Cognition (Harvard University Press, 1990). Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction (MIT Press, 1998). Balaguer, J., Spiers, H., Hassabis, D. & Summerfield, C. Neural mechanisms of hierarchical planning in a virtual subway network. Neuron 90, 893–903 (2016). Cubero, R. J., Jo, J., Marsili, M., Roudi, Y. & Song, J. Statistical criticality arises in most informative representations. J. Stat. Mech. 2019, 063402 (2019). Boguna, M., Papadopoulos, F. & Krioukov, D. Sustaining the internet with hyperbolic mapping. Nat. Commun. 1, 62 (2010). Urdapilleta, E., Troiani, F., Stella, F. & Treves, A. Can rodents conceive hyperbolic spaces? J. R. Soc. Interface 12, 20141214 (2015). Scoville, W. B. & Milner, B. Loss of recent memory after bilateral hippocampal lesions. J. Neurol. Neurosurg. Psychiatry 20, 11–21 (1957). O'Keefe, J. & Nadel, L. The Hippocampus as a Cognitive Map (Clarendon Press, 1978). O'Keefe, J., Burgess, N., Donnett, J. G., Jeffery, K. J. & Maguire, E. A. Place cells, navigational accuracy, and the human hippocampus. Philos. Trans. R. Soc. Lond. Ser. B 353, 1333–1340 (1998). Pfeiffer, B. E. 
& Foster, D. J. Hippocampal place-cell sequences depict future paths to remembered goals. Nature 497, 74–79 (2013). O'Keefe, J. & Dostrovsky, J. The hippocampus as a spatial map: preliminary evidence from unit activity in the freely-moving rat. Brain Res. 34, 171–175 (1971). Fenton, A. A. et al. Unmasking the CA1 ensemble place code by exposures to small and large environments: more place cells and multiple, irregularly arranged, and expanded place fields in the larger space. J. Neurosci. 28, 11250–11262 (2008). Krioukov, D., Papadopoulos, F., Kitsak, M., Vahdat, A. & Boguna, M. Hyperbolic geometry of complex networks. Phys. Rev. E 82, 036106 (2010). Gromov, M. Metric Structures for Riemannian and Non-Riemannian Spaces (Birkhauser, 2007). Hampson, R. E., Byrd, D. R., Konstantopoulos, J. K., Bunn, T. & Deadwyler, S. A. Hippocampal place fields: relationship between degree of field overlap and cross-correlations within ensembles of hippocampal neurons. Hippocampus 6, 281–293 (1996). Giusti, C., Pastalkova, E., Curto, C. & Itskov, V. Clique topology reveals intrinsic geometric structure in neural correlations. Proc. Natl Acad. Sci. USA 112, 13455–13460 (2015). Zhou, Y., Smith, B. H. & Sharpee, T. O. Hyperbolic geometry of the olfactory space. Sci. Adv. 4, eaaq1458 (2018). Chaudhuri, R., Gerc¸ek, B., Pandey, B., Peyrache, A. & Fiete, I. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nat. Neurosci. 22, 1512–1520 (2019). Gardner, R. J. et al. Toroidal topology of population activity in grid cells. Nature 602, 123–128 (2022). Rich, P. D., Liaw, H.-P. & Lee, A. K. Large environments reveal the statistical structure governing hippocampal representations. Science 345, 814–817 (2014). Low, R. J., Lewallen, S., Aronov, D., Nevers, R. & Tank, D. W. Probing variability in a cognitive map using manifold inference from neural dynamics. Preprint at https://www.biorxiv.org/content/10.1101/418939v2 (2018). Nieh, E. H. et al. Geometry of abstract learned knowledge in the hippocampus. Nature 595, 80–84 (2021). Mizuseki, K., Sirota, A., Pastalkova, E., Diba, K. & Buzsáki, G. Multiple single unit recordings from different rat hippocampal and entorhinal regions while the animals were performing multiple behavioral tasks. CRCNS.org. https://doi.org/10.6080/K09G5JRZ (2013). Mizuseki, K. et al. Neurosharing: large-scale data sets (spike, LFP) recorded from the hippocampal-entorhinal system in behaving rats. F1000Res. 3, 98 (2014). Jung, M. W., Wiener, S. I. & McNaughton, B. L. Comparison of spatial firing characteristics of units in dorsal and ventral hippocampus of the rat. J. Neurosci. 14, 7347–7356 (1994). Hafting, T., Fyhn, M., Molden, S., Moser, M.-B. & Moser, E. I. Microstructure of a spatial map in the entorhinal cortex. Nature 436, 801–806 (2005). Kjelstrup, K. B. et al. Finite scale of spatial representation in the hippocampus. Science 321, 140–143 (2008). Bialek, W. Biophysics: Searching for Principles (Princeton University Press, 2012). Wilson, M. A. & McNaughton, B. L. Dynamics of the hippocampal ensemble code for space. Science 261, 1055–1058 (1993). Ziv, Y. et al. Long-term dynamics of CA1 hippocampal place codes. Nat. Neurosci. 16, 264–266 (2013). Cover, T. M. & Thomas, J. A. Elements of Information Theory (John Wiley & Sons, 2012). Brunel, N. & Nadal, J.-P. Mutual information, Fisher information, and population coding. Neural Comput. 10, 1731–1757 (1998). Kloosterman, F., Layton, S. P., Chen, Z. & Wilson, M. A. 
Bayesian decoding using unsorted spikes in the rat hippocampus. J. Neurophysiol. 111, 217–227 (2014). Boss, B. D., Turlejski, K., Stanfield, B. B. & Cowan, W. M. On the numbers of neurons on fields CA1 and CA3 of the hippocampus of Sprague-Dawley and Wistar rats. Brain Res. 406, 280–287 (1987). West, M., Slomianka, L. & Gundersen, H. J. G. Unbiased stereological estimation of the total number of neurons in the subdivisions of the rat hippocampus using the optical fractionator. Anat. Rec. 231, 482–497 (1991). Thompson, L. & Best, P. Place cells and silent cells in the hippocampus of freely-behaving rats. J. Neurosci. 9, 2382–2390 (1989). Lee, I., Yoganarasimha, D., Rao, G. & Knierim, J. J. Comparison of population coherence of place cells in hippocampal subfields CA1 and CA3. Nature 430, 456–459 (2004). Eliav, T. et al. Multiscale representation of very large environments in the hippocampus of flying bats. Science 372, eabg4020 (2021). Harland, B., Contreras, M., Souder, M. & Fellous, J.-M. Dorsal CA1 hippocampal place cells form a multi-scale representation of megaspace. Curr. Biol. 31, 2178–2190 (2021). Buzsáki, G. & Mizuseki, K. The log-dynamic brain: how skewed distributions affect network operations. Nat. Rev. Neurosci. 15, 264–278 (2014). Karlsson, M. P. & Frank, L. M. Network dynamics underlying the formation of sparse, informative representations in the hippocampus. J. Neurosci. 28, 14271–14281 (2008). Bittner, K. C., Milstein, A. D., Grienberger, C., Romani, S. & Magee, J. C. Behavioral time scale synaptic plasticity underlies CA1 place fields. Science 357, 1033–1036 (2017). Nitz, D. & McNaughton, B. Differential modulation of CA1 and dentate gyrus interneurons during exploration of novel environments. J. Neurophysiol. 91, 863–872 (2004). Ghaninia, M. et al. Hyperbolic odorant mixtures as a basis for more efficient signaling between flowering plants and bees. PLoS ONE 17, e0270358 (2022). Sharpee, T. O. An argument for hyperbolic geometry in neural circuits. Curr. Opin. Neurobiol. 58, 101–104 (2019). Driscoll, L. N., Pettit, N. L., Minderer, M., Chettih, S. N. & Harvey, C. D. Dynamic reorganization of neuronal activity patterns in parietal cortex. Cell 170, 986–999 (2017). O'Keefe, J. & Burgess, N. Geometric determinants of the place fields of hippocampal neurons. Nature 381, 425–428 (1996). Tolhurst, D. J., Movshon, J. A. & Dean, A. F. The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vis. Res. 23, 775–785 (1983). Paxinos, G. & Watson, C. The Rat Brain in Stereotaxic Coordinates (Elsevier Academic Press, 2005). Skaggs, W. E., McNaughton, B. L. & Gothard, K. M. An information-theoretic approach to deciphering the hippocampal code. in: Advances in Neural Information Processing Systems 1030–1037 (1993). We are grateful to Y. Zhou and W. Hsu for helpful discussions and feedback on the manuscript. This research was supported by an AHA-Allen Initiative in Brain Health and Cognitive Impairment award made jointly through the American Heart Association and the Paul G. Allen Frontiers Group (19PABH134610000); the Janelia Visiting Scientist Program; the Dorsett Brown Foundation; the Mary K. Chapman Foundation; the Aginsky Fellowship; National Science Foundation (NSF) grant IIS-1724421; the NSF Next Generation Networks for Neuroscience Program (award 2014217); National Institutes of Health grants U19NS112959 and P30AG068635 (to T.O.S.); and the Howard Hughes Medical Institute (to P.D.R. and A.K.L.). 
The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, USA Huanqiu Zhang & Tatyana O. Sharpee Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA, USA Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA P. Dylan Rich Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, USA Albert K. Lee Huanqiu Zhang Tatyana O. Sharpee H.Z. and T.S. designed the study and wrote the manuscript. P.D.R. and A.K.L. designed the experiments. P.D.R. performed the experiments. H.Z. performed the analyses and simulations. All authors discussed the results and contributed to writing the final manuscript. Correspondence to Tatyana O. Sharpee. Nature Neuroscience thanks Dori Derdikman and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Extended Data Fig. 1 Betti curves generated from hyperbolic geometry of low dimensions (with optimized radius) are statistically indistinguishable from the experimental Betti curves for both the 48-m linear track (a) and square box (b) environments, with 3D being the optimal dimension in general. For each environment type, the three sessions with the largest numbers of recorded neurons are shown. Session IDs are shown on the left. The Betti curves are statistically similar both in terms of the area under the curve (measured by integrated Betti value) and the curve shape (measured by L1 distances between curves). Boxes show model interquartile range (n = 300 independent replicates). Center line indicates model median and upper and lower whiskers extend to the most extreme data points excluding outliers. Black lines indicate experimental values. (c) χ2 statistics (see Methods) are used to quantify how well the experimental Betti curves are fitted by model Betti curves from different dimensions. Results from two example sessions are shown, one for each environmental type (linear vs. square). Boxes show interquartile range (n = 500 bootstraps). Center line indicates median. Upper and lower whiskers encompass 95% of data. One-way ANOVA for each session: p < 10−8 across all dimensions (linear: F = 11239.87, square: F = 13613.23). Tukey's post hoc test indicates that 3D fits are significantly better than fits from other dimensions (p < 10−8 for all 3 pairwise comparisons). Same conclusions hold for two-way ANOVA across all sessions in (a) and (b). Extended Data Fig. 2 Betti curves generated from 3D hyperbolic geometry are statistically indistinguishable from the experimental Betti curves for both linear track and square box environments. (a) Illustration of how the hyperbolic radius is estimated for each session. 3D hyperbolic geometries of different radii are used to generate model Betti curves (solid lines and shading in the insets indicate mean ± std), which are then compared against the experimental Betti curves (dashed lines in the insets) of the session for p-values of integrated Betti values and L1 distances. The product of p-values is plotted versus radius, and the radius of the session is estimated as the radius that gives the largest product. (b) Linear track datasets. Notations are as in Fig. 2. Boxes show model interquartile range (n = 300 independent replicates). Center line indicates model median and upper and lower whiskers encompass 95% of model range. Black lines indicate experimental values.
48-m linear track datasets are from Rich, 2014. The rest are publicly available datasets collected by Dr. Buzsaki's group including the two 2.5-m linear track datasets (2nd row in b) and the square box datasets (c). Session IDs are shown at the upper-left corner of each panel. See Supplementary Table 1 for all the radii used and the fitting statistics. Extended Data Fig. 3 Betti curves generated from low-dimensional Euclidean geometries are statistically different from the experimental Betti curves in 48-m linear track exploration. Top to bottom rows represent different animals with N = 41, 38 and 34, respectively. (a,d,g) Distribution of experimental integrated Betti values (black lines) versus model statistics (boxplots). (b,e,h) Distribution of experimental L1 distances (black lines) versus model statistics (boxplots). Boxes show model interquartile range (n = 300 independent replicates). Center line indicates model median and upper and lower whiskers encompass 95% of model range. Black lines indicate experimental values. (c,f,i) Experimental Betti curves (dashed lines) and model Betti curves (solid lines and shading indicate mean ± std) from Euclidean geometry of dimension shown on the top. Extended Data Fig. 4 Betti curves generated from low-dimensional Euclidean geometries are statistically different from the experimental Betti curves in square box exploration. Top to bottom rows represent different datasets (session IDs shown on the left) with N = 56, 48 and 42, respectively. (a,d,g) Distribution of experimental integrated Betti values (black lines) versus model statistics (boxplots). (b,e,h) Distribution of experimental L1 distances (black lines) versus model statistics (boxplots). Boxes show model interquartile range (n = 300 independent replicates). Center line indicates model median and upper and lower whiskers encompass 95% of model range. Black lines indicate experimental values. (c,f,i) Experimental Betti curves (dashed lines) and model Betti curves (solid lines and shading indicate mean ± std) from Euclidean geometry of dimension shown on the top. Extended Data Fig. 5 Betti curves generated from neurons with high spatial information are statistically indistinguishable from those of 3D hyperbolic geometry. (a) Distribution of information of each neuron's response about the animal's location per second. Black line indicates 25 percentile cutoff of the distribution. Those above the cutoff are used for Betti curve analyses in b,c. (b,c) Same notations as in Fig. 2. (d) Distribution of information of each neuron's response about the animal's location per spike. Black line indicates 25 percentile cutoff of the distribution. Those above the cutoff are used for Betti curve analyses in e,f. For a–f, the animal with the most number of active neurons recorded on the 48-m linear track is used (same one as in Fig. 2d-f). (g–l) Same analyses for a session in square box. Again, the session with the most number of active neurons is used (same one as in Fig. 2g-i, session ID: ec014.215). For all boxplots, boxes show model interquartile range (n = 300 independent replicates). Center line indicates model median and upper and lower whiskers encompass 95% of model range. Black lines indicate experimental values. Extended Data Fig. 6 Place field size on the linear track as a function of anatomical recording location. (a) The location of each place field-corresponding cell viewed from above the septal half of CA1, with the field size denoted by the color scale at the right. 
(b) The same fields as in (a) but plotted in terms of the relative septal-temporal and proximal-distal coordinates of CA1. (c) As in (b) but separated by animal. (d) and (e) show the field sizes with respect to the two hippocampal axes, as well as the Pearson correlation coefficients and associated two-sided p-values. Jitter has been added to all anatomical values to improve visualization. Extended Data Fig. 7 The hyperbolic representation sizes are similar initially across environments and change over time. (a) Estimated hyperbolic radii are similar for linear track and square box environments during the initial exposures. Dots represent median estimates (n = 100 independent sampling), lines represent 95% confidence interval. For linear track, same sessions used as in Fig. 2d and Extended Data Fig. 2b first row; square box session used: ec014.215. (b) Estimated radius increases from the same animal's 1st to its 8th exposure to a linear track environment. Session ID: ec014.468 and ec014.639. p-value = 4∙10−15 using two-sample t-test for same mean (two-sided). (c) Estimated radius increases from another animal's 6th to its 10th exposure to a square box environment. Session ID: ec016.397 and ec016.582. p-value = 1.3∙10−5 using two-sample t-test for same mean (two-sided). Hyperbolic radii are estimated using topological analyses for both b and c. For boxplots in b and c, boxes show interquartile range (n = 100 independent sampling). Center line indicates median and upper and lower whiskers encompass 95% of data. (d) Exponent of the distribution of place field sizes increases over time (r = 0.85, p = 2∙10−6, with the x axis on a log scale). Same square box sessions used as in Fig. 3a. Dots and error bars (n = 100 independent sampling) indicate medians and 95% confidence intervals. Distributions of 4 sample 20-min sections indicated in the left panel are shown on the right. Same notation as in Fig. 1d. (e) % active neurons (firing frequency between 0.1 and 7 Hz) decays with familiarity with the environment (number of times the animal has been exposed to the environment). Each data point is a session. Straight line shows the least-squares fit. r = −0.9174, with associated two-sided p = 0.00361. Extended Data Fig. 8 The increase of hyperbolic radius over time is independent of behavior, network state and time elapsed without experience. (a) No consistent change in the animal's behavior is observed when the hyperbolic radius changes. Error bars represent standard deviations from jackknife resampling using 15 min out of 20 min (n = 4 jackknives). Same sessions used as in Fig. 3a. Gray lines show least-squares regression. Two-sided p > 0.05 for all three panels. Total area covered is calculated by breaking the square box into 2 × 2 cm grids and counting how many of the grids are passed by the animal during 15 min out of each of the 20-min intervals. (b) Hyperbolic radius increases over exploration time when periods with running speed <5 cm/sec were excluded from Betti curve calculation. Dots and error bars (n = 100 independent sampling) indicate medians and 95% confidence intervals. Dashed black line is the least-squares regression of data with log scale on the x-axis (r = 0.91, two-sided p = 2∙10−8). Dashed gray line shows the least-squares fit of Eq. 2. (c) Two animals are identified (one for each panel) that explored section 4 of the linear track during epoch 4 (Fig. 3c) for more than 15 min.
They were put back into the sleep box for at least 4 h (at most 5 h) and then returned to the long track where they ran an additional 4–5 laps. Hyperbolic radii are estimated using the topological method for epoch 4 and after-sleep exploration. The middle boxplots are obtained by extrapolating hyperbolic radius during epoch 4 to after 4-h sleep using Eq. 2. For both animals, the actual hyperbolic radii estimated during exploration after 4-h sleep are significantly lower than the extrapolated values (two-sided p = 6∙10−8, 1.5∙10−15 for two animals, respectively, using two-sample t-test for same mean). Boxes show interquartile range (n = 100 independent sampling). Center line indicates median and upper and lower whiskers encompass 95% of data. Jitter has been added to the estimated hyperbolic radii to improve visualization. Extended Data Fig. 9 Bias exists when the hyperbolic radius is estimated using Betti curve analyses from short neural recordings. (a) The estimated hyperbolic radius exhibits strong positive bias when the recording time is less than 1 min in a simulation (see Methods for the simulation details). Black line at y = 10 represents the ground truth radius value used in the simulation. Boxes show interquartile range (n = 100 independent sampling). Center line indicates median. Upper and lower whiskers encompass the 95% confidence interval of the radius estimate. (b) Hyperbolic radius is estimated from actual neural recordings of different lengths. Shown here are results for two example sessions (session IDs are given on the top left). Boxes show interquartile range (n = 100 independent sampling). Center line indicates median. Upper and lower whiskers encompass 95% of the radius estimates. An exponential function is then fitted to the medians with the time constant denoted on the plot. Extended Data Fig. 10 The radius of the hyperbolic representation increases over a timescale of seconds, independent of the segment length and the exposures used in the measure of familiarity. (a) Hyperbolic radius increases over time when 2-m segments are used instead of 1-m segments (r = 0.48, p = 0.02). (b) Hyperbolic radius increases over time when the mean speed of all passes is used instead of first passes (r = 0.54, p = 8∙10−5). Supplementary Tables 1 and 2 and Supplementary Figs. 1 and 2. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Zhang, H., Rich, P.D., Lee, A.K. et al. Hippocampal spatial representations exhibit a hyperbolic geometry that expands with experience. Nat Neurosci 26, 131–139 (2023). https://doi.org/10.1038/s41593-022-01212-4
Associating somatic mutations to clinical outcomes: a pan-cancer study of survival time Paul Little, Dan-Yu Lin & Wei Sun (ORCID: orcid.org/0000-0002-6350-1107) Genome Medicine volume 11, Article number: 37 (2019) We developed subclone multiplicity allocation and somatic heterogeneity (SMASH), a new statistical method for intra-tumor heterogeneity (ITH) inference. SMASH is tailored to the purpose of large-scale association studies with one tumor sample per patient. In a pan-cancer study of 14 cancer types, we studied the associations between survival time and ITH quantified by SMASH, together with other features of somatic mutations. Our results show that ITH is associated with survival time in several cancer types and its effect can be modified by other covariates, such as mutation burden. SMASH is available at https://github.com/Sun-lab/SMASH. Somatic mutations, including somatic point mutations (SPMs; e.g., single nucleotide variants or indels) and somatic copy number alterations (SCNAs), are the underlying driving force for tumor growth. In this sense, cancer is a genetic disease. Therefore, association studies between somatic mutations and clinical outcomes may provide insights into tumor biology or personalized treatment selection. However, few efforts have been reported toward this end, partly because most somatic mutations or even gene-level mutations are too rare to conduct meaningful association studies. An alternative to a mutation-by-mutation or gene-by-gene association study is to summarize mutation information by certain features and then associate such features with clinical outcomes. In this paper, we consider three such features: tumor mutation burden (TMB, i.e., the total number of SPMs), SCNA burden, and the degree of (genetic) intra-tumor heterogeneity (ITH), which refers to the fact that tumor cells can be grouped into subclones such that the cells within one subclone share similar sets of somatic mutations. ITH is a fundamental characteristic of somatic mutations and has been associated with clinical outcomes such as survival time or immunotherapy treatment response [1, 2]. We estimate TMB by counting the number of non-synonymous point mutations [3, 4] and estimate the burden of SCNAs using allele-specific copy number estimates derived from ASCAT [5]. While measuring TMB and SCNA burden is relatively straightforward, quantifying ITH is much more challenging. Computational methods have been developed to characterize ITH, e.g., to identify the phylogenetic tree of subclones and the mutations belonging to each subclone [6–11]. However, there is no consensus on the optimal approach for ITH inference or on the appropriate approach for quantifying ITH in association studies. The estimation uncertainty of ITH is often unavoidable because the observed data may be compatible with more than one subclone configuration. Therefore, such uncertainty should be incorporated in association studies. Counting the number of subclones is a straightforward approach to quantify ITH. Andor et al. [1] assessed the association between the number of subclones and survival time in 12 cancer types using data derived from The Cancer Genome Atlas (TCGA). These investigators did not find any significant associations, except for gliomas. Morris et al. [12] assessed the association between ITH and survival time in nine cancer types and found significant associations for several cancer types.
They treated ITH as a binary variable based on whether or not the number of subclones was larger than a threshold.

An apparent drawback of the aforementioned two approaches is that the subclone proportion information is lost. For example, consider a tumor sample with two subclones whose cellular proportions are 99% and 1%. Intuitively, this tumor sample is fairly homogeneous and may be better classified as having one subclone rather than two. A second drawback of the thresholding approach of Morris et al. [12] was that only a small number of patients (3 to 11 patients across nine cancer types, median of six patients) were classified as having both high ITH and non-censored survival time. As a result, the association results can be highly unstable with respect to ITH inference.

An alternative metric to quantify ITH is mutant-allele tumor heterogeneity (MATH) [13], which is defined as 100×MAD/median, where median is the median of the variant allele frequencies (VAFs) of all somatic point mutations within a sample, and MAD is the median absolute deviation of the VAFs. MATH thus measures the spread of the VAF distribution relative to its center. This approach ignores the fact that VAF can be affected by SCNAs (see Fig. 1 for an illustration).

Fig. 1 ITH example with and without CNAs. (a) Visualization of a tree, where each node represents a subclone within a tumor sample. N denotes the normal cells, and A, B, C, and D denote the descending subclones. To simplify notation, we also use A, B, C, and D to denote the mutations that arise from the corresponding four subclones. We simulated a tumor purity of 0.762 with 1000 variants under the following scenarios: (1) no somatic copy number alterations (SCNAs) and (2) SCNAs in which mutations are equally distributed across clonal copy number states (0,1), (1,1), and (1,2). A copy number state denotes the number of copies of the two alleles. For example, copy number state (0,1) denotes deletion of one allele. (b) The second column corresponds to the cellular proportions of each subclone after accounting for tumor purity. The third and fourth columns correspond to the cellular prevalence and mean VAF (without SCNAs), respectively, of the mutations arising from each subclone. In (c) and (d), the black curve is the overall VAF density, and the colored curves are the subclone-specific VAF densities. Multiple subclone-specific VAF peaks with SCNA are due to combinations of multiplicity and subclone allocation.

Although many methods have been developed for ITH inference, none of them are ideal for large-scale association studies. In most solid tumors, a significant proportion of the genome is affected by SCNA, so methods that cannot account for SCNA [8, 14–17] are not appropriate for our purpose. Several methods either explicitly or implicitly require multiple samples per patient [6, 8, 10, 14, 15, 17] and thus cannot be used for our association analysis of TCGA data, where each patient has only one sample. PyClone [11] is arguably the most popular method for ITH study and has been used in two pan-cancer studies [1, 12]. However, PyClone is designed for targeted sequencing studies, where a small number of loci are sequenced with ultra-high coverage (e.g., >1,000× coverage). Its Bayesian Markov chain Monte Carlo (MCMC) implementation requires an extended runtime. In addition, PyClone performs clustering of somatic mutations but does not infer phylogeny.
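As a concrete illustration of the MATH metric defined above, the sketch below computes 100 × MAD/median over the VAFs of a sample's point mutations. It follows the plain definition given in the text; commonly used implementations (e.g., R's mad()) additionally scale the MAD by a consistency factor of about 1.4826, which is left here as an explicit option.

```python
# Minimal sketch of the MATH score (mutant-allele tumor heterogeneity):
# MATH = 100 * MAD / median of the variant allele frequencies (VAFs).
import numpy as np

def math_score(vafs, scale_mad=1.0):
    """Compute MATH from a 1-D array of VAFs.

    scale_mad=1.0 matches the plain definition given in the text;
    scale_mad=1.4826 reproduces the normal-consistent MAD used by R's mad().
    """
    vafs = np.asarray(vafs, dtype=float)
    med = np.median(vafs)
    mad = scale_mad * np.median(np.abs(vafs - med))
    return 100.0 * mad / med

# Example: alternate / total read counts for a handful of SPMs (toy numbers).
alt = np.array([12, 30, 8, 25, 40, 5])
tot = np.array([60, 62, 55, 58, 80, 50])
print(math_score(alt / tot))
```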
Many other existing methods for ITH study [7, 9–11] also use Bayesian MCMC implementations, and their computational burden makes them undesirable for large-scale association studies. Another class of methods uses combinatorial approaches [6, 16, 17]. Several of these approaches do not account for SCNA [16, 17]. SPRUCE [6], a more recent algorithm, jointly models SPMs and SCNAs by multi-state perfect phylogeny mixtures. It is designed for multi-sample studies with a small number of mutations or mutation clusters. For example, as shown in their simulations, even with only 5 mutations or mutation clusters at 500× coverage, the median number of solutions is between 1000 and 10,000 when there are two samples, and around 100 when there are five samples. Other ITH quantification methods either do not provide an easy-to-use uncertainty measurement [18] or require additional (hard-to-get) information such as phasing between sparsely distributed somatic mutations [19].

Given these considerations, we developed a new method for ITH study, called subclone multiplicity allocation and somatic heterogeneity (SMASH). To overcome the limitations of the aforementioned approaches for quantifying ITH, we quantify ITH, as previous studies have done [12, 20], using the entropy \(-\sum_{s=1}^{S} \vartheta_{s} \log(\vartheta_{s})\), where 𝜗s is the proportion of tumor cells that belong to the sth subclone and S is the total number of subclones. We assessed the performance of SMASH and a few other methods in large-scale simulated association analyses. Then we used these methods to study the association between survival time and TMB, SCNA burden, frequently mutated genes, and ITH using data on 5898 TCGA tumor samples from 14 cancer types [21].

The major contributions of our work are threefold. First, we propose a new computational method that is designed for large-scale studies of ITH with higher computational efficiency. Second, we evaluated the benefit of incorporating the uncertainty of ITH estimates in association studies and conclude that the benefit is positive but relatively minor. Third, in the large-scale real data analysis, we found several interesting patterns, such as the interaction between mutation burden and ITH.

SMASH is a frequentist approach to identify tumor subclones through clustering somatic mutation read counts while accounting for copy number alterations. We enumerate all possible phylogenetic trees that are compatible with the observed data and quantify the probability of each phylogenetic tree. We make the following assumptions when enumerating phylogenetic trees: (1) primary tumors arise from a founder clone, i.e., have a unicellular origin; (2) loci harboring SPMs associated with ITH have homozygous reference alleles in normal cells and a mixture of reference and alternate alleles in the tumor; (3) each SPM event occurs only once on a single allele, and a locus will not undergo more than one point mutation or revert back to its original base; (4) at most two descendant subclones can evolve from an ancestral subclone; and (5) SCNAs are clonal events.

Assumption (1) follows from the clonal evolution theory of tumor growth [22]. Assumption (2) is automatically satisfied because genetic loci with germline mutations are filtered out during somatic mutation calling. Assumption (3) is referred to as the infinite sites assumption [23, 24], which is reasonable because the number of mutated loci is very small relative to the size of the genome.
This assumption implies that tumor evolution is consistent with a "perfect and persistent phylogeny" [9, 11] such that each subclone has only one parental subclone and all mutations of the parental subclone. Assumption (4) is reasonable when we consider tumor evolution in a refined time scale, and it is helpful to reduce the number of enumerated phylogenies. Assumption (5) is the only restrictive one, and it is a crucial assumption made by ASCAT [5], which is the method we use to infer copy numbers. Assumption (5) is also adopted by PyClone [11] and EXPANDS [18], the two methods that have been used in previous pan-cancer studies [1, 12]. To the best of our knowledge, Canopy [10] is the only method that can infer both subclonal SCNA and subclonal point mutations. However, Canopy carries a high computational cost and emphasizes multiple sample design, which makes it unsuitable for our study. By assuming clonal SCNA, all subclonal SPMs occur after the SCNA event and thus have a multiplicity of one. On the other hand, clonal SPMs can occur before or after SCNA and thus can have varying multiplicities, depending on the copy number state. We obtain SCNA-related information, including tumor purity, ploidy, and allele-specific copy numbers per SPM through ASCAT [5]. Notation and framework Let T and \(\tilde {T}\) denote the failure time and the corresponding censoring time, respectively. Define \(X = \min (T,\tilde {T})\) and \(\Delta = I(T \leq \tilde {T})\). Let Z=(Z1,…,Zp)T represent a p-vector of baseline covariates. Let l=1,…,L index each locus harboring a SPM after mutation calling and filtering. The lth SPM is characterized by a pair of alternate and reference read counts derived from the tumor sample denoted by Al and Rl, respectively. The summation Tl=Al+Rl is referred to as the total read depth. The corresponding clonal copy number state is denoted by (Cl1,Cl2), where Cl1≤Cl2. For a given subject, the observed clinical data consist of (X,Δ,Z), and genomic data are represented by (Al,Rl,Cl1,Cl2) for l=1,…,L. Assume that the tumor sample of interest has S subclones. These S subclones relate to each other through a phylogenic tree describing the order in which subclones emerged. In Fig. 2, we enumerated all phylogenic trees for one to five subclones that capture the possible linear and branching evolutions between subclones. A possible allocation of somatic mutations across the S subclones can be described by a vector of length S: \(\boldsymbol {q}_{u}^{T} = (q_{u1}, \ldots,q_{uS})\) such that qus is an indicator of whether this mutation occurs in the sth subclone. Each phylogenic tree that we enumerate in this paper is compatible with a set of allocations. Let k index each enumerated phylogenic tree, and let Qk denote a set of allocations of the kth phylogenic tree. For both simulation and real data analysis, we enumerated all phylogenic trees with one to five subclones. In simulation, given a phylogenic tree, each SPM was randomly assigned an allocation with equal probability. Subclone configurations. Examples of subclone configurations with subclone numbers ranging from 1 to 5. Nodes represent subclones, and vertices link the parental and descendant subclones To illustrate, a clonal sample (S=1) would have Q1=(q11), where q11=1 for all SPMs because each SPM is present in all cancer cells. For a sample with two subclones (S=2), only one possible tree A→B exists with a founding subclone A and a new subclone B. 
Then, the set of allocations are Q2=(q21,q22), where \(\boldsymbol {q}_{21}^{\mathrm {T}} = (1,1)\) and \(\boldsymbol {q}_{22}^{\mathrm {T}} = (0,1)\). The SPMs with allocation q21 arise in the founding subclone A, and the SPMs with allocation q22 arise in the new subclone B. For S=3, we need to distinguish between linear and branching trees. Let \(\boldsymbol {q}_{31}^{\mathrm {T}} = (1,1,1)\), \(\boldsymbol {q}_{32}^{\mathrm {T}} = (0,1,1)\), \(\boldsymbol {q}_{33}^{\mathrm {T}} = (0,0,1)\), and \(\boldsymbol {q}_{34}^{\mathrm {T}} = (0,1,0)\). The linear tree is characterized by Q3=(q31,q32,q33), whereas a branching tree is characterized by Q4=(q31,q33,q34). (See Additional file 1: Section C.2 for all enumerated configurations based on the list of subclonal assumptions.) For a clonal SPM located in a region of SCNA, we need to infer its multiplicity, or the number of mutant alleles. If the SPM occurs before the SCNA, its multiplicity is one of the two allele-specific copy numbers of the SCNA; otherwise, its multiplicity is 1. In contrast, based on our assumption that SCNAs are clonal, the multiplicity of a subclonal SPM is always 1. Let Ml be the set of possible multiplicities given the copy number states. Then, Ml={m|m>0 and m∈unique(1,Cl1,Cl2)}, where unique (Z) denotes the unique elements of Z. With S subclones, let ηs denote the proportion of cells in a tumor sample that belong to subclone s, and let ηT=(η1,…,ηS). Tumor samples derived from bulk tissues are practically never 100% pure, and hence, a proportion of normal cells will contaminate the sample. Let \(\phi = \sum _{s=1}^{S} \eta _{s}\) denote a tumor sample's purity. In addition, write 𝜗s=ηs/ϕ and 𝜗T=(𝜗1,…,𝜗S). The vector 𝜗 can be interpreted as the set of subclone proportions in the cancer cell population. To characterize ITH within a tumor sample, we utilize the notion of "entropy" or Shannon Index characterized by the expression $$E = -\sum_{s=1}^{S} \vartheta_{s} \log({\vartheta_{s}}), $$ which corrects for the normal contamination (ϕ) because normal cells in the tumor do not contribute to subclonal heterogeneity. This characterization states that more subclones generally lead to a greater degree of ITH and allows for two samples composed of an equal number of subclones to have different degrees of ITH. In addition, the largest possible entropy given S subclones is bounded above by log(S), corresponding to equal proportions of each subclone (𝜗s=1/S). Example: allocation, multiplicity, and cellular prevalence Here, we give a concrete example to explain the notation: allocation, multiplicity, and cellular prevalence. Suppose that a tumor sample is composed of three subclones forming a branching tree: B←A→C. The respective subclone proportions are denoted by ηA, ηB, and ηC. Thus, the sample purity is ϕ≡ηA+ηB+ηC, and possible cellular prevalences are (ηA+ηB+ηC)/ϕ=1, ηB/ϕ, and ηC/ϕ. Q4=(q31,q33,q34) characterizes three allocations to consider: q31 for clonal mutations; and q33 and q34 for subclonal mutations that only occur in subclones B and C, respectively. Suppose that each SPM has one of three copy number states with allele-specific copy numbers being (0,2), (1,1), or (1,3). For SPMs with copy number state (0,2), clonal mutations have multiplicity of 2 if they occur before SCNA and multiplicity of 1 if they occur after SCNA. For SPMs with state (1,1), all mutations (clonal or subclonal) have multiplicity of 1. 
For SPMs with state (1,3), clonal mutations have multiplicity of 1 or 3 if they occur before the SCNA and multiplicity of 1 if they occur after the SCNA. All combinations of allocation and multiplicity are listed in Table 1. Table 1 Enumerating combinations of allocation and multiplicity for each copy number state Modeling SPM read counts Recall that Al and Tl denote the alternative read depth and total read depth of the lth SPM. For a pre-specified tree structure and copy number estimates, we model Al given Tl by a mixture of binomial distributions across possible allocations and multiplicities. Next, we provide details to specify such mixture distributions. We assume that copy number states and tumor purity were estimated by another algorithm, e.g., ASCAT. For the lth SPM, denote its copy number state (i.e., allele-specific copy numbers) by Cl=(Cl1,Cl2). Suppose that there are altogether W unique copy number states: c1,..., cW. Given the wth copy number state, assume that there are Dw possible combinations of allocation and multiplicity, and denote the dth combination by ewd=(qd,mwd), where qd denotes the allocation that depends on the tree structure but not copy number states, and mwd denote the multiplicity that depends on copy number states. We also allow the estimation of proportion of variants unexplained by combinations of Ul and Ml following a discrete uniform distribution with proportion parameter denoted ε. The mixture proportions of the Dw combinations is denoted by \(\phantom {\dot {i}\!}\boldsymbol {\pi }_{w} = (\pi _{w1},\ldots,\pi _{wD_{w}})^{\mathrm {T}}\). Let Θ=(ε,𝜗,{πw}). Let Ul and Ml be the random variables for the latent allocation and multiplicity for the lth SPM, respectively, and let El=(Ul,Ml). Write Gl=(Tl,Cl,ϕ,Θ). For a single SPM, $$\begin{array}{@{}rcl@{}} && P \left(A_{l}|\boldsymbol{G}_{l}, C_{l} = c_{w}\right) \\ &=& \epsilon \frac{1}{T_{l}} + (1-\epsilon)\sum_{d=1}^{D_{w}} P \left(\boldsymbol{E}_{l}=\boldsymbol{e}_{wd},A_{l}|\boldsymbol{G}_{l}, C_{l} = c_{w}\right) \\ &=& \epsilon \frac{1}{T_{l}} + (1-\epsilon)\sum_{d=1}^{D_{w}} P \left(\boldsymbol{E}_{l}=\boldsymbol{e}_{wd}|\boldsymbol{G}_{l}, C_{l} = c_{w}\right) \\ && \quad \quad P (A_{l}|\boldsymbol{E}_{l}=\boldsymbol{e}_{wd},\boldsymbol{G}_{l}, C_{l} = c_{w})\\ &=& \epsilon \frac{1}{T_{l}} + (1-\epsilon)\sum_{d=1}^{D_{w}} \pi_{wd} P(A_{l}|\boldsymbol{E}_{l}=\boldsymbol{e}_{wd},\boldsymbol{G}_{l}, C_{l} = c_{w}), \end{array} $$ $$A_{l}|\boldsymbol{E}_{l}=\boldsymbol{e}_{wd},\boldsymbol{G}_{l} \sim \text{Binomial}(T_{l},p_{wd}), $$ and \(p_{wd} = \frac {m_{wd} \phi \boldsymbol {\vartheta }^{\mathrm {T}} \boldsymbol {q}_{d}}{(C_{l1}+C_{l2}) \phi + 2(1-\phi)}\). In the notation above, \(\boldsymbol {\vartheta }^{\mathrm {T}} \boldsymbol {q}_{d} = \sum _{s=1}^{S} \vartheta _{s} q_{ds}\) is the cellular prevalence of a SPM among the tumor's cancer cells. Given tumor purity and copy number states, in addition to a particular phylogenetic tree, the likelihood for L SPMs is proportional to $$\prod_{w=1}^{W} \prod_{l: C_{l} = c_{w}} P(A_{l}|\boldsymbol{G}_{l}, C_{l} = c_{w}). $$ Maximization of this likelihood is accomplished by introducing the pair of latent variables (Ul,Ml), writing the complete-data likelihood, and using an expectation-maximization algorithm, where each iteration of the M-step for πw has closed form updating equations, while 𝜗 is updated with the quasi-Newton Raphson method Broyden-Fletcher-Goldfarb-Shanno on the expected complete-data log-likelihood conditional on the observed data. 
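The mixture model just described translates directly into code. The sketch below evaluates, for a single SPM, the success probability p_wd and the observed-data probability P(A_l | T_l) as a mixture of the discrete-uniform "unexplained" component (weight ε) and binomial components over the allowed allocation/multiplicity pairs. It is a minimal illustration of the likelihood, not the SMASH implementation itself (which adds the EM updates and tree enumeration), and the toy configuration at the end is an assumption.

```python
# Sketch of the per-SPM observed-data probability used in the SMASH likelihood:
# P(A | T) = eps * 1/T + (1 - eps) * sum_d pi_d * Binomial(A; T, p_d),
# with p_d = m_d * phi * (theta . q_d) / ((C1 + C2) * phi + 2 * (1 - phi)).
import numpy as np
from scipy.stats import binom

def success_prob(multiplicity, allocation, theta, phi, c1, c2):
    """Expected alternate-allele fraction for one (allocation, multiplicity) pair."""
    cellular_prev = float(np.dot(theta, allocation))  # fraction of cancer cells carrying the SPM
    return multiplicity * phi * cellular_prev / ((c1 + c2) * phi + 2.0 * (1.0 - phi))

def spm_probability(alt, tot, combos, pi, theta, phi, c1, c2, eps=1e-3):
    """Mixture probability of observing `alt` alternate reads out of `tot` total reads.

    combos: list of (multiplicity, allocation_vector) pairs allowed by the
            copy number state and the tree configuration.
    pi:     mixture proportions over those pairs (sums to 1).
    theta:  subclone proportions among cancer cells (sums to 1).
    """
    probs = [success_prob(m, q, theta, phi, c1, c2) for m, q in combos]
    mix = sum(w * binom.pmf(alt, tot, p) for w, p in zip(pi, probs))
    return eps / tot + (1.0 - eps) * mix

# Toy example: two subclones (70% / 30% of cancer cells), purity 0.8,
# copy number state (1, 2), three allowed (multiplicity, allocation) pairs.
theta = np.array([0.7, 0.3])
combos = [(1, np.array([1, 1])),   # clonal, occurred after the SCNA
          (2, np.array([1, 1])),   # clonal, occurred before the SCNA on the amplified allele
          (1, np.array([0, 1]))]   # subclonal, present only in the second subclone
pi = [0.5, 0.2, 0.3]
print(spm_probability(alt=18, tot=60, combos=combos, pi=pi,
                      theta=theta, phi=0.8, c1=1, c2=2))
```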
In the presence of local optima for this observed mixture likelihood, multiple random initializations of 𝜗 are used, while we initialize πw by uniform distribution and ε=10−3. Inferring the optimal configuration is accomplished using the optimal BIC. Suppose that after running SMASH on L SPMs with every enumerated phylogenetic configuration and applying multiple runs of parameter initialization, we arrive at B models. For model b=1,…,B, let Lb, mb, BICb, Sb, and Eb denote the log likelihood, model size, BIC, number of subclones, and estimated entropy, respectively, evaluated at the maximum likelihood estimate \(\boldsymbol {\widehat {\Theta }}_{b} = (\widehat {{\epsilon }}_{b},\boldsymbol {{\widehat {\vartheta }}}_{b},\boldsymbol {{\widehat {\pi }}}_{b})\). Define BICb=2Lb−mb log(L); models with larger BIC are preferable to models with smaller BIC. We define the posterior probability of model b by $$p_{b} = \frac{\exp\left(0.5 \, BIC_{b}\right)}{\sum_{b^{'}=1}^{B} \exp\left(0.5 \, BIC_{b^{'}}\right)} $$ because BIC provides a large-sample approximation to the log posterior probability associated with the approximating model [25, 26]. Let p∗= maxb=1,…,B(pb). It is possible for two or more configurations to have the same BIC. Therefore, we explore two possible definitions of entropy. The first one is a simple average of entropies across all "optimal BIC-decided" models, referred to as "optimally inferred" entropy. The second one is a weighted average of entropies across all models, referred to as "weighted" entropy. These two entropy estimates are $$E_{o} = \sum_{b=1}^{B} \frac{I \, \, ({p_{b} = p^{*}}) p_{b}}{\sum_{b^{'}=1}^{B} I\,\,({p_{b^{'}}=p^{*}}) p_{b^{'}}} E_{b} $$ $$E_{w} = \sum_{b=1}^{B} \frac{p_{b}}{\sum_{b^{'}=1}^{B} p_{b^{'}}} E_{b}. $$ The summation incorporated into Eo accounts for the situation when various configurations or subclone proportions equally fit the observed data. SMASH is available as an R package integrating Rcpp [27] and RcppArmadillo [28]. The software and source code can be downloaded at https://github.com/Sun-lab/SMASH. Brief overview of SMASH, PyClone, and PhyloWGS We compared the performance of SMASH versus two popular and representative methods: PyClone [11] and PhyloWGS [9]. PyClone clusters somatic mutations based on their VAFs. From PyClone output (see Additional file 1: Table S1 for an example), one can estimate the number of subclones by the number of mutation clusters. However, to estimate subclone proportions from VAF clusters, we need to know the phylogenetic tree structure (see Additional file 1: Section C.1 for more details). Since PyClone does not estimate a phylogenetic tree, we cannot use PyClone to estimate subclone proportions and thus cannot estimate entropy that is a function of subclone proportions. Unlike PyClone, PhyloWGS was designed to estimate the underlying phylogenetic tree. SMASH is a frequentist method to infer ITH using a likelihood-based framework. SMASH and PyClone assume each subclone shares the same SCNA profile and that SCNAs and tumor purity have been estimated from an existing algorithm, e.g., ASCAT [5] or ABSOLUTE [29]. Unlike PyClone and PhyloWGS, SMASH explicitly enumerates all possible phylogenetic trees (up to k subclones, with default value of k=5) and quantifies the likelihood of each tree configuration (refer to the Additional file 1: Section C.2). 
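Returning to the BIC-based weights and the two entropy summaries defined above, the following sketch turns a set of fitted configurations (each with a BIC value and estimated subclone proportions) into the weights p_b, the optimally inferred entropy E_o, and the weighted entropy E_w. The data structure is illustrative, not the package's actual output format.

```python
# Sketch: BIC-based weights and the two entropy summaries (optimal vs weighted).
import numpy as np

def entropy(theta):
    """Shannon entropy of subclone proportions (normal contamination already removed)."""
    theta = np.asarray(theta, dtype=float)
    return float(-np.sum(theta * np.log(theta)))

def summarize_entropy(fits):
    """fits: list of dicts with keys 'bic' (2*logLik - m*log(L)) and 'theta'."""
    bic = np.array([f["bic"] for f in fits])
    ent = np.array([entropy(f["theta"]) for f in fits])
    # p_b is proportional to exp(0.5 * BIC_b); subtract the max BIC for numerical stability.
    w = np.exp(0.5 * (bic - bic.max()))
    p = w / w.sum()
    best = np.isclose(bic, bic.max())           # configurations tied for the best BIC
    e_opt = np.sum(p[best] * ent[best]) / np.sum(p[best])
    e_weighted = np.sum(p * ent)
    return e_opt, e_weighted

# Toy example: three candidate configurations.
fits = [{"bic": -310.2, "theta": [1.0]},            # single clone
        {"bic": -305.7, "theta": [0.6, 0.4]},       # two subclones
        {"bic": -306.1, "theta": [0.5, 0.3, 0.2]}]  # three subclones
print(summarize_entropy(fits))
```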
For each tree configuration, the model parameters are estimated by an EM algorithm that accounts for unobserved somatic mutation allocation across subclones and multiplicity (i.e., copy number of the mutated allele). We can select the optimal phylogenic tree configurations based on the Bayesian information criterion (BIC) and then calculate entropy based on the optimal configuration. Alternatively, to account for the uncertainty of ITH estimation, we can take a weighted summation of ITH entropies, where the weights are the probabilities of different configurations. To directly compare PyClone and SMASH, we constructed an indicator of high ITH, as done in Morris et al. [12], denoted by H, such that H = 1 when the number of subclones is greater than κ, a predefined integer threshold, and H = 0 otherwise. For SMASH, the number of subclones is estimated using the tree configuration with the best BIC. Because PhyloWGS provides estimates of subclone proportions, we can compare the performance of SMASH and PhyloWGS using both entropy and H. To simulate ITH variables, first enumerate the list of tree configurations from one to five subclones, sample the number of subclones denoted S. Then, sample among trees with S subclones with equal probability. Generate subclone proportions for S subclones, denoted as η=(η1,…,ηS)T. Simulate U=(U1,…,US)T, where Us is simulated from a uniform distribution defined on interval (−3,1). Then calculate \(\eta _{s} = \exp ({U_{s}})/[1 + \sum _{s'=1}^{S} \exp ({U_{s'}})]\). Tumor purity is \(\phi = \sum _{s=1}^{S} \eta _{s}\), and the subclone proportion for the sth subclone is 𝜗s=ηs/ϕ. Calculate entropy \(E = -\sum _{s=1}^{S} \vartheta _{s} \log (\vartheta _{s})\), as well as H=I(S>κ), where I is an indicator function and κ=3. These steps are repeated until the minimum underlying subclone proportion is greater than 0.05, and the minimum difference between the cellular prevalences of two subclones is greater than 0.05 to ensure clusters are separable. To simulate sequence read counts for the lth SPM given a phylogenic tree configuration, we simulated read depth Tl from a negative binomial distribution, sampled copy number state, and then sampled SPM multiplicity and allocation with equal probability. Finally, we generated the number of alternative reads from a binomial distribution. We randomly simulated 5 covariates Z=(Z1,…,Z5)T to resemble sex, age, and tumor stage indicators. see Additional file 1: Section A.1 for details. We simulated the first set of survival times conditional on linear terms Z and E (entropy) and the second set of survival times conditional on linear terms Z and H, both from the Cox proportional hazards model with a constant baseline hazard: $$\begin{array}{rcl} \lambda(t|E,\boldsymbol{Z}) =& \lambda_{0}(t) \exp\left(\beta_{E} E + \boldsymbol{\gamma}_{Z}^{\mathrm{T}} \boldsymbol{Z}\right), & \text{or} \\ \lambda(t|H,\boldsymbol{Z}) =& \lambda_{0}(t) \exp\left(\beta_{H} H + \boldsymbol{\gamma}_{Z}^{\mathrm{T}} \boldsymbol{Z}\right) &\end{array} $$ where λ0(t)=λ0= exp(−7.0), βH=βE=0.5, and \(\boldsymbol {\gamma }_{Z}^{\mathrm {T}} = (0.55, 0.15, 0.8, 1.7, 2.7)\). Censoring times were simulated from the continuous uniform distribution \(\tilde {T} \sim U(0,\tau)\), and the value of τ was tuned to generate the desired proportion of censored subjects. 
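The simulation recipe above (subclone proportions from a softmax of Uniform(−3, 1) draws, entropy and the high-ITH indicator, then survival times from a Cox model with constant baseline hazard and uniform censoring) can be sketched as follows. Parameter values mirror those stated in the text, but the covariates Z, the tree-sampling step, and the separability checks are omitted, so this is an illustration rather than the authors' simulation code.

```python
# Sketch of the data-generating process used in the simulations.
import numpy as np

rng = np.random.default_rng(1)

def simulate_ith(max_subclones=5, kappa=3):
    """Subclone proportions via a softmax of Uniform(-3, 1) draws; returns (E, H, phi)."""
    s = rng.integers(1, max_subclones + 1)            # number of subclones
    u = rng.uniform(-3.0, 1.0, size=s)
    eta = np.exp(u) / (1.0 + np.exp(u).sum())         # subclone proportions in the sample
    phi = eta.sum()                                   # tumor purity
    theta = eta / phi                                 # proportions among cancer cells
    entropy = float(-np.sum(theta * np.log(theta)))
    high_ith = int(s > kappa)
    return entropy, high_ith, phi

def simulate_survival(lin_pred, lam0=np.exp(-7.0), tau=5000.0):
    """Cox model with constant baseline hazard: T = -log(U) / (lam0 * exp(lin_pred))."""
    t = -np.log(rng.uniform(size=lin_pred.shape)) / (lam0 * np.exp(lin_pred))
    c = rng.uniform(0.0, tau, size=lin_pred.shape)    # censoring times; tau tunes the censoring rate
    return np.minimum(t, c), (t <= c).astype(int)     # observed time X and event indicator Delta

# Toy example: n subjects with entropy as the only covariate (beta_E = 0.5).
n, beta_e = 400, 0.5
ith = np.array([simulate_ith() for _ in range(n)])
entropy = ith[:, 0]
time, event = simulate_survival(beta_e * entropy)
print(f"censoring rate ~ {1 - event.mean():.2f}")
```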
We considered 18 simulation setups, with three censoring rates (20%, 50%, and 70%), three sequencing depths from the negative binomial (parameter values μ = 100, 500, and 1000 and δ = 2), and two samples sizes (N= 400 and 800). For each ITH method, we applied an extra filtering criterion that each subclone includes at least two mutations that are not part of its parental subclone. PyClone output contains the cellular prevalence for all SPMs, and the SPMs assigned to the same cluster have the same cellular prevalence. Following Morris et al. [12], we removed clusters with only one SPM. Additional file 1: Table S1 provides an example of pre-filtered PyClone output with multiple clusters composed of one SPM. Output of SMASH includes the ITH estimates for each tree configuration (i.e., number of subclones, subclone proportions, and mutations belonging to each subclone) (refer to Additional file 1: Table S2 for a pre-filtered example). We removed configurations where at least one subclone has only one SPM. Similarly, in PhyloWGS, sampled trees with at least one subclone with only one SPM were excluded. We used the simulated data to compare the results from five methods: PyClone, PhyloWGS using the optimal tree configuration, SMASH using the configuration with best BIC or weighted summation of entropy/number of subclones, and the ideal situation where true values of entropy or number of subclones are given. Each of the methods was run in two model setups, with the ITH variable being entropy E or indicator of high number of subclones H. In other words, when the true model contains E, we compared the models using E or H, as shown in Fig. 3. Results for when the true model contains H as well as the results for the standard errors of the parameter estimates and coverage probabilities under both models are presented in Additional file 1: Section A.3. ITH simulation results when the true model contains E. The x-axis denotes the mean sequencing depth. The y-axis denotes the bias of parameter estimates of regression coefficients (βE or βH) and power at α=0.05. Dotted lines denote the bias/power when ITH is known and serve as a benchmark against the estimated ITH metric. H is estimated by PhyloWGS (PhyloWGS(H)), PyClone (PyClone(H)), and SMASH (SMASH(H)). E is estimated by PhyloWGS's optimal tree (PhyloWGS(oE)), SMASH's optimal entropy (SMASH(oE)), and SMASH's weighted entropy (SMASH(wE)) Regardless of the ITH variable used, the bias of parameter estimates remains similar for sample sizes of 400 or 800, and as expected, power increases with sample size (Fig. 3). Given the sample size, bias decreases and power increases as sequence depth increases or censoring rate decreases. Comparing the two ITH metrics, E or H, the entropy metric has lower bias and higher power. The difference in performance between these two ITH metrics decreases as sequencing depth increases. As mentioned, PyClone's result does not allow us to calculate entropy. Therefore, we compared the performance of PyClone, PhyloWGS, and SMASH using the indicator metric H. At an average sequencing depth of 100 ×, SMASH has similar or slightly better performance than PyClone or PhyloWGS, in terms of bias and power. At average depths of 500 × or 1000 ×, SMASH shows much better performance than both PyClone and PhyloWGS (Fig. 3). SMASH demonstrates better performance than PyClone or PhyloWGS when inferring the number of subclones (Fig. 4 and Additional file 1: Figure S3). 
We calculated the Spearman correlation between the estimated number of subclones and the true number of subclones across 800 samples for each of 250 replicates. The median Spearman correlations from SMASH are consistently higher than those from PyClone and PhyloWGS, except for the comparison with PhyloWGS at read depth 100, in which case PhyloWGS performs slightly better. As read depth increases, the advantage of SMASH against other methods becomes more apparent, which is consistent with their relative performance in association studies (Fig. 3). Comparing PhyloWGS and PyClone, PhyloWGS performs better in terms of capturing the relative order of subclone number, reflected by the Spearman correlation comparison (Fig. 4), but PyClone performs better in terms of estimating the number of subclones (Additional file 1: Figure S3). ITH simulation, inferring the optimal number of subclones and entropy. The left plot pertains to Spearman correlations between the true and inferred number of subclones across simulated replicates as a function of sequencing depth and ITH method. The number of subclones are estimated by PyClone (PyClone), PhyloWGS (PhyloWGS), and SMASH using optimal BIC (SMASH(oS)). The right plot pertains to Spearman correlations between the true and estimated entropy using the optimal tree from PhyloWGS (PhyloWGS), optimally inferred entropy from SMASH (SMASH(oE)), and weighted entropy from SMASH (SMASH(wE)) When we simulated data using entropy as the ITH metric, as expected, models fit using entropy had higher power and lower bias (Fig. 3). However, even when we simulated the data using H, the results using entropy were still better when read depth is low. When read depth is high (e.g., 500 × or 1000 ×), using the estimate of H as the ITH variable gives better results, although the difference between using entropy and H is often not large (Additional file 1: Figure S2). Another important comparison is whether weighted entropy, which incorporates uncertainty across all fitted configurations, has better performance than entropy from optimal configurations. Weighted entropy does provide more accurate estimation of true entropy than the optimal entropy (Fig. 4). However, in terms of association estimation, the two approaches have similar performance (Fig. 3). Optimal entropy tends to underestimate the association, while weighted entropy tends to overestimate the association, although the biases are small. In terms of power, both entropies appear to perform equally well. Both weighted and optimal entropies from SMASH are more accurate estimates of the true entropy than the estimate from PhyloWGS's optimal tree. In our simulation studies, the vast majority of computational time was spent on ITH inference. On average with 100 mutations, SMASH ran in less than 5 min for ITH inference. In contrast, PyClone and PhyloWGS had run-times ranging from just under 10 min to over 90 min. Additional file 1: Figure S4 presents a summary of computational run-time. Among the three methods with default settings, the order of computational time is SMASH < PyClone < PhyloWGS. Subclonal SCNA simulation The previous simulation setup assumed SCNAs are clonal. In Additional file 1: Section A.6, we describe the simulation details to allow for subclonal SCNAs. In this analysis, we treated SCNAs as clonal and calculated the copy number by rounding the weighted average of copy numbers across subclones to the nearest integer. 
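A sketch of the evaluation just described, the Spearman correlation between true and inferred subclone numbers computed per replicate and summarized by the median, is given below; the placeholder arrays stand in for the simulation output.

```python
# Sketch: per-replicate Spearman correlation between true and inferred
# numbers of subclones, summarized by the median across replicates.
import numpy as np
from scipy.stats import spearmanr

def median_spearman(true_counts, inferred_counts):
    """Both inputs: arrays of shape (n_replicates, n_samples)."""
    rhos = [spearmanr(t, e)[0]           # [0] is the correlation coefficient
            for t, e in zip(true_counts, inferred_counts)]
    return float(np.median(rhos))

# Toy example with 250 replicates of 800 samples each (random placeholder data).
rng = np.random.default_rng(0)
truth = rng.integers(1, 6, size=(250, 800))
noisy = np.clip(truth + rng.integers(-1, 2, size=truth.shape), 1, 5)
print(median_spearman(truth, noisy))
```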
As described in Additional file 1: Section A.5, we simulated copy number scenarios 1 and 2 to mimic two patterns of SCNA abundance in real data. When the true model contains E, we compared the results of 6 methods, dichotomized indicator H estimated from Pyclone, PhyloWGS, and SMASH, and entropy estimated from PhyloWGS, SMASH with optimal configuration or weighted average (Additional file 1: Figure S8). All three methods using entropy E have similar performances and perform much better than the three methods using the dichotomized indicator H. Coverage probability was maintained at 95% for E estimates but not for H estimates. There were no clear differences in performance between both copy number scenarios. When the true model contains H, magnitudes of association bias using E estimates are generally less than those of H estimates (Additional file 1: Figure S9). Therefore, the overall results were consistent with the earlier simulation setup without subclonal SCNAs: using entropy is preferred even if the true model is based on H, and entropy from SMASH and PhyloWGS have similar performance at 100 × read depth. Preprocessing pipeline We downloaded SPM calls by MuTect2 from NCI's Genomic Database Commons (GDC) [21, 30]. To derive SCNA data, we processed controlled-access SNP Array 6.0 CEL files corresponding to primary tumors, along with their paired blood-derived normal or solid tissue normal. Specifically, we applied a pipeline involving Birdseed, PennCNV [31], and ASCAT v2.4 [5] to obtain estimates of tumor purity, ploidy, and inferred copy number states. The complete data workflow is shown in Additional file 1: Figure S10. We downloaded SPM and SCNA data on 5898 tumor samples from 14 TCGA cancer types (Additional file 1: Table S3). Before running PyClone, PhyloWGS, and SMASH, we applied a set of filters to the SPM data by retaining the base substitution SPMs that are located along autosomes and have at least seven reads supporting the alternative allele. Also, those SPMS with inferred total copy number of zero were excluded. Then, we passed the formatted SPM and SCNA data to PyClone, PhyloWGS, and SMASH for ITH inference. After running all three ITH methods, we applied the "at least two mutations per subclone/cluster" criterion that was used in the simulation. Somatic mutation landscape varies across cancer types We first summarized tumor purity, ploidy, and somatic mutation rate for each tumor type (Fig. 5). The relative ordering of tumor types by mutation rate is consistent with the results reported in an earlier study [32]. Those cancer types with lower mutation rate (e.g., PRAD, LGG, BRCA, KIRC, GBM, and OV) tend to have more subclonal mutations (top panel of Fig. 5). In all cancer types except OV, more than 50% of somatic mutations are clonal (with cellular prevalence larger than 99%) (Additional file 1: Figure S19). Ovarian cancer appeared to be an outlier with the larger number of subclones. This may be partly due to batch effects. The ovarian cancer samples used whole genome amplification (WGA) before DNA sequencing that may have reduced the quality of DNA samples [33, 34]. On the other hand, some previous work did show a high level of ITH in ovarian cancers [35–37]. Blagden [36] mentioned that the phylogenetic tree of ovarian cancer "has a short trunk and many branches, representing early clonal expansion and high genomic instability." This was consistent with our finding that ovarian cancer has higher levels of ITH. 
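Returning briefly to the preprocessing steps described above, the sketch below shows the SPM filters (autosomal base substitutions, at least seven alternate reads, non-zero total copy number) and the collapse of a subclonal copy number to a clonal value by rounding the proportion-weighted average. The column names are assumptions about the formatted SPM table, not the pipeline's actual schema.

```python
# Two small helpers for steps described above. Column names are assumptions
# about the formatted SPM table, not the actual pipeline's schema.
import numpy as np
import pandas as pd

def filter_spms(spm: pd.DataFrame) -> pd.DataFrame:
    """Keep autosomal base substitutions with >= 7 alternate reads and
    a non-zero inferred total copy number."""
    autosomes = {f"chr{i}" for i in range(1, 23)} | {str(i) for i in range(1, 23)}
    keep = (spm["chrom"].astype(str).isin(autosomes)
            & (spm["variant_type"] == "SNP")
            & (spm["alt_count"] >= 7)
            & ((spm["copy_major"] + spm["copy_minor"]) > 0))
    return spm.loc[keep]

def collapse_to_clonal(copy_numbers, proportions):
    """Treat a subclonal SCNA as clonal by rounding the proportion-weighted
    average copy number to the nearest integer."""
    return int(np.rint(np.average(copy_numbers, weights=proportions)))

# Example: a region with copy number 2 in 70% of cancer cells and 4 in 30%.
print(collapse_to_clonal([2, 4], [0.7, 0.3]))   # 2.6 rounds to 3
```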
The ploidy values of most cancer types tended to cluster around 2 and 4 (genome-wide duplication). This clustering pattern was less clear for BRCA, suggesting a greater degree of SCNA in BRCA. Mutation rate, purity, ploidy, and proportion of clonal mutation summary: mutation load per megabase within the whole exome and ASCAT-derived purity and ploidy across 14 cancer types ordered by a median mutation rate. Violin plots in magenta contain nested boxplots with the median represented by the black box. The top panel shows the distribution of the proportion of inferred clonal mutations across all samples for each cancer type. The cancer types are bladder urothelial carcinoma (BLCA), breast invasive carcinoma (BRCA), colon adenocarcinoma (COAD), glioblastoma multiforme (GBM), head/neck squamous cell carcinoma (HNSC), kidney renal clear cell carcinoma (KIRC), lower-grade glioma (LGG), liver hepatocellular carcinoma(LIHC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), ovarian serous cystadenocarcinoma (OV), prostate adenocarcinoma (PRAD), skin cutaneous melanoma (SKCM), and stomach adenocarcinoma (STAD) We examined the cellular prevalence of 49 genes that are among the top 10 mutated genes for at least one of the 14 cancer types (Additional file 1: Figure S21). Similar to our approach to calculate weighted entropy (refer to the "Methods" section), each mutation's cellular prevalence was calculated as the weighted average across the sample's ITH configurations. A gene's cellular prevalence was calculated as the average cellular prevalence of all mutations on that gene across all samples. TP53 mutations have average cellular prevalences near 1.0 for all cancer types except KIRC, which was the same observation made by Morris et al. [12]. IDH1 mutations were subclonal in GBM and clonal in LGG and SKCM. VHL was uniquely called in KIRC, with a cellular prevalence of 1.0. Except for TP53, the remaining 48 genes have relatively low cellular frequency in OV. This was consistent with the results of an earlier study of 31 ovarian tumor samples from six patients, and they found TP53 was the only gene mutated in all samples, and other known tumor driver genes may be mutated in some but not all samples of a patient [38]. Hierarchical clustering was performed on the 49 genes and 14 cancer types. At least two clusters of cancer types and at least two clusters of genes were apparent. LGG, KIRC, and PRAD form one cluster of cancer types without many mutations on these 49 genes. The number of subclones by tumor type and ITH method are summarized in Additional file 1: Figure S14. Across all cancers, SMASH consistently identified more subclones than PyClone. Between SMASH and PhyloWGS, the resulting number of subclones was very similar for all tumor types except for OV. PyClone was run on two independent Markov chains on each tumor sample using its default setup with 20,000 MCMC samples drawn, 1000 burnin and retaining every tenth sample with all default prior hyperparameters. PhyloWGS also was run twice but with default arguments. There were slight inconsistencies from the results of the two runs (Additional file 1: Tables S4 and S5). In the next section on association analysis, we used the first run of results from PyClone and PhyloWGS. Baseline covariates and variable selection The common set of baseline covariates included age at diagnosis, gender, pathological tumor stage, tumor mutation burden (total number of point mutations, TMB), and genome-wide SCNA burden. 
Specifically, we define genome-wide SCNA burden as $$\sum_{k} \frac{L_{k}}{\sum_{k^{'}} L_{k^{'}}} \left[\left|C^{A}_{k} - 1\right| + \left|C^{B}_{k} - 1\right|\right], $$ where k indexes genome segments, Lk is the length of the kth segment, and \((C^{A}_{k},C^{B}_{k})\) are the segmental clonal copy numbers of the minor and major alleles, respectively. The SCNA burden can be interpreted as the distance between the normal and cancer genomes, in terms of copy number. Both TMB and SCNA burden were binned into three equal groups using the 33rd and 66th quantiles as cutoffs. We investigated possible non-linear forms of entropy (e.g., dichotomized entropy, polynomial transformation, or log transformation) and the validity of the proportional hazard assumption using R functions fcov() and prop() from R package goftte [39, 40]. Our analysis suggested that the simple linear form of entropy is appropriate. Since our simulation studies showed that the weighted entropy provides better estimates of the true entropy than the optimal entropy (Fig. 4), we chose to conduct the following analysis using weighted entropy. In addition to baseline covariates, additional covariates to include in each tumor type's full model were carefully selected. The top four frequently mutated genes were included. Other tumor type-specific covariates were histological subtype for BLCA (papillary vs. non-papillary), PAM50 subtype for BRCA [41] (Basal, Her2, LumA, or LumB, and the normal-like subtype was removed due to its small sample size), tumor grade for KIRC, IDH/CNA status for LGG (IDH wild-type, IDH mutant without chr1p and 19q co-deletion, IDH mutant with chr1p and 19q co-deletion), and Gleason score and PSA level for PRAD. We also considered the pairwise interactions of all baseline covariates with weighted entropy. The final model for each tumor type was selected based on step-wise model fitting and assessed with Akaike information criterion (AIC). When the final model contained pairwise interactions involving entropy, then the interactions were retained if their minimum p value was less than or equal to 0.02. Otherwise, the interaction was removed, and our variable selection was re-run without the interaction term. When the final model excluded entropy, it was added back in the final step. TMB and ITH are associated with survival time in multiple cancer types In the PRAD cohort, because very few deaths were observed, we only analyzed progression-free survival (PFS). For all other cancer types, we studied both overall survival (OS) and PFS. We used a p value cutoff of 0.05 to define statistical significance. For OS, entropy or its interaction with other variables were statistically significant in the final model for 6 of 14 cancer types: BRCA, COAD, HNSC, KIRC, LIHC, LUSC (Fig. 6). Total mutation burden (TMB) was statistically significant for 7 cancer types: BLCA, COAD, GBM, LGG, LUAD, OV, and STAD (Additional file 1: Figure S17). SCNA burden (SCNAB) was statistically significant for LGG and SKCM (Additional file 1: Table S6–S19). Significant associations between gene-level mutation status and OS include TP53 for BLCA, GBM, HNSC, LIHC, LUSC and STAD, TTN for GBM and LUSC, and MUC16 for SKCM (Additional file 1: Table S6–S19). Comparing p values of all the ITH-related variables across tumor types. For each cancer type, we assessed the association between ITH and survival time by comparing the final model to the reduced model obtained by excluding all ITH-related variables. 
The horizontal line indicates the p value cutoff 0.05. H(W), H(P), H(S) denote the indicator for three or more subclones from PhyloWGS, PyClone, and SMASH, respectively. E(W) and E(S) denote entropy from PhyloWGS and SMASH, respectively In addition to these somatic mutation-based predictors, age at diagnosis was statistically significant for all tumor types except LIHC and LUAD. Sex was statistically significant for GBM, HNSC, and LIHC. All GBM tumors are stage IV. Among all other cancer types, tumor stage was associated with overall survival except for LGG and OV. Other tumor type-specific covariates associated with OS include PAM50 for BRCA, tumor grade for KIRC, and IDH/CNV status for LGG (Additional file 1: Table S6–S19). The model fits for PFS were similar to the ones for OS for most cancer types. For GBM, KIRC, LUSC, OV, SKCM, and STAD, the final model for PFS was the same as the final model for OS survival. Covariates present in one model but not in the other model were highlighted in Additional file 1: Table S6–S19. We also reported the results when replacing SMASH's weighted entropy (E(S)) with PhyloWGS's entropy (E(W)), the dichotomized number of subclones from SMASH (H(S)), PyClone (H(P)), and PhyloWGS (H(W)) (Fig. 6 and Additional file 1: Table S6–S19). H(S), H(P), and H(W) were constructed as indicators of 3 or more subclones. This cutoff was chosen so that there were enough samples with non-censored survival time in the high ITH group. Overall, the associations we detected by H(S), H(P), or H(W) were consistent with the results by E(S) and E(W), and the p values by E(S) tended to be smaller. An exception was in STAD, where H(S) identified significant associations for both OS and PFS that were missed by H(P), H(W), E(W), and E(S). Our results bring new insights that have not been reported by previous studies [1, 12]. Andor et al. [1] studied 1165 samples of 12 cancer types. They found significant association between ITH (the number of subclones) and survival time in only one cancer type: gliomas (combining two types of cancer from LGG and GBM). Morris et al. [12] studied 3300 tumor samples in 9 cancer types. They used dichotomized number of subclones as ITH measurement (# of subclone >4 for most cancer types), which is very unstable because few samples had more than 4 subclones. They found significant associations between ITH and survival time in 5 out of 9 cancer types: BRCA, HNSC, KIRC, LGG, and PRAD. They also added mutation burden into the Cox model for these five cancer types and found mutation burden was not significant in all five cancer types. We have 5898 TCGA tumor samples from 14 cancer types. We considered both dichotomized number of subclones and entropy as measurements of ITH. While Morris et al. did not find mutation burden to be informative for prognosis, we found it is significantly associated with survival time (or marginally significant) in 7 of the 14 cancer types. What is truly new in our findings is that we consider both ITH measurement and its interaction with other covariates, such as mutation burden, tumor stage, and mutation status of a particular gene. We found a considerable amount of heterogeneity for the results across cancer types. Quantification of ITH We considered two ITH metrics: entropy and indicator for high number of subclones. When we simulated survival time given entropy, as expected, using entropy instead of the indicator as the ITH metric led to better performance in association analysis (Fig. 3). 
Interestingly, when we simulated survival time given the indicator, the model with entropy has either higher power (when read depth is 100) or comparable power (when read depth is 500 or 1000) (Additional file 1: Figure S2). In real data analysis, using entropy as the ITH metric also led to more discoveries. Therefore, we recommend using entropy as an ITH metric in association studies. One reason for entropy delivering better results is that, as a continuous variable, entropy is more robust to noise in ITH inference. Specifically, the addition or deletion of a subclone with small cellular proportion may change entropy slightly but may change the indicator variable from 0 to 1. In addition, some information about the degree of ITH is lost when dichotomizing the number of subclones. Of course, an intermediate choice is to use the number of subclones. As shown in Additional file 1: Figure S13, entropy was highly associated with the number of subclones and provides a more refined quantification for samples with the same number of subclones. Another question that we sought to answer was whether it was beneficial to incorporate the uncertainty of ITH inference in association analysis. Towards this end, we studied two versions of entropy from SMASH, the optimal entropy derived from the mean entropy of the tree configurations with optimal BIC versus the weighted entropy across all estimated tree configurations. The weighted entropy has slightly higher correlation with the true entropy than with the optimal entropy, although these two quantities have similar power to detect associations. Study design for future ITH studies Our simulation results suggested that when using entropy as the ITH metric, more power was gained by increasing the sample size from 400 to 800 than by increasing the read depth from 100 to 500 or even 1000 (Fig. 3, Additional file 1: Figure S2). In contrast, when using the indicator H as an ITH metric, increasing read depth can also bring some relatively large power gains (Fig. 3, Additional file 1: Figure S2). One issue that warrants future study is the benefit of having multiple tumor samples per patient. ITH measurement may be affected by somatic mutation calling accuracy. A previous study [42] showed that the sensitivity of somatic mutation calling is around 0.8–0.9, and the number of false positive mutation calls is around 30 mutations for the whole exome using mutation callers such as Strelka or Mutect. We can further reduce the number of false positives by taking the intersection of mutation calls from multiple callers, with the trade off to reduce sensitivity of mutation calls. Our method is robust to low sensitivity of mutation calls because we use cellular frequency of subclones to estimate entropy, and if, for example, 6 of 10 mutations of a subclone are called, we can still use these 6 mutations to estimate subclone cellular frequency. Therefore, if one suspects a high proportion of false positive mutation calls, one strategy is to restrict the analysis to the mutations called by more than one caller. Association between survival time and ITH or TMB In most cancer types, when TMB is included in the final model, it is negatively associated with hazard, and thus higher mutation burden leads to longer survival time (Additional file 1: Figure S18). This may be explained by the observation that tumors with higher TMB are more likely recognized and attacked by the immune system [43]. However, higher TMB is associated with worse survival time in LGG. 
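Concretely, the kind of model behind these statements, a Cox regression with TMB tertiles, entropy, and an entropy-by-TMB interaction, together with the final-versus-reduced comparison used for Fig. 6, can be sketched as a likelihood-ratio test between two fitted models. The Python lifelines package is used here purely for illustration (the original analysis was presumably carried out with standard survival software in R), and all column names and the toy data are assumptions.

```python
# Sketch: tertile binning of mutation burden and a likelihood-ratio test (LRT)
# comparing a Cox model with ITH-related terms against the reduced model without them.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import chi2

def tertile_bin(x):
    """Bin a continuous burden measure into three equal groups (33rd/66th percentiles)."""
    return pd.qcut(x, q=3, labels=False)  # 0 = low, 1 = mid, 2 = high

def ith_lrt(df, ith_cols, base_cols, time_col="time", event_col="event"):
    """LRT p-value for dropping all ITH-related columns from the Cox model."""
    full = CoxPHFitter().fit(df[[time_col, event_col] + base_cols + ith_cols],
                             duration_col=time_col, event_col=event_col)
    reduced = CoxPHFitter().fit(df[[time_col, event_col] + base_cols],
                                duration_col=time_col, event_col=event_col)
    stat = 2.0 * (full.log_likelihood_ - reduced.log_likelihood_)
    return chi2.sf(stat, df=len(ith_cols))

# Toy data: entropy, TMB tertile, their interaction, and age as covariates.
rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({"age": rng.normal(60, 10, n),
                   "tmb": rng.lognormal(1.0, 0.8, n),
                   "entropy": rng.uniform(0, 1.6, n)})
df["tmb_grp"] = tertile_bin(df["tmb"]).astype(float)
df["entropy_x_tmb"] = df["entropy"] * df["tmb_grp"]
risk = 0.02 * df["age"] + 0.4 * df["entropy"] + 0.2 * df["entropy_x_tmb"]
df["time"] = rng.exponential(1.0 / (np.exp(-5) * np.exp(risk)))
df["event"] = (df["time"] < rng.uniform(0, 800, n)).astype(int)
print(ith_lrt(df, ith_cols=["entropy", "entropy_x_tmb"], base_cols=["age", "tmb_grp"]))
```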
TMB is positively associated with entropy measurement of ITH, although the correlation is not strong enough to create any concerns with co-linearity when using both variables in a model (Additional file 1: Figure S16). We also observed interactions between TMB and ITH for both OS and PFS in COAD and LUSC. In both cases, association between survival time and entropy is not significant when TMB is low. However, higher entropy is associated with worse survival time when TMB is high. In LUSC, we also observed interaction between entropy and TP53 mutation. When TP53 is mutated, higher entropy is associated with longer survival time for both OS and PFS (Additional file 1: Figure S20). These results suggest that the effect of ITH on survival time may depend on other factors. Our analyses have some limitations. One limitation is the assumption of clonal SCNA. Employing this assumption allows us to use copy number calls from mature and widely used methods such as ASCAT or ABSOLUTE and to maintain high computational efficiency. However, this assumption also risks classifying SPMs in subclonal SCNA regions as SPMs from a new subclone. This risk may not bias the entropy estimate because a new subclone with subclonal SCNA is captured by SPMs. As shown in two simulation settings with subclonal copy number, SMASH has similar performance as PhyloWGS's when there are high levels of subclonal SCNAs. Another limitation, shared by all methods for inferring ITH from SPMs, is that we cannot distinguish two subclones whose somatic mutations have very similar cellular prevalence. For example, in Fig. 1, the mutations from subclones A and B have very similar cellular prevalence and hence cannot be distinguished. However, this is a limitation of the input data rather than the methodology. This limitation can be overcome if multiple samples per patient are available. The infinite site assumption may be considered too strong an assumption. One study demonstrated possible evidence of recurrent mutations in their single-cell sequencing data [44]. Conceptually, if mutations were recurrent, somatic mutations from bulk sequencing could not be utilized for modeling multiplicity and somatic inheritance among subclones. Therefore, ITH inference and association analyses could only be conducted with single-cell sequencing to better infer cellular multiplicities. Though, if only a handful of mutations were recurrent and at a small fraction of cells, their inferred cellular prevalence may slightly decrease relative to an identical non-recurrent mutation, leading to a biased subclone proportion estimate. This entropy estimate could be treated as being an extra "noisy" estimate. But as long as this biased estimate correlates with the underlying entropy, there may still be power to detect the association between entropy and clinical outcomes. We have conducted a pan-cancer analysis to study the associations between somatic mutations and survival time in 14 cancer types. Several types of somatic mutation features are included in our analysis, including mutation burden, copy number alteration burden, mutation status of a few frequently mutated genes, and intra-tumor heterogeneity (ITH) inferred by our method SMASH. We conclude that using entropy instead of high ITH indicator as the ITH metric leads to higher power in association analysis. The effect of ITH may depend on other somatic mutation features such as mutation burden. Accounting for the uncertainty of ITH inference has some but limited benefit. 
To improve the power for association analysis, it is much more effective to increase the sample size than generating more reads per sample. ASCAT: Allele-specific copy number analysis of tumors ITH: Intra-tumor heterogeneity MATH: Mutant-allele tumor heterogeneity SCNA: Somatic copy number alterations SMASH: Subclone multiplicity allocation and somatic heterogeneity SPMs: Somatic point mutations TCGA: The Cancer Genome Atlas TMB: Tumor mutation burden VAF: Variant allele frequencies Andor N, Graham TA, Jansen M, Xia LC, Aktipis CA, Petritsch C, Ji HP, Maley CC. Pan-cancer analysis of the extent and consequences of intratumor heterogeneity. Nat Med. 2016; 22(1):105–13. McGranahan N, Furness AJ, Rosenthal R, Ramskov S, Lyngaa R, Saini SK, Jamal-Hanjani M, Wilson GA, Birkbak NJ, Hiley CT, et a.l. Clonal neoantigens elicit t cell immunoreactivity and sensitivity to immune checkpoint blockade. Science. 2016; 351(6280):1463–9. Goodman AM, Kato S, Bazhenova L, Patel SP, Frampton GM, Miller V, Stephens PJ, Daniels GA, Kurzrock R. Tumor mutational burden as an independent predictor of response to immunotherapy in diverse cancers. Mol Cancer Ther. 2017; 16(11):2598–2608. Campbell BB, Light N, Fabrizio D, Zatzman M, Fuligni F, de Borja R, Davidson S, Edwards M, Elvin JA, Hodel KP, et al.Comprehensive analysis of hypermutation in human cancer. Cell. 2017; 171(5):1042–56. Van Loo P, Nordgard SH, Lingjærde OC, Russnes HG, Rye IH, Sun W, Weigman VJ, Marynen P, Zetterberg A, Naume B, et al.Allele-specific copy number analysis of tumors. Proc Natl Acad Sci. 2010; 107(39):16910–5. El-Kebir M, Satas G, Oesper L, Raphael BJ. Inferring the mutational history of a tumor using multi-state perfect phylogeny mixtures. Cell Sys. 2016; 3(1):43–53. Jiao W, Vembu S, Deshwar AG, Stein L, Morris Q. Inferring clonal evolution of tumors from single nucleotide somatic mutations. BMC Bioinformatics. 2014; 15(1):35. Zare H, Wang J, Hu A, Weber K, Smith J, Nickerson D, Song C, Witten D, Blau CA, Noble WS. Inferring clonal composition from multiple sections of a breast cancer. PLoS Comput Biol. 2014; 10(7):1003703. Deshwar AG, Vembu S, Yung CK, Jang GH, Stein L, Morris Q. Phylowgs: reconstructing subclonal composition and evolution from whole-genome sequencing of tumors. Genome Biol. 2015; 16(1):35. Jiang Y, Qiu Y, Minn AJ, Zhang NR. Assessing intratumor heterogeneity and tracking longitudinal and spatial clonal evolutionary history by next-generation sequencing. Proc Natl Acad Sci. 2016; 113(37):5528–37. Roth A, Khattra J, Yap D, Wan A, Laks E, Biele J, Ha G, Aparicio S, Bouchard-Côté A, Shah SP. Pyclone: statistical inference of clonal population structure in cancer. Nat Methods. 2014; 11(4):396–8. Morris LG, Riaz N, Desrichard A, Şenbabaoğlu Y, Hakimi AA, Makarov V, Reis-Filho JS, Chan TA. Pan-cancer analysis of intratumor heterogeneity as a prognostic determinant of survival. Oncotarget. 2016; 7(9):10051. Mroz EA, Rocco JW. Math, a novel measure of intratumor genetic heterogeneity, is high in poor-outcome classes of head and neck squamous cell carcinoma. Oral Oncol. 2013; 49(3):211–5. Miller CA, White BS, Dees ND, Griffith M, Welch JS, Griffith OL, Vij R, Tomasson MH, Graubert TA, Walter MJ, et al.Sciclone: inferring clonal architecture and tracking the spatial and temporal patterns of tumor evolution. PLoS Comput Biol. 2014; 10(8):1003665. Popic V, Salari R, Hajirasouliha I, Kashef-Haghighi D, West RB, Batzoglou S. Fast and scalable inference of multi-sample cancer lineages. Genome Biol. 2015; 16(1):91. 
Hajirasouliha I, Mahmoody A, Raphael BJ. A combinatorial approach for analyzing intra-tumor heterogeneity from high-throughput sequencing data. Bioinformatics. 2014; 30(12):78–86. El-Kebir M, Oesper L, Acheson-Field H, Raphael BJ. Reconstruction of clonal trees and tumor composition from multi-sample sequencing data. Bioinformatics. 2015; 31(12):62–70. Andor N, Harness JV, Mueller S, Mewes HW, Petritsch C. Expands: expanding ploidy and allele frequency on nested subpopulations. Bioinformatics. 2013; 30(1):50–60. Yuan K, Sakoparnig T, Markowetz F, Beerenwinkel N. Bitphylogeny: a probabilistic framework for reconstructing intra-tumor phylogenies. Genome Biol. 2015; 16(1):36. Park SY, Gönen M, Kim HJ, Michor F, Polyak K. Cellular and genetic diversity in the progression of in situ human breast carcinomas to an invasive phenotype. J Clin investig. 2010; 120(2):636–44. GDC Team. TCGA pan-cancer data. NCI Genomic Data Commons (GDC) Data Portal. https://portal.gdc.cancer.gov/. Accessed May 2018. Nowell PC. The clonal evolution of tumor cell populations. Science. 1976; 194(4260):23–8. Kimura M. The number of heterozygous nucleotide sites maintained in a finite population due to steady flux of mutations. Genetics. 1969; 61(4):893. CAS PubMed PubMed Central Google Scholar Hudson RR. Properties of a neutral allele model with intragenic recombination. Theor Popul Biol. 1983; 23(2):183–201. Schwarz G, et al.Estimating the dimension of a model. Ann Stat. 1978; 6(2):461–4. Hoeting JA, Madigan D, Raftery AE, Volinsky CT. Bayesian model averaging: a tutorial. Stat Sci. 1999; 14(4):382–417. Eddelbuettel D, François R, Allaire J, Ushey K, Kou Q, Russel N, Chambers J, Bates D. Rcpp: Seamless R and C++ integration. J Stat Softw. 2011; 40(8):1–18. Eddelbuettel D, Sanderson C. RcppArmadillo: Accelerating R with high-performance C++ linear algebra. Comput Stat Data Anal. 2014; 71:1054–63. Carter SL, Cibulskis K, Helman E, McKenna A, Shen H, Zack T, Laird PW, Onofrio RC, Winckler W, Weir BA, et al.Absolute quantification of somatic DNA alterations in human cancer. Nat Biotechnol. 2012; 30(5):413–21. Grossman RL, Heath AP, Ferretti V, Varmus HE, Lowy DR, Kibbe WA, Staudt LM. Toward a shared vision for cancer genomic data. N Engl J Med. 2016; 375(12):1109–12. Wang K, Li M, Hadley D, Liu R, Glessner J, Grant SF, Hakonarson H, Bucan M. Penncnv: an integrated hidden Markov model designed for high-resolution copy number variation detection in whole-genome snp genotyping data. Genome Res. 2007; 17(11):1665–74. Lawrence MS, Stojanov P, Polak P, Kryukov GV, Cibulskis K, Sivachenko A, Carter SL, Stewart C, Mermel CH, Roberts SA, et al.Mutational heterogeneity in cancer and the search for new cancer-associated genes. Nature. 2013; 499(7457):214–8. Buckley AR, Standish KA, Bhutani K, Ideker T, Lasken RS, Carter H, Harismendy O, Schork NJ. Pan-cancer analysis reveals technical artifacts in TCGA germline variant calls. BMC Genomics. 2017; 18(1):458. Ellrott K, Bailey MH, Saksena G, Covington KR, Kandoth C, Stewart C, Hess J, Ma S, Chiotti KE, McLellan M, et al.Scalable open science approach for mutation calling of tumor exomes using multiple genomic pipelines. Cell Syst. 2018; 6(3):271–81. McGranahan N, Swanton C. Clonal heterogeneity and tumor evolution: past, present, and the future. Cell. 2017; 168(4):613–28. Blagden SP. Harnessing pandemonium: the clinical implications of tumor heterogeneity in ovarian cancer. Front Oncol. 2015; 5:149. 
37. Schwarz RF, Ng CK, Cooke SL, Newman S, Temple J, Piskorz AM, Gale D, Sayal K, Murtaza M, Baldwin PJ, et al. Spatial and temporal heterogeneity in high-grade serous ovarian cancer: a phylogenetic analysis. PLoS Med. 2015; 12(2):1001789.
38. Bashashati A, Ha G, Tone A, Ding J, Prentice LM, Roth A, Rosner J, Shumansky K, Kalloger S, Senz J, et al. Distinct evolutionary trajectories of primary high-grade serous ovarian cancers revealed through spatial mutational profiling. J Pathol. 2013; 231(1):21–34.
39. Sfumato P, Boher J-M. goftte: goodness-of-fit for time-to-event data. 2017. https://CRAN.R-project.org/package=goftte. R package version 1.0.5. Accessed December 2017.
40. Lin DY, Wei L-J, Ying Z. Checking the Cox model with cumulative sums of martingale-based residuals. Biometrika. 1993; 80(3):557–72.
41. Ciriello G, Gatza ML, Beck AH, Wilkerson MD, Rhie SK, Pastore A, Zhang H, McLellan M, Yau C, Kandoth C, et al. Comprehensive molecular portraits of invasive lobular breast cancer. Cell. 2015; 163(2):506–19.
42. Xu H, DiCarlo J, Satya RV, Peng Q, Wang Y. Comparison of somatic mutation calling methods in amplicon and whole exome sequence data. BMC Genomics. 2014; 15(1):244.
43. Schumacher TN, Schreiber RD. Neoantigens in cancer immunotherapy. Science. 2015; 348(6230):69–74.
44. Kuipers J, Jahn K, Raphael BJ, Beerenwinkel N. Single-cell sequencing data reveal widespread recurrence and loss of mutational hits in the life histories of tumors. Genome Res. 2017; 27(11):1885–94.

Acknowledgements: We appreciate the constructive comments and suggestions from three anonymous reviewers.
Funding: This work is supported in part by NIH grants P01 CA142538, R01 GM105785, R21CA224026, R01 GM126550, R01 GM07335, and R01HG009974.
Availability of data and materials: The datasets analyzed during the current study are available in the NCI GDC repository, https://portal.gdc.cancer.gov/ [21].

Author information
Public Health Sciences Division, Fred Hutchinson Cancer Research Center, 1100 Fairview Ave N, Seattle, 98109, WA, USA: Wei Sun
Department of Biostatistics, University of North Carolina Chapel Hill, Dauer Drive, Chapel Hill, 27599, NC, USA: Paul Little, Dan-Yu Lin & Wei Sun
Department of Biostatistics, University of Washington, NE Pacific St, Seattle, 98195, WA, USA: Paul Little
Authors' contributions: WS and DYL conceived the study. PLL performed the data analysis. WS, DYL and PLL wrote the manuscript. All authors read and approved the final manuscript.
Corresponding authors: Correspondence to Dan-Yu Lin or Wei Sun.
Additional file 1: Supplementary results and methods, including Tables S1-S19 and Figs. S1-S21. (PDF 3184 KB)

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Little, P., Lin, D.Y. & Sun, W. Associating somatic mutations to clinical outcomes: a pan-cancer study of survival time. Genome Med 11, 37 (2019). https://doi.org/10.1186/s13073-019-0643-9
Keywords: Copy number alteration; Somatic mutations; Subclone
Title: Continuous-time finite element analysis of multiphase flow in groundwater hydrology (English)
Author: Chen, Zhangxin
Journal: Applications of Mathematics
Summary lang: English
Category: math
Summary: A nonlinear differential system for describing an air-water system in groundwater hydrology is given. The system is written in a fractional flow formulation, i.e., in terms of a saturation and a global pressure. A continuous-time version of the finite element method is developed and analyzed for the approximation of the saturation and pressure. The saturation equation is treated by a Galerkin finite element method, while the pressure equation is treated by a mixed finite element method. The analysis is carried out first for the case where the capillary diffusion coefficient is assumed to be uniformly positive, and is then extended to a degenerate case where the diffusion coefficient can be zero. It is shown that error estimates of optimal order in the $L^2$-norm and almost optimal order in the $L^\infty$-norm can be obtained in the nondegenerate case. In the degenerate case we consider a regularization of the saturation equation by perturbing the diffusion coefficient. The norm of error estimates depends on the severity of the degeneracy in diffusivity, with almost optimal order convergence for non-severe degeneracy. Existence and uniqueness of the approximate solution is also proven. (English)
Keyword: mixed method
Keyword: finite element
Keyword: compressible flow
Keyword: porous media
Keyword: error estimate
Keyword: air-water system
MSC: 65M60
MSC: 65N30
MSC: 76S05
idZBL: Zbl 0847.76030
idMR: MR1332314
DOI: 10.21136/AM.1995.134291
Date available: 2009-09-22T17:47:54Z
Stable URL: http://hdl.handle.net/10338.dmlcz/134291
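The summary above describes a Galerkin finite element treatment of the saturation equation. Purely as an illustrative aid, and not as the paper's scheme, the following minimal Python sketch assembles and solves a one-dimensional linear-element Galerkin system for the much simpler model problem -(D u')' = f on (0, 1) with u(0) = u(1) = 0 and constant D; all names, the mesh size and the forcing function are assumptions chosen for illustration.

```python
# Minimal 1D Galerkin FEM sketch for -(D u')' = f on (0, 1), u(0) = u(1) = 0.
# Illustrative only: constant diffusion D and piecewise-linear ("hat") elements,
# far simpler than the coupled, possibly degenerate system analyzed in the paper.
import numpy as np

def solve_1d_fem(n_elements=50, D=1.0, f=lambda x: 1.0):
    n_nodes = n_elements + 1
    x = np.linspace(0.0, 1.0, n_nodes)
    h = x[1] - x[0]                      # uniform mesh width
    A = np.zeros((n_nodes, n_nodes))     # global stiffness matrix
    b = np.zeros(n_nodes)                # global load vector

    for e in range(n_elements):          # loop over elements [x_e, x_{e+1}]
        i, j = e, e + 1
        # Element stiffness for linear hat functions: (D/h) * [[1, -1], [-1, 1]]
        ke = (D / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        A[np.ix_([i, j], [i, j])] += ke
        # Load contribution via the midpoint rule on each element
        xm = 0.5 * (x[i] + x[j])
        b[[i, j]] += 0.5 * h * f(xm)

    # Homogeneous Dirichlet boundary conditions: solve only for interior nodes
    interior = slice(1, -1)
    u = np.zeros(n_nodes)
    u[interior] = np.linalg.solve(A[interior, interior], b[interior])
    return x, u

if __name__ == "__main__":
    x, u = solve_1d_fem()
    # For D = 1 and f = 1 the exact solution is u(x) = x(1 - x)/2.
    print("max nodal error:", np.abs(u - 0.5 * x * (1.0 - x)).max())
```

The mixed finite element treatment of the pressure equation and the handling of the degenerate diffusion coefficient discussed in the summary are substantially more involved and are not reproduced here.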
Probability Density Function Calculator

The probability density function (PDF) of a continuous random variable gives the relative likelihood of any outcome in a continuum. It is a density of probability rather than a probability mass: because a continuous random variable can take uncountably many values, it is difficult to write down the probabilities of all possible events, and the probability of any single given value is zero. Probabilities are instead obtained as areas. A valid density f(x) is non-negative, the area under f(x) over all values of the random variable X is equal to one, and the probability that X lies between a and b is the integral of f(x) from a to b; for the joint density of two random variables this probability becomes a double integral. SciPy has a quick, easy way to evaluate such integrals numerically.

The cumulative distribution function (CDF) gives the probability that the real-valued random variable X takes a value less than or equal to x; the phrase "distribution function" is usually reserved exclusively for the CDF. The survival function 1 - F(t) follows directly, and the goal of survival analysis is to estimate and compare the survival experiences of different groups. If X is a discrete random variable, the analogous object is the probability mass function.

Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. A random variable with a normal distribution with mean m = 0 and standard deviation σ = 1 is referred to as the standard normal distribution; a mean of zero and a standard deviation of one are also the default values assumed by most calculators if you choose not to set them. The normal density is peaked at the mean, and its bell-curve shape was first derived by De Moivre and, roughly two hundred years later, independently by Gauss and Laplace.

The same machinery applies to other families. The density of the gamma distribution can be expressed in terms of the gamma function, parameterized by a shape parameter k and a scale parameter θ, and the exponential distribution is a particular case of the gamma distribution; online calculators report the probability P(x) and the expected mean (μ) for it. Similar tools compute the probability density function, cumulative distribution function, mean and variance of a binomial distribution for given n and p, as well as the simple and cumulative probabilities, mean, variance and standard deviation of the geometric distribution.

Probability densities even appear in quantum mechanics: the squared amplitude of a wave function is the probability density of finding a particle, so an atomic orbital describes a region of space in which there is a high probability of finding the electron.

In practice these quantities are evaluated with software. On the TI-83+/84+, the Normal Probability Distribution menu is found under DISTR (2nd VARS), and binompdf(number of trials, probability of success, x) returns binomial probabilities; in other words, the syntax is binompdf(n, p, x). In Excel, the relevant commands include "DIST": NORM.DIST for the normal distribution, NORMSDIST for the standard normal and TDIST for the t distribution.
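As a brief, hedged illustration of the quantities above, the following Python sketch uses SciPy (assumed to be available); the specific parameter values are arbitrary examples rather than values taken from the text.

```python
# Sketch: evaluating a PDF, a CDF and an interval probability with SciPy.
# The chosen parameters (mean 0, sd 1; n = 14, p = 0.3) are illustrative only.
from scipy import stats
from scipy.integrate import quad

# Continuous case: standard normal distribution.
pdf_at_0 = stats.norm.pdf(0.0)            # density at x = 0 (not a probability)
p_interval = stats.norm.cdf(1.0) - stats.norm.cdf(-1.0)  # P(-1 <= X <= 1)
total_area, _ = quad(stats.norm.pdf, -10, 10)             # area under the curve, ~1
# P(X = 0) is exactly zero for a continuous random variable.

# Discrete case: the analogue of the TI-83/84 binompdf(n, p, x) command.
p_exactly_4 = stats.binom.pmf(4, n=14, p=0.3)   # P(exactly 4 successes in 14 trials)
p_at_most_4 = stats.binom.cdf(4, n=14, p=0.3)   # cumulative probability, like binomcdf

print(pdf_at_0, p_interval, total_area, p_exactly_4, p_at_most_4)
```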
Because probability is given by area, probabilities based on a uniform distribution are especially easy to compute: the density is constant between the lower limit a and the upper limit b, and a calculator only needs the point at which to evaluate the function together with those limits. Asking software for a random set of, say, 100 numbers between 1 and 10 amounts to sampling from a continuous uniform distribution with α = 1 and β = 10. The exponential distribution is handled the same way: with rate λ = 1 the density can be evaluated at any x, and the corresponding cumulative calculator returns areas rather than point values. For counts of events over a continuum (customers entering a shop, defectives in a box of parts or a fabric roll, cars arriving at a tollgate, calls arriving at a switchboard), a Poisson probability calculator is the usual tool.

If X is a random variable with probability density function f(x), its expected value is defined as the integral of x f(x) over the whole range (an improper integral when the range is infinite), the limiting case of the summation used for discrete random variables, and the standard deviation is obtained by taking the square root of the variance. Percentiles are defined through the density as well: the percentile x is the value such that the probability of getting a value less than or equal to x is α/100. The median is the 50th percentile, the first quartile the 25th and the third quartile the 75th. For two continuous random variables with a joint density f(x, y), the marginal density of X is obtained by integrating the joint density over y, and the density of a function of two random variables can be derived from their joint density in the same spirit. A related exercise is to determine an unknown normalizing constant: for f(x) = k sin 6x on 0 ≤ x ≤ π/6 to be a valid density, k must be chosen so that the total area equals one.

In R, dnorm(x, mean = 0, sd = 1) returns the normal density (the standard normal by default) and hist(x) draws a histogram of a numeric vector of values. In Excel, the NORM.DIST function can be used to create the data set for a chart, and Minitab asks explicitly whether the probability density or the cumulative probability is wanted.
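The following Python sketch, again assuming SciPy and NumPy are available, illustrates these definitions numerically; the exponential distribution and all constants are illustrative choices, not values prescribed by the text.

```python
# Sketch: expected value, variance, percentiles and a normalizing constant,
# all obtained by numerical integration; parameters are illustrative.
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Exponential distribution with rate lambda = 1 (scale = 1 in SciPy's convention).
expo = stats.expon(scale=1.0)
mean_num, _ = quad(lambda x: x * expo.pdf(x), 0, np.inf)            # E[X] = integral of x f(x) dx
var_num, _ = quad(lambda x: (x - mean_num) ** 2 * expo.pdf(x), 0, np.inf)
median = expo.ppf(0.50)            # 50th percentile
q1, q3 = expo.ppf([0.25, 0.75])    # first and third quartiles

# Normalizing constant for f(x) = k * sin(6x) on [0, pi/6]:
area, _ = quad(lambda x: np.sin(6 * x), 0, np.pi / 6)
k = 1.0 / area                     # the integral is 1/3, so k = 3

print(mean_num, var_num, median, (q1, q3), k)
```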
Frequently, it is necessary to go beyond a single named family. One common task is transformation: given the probability density function of a random variable and a transformation function, determine the density of the transformed variable; conversely, two probability density functions may be given and the requirement is to find a function which transforms one into the other.

Several further families appear repeatedly in calculators and statistical software. The chi-square PDF and CDF functions take the degrees of freedom df and a non-centrality parameter nc, and accept non-integer degrees of freedom. The symmetric triangular distribution on [a, b], with or without a specified mode c, is implemented in the Wolfram Language as TriangularDistribution. The Weibull distribution provides usable mathematical descriptions of reliability and failure rates, which connects densities to the survivor and hazard functions of survival analysis, where f(t) is the probability density function of the survival time. Random number generators are described by densities as well: a normal generator produces values around the distribution mean μ with a specified standard deviation σ. Multivariate normal probabilities can be calculated by simulation, for example with the Stata program mdraws (which derives draws from the standard uniform density using Halton or pseudorandom sequences) and the egen function mvnp() for the probabilities themselves.

Densities also support everyday probability calculations. The empirical rule gives the percentage of values within one, two or three standard deviations of the mean. The graph of the binomial density for a moderately large number of trials looks a lot like the normal distribution, which is not a coincidence (see the relationship between the binomial and normal distributions); for instance, if only one out of five possible answers to each question is correct, the probability of answering a question correctly at random is 1/5, and the probability of exactly four correct answers among random attempts is a binomial calculation. Conditional renormalization matters too: a labor probability calculator shows the probability of spontaneous labor by renormalizing the distribution to include only the possible remaining days of a pregnancy, since for a woman who has not gone into labor by today the probability that it started yesterday is, by definition, zero.

Finally, a density can be estimated from data rather than assumed. This can be done parametrically (maximum likelihood estimation of a Gaussian model), non-parametrically (histograms, kernel-based estimates and k-nearest-neighbour methods) or semi-parametrically (mixture models fitted with the EM algorithm or gradient-based optimization); tools such as a density trace procedure estimate the probability density function for a single column of numeric data.
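To make the estimation step concrete, here is a small Python sketch (NumPy and SciPy assumed) that compares a histogram, a Gaussian kernel density estimate and a parametric maximum-likelihood fit on simulated data; every number in it is an arbitrary illustration value.

```python
# Sketch: estimating a probability density function from data.
# A histogram (non-parametric), a Gaussian KDE (non-parametric) and a fitted
# normal model (parametric MLE) are compared on simulated observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=1000)   # stand-in for observed data

# Non-parametric: histogram scaled so that it integrates to one.
counts, edges = np.histogram(sample, bins=30, density=True)

# Non-parametric: Gaussian kernel density estimate.
kde = stats.gaussian_kde(sample)

# Parametric: maximum likelihood fit of a normal model.
mu_hat, sigma_hat = stats.norm.fit(sample)

# Evaluate the smooth estimates on a grid of points.
grid = np.linspace(sample.min(), sample.max(), 200)
kde_vals = kde(grid)
norm_vals = stats.norm.pdf(grid, loc=mu_hat, scale=sigma_hat)

print(mu_hat, sigma_hat, counts.sum() * np.diff(edges)[0])  # last value is ~1
```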
9.5 Composition of Functions (Advanced Functions, Nelson)

Use f(x)=2x-3 and g(x)=1-x^2 to evaluate the following expressions.
f(g(0))
g(f(4))
(f\circ g)(-8)
\displaystyle{(g\circ g)\left(\frac{1}{2}\right)}
(f\circ f^{-1})(1)
(g\circ g)(2)

Given f=\{(0,1),(1,2),(2,5),(3,10)\} and g=\{(2,0),(3,1),(4,2),(5,3),(6,4)\}, determine the following values.
(g\circ f)(2)
(f\circ f)(1)
(f\circ g)(5)
(g^{-1}\circ f)(1)

Use the graphs of f and g to evaluate each expression.
(g\circ g)(-2)

For a car travelling at a constant speed of 80 km/h, the distance driven, d kilometres, is represented by d(t)=80t, where t is the time in hours. The cost of gasoline, in dollars, for the drive is represented by C(d)=0.09d. Determine C(d(5)) numerically, and interpret your results.

In each case, functions f and g are defined for x\in\mathbb{R}. For each pair of functions, determine the expression and the domain of f(g(x)) and g(f(x)).
f(x)=3x^2, g(x)=x-1

In each case, functions f and g are defined for x\in\mathbb{R}. For each pair of functions, determine the expression and the domain of f(g(x)) and g(f(x)). Graph each result.
f(x)=2x^2+x, g(x)=x^2+1
f(x)=2x^3-3x^2+x-1, g(x)=2x-1
f(x)=x^4-x^2, g(x)=x+1
f(x)=\sin x, g(x)=4x
f(x)=|x|, g(x)=x+5
f(x)=3x, g(x)=\sqrt{x-4}

For each of the following, determine the defining equation for f\circ g and g\circ f, and determine the domain and range of f\circ g and g\circ f.
f(x)=\sqrt{x}, g(x)=3x+1
f(x)=\sqrt{4-x^2}, g(x)=x^2
f(x)=2^x, g(x)=\sqrt{x-1}
f(x)=10^x, g(x)=\log x
f(x)=\sin x, g(x)=5^{2x}+1

For each function h, find two functions, f and g, such that h(x)=f(g(x)).
h(x)=\sqrt{x^2+6}
h(x)=(5x-8)^6
h(x)=2^{(6x+7)}
\displaystyle{h(x)=\frac{1}{x^3-7x+2}}
h(x)=\sin^2(10x+5)
h(x)=\sqrt[3]{(x+4)^2}

Let f(x)=2x-1 and g(x)=x^2.
Determine (f\circ g)(x).
Graph f, g, and f\circ g on the same set of axes.
Describe the graph of f\circ g as a transformation of the graph of y=g(x).

Let f(x)=2x-1 and g(x)=3x+2.
Determine f(g(x)), and describe its graph as a transformation of g(x).
Determine g(f(x)), and describe its graph as a transformation of f(x).

A banquet hall charges $975 to rent a reception room, plus $39.95 per person. Next month, however, the banquet hall will be offering a 20% discount off the total bill. Express this discounted cost as a function of the number of people attending.

The function f(x)=0.08x represents the sales tax owed on a purchase with a selling price of x dollars, and the function g(x)=0.75x represents the sale price of an item with a price tag of x dollars during a 25% off sale. Write a function that represents the sales tax owed on an item with a price tag of x dollars during a 25% off sale.

An airplane passes directly over a radar station at time t=0. The plane maintains an altitude of 4 km and is flying at a speed of 560 km/h. Let d represent the distance from the radar station to the plane, and let s represent the horizontal distance travelled by the plane since it passed over the radar station.
a) Express d as a function of s, and s as a function of t.
b) Use composition to express the distance between the plane and the radar station as a function of time.

In a vehicle test lab, the speed of a car, v kilometres per hour, at a time of t hours is represented by v(t)=40+3t+t^2. The rate of gasoline consumption of the car, c litres per kilometre, at a speed of v kilometres per hour is represented by \displaystyle{c(v)=\left(\frac{v}{500}-0.1\right)^2+0.15}.
Determine algebraically c(v(t)), the rate of gasoline consumption as a function of time.
Determine, using technology, the time when the car is running most economically during a 4 h simulation.

Given the graph of y=f(x) shown and the functions below, match the correct composition with each graph. Justify your choices.
i) g(x)=x+3, ii) m(x)=2x, iii) h(x)=x-3, iv) n(x)=-0.5x, v) k(x)=-x, vi) p(x)=x-4
y=(f\circ g)(x)
y=(f\circ h)(x)
y=(f\circ k)(x)
y=(f\circ m)(x)
y=(f\circ n)(x)
y=(f\circ p)(x)
y=(g\circ f)(x)
y=(h\circ f)(x)
Which graph is y=(m\circ f)(x)?
y=(n\circ f)(x)
y=(p\circ f)(x)

If y=3x-2, x=3t+2, and t=3k-2, find an expression for y=f(k).

Express y as a function of k if y=2x+5, x=\sqrt{3t-1}, and t=3k-5.

Lecture topics: Review of combining functions; Composition analogy; Composition with a set of points; Composition with algebra; Decomposition of functions, e.g. decompose \displaystyle y = \frac{1}{\sqrt{1 -x^2}}; Decomposition examples, e.g. find f, g, h for f\circ g\circ h(x) = \sqrt{x^2 - 1}; Triple composition.
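Purely as an illustrative check, and not part of the Nelson materials, the following Python sketch uses SymPy to evaluate a few of the compositions above symbolically; the function choices mirror the first exercise and the decomposition example.

```python
# Sketch: checking function compositions and a decomposition with SymPy.
# f and g follow the first exercise (f(x) = 2x - 3, g(x) = 1 - x^2); the
# decomposition target follows the example y = 1/sqrt(1 - x^2).
import sympy as sp

x = sp.symbols('x')
f = 2 * x - 3
g = 1 - x**2

# Composition is substitution: (f o g)(x) = f(g(x)).
f_of_g = f.subs(x, g)            # 2*(1 - x**2) - 3
g_of_f = g.subs(x, f)            # 1 - (2*x - 3)**2
print(sp.simplify(f_of_g), sp.simplify(g_of_f))
print(f_of_g.subs(x, 0))         # f(g(0)) = -1

# Decomposition: with outer(u) = 1/sqrt(u) and inner(x) = 1 - x^2,
# outer(inner(x)) reproduces y = 1/sqrt(1 - x^2).
u = sp.symbols('u')
outer = 1 / sp.sqrt(u)
inner = 1 - x**2
print(sp.simplify(outer.subs(u, inner)))
```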
Synergisms of genome and metabolism stabilizing antitumor therapy (GMSAT) in human breast and colon cancer cell lines: a novel approach to screen for synergism

Jérôme Ruhnau, Jonas Parczyk (ORCID: orcid.org/0000-0002-9539-0587), Kerstin Danker, Britta Eickholt & Andreas Klein

Despite an improvement of prognosis in breast and colon cancer, the outcome of the metastatic disease is still severe. Microevolution of cancer cells often leads to drug resistance and tumor recurrence. To target the driving forces of the tumor microevolution, we focused on synergistic drug combinations of selected compounds. The aim is to prevent the tumor from evolving in order to stabilize disease remission. To identify synergisms in a high number of compounds, we propose here a three-step concept that is cost-efficient, independent of high-throughput machines and reliable in its predictions.

We created dose response curves using MTT and SRB assays with 14 different compounds in MCF-7, HT-29 and MDA-MB-231 cells. In order to efficiently screen for synergies, we developed a screening tool in which 14 drugs were combined (91 combinations) in MCF-7 and HT-29 using EC25 or less. The most promising combinations were verified by the method of Chou and Talalay.

All 14 compounds exhibit antitumor effects on each of the three cell lines. The screening tool resulted in 19 potential synergisms detected in HT-29 (20.9%) and 27 in MCF-7 (29.7%). Seven of the top combinations were further verified over the whole dose response curve, and for five combinations a significant synergy could be confirmed. The combination Nutlin-3 (inhibition of MDM2) and PX-478 (inhibition of HIF-1α) could be confirmed for all three cell lines. The same accounts for the combination of Dichloroacetate (PDH activation) and NHI-2 (LDH-A inhibition).

Our screening method proved to be an efficient tool that is reliable in its projections. The presented three-step concept proved to be cost- and time-efficient with respect to the resulting data. The newly found combinations show promising results in MCF-7, HT-29 and MDA-MB-231 cancer cells.

Although a lot of progress has been made in the research of potential anti-cancer agents over the last decade, secondary therapy failure and disease progression is still the major problem in most tumor entities, especially in the metastatic state of solid tumors [1, 2]. The tumor microevolution constantly gives rise to new populations of cancer cells with diverse properties [3], making it difficult to target them. Therefore, we developed a combinatory therapeutic approach that targets the tumor microevolution and its driving forces.

Industrial funds become more important in research
As industrial funding [4] and the focus on commercial interests increase, research is favourably conducted on newly bioengineered and patentable drugs [5] rather than generic compounds. Therefore, we aimed to establish a cost-efficient screening strategy that is feasible for independent work groups. In order to screen a relatively high number of potential compounds for their synergistic potency, we present here a three-step approach including a minimalistic drug interaction screening (MDIS) that is cost-efficient and can easily be established with basic laboratory equipment, independent of expensive high-throughput devices.

The tumor microevolution and its driving forces
Unfortunately, initial antitumor treatment frequently leaves residual disease from which the tumor regrows [6].
Microevolution of cancer cells often leads to drug resistance and tumor recurrence [7]. Important driving forces of the microevolution are the genomic instability [8], the tumor metabolism [9, 10] and a deregulated cell cycle [11], which converge in a high proliferation rate combined with a high occurrence of mutations. To treat such complex diseases, combinations of drugs that target different aspects of the disease and, at best, act synergistically may be the method of choice. Another complex disease that can currently be kept in remission with a combinatory approach (combined antiretroviral therapy, "cART") [12] is the infection with the human immunodeficiency virus (HIV). As HIV itself undergoes a microevolution due to the high mutagenesis by the viral reverse transcriptase [13], it took decades to find an adequate multi-target treatment. And even with cART, the development of drug resistance, especially to nucleotide reverse transcriptase inhibitors (NRTIs), is still a major problem [14]. Due to the complexity of cancer, it can be anticipated that more sophisticated combinatory approaches are needed. An example of such a concept is CUSP9, where multiple drugs that are approved for non-cancer indications are combined as a treatment approach for recurrent glioblastoma [15,16,17]. The combination of compounds can lead to a broader effect on different tumor subtypes, which may reduce the chance of relapse or keep the tumor in a progression-free state [18].

Genome and metabolism stabilizing antitumor therapy (GMSAT)
The here presented combinatory approach aims to counteract the tumor microevolution by targeting the genome, tumor metabolism as well as growth and survival (Fig. 1). PRIMA-1met and Nutlin-3 are two compounds targeting p53, which is often referred to as the "guardian of the genome" [19]. PRIMA-1met binds and reactivates mutated p53 [20], whereas Nutlin-3 increases p53 levels by disrupting the p53-MDM2 interaction and thereby inhibiting its degradation [21]. Likewise, SJ-172550 counteracts the p53-MDM4 interaction, which also leads to elevated p53 levels [22]. Compounds that modulate metabolism include Dichloroacetate (DCA), which aims to reverse the Warburg effect via activation of pyruvate dehydrogenase (PDH) by inhibition of pyruvate dehydrogenase kinase, promoting the entry of pyruvate into the tricarboxylic acid cycle [23]. Other important metabolism-targeting compounds used for our study are the hypoxia-inducible factor 1α (HIF-1α) inhibitor PX-478 (Koh et al. 2008), Metformin, which inhibits complex 1 of the respiratory chain [24], the inhibitor of lactate dehydrogenase A (LDH-A) NHI-2 (Allison et al. 2014) and the hexokinase 2 (HK2) inhibitor 3-Bromopyruvate (Ko, Pedersen, and Geschwind 2001). Another important energy source in cancer is glutamine metabolism [25], which is targeted by the glutaminase inhibitor CB-839 [26]. Finally, compounds targeting growth and survival are the survivin inhibitor YM155 [27], the phosphatidylinositol 3-kinase (PI3K) inhibitor Pictilisib/GDC-0941 [28], Ino-C2-PAF [29, 30] and the ginger derivative 6-Shogaol targeting the AKT/mTOR pathway [31].

Fig. 1: 13 genome, metabolism and growth-/survival-targeting agents according to the GMSAT concept, as well as Cisplatin as a reference to conventional chemotherapy, are illustrated with their respective target structures in brackets. "-I" stands for inhibition.

Screening for and evaluation of synergisms
In order to screen for potent synergisms, various successful methods have been tested and published recently [32, 33].
While some are relying on high throughput [34, 35] others are partially computerised to reduce the amount of actual experimental data points being investigated like the Feedback System Control [36,37,38]. There are also methods investigating synergism via mostly computerised analyses (Stochastic Searching Model, Statistical Model and Multi-Scale Agent-Based Model) [33, 39]. In literature, more than 10 different ways of defining synergism are described [40]. First referred to as the Loewe Additivity [41], quantification of synergistic drug interaction by the combination index (CI) is nowadays widely accepted. A precise method to estimate the specific dosages of fractional effects needed to calculate the CI, is the median effect method of Chou and Talalay that is derived from the mass action law [42, 43]. Quantification of synergisms via the CompuSyn software [44] based on multiple concentrations across the dose response curves is a well-established procedure [45]. MCF-7 breast cancer cells express p53 wild-type, are estrogen (ER) and progesterone receptor (PR) positive and express low levels of human epidermal growth factor receptor 2 (HER2) [46, 47]. MDA-MB-231 breast cancer cells that were originally isolated from a human breast cancer pleural effusion express a p53-mutation (R280K), are negative for ER and PR and express no amplification of HER2 [46, 48]. Both breast cancer cell lines were a kind gift of Göran Landberg (Sahlgrenska Cancer Center, University of Gothenburg, Gothenburg, Sweden) and were initially purchased from ATCC (Catalogue number: CRL-3435 and HTB-26). The primary colon cancer cell line HT-29 was isolated in 1964 by Fogh and Trempe. HT-29 cells carry a p53 mutation (R273H) and are deregulated for c-MYC [48]. HT-29 was a kind gift from Karsten Parczyk (Bayer AG) and initially purchased from ATCC (Catalogue number: HTB-38). All cell lines were routinely tested for mycoplasma contamination. For testing of mycoplasma contamination either PCR (GATC Biotech) or staining with Hoechst 33342 dye (Sigma-Aldrich, Steinheim, Germany) was conducted. HT-29 and MCF-7 cells were cultured in DMEM and the MDA-MB-231 in DMEM/F12 containing penicillin/streptomycin (100 U ml− 1), L-glutamine (DMEM: 584 mg l− 1, DMEM/F12: 365,1 mg l− 1) and 10% heat-inactivated fetal calf serum (FCS) at 37 °C in a humidified incubator with 5% CO2. Cells were harvested using 0.05% trypsin/0.02% EDTA in PBS. Fourteen compounds were used: Prima-1met, Nutlin-3, SJ 172550, YM155 (Selleck Chemicals, Houston, TX, USA), 6-Shogaol (Hölzel Diagnostika Handels GmbH, Cologne, Germany), Pictilisib (Absource Diagnostics GmbH, Munich, Germany), Ino-C2-PAF (1-O-octadecyl-2-O-(2-(myo-inositolyl)-ethyl)-sn-glycero-3-(r/s)-phosphatidylcholine) [29], PX-478 (Hölzel Diagnostika Handels GmbH, Cologne, Germany), DCA, Metformin-hydrochloride (Sigma-Aldrich, Munich, Germany), CB-839 (Selleck Chemicals, Houston, TX, USA), 3-Bromopyruvate (Santa Cruz Biotechnology, Dallas, Texas, USA), NHI-2 (Bio-Techne GmbH, Wiesbaden-Nordenstadt, Germany) and Cisplatin (Cayman Chemical Ann Arbor, MI, USA). 3-Bromopyruvate, Cisplatin, Dichloroacetate, Metformin, PRIMA-1-met, PX-478, YM155 and Ino-C2-PAF were solved in distilled water. Dimethyl sulfoxide (DMSO) was used to solubilize 6-Shogaol, CB-839, NHI-2, Nutlin-3, Pictilisib and SJ-17255. Finally, DMSO concentration was kept under 0.6 μl per well (0.6%). All data collected in this study can be found in the additional file (Additional file 1). 
This includes all data produced for dose response curves and all combination experiments.

Cell viability assay and cell proliferation assay

0.5 × 10⁴ MCF-7, 1.5 × 10⁴ HT-29 and 1.5 × 10⁴ MDA-MB-231 cells per well were seeded in flat-bottom 96-well plates. After 24 h and reaching a cell confluence of approximately 50%, the respective compound or combination was added. As a negative control, cells were cultured in the presence of 0.6% DMSO. However, we could not detect any differences in cell viability between 0.6% DMSO and no DMSO. After 48 h of further incubation, either the MTT assay (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide, a tetrazole assay, Bio-Techne GmbH, Germany) or the SRB (Sulforhodamine B) assay was applied. The MTT assay was performed according to the manufacturer's instructions. For the SRB assay, cells were treated with 10% trichloroacetic acid (w/v) and stained with 0.06% SRB in 1% acetic acid for 30 min. Cells were then repeatedly washed using 1% acetic acid (v/v) followed by dissolution in 10 mM Tris (pH 10.5). Protein mass was monitored using a microplate reader at an optical density of 492 nm. All experiments were performed with at least two replicates in three independent experiments. Dose response curves were obtained for 14 compounds using GraphPad Prism statistical analysis software 7.05. The EC50 of the respective compounds was determined via nonlinear regression.

Minimalistic drug interaction screening (MDIS)

MCF-7 and HT-29 cells were treated with the 14 single compounds and their 91 pairwise combinations at dosages of approximately EC25. All experiments were performed with at least three biological and two technical replicates. Thus, for one cell line we produced about 909 data points (303 per biological replicate). The conjectured synergistical potency (CSP) of a combination was quantified by adding up the effects of the single compounds and subtracting the result from the combination's effect. For example: single dose A reduces cell viability by 20%, single dose B by 10%, and the combination of A and B by 37%. Thus, the combination of A and B reduces cell viability 7% more than expected from simply adding up the effects of the single compounds (CSP = 7). Analyses were performed with GraphPad Prism and Microsoft Excel.

Confirmation of synergism

Synergism predicted by MDIS was evaluated with three to seven concentrations as suggested by Chou and Talalay [49]. MCF-7 and HT-29 cells were treated with the respective combination of compounds at a constant EC50:EC50 ratio as well as with the same concentrations of each drug individually. Significant differences between single-compound viabilities and combination viability were assessed by unpaired t-test. Only concentrations with p-values ≤0.05 for both compounds were considered significant and marked by an asterisk (*) in the figures. The combination indices (CI) were calculated using the CompuSyn software [44]. The CI is a quantitative value for the synergism of a drug combination at specific concentrations. A value below 0.3 indicates a "strong", 0.3–0.7 a "robust" (originally referred to as "synergism" by Chou and Talalay), 0.7 to 0.85 a "moderate" and 0.85 to 0.9 a "slight" synergism. Values from 0.9 to 1.1 show an "additive" effect and a CI above 1.1 indicates "antagonism" [50, 51].
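The CI interpretation bands just listed map directly onto a small lookup function. The sketch below is only an illustrative restatement of those published cut-offs; the handling of the exact boundary values is our assumption, since the text does not specify it.

```python
def ci_category(ci):
    """Map a combination index (CI) to the interpretation bands quoted above."""
    if ci < 0.3:
        return "strong synergism"
    elif ci < 0.7:
        return "robust synergism"
    elif ci < 0.85:
        return "moderate synergism"
    elif ci < 0.9:
        return "slight synergism"
    elif ci <= 1.1:
        return "additive"
    else:
        return "antagonism"

# CI values reported later in the paper, used here only for illustration:
for ci in (0.27, 0.33, 0.50, 0.62, 0.89, 1.19):
    print(ci, "->", ci_category(ci))
```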
The CI was calculated as follows:

$$ CI=\frac{(D)_1}{(D_x)_1}+\frac{(D)_2}{(D_x)_2} $$

In the numerators, (D)1 and (D)2 are the concentrations of drug 1 and drug 2 in the drug combination that produce a certain effect on cell viability (x %). In the denominators, (Dx)1 and (Dx)2 stand for the concentration of each drug alone (drug 1 or drug 2) that is necessary to obtain the same effect (x %) as the drug combination (drug 1 and drug 2). The concentrations (Dx)1 and (Dx)2 were calculated by CompuSyn referring to the individual cell-viability data of the respective compounds. To enhance rigor, (Dx)1 and (Dx)2 were predominantly generated from direct experimental data points. This way, potential calculation errors are ruled out, as suggested by Zhao et al. [45]. To produce the median-effect plots, the following equation was used:

$$ D_x=D_m\left[\frac{f_a}{1-f_a}\right]^{1/m} $$

Dm is the median-effect dose, m is the slope of the median-effect plot and fa stands for the fraction affected.

Three-step concept to identify synergisms between selected compounds

In this work, we applied the following three steps to identify synergisms between the compounds for our combinatory approach (Fig. 2):

1. Dose response curves to determine the single-drug effect in cancer cell lines and calculate fractional effects like EC50 or EC25.
2. The minimalistic drug interaction screening (MDIS) to identify potential synergies.
3. Verification by the method of Chou and Talalay to reliably prove the projected synergisms.

14 compounds were selected and analysed using the MTT or SRB assay in HT-29, MCF-7 and MDA-MB-231 cells in order to obtain dose response curves and EC50 values. A minimalistic drug interaction screening (MDIS) was applied to detect synergies in the 91 possible combinations. The combinations with the most synergistic potential were then further verified. Following these steps, we identified 27 potential synergisms in MCF-7 (29.7%) and 19 in HT-29 (20.9%) of the 91 pairwise combinations. A selection of combinations was further analysed by the method of Chou and Talalay.

Dose response curves in MCF-7, MDA-MB-231 and HT-29 cells

Dose response experiments were conducted in order to identify the dose range for MDIS and to evaluate the antitumor effects of the single compounds in the different cell lines. Therefore, MCF-7, MDA-MB-231 and HT-29 cells were cultivated for 24 h before being treated with increasing concentrations of the 14 different compounds (Fig. 1). After an additional cultivation period of 48 h, cell viability or protein mass was quantified using the MTT or SRB assay. In Fig. 3, we exemplarily illustrated the dose response curves of Nutlin-3 and DCA for all three cell lines. Furthermore, we calculated the median effective concentration (EC50) for all compounds with the help of GraphPad Prism (Table 1). Data for all dose response curves can be found in Additional file 1. Dose response curves: Cells were seeded into a 96-well plate at a density of 1.5 × 10⁴ (HT-29, MDA-MB-231) and 0.5 × 10⁴ cells/well (MCF-7), incubated 24 h to a confluence of 50%, then treated with increasing concentrations of the 14 selected drugs for 48 h. Viability was assessed using the MTT assay and curves were obtained using the four-parameter variable slope function of GraphPad Prism.
Exemplarily the resulting curves for Nutlin-3 and DCA are shown for the three cell lines Table 1 EC50 of the 14 compounds Overall, we observe that the triple negative breast cancer cell line MDA-MB-231 is the most resistant cell line requiring the highest dosages in 11 out of the 14 tested compounds. Although Prima-1met is intended to stabilize p53-mut, the strongest efficacy is shown in the p53 wild-type cell line MCF-7. YM155 is effective at very low concentrations at EC50 in a nM range in all three cell lines. To identify synergistic actions of compound combinations, we developed a minimalistic drug interaction screening (MDIS). For this experiment, HT-29 and MCF-7 cells were treated with 14 different compounds in all 91 possible pairwise combinations. In this approach, dosages of approximately EC25 were used for all compounds. The conjectured synergistical potency (CSP) of a combination was quantified adding up the effect of the single compounds and subtracting the result from the combination's effect (c.f. Material and Methods). We applied this rather simple mathematical approach not to prove synergisms, but to narrow down the number of effective combinations. The overall average standard deviations in MDIS were 7.5% for MCF-7 and 10.6% for HT-29 respectively. CSP values above 10 were chosen as a cut off for a 'possible' (+) synergism, 15 for a "likely" (++) and 25 for a "very likely" (+++) synergism (Fig. 4). Pure numerical values can be found in Additional file 1. minimalistic drug interaction screening. HT-29 and MCF-7 cells were seeded into a 96 well plate at a density of 1.5 (HT-29) or 0.5 × 104/well (MCF-7) and incubated 24 h to a confluence of 50%. Then, cells were incubated with 14 single compounds and the respective 91 combinations at a concentration about EC25 for 48 h. Viability was assessed using the MTT-Assay and the CSP (conjectured synergistical potency) values were calculated. CSP of a combination was quantified by adding up the effect of the single compounds and subtracting the result from the combination's effect. All CSP values above ten are highlighted in green. Values between ten and 15 are marked by one plus (+), between 15 and 25 by two plus (++), greater than 25 by three plus (+++) and referred to as "possible", "likely" and "very likely" synergism respectively. The number of total "+" is given in the first column below the name of the compounds and summarizes the number and strength of projected synergistic interactions For HT-29 cells, a total amount of 19 synergistic projections out of the 91 combinations (20.9%) were predicted. Eleven of the latter were "possible" (12.1%), seven "likely" (7.7%) and one a "very likely" synergism (1.1%). For the p53 wild type breast cancer cell line MCF-7, a total of 27 combinations (29.7%) were identified, including 16 "possible" (17.6%), ten "likely" (11.0%) and one "very likely" (1.1%) synergism. The highest CSP could be achieved in HT-29 for the combination of DCA + PX-478 which led to an average increase in inhibition of cell growth of 62.4% compared to the sum of the single dose effects determined for both drugs. Therefore, we performed deeper investigations with the combination of DCA + PX-478 in different cancer cell lines in a separate study. The second highest value was obtained for DCA + NHI-2 (43.4%) in MCF-7. Four combinations were projected to be synergistic in both cell lines: Nutlin-3 + YM155, DCA + Metformine, DCA + PX-478 and Nutlin-3 + PX-478. 
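As a minimal sketch of how a CSP value and its screening label are obtained: the function names are hypothetical, the worked 20%/10%/37% example is taken from the Methods, and the boundary handling at exactly 15 and 25 is our assumption since the text only gives the cut-off values.

```python
def csp(reduction_a, reduction_b, reduction_combo):
    """CSP in percentage points: combination effect minus the sum of the single-dose effects."""
    return reduction_combo - (reduction_a + reduction_b)

def csp_label(value):
    """Screening labels used in the MDIS (Fig. 4); edge cases at 15 and 25 are assumed."""
    if value > 25:
        return "+++ (very likely synergism)"
    elif value >= 15:
        return "++ (likely synergism)"
    elif value > 10:
        return "+ (possible synergism)"
    else:
        return "no projected synergism"

# Worked example from the Methods: A alone -20%, B alone -10%, combination -37% -> CSP = 7
print(csp(20, 10, 37), csp_label(csp(20, 10, 37)))
# The strongest screening hit reported for HT-29 (DCA + PX-478) had a CSP of 62.4:
print(csp_label(62.4))
```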
DCA, PX-478, Nutlin-3 and NHI-2 exhibit the highest potential for synergistic interactions in MDIS

There were substantial differences in the count of potential synergies and their strength for the 14 compounds. The total number of "+" attributed to a compound by MDIS illustrates the synergistic potential of a compound since it summarizes quantity and quality of predicted synergistic interactions. With a total of 19 "+" the two compounds DCA and PX-478 have the highest synergistic potential. While PX-478 has the highest count of possible synergisms (12), DCA compensates a lower count (10) with stronger predictions (one vs. two "very likely" synergisms). Additionally, with a total of 11 projections each, Nutlin-3 with 16 "+" and NHI-2 with 15 "+" show high synergistic potential. The lowest count of synergistic interactions was identified for the two PI3K-pathway targeting drugs Pictilisib and InoC2PAF with 0 and 2 predictions, respectively. YM155 had seven projections in MCF-7 and only one in HT-29. For 6-Shogaol, the opposite was the case: five predictions in HT-29 and none in MCF-7.

Analysis of the synergies by the method of Chou and Talalay

For further evaluation of these predicted synergisms according to the method of Chou and Talalay, we used the CompuSyn software to calculate the combination indices (CI). The CI is a quantitative value for the synergism of a drug combination at specific concentrations. A value below 0.9 indicates synergism, and the lower the CI, the stronger the synergism: a value below 0.3 indicates a "strong", 0.3 to 0.7 a "robust", 0.7 to 0.85 a "moderate" and 0.85 to 0.9 a "slight" synergism. Values from 0.9 to 1.1 show a nearly "additive" effect and a CI above 1.1 indicates "antagonism". Furthermore, the significance of the differences between a combination and the respective single compounds was evaluated by unpaired t-test. We evaluated seven combinations projected by MDIS (Table 2). Five of the latter could be confirmed by the method of Chou and Talalay, while two combinations, PRIMA-1met + Nutlin-3 and Nutlin-3 + 3-Bromopyruvate, did not reach significant p-values in detected synergisms (CI = 0.89 and 0.72, respectively). Since the combination of DCA + NHI-2 was promising in MCF-7 cells in both the screening trial (CSP = 43) and the method of Chou and Talalay (CI = 0.27), we further verified it in HT-29 (Table 2 and Fig. 6c, d). Although it could not be detected by MDIS, we found the combination to be synergistic in HT-29 cells (CI = 0.50). Furthermore, we verified the most promising synergisms in MDA-MB-231 by calculating the CI value using the dose response curves and the equation of Loewe [41]. Thereby, we could confirm the top synergies DCA + NHI-2 (CI = 0.62) and Nutlin-3 + PX-478 (CI = 0.62). Since we found a "likely" synergism between DCA + Nutlin-3 in p53 wild-type MCF-7 cells (Fig. 4), we checked the combination of the p53-mut-binding PRIMA-1met + DCA in the p53-mutated MDA-MB-231 cells. Interestingly, a synergy exclusively found in MDA-MB-231 cells could be confirmed (CI = 0.78). After the evaluation of MDIS, we named synergies with CSP values between ten and 15 "possible", 15 and 25 "likely" and greater than 25 "very likely" synergisms. Out of the seven verified synergies, we could prove all "likely" and "very likely" (4/4) but only two of the four possible synergisms. Thus, we detected eight (8.8%) and 11 (12.1%) "likely" and "very likely" synergisms in HT-29 and MCF-7, respectively.
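To make the calculation behind these CI values concrete, the sketch below combines the two equations from the Methods: the median-effect equation gives the single-drug dose (Dx)i needed for a chosen fraction affected, and the CI is the sum of the dose ratios. The median-effect parameters and doses used here are invented for illustration; this is not a reimplementation of CompuSyn.

```python
def dose_for_effect(dm, m, fa):
    """Median-effect equation: Dx = Dm * (fa / (1 - fa)) ** (1 / m)."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, fa, dm1, m1, dm2, m2):
    """CI = (D)1/(Dx)1 + (D)2/(Dx)2 for a combination (d1, d2) producing fraction affected fa."""
    return d1 / dose_for_effect(dm1, m1, fa) + d2 / dose_for_effect(dm2, m2, fa)

# Hypothetical example: drug 1 with Dm = 10 µM, m = 1.2; drug 2 with Dm = 40 µM, m = 0.9.
# A combination of 3 µM + 12 µM that affects half of the cells (fa = 0.5) would give:
ci = combination_index(3, 12, 0.5, 10, 1.2, 40, 0.9)
print(round(ci, 2))  # 0.6 -> within the "robust synergism" band
```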
Table 2 Verified synergies Interpretation of the combination index When analysing drug interactions, looking at certain concentrations alone may lead to a false interpretation of synergism [42, 45]. The example of the synergism between PRIMA-1met + YM155 illustrates the principle of the CI-value interpretation (Fig. 5). At first sight, the combination of Prima-1met + YM155 shown in Fig. 5-D seems to exhibit stronger synergistic effects compared to lower dosages presented in Fig. 5-B. Contrarily to that assumption, the opposite is the case: 5-B shows indeed a "robust" synergism (CI = 0.34) while the effects shown in Fig. 5-D are not even "additive" (CI = 1.19). The explanation for this counter-intuitive finding is that doubling the single doses of PRIMA-1met + YM155 in EC50 results in a much stronger effect than the combination of both drugs at EC50 (Fig. 5-D, E). Therefore, one can conclude that the shape of and position on the curve is important to accurately describe and interpret synergisms. The easiest method to interpret synergistic effects of these curves consists in doubling the fractions of EC50. As a result, the CI calculations are mainly based on experimental data and can easily be interpreted by studying the curve progression. This method also helps minimizing errors that might occur with mathematical dose fitting [45]. Synergy interpretation. HT-29 and MCF-7 cells were seeded into a 96 well plate at a density of 1.5 (HT-29) and 0.5 × 104/well (MCF-7), incubated 24 h to a confluency of 50%, then medicated with increasing concentrations of PRIMA-1met, YM155 and their combination for 48 h. Cell viability was assessed using the MTT-Assay and curve was further analysed using Graphpad Prism. CI-Values were calculated by CompuSyn and illustrated with red dots in the diagram on the right. Each dot corresponds to the respective combination shown in the graph to the left. CI-values underneath the dashed line (< 1) imply a synergism. Viability data were also illustrated in a bar-chart design at 0.125x (B) 0.25x (C), 1x (D) and 2x EC50 (E) The combinations of Nutlin-3 + PX-478 and DCA + NHI-2 act synergistically in MCF-7, MDA-MB and HT-29 cells The combination Nutlin-3 (inhibition of MDM-2) + PX-478 (inhibition of HIF-1α) was predicted to be synergistic by MDIS for HT-29 and MCF-7 cells. Via the method of Chou and Talalay, we analysed this synergism over the whole dose response curve. Exemplarily, we show in Fig. 6a and b the dose response curves for the combination Nutlin-3 + PX-478 and the single compounds. Best CI-values were 0.33 for MCF-7 (Fig. 6a) as well as 0.63 and 0.62 for HT-29 and MDA-MB-231, respectively (Table 2). In the reduction of protein mass (Fig. 6b) as well as the reduction of viability (Fig. 6a) it was mainly synergistic at 0.125x, 0.25x and 0,5x EC50. Further, we confirmed the synergism of DCA + NHI-2 (PDH activation and LDH-A inhibition) in all three cell lines (Fig. 6c and d for MCF-7 and Table 2 for HT-29 and MDA-MB-231). A "strong" synergism was identified for the cell line MCF-7 (CI = 0.27) whereas a "robust" synergism could be found in HT-29 (CI = 0.50) and MDA-MB-231 (CI = 0.62). Nutlin-3 + PX-478 and DCA + NHI-2. MCF-7 cells were seeded into a 96 well plate at a density of 0.5 × 104/well (MCF-7), incubated 24 h to a confluency of 50%, then incubated with increasing concentrations of Nutlin-3, PX-478 and their combination (a, b) as well as DCA, NHI-2 and their combination (c, d) and) for 48 h. 
Then, viability was assessed using MTT assay (a, c) and protein mass was assessed using SRB assay (b, d). CI-Values were calculated by CompuSyn and illustrated with red dots in the diagram on the right. Each dot corresponds to the respective combination shown in the graph to the left. The effects of EC50of DCA, NHI-2 and DCA + NHI-2 on the cell confluency is illustrated on the bottom (e) We present here a three-step concept to systematically screen for and reliably describe synergies between a high number of compounds at a minimal cost and time budget. With that concept, we identified five synergistic combinations of genome and metabolism stabilizing compounds of which Nutlin-3 + PX-478 as well as DCA + NHI-2 were found in all three cell lines MCF-7, MDA-MB-231 (breast cancer) and HT-29 (colon cancer). In contrast to the here presented approach, Borisy and colleagues designed a sophisticated high-throughput robot-assisted approach where 30 antifungal drugs and their 435 pairwise combinations were screened for potential synergistic interactions. For their screening experiment, six different concentrations with two technical replicates were used, resulting in a total of 31,320 data points [34]. For 14 compounds the same experimental design would result in 6552 compared to 303 data points with MDIS. While this approach provides a substantial amount of valuable information, it is material, cost and time intensive. Thus, optimization in material use and number of conducted experiments is needed to make drug interaction research feasible for a broader range of work groups. Dose-ratio based screening Yin and colleagues reviewed, how computational based approaches such as the Feedback System Control [37] or Stochastic Searching Model with an heuristic idea can help to minimize costs of mainly experimental approaches [32]. Both approaches incorporate different dose-ratios already in the screening process. This design respects the fact that compounds interacting synergistically at a specific dose ratio may be antagonistic at other ratios [35]. Consequently, a screening without different dose-ratios may fail to detect synergisms that have antagonistic, additive or just slightly synergistic effects in the tested dose-ratio. In the here presented minimalistic drug interaction screening, this phenomenon is reflected in the fact that DCA + NHI-2 has not been projected to be synergistic by MDIS in HT-29 but could be proved by the method of Chou and Talalay (CI = 0.50). The opposite accounts for PRIMA-1met + YM155 which is synergistic in low doses (e.g. 0.125x EC50) and antagonistic at 8x EC50. Nevertheless, MDIS represents a substantial decrease in experimental scope: If for example three concentrations (e.g. EC25, EC50 and EC75) and all possible dose-ratios are used instead of one, the number of combinations increases from one to nine. Additionally, MDIS resulted in a total of 19 potential synergisms in HT-29 and 27 in MCF-7, a number that requires immense efforts to further verify and describe. Even when selecting only "likely" (++) and "very likely" (+++) synergisms, nine (HT-29) and 11 (MCF-7) combinations remain (Fig. 4). The focus on mechanistically interesting and most solid combinations in different cell lines is necessary to select most promising candidates. A dose-ratio based screening method is likely to detect even weak synergisms at an optimized dose-ratio and in that way it multiplies the number of projections. 
Therefore, we recommend the here presented cost-efficient design for projects that aim to evaluate interesting compounds of newly anticipated antitumor concepts for their synergistic potency. We recommend verifying the synergy over the entire dose-response curve at a constant dose-ratio before the determination of the optimal dose-ratios. Dose-ratio based screening might rather be appropriate for detailed analyses in order to optimize therapies of already implemented compounds [34]. Synergy interpretation After performing the three phases of the here proposed concept, we consider "likely" and "very likely" synergisms predicted by MDIS as the most relevant and solid results. In HT-29, we detected eight (8.8%) and in MCF-7 11 (12.1%) "likely" and "very likely" synergisms. Out of this group, we could confirm four of four tested synergisms (Table 2). In the case of "possible" synergisms, only two of four tested combinations could be confirmed. Nutlin-3 + PRIMA-1met and Nutlin-3 + 3-Bromopyruvate did reach synergistic CI values at some concentrations (CI: 0.89 and 0.72 respectively), but without significance. Furthermore, the CI-values over the whole dose-respond curve of these combinations were mainly additive or even antagonistic. Another "possible" synergisms detected by MDIS in MCF-7 is Metformin + Nutlin-3 which has already been described for mesothelioma cells by Shimazu et al. [52]. In general, "possible" synergisms might be worth examining as the "robust" synergistic effect between Nutlin-3 + PX-478 in HT-29 (CI = 0.63) illustrates (Table 2). Out of the five detected and proven synergies, two top combinations were synergistic in all three cell lines. Nutlin-3 inhibits p53 degradation [21] while PX-478 modulates metabolism by inhibiting HIF-1α and thereby aerobic glycolysis [53]. While a mechanistic overlap is described in literature, we were – to the best of our knowledge - the first to detect this synergism. Lee and colleagues reported in 2009 that Nutlin-3 inhibits HIF-1α in a p53 dependent and vascular endothelial growth factor (VEGF) in a p53 independent manner [54]. These findings are supported by the fact that the Nutlin-3 + PX-478 showed the strongest synergism in the p53 wild-type cell line MCF-7 (CI = 0.33) compared to the p53 mutated cell lines HT-29 (CI = 0.63) and MDA-MB (CI = 0.62). The second combination present in all three cell lines is DCA (PDH activation [55]) + NHI-2 (LDH-A inhibition [56]) which showed a "strong" synergism for the cancer cell line MCF-7 (CI = 0.27) and "robust" synergisms for HT-29 (CI = 0.50) and MDA-MB-231 (CI = 0.62). This combination has not been described in literature yet and is particularly interesting as both compounds target the "Warburg" effect [55], inhibiting the conversion of pyruvate to lactate and promoting its entrance into the tricarboxylic acid cycle. Out of the other four synergisms we were able to identify and prove, DCA + Metformine was already described thoroughly in literature [57]. Validation of conjectured synergies For the verification of the synergisms projected by MDIS, the widely accepted median-effect principle of the mass action law implemented in the method of Chou and Talalay was used [58]. To keep the transformation error low, we decided not to simplify our experiments by the overextended use of calculation and curve fitting for the determination of synergism [45]. In detail, we combined our compounds in a constant ratio of EC50 to EC50, stepwise doubling the dosages. 
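A constant-ratio series of the kind used for verification (EC50:EC50 ratio with stepwise doubling of the EC50 fraction, e.g. 0.125x up to 2x as in Fig. 5) is straightforward to generate. In the sketch below, the EC50 values are placeholders, not values from Table 1.

```python
def constant_ratio_series(ec50_a, ec50_b, fractions=(0.125, 0.25, 0.5, 1.0, 2.0)):
    """Dose pairs at a fixed EC50:EC50 ratio, stepwise doubling the EC50 fraction."""
    return [(f * ec50_a, f * ec50_b) for f in fractions]

# Placeholder EC50 values of 8 µM (drug A) and 40 µM (drug B):
for dose_a, dose_b in constant_ratio_series(8, 40):
    print(f"{dose_a:6.2f} + {dose_b:6.2f}")
```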
We favour this method as the data necessary to calculate the CI-values have a solid empirical base. When a combination commends itself for further investigation, we suggest the following analyses: The dose-ratio is crucial in the description of synergisms but cost and time expensive. Therefore, we suggest evaluating the most effective dose-ratios after a synergy has successfully been identified and proven. To further evaluate the effectiveness of the detected combination, we recommend utilizing cell lines with different properties (e.g. p53 status) and or in different tumor entities [35]. In this work, we focused intensively on synergistic drug interaction in the detection of potential combinatory approaches. Synergistic effects are desirable, but additive effects or in some cases even compounds with slight antagonisms might be useful as well [18, 59]. For example, if the necessary single dose cannot be reached in vivo for pharmacodynamics reasons or dose limiting toxicity, a combination with a higher cumulative dose might result in a better outcome. With respect to the genome and metabolism stabilizing antitumor approach, we conducted a systematic literature research to identify matching compounds. In contrast, large-scale prediction of drug combinations via different databases [18, 39] is another promising way of narrowing down the field of potential compounds. Generally, we based the calculation of the CI-values on substantial experimental data. If only half of the curve is measured experimentally while the other parts are calculated via curve fitting, changes in slope might be missed which could lead to false low CI-values [45]. Therefore, the amount of experimental data points and EC-range covered must be considered in the interpretation of the resulting CI-values. Clinical implications To further evaluate promising combinations, taking already conducted clinical trials of the respective single compounds into account is important to identify potential obstacles and problems in the translational phase. When looking at DCA, "clinicaltrials.gov" does list 37 studies in the context of cancer and 81 studies in total. In one trial where patients with previously treated metastatic breast or non-small cell lung cancer were treated with DCA, the authors concluded that DCA should be used for patients with longer life expectancy and potentially in combination [60] (ClinicalTrials.gov Identifier: NCT01029925). PX-478 seems to be abandoned since the last clinical trial was conducted in 2010 (ClinicalTrials.gov Identifier: NCT00522652). In this phase 1 clinical trial PX-478 has been well tolerated in low doses with consistent HIF-1α inhibition in patients with advanced solid tumors [61]. A sufficient effect with well tolerated doses to commence with a phase 2 clinical trial seemed to be missing although a HIF-1α inhibition was achieved. As a conclusion, it can be stated that these two drugs are tolerated in the respectively needed dose while a convincing effect on cancer was missing. We believe that synergism is an important way to successfully include promising compounds like DCA and or PX-478 in the therapy of cancer. The synergisms with NHI-2 or Nutlin-3 identified in this study may be a solution in this context. For NHI-2 and Nutlin-3 no literature on clinical trials is available. However, it also seems that the effect of NHI-2 and Nutlin-3 on normal non-cancerous cells is tolerable. 
In vitro treatment with Nutlin-3 induced a significant cytotoxicity on primary CD19(+) B-CLL cells, but not on normal CD19(+) B lymphocytes, peripheral-blood mononuclear cells or bone marrow hematopoietic progenitors [62]. As for the molecular mechanism of NHI-2, Calvaresi et al. stated that LDH-A inhibition is unlikely to harm normal tissues [63]. The here presented three-step concept proved to be cost and time efficient with respect to the resulting data at the example of our combinatory approach. "Likely" and "very likely" synergisms proved to be reliable predictions of MDIS after verification by the method of Chou and Talalay. The combination of Nutlin-3 + PX-478 as well as DCA + NHI-2 could be identified in all three cell lines. In vivo experiments are required to evaluate the potential of these combinations for clinical studies. All data generated or analysed during this study are included in this published article and its supplementary information file (Additional file 1). CSP: Conjectured Synergistical Potency CI: combination index DCA: Dichloroacetat GMSAT: Genome and Metabolism Stabilizing Antitumor Therapy HIF-1α: hypoxia inducible factor α MDIS: Minimal Drug Interaction Screening PI3K: phosphatidylinositol 3-kinase w/v: weight per volume v/v: volume per volume Society AC. American Cancer Society. Cancer Facts & Figures 2018 (p14-15, p26-27). Atlanta: American Cancer Society; 2018. Available from: https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2018/cancer-facts-and-figures-2018.pdf. Güth U, Magaton I, Jane D, Fisher R, Schötzau A, Vetter M. Primary and secondary distant metastatic breast cancer : two sides of the same coin. The Breast. 2014;23(1):26–32 Available from: https://doi.org/10.1016/j.breast.2013.10.007. Nowell P. The clonal evolution of tumor cell populations. Science (80- ). 1976;194(4260):23–8. [cited 2019 Aug 10] Available from: http://www.ncbi.nlm.nih.gov/pubmed/959840. Research America. U.S. Investments in Medical and Health Research and Development. 2016 [cited 2019 Jul 22]. Available from: https://www.researchamerica.org/sites/default/files/2016US_Invest_R%26D_report.pdf. Moses H, Matheson DHM, Cairns-Smith S, George BP, Palisch C, Dorsey ER. The Anatomy of Medical Research. JAMA. 2015 [cited 2019 Jul 22];313(2):174. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25585329. Swanton C, Nicke B, Marani M, Kelly G, Downward J. Initiation of high frequency multi-drug resistance following kinase targeting by siRNAs. Cell Cycle. 2007;6(16):2001–4. Chisholm RH, Lorenzi T, Clairambault J. Cell population heterogeneity and evolution towards drug resistance in cancer: biological and mathematical assessment, theoretical treatment optimisation. Biochim Biophys Acta - Gen Subj. 2016;1860(11):2627–45 Available from: https://doi.org/10.1016/j.bbagen.2016.06.009. Andor N, Maley CC, Ji HP. Genomic Instability in Cancer: Teetering on the Limit of Tolerance. Cancer Res. 2017;77(9):2179–85. [cited 2019 Aug 15] Available from: http://www.ncbi.nlm.nih.gov/pubmed/28432052. Vander Heiden MG, DeBerardinis RJ. Understanding the Intersections between Metabolism and Cancer Biology. Cell. 2017;168(4):657–69. [cited 2019 Aug 15] Available from: http://www.ncbi.nlm.nih.gov/pubmed/28187287. Roy D, Sheng GY, Herve S, Carvalho E, Mahanty A, Yuan S, et al. Interplay between cancer cell cycle and metabolism: Challenges, targets and therapeutic opportunities. Biomed Pharmacother. 2017;89:288–96. 
[cited 2019 Aug 15] Available from: https://www.sciencedirect.com/science/article/abs/pii/S0753332216320923?via%3Dihub. Evan GI, Vousden KH. Proliferation, cell cycle and apoptosis in cancer. Nature. 2001;411(6835):342–8. [cited 2019 Aug 15] Available from: http://www.nature.com/articles/35077213. Moore RD, Chaisson RE. Natural history of HIV infection in the era of combination antiretroviral therapy. 1999;(June). Roberts J, Bebenek K, Kunkel T. The accuracy of reverse transcriptase from HIV-1. Science (80- ). 1988;242(4882):1171–3. [cited 2019 Jul 23] Available from: http://www.ncbi.nlm.nih.gov/pubmed/2460925. Memarnejadian A, Nikpoor AR, Davoodian N, Kargar A, Mirzadeh Y, Gouklani H. HIV-1 Drug Resistance Mutations among Antiretroviral Drug-Experienced Patients in the South of Iran. Intervirology. 2019;1–8. [cited 2019 Jul 23] Available from: http://www.ncbi.nlm.nih.gov/pubmed/31311021. Kast RE, Karpel-Massler G, Halatsch M-E. CUSP9* treatment protocol for recurrent glioblastoma: aprepitant, artesunate, auranofin, captopril, celecoxib, disulfiram, itraconazole, ritonavir, sertraline augmenting continuous low dose temozolomide. Oncotarget. 2014;5(18):8052–82. Skaga E, Skaga IØ, Grieg Z, Sandberg CJ, Langmoen IA, Vik-Mo EO. The efficacy of a coordinated pharmacological blockade in glioblastoma stem cells with nine repurposed drugs using the CUSP9 strategy. J Cancer Res Clin Oncol. 2019;145(6):1495–507. [cited 2019 Aug 23] Available from: http://www.ncbi.nlm.nih.gov/pubmed/31028540. Halatsch M, Kast RE, Dwucet A, Hlavac M, Heiland T, Westhoff M, et al. Bcl-2/Bcl-xL inhibition predominantly synergistically enhances the anti-neoplastic activity of a low-dose CUSP9 repurposed drug regime against glioblastoma. Br J Pharmacol. 2019;bph.14773. [cited 2019 Aug 23] Available from: http://www.ncbi.nlm.nih.gov/pubmed/31222722. Al-Lazikani B, Banerji U, Workman P. Combinatorial drug therapy for cancer in the post-genomic era. Nat Biotechnol. 2012;30(7):679–92 Available from: http://www.ncbi.nlm.nih.gov/pubmed/22781697%5Cnhttp://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4320499&tool=pmcentrez&rendertype=abstract. Lane DP. p53, guardian of the genome. Nature. 1992;358(6381):15–6. [cited 2019 Aug 2] Available from: http://www.nature.com/articles/358015a0. Bykov VJN, Issaeva N, Shilov A, Hultcrantz M, Pugacheva E, Chumakov P, et al. Restoration of the tumor suppressor function to mutant p53 by a low-molecular-weight compound. Nat Med. 2002;8(3):282–8. [cited 2019 Jul 23] Available from: http://www.ncbi.nlm.nih.gov/pubmed/11875500. Vassilev LT, Vu BT, Graves B, Carvajal D, Podlaski F, Filipovic Z, et al. In vivo activation of the p53 pathway by small-molecule antagonists of MDM2. Science. 2004;303(5659):844–8. Available from: http://www.ncbi.nlm.nih.gov/pubmed/14704432. Lemos A, Leão M, Soares J, Palmeira A, Pinto M, Saraiva L, et al. Medicinal Chemistry Strategies to Disrupt the p53-MDM2/MDMX Interaction. Med Res Rev. 2016;36(5):789–844. [cited 2018 Oct 14] Available from: http://www.ncbi.nlm.nih.gov/pubmed/27302609. Chen Z, Lu W, Garcia-Prieto C, Huang P. The Warburg effect and its cancer therapeutic implications. J Bioenerg Biomembr. 2007;39(3):267–74. [cited 2019 Jul 23] Available from: http://link.springer.com/10.1007/s10863-007-9086-x. Wheaton WW, Weinberg SE, Hamanaka RB, Soberanes S, Sullivan LB, Anso E, et al. Metformin inhibits mitochondrial complex I of cancer cells to reduce tumorigenesis; 2014. p. 1–18. Cluntun AA, Lukey MJ, Cerione RA, Locasale JW. 
Glutamine Metabolism in Cancer: Understanding the Heterogeneity. Trends in cancer. 2017 [cited 2019 Sep 11];3(3):169–80. Available from: http://www.ncbi.nlm.nih.gov/pubmed/28393116. Gross MI, Demo SD, Dennison JB, Chen L, Chernov-Rogan T, Goyal B, et al. Antitumor Activity of the Glutaminase Inhibitor CB-839 in Triple-Negative Breast Cancer. Mol Cancer Ther. 2014;13(4):890–901. [cited 2018 Oct 15] Available from: http://www.ncbi.nlm.nih.gov/pubmed/24523301. Nakahara T, Takeuchi M, Kinoyama I, Minematsu T, Shirasuna K, Matsuhisa A, et al. YM155, a Novel Small-Molecule Survivin Suppressant , Induces Regression of Established Human Hormone-Refractory Prostate Tumor Xenografts 2007;(17):8014–8021. Vadas O, Burke JE, Zhang X, Berndt A, Williams RL. Structural Basis for Activation and Inhibition of Class I Phosphoinositide 3-Kinases. Sci Signal. 2011;4(195):re2–re2. [cited 2019 Jul 24] Available from: https://stke.sciencemag.org/content/4/195/re2.long. Fischer A, Müller D, Zimmermann-Kordmann M, Kleuser B, Mickeleit M, Laabs S, et al. The ether lipid inositol-C2-PAF is a potent inhibitor of cell proliferation in HaCaT cells. ChemBioChem. 2006;7(3):441–9. Pelz C, Häckel S, Semini G, et al. Inositol-C2-PAF acts as a biological response modifier and antagonizes cancer-relevant processes in mammary carcinoma cells. Cell Oncol (Dordr). 2018;41(5):505–16. Available from: https://pubmed.ncbi.nlm.nih.gov/30047091/. Hung J-Y, Hsu Y-L, Li C-T, Ko Y-C, Ni W-C, Huang M-S, et al. 6-Shogaol, an Active Constituent of Dietary Ginger, Induces Autophagy by Inhibiting the AKT/mTOR Pathway in Human Non-Small Cell Lung Cancer A549 Cells. J Agric Food Chem. 2009 28;57(20):9809–16. [cited 2019 Jul 24] Available from: https://pubs.acs.org/doi/10.1021/jf902315e. Yin Z, Deng Z, Zhao W, Cao Z. Searching synergistic dose combinations for anticancer drugs. Front Pharmacol. 2018;9(MAY):1–7. Sheng Z, Sun Y, Yin Z, Tang K, Cao Z. Advances in computational approaches in identifying synergistic drug combinations. Brief Bioinform. 2017;19(6):1172–82. [cited 2019 Jul 26] Available from: http://www.ncbi.nlm.nih.gov/pubmed/28475767. Borisy AA, Elliott PJ, Hurst NW, Lee MS, Lehar J, Price ER, et al. Systematic discovery of multicomponent therapeutics. Proc Natl Acad Sci U S A. 2003;100(13):7977–82. [cited 2019 Jul 29] Available from: http://www.ncbi.nlm.nih.gov/pubmed/12799470. Mayer LD, Janoff AS. Optimizing Combination Chemotherapy by Controlling Drug Ratios. Mol Interv. 2007;7(4):216–23. [cited 2019 Jul 29] Available from: http://www.ncbi.nlm.nih.gov/pubmed/17827442. Weiss A, Berndsen RH, Ding X, Ho C-M, Dyson PJ, van den Bergh H, et al. A streamlined search technology for identification of synergistic drug combinations. Sci Rep. 2015;5:14508. [cited 2019 Jul 26] Available from: http://www.ncbi.nlm.nih.gov/pubmed/26416286. Nowak-Sliwinska P, Weiss A, Ding X, Dyson PJ, van den Bergh H, Griffioen AW, et al. Optimization of drug combinations using Feedback System Control. Nat Protoc. 2016;11(2):302–15. [cited 2019 Jul 29] Available from: http://www.ncbi.nlm.nih.gov/pubmed/26766116. Weiss A, Ding X, van Beijnum JR, Wong I, Wong TJ, Berndsen RH, et al. Rapid optimization of drug combinations for the optimal angiostatic treatment of cancer. Angiogenesis. 2015;18(3):233–44. [cited 2019 Aug 21] Available from: http://www.ncbi.nlm.nih.gov/pubmed/25824484. Li P, Huang C, Fu Y, Wang J, Wu Z, Ru J, et al. Large-scale exploration and analysis of drug combinations. Bioinformatics. 2015;31(12):2007–16. Greco WR, Bravo G, Parsons JC. 
The Search for Synergy: A Critical Review from a Response Surface Perspective*. 1995 . [cited 2019 Jul 24] Available from: http://pharmrev.aspetjournals.org/content/pharmrev/47/2/331.full.pdf. LOEWE S. The problem of synergism and antagonism of combined drugs. Arzneimittelforschung. 1953;3(6):285–90. [cited 2019 Aug 27] Available from: http://www.ncbi.nlm.nih.gov/pubmed/13081480. Chou T-C. Theoretical basis, experimental design, and computerized simulation of synergism and antagonism in drug combination studies. Pharmacol Rev. 2006;58(3):621–81. [cited 2015 Sep 26] Available from: http://www.ncbi.nlm.nih.gov/pubmed/16968952. Lines C, Krueger SA, Wilson GD. Cancer Cell culture. Methods. 2011;731:359–70 Available from: http://www.springerlink.com/index/10.1007/978-1-61779-080-5. Martin N, Trials HIVC, Tumor X, Nude I, Basis T, Design E, et al. CompuSyn by Ting-Chao Chou. 2010;2005(D):3–4. Zhao L, Wientjes MG, Au JL-S. Evaluation of combination chemotherapy: integration of nonlinear regression, curve shift, isobologram, and combination index analyses. Clin Cancer Res. 2004;10(23):7994–8004. [cited 2016 Apr 13] Available from: http://www.ncbi.nlm.nih.gov/pubmed/15585635. Dai X, Cheng H, Bai Z, Li J. Breast cancer cell line classification and its relevance with breast tumor subtyping. J Cancer. 2017;8(16):3131–41. Comşa Ş, Cîmpean AM, Raica M. The story of MCF-7 breast Cancer Cell line: 40 years of experience in research. Anticancer Res. 2015;35(6):3147–54. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26026074. Berglind H, Pawitan Y, Kato S. Analysis of p53 mutation status in human cancer cell lines. Cancer Biol … 2008;(April):701–10. Available from: http://www.landesbioscience.com/journals/cbt/14BerglindCBT7-5.pdf. Chou TC, Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. Adv Enzyme Regul. 1984;22:27–55. [cited 2016 mar 8] Available from: http://www.ncbi.nlm.nih.gov/pubmed/6382953. Chou T-C. Preclinical versus clinical drug combination studies. Leuk Lymphoma. 2008;49(11):2059–80. [cited 2016 Apr 9] Available from: http://www.ncbi.nlm.nih.gov/pubmed/19021049. Chou T. Preclinical versus clinical drug combination studies. 2016;8194(April). Shimazu K, Tada Y, Morinaga T, Shingyoji M, Sekine I, Shimada H, et al. Metformin produces growth inhibitory effects in combination with nutlin-3a on malignant mesothelioma through a cross-talk between mTOR and p53 pathways. BMC Cancer. 2017;17(1):309. [cited 2018 Jun 9] Available from: http://bmccancer.biomedcentral.com/articles/10.1186/s12885-017-3300-y. Koh MY, Spivak-Kroizman T, Venturini S, Welsh S, Williams RR, Kirkpatrick DL, et al. Molecular mechanisms for the activity of PX-478, an antitumor inhibitor of the hypoxia-inducible factor-1. Mol Cancer Ther. 2008;7(1):90–100. [cited 2018 Apr 13] Available from: http://www.ncbi.nlm.nih.gov/pubmed/18202012. Lee YM, Lim JH, Chun YS, Moon HE, Lee MK, Huang LE, et al. Nutlin-3, an Hdm2 antagonist, inhibits tumor adaptation to hypoxia by stimulating the FIH-mediated inactivation of HIF-1α. Carcinogenesis. 2009;30(10):1768–75. Li B, Zhu Y, Sun Q, Yu C, Chen L, Tian Y, et al. Reversal of the Warburg effect with DCA in PDGF-treated human PASMC is potentiated by pyruvate dehydrogenase kinase-1 inhibition mediated through blocking Akt/GSK-3β signalling. Int J Mol Med. 2018;42(3):1391–400. [cited 2019 Jul 23] Available from: http://www.ncbi.nlm.nih.gov/pubmed/29956736. Allison SJ, Knight JRP, Granchi C, Rani R, Minutolo F, Milner J, et al. 
Identification of LDH-A as a therapeutic target for cancer cell killing via (i) p53/NAD(H)-dependent and (ii) p53-independent pathways. Oncogenesis. 2014;3(5):e102 Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4035693&tool=pmcentrez&rendertype=abstract. Li B, Li X, Ni Z, Zhang Y, Zeng Y, Yan X, et al. Dichloroacetate and metformin synergistically suppress the growth of ovarian cancer cells. Oncotarget. 2016;7(37):1–13 Available from: http://www.ncbi.nlm.nih.gov/pubmed/27449090. Chou T. Preclinical versus clinical drug combination studies. Leuk Lymphoma. 2008;49(11):2059–80. [cited 2016 Apr 9] Available from: http://www.ncbi.nlm.nih.gov/pubmed/19021049. Zimmermann GR, Lehár J, Keith CT. Multi-target therapeutics: when the whole is greater than the sum of the parts. Drug Discov Today. 2007;12(1–2):34–42. Garon EB, Christofk HR, Hosmer W, Britten CD, Bahng A, Crabtree MJ, et al. Dichloroacetate should be considered with platinum - based chemotherapy in hypoxic tumors rather than as a single agent in advanced non - small cell lung cancer; 2014. p. 443-52 . Tibes R, R, Falchook GS, Von Hoff DD, Weiss GJ, Iyengar T, Kurzrock R, et al. Results from a phase I, dose-escalation study of PX-478, an orally available inhibitor of HIF-1α. J Clin Oncol. 2010 20;28(15_suppl):3076–3076. [cited 2019 Jul 22] Available from: http://ascopubs.org/doi/10.1200/jco.2010.28.15_suppl.3076. Secchiero P, Barbarotto E, Tiribelli M, Zerbinati C, Di Iasio MG, Gonelli A, et al. Functional integrity of the p53-mediated apoptotic pathway induced by the nongenotoxic agent nutlin-3 in B-cell chronic lymphocytic leukemia (B-CLL). Blood. 2006;107(10):4122–9. Calvaresi EC, Granchi C, Tuccinardi T, et al. Dual targeting of the Warburg effect with a glucose-conjugated lactate dehydrogenase inhibitor. Chembiochem. 2013;14(17):2263–7. Available from: https://pubmed.ncbi.nlm.nih.gov/24174263/. We gratefully acknowledge the support of Sarra Amroune for editing the English version and for critical discussions of this manuscript. We acknowledge support from the German Research Foundation (DFG) and the Open Access Publication Fund of Charité – Universitätsmedizin Berlin. Thanks for the support throughout the project to Dr. Jutta Hinke-Ruhnau, Dr. Karsten Parczyk and Lilith Marie Bechinger. Jonas Parczyk received a 6-month scholarship by the Berlin Institute of Health during his doctoral thesis. Jérôme Ruhnau and Jonas Parczyk contributed equally to this work. Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Biochemistry, Charitéplatz 1, 10117, Berlin, Germany Jérôme Ruhnau, Jonas Parczyk, Kerstin Danker, Britta Eickholt & Andreas Klein Jérôme Ruhnau Jonas Parczyk Kerstin Danker Britta Eickholt Andreas Klein Conceptualization of the project was done by JP, JR and AK. Experiments were performed by JP and JR. AK was responsible for project administration and supervision. Writing and editing was done by JP, JR, AK, BE and KD. All authors read and approved the manuscript. Correspondence to Jérôme Ruhnau or Jonas Parczyk. None of our cell lines required ethics approval. Combination experiments, MCF-7 dose-respond-curves, HT-29 dose respond curves, MDA-MB dose respond curves and MDIS (minimalistic drug interaction screening). In this file all data concerning the combination experiments, the dose respond curves of the three cell lines and the MDIS can be found. Ruhnau, J., Parczyk, J., Danker, K. et al. 
Synergisms of genome and metabolism stabilizing antitumor therapy (GMSAT) in human breast and colon cancer cell lines: a novel approach to screen for synergism. BMC Cancer 20, 617 (2020). https://doi.org/10.1186/s12885-020-07062-2

Keywords: drug combination, cancer therapy, Nutlin-3, Dichloroacetate, NHI-2, experimental therapeutics and drug development
2.E: Foundation (exercises)
Book: Intermediate Algebra (OpenStax), 2: Solving Linear Equations
Chapter Review Exercises
Use a General Strategy to Solve Linear Equations Use a Problem-Solving Strategy Solve a formula for a Specific Variable Solve Mixture and Uniform Motion Applications Solve Linear Inequalities Solve Compound Inequalities Solve Absolute Value Inequalities
Solve Equations Using the General Strategy for Solving Linear Equations
In the following exercises, determine whether each number is a solution to the equation. \(10x−1=5x,x= \frac{1}{5}\) \(−12n+5=8n,n=−\frac{5}{4}\) In the following exercises, solve each linear equation. \(6(x+6)=24\) \(−(s+4)=18\) \(s=−22\) \(23−3(y−7)=8\) \(\frac{1}{3}(6m+21)=m−7\) \(m=−14\) \(4(3.5y+0.25)=365\) \(0.25(q−8)=0.1(q+7)\) \(q=18\) \(8(r−2)=6(r+10)\) \(5+7(2−5x)=2(9x+1)−(13x−57)\) \(x=−1\) \((9n+5)−(3n−7)=20−(4n−2)\) \(2[−16+5(8k−6)]=8(3−4k)−32\) \(k=\frac{3}{4}\)
Classify Equations
In the following exercises, classify each equation as a conditional equation, an identity, or a contradiction and then state the solution. \(17y−3(4−2y)=11(y−1)+12y−1\) \(9u+32=15(u−4)−3(2u+21)\) contradiction; no solution \(−8(7m+4)=−6(8m+9)\)
Solve Equations with Fraction or Decimal Coefficients
In the following exercises, solve each equation. \(\frac{2}{5}n−\frac{1}{10}=\frac{7}{10}\) \(n=2\) \(\frac{3}{4}a−\frac{1}{3}=\frac{1}{2}a+\frac{5}{6}\) \(\frac{1}{2}(k+3)=\frac{1}{3}(k+16)\) \(k=23\) \(\frac{5y−1}{3}+4=\frac{-8y+4}{6}\) \(0.8x−0.3=0.7x+0.2\) \(x=5\) \(0.10d+0.05(d−4)=2.05\)
Use a Problem Solving Strategy for Word Problems
In the following exercises, solve using the problem solving strategy for word problems. Three-fourths of the people at a concert are children. If there are 87 children, what is the total number of people at the concert? There are 116 people. There are nine saxophone players in the band. The number of saxophone players is one less than twice the number of tuba players. Find the number of tuba players.
Solve Number Word Problems
In the following exercises, solve each number word problem. The sum of a number and three is forty-one. Find the number. One number is nine less than another. Their sum is negative twenty-seven. Find the numbers. One number is two more than four times another. Their sum is negative thirteen. Find the numbers. \(−3,−10\) The sum of two consecutive integers is \(−135\). Find the numbers. Find three consecutive even integers whose sum is 234. Find three consecutive odd integers whose sum is 51. Koji has $5,502 in his savings account. This is $30 less than six times the amount in his checking account. How much money does Koji have in his checking account?
Solve Percent Applications
In the following exercises, translate and solve. What number is 67% of 250? 12.5% of what number is 20? \(160\) What percent of 125 is 150? In the following exercises, solve. The bill for Dino's lunch was $19.45. He wanted to leave 20% of the total bill as a tip. How much should the tip be? \($3.89\) Dolores bought a crib on sale for $350. The sale price was 40% of the original price. What was the original price of the crib? Jaden earns $2,680 per month. He pays $938 a month for rent. What percent of his monthly pay goes to rent? \(35\%\) Angel received a raise in his annual salary from $55,400 to $56,785. Find the percent change. Rowena's monthly gasoline bill dropped from $83.75 last month to $56.95 this month. Find the percent change. Emmett bought a pair of shoes on sale at 40% off from an original price of $138.
Find ⓐ the amount of discount and ⓑthe sale price. Lacey bought a pair of boots on sale for $95. The original price of the boots was $200. Find ⓐ the amount of discount and ⓑ the discount rate. (Round to the nearest tenth of a percent, if needed.) ⓐ \($105\) ⓑ \(52.5%\) Nga and Lauren bought a chest at a flea market for $50. They re-finished it and then added a 350% mark-up. Find ⓐthe amount of the mark-up and ⓑ the list price. Solve Simple Interest Applications Winston deposited $3,294 in a bank account with interest rate 2.6% How much interest was earned in five years? \($428.22\) Moira borrowed $4,500 from her grandfather to pay for her first year of college. Three years later, she repaid the $4,500 plus $243 interest. What was the rate of interest? Jaime's refrigerator loan statement said he would pay $1,026 in interest for a four-year loan at 13.5%. How much did Jaime borrow to buy the refrigerator? \($1,900\) In the following exercises, solve the formula for the specified variable. Solve the formula \(V=LWH\) for L. \(A=\frac{1}{2}d_1d_2\) for \(d_2\). \(d_2=\frac{2A}{d_1}\) \(h=48t+\frac{1}{2}at^2\) for t. 4x−3y=12 for y. \(y=\frac{4x}{3}−4\) Use Formulas to Solve Geometry Applications In the following exercises, solve using a geometry formula. What is the height of a triangle with area 67.567.5 square meters and base 9 meters? The measure of the smallest angle in a right triangle is 45°45° less than the measure of the next larger angle. Find the measures of all three angles. \(22.5°,67.5°,90°\) The perimeter of a triangle is 97 feet. One side of the triangle is eleven feet more than the smallest side. The third side is six feet more than twice the smallest side. Find the lengths of all sides. Find the length of the hypotenuse. \(26\) Find the length of the missing side. Round to the nearest tenth, if necessary. Sergio needs to attach a wire to hold the antenna to the roof of his house, as shown in the figure. The antenna is eight feet tall and Sergio has 10 feet of wire. How far from the base of the antenna can he attach the wire? Approximate to the nearest tenth, if necessary. Seong is building shelving in his garage. The shelves are 36 inches wide and 15 inches tall. He wants to put a diagonal brace across the back to stabilize the shelves, as shown. How long should the brace be? The length of a rectangle is 12 cm more than the width. The perimeter is 74 cm. Find the length and the width. \(24.5\) cm, \(12.5\) cm The width of a rectangle is three more than twice the length. The perimeter is 96 inches. Find the length and the width. The perimeter of a triangle is 35 feet. One side of the triangle is five feet longer than the second side. The third side is three feet longer than the second side. Find the length of each side. 9 ft, 14 ft, 12 ft Solve Coin Word Problems Paulette has $140 in $5 and $10 bills. The number of $10 bills is one less than twice the number of $5 bills. How many of each does she have? Lenny has $3.69 in pennies, dimes, and quarters. The number of pennies is three more than the number of dimes. The number of quarters is twice the number of dimes. How many of each coin does he have? nine pennies, six dimes, 12 quarters Solve Ticket and Stamp Word Problems In the following exercises, solve each ticket or stamp word problem. Tickets for a basketball game cost $2 for students and $5 for adults. The number of students was three less than 10 times the number of adults. The total amount of money from ticket sales was $619. How many of each ticket were sold? 
125 tickets were sold for the jazz band concert for a total of $1,022. Student tickets cost $6 each and general admission tickets cost $10 each. How many of each kind of ticket were sold? 57 students, 68 adults Yumi spent $34.15 buying stamps. The number of $0.56 stamps she bought was 10 less than four times the number of $0.41 stamps. How many of each did she buy? Solve Mixture Word Problems Marquese is making 10 pounds of trail mix from raisins and nuts. Raisins cost $3.45 per pound and nuts cost $7.95 per pound. How many pounds of raisins and how many pounds of nuts should Marquese use for the trail mix to cost him $6.96 per pound? \(2.2\) lbs of raisins, \(7.8\) lbs of nuts Amber wants to put tiles on the backsplash of her kitchen counters. She will need 36 square feet of tile. She will use basic tiles that cost $8 per square foot and decorator tiles that cost $20 per square foot. How many square feet of each tile should she use so that the overall cost of the backsplash will be $10 per square foot? Enrique borrowed $23,500 to buy a car. He pays his uncle 2% interest on the $4,500 he borrowed from him, and he pays the bank 11.5% interest on the rest. What average interest rate does he pay on the total $23,500? (Round your answer to the nearest tenth of a percent.) \(9.7%\) Solve Uniform Motion Applications When Gabe drives from Sacramento to Redding it takes him 2.2 hours. It takes Elsa two hours to drive the same distance. Elsa's speed is seven miles per hour faster than Gabe's speed. Find Gabe's speed and Elsa's speed. Louellen and Tracy met at a restaurant on the road between Chicago and Nashville. Louellen had left Chicago and drove 3.2 hours towards Nashville. Tracy had left Nashville and drove 4 hours towards Chicago, at a speed one mile per hour faster than Louellen's speed. The distance between Chicago and Nashville is 472 miles. Find Louellen's speed and Tracy's speed. Louellen 65 mph, Tracy 66 mph Two busses leave Amarillo at the same time. The Albuquerque bus heads west on the I-40 at a speed of 72 miles per hour, and the Oklahoma City bus heads east on the I-40 at a speed of 78 miles per hour. How many hours will it take them to be 375 miles apart? Kyle rowed his boat upstream for 50 minutes. It took him 30 minutes to row back downstream. His speed going upstream is two miles per hour slower than his speed going downstream. Find Kyle's upstream and downstream speeds. upstream 3 mph, downstream 5 mph At 6:30, Devon left her house and rode her bike on the flat road until 7:30. Then she started riding uphill and rode until 8:00. She rode a total of 15 miles. Her speed on the flat road was three miles per hour faster than her speed going uphill. Find Devon's speed on the flat road and riding uphill. Anthony drove from New York City to Baltimore, which is a distance of 192 miles. He left at 3:45 and had heavy traffic until 5:30. Traffic was light for the rest of the drive, and he arrived at 7:30. His speed in light traffic was four miles per hour more than twice his speed in heavy traffic. Find Anthony's driving speed in heavy traffic and light traffic. heavy traffic 32 mph, light traffic 66 mph Graph Inequalities on the Number Line In the following exercises, graph the inequality on the number line and write in interval notation. \(x<−1\) \(x\geq −2.5\) \(x\leq \frac{5}{4}\) \(x>2\) \(−2<x<0\) \(-5\leq x<−3\) \(0\leq x\leq 3.5\) In the following exercises, solve each inequality, graph the solution on the number line, and write the solution in interval notation. 
\(n−12\leq 23\) \(a+\frac{2}{3}\geq \frac{7}{12}\) \(9x>54\) \(\frac{q}{−2}\geq −24\) \(6p>15p−30\) \(9h−7(h−1)\leq 4h−23\) \(5n−15(4−n)<10(n−6)+10n\) \(\frac{3}{8}a−\frac{1}{12}a>\frac{5}{12}a+\frac{3}{4}\)

Translate Words to an Inequality and Solve
In the following exercises, translate and solve. Then write the solution in interval notation and graph on the number line.
Five more than z is at most 19.
Three less than c is at least 360.
Nine times n exceeds 42.
Negative two times a is no more than eight.

Solve Applications with Linear Inequalities
Julianne has a weekly food budget of $231 for her family. If she plans to budget the same amount for each of the seven days of the week, what is the maximum amount she can spend on food each day?
Rogelio paints watercolors. He got a $100 gift card to the art supply store and wants to use it to buy 12″ × 16″ canvases. Each canvas costs $10.99. What is the maximum number of canvases he can buy with his gift card?
Briana has been offered a sales job in another city. The offer was for $42,500 plus 8% of her total sales. In order to make it worth the move, Briana needs to have an annual salary of at least $66,500. What would her total sales need to be for her to move? at least $300,000
Renee's car costs her $195 per month plus $0.09 per mile. How many miles can Renee drive so that her monthly car expenses are no more than $250?
Costa is an accountant. During tax season, he charges $125 to do a simple tax return. His expenses for buying software, renting an office, and advertising are $6,000. How many tax returns must he do if he wants to make a profit of at least $8,000? at least 112 jobs
Jenna is planning a five-day resort vacation with three of her friends. It will cost her $279 for airfare, $300 for food and entertainment, and $65 per day for her share of the hotel. She has $550 saved towards her vacation and can earn $25 per hour as an assistant in her uncle's photography studio. How many hours must she work in order to have enough money for her vacation?

Solve Compound Inequalities with "and"
In each of the following exercises, solve each inequality, graph the solution, and write the solution in interval notation.
\(x\leq 5\) and \(x>−3\)
\(4x−2\leq 4\) and \(7x−1>−8\)
\(5(3x−2)\leq 5\) and \(4(x+2)<3\)
\(\frac{3}{4}(x−8)\leq 3\) and \(\frac{1}{5}(x−5)\leq 3\)
\(\frac{3}{4}x−5\geq −2\) and \(−3(x+1)\geq 6\)
\(−5\leq 4x−1<7\)

Solve Compound Inequalities with "or"
\(5−2x\leq −1\) or \(6+3x\leq 4\)
\(3(2x−3)<−5\) or \(4x−1>3\)
\(\frac{3}{4}x−2>4\) or \(4(2−x)>0\)
\(2(x+3)\geq 0\) or \(3(x+4)\leq 6\)
\(\frac{1}{2}x−3\leq 4\) or \(\frac{1}{3}(x−6)\geq −2\)

Solve Applications with Compound Inequalities
Liam is playing a number game with his sister Audry. Liam is thinking of a number and wants Audry to guess it. Five more than three times her number is between 2 and 32. Write a compound inequality that shows the range of numbers that Liam might be thinking of.
Elouise is creating a rectangular garden in her back yard. The length of the garden is 12 feet. The perimeter of the garden must be at least 36 feet and no more than 48 feet. Use a compound inequality to find the range of values for the width of the garden. \(6\leq w\leq 12\)

Solve Absolute Value Equations
\(|x|=8\)
\(|y|=−14\) no solution
\(|z|=0\)
\(|3x−4|+5=7\) \(x=2,x=\frac{2}{3}\)
\(4|x−1|+2=10\)
\(−2|x−3|+8=−4\) \(x=9,x=−3\)
\(|12x+5|+4=1\)
\(|6x−5|=|2x+3|\) \(x=2,x=\frac{1}{4}\)

Solve Absolute Value Inequalities with "less than"
In the following exercises, solve each inequality. Graph the solution and write the solution in interval notation.
\(|x|\leq 8\) \(|2x−5|\leq 3\) \(|6x−5|<7\) \(|5x+1|\leq −2\)

Solve Absolute Value Inequalities with "greater than"
In the following exercises, solve. Graph the solution and write the solution in interval notation.
\(|x|>6\) \(|x|\geq 2\) \(|x−5|>−2\) \(|x−7|\geq 1\) \(3|x|+4\geq 1\)

Solve Applications with Absolute Value
A craft beer brewer needs 215,000 bottles per day. But this total can vary by as much as 5,000 bottles. What are the maximum and minimum expected usage at the bottling company? The minimum to maximum expected usage is 210,000 to 220,000 bottles
At Fancy Grocery, the ideal weight of a loaf of bread is 16 ounces. By law, the actual weight can vary from the ideal by 1.5 ounces. What range of weight will be acceptable to the inspector without causing the bakery to be fined?

\(−5(2x+1)=45\)
\(\frac{1}{4}(12m+28)=6+2(3m+1)\)
\(8(3a+5)−7(4a−3)=20−3a\) \(a=41\)
\(0.1d+0.25(d+8)=4.1\)
\(14n−3(4n+5)=−9+2(n−8)\) contradiction; no solution
\(3(3u+2)+4[6−8(u−1)]=3(u−2)\)
\(\frac{3}{4}x−\frac{2}{3}=\frac{1}{2}x+\frac{5}{6}\)
\(|3x−4|=8\) \(x=4,x=−\frac{4}{3}\)
\(x+2y=5\) for y.
\(x<\frac{11}{4}\)
\(−2\leq x<5\)
\(8k\geq 5k−120\)
\(3c−10(c−2)<5c+16\)
\(\frac{3}{4}x−5\geq −2\) and \(\frac{1}{2}x−3\leq 4\) or \(\frac{1}{3}(x−6)\geq −2\)
\(|4x−3|\geq 5\)
In the following exercises, translate to an equation or inequality and solve.
Four less than twice x is 16.
Find the length of the missing side. \(10.8\)
One number is four more than twice another. Their sum is \(−47\). Find the numbers.
The sum of two consecutive odd integers is \(−112\). Find the numbers. \(−57,−55\)
Marcus bought a television on sale for $626.50. The original price of the television was $895. Find ⓐ the amount of discount and ⓑ the discount rate.
Bonita has $2.95 in dimes and quarters in her pocket. If she has five more dimes than quarters, how many of each coin does she have? 12 dimes, seven quarters
Kim is making eight gallons of punch from fruit juice and soda. The fruit juice costs $6.04 per gallon and the soda costs $4.28 per gallon. How much fruit juice and how much soda should she use so that the punch costs $5.71 per gallon?
The measure of one angle of a triangle is twice the measure of the smallest angle. The measure of the third angle is three times the measure of the smallest angle. Find the measures of all three angles. \(30°,60°,90°\)
The length of a rectangle is five feet more than four times the width. The perimeter is 60 feet. Find the dimensions of the rectangle.
Two planes leave Dallas at the same time. One heads east at a speed of 428 miles per hour. The other plane heads west at a speed of 382 miles per hour. How many hours will it take them to be 2,025 miles apart? \(2.5\) hours
Leon drove from his house in Cincinnati to his sister's house in Cleveland, a distance of 252 miles. It took him \(4\frac{1}{2}\) hours. For the first half hour, he had heavy traffic, and the rest of the time his speed was five miles per hour less than twice his speed in heavy traffic. What was his speed in heavy traffic?
Sara has a budget of $1,000 for costumes for the 18 members of her musical theater group. What is the maximum she can spend for each costume? At most $55.56 per costume.
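The short script below is a minimal sketch, not part of the textbook, that spot-checks a few of the numerical answers given in the review exercises above (the simple-interest deposit, the trail-mix mixture, and Anthony's driving speeds).

```python
# Minimal sketch (not part of the textbook): spot-check a few numerical
# answers from the review exercises above.

# Simple interest: $3,294 deposited at 2.6% for five years.
print(round(3294 * 0.026 * 5, 2))      # 428.22

# Trail mix: r lbs of raisins at $3.45/lb and (10 - r) lbs of nuts at $7.95/lb
# should average $6.96/lb, i.e. 3.45r + 7.95(10 - r) = 69.60.
r = (7.95 * 10 - 69.60) / (7.95 - 3.45)
print(round(r, 1), round(10 - r, 1))   # 2.2 7.8

# Uniform motion: 1.75 h in heavy traffic at h mph, then 2 h at (2h + 4) mph,
# covering 192 miles in total, i.e. 1.75h + 2(2h + 4) = 192.
h = (192 - 8) / 5.75
print(h, 2 * h + 4)                    # 32.0 68.0
```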
Oxidation number of sulphur in Na2S4O6

Question: What is the oxidation number of sulphur in Na2S4O6 (sodium tetrathionate)? Please explain why the oxidation number of sulfur is +5/2 in Na2S4O6.

Background: Sodium tetrathionate is a salt of sodium and tetrathionate with the formula Na2S4O6.xH2O. The salt is normally obtained as the dihydrate (x = 2); it is a colorless, water-soluble solid. The tetrathionate anion, S4O6^2-, is a sulfur oxoanion derived from tetrathionic acid, H2S4O6. (Na2S4O6 has also been reported as a catalyst for the aerobic oxidation of thiols to symmetrical disulfides: Mohammad Abbasi, Najmeh Nowrouzi, Saadat Mousavi, ChemistrySelect, 4, 42, 12227-12231, 2019.) In its compounds, the oxidation state of sulfur may range from -2 to +6.

Rules used: In almost all cases, oxygen atoms have oxidation number -2; when oxygen is part of a peroxide, its oxidation number is -1. An alkali metal such as Na is +1, and F is always -1. For an atom in its elemental form (Zn, Cl2, C(graphite), S8, etc.) the oxidation number is 0. More generally, the oxidation number of each atom can be calculated from the Lewis structure by subtracting the sum of its lone-pair electrons and the electrons it gains from bonds from its number of valence electrons. "Oxidation number" is synonymous with "oxidation state".

Average oxidation number: Let x be the oxidation number of sulphur, take +1 for Na and -2 for O, and set the total charge of the neutral compound to 0: 2(+1) + 4x + 6(-2) = 0, so 4x = 10 and x = +2.5. The average (formal) oxidation number of sulphur in Na2S4O6 is therefore +2.5, which is the answer to Rajasthan PMT 2002 (option B, 2.5) and BITSAT 2013 (option C, +5/2). A fractional value is not an anomaly; it is an average over non-equivalent atoms, just as the oxidation number of Fe in Fe3O4 is fractional.

Individual sulphur atoms: H2S4O6 (tetrathionic acid) has unsymmetrical sulphur atoms, so the conventional averaging method does not describe each atom; the structure has to be considered. The tetrathionate ion can be written as [O3S-S-S-SO3]2-. Sulphur atoms 2 and 3 are joined to each other and to sulphur atoms 1 and 4, and are not connected to any oxygen, so their oxidation state is 0; two of the four sulphur atoms therefore have zero oxidation state. For each terminal -SO3 part, which carries a charge of -1, the three oxygens contribute 3 x (-2) = -6, so the sulphur in it (S1 or S4) is +5. Equivalently, with the two bridging sulphurs at 0: 2(+1) + 2x + 2(0) + 6(-2) = 0 gives x = +5 for the terminal atoms. The resultant average oxidation number is (5 + 5 + 0 + 0)/4 = 2.5, in agreement with the calculation above.

Assertion-reason (AIIMS 2011): Assertion: the formal oxidation number of sulphur in Na2S4O6 is 2.5. Reason: two S-atoms are not directly linked with O-atoms. Both statements are chemically correct: because two of the sulphur atoms are not bonded to oxygen, the four sulphurs are not equivalent and the formal average comes out fractional.

Difference between the two types of sulphur (IIT-JEE 2011): If the bridging sulphurs are assigned -1 (treated like the bridging O atoms in a peroxide) and the sulphurs carrying the oxygens are assigned +6 (like the sulphur in sulfate), the difference is 6 - (-1) = 7, the value quoted as the correct answer for the JEE 2011 question. With the structural assignment above (0 and +5) the difference comes out as 5; the two results differ only in how the electrons of the S-S bonds are apportioned.

Related examples: In thiosulfate, Na2S2O3 (hydrate Na2S2O3.5H2O; sodium thiosulfate is a compound of sulfur used to develop photographs), the average oxidation number of sulphur is +2, which is why the mark scheme gives +2; structurally, the sulphur bonded to three oxygens is +6 (sulphur A) and the other sulphur is -2 (sulphur B). In peroxydisulphuric acid, H2S2O8, sulphur is +6, hydrogen is +1, the two oxygens between the two S-atoms are -1, and all other oxygens are -2. In Na2S2, each sulphur is -1.

Exercise: Indicate the oxidation number of carbon and sulfur in the following compounds: a. CO, b. CO2, c. Na2CO3, d. Na2C2O4, e. CH4, f. H2CO, g. SO2, h. SO3, i. Na2SO4, j. Na2SO3, k. Na2S2O3, l. Na2S4O6, m. SCl2, n. Na2S2, o. SOCl2.
Answers: a) +2, b) +4, c) +4, d) +3, e) -4, f) 0, g) +4, h) +6, i) +6, j) +4, k) +2, l) +5/2, m) +2, n) -1, o) +4.

Other practice questions that appear on the page: find the oxidation number of boron in BH3 and BF3; calculate the oxidation number of sulphur in the S2O8^2- ion and in HSO4^-; give the oxidation number of all elements in CH3COOH; and determine the oxidation number of sulfur in Na2S2O3 (the page also surfaces unrelated items, e.g. the oxidation number of Cr in Cr2O7^2-, of gold in [AuCl4]^-, and of vanadium in Rb4Na[HV10O28]).
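As a quick arithmetic check, here is a minimal sketch that is not part of any of the Q&A sources above; the helper name `average_oxidation_state` is just for illustration, and it assumes the usual Na = +1 and O = -2 assignments used in the explanation.

```python
# Minimal sketch (not from the original Q&A): recover the average oxidation
# number of S in Na2S4O6 from charge neutrality, assuming Na = +1 and O = -2.
from fractions import Fraction

def average_oxidation_state(n_na, n_s, n_o, total_charge=0):
    """Solve n_na*(+1) + n_s*x + n_o*(-2) = total_charge for x."""
    return Fraction(total_charge - n_na * (+1) - n_o * (-2), n_s)

x = average_oxidation_state(n_na=2, n_s=4, n_o=6)
print(x)         # 5/2
print(float(x))  # 2.5

# Per-atom assignment used above: two terminal S at +5 and two bridging S at 0.
print(sum([+5, +5, 0, 0]) / 4)  # 2.5
```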
The standard normal distribution vs the t-distribution

Question (Thoth): Given an IID normally distributed sample $X_1,...,X_n$ for $n$ small, with mean $\mu$, standard deviation $\sigma$, sample mean $\overline{X}$ and sample standard deviation $s$ (the unbiased estimator form), I understand that $$\frac{\overline{X} - \mu}{\frac{\sigma}{\sqrt{n}}} \sim N(0,1),$$ but I'm having trouble reconciling this with the fact that $$\frac{\overline{X} - \mu}{\frac{s}{\sqrt{n}}} \sim t_{n-1}.$$ Since the $t$-distribution is like the standard normal distribution but with a higher variance (smaller peak and fatter tails), this would seem to suggest that the sample standard deviation $s$ systematically underestimates the population standard deviation, making it in fact a biased estimator.

Comment (soakley, Jul 22 '14): $s^2$ is unbiased for $\sigma^2$, so, yes, $s$ is biased for $\sigma$.

Answer (Glen_b): Actually $s$ doesn't need to systematically underestimate $\sigma$; this could happen even if that weren't true. As it is, $s$ is biased for $\sigma$ (the fact that $s^2$ is unbiased for $\sigma^2$ means that $s$ will be biased for $\sigma$, due to Jensen's inequality*), but that's not the central thing going on there.

* Jensen's inequality: if $g$ is a convex function, $g\left(\text{E}[X]\right) \leq \text{E}\left[g(X)\right]$, with equality only if $X$ is constant or $g$ is linear. Now $g(X)=-\sqrt{X}$ is convex, so $-\sqrt{\text{E}[X]} < \text{E}(-\sqrt{X})$, i.e. $\sqrt{\text{E}[X]} > \text{E}(\sqrt{X})\,$, implying $\sigma>E(s)$ if the random variable $s$ is not a fixed constant.

Edit: a simpler demonstration not invoking Jensen -- assume that the distribution of the underlying variable has $\sigma>0$. Note that $\text{Var}(s) = E(s^2)-E(s)^2$; this variance will always be positive for $\sigma>0$. Hence $E(s)^2 = E(s^2)-\text{Var}(s) < \sigma^2$, so $E(s)<\sigma$.

So what is the main issue? Let $Z=\frac{\overline{X} - \mu}{\frac{\sigma}{\sqrt{n}}}$ and note that you're dealing with $t=Z\cdot\frac{\sigma}{s}$. That inversion of $s$ is important. The effect on the variance comes not from whether $s$ is smaller than $\sigma$ on average (though it is, very slightly), but from whether $1/s$ is larger than $1/\sigma$ on average (and those two things are NOT the same thing). And it is larger, to a greater extent than its inverse is smaller. Which is to say $E(1/X)\neq 1/E(X)$; in fact, from Jensen's inequality, $g(X) = 1/X$ is convex, so if $X$ is not constant, $1/\left(\text{E}[X]\right) < \text{E}\left[1/X\right]$.

So consider, for example, normal samples of size 10: $s$ is about 2.7% smaller than $\sigma$ on average, but $1/s$ is about 9.4% larger than $1/\sigma$ on average. So even if at $n=10$ we made our estimate of $\sigma$ 2.7-something percent larger** so that $E(\widehat\sigma)=\sigma$, the corresponding $t=Z\cdot\frac{\sigma}{\widehat\sigma}$ would not have unit variance - it would still be a fair bit larger than 1.

**(at other $n$ the adjustment would be different, of course)

As for "the $t$-distribution is like the standard normal distribution but with a higher variance (smaller peak and fatter tails)": if you adjust for the difference in spread, the peak is higher. See also: Why does the t-distribution become more normal as sample size increases?
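A quick way to see those numbers is by simulation. The following is a minimal sketch, not part of the original thread, that estimates $E[s]$, $E[1/s]$, and the variance of the $t$ statistic for $n=10$.

```python
# Minimal simulation sketch (not from the original thread): for normal samples
# of size n = 10, estimate E[s], E[1/s], and the variance of the t statistic.
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma, reps = 10, 0.0, 1.0, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)      # the "unbiased estimator form" (ddof = 1)

print(s.mean())                # ~0.973 -> s is about 2.7% below sigma on average
print((1 / s).mean())          # ~1.094 -> 1/s is about 9.4% above 1/sigma

t = (xbar - mu) / (s / np.sqrt(n))
print(t.var())                 # ~1.29  -> close to 9/7, the variance of t_9
```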
What is 1/8 of 10?
What is 1 / 8 of 10 and how to calculate it yourself
Fraction of a Number Calculator
1 / 8 of 10 = 1.25

1 / 8 of 10 is 1.25. In this article, we will go through how to calculate 1 / 8 of 10 and how to calculate any fraction of any whole number (integer). This article will show a general formula for solving this equation for positive numbers, but the same rules can be applied for numbers less than zero too! Let's dive into how to solve! Please see our introduction to fractions if you need a refresher on these concepts!

Here's how we will calculate 1 / 8 of 10:

1. First step in solving 1 / 8 of 10 is understanding your fraction
1 / 8 has two important parts: the numerator (1) and the denominator (8). The numerator is the number above the division line (called the vinculum), which represents the number of parts being taken from the whole. For example: if there were 14 cars total and 1 was painted red, 1 would be the numerator, or the number of parts of the total. In this case of 1 / 8, 1 is our numerator. The denominator (8) is located below the vinculum and represents the total number. In the example above, 14 would be the denominator of cars. For our fraction, 1 is the numerator and 8 is the denominator.

2. Write out your equation of 1 / 8 times 10
When solving for 1 / 8 of a number, students should write the equation as the whole number (10) times 1 / 8. The solution to our problem will always be smaller than 10 because we are going to end up with a fraction of 10.
$$ \frac{ 1 }{ 8 } \times 10 $$

3. Convert your whole number (10) into a fraction (10/1)
To convert any whole number into a fraction, add a 1 into the denominator. Now place 1 / 8 next to the new fraction. This gives us the equation below. Tip: Always write out your fractions 10 / 1 and 1 / 8. It might seem boring or taxing, but dividing fractions can be confusing. Writing out the conversion simplifies our work.
$$ \frac{ 1 }{ 8 } \times \frac{ 10 }{1} $$

4. Multiply your fractions together
Once we have set up our equation with 1 / 8 and 10 / 1, we need to multiply the values, starting with the numerators. In this case, we will be multiplying 1 (the numerator of 1 / 8) and 10 (the numerator of our new fraction 10/1). If you need a refresher on multiplying fractions, please see our guide here!
$$ \frac{ 1 }{ 8 } \times \frac{ 10 }{1} = \frac{ 10 }{ 8 } $$
Our new numerator is 10. Then we need to do the same for our denominators. In this equation, we multiply 8 (the denominator of 1 / 8) and 1 (the denominator of our new fraction 10 / 1). Our new denominator is 8.

5. Divide our new fraction (10 / 8)
After arriving at our new fraction, 10 / 8, our last job is to simplify it using long division. For longer fractions, we recommend that all of our students write this last part down and use left-to-right long division.
$$ \frac{ 10 }{ 8 } = 1.25 $$
And so there you have it! Our solution is 1.25.

Quick recap:
Turn 10 into a fraction: 10 / 1
Multiply 10 / 1 by our fraction, 1 / 8
Multiply the numerators and the denominators together
We get 10 / 8 from that
Perform a standard division: 10 divided by 8 = 1.25

Additional way of calculating 1 / 8 of 10
You can also write our fraction, 1 / 8, as a decimal by simply dividing 1 by 8, which is 0.125. If you multiply 0.125 by 10, you will see that you end up with the same answer as above. You may also find it useful to know that if you multiply 0.125 by 100 you get 12.5, which means that our answer of 1.25 is 12.5 percent of 10.
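For readers who want to reproduce the steps above programmatically, here is a minimal sketch that is not part of the original article; the helper name `fraction_of` is just for illustration.

```python
# Minimal sketch (not from the original article): compute a fraction of a
# whole number exactly, following the same steps as the walkthrough above.
from fractions import Fraction

def fraction_of(numerator, denominator, whole):
    """Return numerator/denominator of `whole`, e.g. 1/8 of 10."""
    return Fraction(numerator, denominator) * whole

result = fraction_of(1, 8, 10)
print(result)                        # 5/4  (i.e. 10/8 reduced)
print(float(result))                 # 1.25
print(float(Fraction(1, 8)) * 100)   # 12.5 -> 1.25 is 12.5 percent of 10
```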
The Economics of Sustainability
Contents: Growth & Prosperity · Statistical Sleight-of-Hand · Duplication of Labor · Stopping the Wheel
A brief (by university standards) essay that ties together history, economics, philosophy, ethics, nutrition, and environmentalism to analyze the prosperity resulting from capitalism and industrialization in contrast to our hopes for a sustainable society. · #politics · #economics

This will be a semi-organized rant that loosely ties together history, economics, philosophy, ethics, and environmentalism. We'll look at the rise of capitalism alongside industrialization and its unparalleled economic growth, but then analyze its cost on our hopes for a healthy, sustainable society.

The Industrial Revolution, fueled by capitalism, has undoubtedly made a subset of humanity better off in the short-term. Even the simplest products are luxuries no one before us has ever even come close to seeing:

We take it for granted today, but a single Dorito has more extreme nacho flavor than a peasant in the 1400s would get in his whole lifetime.
— @MatthewPCrowley

There's a corollary to this tweet: a bag of Doritos has more extreme nacho flavor than the wealthiest king in the 1400s would get in a single meal.

In the early 1900s, the average salary in the United States was around $750/year,1 working out to $21,500/year today after adjusting for inflation. The average salary today, though, is more than double that at $53,400/year. Even if you use the median salary, it's still $34,600/year, a 61% increase.2 Whether or not this is an appropriate rise given the productivity increases in that same period (a measly 115% compensation growth vs. a contrasting 253% productivity growth since 1950)3 is a separate issue: there's still a strong argument to be made that things are better now than they've ever been.

We have an encyclopedia of human knowledge available in our pockets; the flavors and cuisines of the entire world are ever-present in our grocery stores; we can communicate over thousands of miles in seconds and travel there ourselves in hours; etc. We can write and print books, take stunning photographs, record high-quality videos, and consume media created around the world without moving anything but our thumbs. Undoubtedly, capitalism and industrialization brought us a lot of luxury.

It's not only the First World that's benefitted, either. The number of people living below the poverty line has dropped ~75% since 1990,4 and the people there generally live better than they did last century:

Compared with 50 years ago, the average human now earns nearly three times as much money, eats one-third more calories, buries two-thirds fewer children and can expect to live one-third longer. Poverty is nose-diving. Between 1980 and 2000, the poor doubled their consumption. The Chinese are 10 times richer and live about 25 years longer than they did 50 years ago. Nigerians are twice as rich and live nine more years. The percentage of the world's people living in absolute poverty has dropped by more than half.
— Matt Ridley, The Rational Optimist: How Prosperity Evolves

To put it succinctly, the numbers appear good, regardless of location. And yet, there's no way it can be all sunshine and roses. This kind of hedonism and excess is obviously delightful in the moment, but what does it mean for humanity beyond instant gratification? Specifically, what does it mean for the portion of humanity that continues to enjoy more excess rather than newfound excess?
There must be a cost. How long can we kick the can of indulgence down the road until the sustainability chickens come home to roost?

First, let's analyze the underlying truth of these feel-good statistics: we should be extremely skeptical of feel-good justifications for a system that are the product of said system. It's like when pharmaceutical companies fund research about the benefits of… pharmaceuticals. So let's analyze the "developments" we just talked about. There are lies, damned lies, and statistics.

First, notice Matt's particular choice of words: "the poor doubled their consumption." Well, why is that necessarily a good thing? Where are the measures of, say, happiness and life satisfaction, ecological diversity, or light pollution?

Second, notice the generality of these claims: "the average human," "the Chinese," and "the average salary" are all on a scale that makes manipulating numbers far too easy. While it appears that nations are better off as a whole, this notion often breaks down if you look deeper. There's even a contradiction (or, rather, a deep mislead) within the quote itself.

How do you compute the average human lifespan? Do you do it naively with the arithmetic mean? For example,

$$ \frac{1 + 70 + 67 + 20 + 82 + 0 + 1}{7} \approx 34.4 $$

The average here is 34 years, but obviously that's not representative of what actually happens in this society: most people either die really young or reach old age. The median isn't much better: "20" doesn't give you the full picture.

Following this logic, then, if the average human buries two-thirds fewer children, the average lifespan must increase by definition:

$$ \frac{1 + 70 + 67 + 20 + 82 + 68 + 55}{7} \approx 51.9 $$

As you can see, turning the child deaths into normal lifespans leads to a nearly 20-year increase in the average lifespan. So the follow-up claim that the Chinese "live about 25 years longer" does not necessarily mean that lifespans increased, even though it implies that humans used to live to 50 and die, and now they live to 75. The statistics are presented as independent benefits: we live longer AND fewer babies die. However, this is misleading: when fewer children die in childbirth, the average lifespan naturally goes up massively. And even despite all of this anyway, the decrease in childhood mortality can't even be attributed to capitalism or prosperity, but simply to hygiene.5

The misdirect goes well beyond this specific example. In general, statistics of economic prosperity sound like a promising attestation to the "side effects" of capitalism: industrializing a nation benefits everyone within it as well as the nations that provide the materials and labor it needs. Unfortunately, it continues to be a simple manipulation:6

[We] openly discussed the deceptive nature of [Gross National Product]. For instance, GNP may show growth even when it profits only one person, such as an individual who owns a utility company, and even if the majority of the population is burdened with debt. The rich get richer and the poor get poorer. Yet, from a statistical standpoint, this is recorded as economic progress.
— John Perkins, Confessions of an Economic Hitman

The subtle lies run even deeper than that, usually perpetuated by the very people that benefit from them. Perkins continues:

My staff of economists, financial experts, […] proved that such investments — in electric power systems, highways, ports, airports, and industrial parks — would spur economic growth.
The statistics were highly biased; they were skewed to the fortunes of the families that owned the industries, banks, shopping malls, supermarkets, hotels, and a variety of other businesses that prospered from the infrastructure we built. They prospered. Everyone else suffered. Money that had been budgeted for health care, education, and other social services was diverted to pay interest on the loans.

The development, prosperity, and growth; the feel-good indicators of success; and the endless supply of trinkets and goodies are just a masqueraded product of economic imperialism. The developed nations (well, mostly just the U.S.) put economic shackles on developing nations and hide the chains under a statistically-backed façade of progress.

Even throughout the growth, for those lucky enough to experience it, there were still steep costs. The working class in developed nations had to fight tooth and nail for every safety law (there were 35,000 factory deaths/year in 1900),7 labor regulation (remember child labor, the lack of minimum wage, or the fact that women couldn't work?), and benefit (pensions took hold in 1920, and employer-provided health insurance didn't exist before the 1940s) relinquished by the industrialists. People forget that the 40-hour work week and minimum wage are both less than 100 years old and are the product of strong unions and collective bargaining by the working class. Their struggle still continues: it doesn't take long to find tons of supporting evidence that the economic power of a single job or single dollar is dwindling relative to what it used to be, and the fight for workers' rights is as relevant today as ever before: Hyundai got caught using child labor last month, and Pinkertons are back to union-busting.8,9,10

In reality, all we did was outsource the unregulated exploitation of the working class to developing nations that can't fight back, but that's beside the point: economic progress is neither painless nor free.

Under capitalism's necessity of hyperspecialization, we see an excess of waste and repetition in regards to human labor. How many tire manufacturers are making the same rubber donuts with tiny variations in design and quality? How many car manufacturers have production lines for steering wheels that have tiny, superficial, aesthetic variations among each other? How many phone manufacturers are repackaging the same chipsets with their own minor form factor tweaks to the hardware and spamware tweaks to the software? How much of the e-waste generated by getting a new phone every few years11 could be eliminated by putting fragmented minds together to create something truly great?

How many software engineers have spent their time creating sign-up forms and date pickers whose collective variations could be replicated by a single engineer tweaking an open source project? How many different word processors, image viewers, login forms, streaming platforms, video players, or social networks do we need? How many factories out there are dedicated just to making plastic bags? You know, the ones that tear on your way to the car, that saturate the sky on a windy day in any city, that kill 100,000 marine animals every year?12

There's an entire book dedicated to the idea that there are people out there completely wasting away doing work that is meaningless and superfluous. And planned obsolescence is a known (and probably encouraged) phenomenon that makes this worse.

What the fsck are we doing?
Participating in collective insanity, apparently: we've applied Einstein's definition13 on a global scale. Millions of man-hours and an immeasurable amount of natural resources spent doing the same thing over and over again.

I can't stress enough how absurd this concept is. At any given moment, there are probably thousands of people out there designing the exact same thing for corporations that are in competition with each other. If you put them all in the same room, there's absolutely no doubt in my mind that you'd get a better product by simple virtue of the fact that you wouldn't be wasting labor on the same task.

This duplication of labor and resources segues cleanly into the next cost of economic prosperity: climate change.

"Capitalism encourages innovation through competition." Let's take this as truth for the sake of argument. Then there's a follow-up nobody seems to ask: at what point should we stop encouraging innovation? When will we decide that we have enough?

When the last tree is cut down, the last fish eaten, and the last stream poisoned, you will realize that you cannot eat money.
— Native American saying (allegedly)

Global temperatures are higher now than they've been throughout all of human history.14 That's just fact. Critics can argue about whether or not that's caused by humanity, but I won't waste my breath on that here. One thing is definitely caused by humanity: the destruction of nature. Whether that be: nearly hunting buffalo to extinction,15 feeding 40kg of plastic to a sperm whale,11 killing turtles and birds with our trash,16 fishing our oceans completely devoid of fish,17 creating an "island" of garbage in the Pacific Ocean twice the size of Texas,18 making ourselves dumber with air pollution,19 or literally engineering our own extinction from microplastics.20

Tires pollute 1000 times more than the car itself;21 how much could we improve that if we just focused everyone's efforts on making the best possible tire rather than the most profitable one? What if every car manufacturer invested in electric vehicles instead of releasing a new paint color and calling it the 2022 model? What if they collaborated on the best possible EV rather than independently reinventing the wheel battery? What if we stopped releasing a new iPhone every year? Maybe that would incentivize people to stop throwing away 150 million old smartphones every year.22

Capitalism has driven the very foods we eat to a different extreme, one full of artificial scarcity, profit maximization, fake competition, and nutritional bankruptcy. Somehow, the "chip shortage" caused by supply chain issues gets manipulated into a fear of Hot Cheetos never coming back to store shelves.23 "Special editions" or "seasonal releases" of products create artificial demand by exploiting our primal fear of food scarcity. Sierra Mist "competes" with 7-Up (yet they're both owned by PepsiCo), Dasani "competes" with SmartWater (both owned by Coca Cola), and DiGiorno "competes" with Tombstone (both owned by Nestlé). There are just ten corporations dominating our shelves,24 and each one is pretending to be in competition with itself.

This entire centralized industry is doing whatever it takes to make people eat more crap:

[There is] a conscious effort—taking place in labs and marketing meetings and grocery-store aisles—to get people hooked on foods that are convenient and inexpensive.25

There's even a term for a food tweaked perfectly to maximize craving: its "bliss point".
Companies will tweak their formulas to maximize "bliss", addiction, and cravings while also maximizing profits with disgusting imitations of real food:

"Natural Cheddar, which they started off with, crumbled and didn't slice very well, so they moved on to processed varieties, which could bend and be sliced and would last forever, or they could knock another two cents off per unit by using an even lesser product called "cheese food," which had lower scores than processed cheese in taste tests."

Let me once again reiterate this absurdity: food companies intentionally make their foods unhealthier, tastier, and more addicting exclusively in the name of profit. And they have the audacity to blame the consumer ("Well, that's what the consumer wants, and we're not putting a gun to their head to eat it. That's what they want."), knowing full well that they're exploiting biology and psychology to become the direct cause of the meteoric rise in obesity, diabetes, and other health problems. Yes, one can point to "individual responsibility" to wash one's hands clean like Pontius Pilate, but it's disingenuous to ignore the individual's uphill battle against an army of researchers and profiteers using best-in-class methodologies to make it as difficult to say "no" as possible. The deck is heavily stacked against the consumer to "want" exactly what the producers want them to want. What kind of snacks could we invent if we focused the $1.5 trillion industry on creating healthy and sustainable ones? With that investment, I bet we could make them just as addicting.

It seems capitalism is both a blessing and a curse: on one hand, we get unforeseen technological advancement and social mobility for the lucky ones; on the other hand, we get moral, nutritional, and environmental degeneracy at the behest of the profit motive, not to mention the spread of poverty, starvation, and suffering among the unlucky ones: the fuel powering the capitalist regime. To borrow from Lenin (with a bit of tongue in cheek), we need to ask ourselves: "What is to be done?" Given the existential risks, this might be the only thing worth pondering.

1. "The History of American Income" ↩︎
2. "Measures of Central Tendency for Wage Data" ↩︎
3. "Productivity vs wages: How wages in America have stagnated" ↩︎
4. "Global Poverty Facts" ↩︎
5. "Ignaz Semmelweis, Wikipedia" ↩︎
6. John Perkins' book (and its next edition) is one of my favorite non-fiction works ever. If you think the U.S. truly is a "good guy" on the world stage, this book is the wake-up call you need. ↩︎
7. "Factory Life in the 1800's" ↩︎
8. "Katie Halper: Starbucks Hires EX-PINKERTON, CIA Officer To WOKEIFY Union-Busting" ↩︎
9. "Amazon is using union-busting Pinkerton spies to track warehouse workers and labor movements at the company, according to a new report" ↩︎
10. "SBWorkersUnited on Twitter" ↩︎
11. "How Long Can A Smartphone Last?" ↩︎ ↩︎
12. "Plastic in our oceans is killing marine mammals" ↩︎
13. "Insanity is doing the same thing over and over again and expecting different results."
~ Albert Einstein ↩︎
14. XKCD #1732 ↩︎
15. "Bison hunting, Wikipedia" ↩︎
16. "When turtles and birds choke to death on our plastic waste" ↩︎
17. "Wild fish catch from bottom trawling" ↩︎
18. "Great Pacific Garbage Patch" ↩︎
19. "The role of air pollution in cognitive impairment and decline" ↩︎
20. "Chemicals in plastic, electronics are lowering fertility in men and women" ↩︎
21. "Pollution from tire wear 1,000 times worse than exhaust emissions" ↩︎
22. "This year's e-waste to outweigh Great Wall of China" ↩︎
23. "Hot Cheetos Fans Are Freaking Out Over A Possible Shortage" ↩︎
24. "This Infographic Shows How Only 10 Companies Own All The World's Food Brands" ↩︎
25. "The Extraordinary Science of Addictive Junk Food" ↩︎
Novel pH-sensitive nanoformulated docetaxel as a potential therapeutic strategy for the treatment of cholangiocarcinoma

Nan Du1, Lin-Ping Song1, Xiao-Song Li1, Lei Wang2, Ling Wan1, Hong-Ying Ma1 & Hui Zhao1

Journal of Nanobiotechnology volume 13, Article number: 17 (2015)

Cholangiocarcinoma (CC) is a fatal malignant neoplasm with poor prognosis. CC has proved resistant to traditional chemotherapy, which does not improve patients' quality of life. The aim of the present study is to investigate the potential of chondroitin sulphate (CS)-histamine (HS) block copolymer micelles to improve the chemotherapeutic efficacy of docetaxel (DTX). The pH-responsive property of the CS-HS micelles was exploited to achieve maximum therapeutic efficacy in CC. In the present study, docetaxel-loaded CS-HS micelles (CSH-DTX) controlled the release of the drug at physiological pH (7.4) while rapidly releasing their cargo at tumor-relevant pH (5.0 and 6.8), possibly due to the breakdown of the polymeric micelles. A nanosize of <150 nm should allow accumulation in the tumor interstitial spaces via the EPR effect. CSH-DTX effectively killed the cancer cells in a time- and concentration-dependent manner and showed more pronounced therapeutic action than the free drug at all time points. CSH-DTX resulted in higher apoptosis of cancer cells, with ~30% and ~50% of cells in the early apoptosis quadrant when treated with 100 and 1000 ng/ml of equivalent drug, respectively. The micellar formulation showed a remarkable effect in controlling tumor growth and reduced the overall tumor volume to one fifth that of the control group and half that of the free-drug-treated group, with no sign of drug-related adverse effects. Immunohistochemical analysis of tumor sections showed that fewer Ki-67-positive cells were present in the CSH-DTX-treated group than in the free DTX-treated group. Our data suggest that a nanoformulation of DTX could potentially improve chemotherapy in cholangiocarcinoma as well as in other malignancies.

Cholangiocarcinoma (CC), arising from the epithelium of the biliary tract, is one of the fatal malignant neoplasms, with high rates of mortality and morbidity [1,2]. CC constitutes about 3% of gastrointestinal cancers and ~15% of all hepatic cancers [3]. The incidence of CC in Western countries is 1-2 cases per 100,000 persons; East Asia, however, has a higher incidence, with ~8 cases per 100,000 individuals [4]. CC has a poor prognosis, with a 5-year survival rate of less than 10%, and its incidence is steadily increasing. Approximately 50% of CC cases are diagnosed at an unresectable stage, as the symptoms are largely silent in the initial stages [5,6]. At present, surgical resection is the main treatment option for advanced-stage tumors. However, surgical biliary bypass often causes serious postoperative complications and increases the morbidity rate. Additionally, palliative therapies such as endoscopic stenting, radiation therapy, photodynamic therapy, and chemotherapy are employed to treat CC [7]. Among these, chemotherapy is regarded as the adjuvant or main alternative treatment for CC; however, CC is reported to be resistant to traditional chemotherapy, which does not improve quality of life [8,9]. Therefore, we need an effective therapeutic strategy that can overcome the limitations of the conventional treatment modality and improve the chemotherapeutic effect in CC. Nanotechnology-based drug delivery systems have been reported to improve the pharmacological and anticancer properties of chemotherapeutic drugs [10].
Specifically, the fenestrated endothelium and heavy blood flow allow nanoparticles to be taken up by the liver. This process can be aided by the enhanced permeability and retention (EPR) effect, which allows preferential accumulation, or passive targeting, of nanocarriers in the leaky vasculature of tumor tissues [11]. Importantly, the delivery carrier can be made responsive to the local microenvironment of the tumor. The physiological pH of blood is ~7.4, while the pH of the extracellular space around tumors is ~6.8 and the endolysosomes of cancer cells are very acidic (pH < 6) [12]. In this regard, block copolymer-based nanosized micelles have attracted significant attention as a promising delivery system for cancer therapy [13]. Importantly, pH-responsive anticancer drug delivery has many benefits, including high accumulation in tumor tissues, long blood circulation, limited release under physiological conditions, and exploitation of the EPR effect [14].

In the present study, we have conjugated chondroitin sulphate (CS) with histamine (HS) to form pH-responsive nanomicelles that can enhance the cancer cell killing effect. CS is a hydrophilic compound with excellent biocompatibility and biodegradability, which makes it an excellent choice for in vivo applications. CS is a vital structural component of cartilage and connective tissues. CS has been reported to target cancer cells by binding to the hyaluronic acid receptors expressed on malignant cells and to be actively internalized. HS, on the other hand, was selected for its imidazole ring characteristics [15]. The imidazole ring has a lone pair of electrons on nitrogen that gives it an amphoteric nature, allowing it to protonate and deprotonate [16].

Docetaxel (DTX) is regarded as one of the most effective chemotherapeutic agents for cancer treatment. DTX is a typical microtubule inhibitor that binds to the microtubule assembly of cancer cells and prohibits their proliferation [17]. DTX is effective against a wide range of cancers, including ovarian, breast, head and neck, lung, and liver cancers. Despite its promising clinical potential, severe side effects such as bone marrow suppression, hypersensitivity reactions, and peripheral neuropathy remain a major obstacle. Additionally, poor water solubility and poor bioavailability have limited its clinical application to a great extent [18]. Therefore, the main aim of the present study was to load DTX into CS-HS-based nanomicelles and to utilize their pH-responsive property to achieve maximum therapeutic efficacy in cholangiocarcinoma. The physicochemical characteristics of the DTX-loaded CS-HS micelles (CSH-DTX) were studied in terms of size and release kinetics. In vitro cytotoxicity and apoptosis assays of the free drug and CSH-DTX were performed in QBC939 adenocarcinoma cells. The antitumor efficacy of CSH-DTX was studied in xenograft nude mice, and immunohistochemical studies were performed to evaluate its systemic performance.

Cholangiocarcinoma (CC), which arises from the epithelium of the biliary tract, is one of the fatal malignant neoplasms with high rates of mortality and morbidity. At present, conventional chemotherapy is the main treatment option; however, it does not improve the quality of patient life [2,4]. In this regard, nanotechnological solutions have been reported to improve the therapeutic performance of anticancer drugs. Importantly, a pH-responsive strategy would increase accumulation in tumor tissues, extend blood circulation, and effectively improve the overall chemotherapeutic efficacy.
In the present study, therefore, we conjugated chondroitin sulphate (CS) with histamine (HS) to form pH-responsive nanomicelles that can enhance the cancer cell killing effect. DTX, a typical microtubule inhibitor, was selected in this study as the anticancer drug in order to improve its therapeutic efficacy against CC [18]. Since the therapeutic application of DTX is hindered by its limited solubility and systemic toxicity, in the present study DTX was loaded into CS-HS conjugate-based polymeric micelles. When DTX and the CS-HS block copolymer are dispersed in water, the hydrophobic and hydrophilic parts self-assemble to form drug-loaded micelles (Figure 1). The micelles so formed (CSH-DTX) have numerous advantages, including pH-sensitive drug release via protonation of the histidine residue, high loading efficiency, and potential for clinical translation.

Schematic representation of the conjugation of chondroitin sulphate (CS) and histidine (HS) via chemical reactions. Schematic illustration of the self-assembly of docetaxel (DTX) and the CS-HS conjugate into polymeric micelles.

Preparation and characterization of DTX-loaded micelles

Physicochemical characterization of the polymeric micelles was carried out in terms of particle size and polydispersity index. The particle size and PDI of CSH-DTX were measured by the dynamic light scattering technique. The average size of CSH-DTX was around 110 nm, with a fairly uniform dispersion of nanoparticles (PDI ~ 0.15) (Figure 2a). It has been previously reported that micelles smaller than 200 nm can preferentially accumulate in the tumor interstitial spaces via the enhanced permeability and retention (EPR) effect [19].

(a) Typical size distribution analysis of CSH-DTX by the dynamic light scattering technique (b) transmission electron microscope (TEM) imaging of CSH-DTX (c) scanning electron microscope (SEM) imaging of CSH-DTX.

The morphology of CSH-DTX was investigated using TEM and SEM. TEM showed spherical particles uniformly distributed on the copper grid (Figure 2b). The size measured by TEM was smaller than that observed in the DLS experiment. The discrepancy in size might be attributed to the difference between the hydrodynamic (hydrated) state and the dried state of measurement. The morphology was further confirmed by SEM, which showed smooth-surfaced, regular, spherical particles (Figure 2c). The size was consistent with the TEM observation. The drug-loading capacity for DTX was observed to be more than 20%, with a high entrapment efficiency of >95%.

In vitro drug release

The release study was carried out in phosphate buffered saline (PBS, pH 7.4) and acetate buffered saline (ABS, pH 6.8 and pH 5.0). As shown in Figure 3, the release rate of DTX from CSH-DTX micelles markedly differed with the change in pH conditions. As expected, accelerated release of DTX was observed at lower pH, while a slow release profile was seen at physiological pH. At pH 7.4, nearly 30% of the drug was released, while 70% of the drug was released when the pH of the release medium was decreased to 6.8. Importantly, the release rate was further increased when the micelles were incubated in pH 5.0 medium: nearly 95% of the drug was released at pH 5.0 by the end of the 72 h study period. Under all pH conditions, release was slightly faster at the initial time points, but no burst release pattern was observed. The micelles exhibited a sustained release profile for DTX. It could be expected that under physiological pH conditions the core remains intact and DTX stays blocked in the highly hydrophobic core, leading to a low release rate.
However, when the pH decreased, accelerated release was observed due to protonation of the histidine residue. At lower pH, when the histidine is protonated, the imbalance of hydrophilic and hydrophobic forces destabilizes the micelle structure and the drug diffuses out at a higher rate [16]. Therefore, CSH micelles could effectively prevent drug release or leakage under physiological conditions (avoiding toxicity) while releasing rapidly under acidic conditions in response to endosomal and lysosomal pH.

Release profile of DTX from CSH-DTX micelles incubated in phosphate buffered saline (pH 7.4) and acetate buffered saline (pH 6.8 and 5.0). The samples were incubated at 37°C in a rotary shaker (100 rpm). The data are presented as mean ± SD (n = 3). *p < 0.05, **p < 0.01 is the statistical difference between drug release at pH 5.0, pH 6.8, and pH 7.4.

In vitro cytotoxicity assay

The in vitro cytotoxicity of the blank copolymer was studied at different concentrations against QBC939 CC cells to evaluate its safety profile. The cells were treated with concentrations from 0.1 μg/ml to 500 μg/ml. As seen in Figure 4a, the blank polymeric micelles did not exhibit any significant toxicity in the tested concentration range after 24 h of incubation. In particular, cell viability remained above 94% at all concentrations, indicating an excellent safety profile. The negligible cytotoxicity of the blank polymer makes it ideal for in vivo cancer targeting. Subsequently, the cytotoxicity of free DTX and CSH-DTX was evaluated in the same cell line in a concentration- and time-dependent manner. As shown in Figure 4b-d, both the free drug and the drug-loaded micellar formulation exhibited greater cytotoxicity in a time- and concentration-dependent manner. It should be noted that the cytotoxicity of CSH-DTX was more pronounced than that of the free drug at all time points. The IC50 value of each formulation was calculated to quantify the cytotoxic effect. The IC50 of free DTX was 6.45 μg/ml, 2.86 μg/ml, and 0.89 μg/ml after 24, 48, and 72 h of incubation, respectively. In contrast, the IC50 of CSH-DTX was 2.58 μg/ml, 0.98 μg/ml, and 0.49 μg/ml for the same time periods, respectively. The superior cytotoxicity of CSH-DTX might be attributed to the pH-driven release of the active therapeutic molecule in the cell cytoplasm. It could be expected that the micelles were internalized into the cells via an endocytosis mechanism, wherein the drug is released in acidic compartments and travels to its site of action [20]. The cytotoxicity was further confirmed by cellular morphology. As seen in Figure 5a, control cells were densely packed on the cover slip and of regular shape; however, DTX-treated cells showed signs of apoptosis and appeared rounded. Importantly, CSH-DTX-treated cells were fewer in number (viable cells were decreased) and scattered, with clear signs of membrane blebbing and apoptosis.

(a) In vitro cytotoxicity of blank polymeric micelles at various concentrations against QBC939 cells (b-d) in vitro cytotoxicity of free DTX and CSH-DTX against QBC939 cells incubated for 24, 48, and 72 h. The cytotoxicity of the formulations was evaluated by MTT assay. The data are presented as mean ± SD (n = 6). (a) Cellular morphology of QBC939 cells following incubation with free DTX and CSH-DTX (b) fluorescence microscopy images of the cell apoptosis induced by free DTX and CSH-DTX. The apoptosis of cells was analysed by Hoechst staining.
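The IC50 values quoted above are normally obtained by fitting a sigmoidal dose-response (Hill) curve to the MTT viability data rather than by reading a single concentration off the plot. The paper does not state which fitting procedure was used, so the snippet below is only a minimal illustrative sketch in Python; the dose and viability arrays are hypothetical placeholders, not the study's raw data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (Hill) model: viability (%) as a function of dose (ug/ml).
def hill(dose, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

# Hypothetical 24 h dose-response data (ug/ml vs. % viability), for illustration only.
doses = np.array([0.1, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0])
viability = np.array([95.0, 88.0, 74.0, 52.0, 38.0, 24.0, 12.0])

# Fit the model; p0 supplies rough starting guesses for bottom, top, IC50 and slope.
params, _ = curve_fit(hill, doses, viability, p0=[0.0, 100.0, 2.0, 1.0])
bottom, top, ic50, slope = params
print(f"Estimated IC50 ~ {ic50:.2f} ug/ml (Hill slope {slope:.2f})")
```

Fitting the free DTX and CSH-DTX data separately at each time point and comparing the resulting IC50 values is then a direct way to express the roughly two- to three-fold potency gain reported above.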
Apoptosis measurements

Changes in cell morphology resulting in the rounding of cells are one of the prominent hallmarks of apoptosis. The apoptosis measurement was carried out by Hoechst 33258 staining. As shown in Figure 5b, untreated cells did not show any changes in morphology and remained the same after 24 h. Additionally, the cells were densely packed and present in large numbers, covering the entire cover slip. Free DTX, however, reduced the number of viable cells and exhibited typical features of apoptosis. Notably, CSH-DTX remarkably induced apoptosis in the cancer cells, and typical features of cell death such as chromatin condensation, membrane blebbing, and apoptotic bodies were visible. The results indicate that the drug-loaded micelles could cause marked condensation and fragmentation of nuclear bodies.

Apoptosis assay by flow cytometry

Figure 6 shows the apoptosis analysis (early and late apoptosis) of QBC939 cells using Annexin V-FITC and PI staining by flow cytometry. In the present study, cells were treated with 100 ng/ml and 1000 ng/ml of free DTX and the equivalent CSH-DTX formulations and incubated for 24 h. The results indicate that the proportion of early and late apoptotic cells markedly increased with the increase in the concentration of the chemotherapeutic drug. For example, ~10% of cells were in the early apoptosis quadrant when exposed to 100 ng/ml of free DTX, while this increased to ~32% for exposure to 1000 ng/ml of drug. As expected, CSH-DTX resulted in higher apoptosis of cancer cells, with ~30% and ~50% of cells in the early apoptosis quadrant for the same concentrations, respectively. Similarly, late apoptotic cells also increased in a concentration-dependent manner. The result was consistent with the cytotoxicity data, showing that the micellar formulation could remarkably induce cell apoptosis.

Flow cytometry analysis of cell apoptosis using Annexin V-FITC and PI staining. The cells were exposed to free DTX and CSH-DTX at concentrations of 100 ng/ml and 1000 ng/ml and incubated for 24 h. **p < 0.01 is the statistical difference between CSH-DTX and free DTX.

In vivo antitumor efficacy

The antitumor efficacy of free DTX and CSH-DTX was investigated in a QBC939 cell-bearing xenograft tumor model. The mice were intravenously injected with the respective formulations every third day, for a total of three injections. The tumor volume and body weight were recorded every other day up to day 20. As shown in Figure 7a, CSH-DTX significantly slowed the growth of tumors in the mouse model compared with the free DTX- and saline-treated groups. As expected, blank micelles did not have any effect on tumor volume, and tumors in this group grew along with the control group. Administration of free DTX showed some therapeutic effect but could not inhibit tumor growth completely. The micellar formulation remarkably suppressed tumor proliferation in comparison with the control and free-drug-treated groups. The final tumor volumes of the control, blank micelle, free DTX, and CSH-DTX treated groups were ~2500, ~2500, ~1300, and ~600 mm3, respectively. The main reason behind the superior antitumor efficacy of CSH-DTX was attributed to the increased accumulation of micelles in the tumor region due to the EPR effect and the enhanced sensitization of MDR cancer to DTX. Other contributing factors might be the sustained release of the drug and prolonged blood circulation [21].

In vivo antitumor efficacy study (a) changes in tumor volume (b) changes in mice body weight (c) images of tumor sections.
The antitumor study was carried out in a QBC939 cell-bearing xenograft model, with formulations administered three times at a fixed dose of 5 mg/kg. *p < 0.05, ***p < 0.001 is the statistical difference in tumor volume between CSH-DTX and free DTX or between CSH-DTX and the control group.

Along with the therapeutic efficacy of an anticancer drug-loaded delivery system, minimization of side effects remains a big challenge for successful cancer chemotherapy. The change in body weight has been considered an index with which to evaluate systemic toxic effects. As shown in Figure 7b, the mouse group treated with free DTX lost a significant amount of body weight: approximately 20% of body weight was lost in this group, indicating systemic toxicity. On the other hand, when the same dose of drug was loaded in the polymeric micelles, no body weight loss was observed, and body weight remained stable throughout the study period. It should be noted that the body weight of the free DTX-treated group started recovering approximately 8 days after the final injection. The body weight recovery might be due to the slow removal of the free drug from the vital organs and its clearance from the systemic circulation. The results therefore indicate that the CS-HS-based micelles effectively reduced drug-related side effects while at the same time improving therapeutic efficacy, as shown by the reduced tumor volume [22].

Histopathological and immunohistochemical analysis

H & E staining was performed on the tumor sections, wherein nuclei were stained with hematoxylin (blue) and the extracellular matrix was stained with eosin (pink). As shown in Figure 8a, the control group exhibited clear cell morphology with excess chromatin and binucleate cells, whereas the free DTX-treated group showed a range of necrosis with irregular cellular morphology. Tissue necrosis was further increased in the CSH-DTX-treated group, with distinct damage to the cancer cells; a lack of nuclei and a lack of boundary regions were observed in this group.

(a) histopathology of tumor sections (b) immunohistochemical analysis of tumor cell proliferation (Ki-67) (c) immunohistochemical analysis of cleaved PARP (apoptosis marker).

Immunohistochemical staining of Ki-67 was performed to evaluate the proliferation of tumors treated with the individual formulations. As seen in Figure 8b, fewer Ki-67-positive cells were present in the CSH-DTX-treated group than in the free DTX-treated group. This further confirms the enhanced accumulation of drug in the tumor tissues from the micellar formulation and the enhanced sensitization of MDR cancer to DTX. PARP, a DNA-binding enzyme, is cleaved by caspase-3 and caspase-7 and is an important indicator of apoptosis in cancer cells. In this study, the level of cleaved PARP was considered a marker of cell apoptosis. As shown in Figure 8c, cleaved PARP was detected in the free DTX-treated group, and it was even more pronounced in the CSH-DTX-treated group. The enhanced apoptosis in the CSH-DTX-treated group was consistent with its excellent antitumor efficacy.

Amphiphilic CS-HS block copolymer-based polymeric micelles were prepared and loaded with DTX to target cholangiocarcinoma. The pH-sensitive behaviour of histamine in the block copolymer accelerates the release of DTX in the tumor region while protecting the therapeutic load under physiological conditions. In the present study, CSH-DTX controlled the release of the drug at physiological pH while rapidly releasing its cargo at tumor-relevant pH (5.0 and 6.8), possibly due to the breakdown of the polymeric micelles.
A nanosize of <150 nm should allow accumulation in the tumor interstitial spaces via the EPR effect. CSH-DTX effectively killed the cancer cells in a time- and concentration-dependent manner and showed more pronounced therapeutic action than the free drug at all time points. The superior cytotoxicity of CSH-DTX might be attributed to the pH-driven release of the active therapeutic molecule in the cell cytoplasm. CSH-DTX resulted in higher apoptosis of cancer cells, with ~30% and ~50% of cells in the early apoptosis quadrant when treated with 100 and 1000 ng/ml of equivalent drug, respectively. The micellar formulation showed a remarkable effect in controlling tumor growth and reduced the overall tumor volume to one fifth that of the control group and half that of the free-drug-treated group, with no sign of drug-related adverse effects. Immunohistochemical analysis of tumor sections showed that fewer Ki-67-positive cells were present in the CSH-DTX-treated group than in the free DTX-treated group. Our data suggest that a nanoformulation of DTX could potentially improve chemotherapy in cholangiocarcinoma as well as in other malignancies.

Docetaxel was procured from Sigma-Aldrich (China). Chondroitin sulphate (CS) was procured from Shanghai Sangon Biological Engineering Technology & Services Co. Ltd. (Shanghai, China). Histamine dihydrochloride (HS) was purchased from LSB Biotechnology Inc. (Xi'an, China). 1-(3-(Dimethylamino)propyl)-3-ethylcarbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS) were obtained from Sigma-Aldrich (China). All other chemicals were of reagent grade and used without further purification.

Synthesis of chondroitin sulphate (CS)-histamine (HS) conjugate

Chondroitin sulphate was conjugated with histamine as reported previously [23]. Briefly, CS was dissolved in 120 ml of phosphate buffered saline (PBS, pH 6.0) and maintained on a magnetic stirrer for 5 h. The carboxyl groups of CS were activated by the addition of specific quantities of EDC and NHS, one after the other. After 30 min, HS was added and the reaction mixture was allowed to proceed for 24 h. The resultant reaction mixture was dialyzed against phosphate buffer, and the process was then repeated with distilled water. Finally, the product was lyophilized and stored in a dark place.

Preparation of docetaxel-loaded polymeric micelles

DTX-loaded CS-b-HS micelles (CSH-DTX) were prepared by a solvent extraction and evaporation method. Briefly, specific quantities of DTX and the CS-HS conjugate were dissolved in 10 ml of dichloromethane. This organic solution was poured into distilled water and immediately sonicated for 5 min to form an O/W emulsion system. The reaction was allowed to proceed for 24 h in the dark.

Particle size and zeta potential analysis

The mean diameter and surface charge were analyzed using the dynamic light scattering technique with a Zetasizer (Nano-ZS 90, Malvern, Worcestershire, UK). The samples were measured at 25°C at a fixed scattering angle of 90°. Each sample was measured in triplicate.

Morphology analysis

The morphological examination of the nanoparticles was carried out using a transmission electron microscope (TEM) (JEM-2010; JEOL, Japan). The nanoparticle dispersion was placed on a carbon-coated copper grid, negatively stained with 2% (w/v) phosphotungstic acid, and air-dried. The morphology and surface texture were further confirmed by scanning electron microscopy (SEM; FEI Nova NanoSEM 230). The samples were freeze-dried and coated with platinum before the SEM analysis.
Drug-loading and encapsulation efficiency

A UV-vis spectrophotometer was used to determine the loading capacity, and the entrapment efficiency of DTX in the CSH micelles was estimated by an HPLC technique (LC 1200; Agilent Technologies, Santa Clara, CA, USA). The mobile phase consisted of acetonitrile and 0.2% trimethylamine (pH adjusted to 6.4 with phosphoric acid) (48:52, v/v) at a flow rate of 1 mL/min. The dried solid samples were dissolved in 1 ml of dichloromethane and sonicated vigorously for 10 min. This solution was centrifuged (20,000 rpm) and the supernatant was collected and injected into the HPLC column. A reverse-phase C18 column (250 mm × 4.6 mm; GL Science, Tokyo, Japan) was used. The mobile phase was run at 1 ml/min and the eluate was detected at 254 nm.

$$ \mathrm{DL}\% = \frac{\text{Total drug added}}{\text{Wt. of polymer} + \text{Wt. of drug in NP}} \times 100\% $$

$$ \mathrm{EE}\% = \frac{\text{Actual drug loading}}{\text{Theoretical drug loading}} \times 100\% $$

Drug release study

The drug release study was carried out in media of various pH. For the release study, the freeze-dried micelles were reconstituted in distilled water, and 1 ml of the dispersion was placed in a dialysis tube clipped at both ends. The dialysis tube was placed in a falcon tube with 30 ml of release medium (at the different pH levels). The whole assembly was placed in a shaking bath at 37°C. At predetermined time intervals, 1 ml of release sample was collected and replaced with an equal amount of fresh release medium. The amount of drug released into the release medium was quantified by the HPLC technique.

QBC939 cholangiocarcinoma cells were cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum (FBS) and a 1% penicillin-streptomycin mixture. The cells were maintained at 37°C in a 5% CO2 atmosphere. Cytotoxicity was measured by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, which is based on the reduction of yellow MTT by mitochondrial succinate dehydrogenase: MTT enters live cells and is reduced to an insoluble formazan complex. For this, QBC939 cells were seeded at a density of 1 × 10⁴ cells per well in a 96-well plate. After 24 h, the cells were exposed to blank polymer, free DTX, and CSH-DTX at different dose levels. The cells were incubated for 24, 48, and 72 h accordingly. At each time point, the plate was removed, 100 μl of MTT solution (5 mg/ml) was added to each well, and the plate was incubated for 4 h. The formazan crystals formed were dissolved by adding DMSO and incubating for an additional 30 min. The absorbance of each plate was read at 570 nm using a microplate reader (Thermo-Fisher, USA). All experiments were repeated 6 times. The morphology of the cells was observed using a fluorescence microscope (Leica DM IRBE microscope) and representative images were selected.

Apoptosis measurement

Hoechst 33258 was used to observe cell apoptosis. During apoptosis, condensation of chromatin takes place and DNA is cleaved into small fragments. Hoechst enters live cells and binds to the adenine-thymine (AT)-rich regions of DNA, while in apoptotic cells it binds to the condensed chromatin. Normal and apoptotic cells therefore differ in size and show distinct morphologies. The drug-treated cells were washed with PBS and stained with Hoechst 33258 for 10 min.
The cells were then washed, fixed with 4% paraformaldehyde, and observed under a fluorescence microscope.

Apoptosis analysis by flow cytometry

The apoptosis assay was carried out by flow cytometry. For this, cells were seeded, incubated for 24 h, and treated with the respective formulations (free DTX and CSH-DTX). The treated cells were further incubated for 24 h, then harvested and washed with PBS. The pellets were resuspended in 100 μl of binding buffer (10 mM HEPES pH 7.4, 150 mM NaCl, 5 mM KCl, 1 mM MgCl2, and 1.8 mM CaCl2). The cells were then treated with FITC-Annexin V and incubated for 20 min, after which PI was added and the cells were incubated for an additional 10 min. The cells were analysed for apoptosis using FACS (Becton Dickinson Biosciences, San Jose, CA, USA).

In vivo antitumor efficacy study

The in vivo antitumor efficacy study was performed in 7-week-old xenograft nude mice. Briefly, 1 × 10⁶ QBC939 cells (in 100 μl PBS) were subcutaneously injected into the right flank of nude mice to establish cholangiocarcinoma tumor models. The tumours were allowed to grow for two weeks until they reached ~150 mm3 in size. The mice were divided equally into 4 groups of 8 mice each: untreated controls, blank micelles, free DTX, and CSH-DTX at a fixed dose of 5 mg/kg. The formulations were injected three times via the tail vein during the first two weeks. The tumor size was measured using a Vernier calliper every other day. Tumor volume was calculated using the formula: volume = 1/2 × Dmax × (Dmin)². Body weight was measured simultaneously as an indicator of systemic toxicity. At the end of the study period, the tumors were surgically removed, fixed in 10% neutral formalin, and embedded in paraffin.

Histopathological and immunohistochemical evaluations

The histopathology of the tumor sections was evaluated by the hematoxylin and eosin (H & E) method. The paraffin-embedded tumors were cut into 5 μm sections, stained with H & E, and viewed under a microscope (Nikon TE2000U). For immunohistochemical analysis, a rabbit monoclonal primary antibody against cleaved poly-ADP-ribose polymerase (PARP) (Abcam, Cambridge, MA, USA) and a rat anti-mouse Ki-67 monoclonal antibody (Maixin Biotechnology Co., Ltd) to quantify Ki-67 expression were used in the study.

The experimental data are presented as the mean ± standard deviation (SD). All statistical analyses were performed using ANOVA or a two-tailed Student's t-test (GraphPad Prism 5).

CS: chondroitin sulphate; DTX: docetaxel

1. Khan SA, Davidson BR, Goldin RD, Heaton N, Karani J, Pereira SP, et al. Guidelines for the diagnosis and treatment of cholangiocarcinoma: an update. Gut. 2012;61:1657–69.
2. Patel T. Cholangiocarcinoma-controversies and challenges. Nat Rev Gastroenterol Hepatol. 2011;8:189–200.
3. Vauthey JN, Blumgart LH. Recent advances in the management of cholangiocarcinomas. Semin Liver Dis. 1994;14:109–14.
4. Patel T. Increasing incidence and mortality of primary intrahepatic cholangiocarcinoma in the United States. Hepatology. 2001;33:1353–7.
5. Matull WR, Khan SA, Pereira SP. Impact of classification of hilar cholangiocarcinomas (Klatskin tumors) on incidence of intra- and extrahepatic cholangiocarcinoma in the United States. J Natl Cancer Inst. 2006;21:873–5.
6. Parkin DM, Srivatanakul P, Khlat M, Chenvidhya D, Chotiwan P, Insiripong S, et al. Liver cancer in Thailand. I. A case–control study of cholangiocarcinoma. Int J Cancer. 1991;48:323–8.
7. Seehofer D, Kamphues C, Neuhaus P. Management of bile duct tumors. Expert Opin Pharmacother. 2008;9:2843–56.
8. Khan SA, Thomas HC, Davidson BR, Robinson SDT. Cholangiocarcinoma. Lancet. 2005;366:1303–10.
9. Tong R, Cheng J. Anticancer polymeric nanomedicines. Polym Rev. 2007;3:345–81.
10. Mura S, Nicolas J, Couvreur P. Stimuli-responsive nanocarriers for drug delivery. Nat Mater. 2013;12:991–1003.
11. Murakami M, Cabral H, Matsumoto Y, Wu S, Kano MR, Yamori T, et al. Improving drug potency and efficacy by nanocarrier-mediated subcellular targeting. Sci Transl Med. 2011;3:64ra2.
12. Owens III DE, Peppas NA. Opsonization, biodistribution, and pharmacokinetics of polymeric nanoparticles. Int J Pharm. 2006;307:93–102.
13. Wu XL, Kim JH, Koo H, Bae SM, Shin H, Kim MS, et al. Tumor-targeting peptide conjugated pH-responsive micelles as a potential drug carrier for cancer therapy. Bioconjug Chem. 2010;21:208–13.
14. Lv Y, Ding G, Zhai J, Guo Y, Nie G, Xu L. A superparamagnetic Fe3O4-loaded polymeric nanocarrier for targeted delivery of evodiamine with enhanced antitumor efficacy. Colloids Surf B Biointerfaces. 2013;110:411–8.
15. Li F, Na K. Self-assembled chlorin e6 conjugated chondroitin sulfate nanodrug for photodynamic therapy. Biomacromolecules. 2011;12:1724–30.
16. Lundberg P, Lynd NA, Zhang Y, Zeng X, Krogstad DV, Paffen T, et al. pH-triggered self-assembly of biocompatible histamine-functionalized triblock copolymers. Soft Matter. 2013;9:82–9.
17. Huang ZJ, Yang N, Xu TW, Lin JQ. Antitumor efficacy of docetaxel-loaded nanocarrier against esophageal cancer cell bearing mice model. Drug Res (Stuttg). 2014. [Epub ahead of print].
18. Noori Koopaei M, Khoshayand MR, Mostafavi SH, Amini M, Khorramizadeh MR, Jeddi Tehrani M, et al. Docetaxel loaded PEG-PLGA nanoparticles: optimized drug loading, in-vitro cytotoxicity and in-vivo antitumor effect. Iran J Pharm Res. 2014;13:819–33.
19. Matsumura Y. Preclinical and clinical studies of NK012, an SN-38-incorporating polymeric micelles, which is designed based on EPR effect. Adv Drug Deliv Rev. 2011;63:184–92.
20. Yu S, Wu G, Gu X, Wang J, Wang Y, Gao H, et al. Magnetic and pH-sensitive nanoparticles for antitumor drug delivery. Colloids Surf B Biointerfaces. 2013;103:1522.
21. Ramasamy T, Kim JH, Choi JY, Tran TH, Choi HG, Yong CS, et al. pH sensitive polyelectrolyte complex micelles for highly effective combination chemotherapy. J Mater Chem B. 2014;2:6324.
22. Danhier F, Feron O, Preat V. To exploit the tumor microenvironment: passive and active tumor targeting of nanocarriers for anti-cancer drug delivery. J Controlled Release. 2010;148:135–46.
23. Knight V, Koshkina NV, Waldrep JC, Giovanella BC, Gilbert BE. Anticancer effect of 9-nitrocamptothecin liposome aerosol on human cancer xenografts in nude mice. Cancer Chemother Pharmacol. 1999;44:177–86.

This work was supported by grants from the National Natural Science Foundation of China (no. 81000994), the Beijing Municipal Science and Technology Commission (no. Z121107001012080), and the Beijing Nova Program (no. Z131107000413104).

The Second Department of Oncology, The First Affiliated Hospital of the General Hospital of the PLA, Beijing, 100048, China: Nan Du, Lin-Ping Song, Xiao-Song Li, Ling Wan, Hong-Ying Ma & Hui Zhao
Department of Medical, The First Affiliated Hospital of the General Hospital of the PLA, No. 51 Fucheng Road, Haidian District, Beijing, 100048, China

Correspondence to Lei Wang.

LW guided and wrote the whole manuscript.
ND and LPS prepared the formulations and performed the in vitro experiments. XSL and LW performed all the biological and pharmacological experiments. HYM and HZ carried out the in vivo anticancer efficacy study. All authors read and approved the final manuscript. Nan Du and Lin-Ping Song contributed equally to this work.

Du, N., Song, L., Li, X. et al. Novel pH-sensitive nanoformulated docetaxel as a potential therapeutic strategy for the treatment of cholangiocarcinoma. J Nanobiotechnol 13, 17 (2015). doi:10.1186/s12951-015-0066-8

Keywords: Polymeric micelles; Cancer chemotherapy
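As an editorial aside (not part of the published article), two of the simple numerical pieces of this study can be reproduced in a few lines of Python: the caliper-based tumor-volume formula quoted in the methods above (volume = 1/2 × Dmax × Dmin²) and a tumor growth inhibition (TGI) summary derived from the approximate final volumes reported in the results (~2500, ~1300, and ~600 mm3). The caliper readings in the sketch are hypothetical, and the TGI calculation itself is not one the authors report.

```python
# Tumor volume from caliper measurements, using the formula quoted in the methods:
# volume = 1/2 * Dmax * Dmin^2 (dimensions in mm, volume in mm^3).
def tumor_volume(d_max_mm: float, d_min_mm: float) -> float:
    return 0.5 * d_max_mm * d_min_mm ** 2

# Example with hypothetical caliper readings (not data from the study):
print(round(tumor_volume(16.0, 12.0)))  # ~1152 mm^3

# Tumor growth inhibition (TGI) from the approximate final volumes reported above.
final_volumes = {"control": 2500.0, "blank micelles": 2500.0,
                 "free DTX": 1300.0, "CSH-DTX": 600.0}
control = final_volumes["control"]
for group, volume in final_volumes.items():
    tgi = (1.0 - volume / control) * 100.0  # % inhibition relative to control
    print(f"{group:>14}: {volume:6.0f} mm^3, TGI ~ {tgi:3.0f}%")
```

On these rounded figures, free DTX corresponds to roughly 48% growth inhibition and CSH-DTX to roughly 76%, consistent with the qualitative comparison made in the results.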
VULCAN AND COMETS RELATED SITES NEW SOLAR PLANETS
19 Nov. 2021 © Copyright - Only Italicized Comments
RELATING TO A VULCAN LIKE PLANET RELATING TO OTHER NEW SOLAR PLANETS KUIPER BELT OBJECTS & COMETS

Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations. But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it. - Gautama Buddha

RELATED SITES LISTINGS

SCIENTISTS MAY HAVE ACCIDENTALLY SPOTTED AN EXTRA PLANET IN OUR SOLAR SYSTEM - Nov 16, 2021
Okay, that's fascinating. I have carried out a search for Planet 9 in the IRAS data. At the distance range proposed for Planet 9, the signature would be a 60 micron unidentified IRAS point source with an associated nearby source from the IRAS Reject File of sources which received only a single hours-confirmed (HCON) detection. The confirmed source should be detected on the first two HCON passes, but not on the third, while the single HCON should be detected only on the third HCON. I have examined the unidentified sources in three IRAS 60micron catalogues: some can be identified with 2MASS galaxies, Galactic sources or as cirrus. The remaining unidentified sources have been examined with the IRSA Scanpi tool to check for the signature missing HCONs, and for association with IRAS Reject File single HCONs. No matches of interest survive. For a lower mass planet (< 5 earth masses) in the distance range 200-400 AU, we expect a pair or triplet of single HCONs with separations 2-35 arcmin. Several hundred candidate associations are found and have been examined with Scanpi. A single candidate for Planet 9 survives which satisfies the requirements for detected and non-detected HCON passes. A fitted orbit suggests a distance of 225+/-15 AU and a mass of 3-5 earth masses. Dynamical simulations are needed to explore whether the candidate is consistent with existing planet ephemerides. If so, a search in an annulus of radius 2.5-4 deg centred on the 1983 position at visible and near infrared wavelengths would be worthwhile.

PLANET 9: OLD DATA COULD UNCOVER THE SOLAR SYSTEM'S NEWEST WORLD, 38 YEARS LATE - 11.16.2021
There's a controversy in planetary science - some researchers propose that the Solar System has a ninth planet with a distant orbit around the Sun. The talk of Planet Nine first emerged in January 2015 when a duo of astronomers from the California Institute of Technology (Caltech) suggested that a Neptune-sized planet orbits our Sun in a highly elongated orbit that lies far beyond Pluto. I have carried out a search for Planet 9 in the IRAS data. At the distance range proposed for Planet 9, the signature would be a 60 micron unidentified IRAS point source with an associated nearby source from the IRAS Reject File of sources which received only a single hours-confirmed (HCON) detection. The confirmed source should be detected on the first two HCON passes, but not on the third, while the single HCON should be detected only on the third HCON. I have examined the unidentified sources in three IRAS 60micron catalogues: some can be identified with 2MASS galaxies, Galactic sources or as cirrus.
The remaining unidentified sources have been examined with the IRSA Scanpi tool to check for the signature missing HCONs, and for association with IRAS Reject File single HCONs. No matches of interest survive. For a lower mass planet (< 5 earth masses) in the distance range 200-400 AU, we expect a pair or triplet of single HCONs with separations 2-35 arcmin. Several hundred candidate associations are found and have been examined with Scanpi. A single candidate for Planet 9 survives which satisfies the requirements for detected and non-detected HCON passes. A fitted orbit suggests a distance of 225+/-15 AU and a mass of 3-5 earth masses. Dynamical simulations are needed to explore whether the candidate is consistent with existing planet ephemerides. If so, a search in an annulus of radius 2.5-4 deg centred on the 1983 position at visible and near infrared wavelengths would be worthwhile.

* MORE THAN 800 MINOR OBJECTS ARE SPOTTED BEYOND NEPTUNE IN DISCOVERY THAT COULD HELP IN THE SEARCH FOR THE MYSTERIOUS PLANET NINE - Sep 17, 2021
Their search has yielded 815 trans-Neptunian objects (TNOs), with 461 objects reported for the first time in a new pre-print research paper. TNOs are so-called because they're further out than any minor planet or dwarf planet in the solar system with an orbit beyond Neptune. It was only last month that another team of experts plotted Planet Nine's likely location, roughly 46.5 billion miles away from the Sun.

NEW STUDY REVEALS HIGH PROBABILITY OF NINTH PLANET IN SOLAR SYSTEM - Sep 8, 2021
Youtube censors comments

* NEW STUDY REVEALS HIGH PROBABILITY OF NINTH PLANET IN SOLAR SYSTEM - Sep 8, 2021
NBC does not censor comments. Guess so, cause mine is still there.

* IF PLANET 9 IS OUT THERE, HERE'S WHERE TO LOOK - Aug 30, 2021
Orbital oddness doesn't prove a planet exists. Just ask Planet Vulcan. Others went so far as to argue Planet 9 does exist, but we can't see it because it's a primordial black hole.

* PLANET NINE: SCIENTISTS MAP ITS LIKELY LOCATION - Aug 27, 2021
From their calculations taken from Kuiper Belt Objects, the scientists came up with approximate figures for Planet Nine. They estimate Planet Nine to be about 6.2 Earth masses, with an orbit that takes it from 300 astronomical units (AU, with 1 AU being the distance from Earth to the sun) out to 380 AU from the sun. Planet Nine's orbital inclination, or how much it tilts away from the plane of the solar system, is around 16 degrees. Compare that to Earth's orbital inclination, which is zero degrees, and Pluto's, which is 17 degrees.

* THE SEARCH FOR PLANET NINE: IS PLANET NINE FINALLY DEAD? - Feb 16, 2021

* 'IT DOESN'T EXIST': NEW STUDY CHALLENGES ELUSIVE PLANET X THEORY - 16.02.2021

THIS ALIEN WORLD COULD HELP US FIND PLANET NINE IN OUR OWN SOLAR SYSTEM - December 17, 2020
The gas giant orbits a pair of stars in a far-off system.

* THE BIG WOBBLE - 12 Oct 2020
"Astronomers are so sure of the 10th planet they think there is nothing left to do but to name it." - Ray T. Reynolds, NASA (NASA press release, 1992). NASA have nicknamed it "Planet Nine," which could have a mass about 10 times that of Earth and orbit about 20 times farther from the Sun on average than Neptune.

* DOES THE SUN HAVE A TWIN? - OUR SOLAR SYSTEM MAY HAVE ONCE BEEN A BINARY STAR SYSTEM - Aug 20, 2020
A twin of the Sun, formed billions of years ago as the Solar System took shape, could help explain movements of bodies at the outer reaches of our family of planets, a new study suggests.
Astronomers from The Harvard-Smithsonian Center for Astrophysics believe that if a ninth planet is discovered beyond the orbit of Pluto, its movements could help us better understand how the Sun and our planetary neighborhood formed.

* COULD PLANET 9 BE A PRIMORDIAL BLACK HOLE? - Aug 24, 2020
IS VULCAN A CHTHONIAN PLANET THAT HAS NOT HAD ITS GASEOUS ATMOSPHERE STRIPPED AWAY?

Chthonian planet - Wikipedia
These exoplanets are orbiting very close to their stars and could be the remnant cores of evaporated gas giants or brown dwarfs. If cores are massive enough they could remain compressed for billions of years despite losing the atmospheric mass. As there is a lack of gaseous "hot-super-Earths" between 2.2 and 3.8 Earth-radii exposed to over 650 Earth incident flux, it is assumed that exoplanets below such radii exposed to such stellar fluxes could have had their envelopes stripped by photoevaporation. HD 209458 b is an example of a gas giant that is in the process of having its atmosphere stripped away, though it will not become a chthonian planet for many billions of years, if ever. A similar case would be Gliese 436b, which has already lost 10% of its atmosphere.

Possible examples
Transit-timing variation measurements indicate for example that Kepler-52b, Kepler-52c and Kepler-57b have maximum-masses between 30 and 100 times the mass of Earth (although the actual masses could be much lower); with radii about 2 Earth radii[2], they might have densities larger than that of an iron planet of the same size. These exoplanets are orbiting very close to their stars and could be the remnant cores of evaporated gas giants or brown dwarfs. If cores are massive enough they could remain compressed for billions of years despite losing the atmospheric mass. As there is a lack of gaseous "hot-super-Earths" between 2.2 and 3.8 Earth-radii exposed to over 650 Earth incident flux, it is assumed that exoplanets below such radii exposed to such stellar fluxes could have had their envelopes stripped by photoevaporation.[5]

HD 209458 b, also given the nickname Osiris,[2] is an exoplanet that orbits the solar analog HD 209458 in the constellation Pegasus, some 159 light-years from the Solar System. The radius of the planet's orbit is 7 million kilometres, about 0.047 astronomical units, or one eighth the radius of Mercury's orbit. This small radius results in a year that is 3.5 Earth days long and an estimated surface temperature of about 1,000 °C (about 1,800 °F). Its mass is 220 times that of Earth (0.69 Jupiter masses) and its volume is some 2.5 times greater than that of Jupiter. The high mass and volume of HD 209458 b indicate that it is a gas giant. HD 209458 b represents a number of milestones in extraplanetary research. It was the first of many categories:
- a transiting extrasolar planet
- the first planet detected through more than one method
- an extrasolar planet known to have an atmosphere
- an extrasolar planet observed to have an evaporating hydrogen atmosphere
- an extrasolar planet found to have an atmosphere containing oxygen and carbon
- one of the first two extrasolar planets to be directly observed spectroscopically
- the first extrasolar gas giant to have its superstorm measured
- the first planet to have its orbital speed measured, determining its mass directly.
Based on the application of new, theoretical models, as of April 2007, it is thought to be the first extrasolar planet found to have water vapor in its atmosphere.
In July 2014, NASA announced finding very dry atmospheres on HD 209458 b and two other exoplanets (HD 189733 b and WASP-12b) orbiting Sun-like stars.

Gliese 436 b
In 2019, USA Today reported that the exoplanet's burning ice continued to have scientists "flabbergasted." Its main constituent was initially predicted to be hot "ice" in various exotic high-pressure forms,[13][15] which would remain solid despite the high temperatures, because of the planet's gravity. The planet could have formed further from its current position, as a gas giant, and migrated inwards with the other gas giants. As it arrived in range, the star would have blown off the planet's hydrogen layer via coronal mass ejection. However, when the radius became better known, ice alone was not enough to account for it. An outer layer of hydrogen and helium up to ten percent in mass would be needed on top of the ice to account for the observed planetary radius. This obviates the need for an ice core. Alternatively, the planet may be a super-Earth.

* ELUSIVE 'PLANET NINE' THOUGHT TO BE LURKING IN THE OUTER SOLAR SYSTEM MAY ACTUALLY BE A GRAPEFRUIT-SIZED BLACK HOLE - AND A NEW TELESCOPE WILL HELP ASTRONOMERS CONFIRM THE THEORY - 13 July, 2020

* BEYOND PLUTO: THE HUNT FOR OUR SOLAR SYSTEM'S NEW NINTH PLANET - 28 JUN 2020
You'd think that if you found the first evidence that a planet larger than the Earth was lurking unseen in the furthest reaches of our solar system, it would be a big moment. It would make you one of only a small handful of people in all of history to have discovered such a thing. But for astronomer Scott Sheppard of the Carnegie Institution for Science in Washington DC, it was a much quieter affair. "It wasn't like there was a eureka moment," he says. "The evidence just built up slowly." He's a master of understatement. Ever since he and his collaborator Chad Trujillo of Northern Arizona University first published their suspicions about the unseen planet in 2014, the evidence has only continued to grow. Yet when asked how convinced he is that the new world, which he calls Planet X (though many other astronomers call it Planet 9), is really out there, Sheppard will only say: "I think it's more likely than unlikely to exist."

* PLANET X: THE RETURN OF OUR SUN'S MYSTERIOUS COMPANION - June 13, 2020
Beyond Belief TV Series Special…Something massive is tugging on the planets of our solar system, and astrophysicists tell us, it is getting closer. Jason Martell deciphers clues left by ancient cultures which reveal that the sun is but one star in a binary system.

* WHY ASTRONOMERS NOW DOUBT THERE IS AN UNDISCOVERED 9TH PLANET IN OUR SOLAR SYSTEM - May 26, 2020

* COULD THEORIZED PLANET 9 BE A PRIMORDIAL BLACK HOLE? RESEARCHERS PROPOSE METHOD TO FIND OUT - MAY 25, 2020
If Planet Nine exists, it's a bit odd that we haven't found it. Several sky surveys are sensitive enough to see a planet of its size. It's possible that the planet is more distant than we expect, or has a lower albedo, but observations are starting to rule some of these out. There is, however, a much more radical idea. What if Planet Nine hasn't been observed because it isn't a planet? What if it is a primordial black hole?

* DOES PLANET NINE ACTUALLY EXIST? MAYBE NOT SAY ASTRONOMERS - 25 May 2020

IF PLANET NINE IS A TINY BLACK HOLE, THIS IS HOW TO FIND IT - May 7, 2020
Our best bet could be to send a swarm of nanospacecraft - propelled from Earth by a powerful laser - to take a look. Now, astronomers have a similar puzzle on their hands.
For some time, they have been gathering evidence that a massive planet must be orbiting the sun at a distance of around 500 astronomical units, or 70 billion kilometers. But there is certainly motivation to try. The discovery of a black hole orbiting the sun would be quite a prize for whoever undertook such a task. Indeed, it may be the last chance to discover a significant new body orbiting our star.

* PLANET NINE IS A MIRAGE ACCORDING TO EXPERTS WHO SAY IT IS A SPRAWLING DISK OF ICY DEBRIS AND NOT AN UNSEEN PLANET - May 6, 2020

* NOT A PLANET, BUT NOT A STAR - ALL ABOUT BROWN DWARFS - Feb 03, 2020

PLANET 9 MAY HAVE ALREADY BEEN FOUND, STUDY SUGGESTS - Nov. 11, 2019
Since its launch in April 2018, NASA's Transiting Exoplanet Survey Satellite (TESS) has found a number of exoplanets, including a so-called "missing link" and an exoplanet with three suns. But a new study suggests the $200 million satellite may have also discovered the mysterious Planet 9. Since TESS is able to detect objects at approximately 5 pixel displacement and Planet Nine "has an expected magnitude of 19 < V < 24," the possibility is raised "that TESS could discover it!" the authors wrote in the study.

* IS PLANET 9 A BLACK HOLE? WITH DR. JAKUB SCHOLTZ AND DR. JAMES UNWIN - Oct 31, 2019

What if Planet 9 is a Primordial Black Hole? - 24 Sep 2019
We highlight that the anomalous orbits of Trans-Neptunian Objects (TNOs) and an excess in microlensing events in the 5-year OGLE dataset can be simultaneously explained by a new population of astrophysical bodies with mass several times that of Earth (M⊕). We take these objects to be primordial black holes (PBHs) and point out the orbits of TNOs would be altered if one of these PBHs was captured by the Solar System, in line with the Planet 9 hypothesis. Capture of a free floating planet is a leading explanation for the origin of Planet 9 and we show that the probability of capturing a PBH instead is comparable. The observational constraints on a PBH in the outer Solar System significantly differ from the case of a new ninth planet. This scenario could be confirmed through annihilation signals from the dark matter microhalo around the PBH.

* WHAT IF PLANET 9 WAS A PRIMORDIAL BLACK HOLE? COULD WE DETECT IT? (Night Sky News October 2019) - Oct. 23, 2019

* IS PLANET NINE STILL OUT THERE? - DiscoverMagazine.com September/October issue

* PLANET NINE COULD BE A PRIMORDIAL BLACK HOLE, NEW RESEARCH SUGGESTS - Sep 30, 2019

* ENIGMATIC PLANET X AT LEAST 45 BILLION KM AWAY FROM SUN MIGHT BE DENSE BLACK HOLE, PHYSICISTS SAY - 29.09.2019

* PLANET NINE: HOW WE'LL FIND THE SOLAR SYSTEM'S MISSING PLANET - Sep. 23, 2019

* DOES PLANET 9 EXIST? - Sep 13, 2019
A planet has been predicted to orbit the sun with a period of 10,000 years, a mass 5x that of Earth, on a highly elliptical and inclined orbit. What evidence supports the existence of such a strange object at the edge of our solar system? Huge thanks to: Prof. Konstantin Batygin, Caltech; Prof. David Jewitt, UCLA. I had heard about Planet 9 for a long time but I wondered what sort of evidence could support the bold claim: a planet at the very limits of our ability to detect one, so far out that its period is over 60 times that of Neptune. The Planet 9 hypothesis helps explain the clustering of orbits of distant Kuiper belt objects. It also explains how some of these objects have highly inclined orbits - up to 90 degrees relative to the plane of the solar system. Some are orbiting in reverse.
Plus their orbits are removed from the orbit of Neptune, the logical option for a body that could have ejected them out so far. The fact that the perihelion is so far out suggests another source of gravity was essential for their peculiar orbits.

THE SUN MAY HAVE STARTED ITS LIFE WITH A BINARY COMPANION - August 18, 2020
A new theory published today in the Astrophysical Journal Letters by scientists from Harvard University suggests that the sun may once have had a binary companion of similar mass. If confirmed, the presence of an early stellar companion increases the likelihood that the Oort cloud was formed as observed and that Planet Nine was captured rather than formed within the solar system.

* PLANET NINE FROM OUTER SPACE - KONSTANTIN BATYGIN, AT USI - Jul 2, 2019
At the edge of the solar system, beyond the orbit of Neptune, lies an expansive field of icy debris known as the Kuiper belt. The orbits of the individual asteroid-like bodies that make up the Kuiper belt trace out highly elongated elliptical paths, and require hundreds to thousands of years to complete a single revolution around the Sun. Remarkably, the most distant members of this belt exhibit an anomalous orbital alignment. In this talk, I will argue that the observed clustering of the distant solar system is maintained by a yet-unseen, Neptune-like 9th planet. The existence of such a planet naturally explains other, seemingly unrelated dynamical features of the solar system.

* PLANET NINE: ASTRONOMER ALAN DUFFY ON THE SEARCH FOR MEGA WORLD - June 20, 2019
An enormous and undiscovered planet is making waves in our neighbourhood and astronomers are frantically trying to find it. Astronomers across the globe are searching for an enormous undiscovered world dubbed Planet Nine, which could change everything we think we know about the solar system. They know it's out there, such is its size — potentially 10 times bigger than earth — and its powerful gravitational pull. But Planet Nine is proving impossible to find, demonstrating how much we still don't know about what's in our own world's backyard.

PLANET NINE AND SOLAR SYSTEM MYTHS - May 10, 2019

* Eugene Bagashov: Planet Nine And Solar System Myths - May 8, 2019

* PLANET 9 BREAKTHROUGH! HIDDEN SUPER SIZED PLANET MAY BE FOUND SOON, BIZARRE SOLAR ECLIPSE - Mar 9, 2019

* A MAP TO PLANET NINE: HUNTING OUR SOLAR SYSTEM'S MOST DISTANT WORLDS - Mar 6, 2019
Last December, a trio of astronomers set the record for the most distant object ever discovered in the solar system. Because the small world is located about three times farther from the sun than Pluto, the researchers dubbed it Farout. Now, not to be outdone (even by themselves), the same group of boundary pushers have announced the discovery of an even more far-flung object. And since the new find sits a couple billion miles farther out than Farout, the team has fittingly nicknamed it Farfarout. The discovery of Farfarout, which is about 140 astronomical units from the sun (where 1 AU equals the distance between Earth and the sun), is quite impressive in its own right. But Farfarout and its nearer sibling are not just record-breakers, they could be trend-setters. Depending on how their orbits shake out, the two may add to a growing pile of evidence that hints at the existence of an elusive super-Earth lurking in the fringes of our solar system: Planet Nine.
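Several of the entries in this list quote orbital periods and distances for a hypothetical distant planet (for example, the 300-380 AU orbit estimated above, a predicted period of roughly 10,000 years, and the ~4969-year 'Vulcan' period discussed further below). For a small body orbiting the Sun, Kepler's third law ties period and semi-major axis together as P² ≈ a³ (P in years, a in AU), so these figures can be cross-checked in a few lines of Python. The sketch below is an editorial illustration and is not taken from any of the cited articles.

```python
def period_from_a(a_au: float) -> float:
    """Orbital period in years for a small body orbiting the Sun, given a in AU."""
    return a_au ** 1.5

def a_from_period(period_years: float) -> float:
    """Semi-major axis in AU, given the orbital period in years."""
    return period_years ** (2.0 / 3.0)

# A 300-380 AU orbit (one published Planet Nine estimate) implies a period of:
print(round(period_from_a(300)), round(period_from_a(380)))  # ~5196 and ~7408 years
# A 10,000-year period would correspond to a semi-major axis of roughly:
print(round(a_from_period(10_000)))                          # ~464 AU
# The ~4969-year period quoted for 'Vulcan' later in this list corresponds to:
print(round(a_from_period(4969)))                            # ~291 AU
```

None of this proves or disproves any particular claim; it only shows whether a quoted period and a quoted distance are mutually consistent under standard orbital mechanics.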
AN ANCIENT STELLAR FLYBY COULD HAVE PUT PLANET NINE INTO ITS DISTANT ORBIT - March 4, 2019
d*d PLANET 9 HYPOTHESIS GETS A BOOST - March 3, 2019
* NEW EVIDENCE SUPPORTS EXISTENCE OF HYPOTHETICAL PLANET NINE - 02.28.2019
* Evidence Of A Ninth Planet - Jan 20, 2016
* MORE SUPPORT FOR PLANET NINE - FEBRUARY 27, 2019 "My favorite characteristic of the Planet Nine hypothesis is that it is observationally testable," Batygin says. "The prospect of one day seeing real images of Planet Nine is absolutely electrifying. Although finding Planet Nine astronomically is a great challenge, I'm very optimistic that we will image it within the next decade."
* DOES PLANET NINE EXIST? FEATURING DR. KONSTANTIN BATYGIN - Feb 23, 2019 An exploration into the concept that a ninth, undiscovered planet may lurk in the outer solar system. We speak with Dr. Konstantin Batygin on the search for Planet 9, and how he and his partner Mike Brown might discover it within only the next year or two.
Astronomers Discover Solar System's Most Distant Object, Nicknamed 'FarFarOut' - Feb. 21, 2019 Like Farout, FarFarOut's orbit is not yet known; until it is, it's uncertain whether it will stay far enough away from the rest of the solar system to be free of the giant planets' gravitational tug. If it does, the two could join another of Sheppard's recent distant discoveries, "the Goblin," which dovetails with projections of Planet Nine's possible orbit. It will take several years to determine the orbits of Farout and FarFarOut, and whether they will provide more clues.
* STAR-HOP FROM ORION TO PLANET 9 - January 28, 2019
* MASSIVE DISK OF TRANS-NEPTUNIAN OBJECTS CASTS DOUBT ON PLANET NINE - Jan 23, 2019 In 2016, Caltech astronomers showed that six distant solar system objects (2007 TG422, 2013 RF98, 2004 VN112, 2012 VP113, 2012 GB174 and a minor planet called Sedna) possessed orbital features indicating they were affected by the gravity of an as-yet-undetected planet, nicknamed Planet Nine. According to University of Cambridge researcher Antranik Sefilian and American University of Beirut's Professor Jihad Touma, the strange orbits of those objects can instead be explained by the combined gravitational force of a disk of trans-Neptunian objects (TNOs) with a combined mass as much as 10 times that of Earth.
* NEW STUDY CASTS DOUBT ON PLANET NINE HYPOTHESIS - January 22, 2019
* IS PLANET NINE JUST A RING OF ICY BODIES? - January 22, 2019
* DOES 'PLANET NINE' EXIST? ASTRONOMERS SAY AN UNSEEN DISK OF ICY SPACE ROCKS MAY EXPLAIN THINGS - January 22, 2019
* MYSTERY ORBITS IN OUTERMOST REACHES OF SOLAR SYSTEM NOT CAUSED BY 'PLANET NINE', SAY RESEARCHERS - 21 Jan 2019
* = Comment added, d*d = Delayed Comment Deletion. Evidence offered that Vulcan/Planet Nine may have been found by IRAS in 1983 and the find suppressed. See Vulcan's Orbital Parameters (Via Blavatsky's Theosophy & Astronomer Forbes)
* PLANET NINE MAY NOT EXIST, SAY SCIENTISTS, AFTER FINDING SIMPLER EXPLANATION - 21 Jan 2019
* ASTRONOMERS FIND SUN'S LOST COMPANION, GIANT WAVES ENGULF BUILDINGS, CALIFORNIA'S FIERY APOCALYPSE - Nov. 21, 2018 Skywatch Media News
* Signs Of Planet X Location As Scientists Reveal Evidence Of Wandering Dwarf Star - Nov. 13, 2018
* Ground Shaking Meteor Explosion Over South Africa, Violent Cosmic Collision In The Asteroid Belt - Jan. 23, 2019
* MYSTERY ORBITS IN OUTERMOST REACHES OF SOLAR SYSTEM NOT CAUSED BY 'PLANET NINE' - January 21, 2019
* PLANET NINE MAY NOT EXIST, SO WHAT IS BEHIND SOLAR SYSTEM CHAOS? SHOCK NEW THEORY EMERGES - January 21, 2019
POPE'S ASTRONOMER SAYS NO CONFLICT BETWEEN FAITH AND SCIENCE - 23 Dec. 2018
MORE EVIDENCE THAT PLANET 9X — AT LEAST SEVEN TIMES EARTH'S MASS — IS HIDDEN IN KUIPER BELT! - Oct. 30, 2018 "It's seven times the mass of our planet and it's 500 times further away from the sun than Earth."
* NEW DISCOVERY STRENGTHENS THE CASE FOR ELUSIVE PLANET 9 - Oct. 3, 2018 The biggest problem with Planet 9 is that we haven't found it. The biggest problem for those who want to argue it doesn't exist is that we keep finding evidence it does. The most recent piece — rock? — of evidence is 2015 TG387, colloquially known as "The Goblin." What makes the Goblin so interesting is what it doesn't do: namely, interact with other planets in the solar system. It never comes close enough to Jupiter, Saturn, Uranus, or Neptune to be gravitationally influenced by them. Yet its orbit around the solar system shows that it's clearly being influenced by something.
DWARF PLANET TG387 POINTS TO LARGER PLANET X IN OUR SOLAR SYSTEM - Oct. 2, 2018
Warmkessel's Comment: Note that the orbital period of 2015 TG387 is in an 8:1 orbital resonance with Forbes' Planet's (5000 +/- 200 years) / Vulcan's (4969.0 +/- 11.5 years) orbital period:
40000/5000 = 8.000 [Forbes' Planet: an exact integer value. - Barry W.]
40000/4969 = 8.050 [Vulcan: close to an integer value. - Barry W.]
Both suggest that 2015 TG387 is in an 8:1 orbital resonance with Vulcan. If so, future refinements of 2015 TG387's orbit may approach 39792 years.
Vulcan Revealed
Vulcan's Orbital Parameters (Via Blavatsky's Theosophy & Astronomer Forbes)
The chances of these three vectors fortuitously [accidentally] falling in the same orbit plane inclination with such a tiny angular uncertainty as Vulcan's is about one chance in 250 [1/250 or 0.4%]. Put another way, there are about 249 chances out of 250 [99.6%] that the body described in Table 2 is Blavatsky's 'Vulcan'. Vulcan/Planet Nine's existence was either missed or suppressed, with the NEWS media releases suggesting the latter (Mystery Heavenly Body Perhaps As Big As Jupiter - O'Toole). This Vulcan/Planet Nine correlation indicates the following:
1. The very low probability of fortuitous occurrence (0.004) strongly suggests that our solar body Vulcan/Planet Nine is at or is very near one of the first nine IRAS Objects detected (1732+239).
2. Assuming that Vulcan has been photographed, a simple way to find which of the many objects on the appropriate photographic plate (the plate that contains the cited IRAS object 1732+239) is Vulcan is to re-photograph this region of the sky and do a 'blink' comparison using this and the new photograph. The stars will remain fixed; Vulcan will have moved on to a new location.
3. This very low probability of fortuitous occurrence suggests that IRAS point 1732+239 was moving (past aphelion but "not incoming mail"). The IRAS satellite was targeted based on the Pioneer and Voyager spacecraft results; thus a body (Almost As Big As Jupiter), as depicted on Sitchin's Akkadian seal, was found. This IRAS point was the probable cause of the NEWS media releases.
4. The U. of AZ astronomers estimating that Planet Nine's orbital inclination could be near 18° or 48°, and Vulcan's 48.44°, suggests that they are one and the same body.
5.
The Spanish astronomers suggesting that Planet Nine's semi major axis in the 300 to 400 AU range re-enforces the suspicion that both it and Vulcan's/Forbes' planet (semi major axis 291.2/292.4 AU) belong to one and the same body because the probability of Fortuitous [accidental] correlation of Vulcan's/Forbes' planet orbital parameters is almost zero (0.00000263). * 'THE GOBLIN': NEW DISTANT DWARF PLANET BOLSTERS EVIDENCE FOR PLANET X - Oct. 3, 2018 The few that we do know of behave in a curious way, though. Though most orbit too far from the giant planets like Jupiter and Neptune to be influenced by their gravity, most distant objects seem to be moving in accordance with some powerful gravitational force in the outer solar system. This planetary harmony was first picked up on by Sheppard and collaborator Chad Trujillo in 2012, when they discovered 2012 VP113, but subsequent discoveries have only bolstered their theory. In essence, their orbits are arranged in such a way that it seems like there's another large planet tugging them into alignment. Called either "Planet X" or "Planet Nine," this still-hypothetical world could remain undiscovered in much the same way The Goblin did. * ASTRONOMERS DISCOVER A DISTANT SOLAR SYSTEM OBJECT - 2015 TG387 - Oct. 3, 2018 * STATE-SIZED OBJECT BEYOND PLUTO HINTS AT HIDDEN PLANET X - Oct. 2, 2018 A newly spotted dwarf planet, 2015 TG387, adds to the mounting evidence that an unseen super-Earth prowls the edge of the solar system. 2015 TG387 makes very elongated, 40,000-year orbits around our star that take it as far as 2,300 AU at its furthest point from the sun. It's actually pretty lucky that astronomers were able to spot it as they say it would be too faint to see for 99 percent of its orbit. * NEW EXTREMELY DISTANT SOLAR SYSTEM OBJECT FOUND DURING HUNT FOR PLANET X - October 2, 2018 "We think there could be thousands of small bodies like 2015 TG387 out on the Solar System's fringes, but their distance makes finding them very difficult," Tholen said. "Currently we would only detect 2015 TG387 when it is near its closest approach to the Sun. For some 99 percent of its 40,000-year orbit, it would be too faint to see." The object was discovered as part of the team's ongoing hunt for unknown dwarf planets and Planet X. It is the largest and deepest survey ever conducted for distant Solar System objects. * DWARF PLANET TG387 POINTS TO LARGER PLANET X IN OUR SOLAR SYSTEM - October 2, 2018 And that larger planet may be Vulcan. WHILE SEEKING PLANET X, ASTRONOMERS FIND A DISTANT SOLAR SYSTEM OBJECT - OCTOBER 2, 2018 "We think there could be thousands of small bodies like 2015 TG387 out on the Solar System's fringes, but their distance makes finding them very difficult," Tholen said. "Currently we would only detect 2015TG387 when it is near its closest approach to the Sun. For some 99 percent of its 40,000-year orbit, it would be too faint to see, even with today's largest telescopes." EXTREME DWARF PLANET – 'THE GOBLIN' – INFLUENCED BY PLANET X? - 2 October 2018 Astronomers searching the outer solar system for dwarf planets and the theorised but elusive Planet X far beyond Pluto have found an extremely remote body that never comes closer to the Sun than 65 astronomical units, or 9.7 billion kilometres (6 billion miles). The 40,000-year orbit is consistent with the presumed gravitational effects of the as-yet-unseen Planet X, a so-called super Earth thought to be lurking even farther away. 
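The orbit reported for 2015 TG387 in the entries above (perihelion near 65 AU, aphelion near 2,300 AU, period of roughly 40,000 years) is internally consistent, which is easy to verify with the standard two-body relations. The Python sketch below is only an illustrative cross-check using the rounded figures quoted here, not values taken from the discovery paper.

    # Consistency check of the reported 2015 TG387 ("the Goblin") orbit using
    # the rounded figures quoted above: q ~ 65 AU, Q ~ 2,300 AU, P ~ 40,000 yr.
    q_au = 65.0      # perihelion distance (AU)
    Q_au = 2300.0    # aphelion distance (AU)

    a_au = (q_au + Q_au) / 2.0           # semi-major axis from perihelion and aphelion
    ecc = (Q_au - q_au) / (Q_au + q_au)  # eccentricity
    period = a_au ** 1.5                 # Kepler's third law, period in years

    print(f"a ~ {a_au:.0f} AU, e ~ {ecc:.2f}, P ~ {period:,.0f} years")

Running this gives a semi-major axis of about 1,180 AU, an eccentricity of about 0.95, and a period of a little over 40,000 years, in line with the quoted ~40,000-year orbit.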
* EXTREMELY DISTANT SOLAR SYSTEM OBJECT DISCOVERED - October 2, 2018 The object with the most-distant orbit at perihelion, 2012 VP113, was also discovered by Sheppard and Trujillo, who announced that find in 2014. The discovery of 2012 VP113 led Sheppard and Trujillo to notice similarities of the orbits of several extremely distant solar system objects, and they proposed the presence of an unknown planet several times larger than Earth -- sometimes called Planet X or Planet 9 -- orbiting the Sun well beyond Pluto at hundreds of AU. THE SEARCH FOR PLANET X GETS A BOOST WITH THE DISCOVERY OF A SUPER DISTANT OBJECT - Oct 2, 2018 TG387 takes a whopping 40,000 years to complete just one orbit around the Sun. And it's on a very elliptical path far from the inner Solar System; the closest it ever gets to the Sun is 65 Astronomical Units (AU), or 65 times the distance between the Sun and the Earth. TG387 is not influenced in any way by the large objects in the inner Solar System. Jupiter, Saturn, Uranus, and Neptune don't have any effect on its orbit. That means if this object was truly batted around by Planet X, it might hold more information about the planet's orbit than other objects do. And when the team ran simulations of the Solar System with a Planet X in it, they found that this object's orbit isn't subject to change. "This one joins an elite group of six objects that are stable," says Batygin. * PLANET NINE MIGHT BE INVISIBLE FOR AT LEAST 1,000 YEARS - Sep. 3, 2018 With the planet thought to orbit so far from Earth-based telescopes, it could be impossible to find with current telescopes. The challenge is that the planet is thought to be so dim, about a million times dimmer than Neptune, that it could hide in the light pollution from the Milky Way. Earth's most potent observatory currently is the Subaru Telescope in Hawaii. That massive telescope can view a field in the sky about the size of 4,000 full moons at once. Even with such a gigantic field of view, Planet Nine is very difficult to observe. Researchers note that if the orbit of the mysterious world is beyond the 1,000 AU limit of current telescopes, it could lie invisible for the next 1,000 years. "PLANET NINE" MIGHT BE INVISIBLE, HIDING BEYOND NEPTUNE, SCIENTISTS THINK - Sep. 2, 2018 IS THERE A MYSTERIOUS PLANET NINE LURKING IN OUR SOLAR SYSTEM BEYOND NEPTUNE? - September 2, 2018 99% OF MODERN SCIENTIFIC PAPERS ARE NOTHING MORE THAN POLITICALLY-MOTIVATED PSEUDOSCIENCE, WARNS SCIENCE PIONEER - August 14, 2018 "People just don't do it," Wharton School professor and forecasting expert J. Scott Armstrong told Brietbart.com after making the shocking claim that less that one percent of papers published in scientific journals follow the scientific method. "I used to think that maybe 10 percent of papers in my field…were maybe useful. Now it looks like maybe, one tenth of one percent follow the scientific method." * PLANET NINE: 'INSENSITIVE' TERM RILES SCIENTISTS - August 1, 2018 CATACLYSMIC' COLLISION SHAPED URANUS' EVOLUTION - July 2, 2018 NO PLANET NINE? COLLECTIVE GRAVITY MIGHT EXPLAIN WEIRD ORBITS AT SOLAR SYSTEM'S EDGE - June 10, 2018 Meanwhile, Madigan, Fleisig and Zderic have explored a new idea about the orbits of these outer solar system bodies. The new calculations show the orbits might be the result of these bodies jostling against each other and debris in that part of space. In that case, no Planet Nine would be needed. Madigan said: There are so many of these bodies out there. What does their collective gravity do? 
We can solve a lot of these problems by just taking into account that question.
OUR OBSESSION WITH HIDDEN PLANETS DIDN'T START WITH PLANET NINE - June 8, 2018 In the early 1880s, astronomers looking for evidence of a ninth planet noticed two comets, 1862 III and 1889 III, whose orbital paths appeared to show the invisible gravitational influence of a trans-Neptunian planet, which might have stretched their orbits into ellipses. Astronomer George Forbes studied the orbits of several more comets and calculated that it would take not just one but two hidden planets to shepherd the comets onto their observed orbital paths. The masses and orbits Forbes predicted actually sound a lot like an early version of the Planet Nine that astronomers are currently searching for. And Forbes wasn't alone. In 1911, Indian astronomer Venkatesh P. Ketakar proposed that Neptune, Uranus, and two hypothetical planets he named Brahma and Vishnu were in an orbital resonance. When two objects' orbital periods are related by a ratio of two whole numbers – Jupiter's moons Io, Europa, and Ganymede are in a 1:2:4 resonance, for instance – their gravitational interactions can have a greater effect than usual.
* PLANET NINE: NEW EVIDENCE STRONG BUT NOT PROOF - May 28, 2018 In a paper on a preprint archive reporting the discovery, a large team of scientists has concluded that 2015 BP519 "adds to the circumstantial evidence for the existence of this proposed new member of the Solar System". "The computer simulations we use takes all objects in the Solar System and evolves them forward or backwards in time, and looks at how the orbits of the objects change over time," lead author Juliette Becker, a PhD student at the University of Michigan, told The Indian Express by email. "When we ran a simulation without Planet Nine, we found it was very hard to make objects like BP519. When we ran a different simulation including Planet Nine, we found that it was very easy to make objects like BP519," she said.
* DOES PLANET NINE EXIST? ASTRONOMERS POINT TO NEW EVIDENCE - May 27, 2018
MYSTERIOUS SPACE ROCK ADDS EVIDENCE FOR A 'PLANET NINE' IN OUR SOLAR SYSTEM, SCIENTISTS CLAIM - May 22, 2018 The space rock, dubbed 2015 BP519, was discovered by researchers led by the University of Michigan
* WEIRD SPACE ROCK PROVIDES MORE EVIDENCE FOR MYSTERIOUS 'PLANET NINE' - May 21, 2018
* NEW EVIDENCE FOR EXISTENCE OF PLANET NINE - May 21, 2018 A large international team of researchers has found what they are describing as more evidence of the existence of Planet Nine. In their paper posted on the arXiv preprint server, the group describes the behavior of a newly discovered distant object as suggestive of an influence of a large planet.
Discovery and Dynamical Analysis of an Extreme Trans-Neptunian Object with a High Orbital Inclination - 14 May 2018 We report the discovery and dynamical analysis of 2015 BP519, an extreme Trans-Neptunian Object detected by the Dark Energy Survey at a heliocentric distance of 55 AU and absolute magnitude Hr = 4.3. The current orbit, determined from a 1110-day observational arc, has semi-major axis a ≈ 450 AU, eccentricity e ≈ 0.92 and inclination i ≈ 54 degrees. With these orbital elements, 2015 BP519 is the most extreme TNO discovered to date, as quantified by the reduced Kozai action, which is a conserved quantity at fixed semi-major axis a for axisymmetric perturbations.
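The resonance arguments that recur on this page (Ketakar's 1:2:4 Laplace-style resonance for Io, Europa and Ganymede quoted above, and the 8:1 ratio suggested earlier for 2015 TG387 against the Forbes/"Vulcan" period) all reduce to asking whether two orbital periods form a ratio close to small whole numbers. The Python sketch below is a minimal illustration of that check using only periods quoted on this page; the tolerance value is an arbitrary choice of mine, and a near-integer ratio by itself is only suggestive, since a true mean-motion resonance also requires the corresponding resonant angle to librate.

    # Minimal near-integer check for orbital period ratios (periods in years,
    # taken from figures quoted on this page). The tolerance is illustrative only.
    P_TG387 = 40_000  # reported orbital period of 2015 TG387

    def nearest_integer_ratio(p_outer, p_inner, tol=0.06):
        """Return (ratio, nearest integer, True if the ratio is within tol of it)."""
        r = p_outer / p_inner
        n = round(r)
        return r, n, abs(r - n) <= tol

    for name, p in [("Forbes' planet", 5_000), ("'Vulcan'", 4_969)]:
        r, n, close = nearest_integer_ratio(P_TG387, p)
        print(f"{P_TG387} / {p} ({name}): {r:.3f} ~ {n}:1, within tolerance: {close}")

This reproduces the 8.000 and 8.050 ratios quoted in the comment on 2015 TG387 earlier on this page.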
MYSTERIOUS OBJECT BEYOND PLUTO GIVES CLUES TO EXISTENCE OF PLANET NINE - May 20, 2018 2015 BP519, orbiting the sun at a 54 degree angle when compared to almost everything else inside the orbit of Pluto, has many astronomers theorizing that Planet Nine is the cause. 2015 BP519 (120216) 2004 EW95
* EVIDENCE OF PLANET NINE POSSIBLY FOUND - May 18, 2018
YES, THERE MIGHT ACTUALLY BE A 'PLANET NINE,' AND THE EVIDENCE IS MOUNTING - May 18, 2018
* A NEW WORLD'S EXTRAORDINARY ORBIT POINTS TO PLANET NINE - May 15, 2018
PLANET 9 PROOF? ANGLO-SAXON MANUSCRIPTS 'CONTAIN EVIDENCE OF ROGUE WORLD IN SOLAR - May 5 2018
* ANCIENT MANUSCRIPTS CONTAIN EVIDENCE OF PLANET NINE CLAIM SCIENTISTS - May 05, 2018 Researchers are combining records of comets identified by Anglo-Saxon astronomers with modern images of space objects, including data acquired from NASA and The Northern Ireland Amateur Astronomy Society. By combining modern data with ancient accounts, scientists think they might narrow down the location of Planet 9. "This research project re-examines the significance and value of medieval science and shows how medieval records of comets can help test the theory of the existence of the elusive 'Planet 9'". "Any strong sign that a 'Planet 9' is needed to fit the comet sightings recorded in the Middle Ages will be a unique result and will certainly have a remarkable influence on our understanding of the solar system."
ANCIENT MANUSCRIPTS CONTAIN EVIDENCE OF PLANET NINE CLAIM SCIENTISTS - ? And since Planet Nine has eluded discovery, scientists from Queen's University believe that ancient depictions of comets in the Dark Ages may provide crucial data on the whereabouts of the mysterious alien world.
* PROOF OF 'PLANET NINE' MAY BE SEWN INTO MEDIEVAL TAPESTRIES - May 4, 2018 Meanwhile, archives back on Earth are home to dozens of medieval records documenting the passage of comets through the heavens. Now, two researchers from Queen's University Belfast in Northern Ireland are hoping to use these old scrolls and tapestries to solve the modern astronomical mystery of Planet Nine. Medieval records could provide another tool, said Pedro Lacerda, a Queen's University astronomer and the other leader of the project. "We can take the orbits of comets currently known and use a computer to calculate the times when those comets would be visible in the skies during the Middle Ages," Lacerda told Live Science. "The precise times depend on whether our computer simulations include Planet Nine. So, in simple terms, we can use the medieval comet sightings to check which computer simulations work best: the ones that include Planet Nine or the ones that do not."
* KONSTANTIN BATYGIN: PLANET NINE FROM OUTER SPACE - Jan. 3, 2018 Some of the astronomers(?) have finally noticed the Vulcan web site circa April 2020.
HIGH WINDS PREVENTED DISCOVERY OF PLANET NINE YET - 27/02/2018
* HIDING IN SPACE? NASA WARMS TO POSSIBILITY OF MYSTERIOUS PLANET NINE - 14.10.2017 "No other model can explain the weirdness of these high-inclination orbits," Batygin said. "It turns out that Planet Nine provides a natural avenue for their generation. These things have been twisted out of the solar system plane with help from Planet Nine and then scattered inward by Neptune."
* PLANET NINE COULD BE OUR SOLAR SYSTEM'S MISSING 'SUPER EARTH' - Oct. 12, 2017
DYNAMICAL EVOLUTION INDUCED BY PLANET NINE - October 6, 2017 A recent addition to the aggregate of planetary predictions within the solar system is the Planet Nine hypothesis (Batygin & Brown 2016a). Within the framework of this model, the observed orbital clustering of a ≳ 250 AU Kuiper belt objects (Figure 1) is sculpted by a m ~ 10 m⊕ planet residing on an appreciably eccentric (e ~ 0.3-0.7), large semi-major axis (a ~ 300-700 AU) orbit, whose plane roughly coincides with the plane of the distant bodies, and is characterized by a perihelion direction that is anti-aligned with respect to the average apsidal orientation of the KBOs. Vulcan's e = 0.537, a = 291.2 AU.
* THE SUPER-EARTH THAT CAME HOME FOR DINNER - October 5, 2017
THE SUPER-EARTH THAT CAME HOME FOR DINNER - OCTOBER 4, 2017 One of its most dedicated trackers, in fact, says it is now harder to imagine our solar system without a Planet Nine than with one. "There are now five different lines of observational evidence pointing to the existence of Planet Nine," said Konstantin Batygin, a planetary astrophysicist at Caltech in Pasadena, California, whose team may be closing in. "If you were to remove this explanation and imagine Planet Nine does not exist, then you generate more problems than you solve. All of a sudden, you have five different puzzles, and you must come up with five different theories to explain them."
THE SEARCH FOR PLANET NINE: SEPTEMBER 2017 - Sep. 21, 2017 The distant objects, on average, do not have the same inclination as Planet Nine. The distant objects live in an average orbital plane that is close to midway between that of the 8 other planets and Planet Nine. Though this result is simple to state, a lot of work (or perhaps a lot of electricity for computers) went into that statement! And the good news is that we can now estimate the node much more precisely. If we take those same eccentric distant Kuiper belt objects and look at their nodes, we find that Planet Nine has a longitude of ascending node of ~94 degrees. The average inclination of those objects, by the way, is 18 degrees, so we know that the inclination of Planet Nine is higher than this, but not much higher, because otherwise, as we found earlier, it doesn't make an anti-aligned population.
* ASTRONOMERS PROBE ORIGIN OF PLANET 9 - Sep. 14, 2017 Dr Parker said: "We know that planetary systems form at the same time as stars, and when stars are very young they are usually found in groups where interactions between stellar siblings are common. Therefore, the environment where stars form directly affects planetary systems like our own, and is usually so densely populated that stars can capture other stars or planets. "In this work, we have shown that - although capture is common - ensnaring planets onto the postulated orbit of Planet 9 is very improbable. We're not ruling out the idea of Planet 9, but instead we're saying that it must have formed around the sun, rather than captured from another planetary system."
ASTRONOMERS PROBE ORIGIN OF PLANET 9 - Sep. 14, 2017
* PLANET 9 IS NOT A STOLEN EXOPLANET—SO HOW DID IT END UP LURKING AT THE EDGE OF THE SOLAR SYSTEM? - 09/11/2017 Low Probability: In the new study, researchers looked at the stolen exoplanet hypothesis to work out the probability of this happening. By running computer simulations, the researchers showed that only 1 to 6 percent of free-floating planets (like Planet 9) are ensnared by stars, "even with the most optimal initial conditions for capture."
They found that just five to 10 of the 10,000 planets simulated were captured into orbits that would fit within the constraints required for Planet 9 to have ended up in its supposed position. However, when they put in constraints relating to the formation of the solar system in general, they found "the probability for the capture of Planet 9 to be almost zero."
EXTREME TRANS-NEPTUNIAN OBJECTS LEAD THE WAY TO PLANET NINE - Sep. 9, 2017 At the beginning of this year, the astronomers Konstantin Batygin and Mike Brown from the California Institute of Technology (Caltech, USA) announced that they had found evidence of the existence of a giant planet - with a mass ten times larger than Earth's - in the confines of the solar system. Moving in an unusually elongated orbit, the mysterious planet will take between 10,000 and 20,000 years to complete one revolution around the Sun. In order to arrive at this conclusion, Batygin and Brown ran computer simulations with input data based on the orbits of six extreme trans-Neptunian objects (ETNOs). Specifically, these ETNOs are: Sedna, 2012 VP113, 2004 VN112, 2007 TG422, 2013 RF98 and 2010 GB174. Now, however, brothers Carlos and Raúl de la Fuente Marcos, two freelance Spanish astronomers, together with scientist Sverre J. Aarseth from the Institute of Astronomy of the University of Cambridge (United Kingdom), have considered the question the other way around: How would the orbits of these six ETNOs evolve if a Planet Nine such as the one proposed by K. Batygin and M. Brown really did exist? The answer to this important question has been published in the journal Monthly Notices of the Royal Astronomical Society. "With the orbit indicated by the Caltech astronomers for Planet Nine, our calculations show that the six ETNOs, which they consider to be the Rosetta Stone in the solution to this mystery, would move in lengthy, unstable orbits," warns Carlos de la Fuente Marcos. "These objects would escape from the solar system in less than 1.5 billion years," he adds, "and in the case of 2004 VN112, 2007 TG422 and 2013 RF98 they could abandon it in less than 300 million years; what is more important, their orbits would become really unstable in just 10 million years, a really short amount of time in astronomical terms."
* PLANET 9 IS NOT A STOLEN EXOPLANET - SO HOW DID IT END UP LURKING AT THE EDGE OF THE SOLAR SYSTEM? - Sep. 11, 2017 Since new evidence of Planet 9's existence emerged a few years ago, experts have been weighing in on how it might have ended up in its distant orbit. One of the most popular explanations is that it was stolen by the sun in an interstellar takeover 4.5 billion years ago. Now a team of researchers from the U.K. and Switzerland have largely ruled out this hypothesis, with their calculations showing the probability that Planet 9 was captured by the sun in this way is "almost zero."
DYNAMICAL EVOLUTION OF DISTANT TNOS INDUCED BY PLANET NINE - A. Morbidelli The observational census of trans-Neptunian objects with semi-major axes greater than ~250 AU exhibits unexpected orbital structure that is most readily attributed to gravitational perturbations induced by a yet-undetected, massive planet. Although the capacity of this planet to reproduce the observed clustering of distant orbits in physical space has been demonstrated, a coherent theoretical description of the dynamical mechanisms responsible for these effects remains elusive.
HUNTING PLANET NINE: GRADUATE STUDENTS USE NOVEL SEARCH TECHNIQUE - Aug.
3, 2017 According to Medford, around 10 billion ways to combine the images exist. Even if the methodology doesn't locate Planet Nine, it could narrow the search area or help locate new objects. "I think it'd be the absolute coolest way to discover Planet Nine because they're looking within existing data," Batygin said. IS THERE A GIANT PLANET LURKING BEYOND PLUTO? - 31 Jul 2017 A race is on to discover Planet Nine using classical astronomy and new computational technique Greg Laughlin, an astronomer at Yale University, says, "Our best estimate for its current position and brightness put it about 950 times farther than Earth from the sun." Although many astronomers share Brown's enthusiasm at the prospect of finding a planet bigger than Earth for the first time in 170 years, some worry about being fooled by subtle biases or simple coincidences in the data. "My instinct— completely unjustifiable—is that there's a two-thirds chance it's really there," Laughlin says. "But our simulation gives a more precise place in the sky to look for it." Their paper, published in February, as well as more recent supercomputer simulations presented in April by Trujillo, puts Planet Nine somewhere in the constellation Cetus (the whale) or Eridanus (the river), at about 28 times the current distance to Pluto. "It's still a vast search area," Trujillo says. "I actually think we will not discover Planet Nine by scanning the sky," Brown says. "We could, but I think somebody will find it first in archival data," from surveys that have already photographed huge swathes of the heavens. * THE SEARCH FOR PLANET 9: DR. RENU MALHOTRA - July 17, 2017 PLANET 9 HYPOTHESIS ALIVE AND WELL - July 14, 2017 Caltech astronomers said over a year ago they had solid theoretical evidence for a 9th major planet in our solar system, located some 700 times farther from the sun than Earth. They nicknamed it Planet 9 and said they hoped other astronomers would search for it. At least two searches involving citizen scientists (one in the Northern Hemisphere and one in the Southern Hemisphere) are currently ongoing. Meanwhile, some astronomers have said there were "biases" in the observational data used by the Caltech astronomers, which calls their Planet 9 hypothesis into question. This week, two Spanish astronomers announced word of their analysis of the orbits of a special class of extreme trans-Neptunian objects, that is, the small, known objects beyond Neptune's orbit. The work of the Spanish astronomers confirms that something is perturbing the orbits of small bodies in the outer solar system. They say it might be an unknown planet located 300-400 times farther from the sun than Earth. * NEW EVIDENCE IN SUPPORT OF THE PLANET NINE HYPOTHESIS - July 13, 2017 For the first time, the distances from their nodes to the sun have been analysed, and the results, published in the journal MNRAS, once again indicate a planet beyond Pluto. The nodes are the two points at which the orbit of an ETNO, or any other celestial body, crosses the plane of the solar system. These are the precise points where the probability of interacting with other objects is the highest, and therefore, at these points, the ETNOs may experience a drastic change in their orbits or even a collision. "If there is nothing to perturb them, the nodes of these extreme trans-Neptunian objects should be uniformly distributed, as there is nothing for them to avoid, but if there are one or more perturbers, two situations may arise," . . . 
But if they are unstable, they would behave as the comets that interact with Jupiter do, tending to have one of the nodes close to the orbit of the hypothetical perturber." Using calculations and data mining, the Spanish astronomers have found that the nodes of the 28 ETNOs analysed (and the 24 extreme Centaurs with average distances from the sun of more than 150 AU) are clustered in certain ranges of distances from the sun; furthermore, they have found a correlation where none should exist between the positions of the nodes and the inclination, one of the parameters which defines the orientation of the orbits of these icy objects in space. "Assuming that the ETNOs are dynamically similar to the comets that interact with Jupiter, we interpret these results as signs of the presence of a planet that is actively interacting with them in a range of distances from 300 to 400 AU," Note, Vulcan spends most of its time occupying the part of its orbit between 291.2 au and 447.6 au. * PLANET NINE: OUR SOLAR SYSTEM HAS ANOTHER HIDDEN PLANET IN IT, ANALYSIS SUGGESTS - July 13, 2017 Scientists have been arguing for years about whether the mysterious, distant planet really exists * NEW EVIDENCE SUPPORTS THE EXISTENCE OF PLANET NINE IN OUR SOLAR SYSTEM - July 13, 2017 Last year, the existence of an unknown planet in our solar system was announced. However, this hypothesis was subsequently called into question as biases in the observational data were detected. Now Spanish astronomers have used a novel technique to analyse the orbits of the so-called extreme trans-Neptunian objects and, once again, they point out that there is something perturbing them: a planet located at a distance between 300 to 400 times the Earth-Sun separation. "Assuming that the ETNOs are dynamically similar to the comets that interact with Jupiter, we interpret these results as signs of the presence of a planet that is actively interacting with them in a range of distances from 300 to 400 AU," says De la Fuente Marcos, who emphasizes: "We believe that what we are seeing here cannot be attributed to the presence of observational bias". Evidence For A Possible Bimodal Distribution Of The Nodal Distances Of The Extreme Trans-Neptunian Objects: Avoiding A Trans-Plutonian Planet Or Just Plain Bias? - 06/2017 This proposed correlation is unlikely to be the result of observational bias as data for both large semimajor axis Centaurs and comets fit well into the pattern found for the ETNOs, and all these populations are subjected to similar background perturbations when moving well away from the influence of the giant planets. The correlation found is better understood if these objects tend to avoid a putative planet with semimajor axis in the range 300-400 au. Vulcan's semimajor axis is 291 AU STATUS UPDATE (PART 2) - July 2, 2017 * ASTRONOMERS CLOSE TO FINDING MASSIVE NINTH PLANET IN THE SOLAR SYSTEM - June 23, 2017 * EVIL TWIN? SUN FORMED ALONGSIDE SECOND STAR THAT COULD WREAK HAVOC ON EARTH - 17.06.2017 New research from Harvard and UC Berkeley researchers suggest that most stars are born as part of a set of "twins" – and that would include our star, the sun. "The idea that many stars form with a companion has been suggested before, but the question is: how many?" said author Sarah Sadavoy, a NASA Hubble fellow at the Smithsonian Astrophysical Observatory. "We ran a series of statistical models to see if we could account for the relative populations of young single stars and binaries of all separations in the Perseus molecular cloud. 
And the only model that could reproduce the data was one in which all stars form initially as wide binaries. These systems then either shrink or break apart within a million years." * OUR SUN ONCE HAD AN 'EVIL TWIN' CALLED NEMESIS - - June 15, 2017 * BAD STAR RISING: SUN'S TWIN 'NEMESIS' COULD HAVE CAUSED DINOSAUR EXTINCTION - June 14, 2017 New research has offered evidence to support the theory that our sun was born with a non-identical twin named "Nemesis" and some astronomers are blaming it for the death of the dinosaurs. * NEW EVIDENCE THAT ALL STARS ARE BORN IN PAIRS - June 14, 2017 Did our sun have a twin when it was born 4.5 billion years ago? Almost certainly yes—though not an identical twin. And so did every other sunlike star in the universe, according to a new analysis by a theoretical physicist from UC Berkeley and a radio astronomer from the Smithsonian Astrophysical Observatory at Harvard University. The new assertion is based on a radio survey of a giant molecular cloud filled with recently formed stars in the constellation Perseus, and a mathematical model that can explain the Perseus observations only if all sunlike stars are born with a companion. "We ran a series of statistical models to see if we could account for the relative populations of young single stars and binaries of all separations in the Perseus molecular cloud, and the only model that could reproduce the data was one in which all stars form initially as wide binaries. These systems then either shrink or break apart within a million years." In this study, "wide" means that the two stars are separated by more than 500 astronomical units, or AU, where one astronomical unit is the average distance between the sun and Earth (93 million miles). A wide binary companion to our sun would have been 17 times farther from the sun than its most distant planet today, Neptune. Embedded Binaries and Their Dense Cores * SUN LIKELY HAS A LONG-LOST TWIN - June 14, 2017 * CITIZEN SCIENTISTS JOIN THE SEARCH FOR PLANET 9 - June 10, 2017 * PLANET NINE SCIENTIFIC TRUTH REVEALED: ASTRONOMERS CAME UP WITH 5 NEW PREDICTIONS - May 12, 2017 ? THE SCIENTIFIC TRUTH ABOUT PLANET NINE, SO FAR (Synopsis) - May 9, 2017 * PLANET NINE: THE SCORE CARD - May 4, 2017 * CITIZEN SCIENTISTS MAY HAVE LOCATED CANDIDATES FOR PLANET NINE - April 5, 2017 NOW ANYONE CAN JOIN THE SEARCH FOR 'PLANET NINE', WORLD NEWS & TOP STORIES - Mar 3, 2017 It's possible that Planet Nine - or perhaps a "brown dwarf" star or two - is lurking in its speckled images of space. A new initiative by Nasa and the University of California at Berkeley, called Backyard Worlds: Planet 9, is crowd-sourcing the hunt for Planet Nine. It will use archived observations from NASA's Wide-field Infrared Survey Explorer (Wise) mission, which scanned the skies for asteroids and other faint objects. It's possible that Planet Nine - or perhaps a "brown dwarf" star or two - is lurking in its speckled images of space. This planet could be 500 times as far from the Sun as Earth is, but it would still be part of our solar system, with a highly elliptical orbit that never takes it anywhere close to the Sun. The mystery planet's existence is inferred from the orbits of many smaller bodies in the outer solar system. They orbit the Sun and cluster in a manner that suggests the possible gravitational influence of an unseen, large planet. Vulcan is an ultra-tiny half Jupiter mass brown dwarf star whose aphelion is at 447.6 AU. 
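The "Vulcan" numbers quoted on this page (semi-major axis a = 291.2 AU, eccentricity e = 0.537, aphelion 447.6 AU) hang together through the standard ellipse relations q = a(1 - e) and Q = a(1 + e). The Python snippet below is just an illustrative check of those quoted values.

    # Perihelion and aphelion from the semi-major axis and eccentricity
    # quoted on this page for "Vulcan": a = 291.2 AU, e = 0.537.
    a_au, ecc = 291.2, 0.537

    perihelion = a_au * (1 - ecc)  # q = a(1 - e)
    aphelion = a_au * (1 + ecc)    # Q = a(1 + e)

    print(f"q ~ {perihelion:.1f} AU, Q ~ {aphelion:.1f} AU")

This prints q of about 134.8 AU and Q of about 447.6 AU, matching the quoted aphelion and the upper end of the 291.2-447.6 AU range mentioned earlier.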
* NOW ANYONE CAN JOIN THE SEARCH FOR THE MYSTERIOUS 'PLANET NINE' - March 1, 2017
* NOW ANYONE CAN JOIN THE SEARCH FOR THE MYSTERIOUS 'PLANET NINE' - February 28, 2017
* PLANET 9 CAN'T RUN FOREVER - TWO ASTEROIDS GIVE UP SOME CLUES - February 22, 2017
* TWO SEPARATED ASTEROIDS PROVIDE NEW EVIDENCE FOR PLANET NINE'S EXISTENCE
* NEW DATA ABOUT TWO DISTANT ASTEROIDS GIVE A CLUE TO THE POSSIBLE 'PLANET NINE' - February 22, 2017
* NASA, UC BERKELEY RECRUIT PUBLIC TO HELP FIND 9TH PLANET - February 20, 2017
* SCIENTISTS NEED YOUR HELP TO FIND THE MYSTERIOUS PLANET THEY SUSPECT IS LURKING IN OUR SOLAR SYSTEM - February 16, 2017
* ARMCHAIR ASTRONOMERS HELP NASA SEARCH FOR MYSTERIOUS PLANET 9 - February 16, 2017
? YOU CAN NOW HELP NASA FIND THE MYSTERIOUS PLANET NINE - 16/02/2017
* SEARCHING FOR PLANET NINE - Feb 16, 2017
* JOIN THE SEARCH FOR NEW NEARBY WORLDS - Feb 15, 2017 A new website funded by NASA lets the public search for new worlds in the outer reaches of our solar system and in neighboring interstellar space. The website, called Backyard Worlds: Planet 9, allows everyone to participate in the search through brief movies made from images captured by NASA's Wide-field Infrared Survey Explorer (WISE) mission.
* THINK YOU CAN FIND PLANET 9? CHECK OUT THIS CITIZEN-SCIENCE PROJECT - February 15, 2017
* DID THE SUN STEAL PLANET NINE FROM ANOTHER SOLAR SYSTEM? - January 21, 2017
* ROGUE PLANET NINE - Jan 21, 2017
* Did The Sun Steal Planet Nine From Another Solar System? - Jan 21, 2017
* COULD THE ELUSIVE PLANET NINE ORIGINATE FROM ANOTHER SYSTEM? - January 15, 2017
* NEW THEORY SAYS PLANET 9 MAY BE A CAPTURED ROGUE PLANET - January 14, 2017
* MYSTERIOUS 'PLANET NINE' MAY NOT BE FROM OUR SOLAR SYSTEM - January 13, 2017
* SIMULATIONS SUGGEST PLANET NINE MAY HAVE BEEN A ROGUE - January 12, 2017
* OUR SOLAR SYSTEM MIGHT HAVE A NINTH PLANET, AND RESEARCHERS HAVE AN IDEA OF WHERE IT CAME FROM - January 11, 2017
* PLANET NINE MAY BE A 'ROGUE PLANET' CAPTURED BY THE SUN - January 11, 2017
* PLANET NINE MAY HAVE BEEN A ROGUE WORLD CAPTURED BY OUR SOLAR SYSTEM - 11.01.2017
* MORE PROOF THEY'RE PREPARING THE WORLD FOR 'DISCLOSURE' - 'PLANET NINE' MAKES 60 MINUTES AS ASTRONOMERS ANNOUNCE: 'THE SOLAR SYSTEM HAS SOME TRICKS UP ITS SLEEVE' - January 10, 2017
* MYSTERIOUS PLANET NINE MAY BE A CAPTURED 'ROGUE' WORLD - January 10, 2017
* OUR SUN MAY HAVE SNATCHED PLANET 9 FROM OUTSIDE THE SOLAR SYSTEM - January 10, 2017
* THE HUNT FOR PLANET NINE [Video] - January 8, 2017
VISIBLE SPECTRA OF (474640) 2004 VN112–2013 RF98 WITH OSIRIS AT THE 10.4 M GTC: EVIDENCE FOR BINARY DISSOCIATION NEAR APHELION AMONG THE EXTREME TRANS-NEPTUNIAN OBJECTS - 07 January 2017 These five ETNOs belong to the group of seven linked to the Planet Nine hypothesis. A dynamical pathway consistent with these findings is dissociation of a binary asteroid during a close encounter with a planet and we confirm its plausibility using N-body simulations. We thus conclude that both the dynamical and spectroscopic properties of 474640–2013 RF98 favour a genetic link and their current orbits suggest that the pair was kicked by a perturber near aphelion.
* THE DRAGON OF REVELATION 12 COULD BE PLANET NINE AND IT WILL BE DISCOVERED "OFFICIAL" IN 2017? - December 6, 2016 "Because Planet Nine is so massive and has an orbit tilted compared to the other planets, the solar system has no choice but to slowly twist out of alignment," says Elizabeth Bailey, a graduate student at Caltech and lead author of a study announcing the discovery. In 1983 the Washington Post wrote about the discovery of an unknown heavenly celestial body. Then, in recent years several scientists have reported on this mysterious planet and many of them died under suspicious circumstances. Furthermore, since 1983 governments around the world have been building huge underground bunkers and shelters. Why? To ensure the survival of the elite from an impending cataclysm? Image left: Buzz Aldrin got on a Russian plane to visit Antarctica! Could the recent trips of Patriarch Kirill, then John Kerry, then Buzz Aldrin to Antarctica have something to do with this Planet Nine?
* DISCOVERY OF POSSIBLE DWARF PLANET COULD LEAD TO PLANET NINE - Nov 20th, 2016
PLANET NINE: NEW EVIDENCE SUPPORTS THEORY FOR UNDISCOVERED PLANET IN SOLAR SYSTEM - November 20, 2016
* 'PLANET NINE' COULD BE TILTING OUR SOLAR SYSTEM - 4 November, 2016
* PLANET X, NIBIRU, THE INVASION OF OUR SOLAR SYSTEM, AND THE DESTRUCTION OF EARTH - November 2, 2016 "Apparently the US government had made the decision to keep Nibiru and its projected earthly destruction under wraps." Then, in the mid 1990s, Wisconsin resident Nancy Lieder published her idea of the Nibiru Cataclysm on her website Zeta Talk after claiming to have received telepathic messages from aliens. She gave a number of dates for an impending apocalypse including 2012 at the end of the Mayan calendar, and NASA was forced to debunk the theory, Ames Center researcher David Morrison told Space.com at the time. "While this is a joke to some people and a mystery to others, there is a core of people who are truly concerned."
* SEARCHING FOR PLANET NINE WITH COADDED WISE AND NEOWISE-REACTIVATION IMAGES - November 2, 2016 Planet Nine may be detectable at 3.4 μm with WISE, but single exposures are too shallow except at relatively small distances (d9 ≲ 430 AU). Vulcan is about 444 AU from the Sun now.
* TOMB OF CHRIST UNCOVERED: CAN SCIENCE PROVE EVENTS FROM GOSPEL WERE REAL? - 30.10.2016
PLANET NINE FROM OUTER SPACE - Clues hint at huge, undiscovered planet in our solar system - Oct. 30, 2016
* ASTRONOMERS FIND HINTS OF 'PLANET NINE' - Oct 30, 2016
IN THE MOTIONS OF DISTANT SOLAR SYSTEM OBJECTS, ASTRONOMERS FIND HINTS OF PLANET NINE - Oct 28, 2016 "From our vantage point, it looks like it's the sun that's tilted - but really it's the plane of the planets precessing around the total angular momentum of the solar system, just like a top," Bailey said. Neither study is a slam-dunk case for the planet's existence, scientists said, but the evidence continues to mount. But researchers said they're continuing to study the motions of the solar system's objects for more clues.
* TILTING OF OUR SOLAR SYSTEM IS CAUSED BY MYSTERIOUS PLANET 9, THE ACADEMICS OF SPACE SCIENCE REVEALS [Video] - Oct 27, 2016 "Because Planet Nine is so massive and has an orbit tilted compared to the other planets, the solar system has no choice but to slowly twist out of alignment", Elizabeth Bailey, a graduate student at Caltech, said in Phys.Org.
Curious tilt of the sun traced to undiscovered planet - October 19, 2016
* OBJECTS BEYOND NEPTUNE PROVIDE FRESH EVIDENCE FOR PLANET NINE - Oct. 25, 2016 Or maybe a giant planet.
When Brown and Batygin found five more TNOs curiously clustered in the sky, they realized with extensive modeling that a giant planet's gravity would have flung any objects away from its path, leaving the orbits of the remaining objects huddled on the opposite side of the solar system. Now, additional objects may be adding to the pattern. At the conference, Sheppard and his colleague Chad Trujillo of the Gemini Observatory in Hilo, Hawaii, presented the first two new entrants: 2014 SR349 and 2013 FT28. "The big question is do they make the planet case better or worse," Sheppard says. "And they make it better."
"PLANET NINE" UPDATE: POSSIBLE RESONANCES BEYOND THE KUIPER BELT? - 2016/03/08 Corralling a distant planet (arxiv) with extreme resonant Kuiper belt objects. The orbital plane can be one of two: either inclined at 18 degrees or 48 degrees.
NEW WORK ON PLANET NINE - OCTOBER 20, 2016 Through analysis of what they call 'extreme Kuiper Belt Objects' - on eccentric orbits with aphelia hundreds of AU out - the team finds a clustering of orbital parameters that may point to the existence of a planet of 10 Earth masses with an aphelion of more than 660 AU. Two orbital planes seem possible, one at 18 degrees offset from the mean plane, the other inclined at 48 degrees.
THERE'S A NINTH PLANET IN OUR SOLAR SYSTEM AND IT'S GIANT, SAY ASTRONOMERS - October 25, 2016 And, important for trying to locate the elusive object in the night sky, the new study suggests two likely orbital planes for Planet Nine, one at about 18 degrees from the ecliptic plane and the other at 48 degrees. (The ecliptic plane describes the two-dimensional surface of the Earth's elliptical orbit around the Sun, which happens to also be, roughly, the same plane on which all the known planets travel.)
CLOSING IN ON A GIANT GHOST PLANET - October 25, 2016 More potential evidence of Planet Nine's influence may be found in how long it takes outer solar system bodies to orbit the sun. For instance, the four KBOs with the longest-known orbits revolve in patterns most readily explained by the presence of Planet Nine, says astronomer Renu Malhotra, chair of theoretical astrophysics at the University of Arizona at Tucson. Work by Malhotra and her colleagues also suggests two likely tilts for Planet Nine's orbit, one closer to the plane of the solar system at 18 degrees and the other steeper at about 48 degrees - information that could help shrink the vast part of the sky to be searched.
Vulcan's Orbital Parameters:
Parameter             Value     Max. Error       Min. 2 Sigma Error   Forbes' (1880)*
Orbital Inclination   48.44°    +3.12°/-9.05°    +/- 0.23°            45°
PLANET NINE MIGHT BE PULLING OUR SOLAR SYSTEM OUT OF ALIGNMENT - October 24, 2016
* Astronomers Find Hints of Planet Nine - October 24, 2016
VULCAN: THE FAMOUS PLANET THAT NEVER EXISTED - OCTOBER 21, 2016
THE MYSTERIOUS 'PLANET NINE' MIGHT BE CAUSING THE WHOLE SOLAR SYSTEM TO WOBBLE - October 20 "Because Planet Nine is so massive and has an orbit tilted compared to the other planets, the solar system has no choice but to slowly twist out of alignment," lead author Bailey said in a statement.
'PLANET NINE' CAN'T HIDE MUCH LONGER, SCIENTISTS SAY - October 20, 2016
PLANET NINE IS TILTING THE SUN AND WOBBLING THE SOLAR SYSTEM SAYS CALTECH ASTRONOMER - October 20, 2016
* ICE WORLD WITH 20,000-YEAR ORBIT MAY POINT TO PLANET NINE - October 19, 2016
* Are Vladimir Putin and Donald Trump teaming up to tell world about doom of Planet X?
- Oct 15, 2016 MYSTERIOUS 'PLANET NINE' MIGHT HAVE TILTED OUR WHOLE SOLAR SYSTEM - 19 Sep 2016 Solar Obliquity Induced by Planet Nine - 14 Jul 2016 The six-degree obliquity of the sun suggests that either an asymmetry was present in the solar system's formation environment, or an external torque has misaligned the angular momentum vectors of the sun and the planets. However, the exact origin of this obliquity remains an open question. Batygin & Brown (2016) have recently shown that the physical alignment of distant Kuiper Belt orbits can be explained by a 5-20 Earth-mass planet on a distant, eccentric, and inclined orbit, with an approximate perihelion distance of ~250 AU. Using an analytic model for secular interactions between Planet Nine and the remaining giant planets, here we show that a planet with similar parameters can naturally generate the observed obliquity as well as the specific pole position of the sun's spin axis, from a nearly aligned initial state. Thus, Planet Nine offers a testable explanation for the otherwise mysterious spin-orbit misalignment of the solar system. * PLANET NINE: THEORIES ABOUT THE HYPOTHETICAL PLANET - July 19, 2016 "Planet Nine is not going to cause the Earth's destruction. If you read that it will, you have discovered idiotic writing!" Brown said via his Twitter account, @plutokiller." He also dismissed the idea that the world played a role in mass extinctions of the past. While the planet orbits a significant distance from the sun, it isn't quite far enough out to stir up the Oort Cloud, the region of icy comets beyond the Kuiper Belt. With a 10,000-year orbit, it would also constantly bombard the Earth, Brown said. Don't Blame 'Planet Nine' For Earth's Mass Extinctions - January 25, 2016 "I suspect it has something like zero effect on us," said Mike Brown of the California Institute of Technology (Caltech) in Pasadena. "Really big planets really far away could do that," Brown told Space.com. " Planet Nine is smaller than all these things that people have called 'Planet X' - that's always been sort of Jupiter-sized, or even brown dwarf-sized, or something. This is a good bit smaller, and a good bit closer; it's not in the realm of the comets." The putative Planet Nine also completes one orbit every 10,000 years or so, he added. PLANET NINE MAY HAVE TILTED ENTIRE SOLAR SYSTEM EXCEPT THE SUN - 19 July 2016 The planet would have between 5 and 20 times Earth's mass and be in a wildly eccentric orbit, reaching 250 times the sun-Earth distance at its farthest point. That elongated trajectory has led some to suggest that it was once an exoplanet and was kidnapped by the sun. Planet Nine's tilt, not its mass, is key, says Alessandro Morbidelli at Côte d'Azur Observatory in Nice, France, who has independently come to a similar conclusion. If it were a question of mass, Jupiter would be the prime suspect. "What is important is that the perturbing planet is off-plane. Jupiter cannot cause its own tilt," he says. THE HUNT FOR PLANET NINE IS AFOOT! - 15 Sep 2016 THE HUNT FOR VULCAN BY THOMAS LEVENSON REVIEW - NEWTON, EINSTEIN AND THE INVISIBLE PLANET - 10 September 2016 * 'PLANET NINE' COULD BE A DANGER TO OUR SOLAR SYSTEM - September 1, 2016 THE PLANET X / NIBIRU SYSTEM AND EARTH'S WATER CANOPY (Video) - July 27, 2016 ANDY LLOYD'S DARK STAR BLOG 39, JUNE 2016 It's also interesting to note, then, that another current paper attempting to piece the jigsaw together does not rely on the anti-aligned orientation favoured by Batygin and Brown (7). 
Instead, it concerns itself with resonance relationships between the six members of this cluster, potentially in step with a hypothetical planet whose orbital period is approximately 17,000 years: "We point out hitherto unnoticed peculiarities of the orbits of the eKBOs mentioned above: we find that the orbital period ratios of these objects are close to integer ratios. This helps us identify a particular orbital period of the hypothetical planet which is in simple integer ratios with the four most distant eKBOs. That is, we identify mean motion resonances between these eKBOs and the hypothetical planet" (8) In terms of mass, the authors conclude that the hypothetical object would have to have at least ten Earth masses to maintain a long-term resonant relationship with the four outermost eKBOs. Beyond that, the position of the planet takes on a wide selection of possibilities, with a broader range of inclination. The eight major planets still circle the sun in the original plane of their birth. The sun rotates on its own axis, but surprisingly, that spin is tilted: the axis lies at an angle of 6 degrees relative to a line perpendicular to the plane of the planets.
NEW CLUES IN SEARCH FOR PLANET NINE - July 5, 2016
EXTREME TRANS-NEPTUNIAN OBJECTS LEAD THE WAY TO PLANET NINE - June 13, 2016 His hypothesis is that around 4.5 billion years ago, our then young Sun "stole" this planet from a neighbouring star with the help of a series of favourable conditions (proximity of stars within a star cluster, a planet in a wide and elongated orbit,...). Other scientists, however, believe that this scenario is improbable.
THE RETURN OF PLANET X - EARTH'S CONFLICT WITH A BROWN DWARF STAR - June 17, 2016 Dr. Rand shared his view that Planet X is drawing close to Earth and will wreak havoc as it makes its return passage through the solar system. Culled from his study of ancient sources, as well as his alien contacts (which began when he was a young child), he's determined that Planet X is a brown dwarf star the size of Saturn or bigger, that's on an elongated orbit which comes our way every 3,600 to 4,000 years.
PLANET NINE MIGHT NOT BE ALONE: ASTRONOMERS SUGGEST THERE COULD BE SEVERAL HIDDEN WORLDS IN OUR SOLAR SYSTEM - June 13, 2016
PLANET NINE MIGHT NOT BE ALONE: ASTRONOMERS SUGGEST THERE COULD BE SEVERAL HIDDEN WORLDS IN OUR SOLAR SYSTEM - 13 June 2016
In January, Caltech astronomers predicted the existence of Planet Nine
Simulations explained a clumping behaviour of a group of dwarf planets
New study shows these dwarf planets would not be as stable as thought
Most stable scenario would be if there was more than one extra planet
STOLEN WORLD: 'PLANET 9' LIKELY CAME FROM ANOTHER STAR - June 1, 2016 "It is almost ironic that while astronomers often find exoplanets hundreds of light-years away in other solar systems, there's probably one hiding in our own backyard," study lead author Alexander Mustill, an astronomer at Lund University in Sweden, said in a statement. It's not ironic if this object causes comet swarms to form that threaten the inner solar system.
THEFT BEHIND PLANET 9 IN OUR SOLAR SYSTEM - May 31, 2016
NATIONAL GEOGRAPHIC | NEMESIS: THE SUN'S EVIL TWIN - DOCUMENTARY HD 1080p - May 17, 2016
PLANET X – IS THERE SCIENTIFIC EVIDENCE? - 24 April, 2016 What about Nibiru or Planet X, as modern astronomers call it? Is it possible that there is another planet in our solar system?
While, for several decades, scientists have unsuccessfully searched for Planet X, as it turns out, on December 11th, 2015, Wouter Vlemming and his scientific team announced that they had finally found the renegade planet (see the Washington Post article titled: "Scientists claimed they found elusive 'Planet X.' Doubting astronomers are in an uproar."). Of course, and to no one's surprise, several astronomers immediately disputed the surprising announcement, including Mike Brown (a Caltech astronomer best known as the "Man who killed Pluto"). Most unpredictably, however, Mike Brown and his own team, although harsh critics of the earlier announcement, less than a month later, in January 2016, stepped forward to announce their own discovery of Planet X (see the article in the Los Angeles Times: "Astronomers' findings point to a ninth planet, and it's not Pluto").
SCIENTISTS CLAIMED THEY FOUND ELUSIVE 'PLANET X.' DOUBTING ASTRONOMERS ARE IN AN UPROAR. - December 11, 2015 Also, a mysterious, unnamed object appears in the sky close to the Alpha Centauri system; it may be a "Super-Earth" planet far beyond even Pluto or a super-cool brown dwarf that's really far away. It could also conceivably be an icy "trans-Neptunian object," of which there are plenty in the frozen darkness past the eighth planet, but the researchers say that's less likely (it's also, not coincidentally, less interesting).
HISTORICAL SKETCH OF THE LIFE OF EDWARD BIDDLE LATCH 1833 to 1911 Page 9 mentions Forbes' planet at 100 AU and the possibility of it being Vulcan.
PROC. ROY. SOC. EDINBURGH; Vol. 10, 1878-80; pp. 426-430. 2. On Comets. By Professor Forbes. The author commenced by stating that although these researches lead him to believe in the existence of two planets revolving in orbits external to that of Neptune, and although there was a great deal of evidence to show that he had actually determined the elements of the orbits, yet the latter point, being dependent on a coincidence of probabilities only, cannot be considered a certainty until the planets are observed. At the British Association in 1879 Professor Newton of America proved some important propositions with respect to the introduction of planets into the solar system. In answer to a question he said that his theory explained why the aphelion distance of a comet is generally about the same as the distance of the planet which rendered its orbit elliptic. The author then publicly stated that there could be no longer a doubt that two planets exist beyond the orbit of Neptune, one about 100 times, the other about 300 times the distance of the earth from the sun, with periods of revolution of about 1000 and 5000 years respectively. [Additional Note, 31st March 1880. From the six comets whose aphelion distance is about 300 times the distance of the earth from the sun, the elements of the perturbing planet have roughly been calculated. This gives Ω (long. of asc. node) = 185°, i = 45°, nearly the same orbit as the preceding.]
Planet Nine Related
PLANET NINE: A WORLD THAT SHOULDN'T EXIST - May 05, 2016
WHERE DID PLANET NINE COME FROM? - May 4, 2016
THE RACE TO FIND PLANET NINE - Apr 17, 2016 Whoa! That is news, my friends. That is a freaking planet, a whole new world, on the larger side of a super-Earth, pushing into the heavier category of an ice-giant. But until it is within the frame of a telescope with a state-of-the-art spectrometer, we can only speculate on what such a planet, if it does indeed turn out to exist, might be like.
The chart above gives some basic idea about the gross physical dimensions of such a planet. Beyond that, drop on down and let the speculation begin …
WE ARE CLOSING IN ON POSSIBLE WHEREABOUTS OF PLANET NINE - 20 April 2016
PLAN(ET) 9 FROM OUTER SPACE: WHAT THE STILL-HYPOTHETICAL PLANET MIGHT LOOK LIKE - April 11, 2016
MYSTERIOUS PULL ON CASSINI PROBE MAY HELP FIND PLANET NINE - 15 Apr 2016
NO, PLANET NINE IS NOT ABOUT TO WIPE OUT LIFE ON EARTH - 11 April 2016
NEW YORK POST'S VIDEO ON PLANET NINE IS MISLEADING - 11 Apr 2016
What's All The Fuss About Planet Nine: Just Another Planet Or Something Scientists Are Worried About? - April 10, 2016
SCIENTISTS STUNNED BY NEW MODEL OF MYSTERIOUS PLANET 9 - April 10, 2016
CASSINI SPACECRAFT DID NOT DETECT PLANET NINE, NASA CLAIMS - April 10, 2016
PLANET X TO REALLY CAUSE MASS EXTINCTION THIS MONTH? - 9 Apr, 2016
OUR SUN MAY HAVE STOLEN PLANET NINE FROM PASSING STAR - April 9, 2016
WHAT MIGHT THE PUTATIVE 'PLANET NINE' LOOK LIKE? - April 8, 2016
NEWLY DISCOVERED PLANET COULD DESTROY EARTH ANY DAY NOW - April 6, 2016
A mysterious planet that wiped out life on Earth millions of years ago could do it again, according to a top space scientist. And some believe the apocalyptic event could happen as early as this month.
IS MYSTERIOUS 'PLANET NINE' TUGGING ON NASA SATURN PROBE? - April 5, 2016
Just this month, evidence from the Cassini spacecraft orbiting Saturn helped close in on the missing planet. Many experts suspect that within as little as a year someone will spot the unseen world, which would be a monumental discovery that changes the way we view our solar system and our place in the cosmos. "Evidence is mounting that something unusual is out there — there's a story that's hard to explain with just the standard picture," says David Gerdes, a cosmologist at the University of Michigan who never expected to find himself working on Planet Nine. He is just one of many scientists who leapt at the chance to prove — or disprove — the team's careful calculations.
DID THE SUN STEAL PLANET NINE? - 4 Apr 2016
Formed five billion years ago in a cluster of other stars, our Sun once had hundreds if not thousands of stellar siblings (now long since dispersed through the nearby galaxy). As the stars developed, many likely had planets form around them, just as the Sun did, and with all the young star systems in such relatively close proximity it's possible that some planets wound up ejected from their host star to be picked up — or possibly even outright stolen — by another. What the researchers found based on their models - which took into consideration the orbits of known KBOs and trans-Neptunian objects (TNOs) but not the effects of known planets - was that the Sun could very easily capture nearby exoplanets as well as clusters of smaller bodies (like "mini Oort clouds"), given that the objects are far enough from their host star and the relative velocities during the "pick-up" are low. While the researchers admit that the chances of a heist scenario having actually taken place are quite small - anywhere from 0.1 to 2% - they're not zero, and so should be considered a reasonable possibility.
COULD 'PLANET X' CAUSE COMET CATASTROPHES ON EARTH?
- March 31, 2016
RESEARCHER LINKS MASS EXTINCTIONS TO 'PLANET X' - March 30, 2016
Though scientists have been looking for Planet X for 100 years, the possibility that it's real got a big boost recently when researchers from Caltech inferred its existence based on orbital anomalies seen in objects in the Kuiper Belt, a disc-shaped region of comets and other larger bodies beyond Neptune. If the Caltech researchers are correct, Planet X is about 10 times the mass of Earth and could currently be up to 1,000 times more distant from the sun. Matese has since retired and no longer publishes. Whitmire retired from the University of Louisiana at Lafayette in 2012 and began teaching at the University of Arkansas in 2013. Whitmire says what's really exciting is the possibility that a distant planet may have had a significant influence on the evolution of life on Earth.
COULD 'PLANET X' CAUSE COMET CATASTROPHES ON EARTH? - MAR 30, 2016
"Whitmire has been speculating for decades about a very distant very massive planet pushing comets around. It has to have an orbital period of something like 27 million years," said Brown. "While that idea may or may not make sense, it definitely has nothing to do with Planet Nine, which is much closer to the sun and thus 'only' takes 15,000 years to go around." "The evidence for Planet Nine says nothing about whether or not there is a more distant Planet X."
10 AMAZING FACTS ABOUT THE NEW NINTH PLANET - February 18, 2016
4) Conspiracy Theorists Are Claiming It Could Spell Doom . . .
3) . . . And There's A Very Small Chance They're Right
The more intelligent doomsayers claim that Planet Nine's gravity well might slingshot asteroids toward the Earth, resulting in potentially devastating meteor strikes. Scientifically, this theory carries more weight: the gravitational effects of Planet Nine (or whatever's out there) are documented. After all, Fatty was hypothesized in the first place because of the apparent effects of its gravity well on small, rocky objects. So it's within the realm of possibility that one or two of those objects could slingshot their way toward Earth. But it's still not all that likely - remember that space is still very, very big. Even after an object was thrown back toward our neighborhood, it would still have to actually hit Earth, instead of just continuing on into the vast, surrounding emptiness. It's possible, but it's far from likely. Astronomer Scott Sheppard has said that Planet Nine could "throw a few small objects into the inner solar system every so often, but [won't] significantly increase the odds for a mass extinction event."
FIND PLANET NINE! NASA'S SATURN PROBE HELPS WITH THE HUNT - February 24, 2016
SEARCHING FOR PLANET 9 - 23 February 2016
Planet X Discovered?? + Challenge Winners! | Space Time | PBS Digital Studios - Feb 17, 2016
COMPUTER SIMULATIONS HEAT UP HUNT FOR PLANET NINE - JANUARY 31, 2016
For a planet that hasn't technically been discovered yet, Planet Nine is generating a lot of buzz. Astronomers have not actually found a new planet orbiting the sun, but some remote icy bodies are dropping tantalizing clues about a giant orb lurking in the fringes of the solar system.
Don't Blame 'Planet Nine' for Earth's Mass Extinctions.
But Planet Nine - a newly proposed but not yet confirmed world perhaps 10 times more massive than Earth that's thought to orbit far beyond Pluto - probably could not have triggered such "death from the skies" events, researchers said.
Brown and lead author Konstantin Batygin, also of Caltech, suggested the existence of Planet Nine in a paper that was published last week. They infer the planet's presence based on indirect evidence: computer models suggest that a distant, unseen world has shaped the strange orbits of a number of small objects in the Kuiper Belt, the ring of icy bodies beyond Neptune. Planet Nine likely has an elliptical orbit, coming within 200 to 300 astronomical units (AU) of the sun at its closest approach and getting as far away as 600 to 1,200 AU.
Tom Van Flandern: Dark Matter, Missing Planets & New Comets, North Atlantic Books, P.O. Box 12327, Berkeley, California, 1993, Page 313.
At its fullest extent, including its outlying regions, the Kuiper belt stretches from roughly 30 to 55 AU. Vulcan's perihelion is 134 AU and its aphelion 448 AU. Vulcan is more massive (half a Jupiter mass - 141 +/- 35 Earth masses) and closer to the Kuiper Belt than Planet Nine (~10 Earth masses). Vulcan can be blamed for taking us into and out of Ice Ages.
THE SEARCH FOR PLANET NINE - January 25, 2016
Right Ascension, Declination and range of Planet 9. Vulcan is at aphelion ~1970 (447.6 AU):
TIME (yrs)   MEAN ANOM (deg)   TRUE ANOM (deg)   RT. ASC. (deg)   DECLN (deg)
1970.00      180.072572        180.025907        263.077990       23.707364
At RA 263 degrees, the range is 200 - 450 AU, which is about right, but the Dec is at -15 to +10 degrees. At a Dec of about +23 degrees, the range is still about 200 - 450+ AU, but the RA is about 225 degrees. The range is within Van Flandern's/J. Allen Hynek's spec at least (less than 490/538 AU).
* THE SEARCH FOR PLANET NINE: THE LONG AND WINDING HISTORY OF PLANET X - January 24, 2016
* COULD YOU LIVE ON PLANET NINE? - 01.22.16
HOW ASTRONOMERS COULD ACTUALLY SEE 'PLANET NINE' - January 21, 2016
'PLANET NINE' MAY EXIST: NEW EVIDENCE FOR ANOTHER WORLD IN OUR SOLAR SYSTEM - January 20, 2016
The new modeling work by Brown and Batygin bolsters this intriguing scenario. Their simulations show that the gravitational influence of a roughly 10-Earth-mass Planet Nine in an anti-aligned orbit — one in which the planet's closest approach to the sun is 180 degrees across from that of all the other planets - could explain the KBOs' odd orbits.
HOW ASTRONOMERS COULD ACTUALLY SEE "PLANET NINE" - January 22, 2016
Evidence of this new addition to our solar system is indirect at the moment, but direct evidence could come relatively soon, in the form of a telescope observation, Planet Nine's proposers say. But, Sheppard told Space.com, "if it's not on the extreme ends of the orbit or the size, then Subaru should be able to find it." Vulcan passed aphelion (447.3 AU) around 1970. It's now around 444 AU or so. Not to worry, they never look at those extreme places.
MASSIVE PLANET X NOW URGENTLY SOUGHT BY TOP PLANET-HUNTERS - January 2016
The group's calculations suggest the object orbits 20 times farther from the Sun on average than does the eighth - and currently outermost - planet, Neptune, which moves about 4.5 billion km from our star. But unlike the near-circular paths traced by the main planets, this novel object would be in a highly elliptical trajectory, taking between 10,000 and 20,000 years to complete one full lap around the Sun. Some of this data will sound eerily reminiscent of Zecharia Sitchin's work (6) - particularly the 30 degree downward tilt of the undiscovered planet's gravitational influence. It's highly elliptical. It's at least ten times the Earth's mass, they think (2).
It's located about 600 Astronomical Units away, they think, with an orbit of tens of thousands of years - more like my own conclusions some years ago, as I came to realise that Sitchin's assumption of a 3,600-year orbit was likely far too small a figure. But why has this object so far evaded detection? Why didn't the infra-red sky survey WISE discover Planet Nine? After all, it's very substantial indeed, and relatively close (lying in the gap between the Kuiper Belt and the inner Oort Cloud). That's a critical issue here, as the scientists working on the WISE data seemed to rule this possibility out in no uncertain terms, declaring that no Saturn-sized planet could be lurking within 10,000 AU (8). By contrast, Planet Nine may be a mere 600 AU, or less, albeit much smaller than Saturn by Batygin and Brown's reckoning. Even so, surely WISE should have spotted it? In terms of the origin of this distant object, the authors think that Planet Nine may have started out as a gas giant core among the ice giants Uranus and Neptune, before being scattered from this zone by the 'gaseous component of the [primordial] nebula'. But, at the same time, they don't seem to entirely rule out a much larger object than 10 Earth masses, particularly if the orbit is highly eccentric: ABSENCE OF EVIDENCE IS NOT EVIDENCE OF ABSENCE!
ASTRONOMERS SAY A NEPTUNE-SIZED PLANET LURKS BEYOND PLUTO - Jan. 20, 2016
The orbit of the inferred planet is similarly tilted, as well as stretched to distances that will explode previous conceptions of the solar system. Its closest approach to the sun is seven times farther than Neptune, or 200 astronomical units (AU). (An AU is the distance between Earth and the sun, about 150 million kilometers.) And Planet X could roam as far as 600 to 1,200 AU, well beyond the Kuiper belt, the region of small icy worlds that begins at Neptune's edge about 30 AU.
A NEW PLANET IN OUR SOLAR SYSTEM ACCORDING TO RESEARCHERS - January 20, 2016
EVIDENCE FOR A DISTANT GIANT PLANET IN THE SOLAR SYSTEM - 2016 January 20
Recent analyses have shown that distant orbits within the scattered disk population of the Kuiper Belt exhibit an unexpected clustering in their respective arguments of perihelion. While several hypotheses have been put forward to explain this alignment, to date, a theoretical model that can successfully account for the observations remains elusive. In this work we show that the orbits of distant Kuiper Belt objects (KBOs) cluster not only in argument of perihelion, but also in physical space. We demonstrate that the perihelion positions and orbital planes of the objects are tightly confined and that such a clustering has only a probability of 0.007% to be due to chance, thus requiring a dynamical origin. We find that the observed orbital alignment can be maintained by a distant eccentric planet with mass ≳ 10 m⊕ whose orbit lies in approximately the same plane as those of the distant KBOs, but whose perihelion is 180° away from the perihelia of the minor bodies. In addition to accounting for the observed orbital alignment, the existence of such a planet naturally explains the presence of high-perihelion Sedna-like objects, as well as the known collection of high semimajor axis objects with inclinations between 60° and 150° whose origin was previously unclear. Continued analysis of both distant and highly inclined outer solar system objects provides the opportunity for testing our hypothesis as well as further constraining the orbital elements and mass of the distant planet.
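The perihelion and aphelion ranges quoted above (closest approach 200 to 300 AU, farthest 600 to 1,200 AU) can be turned into a rough semi-major axis, eccentricity and orbital period with nothing more than Kepler's third law, which is where the "10,000 to 20,000 years" and "'only' takes 15,000 years" figures in these articles come from. The sketch below is only illustrative arithmetic under that standard formula; the function name and the specific perihelion/aphelion pairings are ours, not taken from any of the quoted papers.

```python
# Rough orbital arithmetic for the quoted Planet Nine distances (illustrative only).

def orbit_from_peri_apo(q_au, Q_au):
    """Semi-major axis (AU), eccentricity and period (years) from perihelion q and aphelion Q."""
    a = (q_au + Q_au) / 2.0              # semi-major axis
    e = (Q_au - q_au) / (Q_au + q_au)    # eccentricity
    period_yr = a ** 1.5                 # Kepler's third law for a solar orbit (a in AU)
    return a, e, period_yr

# Perihelion 200-300 AU and aphelion 600-1,200 AU, as quoted in the articles above.
for q, Q in [(200, 600), (250, 900), (300, 1200)]:
    a, e, P = orbit_from_peri_apo(q, Q)
    print(f"q={q} AU, Q={Q} AU -> a={a:.0f} AU, e={e:.2f}, P = {P:,.0f} yr")
# The periods land in roughly the 8,000-21,000 year range, bracketing the quoted figures.
```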
PLANET 9 "FROM OUTER SPACE" MAY WELL BE REAL - January 20, 2016 CASE MADE FOR 'NINTH PLANET' - January 20, 2016 The group's calculations suggest the object orbits 20 times farther from the Sun on average than does the eighth - and currently outermost - planet, Neptune, which moves about 4.5 billion km from our star. But unlike the near-circular paths traced by the main planets, this novel object would be in a highly elliptical trajectory, taking between 10,000 and 20,000 years to complete one full lap around the Sun. MICHIO KAKU — NIBIRU PLANET X ~ THE BEST EVIDENCE TO DATE ~ PREPARING FOR 2016! - January 13, 2016 NIBIRU ON LIVE FOX5 NEWS! EXPERT REVEALS 2 DWARF STARS – PLANET X 2016 update - January 11, 2016 DID 'DARK MATTER' OR A STAR CALLED NEMESIS KILL THE DINOSAURS? - December 11, 2015 SCIENTISTS CLAIMED THEY FOUND ELUSIVE 'PLANET X.' DOUBTING ASTRONOMERS ARE IN AN UPROAR. - December 11, 2015 Alternatively, the researchers propose, it could be an undiscovered planet floating much farther out, or even a brown-dwarf (bigger than a planet, smaller than a star) passing through interstellar space. Also, a mysterious, unnamed object that appears in the sky close to the Alpha Centauri system that may be a "Super-Earth" planet far beyond even Pluto or a super-cool brown dwarf that's really far. It could also conceivably be an icy "trans Neptunian object," of which there are plenty in the frozen darkness past the eighth planet, but the researchers say that's less likely (it's also, not coincidentally, less interesting). THE SERENDIPITOUS DISCOVERY OF A POSSIBLE NEW SOLAR SYSTEM OBJECT WITH ALMA - 8 Dec 2015 Until the nature of the source becomes clear, we have named it Gna. Unless there are yet unknown, but significant, issues with ALMA observations, we have detected a previously unknown objects in our solar system. Based on proper motion analysis we find that, if it is gravitationally bound, Gna is currently located at 12?25 AU distance and has a size of ?220?880 km. Alternatively it is a much larger, planet-sized, object, gravitationally unbound, and located within ?4000 AU, or beyond (out to ?0.3~pc) if it is strongly variable. A NEW SUBMM SOURCE WITHIN A FEW ARCSECONDS OF ? CENTAURI: ALMA DISCOVERS THE MOST DISTANT OBJECT OF THE SOLAR SYSTEM - 8 Dec 2015 A New Submm Source Within A Few Arcseconds Of ? Centauri ALMA Discovers The Most Distant Object Of The Solar System 4.2. A new member of the solar system: an ETNO? Again, a low-albedo, thermal Extreme Trans Neptunian Object (ETNO), such as the hypothetical super-Earth of Trujillo & Sheppard (2014), would be consistent with our flux data (e.g., for R ~ 1.5R?, D ~ 300 AU, Tbb ~ 15 K, Theta ~ 80 mas, Fig. 5). One may expect the distribution of the Oort cloud TNOs (Trans Neptunian Objects)initially to be isotropic. However, the vast majority of known TNOs are not very far off the ecliptic. For instance, Sedna is at i ~ 12o, and other Sedna-like objects, Biden (2012 VP113) and V774104 (10 November 2015, Science, DOI:10.1126/science.aad7414) are at i = 24o and within 15o, respectively. This is certainly due to observational bias, as one generally scans the sky around the ecliptic (Schwamb et al. 2010). However, high inclinations are not excluded, with the most massive dwarf planet Eris at i = 44o being a prime example. For reasons of sensitivity (or rather, lack thereof), TNOs on highly eccentric orbits have traditionally been firstly detected when close to their perihelia. Further away, there would have remained unseen (e.g., Sheppard et al. 
2011). However, a sizable population of such bodies is expected to exist at large distances from the Sun. It is clear, therefore, that ALMA with its high submm sensitivity provides presently the only existing means to detect TNOs far from their perihelia, where temperatures are merely some tens of Kelvin. There must be a vast reservoir of objects between, say, roughly 100 and 1000 AU, of which we hitherto have seen only a tiny fraction (see also de la Fuente Marcos & de la Fuente Marcos 2014, and references therein).
SO YOU THINK PLANET X DOESN'T EXIST? THAT'S NOT WHAT THE NEW YORK TIMES AND WASHINGTON POST SAY, THE THREE GIRLS FROM GARABANDAL, SPAIN. POPE REVEALS 3RD SECRET OF FATIMA AND IT'S ABOUT PLANET X. - December 2, 2015
REVIEW: THE HUNT FOR VULCAN - November 23, 2015
WASHINGTON POST HEADLINE MENTIONS PLANET X (AKA NIBIRU) ON SAME DAY, DAYLIGHT FIREBALL OVER THAILAND: "A MASSIVE OUTER SOLAR SYSTEM PERTURBER MAY EXIST" - September 7, 2015
NASA'S NEW HORIZONS PROBE TO SEARCH MYSTERIOUS PLANET X IN THE KUIPER BELT - Sep 07, 2015
NASA'S NEW HORIZONS TO SEARCH MYSTERIOUS 'PLANET X' IN KUIPER BELT - 09/07/2015
SOME SCIENTISTS INTERESTED IN SEARCHING FOR 'PLANET X' - September 6, 2015
"A massive object or great disturber would perturb or disturb anything that came close to it. So objects that stay away from the great disturber would be the most stable objects," Sheppard explained. "Thus the great disturber can "shepherd" objects into similar types of orbits with similar arguments of perihelion, which are the orbits which constantly keep the smaller objects away from the bigger object."
Planet X has long been a staple of legends. One conspiracy theory claims that NASA embarked on the New Horizons project two years after the publication of a 1998 study that revealed the existence and location of the then postulated tenth planet. Conspiracy theorists claim that New Horizons' final destination is Planet X but the U.S. space agency pretended that the probe's destinations are Pluto and the Kuiper belt.
The location of Planet X - 10/1988
Observed positions of Uranus and Neptune along with residuals in right ascension and declination are used to constrain the location of a postulated tenth planet. The residuals are converted into residuals in ecliptic longitude and latitude. The results are then combined into seasonal normal points, producing average geocentric residuals spaced slightly more than a year apart that are assumed to represent the equivalent heliocentric average residuals for the observed oppositions. Such a planet is found to most likely reside in the region of Scorpius, with considerably less likelihood that it is in Taurus.
IS THERE A PLANET X, A 'MASSIVE PERTURBER,' HIDDEN BEYOND PLUTO?
- 09/03/2015
SUN ACCUSED OF STEALING PLANETARY OBJECTS FROM ANOTHER STAR - Aug 18, 2015
New study shows the sun may have snatched Sedna, Biden and other objects away from a neighbor.
THE SEARCH FOR THE MISSING PLANET: OUR EARLY SOLAR SYSTEM MAY HAVE HAD A FIFTH GAS GIANT THAT DISAPPEARED AFTER A CRASH WITH NEPTUNE - 12 August 2015
A group of rocks known as the kernel has a mysterious orbit in the Kuiper belt. Astronomer David Nesvorny used a simulation to rewind 4 billion years. He discovered the kernel was once caught in Neptune's gravitational pull, but a collision with a massive planet may have caused Neptune's orbit to jump, freeing the asteroids and explaining the orbits we see today.
Nesvorny first proposed the missing planet theory to explain the existence of the kernel in 2011, but now he believes he has a model that shows how this interaction would have taken place. It is not clear what became of the solar system's fifth gas giant, but if Nesvorny's calculations are correct, it may have been expelled from the solar system permanently. This is not the first time astronomers have predicted the existence of possible extra planets in our solar system. In January, scientists at the Complutense University of Madrid and the University of Cambridge said there must be at least two extra planets to explain the orbital behaviour of objects orbiting near Neptune.
OUR EARLY SOLAR SYSTEM MAY HAVE BEEN HOME TO A FIFTH GIANT PLANET - 11 August 2015
OUR EARLY SOLAR SYSTEM MAY HAVE HAD A FIFTH GAS GIANT THAT DISAPPEARED AFTER A CRASH WITH NEPTUNE - Aug 12, 2015
Our early solar system may have had an extra planet that was ejected from its orbit by a collision with Neptune. The planet would have once been the fifth gas giant, and evidence of its existence still remains in the asteroid belt that sits in the outer reaches of the solar system. In particular, the orbit of a cluster of icy rocks in the Kuiper belt known as the 'kernel' suggests Jupiter was forced out of its original orbit by a significant impact with a large object. According to Nesvorny, the only thing that would cause Neptune to jump in this way, freeing the kernel from its gravitational pull, is another massive gravitational field - one as big as a planet. None of the other gas giants in the solar system could have been responsible, as their orbits have never interacted with Neptune's in this way, according to Nesvorny.
STEALING SEDNA - AUGUST 6, 2015
The paper assigns the term 'Sednitos' (also sometimes referred to as 'Sednoids') to these Edgeworth-Kuiper Belt intruders with similar characteristics to Sedna. In 2012, 2012 VP113, dubbed the 'twin of Sedna,' was discovered by astronomers at the Cerro Tololo Inter-American Observatory in a similar looping orbit. The 'VP' designation earned the as yet unnamed remote world the brief nickname 'Biden' after U.S. Vice President Joe Biden… hey, it was an election year. There's good reason to believe something(s?) out there is shepherding these Sednitos into a similar orbit with a comparable argument of perihelion. Researchers have suggested the existence of one or several planetary mass objects loitering out in the 200-250 AU range of the outer solar system… note that this is a separate scientific-based discussion versus any would-be Nibiru-related nonsense, don't even get us started...
SEDNOID
A sednoid is a trans-Neptunian object with a perihelion greater than 50 AU and a semi-major axis greater than 150 AU. Only two objects are known from this population, 90377 Sedna and 2012 VP113, both of which have perihelia greater than 75 AU, but it is suspected that there are many more.
Unexplained orbits
The sednoids' orbits cannot be explained by perturbations from the giant planets,[5] nor by interaction with the galactic tides. If they formed in their current locations, their orbits must originally have been circular; otherwise accretion (the coalescence of smaller bodies into larger ones) would not have been possible because the large relative velocities between planetesimals would have been too disruptive.[6] Their present elliptical orbits can be explained by several hypotheses:
1. These objects could have had their orbits and perihelion distances "lifted" by the passage of a nearby star when the Sun was still embedded in its birth star cluster.
2. Their orbits could have been disrupted by an as-yet-unknown planet-sized body beyond the Kuiper belt.
3. They could have been captured from around passing stars, most likely in the Sun's birth cluster.
A STAR PASSED THROUGH THE SOLAR SYSTEM JUST 70,000 YEARS AGO - February 18, 2015
ASTRONOMERS ARE PREDICTING AT LEAST TWO MORE LARGE PLANETS IN THE SOLAR SYSTEM - January 15, 2015
WHAT THE RECENT DWARF PLANET DISCOVERY TELLS US ABOUT OUR SOLAR SYSTEM - April 2, 2014
A recent survey of the sky carried out by NASA's Wide-field Infrared Survey Explorer (WISE, and its follow-up missions, NEOWISE and the All-WISE sky survey) found no evidence of an object Saturn-sized or larger out to 10,000 AU, or 16% of one light year. And while that may rule out the existence of a brown dwarf companion to our Sun, that leaves plenty of possibilities for planet-sized objects out there still waiting to be discovered. Of course this pretty much closes the door on the Nemesis hypothesis of a large dark companion to our Sun on a long, 100,000 year orbit that triggers a periodic rain of comets pummeling the inner solar system, and leaves the idea of a hypothetical remote gas giant in the extreme outer solar system known as Tyche only slightly more tenable. And of course, the conspiracy mongers hoping to resurrect Nibiru need not apply! Still, the strange orbits of Biden and Sedna and their brethren have a story to tell. Is a good sized planet out there, tugging them out of their orbits? Or did our solar system suffer a passage from a nearby star early in its remote history that drew these objects into the bizarre orbits that we see today? Next year, we'll get our first good looks at one of these worlds, when NASA's New Horizons spacecraft passes Pluto — the King of the Kuiper Belt — and its retinue of moons in July 2015. Astronomers are still scouring the sky beyond, looking for possible targets to explore after the encounter. This isn't your father's solar system, that's for sure.
THE SUN'S DARK COMPANION, ACCORDING TO PHYSICS - July 30, 2015
The following study is his second work on the subject of irregularities in the orbits of Uranus and Neptune (after his first article, hosted since last year on my website): in this study he points out the possible presence of a Dark Companion Star of our Sun, which he calls Vulcan, and he focuses his attention on the Sun's angular momentum.
Of course there is no guarantee on these speculations, above all because we do not know all the astronomical data with enough thoroughness, and everything is based on clues that came out in the course of the search for Planet X and Nemesis (a Dark Star, perhaps a brown dwarf) carried out by NASA, by the U.S. Naval Observatory and by other scientific institutions involved in the last century. The very existence of Nibiru beyond Neptune is not yet certain, nor accepted by modern astronomy. The main idea of the author is that the Sun is involved in a cosmic dance with its invisible dark star companion: Vulcan (another name used after the well-known name of Nemesis). As the author R.F. says in his scientific article,
To get some idea of how the Sun is moving about in its dance with Vulcan, we can assume for the sake of discussion that Vulcan is orbiting somewhat like the orbit we guessed at to model NASA's IRAS observation of 1983. To do this we assumed an eccentricity of .54, an orbital period of 12500 years and a semi-major axis a of 538 AU. If we consider the corresponding value of Vulcan's mass from Table 2.0 computed as 0.0036 S and the other numbers as indicated immediately above, then on applying equation (17) rM is found to be about 2.97 AU for this case. This implies that the Sun would have an orbit about the common center of mass with a maximum radius three times Earth's orbital radius around the Sun. With longer periods and/or a larger mass for Vulcan, the Sun's orbital radius would be appropriately larger. Presuming Cruttenden's 26000 yr period, the eccentricity also at 0.54 (he actually proposed 0.038), the orbit's semi-major axis of 879 AU and a computed mass for Vulcan of 0.0063 S, then rM becomes 8.47 AU, which is larger than the orbital radius of Jupiter! An interesting thing to note is that as with this case, the orbits of the Sun and Vulcan never intersect. Vulcan appears to have the orbital character of a very out-sized planet.
SPECULATIONS ON THE SUN'S DARK STAR COMPANION, ACCORDING TO PHYSICS - February 14, 2015
The following article [Speculations On The Sun's Dark Star Companion, According To Physics] - unpublished so far - is written by R.F., an American applied mathematician who lives in the USA and is interested in discussing Nibiru's approach and the possible existence of a dark star beyond the outer planets of the Solar System. He contacted me by e-mail a few months ago, after reading my articles in English published on the Internet. That's why he sent me his writings. I know his first name and surname but he kindly asked me to spread his study only under his initials. The following study is his second work on the subject of irregularities in the orbits of Uranus and Neptune (after his first article, hosted since last year on my website): in this study he points out the possible presence of a Dark Companion Star of our Sun, which he calls Vulcan, and he focuses his attention on the Sun's angular momentum. Of course there is no guarantee on these speculations, above all because we do not know all the astronomical data with enough thoroughness, and everything is based on clues that came out in the course of the search for Planet X and Nemesis (a Dark Star, perhaps a brown dwarf) carried out by NASA, by the U.S. Naval Observatory and by other scientific institutions involved in the last century. The very existence of Nibiru beyond Neptune is not yet certain, nor accepted by modern astronomy.
The main idea of the author is that the Sun is involved in a cosmic dance with its invisible dark star companion: Vulcan (another name used after the well-known name of Nemesis). As the author R.F. says in his scientific article,
The eccentricities of .54 and .90 were selected because one researcher is pushing very hard for the former, and the latter induces a cometary orbit for comparison. Also, visual binary stars have been found to have eccentricities of about .50 on average anyway, so the former number is close to one we might expect anyway. The real question of the hour is does Vulcan pose a threat to the Earth? The best answer for now is it depends. All of the orbits examined or that are reasonably likely imply that Vulcan's perihelion is well outside the orbit of Pluto, but even if that's so, an object of several Jovian masses coming that close to the Sun would be sorely felt throughout the solar system, you can be certain. Possibly a larger threat is that Vulcan may well have its own retinue of satellites which at perihelion could come careening through the inner solar system to create cataclysmic havoc on a scale difficult to imagine. The other major threat of such an object is its likely ability to drive any number of comets hurtling in our direction. As more about Vulcan's orbit and properties becomes known, the threat assessment will become more realistic. For now, stay tuned.
COMMENT ONE - THE WHISTLEBLOWER
FOX NEWS 'PLANET X' SHOCKER! IS 'NIBIRU' DISCLOSURE FORTHCOMING? 'TRULY REVOLUTIONARY FOR ASTRONOMY!' - January 17, 2015
MYSTERIOUS PLANET X MAY REALLY LURK UNDISCOVERED IN OUR SOLAR SYSTEM - January 16, 2015
"Planet X" might actually exist — and so might "Planet Y." At least two planets larger than Earth likely lurk in the dark depths of space far beyond Pluto, just waiting to be discovered, a new analysis of the orbits of "extreme trans-Neptunian objects" (ETNOs) suggests. Researchers studied 13 ETNOs — frigid bodies such as the dwarf planet Sedna that cruise around the sun at great distances in elliptical paths. Theory predicts a certain set of details for ETNO orbits, study team members said. For example, they should have a semi-major axis, or average distance from the sun, of about 150 astronomical units (AU). . . . These orbits should also have an inclination, relative to the plane of the solar system, of almost 0 degrees, among other characteristics. But the actual orbits of the 13 ETNOs are quite different, with semi-major axes ranging from 150 to 525 AU and average inclinations of about 20 degrees. "This excess of objects with unexpected orbital parameters makes us believe that some invisible forces are altering the distribution of the orbital elements of the ETNOs, and we consider that the most probable explanation is that other unknown planets exist beyond Neptune and Pluto," lead author Carlos de la Fuente Marcos, of the Complutense University of Madrid, said in a statement. The potential undiscovered worlds would be more massive than Earth, researchers said, and would lie about 200 AU or more from the sun — so far away that they'd be very difficult, if not impossible, to spot with current instruments. Trujillo and Sheppard suggested that the orbits of 2012 VP113 and Sedna are consistent with the continued presence of a big "perturber" — perhaps a planet 10 times more massive than Earth that lies 250 AU from the sun. "If it is confirmed, our results may be truly revolutionary for astronomy," de la Fuente Marcos said.
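A minimal sketch of the two-body arithmetic behind the rM figures quoted in the R.F. excerpts above (about 2.97 AU for the 12,500-year case and 8.47 AU for the 26,000-year case). It assumes rM is the Sun's maximum offset from the Sun-companion centre of mass, i.e. the separation at the companion's aphelion, a(1 + e), scaled by the mass ratio; the function name is ours, and the input values are simply those quoted in the excerpt, so this reproduces R.F.'s numbers without endorsing them.

```python
# Sun's maximum offset from the Sun-companion barycentre (illustrative sketch only).
# r_M = a * (1 + e) * m / (1 + m), with a in AU and the companion mass m in solar masses.

def sun_barycentre_offset_au(a_au, eccentricity, companion_mass_solar):
    separation_at_aphelion = a_au * (1.0 + eccentricity)    # widest Sun-companion separation
    mass_ratio = companion_mass_solar / (1.0 + companion_mass_solar)
    return separation_at_aphelion * mass_ratio

# Values quoted in the excerpt above:
print(sun_barycentre_offset_au(538.0, 0.54, 0.0036))   # ~2.97 AU (12,500-year case)
print(sun_barycentre_offset_au(879.0, 0.54, 0.0063))   # ~8.47 AU (26,000-year case)
```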
A DISTANT PLANET MAY LURK FAR BEYOND NEPTUNE - 5 Dec. 2014
"The idea's not crazy," says David Jewitt, a planetary scientist at the University of California, Los Angeles. "But I think the evidence is slim." The trail of bread crumbs leading to an undiscovered planet is sparse: just 12 chunks of ice lead the way. But it's enough to get some researchers wondering about a ninth (or 10th, depending on your attitude regarding Pluto) planet roaming the outer solar system and how it might have arrived there.
Kuiper belt clues
"The exciting thing for me is that 2012 VP113 exists," says Megan Schwamb, a planetary scientist at Academia Sinica in Taipei, Taiwan. "Whatever put Sedna on its orbit should have put a whole bunch of other objects out there." The giant, stretched-out orbits of Sedna and 2012 VP113 are not at all like anything else in the solar system. Both are too far from Neptune to feel its effects. Furthermore, they're too far from the Oort cloud, the remote shell of icy rocks thought to enclose the solar system. Their trajectories could be a relic of a passing star, or of the changing pull of the Milky Way's gravity as the sun moves around the galaxy - or of a giant planet, long gone or yet to be recognized. The case for an extra planet got stronger when Trujillo and Sheppard realized that Sedna and 2012 VP113 had something in common with 10 other objects. All the objects past 150 astronomical units (AU) come closest to the sun, a point known as perihelion, at almost the same time that they cross the plane of the solar system. There's no reason for these perihelia to bunch up like that. Billions of years of evolution ought to have left the perihelia scattered, like those of the rest of the Kuiper belt - unless something was holding the perihelia in place. Trujillo and Sheppard estimated that a planet around two to 15 times as massive as Earth, at a distance of 250 astronomical units (around eight times as far from the sun as Neptune), could explain why these 12 perihelia were packed together. But the astronomers concede that is by no means the only possibility: a closer planet as massive as Mars would have the same effect as a Neptune-mass body much farther away.
TRANS-NEPTUNIAN OBJECTS SUGGEST THAT THERE ARE MORE DWARF PLANETS IN OUR SOLAR SYSTEM - January 15, 2015
There could be at least two unknown dwarf planets hidden well beyond Pluto, whose gravitational influence determines the orbits and strange distribution of objects observed beyond Neptune. This has been revealed by numerical calculations. If confirmed, this hypothesis would revolutionize solar system models. Astronomers have spent decades debating whether some dark trans-Plutonian planet remains to be discovered within the solar system. According to scientists, not only one, but at least two planets must exist to explain the orbital behavior of extreme trans-Neptunian objects.
2015 METEORITE, IT WAS GLACIER AND STAR VULCAN. – Susana Romero.Blog - 15 Jan, 2015?
Susana Romero's version of the Vulcan Web Site.
TRANS-NEPTUNIAN OBJECTS SUGGEST THAT THERE ARE MORE PLANETS IN THE SOLAR SYSTEM - January 13 2015
Astronomers have spent decades debating whether some dark trans-Plutonian planet remains to be discovered within the solar system.
According to the calculations of scientists at the Complutense University of Madrid (UCM, Spain) and the University of Cambridge (United Kingdom), not only one, but at least two planets must exist to explain the orbital behaviour of extreme trans-Neptunian objects (ETNOs). The most accepted theory establishes that the orbits of these objects, which travel beyond Neptune, should be distributed randomly, and, owing to an observational bias, their paths must fulfil a series of characteristics: have a semi-major axis with a value close to 150 AU (astronomical units, or times the distance between the Earth and the Sun), an inclination of almost 0° and an argument or angle of perihelion (closest point of the orbit to our Sun) also close to 0° or 180°. Yet what is observed in a dozen of these bodies is quite different: the values of the semi-major axis are very dispersed (between 150 AU and 525 AU), the average inclination of their orbits is around 20°, and the argument of perihelion is around -31°, in no case appearing close to 180°.
ARE WE CLOSE TO DISCOVERING PLANET X? - Dec 1 2014
The quest to find a Planet X in our solar system beyond Pluto continues — and we may be a step closer to not one, but two undiscovered planetary masses.
A DISTANT PLANET MAY LURK FAR BEYOND NEPTUNE - 14 Nov. 2014
The discovery of 2012 VP113 confirmed that Sedna is not a fluke but is possibly the first of a large population of icy bodies distinct from others in the rest of the solar system. So Trujillo and Sheppard continued to poke around the Kuiper belt, and the mystery deepened. They noticed that beyond 150 astronomical units (150 times the distance from the sun to the Earth), 10 previously discovered objects, along with Sedna and 2012 VP113, follow orbits that appear strangely bunched up. "That immediately piqued our interest," says Sheppard. Could an unseen planet, a Planet X, be holding the orbits of all these far-out bodies in place?
BEYOND THE EDGE OF THE SOLAR SYSTEM: THE INNER OORT CLOUD POPULATION
A dwarf planet with the most distant orbit known found beyond the observed edge of our Solar System. Sheppard and Trujillo suggest a Super Earth or an even larger object at hundreds of AU could create the shepherding effect seen in the orbits of these objects, which are too distant to be perturbed significantly by any of the known planets.
a) Orbit diagram for the outer solar system. The Sun and terrestrial planets are at the center. The orbits of the four giant planets Jupiter, Saturn, Uranus and Neptune are shown as purple solid circles, with the Kuiper Belt shown as the dotted light blue region just beyond the giant planets. Sedna's orbit is shown in orange while 2012 VP113's orbit is shown in red. Both objects are currently near their closest approach to the Sun (perihelion). They would be too faint to detect when in the outer parts of their orbits. Notice that both orbits have similar perihelion locations on the sky and both are far away from the giant planet and Kuiper Belt regions. b) Plot of all the known bodies in the outer solar system by their closest approach to the Sun (perihelion) and eccentricity.
There are three competing theories for how the inner Oort cloud might have formed. As more objects are found, it will be easier to narrow down which of these theories is most likely accurate. One theory is that a rogue planet could have been tossed out of the giant planet region and this planet could have perturbed objects out of the Kuiper Belt to the inner Oort cloud on its way out. This planet could have been ejected or still be in the distant solar system today.
The second theory is that a close stellar encounter could put objects into the inner Oort cloud region, while a third theory suggests inner Oort cloud objects are captured extra-solar planets from other stars that were near our Sun in its birth cluster.
Vulcan's And Biden's Orbital Parameters
Parameter                          Vulcan (2 Sigma Error)   2012 VP113 (Biden)
Period (years)                     4969.0 +/- 11.5          4320 +/- 183
Orbital Eccentricity               0.537 +/- 0.0085         0.696 +/- 0.011
Orbital Inclination                48.44° +/- 0.23°         24.017° +/- 0.004°
Longitude of the Ascending Node    189.0° +/- 1.3°          90.887° +/- 0.010°
Argument of Perihelion             257.8° +/- 0.90°         294° +/- 2°
Aphelion (AU)                      447.6 +/- 3.2            450 +/- 13
TWO GIANT NEW PLANETS FOUND - NIBIRU? SPANISH ASTRONOMERS DISCOVER ANOMALIES IN OUR SOLAR SYSTEM - June 17, 2014
EXTREME TRANS-NEPTUNIAN OBJECTS LEAD THE WAY TO PLANET NINE - 13 June 2016
Now, however, brothers Carlos and Raúl de la Fuente Marcos, two freelance Spanish astronomers, together with scientist Sverre J. Aarseth from the Institute of Astronomy of the University of Cambridge (United Kingdom), have considered the question the other way around: How would the orbits of these six ETNOs evolve if a Planet Nine such as the one proposed by K. Batygin and M. Brown really did exist? The answer to this important question has been published in the journal Monthly Notices of the Royal Astronomical Society (MNRAS). In any case, the statistical and numerical evidence obtained by the authors, both through this and previous work, leads them to suggest that the most stable scenario is one in which there is not just one planet, but rather several more beyond Pluto, in mutual resonance, which best explains the results. "That is to say we believe that in addition to a Planet Nine, there could also be a Planet Ten and even more," the Spanish astronomer points out.
ARE TWO GIANT PLANETS LURKING BEYOND PLUTO? UNUSUAL ORBITS SPOTTED IN THE OUTER SOLAR SYSTEM HINT AT THE PRESENCE OF LARGE WORLDS - 13 June 2014
The aligned orbits of rocky bodies around Pluto hint at an unseen planet. Spanish scientists claim this world would be 10 times the mass of Earth. They believe this planet is moving in resonance with a much larger world. They calculated this world would have a mass between that of Mars and Saturn and would orbit 200 times Earth's distance from the sun.
TWO GIANT PLANETS MAY CRUISE UNSEEN BEYOND PLUTO - 11 June 2014
The monsters are multiplying. Just months after astronomers announced hints of a giant "Planet X" lurking beyond Pluto, a team in Spain says there may actually be two supersized planets hiding in the outer reaches of our solar system. Now Carlos and Raul de la Fuente Marcos at the Complutense University of Madrid in Spain have taken another look at these distant bodies. As well as confirming their bizarre orbital alignment, the pair found additional puzzling patterns. Small groups of the objects have very similar orbital paths. Because they are not massive enough to be tugging on each other, the researchers think the objects are being "shepherded" by a larger object in a pattern known as orbital resonance. For instance, we know that Neptune and Pluto are in orbital resonance – for every two orbits Pluto makes around the sun, Neptune makes three. This is called a 2:3 resonance. Similarly, one group of small objects seems to be in lockstep with a much more distant, unseen planet. That world would have a mass between that of Mars and Saturn and would sit about 200 times Earth's distance from the sun.
Some of the smaller objects have very elongated orbits that would take them out to this distance. It is unusual for a large planet to orbit so close to other bodies unless it is dynamically tied to something else, so the researchers suggest that the large planet is itself in resonance with a more massive world at about 250 times the Earth-sun distance – just like the one predicted in the previous work. The large planet would have a mass between that of Mars and Saturn (90 Earth masses) and would sit at about 200 AU. The large planet is itself in resonance with a more massive world at about 250 AU. Vulcan's mass is anticipated to be 141 to 165 Earth masses and its average distance from the Sun is 291 AU.
SUMMARY: CR105 - COMET OF VULCAN - 18 January 2006
The presence of a brown dwarf companion to our Sun has long been suspected, but until recently there has been no direct evidence that could convince astronomers. Originally, only circumstantial evidence (newspaper articles, ancient artifacts and even extraterrestrial alien contacts) supported its existence. This web site has proposed its mass and orbital parameters based on multi-source data. Apparently, it forms comet swarms in a 3:2 orbital period resonance, a (Vulcan - 4969 years):(comet - 3312.7 years) ratio. Now, the computation of giant comet CR105's average orbital period has shown it to be statistically certain that it is in just such a predicted resonant relationship. There may be at least three other Vulcan-related planetoids. A similar average period of these has not been evaluated, but the initial values are within the range expected when Vulcan is included in computer simulations. Specifically, 2001 FP185 (3433.7 years) and 2002 GB332 (3234.2 years) appear to be in a similar 3:2 resonance and 1999 DP8 (1246 years) in a 4:1 resonance with Vulcan.
When potential dwarf planet 2012 VP113 was discovered in March, it joined a handful of unusual rocky objects known to reside beyond the orbit of Pluto. These small objects have curiously aligned orbits, which hints that an unseen planet even further out is influencing their behaviour. Scientists calculated that this world would be about 10 times the mass of Earth and would orbit at roughly 250 times Earth's distance from the sun.
CURRENT EVIDENCE OF SEPTIMUS' AND VULCAN'S MASS AND ORBIT - 7 JUNE 1999
Vulcan, Forbes' And Pickering's Planets Orbital Parameters
                               Period (yr)   Ecc./Peri./Ap.      Incl.     Arg.Peri./Year    Long.Asc.N./Dist./S. Mass
VULCAN (updated Aug. 2002)     4,969         0.537/134.8/447.6   48 deg.   256 deg./515 BC   189 deg./~446/0.05%
                               5,030*        0.545/134/454       49 deg.   256 deg./546 BC   192 deg./453/0.1%
FORBES' TWO PLANETS            5,000*        n.c./n.c./n.c.      45 deg.   n.c./n.c.         185 deg./n.c./n.c.
                               1,076         0.167/87/122        52 deg.   115 deg./1702     247 deg./n.c./n.c.
THREE OF PICKERING'S PLANETS   500,000       0.20/5040/7560      26 deg.   80 deg.           234 deg./?/3%
                               26,000        0.54/404/1352       86 deg.   105 deg.          93 deg./575/6%
                               1,400         0.35/81/169         37 deg.   160 deg.          51 deg./95/n.c.
n.c. = not calculated
* Probability of fortuitous correlation = 0.00007. Probability that they are the same = 0.99993.
arxiv.org/pdf/1406.0715v2.pdf - 3 June 2014 - "an approximate mean motion resonance with an unseen planet."
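As a quick arithmetic check on the resonance claims in the summary above, the sketch below recomputes, from the site's own Vulcan period and eccentricity, the implied semi-major axis and aphelion (via Kepler's third law), together with the exact periods a 3:2 or 4:1 mean-motion resonance with Vulcan would require, for comparison with the quoted planetoid periods. The constant and function names are ours, and nothing here is evidence that Vulcan actually exists.

```python
# Arithmetic check of the Vulcan resonance figures quoted above (illustrative only).

P_VULCAN_YR = 4969.0    # orbital period claimed in the tables above
E_VULCAN = 0.537        # orbital eccentricity claimed in the tables above

# Kepler's third law for a solar orbit: a [AU] = P^(2/3) with P in years.
a_vulcan_au = P_VULCAN_YR ** (2.0 / 3.0)
aphelion_au = a_vulcan_au * (1.0 + E_VULCAN)
print(f"implied a ~ {a_vulcan_au:.0f} AU, aphelion ~ {aphelion_au:.0f} AU")  # ~291 AU, ~448 AU

def resonant_period_yr(p, q):
    """Period of a body whose mean motion is in a p:q ratio with Vulcan's."""
    return P_VULCAN_YR * q / p

# Quoted planetoid periods versus the exact resonance values.
for name, quoted_yr, (p, q) in [("CR105-class comet", 3312.7, (3, 2)),
                                ("2001 FP185",        3433.7, (3, 2)),
                                ("2002 GB332",        3234.2, (3, 2)),
                                ("1999 DP8",          1246.0, (4, 1))]:
    print(f"{name}: quoted {quoted_yr} yr, exact {p}:{q} resonance {resonant_period_yr(p, q):.1f} yr")
```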
SCIENTISTS DISCOVER NEW DWARF PLANET 2012 VP-113 BEYOND DISTANT PLUTO - 27 March, 2014
Discovery of 2012 VP-113 beyond Pluto might hint at an invisible giant and help us understand early interplanetary movement.
DISTANT DWARF PLANET DISCOVERED BEYOND THE KNOWN EDGE OF OUR SOLAR SYSTEM - March 27, 2014
What's more, their work indicates the potential presence of an enormous planet, perhaps up to 10 times the size of Earth, not yet seen, but possibly influencing the orbit of 2012 VP113, as well as other inner Oort cloud objects.
NEW DWARF PLANET HINTS AT GIANT WORLD FAR BEYOND PLUTO - 26 March 2014
New View Of The Outer Solar System
A surprise monster may be lurking in our solar system. A newly discovered dwarf planet has grabbed the crown as the most distant known object in our solar system – and its orbit hints at a giant, unseen rocky world, 10 times the mass of Earth and orbiting far beyond Pluto. NASA's Wide-field Infrared Survey Explorer (WISE) scoured this region of space in 2010 and 2011 searching for a so-called Planet X and came up empty. However, WISE was looking for the tell-tale warmth of gas giants – a rocky "super-Earth", like the one Sheppard's team suggest, would be too cold for the telescope to pick up. "This is too faint for WISE," says Ned Wright, the space telescope's principal investigator. Even if the planet has a small internal heat source – and absorbs some sunlight, it would still not generate enough heat to register, he adds.
Note: 2012 VP113 Aphelion 450 ± 13 AU.
VULCAN'S ORBITAL PARAMETERS FROM BLAVATSKY'S THEOSOPHY - "Vulcan's Aphelion is 41.6 billion miles (448 AU) vs. Hynek's 50 billion."
DISCOVERY OF NEW DWARF PLANET HINTS AT OTHER OBJECTS IN SOLAR SYSTEM - March 26, 2014
The scientists noticed something else that seemed too odd to be a coincidence: Both Sedna and 2012 VP113 seemed to be making their closest approach to the sun at similar angles. That could mean that there's a giant planet out there, tugging at both of their orbits in the same way. If so, this ghost planet could have a size of anywhere from 1 to 20 Earth masses, Sheppard said. Future telescopes should be able to help settle whether a super-Earth exists by finding more dwarf planets like Sedna and 2012 VP113, scientists said.
THE DISCOVERY OF A DWARF PLANET LEADS TO SPECULATION THAT ANOTHER ROGUE PLANET EXISTS BEYOND IT. - March 27, 2014
Figure. How 2012 VP113 And "Super Earth" Compare With Some Of Their Neighbors
Orbital analysis of 2012 VP113, a tiny planetoid beyond Pluto, and other bodies indicates an undetected planet, a giant "super-Earth," lurking on the outer edge of our solar system. Notice that the diameters of Earth and the super-Earth are compared and range from two to ten Earth diameters, implying a diameter of 80,000 miles vs. Sitchin's reported value of 80,691 miles. These values differ by only 0.86%.
DWARF PLANETS IN THE EXTREME OUTER SOLAR SYSTEM - March 26, 2014
Anomalies in the orbits of Sedna and 2012 VP113 tantalizingly suggest that one or more giant planets 10 times the mass of Earth could orbit in the dark, frozen outer fringe of our solar system, 250 A.U. or more from the Sun. This 250 A.U. value is based on a nominal 4,000-year period (250^1.5 = 3,953). The admitted period for VP113 goes up to 4,590 years (276^1.5 = 4,585). Thus, the corresponding value for the postulated giant planet must be more than 276 A.U. The corresponding value for Vulcan is 291 A.U. (291.2^1.5 = 4,969).
PLANET NIBIRU TO PASS EARTH BY AUGUST 2015 - Feb 25, 2014
Nibiru is a comet swarm, not a planet.
More likely to pass circa August 2016.
PLANET X MYTH DEBUNKED - 15 March 2014
NASA LOOKS FOR PLANET X BUT COMES UP EMPTY - March 10, 2014
This is why, despite the failure with Planet X, NASA thinks "there are even more stars out there left to find with WISE. We don't know our own sun's backyard as well as you might think," Wright added.
STILL NO SIGN OF 'PLANET X' IN LATEST NASA SURVEY - March 12, 2014
RED SKIES DISCOVERED ON EXTREME BROWN DWARF - February 6, 2014
The brown dwarf, named ULAS J222711-004547, caught the researchers' attention for its extremely red appearance compared to "normal" brown dwarfs. However, the recently discovered brown dwarf ULAS J222711-004547 has a very different atmosphere where the sky is always red.
2014 PLANET X WATCH TRENDS - 10 December 2013
Why 1983 is When Our World Changed
You can read a watered-down cover-up on Wikipedia, but the truth about what IRAS found in 1983, and how it changed the course of human history, is profound. The real back story on IRAS was given to us by John Maynard, Defense Intelligence Agency (Retired), who played an instrumental role in the creation of Yowusa.com in 1999 and then worked with Dr. Greer's CSETI Disclosure Project. What Maynard told us in 2000 was that, while IRAS was publicly touted as a wide-sky-survey space-based infrared telescope, it was built as a result of the preliminary data coming from the Pioneer probes suggesting a large body at the edge of our solar system. He further maintains that NASA found Planet X, and this is corroborated by a December 30, 1983 article published by the Washington Post.
IRAS and the Elites
However, the IRAS data made these ambitions pointless, given that the impending flyby of Planet X (actually a comet swarm) through the core of our system will lay waste to the world's various political and economic systems.
HERCOLUBUS IS COMING WARNS TOP ASTRONOMER (Fascinating Video) - October 15, 2013
Dead star and comet-like planet.
PLANET X REFERENCES FOUND IN POPULAR SCIENCE MAGAZINES OF THE PAST - Apr 27, 2013
ASTROPHYSICS PUBLISHED PAPERS / PDFS AND LINKS ON PX7 - 14 April 2013
THE LOCATION OF PLANET X - R. S. Harrington
NEMESIS, VULCAN AND PERHAPS OTHER BODIES WE ARE NOT AWARE OF, AND EARTH'S SECOND MOON - YES REALLY, SMALLER AND HARDLY VISIBLE. - October 29, 2012
Anderson concluded that the tenth planet must have a highly elliptical orbit, carrying it far away to be undetectable now but periodically bringing it close enough to leave its disturbing signature on the paths of the outer planets. He suggests a mass of five Earth masses, an orbital period of about 700-1000 years, and a highly inclined orbit. Its perturbations on the outer planets won't be detected again until 2600. Anderson hoped that the two Voyagers would help to pin down the location of this planet. Lewis Swift (co-discoverer of Comet Swift-Tuttle, which returned in 1992) also saw a 'star' he believed to be Vulcan -- but at a different position than either of Watson's two 'intra-Mercurials'. In addition, neither Watson's nor Swift's Vulcans could be reconciled with Le Verrier's or Lescarbault's Vulcan.
DANGEROUS NEW JOVIAN SIZED BODY IN OUR SOLAR SYSTEM - August 11th, 2012, 11:35 am
WISE FINDS FEW BROWN DWARFS CLOSE TO HOME - June 8, 2012
Improvements in WISE's infrared vision over past missions have allowed it to pick up the faint glow of many of these hidden objects.
In August 2011, the mission announced the discovery of the coolest brown dwarfs spotted yet, a new class of stars called Y dwarfs. One of the Y dwarfs is less than 80 degrees Fahrenheit (25 degrees Celsius), or about room temperature, making it the coldest star-like body known. Since then, the WISE science team has surveyed the entire landscape around our sun and discovered 200 brown dwarfs, including 13 Y dwarfs.
Planet X NIBIRU DISINFO - BILL COOPER 'IRAS FOUND PLANET X IN 1983?' - Oct 4, 2011
YOUNG SOLAR SYSTEM'S FIFTH GIANT PLANET? - 13 Sep. 2011
Several initial states stand out in that they show a relatively large likelihood of success in matching the constraints. Some of the statistically best results were obtained when assuming that the solar system initially had five giant planets and that one ice giant, with a mass comparable to that of Uranus and Neptune, was ejected to interstellar space by Jupiter. This possibility appears to be conceivable in view of the recent discovery of a large number of free-floating planets in interstellar space, which indicates that planet ejection should be common.
PERSISTENT EVIDENCE OF A JOVIAN MASS SOLAR COMPANION IN THE OORT CLOUD - 02/2011
We present updated dynamical and statistical analyses of outer Oort cloud cometary evidence suggesting that the Sun has a wide-binary jovian mass companion. The results support a conjecture that there exists a companion of mass ~1-4 Jupiter masses orbiting in the innermost region of the outer Oort cloud. Our most restrictive prediction is that the orientation angles of the orbit plane in galactic coordinates are centered on Ω, the galactic longitude of the ascending node, = 319° and i, the galactic inclination, = 103° (or the opposite direction), with an uncertainty in the orbit normal direction subtending <2% of the sky. Such a companion could also have produced the detached Kuiper Belt object Sedna. If the object exists, the absence of similar evidence in the inner Oort cloud implies that common beliefs about the origin of observed inner Oort cloud comets must be reconsidered. Evidence of the putative companion would have been recorded by the Wide-field Infrared Survey Explorer (WISE), which has completed its primary mission and is continuing on secondary objectives.
SPANISH ASTRONOMERS CLAIM DWARF SUN BEYOND PLUTO
The group made the rounds of all the news web sites in the past two weeks, claiming they discovered something very significant.
It's almost twice the size of Jupiter and just beyond our furthest planetoid, Pluto. Although it's not a planet, it appears to have planets or large satellites encircling it. It's what astronomers call a "brown dwarf star" and its official name is "G1.9". This newly discovered "brown dwarf" is believed to have formed from the same condensed matter that gave birth to our Sun. It is believed that, after the large planets formed around the Sun, they pushed it to the edge of the Solar system where it formed a sphere of about 1.9 MJ -- well below the mass needed to ignite it as a "sun." The newly discovered brown dwarf is reported to be located just about 60 to 66 AU (1 AU = the distance from the Sun to Earth) from us (its perigee), currently in the direction of the constellation Sagittarius. Because of periodic gravitational disturbances in areas of space further out, specifically in the Oort Cloud, the Spanish group of astronomers believe G1.9 travels in an elliptical orbit extending possibly hundreds of AU beyond the furthest known planets (its apogee). Its position just beyond Pluto suggests it is at its closest approach to the Sun and Earth. SPANISH ASTRONOMERS CLAIM DWARF SUN BEYOND PLUTO - FEBRUARY 27, 2011 The scientists also remind us that the observations for G1.9 were made 23 years ago and the most recent observations show that the object has not moved significantly as an orbiting planet or brown dwarf would be expected to do. How do you explain this? When we previewed this article to the Starviewer Team, we asked them to send us a rebuttal. We think we have focused on the Achilles' heel of their assertions. We waited for an answer and received the following statement, which was translated for us: 1.- Some self-motivated International committee of astronomers, by their own innitiative, are presently calculating the exact orbit for the Brown Dwarf Sagitarius-Oort-Kuiper perturbation, using the StarViewerTeam's work sheets based on Lissauer, Murray and Matese's original drafts. A final report, will be published by Feb 2010. 2.- There are huge scientific evidences concerning to the fact that Cosmic causes and Brown Dwarf are the real causes of Climate change. On July the 10th, Dr. Paul Clark, published on Science.com an article concerning to this matter, and almost 700 scientists signed the minority Report on climate change. It appears the evidence is inferential and based on mathematics. So we must wait until February. I give this a validity rating (from 1 to 10) of 4. SECRET ROGUE PLANET MAY BE HIDING BEHIND NEPTUNE - May 14, 2012 Does Earth have a new friend? An astrophysicist says it's likely that an as-yet undiscovered planet exists on the dark fringes of our solar system, messing with the orbits of celestial bodies in the Kuiper Belt, just beyond Neptune. NEW PLANET FOUND IN OUR SOLAR SYSTEM? - May 11, 2012 Odd orbits of remote objects hint at unseen world, new calculations suggest. Too far out to be easily spotted by telescopes, the potential unseen planet appears to be making its presence felt by disturbing the orbits of so-called Kuiper belt objects, said Rodney Gomes, an astronomer at the National Observatory of Brazil in Rio de Janeiro. ASTRONOMERS DISCOVER DWARF SUN BEYOND PLUTO - Apr 08 2012 We are close to our Sun and within its gravitational influence. So as we travel through space, it appears to us that G1.9 is moving in an ellipse between our furthest planetoid, Pluto, and the edge of our Solar system, near the Oort Cloud.
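A rough, illustrative calculation (not part of the Starviewer claim or the rebuttal above) shows why the "has not moved significantly in 23 years" point made in the February 27, 2011 excerpt is so damaging to the idea of a body genuinely orbiting at 60 to 66 AU: Kepler's third law gives such an object an orbital period of only about 500 years, so over 23 years it should have drifted many degrees across the sky, quite apart from its yearly parallax. A minimal Python sketch of that arithmetic, assuming a roughly circular orbit at 63 AU:

# Rough check: how much should a body at ~63 AU move over 23 years?
# Assumes a near-circular heliocentric orbit; purely illustrative.
import math

a_au = 63.0                           # assumed distance, middle of the claimed 60-66 AU range
period_yr = a_au ** 1.5               # Kepler's third law around the Sun: P[yr] = a[AU]^(3/2)
mean_motion_deg = 360.0 / period_yr   # average heliocentric motion per year, in degrees
drift_23yr = 23 * mean_motion_deg     # drift over the 23-year observing baseline
parallax_deg = math.degrees(math.asin(1.0 / a_au))  # annual parallax half-amplitude

print(f"period ~ {period_yr:.0f} yr")               # about 500 yr
print(f"drift over 23 yr ~ {drift_23yr:.1f} deg")   # about 16-17 degrees
print(f"annual parallax ~ +/- {parallax_deg:.2f} deg")

Either effect is far larger than "has not moved significantly," which is presumably one reason the author rates the claim so poorly.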
NASA EMPLOYEE SPEAKS ABOUT NIBIRU AKA PLANET X - March 29, 2012 It so happened that just two weeks before the recent news, at the Sitchin Reunion in Chicago July 15 – 16, 2005, I reviewed the search for "Planet X" by various astronomers. A significant highlight of that search was the announcement in December 1983 by NASA's Jet Propulsion Laboratory that IRAS (the infra-red telescope) had found a planet, much larger than Earth, moving in the distant heavens in our direction. The announcement – hastily retracted as a "misunderstanding" – prompted the Reagan-Gorbachev meetings and President Reagan's speech at the U.N. about the common danger to Mankind from "an alien planet out there." SPANISH ASTRONOMERS CLAIM DWARF SUN BEYOND PLUTO - February 19, 2010 Of course there is controversy. The Spanish astronomers, who call themselves the "Starviewer Team", must still convince the scientific community that G1.9 is not a supernova, but rather a brown dwarf star inside our Solar System. This is not an easy task. But we asked them to do just that for this article. UPDATE FEBRUARY 19, 2010: -- We patiently waited and monitored the StarViewer Team's web site for the "proof" that they claimed would be forthcoming. Needless to say, it never materialized. Also, the initial popularity of their claim appears to have been nothing more than a way to attract a large viewership. The web site now is full of ridiculous claims, including some satirical stories taken from "The Onion" (a very funny site) which the SV Team promoted as "real." There is no mention of the mathematical validation that was expected with regards to the G1.9 object. Perhaps the validation disproved their theory... perhaps it was never going to be validated by anyone... I think it is safe to take this theory of object G1.9 being a brown dwarf down to ZERO possibility! THERE REALLY COULD BE A GIANT PLANET HIDDEN FAR BEYOND PLUTO - FEB 21, 2012 In recent years, astronomers have discovered a bunch of planets located at least 100 astronomical units (an astronomical unit is the distance from the Sun to the Earth) away from their host stars. These planets are gas giants - they would have to be for us to see them at all - so this is something very different from the dwarf planets like Pluto and Eris discovered in our solar system's Kuiper Belt and beyond. There's almost no chance that these giant planets could have formed as part of their host star's planetary disc, considering their immense distance away. That strongly suggests that these are former rogue planets captured by the star's gravity. Hagai Perets of the Harvard-Smithsonian Center for Astrophysics and Thijs Kouwenhoven at Peking University's Kavli Institute for Astronomy and Astrophysics teamed up to figure out just how often we can expect stars - potentially including those like our own Sun - to capture these giant wandering planets.
ScienceNOW reports their results: ON THE ORIGIN OF PLANETS AT VERY WIDE ORBITS FROM RE-CAPTURE OF FREE FLOATING PLANETS - Fri, 10 Feb 2012 NASA TALKS ABOUT TYCHE "PLANET X" at NEOWISE conference. 29 September 2011 A NASA press conference with some of the team from WISE has described some of the preliminary findings, and how far they've still got to go as they crunch the data available for the infra-red telescope's sky survey. During the press conference, they took a question about Planet X/Nibiru. They ruled out an incoming object, but could not rule out a substantial body in the outer solar system moving in a roughly circular orbit: Also admitted was that WISE is very good at detecting comets and asteroids and such objects that are in our 'backyard' MOON ANOMALY MAY BE DUE TO DARK STAR "In principle, a viable candidate would be a putative trans-Plutonian massive object (PlanetX/Nemesis/Tyche), recently revamped to accommodate certain features of the architecture of the Kuiper belt and of the distribution of the comets in the Oort cloud, since it would cause a non-vanishing long-term variation of the eccentricity. Actually, the values for its mass and distance needed to explain the empirically determined increase of the lunar eccentricity would be highly unrealistic and in contrast with the most recent viable theoretical scenarios for the existence of such a body. For example, a terrestrial-sized body should be located at just 30AU [Astronomical Units], while an object with the mass of Jupiter should be at 200AU." ON THE ANOMALOUS SECULAR INCREASE OF THE ECCENTRICITY OF THE ORBIT OF THE MOON On the other hand, the values for the physical and orbital parameters of such a hypothetical body required to obtain at least the right order of magnitude for e(epsilon) are completely unrealistic: suffice it to say that an Earth-sized planet would be at 30 au, while a Jovian mass would be at 200 au. Thus, the issue of finding a satisfactorily explanation for the anomalous behavior of the Moon's eccentricity remains open. THE PERIHELION PRECESSION OF SATURN, PLANET X/NEMESIS AND MOND We show that the retrograde perihelion precession of Saturn \Delta\dot\varpi, recently estimated by different teams of astronomers by processing ranging data from the Cassini spacecraft and amounting to some milliarcseconds per century, can be explained in terms of a localized, distant body X, not yet directly discovered. From the determination of its tidal parameter K = GM_X/r_X^3 as a function of its ecliptic longitude \lambda_X and latitude \beta_X, we calculate the distance at which X may exist for different values of its mass, ranging from the size of Mars to that of the Sun. The minimum distance would occur for X located perpendicularly to the ecliptic, while the maximum distance is for X lying in the ecliptic. We find for rock-ice planets of the size of Mars and the Earth that they would be at about 80-150 au, respectively, while a Jupiter-sized gaseous giant would be at approximately 1 kau. A typical brown dwarf would be located at about 4 kau, while an object with the mass of the Sun would be at approximately 10 kau, so that it could not be Nemesis for which a solar mass and a heliocentric distance of about 88 kau are predicted. If X was directed towards a specific direction, i.e. that of the Galactic Center, it would mimick the action of a recently proposed form of the External Field Effect (EFE) in the framework of the MOdified Newtonian Dynamics (MOND). 
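The scaling quoted in the lunar-eccentricity abstracts above (an Earth-mass body at about 30 AU versus a Jupiter-mass body at about 200 AU) follows from the tidal parameter K = GM_X/r_X^3 that appears in the Saturn-precession abstract: very different masses produce the same K if placed at the right distances. A minimal numerical check in Python, using only the masses and distances quoted above (this is just a check of the quoted scaling, not a new result):

# Tidal parameter K = G * M / r^3 for the two cases quoted above.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11          # astronomical unit, m
M_EARTH = 5.972e24     # kg
M_JUPITER = 1.898e27   # kg

def tidal_parameter(mass_kg, distance_au):
    """K = G*M/r^3, in s^-2, for a perturber of the given mass and distance."""
    return G * mass_kg / (distance_au * AU) ** 3

k_earth_30au = tidal_parameter(M_EARTH, 30)       # ~4.4e-24 s^-2
k_jupiter_200au = tidal_parameter(M_JUPITER, 200) # ~4.7e-24 s^-2
print(k_earth_30au, k_jupiter_200au)

The two values agree to within roughly 10%, which is why the quoted papers treat "Earth at 30 AU" and "Jupiter at 200 AU" as roughly equivalent (and, in their view, equally unrealistic) candidate perturbers.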
Vulcan is about half Jupiter mass (141 +/- 35 Earth masses), and currently just a tiny bit closer than its aphelion of 448 AU. CORNELL UNIVERSITY "MOON ORBIT IS WRONG" (Implies Planet-X a possibility) - 17 Nov 2011 Moon Orbit Wrong Cornell University - February 24th, 2011 A potentially viable Newtonian candidate would be a trans-Plutonian massive object (Planet X/Nemesis/Tyche) since it, actually, would affect e with a non-vanishing long-term variation. On the other hand, the values for the physical and orbital parameters of such a hypothetical body required to obtain the right order of magnitude for de/dt are completely unrealistic. Moreover, they are in neat disagreement with both the most recent theoretical scenarios envisaging the existence of a distant, planetary-sized body and with the model-independent constraints on them dynamically inferred from planetary motions. FIFTH GIANT PLANET MAY HAVE DWELLED IN OUR SOLAR SYSTEM - November 18, 2011 Within our solar system, an extra giant planet, or possibly two, might once have accompanied Jupiter, Saturn, Neptune and Uranus. HYPOTHESIS: The Akkadian Seal Indicates A Large New Planet In Our Solar System The 4500-year-old Akkadian seal appears to indicate the relative sizes of the Jovian planets compared to our Sun. PROOF: The Akkadian Seal Represents The Planets In Our Solar System. A Mathematical Fit Of The Diameters Of The Jovians And The Sun Against The Logarithm Of Their Masses. This plot was done by a NASA scientist. Note that the 'fit' forms a very good straight line, indicating that the Jovian planets (Jupiter, Saturn, Uranus and Neptune) and the Sun are mathematically related and that their masses relative to the Sun's were apparently known to whoever was the source of the information depicted on the Akkadian seal. See the ANALYSIS OF THE AKKADIAN SEAL for measurement details. 2012 WAKE UP CALL According to Tom Van Flandern, it is entirely possible that Planet X began its career in our solar system as a distant outer planet, which was disturbed from its orbit by the force of a passing dwarf star. This interaction would have caused Planet X to veer into the heart of the solar system, towards a fateful encounter with one of the inner planets. In his book 'Dark Matter, Missing Planets & New Comets', Van Flandern wrote as follows: 'Statistically, a few passing stars would approach within 40 times Pluto's distance of the Sun over the life of the solar system. They would tend to perturb the outermost planets... into planet-crossing orbits. Eventually the crossings would result in close encounters between planets.' According to Van Flandern, this perturbation process was not only feasible but inevitable, given the existence of planets in such distant orbits. Van Flandern noted, however, that once Planet X had been forced inwards, it would suffer repeated encounters with the other planets, eventually leading to its ejection from the solar system. Van Flandern confirmed that such a planet-crossing orbit was highly unstable and unlikely to last for more than 100,000 years, mainly due to the powerful influence of Jupiter, by far the largest planet of the solar system: If Planet X crosses Jupiter's orbit, it is a goner, either by collision with Jupiter or ejection from the solar system, within 100,000 years... The encounters with Jupiter are not merely potential, but inevitable, because of forced precession of the orbit by Jupiter... Jupiter's gravity is so strong that it can eliminate another body in a single close approach.
SEARCH FOR KUIPER-BELT FLYBYS USING PIONEER 10 RADIO DOPPLER DATA - 16 January 1996 J. D. Anderson, G. Giampieri, E. L. Lau (JPL), R. T. Hammond (NDSU) Using coherent radio Doppler data generated by the Deep Space Network (DSN) at S band (wavelength ~13 cm), we have produced five-day averages of radial accelerations acting on the Pioneer 10 spacecraft between 40 and 60 AU. The one-sigma accuracy of the reduced accelerations is 2.5 × 10^-10 m s^-2. Here we develop the theory of weak gravitational interactions of a spacecraft with orbiting bodies, and show that it is feasible to determine the mass and location of an unknown object from the acceleration data, subject to reasonable assumptions. We search the data record for candidate flybys, and apply the theory to one of the best candidates at distance 56 AU, ecliptic longitude 73.9 Deg and latitude 3.1 Deg. SEARCH FOR KUIPER-BELT FLYBYS USING PIONEER 10 RADIO DOPPLER DATA - 12/1995 (same abstract as the January 1996 listing above) PROOF: A Search Was Undertaken For A Brown Dwarf In Our Solar System 50 Billion Miles Away By The Pioneer Spacecraft. A photocopy of the November 1982 Science Digest Article - Mysterious Planet X depicting both planet X and a Brown Dwarf Star in our solar system. Van Flandern (see below): 'Statistically, a few passing stars would approach within 40 times Pluto's distance of the Sun over the life of the solar system. These stars would tend to perturb the orbits of any possible outermost planets (10 to 40 times more distant than Pluto's aphelion) into unstable, planet-crossing orbits. These planets would be forced into orbits that pass through the inner solar system and would then be ejected from our solar system because their orbits would cross the orbits of the other planets.' Van Flandern said this concept was explained in standard celestial mechanics books. Jupiter's gravity is so strong that it can eliminate another body in a single close approach. The safe distance for a planet-like object was 10 × Pluto's aphelion of 49 AU = 490 AU, or about 45.6 billion miles away. This value was rounded off (on the high end) to 50 billion miles in order to specify the maximum range at which a potential new planet-like body could be in our solar system. This is why they were looking for a brown dwarf 50 billion miles away, as depicted in the 1982 Science Digest article shown below. And the image of the object depicted above on the Akkadian seal could well have been the reason the discovery of "an object in our solar system possibly as big as Jupiter" was reported by Thomas O'Toole in the December 30, 1983 Washington Post article depicted below (Mystery Heavenly Body Discovered). KEY EVIDENCE, FROM A CHILDREN'S ENCYCLOPEDIA! One of the more amusing pieces of evidence for the existence of Planet X is a picture from the 1987 New Science and Invention Encyclopedia.
In a section on space probes, the encyclopedia shows the paths of the 2 Pioneer probes and illustrates how the probes were used in the search for more planets. It shows the Earth, the Sun, a dead star (at 50 billion miles or 538 AU) and a tenth planet (at 4.7 billion miles or 50 AU). With the discovery of Eris (AKA Xena and AKA 2003 UB 313), whose orbit varies from 36 to about 97 AU, the image is a reasonable fit to Eris' position. This Vulcan web site carries Vulcan at 41 billion miles, not 50 billion. Nearly all binary stars orbit each other in fairly elliptical orbits. Some of these support a 3:2 comet resonance, as does our dark star Vulcan. See comment two by our orbit analyst. Here, three or four of our nearest ten star systems support such a resonance. Any astronomer skilled in celestial mechanics would be aware of this possibility. So the existence of a 'dead star' in our solar system implies a nominal 40% chance that the inner planets of our solar system would be subject to bombardment by comet swarms in resonant orbits. However, most of these star systems are close together, so the comets in resonant orbits would be rapidly swept out of these solar systems by a variety of means. However, in our solar system, the 'dead star' draws Kuiper belt bodies into the inner solar system. These bodies crumble while rounding the Sun, producing comet swarms. These comet swarms would stay around for several million years. Some of these comets would impact Earth, causing massive natural catastrophes. The above illustration clearly associates the 'tenth planet' with the 'dead star', which in turn indicates Earth's vulnerability to natural comet impact catastrophes. This may be why the existence of 2003 UB 313 (AKA Eris) was kept secret. PROOF: Someone Was Looking For A Jovian Sized Planet With The IRAS Satellite. Photocopies of newspaper articles reporting the discovery of an object possibly as large as Jupiter (Our Vulcan) or a nearby protostar that never got hot enough to become a star (AKA brown dwarf) 50 billion miles away. POSSIBILITY: Is There A Cover-Up Of The Discovery Of A Vulcan Like Object In Our Solar System? Is somebody trying to hide something from us? One must wonder how many other planetary systems about other stars the ancient Akkadians knew about. AN ASTRONOMER'S ANALYSIS OF THE AKKADIAN SEAL - by Tom Van Flandern Referring to Figure 101, p. 205 of Sitchin's "Twelfth Planet": a large star symbol is in the center. It is way too small in diameter relative to the planets; but we might overlook that as artist's license, if only the planets were shown to scale. In summary, the Seal does not, by itself, suggest anything more to an astronomer than an artistic rendition of a star surrounded by planets. There are simply no instances where consecutive identifications of orbs with real planets support one another. Each must be argued ad hoc, and each is problematic. Given the lack of easy recognition of familiar solar system bodies, the extension to unfamiliar ones (based on the Seal alone) must be regarded as an act of pure faith. Perhaps the Akkadian Seal depicts some other planetary system around some other star; but it seems most unlikely to refer to our own solar system. PLANET X, NIBIRU, NEMESIS, DARK STAR NEWS FROM THE PAST - 7/08/10 Back in the 1980s there was a rash of Planet X related articles that were published in various venues. Then the grand one of them all, the 1987 Encyclopedia that shows NASA sending the Pioneers to intercept the "Dead Star" and the "10th Planet".
All of a sudden news of Planet X just dropped off the map. What did the Pioneer Spacecraft find? NEW YORK TIMES June 19th, 1982 Spacecraft May Detect Mystery Body in Space "If it is a dark star type of object, it may be 50 billion miles beyond the known planets" THE WASHINGTON POST December 30th, 1983 Orbiting Eye Reveals Mystery Space Monster "The most fascinating explanation of this mystery body, which is so cold it casts no light and has never been seen by optical telescopes on Earth or in space, is that it is a giant gaseous planet as large as Jupiter and as close to Earth as 50 billion miles." US NEWS & WORLD REPORT Sept 10, 1984 Planet X - Is It Really Out There? "Last year, the infrared astronomical satellite (IRAS), circling in a polar orbit 560 miles from the Earth, detected heat from an object about 50 billion miles away that is now the subject of intense speculation. "All I can say is that we don't know what it is yet," says Gerry Neugebauer, director of the Palomar Observatory for the California Institute of Technology." TOM VAN FLANDERN Thomas C Van Flandern (June 26, 1940 – January 9, 2009) was an American astronomer and author specializing in celestial mechanics. Van Flandern had a career as a professional scientist, but was noted as an outspoken proponent of non-mainstream views related to astronomy, physics, and extra-terrestrial life. He attended Yale University on a scholarship sponsored by the U.S. Naval Observatory (USNO), joining USNO in 1963. Van Flandern was a prominent advocate of the belief that certain geological features seen on Mars, especially the "face at Cydonia", are not of natural origin, but were produced by intelligent extra-terrestrial life, probably the inhabitants of a major planet once located where the asteroid belt presently exists, and which Van Flandern believed had exploded 3.2 million years ago. "We've shown conclusively that at least some of the artifacts on the surface of Mars were artificially produced, and the evidence indicates they were produced approximately 3.2 million years ago, which is when Planet V exploded. Mars was a moon of Planet V, and we speculate that the Builders created the artificial structures as theme parks and advertisements to catch the attention of space tourists from Planet V (much as we may do on our own Moon some day, when lunar tourism becomes prevalent), or perhaps they are museums of some kind. Remember that the Face at Cydonia was located on the original equator of Mars. The Builder's civilization ended 3.2 million years ago. The evidence suggests that the explosion was anticipated, so the Builders may have departed their world, and it produced a massive flood, because Planet V was a water world. It is a coincidence that the face on Mars is hominid, like ours, and the earliest fossil record on Earth of hominids is the "Lucy" fossil from 3.2 million years ago. There have been some claims of earlier hominid fossils, but Lucy is the earliest that is definite. So I leave you with the thought that there may be a grain of truth in The War of the Worlds, with the twist that WE are the Martians.
"Face on Mars" is listed the number four in an astronomers ranking of astronomical pseudo-science topics THERE IS SOMETHING OUT THERE -- part 1 - Posted by Mike Brown on Wednesday, October 20, 2010 THERE'S SOMETHING OUT THERE -- part 2 - Posted by Mike Brown on Wednesday, October 28, 2010 Seven years ago, the moment I first calculated the odd orbit of Sedna and realized it never came anywhere close to any of the planets, it instantly became clear that we astronomers had been missing something all along. Either something large once passed through the outer parts of our solar system and is now long gone, or something large still lurks in a distant corner out there and we haven't found it yet. Our first idea was that perhaps there was an unknown approximately earth- sized planet circling the sun about twice the distance of Neptune. The second possibility that we considered and wrote about was that perhaps a star had passed extremely close to our solar system at some point during the lifetime of the sun. "Extremely close" for a star means something like 20 times beyond the orbit of Neptune, The third possibility was the one that we deemed the most likely. Instead of getting one big kick from an improbably passing star, imagine that Sedna got a lot of really small kicks from many stars passing by not quite as closely. THERE'S SOMETHING OUT THERE -- part 3 - Posted by Mike Brown on Monday, November 29, 2010 Not finding anything else like Sedna was disappointing, of course, but, really, not surprising. Sedna was so far away and moving so slowly that we had almost missed it the first time around. And Sedna's orbit is so elongated that is spends the vast majority of the time even further from the sun than it is now. In fact, most of the time Sedna is so far away from the sun that it would be moving so slowly that we would have missed it entirely. Of Sedna's 12,000 year orbit around the sun,, we would only have been able to detect it for the 200 years when it was the absolute closest to the sun. The Other Theories Each Had Their Own Unique Pattern, We were lucky to have found Sedna to start with. We would need luck to find more. To be continued. And eventually finished. Promise. SITCHEN AND HARRINGTON ROBERT SUTTON HARRINGTON Robert Sutton Harrington (October 21, 1942 – January 23, 1993) was an American astronomer who worked at the United States Naval Observatory (USNO). Harrington became a believer in the existence of a Planet X beyond Pluto and undertook searches for it, with positive results coming from the IRAD probe in 1983. Harrington collaborated initially with T. C. (Tom) Van Flandern.[1] They were both "courted" by Zecharia Sitchin and his followers who believe in a planet Nibiru or Marduk, who cite the research of Harrington and van Flandern as possible collaborating evidence, though no definitive proof of a 10th planet has surfaced to date. GOOGLE, PLANET X BLACKOUT? - Dec 20, 2009 The location of Planet X - Oct. 1988, p. 1476-1478. Harrington's Planet X Forbes' Similar Planet Period: 1,076 yr. Inc.: 52 deg Ecc.: 0.167 Peri: 87 Aph.: 122 Arg.Peri./Year: 115 deg./1702 Long.Asc.N: 247 deg. THE LOCATION OF PLANET X - Astronomical Journal (ISSN 0004-6256), vol. 96, Oct. 1988, p. 1476-1478. Perihelion Epoch T: 6 August 1789 Semi-major a: 101.2 AU Eccentricity e: 0.411 Argument Of Perihelion: 208.5o Argument Of Node: 275.4o Inclination: 32.4o Mass: 4 Earth Masses NIBIRU AND DOZENS OF DEAD ASTRONOMERS/ A COINCIDENCE? - Aug 28, 2012 Have made no accusations or asserted any motives. 
Just the facts. None of these men died from the number one killer of men over 60. I don't know what it means. But it certainly is worthy of note, and the list of dead astronomers is much, much longer than these listed here. We tried to focus on the pairs of astronomers more so than single astronomers, as their deaths would be less probable as a pair. Hey everyone... do you see the attached videos below? No? Where are they? You see... they were deleted by somebody other than us. This is proof we are onto something, proof you are being lied to, proof that evil is all around us. PLANET X AND THE MYSTERIOUS DEATH OF DR. ROBERT HARRINGTON - 22-May-2008 Dr. Robert S. Harrington, The Chief Astronomer Of The U.S. Naval Observatory, Died Before He Could Publicize The Fact That Planet X Is Approaching Our Solar System. In 1991, Dr. Robert S. Harrington, the chief astronomer of the U.S. Naval Observatory, took a puny 8-inch telescope to Black Birch, New Zealand, one of the few viewing points on Earth optimal for sighting Planet X, which he definitively calculated to be approaching from below the ecliptic at an angle of 40 degrees. Dr Harrington says the most remarkable feature predicted for Planet X is that its orbit is tilted 30 degrees away from the ecliptic, the main plane of the solar system, where all previous searches have concentrated. His models also predict a greater distance from the Sun, about 10 billion miles, or between two or three times as distant as Pluto. Furthermore, in publishing Dr. Harrington's obituary, the U.S. Naval Observatory went out of its way to gratuitously lie about Dr. Harrington's final achievement, stating that "in his final years, Dr. Harrington had lost interest in the" [two-century astronomical] "search for Planet X." The obituary had the following to say about Planet X: Considerations on the stability of the solar system led Bob to collaborate with T.C. Van Flandern in studies of the dynamical evolution of its satellites, and to an eventual search for "Planet X", conjectured to lie beyond Pluto and to be responsible for small, unexplained, residuals in the orbits of Uranus and Neptune. Late in his career Bob seemed quite skeptical of such an object, however. Dr. Harrington's colleague in the search for Planet X, Dr. Tom Van Flandern, reversed his affirmative statements about the approach of Planet X and became peculiarly silent on the issue. In Meta Research Bulletin 4:3 (September 1999), he states: "Three more trans-Neptunian objects confirm the presence of a second asteroid belt in the region beyond Neptune. This probably indicates that the hypothetical Planet X is now an asteroid belt rather than an intact planet." DR. ROBERT S. HARRINGTON - PLANET X Nibiru - Mar 2, 2008 NAVAL OBSERVATORY ASTRONOMERS SEARCH FOR THEORETICAL 10TH PLANET - Jan. 11, 1990 Astronomers at the U.S. Naval Observatory are narrowing their solar system search for a 10th planet, a mysterious, phantom giant that has long captured the speculation and interest of stargazers. Speculation about the existence of another planet evolved because something, perhaps a massive object orbiting the sun on the outer edge of the solar system, is giving a gravitational nudge that disrupts the predicted orbits of Uranus and Neptune, said astronomer R.S. Harrington.
"We are still incapable of predicting the location of Uranus," Harrington said Wednesday. "It's clear something is wrong in the outer solar system." But Harrington said calculations later showed the combined mass of Pluto and its moon is about 1,000 times too small to account for the detected, but unexplained, irregularity of the Neptune and Uranus orbits. (0.00218 × 1000 = 2.18) Planet X, Harrington said, is thought to be three to five times larger than Earth and moving in an orbit about three times farther from the sun than Neptune or Pluto. That would make the phantom planet about as visible as Pluto, a planet seen only by sophisticated telescopes. A planet in such a distant orbit would take about 1,000 years to circle the sun. PROOF?: A Cover-Up Exposed? In a recent unrecorded conversation with us, not yet reported on our site, Henry confirmed the existence of the "Dark Star", and that its reality had been deduced by American military astrophysicists because of the need to account for certain gravitational anomalies that had been observed. (He mentioned the Pioneer spacecraft, for instance.) This is very important: the "Dark Star" is fully known and understood in the "black world". When he told us this, we immediately realised how extraordinarily important this was. Right. See: IS IRAS OBJECT 1732+239 NEAR VULCAN? - CIRCUMSTANTIAL EVIDENCE "Henry went on to explain that this was linked to Dan Burisch's testimony and to the 2012 problem. Again, he was citing real-life feet-on-the-ground "black physics", and NOT esoteric knowledge. The Dark Star (he did not mention whether it had been named, and we neglected to ask) has an extremely elliptical orbit." SUMMARY: VULCAN'S NEW ORBITAL PARAMETERS It's in the C"EV orbit. THE 2006(7) AND 2012 STRIKE DATES "The problems started a few years ago when the current approach of the Dark Star began to create resonance effects on Sol, our own sun. Kerry and I presumed these resonance effects are electromagnetic in nature rather than purely gravitational, but again we didn't verify this. This, crucially, is what is causing the current increase in solar activity, and the rapid heating up not only of Earth (as in global warming), but of every planet in the system." ANALYSIS - PHYSICAL DATA IMPACTING CANDIDATE VULCAN ORBITS It rotates the cores of the planets, causing a heat pulse to arrive some years later. For Earth this happened in 1970, and the big heat pulse finally hit the oceans circa 2005, causing lots of hurricanes, rain, etc. It's now approaching, and NOAA knows that this is the indirect cause (triggering increased solar activity through complex forms of resonance) of the heating of all the planets, not just ours. They factor this data into all their supercomputer weather forecasting calculations. [Note: the big question is why all this has been hushed up. Henry can speculate as well as we can - bad news on the way? - but he did not know for certain.] See the above hyperlink. It passed aphelion in 1970 and is now inbound. But it never gets near the known planets. The "bad news" is that it forms comet swarms in a 3:2 resonant orbit, and we are due for a passage of several clusters of this swarm this century. Also See: NEAR MISSES AND POSSIBLE IMPACT EVENTS SUMMARY RED PLANET WARMING High-resolution images snapped by NASA's Mars Global Surveyor show that levels of frozen water and carbon dioxide at the Red Planet's poles have dwindled dramatically over a single Martian year.
Mars, like Earth, is warming from the inside due to a shift of its primordial black hole core as Vulcan passed aphelion circa 1970. MARS EMERGING FROM ICE AGE, DATA SUGGEST AND FINALLY: ASTRONOMERS DETECT SUDDEN CLIMATE CHANGE ON PLUTO In the last 14 years, one or more changes have occurred. "Pluto's atmosphere is undergoing global cooling, while other data indicates that the surface seems to be getting slightly warmer. SUV'S ON JUPITER? Are humans responsible for climate change on the outer reaches of the solar system, or is it the sun? Or has Vulcan passing aphelion influenced the core related heating of all the solar planets? THE WHOLE SOLAR SYSTEM IS UNDERGOING GLOBAL WARMING. This issue is connected with the Roswell catastrophe. Maybe Right See: CURRENT EVIDENCE OF SEPTIMUS' AND VULCAN'S MASS AND ORBIT See Figure 5A. Extra-terrestrial Alien Description Of Our Alphabet And Solar System. It is connected with a human/alien meeting case that happened 6 October 1974. COMET HONDA? April 07, 2011, NASA ADMITS POSSIBILITY OF COMPANION BROWN DWARF SOMETHING IN SAGITTARIUS Then there is talk that our Sun has a binary partner beyond Pluto, also in late tropical Sagittarius, with a highly inclined orbit in the thousands of years. Two of the main proponents of this are researcher Andy Lloyd , author of The Dark Star, and Walter Cruttenden, author of Lost Star of Myth and Time and director of the Binary Research Institute . Each has his own particular twist on the subject, but both basically agree on the location of this binary and that it is very large and orbits in the thousands of years. Then there is Barry Warmkessel's Saturn-sized Vulcan planet postulated to have an extra-Plutonian period of just under 5000 years, and which is also currently located in late tropical Sagittarius, based on cometary orbits, IRAS obsevations, and various metaphysical and historical criteria. What all these sources have in common is that there is mounting evidence of some very influential object or source in the heavenly direction of late tropical Sagittarius that is having a significant impact on life here on Earth, physically and in every other sense. Whether it is Vulcan, the Sun's binary, or the galactic center remains to be seen, but this author also has noted the "Sagittarius effect" through the study of numerous charts, horoscopes, and historical events or epochs. VULCAN-DARK STAR IN THE SIGNS 12000 BC TO 2400 AD SPACE/PLANET X GALACTIC CENTER In the Equatorial coordinate system they are: RA 17h 45m 40.04s, Dec -29deg 00min 28.1sec (J2000epoch). Galactic Center Right Ascension: 266.4168 degrees Galactic Center Declination: -29.0078 degrees Date: 2000.0000 AD Vulcan Right Ascension: 264.185738 degrees Vulcan Declination: 23.809304 degrees ENTER PLANET X - November 1, 2003 Now that we have your attention, we should also note that there are some comparatively rationally-based websites and reports on Planet X as well. (The "xkbo" reference, incidentally, is "an Unknown Kuiper Belt Object", the latter being the supposed source of solar system comets and miscellaneous, careening objects.) More importantly at this juncture, do I understand correctly that what IRAS spotted in 1983 was a metaphysical planet? So that means that what NASA officials mistook as a heavenly body in the direction of the constellation Orion, which they claimed was possibly as large as Jupiter, and close enough to Earth to be part of this solar system, was a mythological planet? 
And so the mysterious heavenly orb which mathematicians now predict may be three times the size of Jupiter, with a highly elliptical orbit that runs opposite the other nine planets in the solar system, and a retrograde orbit inclined at 120 degrees, is just the replay of an ancient event in the solar system? As the famous line in Oliver Stone's JFK goes, "people, we're through the looking glass here." MORE ON PLANET-X Van Flandern wrote: "If Planet X crosses Jupiter's orbit, it is a goner, either by collision with Jupiter or ejection from the solar system, within 100,000 years... The encounters with Jupiter are not merely potential, but inevitable, because of forced precession of the orbit by Jupiter... Jupiter's gravity is so strong that it can eliminate another body in a single close approach."* Furthermore, since the Earth's orbit was almost perfectly circular, Van Flandern ruled out any possibility that Earth had suffered a major catastrophic collision. And thus he negated the idea that the Babylonian Epic of Creation, as decoded by Sitchin, was a historical record of events in our solar system. This would appear to doom Sitchin's "twelfth planet" that ancient astronomers had referred to using the names "Marduk" and "Nibiru". But notice that it does not doom the possibility that Tiamat, Marduk and Nibiru are comet swarms that could "appear as large as a planet." While these would eventually be ejected from our solar system or collide with the known planets, at other times new ones would be drawn in from the Kuiper belt by Vulcan. Vulcan never gets closer to the Sun than 134 AU, about 2.7 times as far as Pluto's far point is from the Sun. 'Statistically, a few passing stars would approach within 40 times (2000 AU) Pluto's distance of the Sun over the life of the solar system. They would tend to perturb the outermost planets (10 to 40 times more distant than Pluto) into planet-crossing orbits. Eventually the crossings would result in close encounters between planets.' Vulcan never gets farther from the Sun than 450 AU, about 9 times as distant as Pluto's far point is from the Sun. Van Flandern noted, however, that once Planet X had been forced inwards, it would suffer repeated encounters with the other planets, eventually leading to its ejection from the solar system. Van Flandern confirmed that such a planet-crossing orbit was highly unstable and unlikely to last for more than 100,000 years, mainly due to the powerful influence of Jupiter, by far the largest planet of the solar system: THE MYSTERY OF PLANET X Van Flandern (whose astronomical specialty is celestial mechanics) wrote: "Statistically, a few passing stars would approach within 40 times Pluto's distance of the Sun (2000 AU) over the life of the solar system. They would tend to perturb the outermost planets... into planet-crossing orbits. Eventually the crossings would result in close encounters between planets." In a later communication (03/28/2003) Van Flandern further clarified this comment. "Orbits at large distances are unstable over the solar system's lifetime. (This is not just my opinion, but standard dynamical astronomy based on the statistics of passing stars.) So rare cases where stars are now seen at such distances must be of relatively recent origin. These are usually either systems that have recently escaped from open clusters, or multiple-star systems that have experienced a close encounter, leading to near-ejection of a member star. Less likely are transfer captures from other passing stars.
There are also many cases of recent star system formation. For example, the Orion Nebula seems to be an active stellar nursery. All we can be sure of is that it is very unlikely such systems have been around for 4.5 billion years." Upon asking if there was any independent evidence that the Alpha Centauri trinary was a relatively new stellar system, Van Flandern on 03/28/2003 wrote: "None that I know of. The only recognized indicator of age would be metallicity in the stars. (High metallicity would indicate relative youth.) But I have no knowledge of the metallicity index for the Alpha Centauri system. Otherwise, the presence of a distant companion is the only clue. (But you are obviously looking for some *independent* evidence of youth.)." Vulcan's orbit, as described in this web site, falls within about a fifth of this distance. Therefore its orbit would not likely be perturbed by the passage of such a star. However, if stars pass within 2000 AU, many will also pass within ten or twenty times this distance. This would appear to doom Matese's "THE MASSIVE SOLAR COMPANION" discussed below. Van Flandern wrote: "If Planet X crosses Jupiter's orbit, it is a goner, either by collision with Jupiter or ejection from the solar system, within 100,000 years... The encounters with Jupiter are not merely potential, but inevitable, because of forced precession of the orbit by Jupiter... Jupiter's gravity is so strong that it can eliminate another body in a single close approach." This would appear to doom Sitchin's "twelfth planet" that ancient astronomers had referred to using the names "Marduk" and "Nibiru". PLANET X, 1841-1992 Tom van Flandern examined the positions of Uranus and Neptune in the 1970s. The calculated orbit of Neptune fit observations only for a few years, and then started to drift away. Uranus' orbit fit the observations during one revolution but not during the previous revolution. In 1976 Tom van Flandern became convinced that there was a tenth planet. After the discovery of Charon in 1978 showed the mass of Pluto to be much smaller than expected, van Flandern convinced his USNO colleague Robert S. Harrington of the existence of this tenth planet. They started to collaborate by investigating the Neptunian satellite system. Soon their views diverged. van Flandern thought the tenth planet had formed beyond Neptune's orbit, while Harrington believed it had formed between the orbits of Uranus and Neptune. van Flandern thought more data was needed, such as an improved mass for Neptune furnished by Voyager 2. Harrington started to search for the planet by brute force -- he started in 1979, and by 1987 he had still not found any planet. van Flandern and Harrington suggested that the tenth planet might be near aphelion in a highly elliptical orbit. If the planet is dark, it might be as faint as magnitude 16-17, suggests van Flandern.
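The "magnitude 16-17" figure attributed to van Flandern just above can be made plausible with a standard back-of-the-envelope scaling (this is an illustration, not his actual calculation): a distant body seen in reflected sunlight dims roughly as 1/r^4, so a planet like Neptune (about magnitude 7.8 at roughly 30 AU) fades by about 10*log10(r/30 AU) magnitudes when moved outward, with further dimming if it is smaller or darker than Neptune. A minimal Python sketch under those assumptions:

# Illustrative only: how faint would a Neptune-like planet be near aphelion
# of a highly elliptical orbit?  Assumes reflected-light brightness ~ 1/r^4
# and uses Neptune (apparent magnitude ~7.8 at ~30 AU) as the reference point.
import math

def apparent_mag(r_au, ref_mag=7.8, ref_au=30.0):
    return ref_mag + 10.0 * math.log10(r_au / ref_au)

for r in (100, 150, 200):
    print(r, "AU ->", round(apparent_mag(r), 1))
# ~13.0 at 100 AU, ~14.8 at 150 AU, ~16.0 at 200 AU, before any size or albedo penalty

A body somewhat smaller or darker than Neptune, sitting at 150-200 AU, therefore lands naturally in the magnitude 16-17 range quoted above.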
Voyager program - the space probes launched in 1977. Pioneer program: Pioneer 10 (Pioneer F) – Jupiter, interstellar medium, launched March 1972; Pioneer 11 (Pioneer G) – Jupiter, Saturn, interstellar medium, launched April 1973. This site concludes that Vulcan is of magnitude 20.5 - 21.1, passed aphelion circa 1970, and that its orbit has an eccentricity of about 0.54, placing its aphelion at 41.6 billion miles. TIME LINE OF RESEARCH REGARDING THE "SEARCH FOR PLANET X" The Sumerian Descriptions of our solar system. ABADDON AND RETROGRADE ORBIT In 1983 a large planet outside our known solar system was discovered. The Washington Post, The New York Times, and other major news services carried the story. Unlike our known planets that revolve around the sun in a counter-clockwise and circular direction, Planet X (as it was labeled) is in a retrograde (clockwise) and longer elliptical orbit about our sun. It is estimated by some that its perigee (closest approach to the sun) occurs every 5,000 years or so. The effects of its intrusion into our solar system could be cataclysmic to all the planets, including Earth. POSSIBLY AS LARGE AS JUPITER; MYSTERY HEAVENLY BODY DISCOVERED A heavenly body possibly as large as the giant planet Jupiter and possibly so close to Earth that it would be part of this solar system has been found in the direction of the constellation Orion by an orbiting telescope aboard the U.S. infrared astronomical satellite. The 1999 Paper estimates that this object is just a little larger than the size of Jupiter by two methods independent of the IRAS data considered in this Washington Post article. The direction of the object is considered disinformation to mislead other astronomers. When IRAS scientists first saw the mystery body and calculated that it could be as close as 50 trillion miles, there was some speculation that it might be moving toward Earth. The 1997 Paper estimates that Vulcan passed aphelion around 1970 by a method independent of this data and consistent with this conclusion by the IRAS team. NEWS ARCHIVES (IRAS data) New York Times January 30, 1983 The Washington Post, 31-Dec-1983, (a front page story) ARTICLE: MYSTERY HEAVENLY BODY DISCOVERED US News & World Report PLANET X - IS IT REALLY OUT THERE? Sept 10, 1984 Last year, the infrared astronomical satellite (IRAS), circling in a polar orbit 560 miles from the Earth, detected heat from an object about 50 billion miles away that is now the subject of intense speculation. This is a nice summary of material that found its way into the media circa the mid-eighties concerning Planet X. The Vulcan web site suggests that one of the objects found by the IRAS system could be Vulcan (IRAS object 1732+239), and according to Vulcan's orbit analysis it is now near its far point, about 41.66 billion miles away. NIBIRU FIRST SPOTTED 1981 BY IRAS IRAS was not launched until January 1983. CNN NIBIRU NEWS UPDATE 2011 - Aug 14, 2011 WHERE'S TYCHE, THE 10TH? 9TH PLANET? GETTING THE FULL STORY - At the close of the interview, Ned Wright of WISE summed up the current situation in one sentence: "No, we have not found a new planet." ASTRONOMERS DOUBT GIANT PLANET 'TYCHE' EXISTS IN OUR SOLAR SYSTEM - 15 February 2011 UP TELESCOPE! SEARCH BEGINS FOR GIANT NEW PLANET - 13 February 2011 Tyche may be bigger than Jupiter and orbit at the outer edge of the solar system. COMMENT The Oort cloud is a hypothesis.
As there have only been 4 objects found which are thought to be in it, it's hard to know what effect Tyche has had on the other several trillion objects that are supposed to make it up (the theory being that it knocks at least some of them into comet behavior). But if they both do exist, it seems unreasonable to suppose such a massive planet passes through the Oort cloud without clearing a path. That would be the biggest surprise of all, beating aliens by a long shot. NASA MAY SOON CONFIRM ORBIT OF GIANT PLANET LURKING BEYOND PLUTO - February 15, 2011 No, we don't believe this is a marauding death star, but it could rather be the long sought after missing brown-dwarf type planetoid that may be lurking beyond Pluto, which could account for the gravitational anomalies in the 9th planet's orbit. Tyche's only threat to the planet could be its ability to gravitationally dislodge comets from regions of the Oort cloud and hurl them in the direction of Earth. NIBIRU - ARMAGEDDON PLANET OR ASTRONOMICAL BALONEY? - Oct 1, 2008 LARGEST PLANET IN THE SOLAR SYSTEM COULD BE ABOUT TO BE DISCOVERED - AND IT'S UP TO FOUR TIMES THE SIZE OF JUPITER - 14th February 2011 'If it does, [fellow astrophysicist Prof John Matese] and I will be doing cartwheels. And that's not easy at our age.' IS ANYONE EVEN LOOKING? Like the government would want people to know they are in danger from impacts from a passing comet swarm! One might expect that the astronomical community would be excited at the prospect of discovering this planet/sub-brown dwarf in the Oort Cloud. On the contrary, this subject lies on the very fringe of astronomy, and my own speculations are still more 'out there' than Murray and Matese! Planet X is sweeping out the Kuiper Belt - Official! ""There's something funny going on out there." Marc Buie of the Lowell Observatory in Arizona is talking about a strange feature at the far edge of our solar system beyond Pluto, among the swarm of small worlds called the Kuiper Belt. It's a wild, uncharted place out there, teeming with icy celestial bodies that may give us essential clues to how the planets formed. It may even be a breeding ground for life. But what's intriguing Buie at the moment is the very edge, about 50 times further out from the Sun than the Earth's orbit. Here, at the "Kuiper Cliff", the number of astronomical objects drops off precipitously. Buie won't be drawn too far but, when pressed, he speaks of the possibility that some "massive object" has swept the zone clean of debris..." [Heather Couper and Nigel Henbest (6)] Brown Dwarf at the Right Distance Only 12 light years away, the star Epsilon Indi has been found to have a surprise brown dwarf companion, which was spotted by examining various photographic plates of the star's immediate vicinity. The star experienced no tell-tale wobble because the brown dwarf is orbiting at a whopping 1500 Astronomical Units. This is a promising development because I think it likely that the Sun's own smaller and darker companion achieves aphelion between 500 and 2000 AU. Given the evidence of Epsilon Indi B, this does not appear to be all that unreasonable. Vulcan is half the size of Jupiter and 448 AU away (in 1970). TIGHTENING OUR KUIPER BELT: from the edge of the solar system come hints of a disrupted youth - Out There Neptune's orbit, a nearly circular ellipse some three billion miles away from the Sun, traces the Kuiper Belt's inner edge.
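Because the excerpts collected on this page switch freely between astronomical units and billions of miles, a small conversion helper makes it easier to see that the various figures are the same numbers in different units (1 AU is about 93 million miles). A quick Python check, purely for the reader's convenience; the specific values are the ones quoted elsewhere on this page:

# 1 AU ~ 92.96 million miles; convert a few of the figures quoted on this page.
MILES_PER_AU = 92.96e6

def au_to_billion_miles(au):
    return au * MILES_PER_AU / 1e9

def billion_miles_to_au(billion_miles):
    return billion_miles * 1e9 / MILES_PER_AU

print(au_to_billion_miles(448))   # Vulcan's stated distance:            ~41.6 billion miles
print(au_to_billion_miles(490))   # 10 x Pluto's aphelion:               ~45.6 billion miles
print(billion_miles_to_au(50))    # the "50 billion miles" search range: ~538 AU
print(billion_miles_to_au(4.7))   # the encyclopedia's "tenth planet":   ~51 AU
print(billion_miles_to_au(3))     # Neptune's distance:                  ~32 AU

These reproduce the pairings quoted above and below: 448 AU and 41.6 billion miles, 490 AU and 45.6 billion miles, 50 billion miles and 538 AU, 4.7 billion miles and roughly 50 AU.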
PLANETS PLAN BOOSTS TALLY TO 12 Charon is currently described as a moon of Pluto, but because of its size some experts consider it a twin planet. DINKY PLUTO LOSES ITS STATUS AS PLANET Pluto and objects like it will be known as "dwarf planets," which raised some thorny questions about semantics: If a raincoat is still a coat, and a cell phone is still a phone, why isn't a dwarf planet still a planet? Pluto and UB313 are planets of Vulcan, a brown dwarf star. DWARF PLANET ERIS UPDATE – 10/30/11 CHAOS AND STRIFE BEHIND PLUTO'S DEMISE Eris was officially named by the International Astronomical Union on Wednesday. It had previously been known as 2003 UB313. Astronomers last month voted to shrink the solar system to just eight planets and downgrade Pluto to a "dwarf planet". That category also now includes Eris and the asteroid Ceres. After Pluto lost its planetary status, hundreds of scientists circulated a petition protesting against the decision. In mythology, Eris caused a quarrel among goddesses that led to the Trojan War. Eris' moon has also received a formal name, Dysnomia, the daughter of Eris known as the spirit of lawlessness. Eris, which measures about 115km wider than Pluto, is the farthest known object in the solar system at 14.5 billion km from the Sun. It is also the third brightest object in the Kuiper belt, a disc of icy debris beyond the orbit of Neptune. COPERNICUS SMILED A real revolution is afoot in planetary science. The first shot was fired in 1930, with the discovery of Pluto, but almost no one realized its import. The second and third shots came in the late 1970s, with the discovery of distant objects called Chiron and Charon, but again, few recognized what they would portend. Rapid-fire volleys began in the 1990s, as myriad discoveries of icy bodies 100s to well over 1000 kilometers across, in the Kuiper Belt just beyond Neptune, became an observational reality. But it was only this year, with the recently announced discovery of 2003 UB313 - a world larger than Pluto - that we have heard the equivalent of the American Revolution's "shot heard round the world." A new theory of planet formation is needed, and here is one. The Astro-Metric Concept 17 PLANETS? ASTRONOMERS' HEADS SPINNING The discovery of new objects in the icy junkyard called the Kuiper Belt forces science to rethink the definition of a planet. BEYOND THE KUIPER BELT: ASTRONOMERS ANNOUNCE DISCOVERY OF FARTHEST OBJECT ORBITING THE SUN Even at its closest, Sedna comes no nearer to the Sun than 76 astronomical units (AU), each AU equaling the average distance of the Earth from the Sun. In contrast, says Brown, the Kuiper Belt has a well defined outer edge at about 55 AU. Furthermore Sedna's elongated orbit takes it far beyond any known KBO, to a distance of 900 AU from the Sun. GEOCENTRIC EPHEMERIS OF NEW MINOR PLANET SEDNA, 10 DAY STEPS Ephemeris of Sedna, asteroid number 90377. STRANGE NEW OBJECT FOUND AT EDGE OF SOLAR SYSTEM A large object has been found beyond Pluto travelling in an orbit tilted by 47 degrees to most other bodies in the solar system. Astronomers are at a loss to explain why the object's orbit is so off-kilter while being almost circular. But at 47 degrees, 2004 XR190's orbit is one of the most tilted, or inclined, of any Kuiper Belt Object known. 2004 XR190, however, follows a nearly circular path. And it is too distant to have come into direct contact with Neptune, traveling between 52 and 62 AU from the Sun.
Its orbit is also too circular - and too small - to have been tilted by a passing star, says Allen. He points out that this object was found when it happened to be passing through the plane of the solar system - where it spends just 2% of its orbit. That suggests many more such objects remain undiscovered, tilted at orbits where most surveys do not search for them. He ventures another possible explanation - that the Sun had a twin and that both stars followed circular orbits around each other. "That could excite inclinations without exciting the eccentricities," he says. "However, this idea creates more problems than it solves, by far." SCENARIOS FOR THE ORIGIN OF THE ORBITS OF THE TRANS-NEPTUNIAN OBJECTS 2000 CR105 AND 2003 VB12 (SEDNA) - The Astronomical Journal, volume 128 (2004), pages 2564-2576 In this paper, we explore five seemingly promising mechanisms to explain the origin of the orbits of these peculiar objects: (1) the passage of Neptune through a high-eccentricity phase, (2) the past existence of massive planetary embryos in the Kuiper belt or the scattered disk, (3) the presence of a massive trans-Neptunian disk at early epochs that perturbed highly inclined scattered-disk objects, (4) encounters with other stars that perturbed the orbits of some of the solar system's trans-Neptunian planetesimals, and (5) the capture of extrasolar planetesimals from low-mass stars or brown dwarfs encountering the Sun. Of all these mechanisms, the ones giving the most satisfactory results are those related to the passage of stars (4 and 5). Vulcan is just such a low-mass or brown dwarf star, and it is still in orbit about the Sun. DID OUR SUN CAPTURE ALIEN WORLDS? Kenyon and Bromley suggest that the near-collision occurred when our Sun was at least 30 million years old, and probably no more than 200 million years old. A fly-by distance of 150-200 A.U. would be close enough to disrupt the outer Kuiper Belt without affecting the inner planets. Vulcan ranges from 135 AU to 448 AU. According to the simulations, the passing star's gravity would sweep clear the outer solar system beyond about 50 A.U., even as our Sun's gravity pulled some of the alien planetoids into its grasp. The model explains both the orbit of Sedna and the observed sharp outer edge of our Kuiper Belt, where few objects reside beyond 50 A.U. Kenyon and Bromley's simulations indicate that thousands or possibly millions of alien Kuiper Belt Objects were stripped from the passing star. However, none have yet been positively identified. Sedna is probably homegrown, not captured. Among the known Kuiper Belt Objects, an icy rock dubbed 2000 CR105 is the best candidate for capture given its unusually elliptical and highly inclined orbit. But only the detection of objects with orbits inclined more than 40 degrees from the plane of the solar system will clinch the case for the presence of extrasolar planets in our backyard. Like 2003 UB313's 44° orbital inclination. 'MINI SOLAR SYSTEM' DISCOVERED Scientists found a tiny brown dwarf, or failed star, less than one hundredth the mass of the sun, surrounded by what appears to be a disk of dust and gas. ALIEN TREASURES IN OUR BACKYARD Some objects in our solar system formed around another star. How did these adopted worlds join our solar family? They arrived through an interstellar trade that took place more than 4 billion years ago when a wayward star brushed past our solar system.
According to calculations made by Kenyon and astronomer Benjamin Bromley (University of Utah) and published in the Dec. 2, 2004, Nature, the Sun's gravity plucked asteroid-sized objects from the visiting star. At the same time, the star pulled material from the outer reaches of our solar system into its grasp. STAND BACK CSI, THE ASTRONOMERS OF CFA MAY HAVE SOLVED A MYSTERY OF COSMIC PROPORTIONS. The computer models indicate thousands, possibly millions, of alien objects were stripped from the passing star. At this point, none has yet been identified. During the simulations, the Sun captures up to one-third of the star's objects, adding them to the Kuiper Belt. In fact, the models provide a 10 percent chance that Sedna was captured during the flyby. "It's possible that some of the objects in our solar system actually formed around another star," Kenyon says. The passing star could have pulled objects like Sedna out of their circular orbits and also spun away millions of KBOs. This would have trimmed the outer edge of the Kuiper Belt to 50 AU from the Sun while leaving some of the passing star's own objects behind. "There may not have been an equal exchange, but there was certainly an exchange," says Bromley. Kenyon, Bromley, and fellow astronomers will proceed to search for objects with highly inclined orbits. Because only captured objects have orbits inclined more than 40o, detection of such objects would confirm the presence of foreign objects in our solar system - like 2003 UB313's 44o orbital inclination. EARTH'S SOLAR SYSTEM SHAPED BY BRUSH WITH STAR, ASTRONOMERS SAY The new computer model shows how young planet-sized objects with circular orbits around the Sun might have been gravitationally slung onto elongated paths, putting them too far away to spot with current technology. Such an interaction might also have caused a sharp cutoff detected at the outer edge of the Kuiper Belt, a region of icy objects beyond Neptune. Our most powerful detection techniques are capable of detecting Earth-sized planet 70 AU from the Sun, a Jupiter-sized planet up to 120 AU away (neglecting its gravitational effects on the Sun). Of course, the sky is very big and the most powerful telescopes can only look at a very tiny fraction of it at a time. Pluto, for comparison, is around 30 AU away at the moment. ASTRONOMERS FIND MOON OF 10TH PLANET By determining the moon's distance and orbit around Xena, scientists can calculate how heavy Xena is. The faster a moon goes around a planet, the more massive a planet is. The International Astronomical Union, a group of scientists responsible for naming planets, is deciding on formal names for Xena and Gabrielle. DISCOVERY OF GABRIELLE More observations are planned to calculate the orbit of Gabrielle around Xena, which appears to have an orbital period of about 14 days. From the orbital period and semi-major axis (approximately the average distance between Xena and Gabrielle), it is possible to determine the mass of Xena and confirm that it is more massive than Pluto, 2003 UB313, THE 10TH PLANET, HAS A MOON! Right now we are not certain how big the moon is, but we can make some guesses based on how much light it reflects. We know that it is about 60 times fainter than the planet, suggesting that it is perhaps 8 times smaller in diameter than the planet. Interestingly, the planet-moon system appears similar to the Earth- Moon system, except reduced in scale by a factor of about 5-10. Xena is about 5 times smaller than the Earth. 
Gabrielle is about 8 times smaller than the Moon. And the two are separated by a distance that is about 10 times smaller than the Earth-Moon separation. Not a perfect match but awfully close. If Xena is 0.01 earth masses, it likely is not the tenth planet (sometimes called Septimus-B; see Table 8A in the ANALYSIS OF THE AKKADIAN SEAL) By comparison, Mercury is 0.045, Venus is 0.82 and Mars is 0.108 and Pluto 0.0025 earth mass. NEW "PLANET" IS LARGER THAN PLUTO Claims that the Solar System has a tenth planet are bolstered by the finding by a group lead by Bonn astrophysicists that this alleged planet, announced last summer and tentatively named 2003 UB313, is bigger than Pluto. By measuring its thermal emission, the scientists were able to determine a diameter of about 3000 km, which makes it 700 km larger than Pluto and thereby marks it as the largest solar system object found since the discovery of Neptune in 1846 (Nature, 2 February 2006). FAREWELL PLUTO? The discovery of a new planet in our Solar System could have an unintended consequence - the elimination of Pluto in the list of planets everyone has in their heads. Is it time to wave this distant, dark piece of rock farewell? Pluto is a planet spawned by Vulcan, so it is different than the first eight that were spawned by the Sun. REVIEW OF SOLAR SYSTEM MAY LEAVE PLUTO OUT IN THE COLD Along with Xena, the nickname for an object known officially as 2003 UB313, bodies called Sedna and Quaoar (pronounced kwa-whar) have been identified at the edge of the solar system. Although both are smaller than the smallest official planet, many astronomers believe that these all belong to a similar category of overgrown, icy asteroids in the Kuiper Belt beyond Neptune. it would be better to call Pluto, Xena and Sedna "ice dwarfs" than "trans- Neptunian planets" because it is more descriptive. COMMENT ONE BY OUR ORBIT ANALYST Hazardous comets must have a perihelion inside the Earth's orbit. The comets and Vulcan, in our studies, have roughly the same aphelion distance from the Sun. The comets have a 3 (comet's orbit period):2 (Vulcan's orbit period) ratio with Vulcan. As pointed out elsewhere Pluto and Neptune have approximately a 2 (Pluto's period):3(Neptune's period) ratio but the geometry is very different in that case. Orbit period is determined by semi-major axis. We could have different orbit eccentricities. Our current best Vulcan orbit has a period of about 5000 years (note: the impact data is pushing our 4969 +30/-23 years Vulcan period upward toward the 5000 years value). The orbit's eccentricity gives us aphelion of about 450 AU. Different Vulcan orbit eccentricities can produce hazardous comets in a 2:3 orbit period ratio as will be shown latter. Our current best orbit has an eccentricity of about 0.537. The 2:3 resonance is obtained for comet orbits where the encounters occur at points before and after Vulcan aphelion. Our orbit analysis is sound. We solved the problem as a deterministic case. We did not have redundant data. Granted, Vulcan's orbit is based on a simple two body model. We have Quinn's long term Solar system data that allowed us to do reasonable DOM (Dawn Of Mankind) calculations, and we have taken into account parallax. The nature of the geometry was such that an initial orbit could be obtained without considering parallax. Then the results were refined. The data provided clearly shows that the orbital solutions agree with the assumed input data: Time, RA and DC of the Buddha point as observed from Earth. 
Time, RA and DC of the Christ point
Time, RA and DC of the IRAS point
Time of Vulcan aphelion passage
Our earlier work was based on a fixed obliquity of the ecliptic and assumed a constant precession of the equinoxes. The eccentricity and semi-major axis of the Earth's orbit were assumed to remain constant. The use of Quinn's data for later work and the incorporation of parallax corrections resulted in minor changes to the Vulcan orbit parameters for the given set of input data. Quinn's data allowed us to make projections going back more than 50,000 years which should be accurate to within a day! All of the Earth's orbit parameters are assumed to change with the exception of semi-major axis. Quinn's data shows that semi-major axis, and hence anomalistic period, vary insignificantly over our time frame of interest. Quinn's data remains constant to about 8 or 9 significant figures. Of course, we cannot really use Julian Day Number 50,000 years back in time. The Earth's spin rate does wander somewhat and this leads to small errors in dating which we ignore. Referencing to the Equinox for any year is as good as Quinn's data. Quinn's data has allowed us to perform all of the necessary coordinate transformations needed to reference our results properly. A cursory comparison with Naval Observatory data for the B1950 and J2000 systems shows good agreement at this time. The reason we looked for other data was that the Naval Observatory data, while very good near term, does not provide adequate formulas for propagating backwards in time. I am not aware that anyone else in the World, not even Quinn, has used his data in a way to produce the kind of accurate reference system we have produced. I am really proud of that achievement. It is the one piece of our work that has general applicability. The comet orbits were derived from a planar three-body simulation which was intended to show the essence of the problem without getting into prohibitively long calculations. The results show that approximately constant geometry can be maintained for over 20,000 years. That is over six comet orbit periods. The close encounter with Vulcan occurs every third comet period. A number of patterns are possible: one group has two short periods followed by a long one and the other group has two long periods followed by a short one. The sum of the three periods is exactly equal to two Vulcan orbit periods. The comet swarms are the concern for Earth (not Vulcan itself). Major past strike events have been demonstrated to correlate well with certain comet perihelion passages.
COMMENT TWO BY OUR ORBIT ANALYST The data we present here is from a simple analytical model for which comet eccentricity is assumed to be 0.9985 and Vulcan eccentricity values are obtained from the solutions of a quadratic equation. This model yields an offset angle of 6.68° for our current best Vulcan orbit. Our comet studies were done without the benefit of this analysis and employed an initial offset of 8°. Only certain eccentricities will support a 3 (comet period):2 (Vulcan period) resonance orbit. The following eccentricities are calculated from a simple model for various values of the offset angle. The mass of Vulcan is not used in this model. The functional values simplify into a standard quadratic equation: e^2 + 1.525141 e cos(theta) + 0.525141 = 0. The offset angle we quote is the angle in a plane subtended by the line from the Sun to Vulcan aphelion and the line from the Sun to the comet aphelion.
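The quadratic can be checked numerically. Below is a minimal sketch in Python (my own illustration; it is not the orbit analyst's code). One caveat: the tabulated positive eccentricities in the next paragraph are reproduced by the roots of e^2 - 1.525141 e cos(theta) + 0.525141 = 0, i.e. the magnitudes of the roots of the equation as written above, so that sign convention is assumed here.

import math

B, C = 1.525141, 0.525141

def swarm_eccentricities(theta_deg):
    # Solve e^2 - B*cos(theta)*e + C = 0 for a given offset angle (degrees).
    # Returns (low, high) roots, or None when the roots turn complex.
    b = B * math.cos(math.radians(theta_deg))
    disc = b * b - 4.0 * C
    if disc < 0.0:
        return None
    root = math.sqrt(disc)
    return (b - root) / 2.0, (b + root) / 2.0

for theta in (0.0, 5.0, 10.0):
    lo, hi = swarm_eccentricities(theta)
    print(f"theta = {theta:5.1f} deg   e_low = {lo:.6f}   e_high = {hi:.6f}")

# Largest offset angle with real roots (discriminant = 0):
print(math.degrees(math.acos(2.0 * math.sqrt(C) / B)))

Running this reproduces the 0, +/-5 and +/-10 degree rows of the table below and the quoted maximum offset angle of about 18.1409 degrees.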
The maximum offset angle is found to be 18.140858°. For greater values, the eccentricity values are found to be either complex or negative. The following table lists the non-complex positive values.

THETA (Angle Offset From Vulcan's Aphelion) vs. SWARM ECCENTRICITY
Offset Angle    Low Eccentricity    High Eccentricity
    0.00           0.525141            1.000000
 +/- 5.00          0.531732            0.987605
 +/-10.00          0.553912            0.948060

Comet eccentricity employed in this simple model: e = 0.9985.
Baseline: Vulcan orbit data; a = 291.192165 AU; P = 4969.006049 years.
Baseline: Vulcan eccentricity; e_Vulcan = 0.537147.
Applying these simple constraints to ten nearby double (or multiple) star systems yields the following observations for the smaller (or B) star generating a similar comet orbit resonance.

STAR SYSTEMS THAT CAN SUPPORT A 3:2 COMET RESONANCE ORBIT
Star System      Eccentricity   Resonance Supported   Possible Sentient Life
Alpha Centauri   0.52           Maybe                 Likely
Sirius           0.58           Yes                   Likely*
UV Ceti          0.06           No                    Not Likely
Procyon          0.65           Yes                   Not Likely
61 Cygni         0.4            No                    Likely
Sigma 2398       0.55           Yes                   Not Likely
Grb 34           0.25           No                    Not Likely
Kruger 60        0.42           No                    Not Likely
Wolf 424         0.28           No                    Not Likely
26 Draconis      0.19           No                    Maybe Once
*The Nommos from Sirius are believed to have helped mankind survive Noah's Flood.

SPACE PROBE - A link between the IRAS data, the Pioneer probes and Vulcan (Nemesis 50 billion miles distant) The diagram on the right appeared in the 1987 edition of the "New Science and Invention Encyclopedia", published by H.S. Stuttman, Westport, Connecticut, USA. The article was discussing the purpose of the Pioneer 10 & 11 space probes. Clearly shown is "Nemesis", a popular name for our sun's binary companion, a dead star. (Binary solar systems are apparently the rule in our galaxy, not the exception.) Now, will someone please explain why this diagram clearly shows the approximate location of Planet X (a.k.a. the 10th or 12th Planet)? Planet X is presented as a matter of fact in this respected encyclopedia. Some have suggested the paths of Pioneers 10 & 11 were chosen so as to get a triangulated fix on Planet X, a suggestion this chart would support. The "dead star" 50 billion miles distant is clearly this site's Vulcan, and the "Tenth Planet" at 4.7 billion miles is consistent with the new 'tenth planet' Xena, AKA 2003 UB313.
The objects that make up the Oort Cloud are too small and far away for WISE to see, but it will be able to track potentially dangerous comets and asteroids closer to home. Both IRAS and WISE should be able to find our Vulcan. We think it is at or near IRAS 1732+239.
NASA Releases New Pioneer Anomaly Analysis - July 20, 2011 The mysterious force acting on the Pioneer spacecraft seems to be falling exponentially. That's a strong clue that on-board heat is to blame, says NASA. The force is falling because Pioneer 10 is getting farther and farther away from Vulcan.
GRAVITY MAY LOSE ITS PULL It was in 1980 that John Anderson first wondered if something funny was going on with gravity. The Jet Propulsion Laboratory physicist was looking over data from two Pioneer spacecraft that had been speeding through the solar system for nearly a decade.
PROBLEMS AT THE RIM OF THE SOLAR SYSTEM Neptune is an undisciplined member of the solar system. No one has been able to predict its future course accurately. Already this maverick planet is drifting off the orbit predicted just 10 years ago using the best data and solar-system models. All of the outer planets, in fact, confound predictions to some degree. In addition, some long-period comets have anomalous orbits.
SEARCH FOR A STANDARD EXPLANATION OF THE PIONEER ANOMALY John D. Anderson, Eunice L. Lau and Slava G. Turyshev, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA The data from Pioneer 10 and 11 shows an anomalous, constant, Doppler frequency drift that can be interpreted as an acceleration directed towards the Sun of a_P = (8.74 ± 1.33) × 10^-8 cm/s^2. Although one can consider a new physical origin for the anomaly, one must first investigate the contributions of the prime candidates, which are systematics generated on board. Here we expand upon previous analyses of thermal systematics. We demonstrate that thermal models put forth so far are not supported by the analyzed data. Possible ways to further investigate the nature of the anomaly are proposed.
MORE ON "THE MASSIVE SOLAR COMPANION" Something big out there beyond Neptune perturbs the orbits of the sun's outer fringe of planets. In addition, there are unexplained perturbations in the orbits of earth satellites, peculiar periodicities in the sunspot cycle, and equally puzzling regularities in earthquake frequency. These anomalies might all be explained by the existence of a large, dark planet with several moons.
VOYAGER ENTERS SOLAR SYSTEM'S FINAL FRONTIER - 93.5 AU away.
ASYMMETRICAL SHAPE OF HELIOSPHERE RAISES QUESTIONS - July 7, 2008 Ever since the Voyager 2 data confirmed the nonsymmetrical shape of the solar system, scientists have pondered its cause (i). In summary, the edge of the heliosphere (the place where the solar wind slows to subsonic speeds) appears to be 1.2 billion kilometers shorter on the south side of the solar system (and in the general direction of the winter solstice, the direction of Voyager 2), than it is on the edge of the planetary plane (where Voyager 1 exited approximately a year earlier). This indicates the heliosphere is not a sphere at all but a bullet shape. See: IS IRAS OBJECT 1732+239 NEAR VULCAN? - CIRCUMSTANTIAL EVIDENCE TABLE 3 - SPACE PROBES AND THE 'DEAD STAR'
WHAT CAUSES THE SUNSPOT CYCLE? John P. Bagby has now introduced a new piece to the puzzle of solar-system cyclic behavior.
While searching for possible perturbations of the planets due to a tenth major planet or a dark massive solar companion (MSC), he discovered that the perihelia of the outer planets (orbital points closest to the sun) were being disturbed with an average period of 11.2 years. This is almost exactly the sunspot period of about 11 years (or half the period of solar magnetic field reversals of about 22 years). He suggests two possibilities: (1) Mutual resonance effects between the planets (2) The effects of a massive solar companion.
PLANET ORBITAL SPACING estimates that two planets may yet be orbiting Vulcan, one at 0.7 AU and the other at 0.34 AU. The former is analogous to the Sun's Venus, and the latter (if it still exists) is Vulcan's equivalent to the Sun's Mercury. If an Earth equivalent formed, it would be at 0.3 × 2^1.5 = 0.85 AU. The periods would be given by (r^3/M_Vulcan)^(1/2) and be 18.5 years, 6.2 years and 24.7 years respectively (see the worked sketch following the critique excerpted below). The first and last theoretical values are close to the Sun's magnetic field reversal period.
UNIDENTIFIED IRAS SOURCES: ULTRAHIGH-LUMINOSITY GALAXIES or UNIDENTIFIED IRAS SOURCES: ULTRAHIGH-LUMINOSITY GALAXIES - March 1, 1985 Six IRAS sources previously exhibiting possible parallax are believed to be associated with galaxies highly radiant in the infrared. These include object 1732+239 that may be a possible image of Vulcan. This object was not used to calculate the probability of Vulcan's existence in a nominal 5000-year orbit, but was used to refine its estimated orbital parameters.
THE IRAS INCIDENT So IRAS did not see Nibiru, Planet X, or anything of the sort, despite the claims of the doomcriers. Of course, they now claim that NASA is clamping down on the press for Planet X. Well, Los Alamos is sure suppressing the Noah's Ark find.
NO TENTH PLANET YET FROM IRAS
A REAL DEATH STAR Ignatius Donnelly was pretty convincing on the comet impact scenario a century ago.
ASTRO-METRICS OF UNDISCOVERED PLANETS AND INTELLIGENT LIFE FORMS This work contains some of the planet formation theory useful for understanding the INTRODUCTION - A NEW PLANET FORMATION THEORY section of the 1997 Paper
PLANET X/12TH PLANET COVER-UP Is it because of an impending catastrophe?
CLYPEOLOGY A report is being circulated that the undiscovered planet Nibiru (Vulcan?) has been photographed by an infrared sat-system called "Siloe" in 1999 (see photograph). It (or its related comet swarm) is approaching (the inner solar system?) again. See The Akkadian Seal
WHAT IS PLANET X Planet X is a hypothetical 10th planet of our solar system. There is conjecture that another Solar body exists, and furthermore that this solar body is affecting our biosphere and all planetary bodies. This is sometimes referred to as Vulcan or the death star.
10th PLANET: Real or Not
SEARCH GOOGLE CACHED FOR THE PHRASE: Why a ~10 Mj Nibiru CANNOT exist out at 234-241 AU COMMENTS: (by rajasun) (This work is critical of a Vulcan or Nibiru-like brown dwarf.) The calculations were left out of this snippet but may be in Google Cached memory. In the light of the above FINDINGS, it appears highly UNLIKELY that a Brown Dwarf can exist be it at the Predicted Distance of Andy's OR that of Sitchin's (i.e. Andy's Nibiru, P = 3756 Yrs, Semi Major Axis (a) = 241.62974 AU; Sitchin's Nibiru, Period (P) = 3600 Yrs, Semi Major Axis (a) = 234.8920572 AU). The BEST MATCH between MASS of Object and a Period of 3756 Yrs is that of a BODY with a size in between that of Pluto and the Moon and NOT that of a Brown Dwarf.
As for Sitchin's Object, it has a MASS befitting the classification of a "sub- Brown Dwarf" i.e. a MASS of 10.48292334 Mj. The Deuterium Fusion threshold i.e. the demarcation cut-off LIMIT separating Brown Dwarfs from Super MASSIVE Jupiters is at around 13 Mj. An Object with a MASS above 10 Mj can be considered a Transition Object somewhat akin to an Object of MASS >0.07 Msun <0.075 Msun i.e. a "Transition Object". Vulcan's mass is about 0.5Mj, but we still consider it 'stellar' in nature. The PROBLEM for Sitchin however, is one of Apparent Visual Magnitude of such an object i.e. a 10.48292334 Mj Nibiru at 234.8920572 AU will be 17.57573 and EVEN at its VERY FURTHEST i.e. at Aphelion it would be a RELATIVELY "BRIGHT" 24.25454. For Your Information, the Hubble Space Telescope (HST) has seen objects down to around magnitude 29 - 30. So 24.25454 is well WITHIN the observational limits of HST by SEVERAL orders. This problem i.e. Apparent Visual Magnitude of Nibiru can LIKEWISE be applicable in Andy's instance. For at 241.62974 AU out, the Apparent Visual Magnitude of a Jupiter-like Brown Dwarf will ONLY be at 17.7700885 and Andy's Nibiru at Aphelion (i.e. 480.2584386 AU) is BUT a MERE 24.6536088! We estimate Vulcan magnitude to be about 21, similar to many of the IRAS objects investigated. Apparent Visual Magnitude aside, there is also this OTHER challenge posed by an object with a MASS ~10 - 10.48 Mj (i.e. APPLICABLE to BOTH Andy's and Sitchin's Nibiru), go through EACH step of the following Calculation on this Object's Gravity: GIVEN a Surface Gravity ALMOST that of the Sun's a 10 - 10.48292334 Mj Object at a MAXIMUM distance of a MERE 470 - 480 AU removed, SHOUlDN'T this planet's gravity be TOO SIGNIFICANT to have avoided detection till now? Vulcan is half the mass of Jupiter. It does have a small affect on some of the distant Kuiper belt objects. For the reasons ABOVE, I thus have SERIOUS DOUBTS as to how such an Object could have eluded the astronomical/astrophysical community till now. Before one goes illogical and start some talk of conspiracy theory of a "cover-up" by government/s, OPEN your minds and think REAL OBJECTIVELY, EVEN IF government/s had tried to "cover- up" such a "discovery", wouldn't one of the tens/hundreds of PRIVATELY funded observatories NOT have picked the Object up by now? An all night exposure by a telescope the size of the 120 inch Lick Observatory's is required so most telescopes cannot see Vulcan. However, it were discovered, any astronomer skilled in Celestial Mechanics would quickly realize the possibilities of Earth threatening comets swarms in 3:2 resonate orbits with Vulcan. Do NOT get me wrong, I'm NOT saying there is NO Nibiru, what I'm saying though is this: the region of space in between 200 - 500 AU is JUST Way TOO "NEAR". An Object that CLOSE in and that LARGE i.e. ~1 - 10 Mj in size would have measurable IR fluxes so PRONOUNCED, it SHOULD have been identified perhaps even as early on as the EARLY 1980s. We Agree! Personally, I think at Andy's distance i.e. 241.62974 - 480.2584386 AU, we MAY in the next five years or so find a Planet X of sort albeit an intermediate sized Object in between Pluto and the Moon. Such a NEAR-Moon sized Object MAY be what Gladman and co. are searching for i.e. the Perturber of SuperComet CR105. 
This Moon sized Nibiru is REMARKABLY positioned to exercise JUST that kind of INFLUENCE to throw CR 105 into the INCLINED and ECCENTRIC orbit it has.## ## Ref: www.academicpress.com/ins...graphb.htm ## Ref: www.obs-nice.fr/gladman/cr105.html There MAY yet be other Plutos, Moon EVEN Mars sized objects still awaiting us in what is an area of space, the community calls the Scattered Disk. BUT for LARGER i.e. >=Uranus objects, one MAY have to look FURTHER out i.e. > 560 AU i.e. somewhere in the vicinity of IRAS object 1732+239 and beyond. We think it is or is near IRAS object 1732+239! Something in addition, the various accounts of the "MISSING MASS" in the Kuiper Belt and the Scattered Disk kept staring at me in the face i.e. some MAJOR event/s caused a SUDDEN and DRAMATIC truncation in NOT ONLY the Classical Kuiper Belt BUT swept out quite a LOT of MASS here. These MAJOR events MAY still occur i.e. hinting at a REMOTE Brown Dwarf Solar Companion with a HIGHLY elliptical and eccentric orbit (inferring from the wayward orbital inclinations and eccentricities of the Plutinos##). I suspect that in the coming years, we will find MORE "belts", "disks" and "gaps", the Latter MAY very well point to the "undue" influence and PRESENCE of LARGE as YET "UNDISCOVERED" bodies FAR out as we venture DEEPER BEYOND the orbit of Pluto. Vulcan did it! A 1 Mj let alone a ~10 Mj "sub-Brown Dwarf" with a Semi Major Axis at between 234 - 241 AU and a MAXIMUM distance of 480 AU is UNLIKELY because of: 1.) INCONGRUENT fit between Period (P) and suggested MASSES of Object (m) i.e. P = Orbital period = 365.256898326 * a**1.5/sqrt(1+m) days where m = the mass of the companion in solar masses 2.) The Apparent Visual Magnitude of an Object EVEN at a MAXIMUM distance of 480 AU out is a "BRIGHT" 24.25454 i.e. some 6 Magnitudes BRIGHTER than the DIMMEST Objects Hubble Space Telescope has observed. Why do you think the Hubble was launched? Just for the sake of good astronomy?, 3.) The Absence of necessarily PRONOUNCED and CONTINUING GRAVITATIONAL perturbation effect almost 3/4 that of the Sun's (i.e. Surface Gravity of Nibiru with MASSES in between 10 Mj - 10.48292334 Mj 218.772783287613297535850910865252 N - 229.243589170234991621082822269554 N) on Objects in the INNER Scattered Disk. Thanks to the above critic, it appears that IRAS object 1732+239 is likely Vulcan. GROTE PLANETOIDE MARDUK/NIBIRU? One year later in 1983 the newly launched IRAS (Infrared Astronomical Satellite) quickly found Planet X (the 10th). There have been attempts to cover-up this event and rewrite history. Discovering Archaeology, July/August 1999,Page 70 shows a medieval picture with a large comet looking object as big as the sun streaking across the sky horizontally with a giant tail. Below is a town that is shaking apart with hysterical people in the streets. Its elliptical orbit takes it around two suns. The other sun it orbits is our suns dark twin. Nibiru's inbound approach is being closely monitored by our best telescopic equipment on and off earth. This is one of many reasons the orbiting Hubble telescope live feed is hidden from our view. The most accurate calculation for Planet X's next passage is now late May or June 2003. The Vulcan web site contends that the next approach of the comet (swarm) could be between 2006 and 2016. However, computations support a window of return to be within the next 110 - 150 years. Comet orbits are somewhat unpredictable as they outgas. 
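The period relation in point (1.) above, and the (r^3/M_Vulcan)^(1/2) estimate in the PLANET ORBITAL SPACING note earlier, are both Kepler's third law. The sketch below (Python; my own illustration, not code from the site or from the critique) plugs in the quoted numbers. Two assumptions are mine: the conversion 1 Jupiter mass ≈ 1/1047.6 solar masses, and a Vulcan mass of about 0.001 solar masses (roughly one Jupiter mass), which is what it takes to come close to the quoted 18.5/6.2/24.7-year figures even though the site quotes roughly 0.5 Mj elsewhere.

import math

MSUN_PER_MJUP = 1.0 / 1047.6  # approx. solar masses per Jupiter mass

def kepler_period_years(a_au, total_mass_msun=1.0):
    # Kepler's third law: P[yr] = sqrt(a[AU]^3 / M[Msun]).
    # Equivalent to the quoted P[days] = 365.256898326 * a**1.5 / sqrt(1+m)
    # when the total mass is Sun + companion (1 + m).
    return math.sqrt(a_au ** 3 / total_mass_msun)

# Heliocentric orbits from the critique and from the baseline Vulcan data:
print(kepler_period_years(291.192165))                              # Vulcan baseline: ~4969 yr
print(kepler_period_years(241.62974))                               # "Andy's" Nibiru, massless: ~3756 yr
print(kepler_period_years(241.62974, 1 + 10.0 * MSUN_PER_MJUP))     # ~3738 yr with a 10 Mj body
print(kepler_period_years(234.8920572))                             # Sitchin's Nibiru, massless: ~3600 yr
print(kepler_period_years(234.8920572, 1 + 10.48 * MSUN_PER_MJUP))  # ~3582 yr with a 10.48 Mj body

# Hypothetical planets of Vulcan (PLANET ORBITAL SPACING note), assuming
# M_Vulcan ~ 0.001 solar masses:
for r_au in (0.7, 0.34, 0.85):
    print(r_au, round(kepler_period_years(r_au, 0.001), 1))         # ~18.5, ~6.3, ~24.8 yr

The percent-level gap between the massless fit (3756 or 3600 years) and the massive-companion value is the "incongruent fit" the critique is pointing at.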
AN INTERESTING HYPOTHESIS The "Annunaki" (GODS as described by the Sumerians) had "Android Beings" helping them. The Sumerians describe Planet X as being very far from Earth at times, (roughly 30,000,000,000 = 323 AU) miles away at it's farthest point from us in orbit. Vulcan's aphelion (farthest point) is estimated to be at 454 AU by this web site. A COMET'S ODD ORBIT HINTS AT HIDDEN PLANET 10 April 2001 Far beyond the solar system's nine known planets, a body as massive as Mars may once have been part of our planetary system—and it might still be there. Although the proposed planet would lie too far away to be seen from Earth, its gravitational tug could account for the oddball orbit of a large comet spotted in the outer solar system a year ago. If the proposed planet is as massive as Mars, it would have to lie some 200 AU from the sun—about 7 times Neptune's distance—Holman calculates. Were it closer, observers would have spotted it. "Undoubtedly, something [massive] knocked the hell out of the belt," says Harold F. Levison of the Southwest Research Institute in Boulder, Colo. "The question is whether it's there now." Vulcan is far larger than Mars, but it is also farther away. Its average distance is about 290 AU away although now its elliptical orbit has carried it about 448 AU away. AUTHOR BRINGS THEORY ABOUT A 'LOST' STAR Cruttenden argues that the precession is not caused by a wobble in the Earth's rotation, as is commonly taught, but may actually be caused by a companion star to our sun, one which currently is far away from the Earth but which would eventually rotate closer to the sun. This binary companion would cause the sun's orbit to curve, and would explain the Precession of the Equinox by the way in which the Earth's rotation was affected by not one, but two stars. In addition to this hypothesis, Cruttenden also compares the 24,000-year cycle of precession with ancient beliefs from many cultures regarding the cyclical pattern of existence, which begins and ends with a Golden Age. PLAN VIEW OF THE SOLAR SYSTEM SCATTERED DISK OBJECTS 2003 UB313 2002 TC302 1996 TL66 2000 CR105? (transitional object between the real Kuiper belt and the hypothetical Oort Cloud) TRANS-NEPTUNIAN OBJECTS INFORMATION SPACE PROBES FEEL COSMIC TUG OF BIZARRE FORCES - September 12, 2004 Something strange is tugging at America's oldest spacecraft. As the Pioneer 10 and 11 probes head towards distant stars, scientists have discovered that the craft - launched more than 30 years ago - appear to be in the grip of a mysterious force that is holding them back as they sweep out of the solar system. OLD SPACECRAFT MAKES SURPRISE DISCOVERY 29 September 1999 Scientists have discovered a new object orbiting the Sun after a spaceprobe was mysteriously knocked off course. Earlier this year, scientists were puzzled by what was described as a mysterious force acting on the probe. It led to speculation that there was something wrong in our understanding of the force of gravity. Eventually the effect was tracked down to the probe itself, which was unexpectedly pushing itself in one particular direction May be just data processing errors, but not sure? MYSTERY FORCE TUGS DISTANT PROBES The puzzle is that Pioneer 10 is slowing more quickly than it should. It was initially suggested that this might be due to the force from a tiny gas leak or that it was being pulled off course by the gravity of an unseen Solar System object. 
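For a sense of scale, the anomalous sunward acceleration reported in the Anderson, Lau and Turyshev abstract quoted earlier, about 8.74 × 10^-8 cm/s^2, is tiny. The back-of-the-envelope arithmetic below (Python; my own illustration, not from any of the sources quoted here) converts it into drift per decade.

A_PIONEER = 8.74e-8 * 1e-2           # quoted value, converted from cm/s^2 to m/s^2
SECONDS_PER_DECADE = 10 * 365.25 * 86400

delta_v = A_PIONEER * SECONDS_PER_DECADE             # unexplained velocity change
delta_x = 0.5 * A_PIONEER * SECONDS_PER_DECADE ** 2  # cumulative position drift

print(round(delta_v, 3), "m/s per decade")           # ~0.276 m/s
print(round(delta_x / 1000.0), "km per decade")      # ~44,000 km

A few tenths of a metre per second and a few tens of thousands of kilometres per decade is negligible next to the billions of kilometres the probes have travelled, yet large enough to stand out in precision Doppler tracking, which is why the effect was noticed at all.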
The mystery deepened further when an analysis of the trajectory being followed by its sister spacecraft, Pioneer 11, launched in 1973, showed that it too was being subjected to the same mysterious effect. But Pioneer 11 is on the opposite side of the Solar System from Pioneer 10, about 22 billion km (about 14 billion miles) away. This means the effect cannot be the gravitational effect of some unseen body. But others have dismissed this as being too fanciful, arguing that if the Pioneer anomaly was really indicative of a change in our understanding of gravity, then it would be apparent in the orbits of the planets around the Sun - which it is not.
MYSTERIOUS FORCE PULLS BACK NASA PROBE IN DEEP SPACE Dr Duncan Steel, a space scientist at Salford University, says even such a weak force could have huge effects on a cosmic scale. "It might alter the number of comets that come towards us over millions of years, which would have consequences for life on Earth. It also raises the question of whether we know enough about the law of gravity."
THE PROBLEM WITH GRAVITY: NEW MISSION WOULD PROBE STRANGE PUZZLE Which leaves open staggering possibilities that would force wholesale reprinting of all physics books:
Invisible dark matter is tugging at the probes
Other dimensions create small forces we don't understand
Gravity works differently than we think
Data from the Galileo and Ulysses spacecraft suggest the anomaly may have affected them, too. But neither has been far enough from the Sun -- the dominant source of gravity in the solar system -- to firmly distinguish any possible discrepancy from noise in the data, Turyshev says. Galileo was crashed into Jupiter last year, and Ulysses will never go farther than it has.
THREE SPACECRAFT REVEAL UNEXPLAINED MOTION LOS ALAMOS, N.M., Sept. 24, 1998 -- A team of planetary scientists and physicists has identified a tiny, unexplained sunward acceleration in the motions of the Pioneer 10, Pioneer 11 and Ulysses spacecraft. Pioneer 10 was officially tracked until March 1997, when it was some six billion miles away from the sun. (Pioneer 10 is still transmitting, and occasional additional radio Doppler data is provided to the research team.) Pioneer 11, due to a radio failure, last sent useful radio Doppler transmissions in October 1990, when it was less than three billion miles from the sun. Ulysses has been tracked on its looping flight out of the ecliptic and around the sun's poles. Trusty and reliable.
THE UNFINISHED QUEST TO SOLVE THE PIONEER ANOMALY It began with the search for Planet X. By 1979, Pioneer 10 had accomplished its original mission to become the first Earth-born spacecraft to explore Jupiter and was on its way out of our solar system, flying toward the star Aldebaran - a destination it should reach some two million years from now. On its way out, Pioneer 10 became a useful partner in an experiment of celestial mechanics. By closely monitoring its trajectory, scientists might detect an unexpected gravitational tug that could betray the existence of the long-hypothesized Planet X. Based largely on unexplained motions in the orbits of Uranus and Neptune, several 20th-century astronomers had suggested the existence of an undiscovered world at the edge of our solar system. John D.
Anderson, a veteran JPL scientist, took on the task of studying the Pioneer 10 and 11 radio signal data for any sign of Planet X. His search had come up empty. For this reason, among others, John and his team at the Jet Propulsion Laboratory became convinced that the chance of discovering a 10th planet was slim, as they reported in the May/June 1999 issue of The Planetary Report. But in early 1980, John and his team began to see signs of something else - something quite unexpected. While searching for Planet X, we noticed that the tracking data did not quite fit with the existing solar system model. They showed an anomalous acceleration - in this case, an acceleration backward. It did not match any expected Planet X force, and we couldn't immediately explain it. When theoretical models do not fit experimental data, standard scientific practice is to find a reason for the mismatch. Therefore, we embarked on a program to study the anomalous acceleration.
PLANET X AND THE POLE SHIFT A look at the Science behind (mostly the proposed 2003 passage of) Planet X
PLANET X AND THE POLE SHIFT - LINKS A 'dark companion' could produce the unseen force that seems to tug at Uranus and Neptune, speeding them up at one point in their orbits and holding them back as they pass. "…the best bet is a dark star orbiting at least 50 billion miles beyond Pluto… It is most likely either a brown dwarf, or a neutron star. Others suggest it is a tenth planet, since a companion star would tug at the other planets, not just Uranus and Neptune."
AREN'T THE OUTER PLANETS BEING "PERTURBED"? In 1992, Dr. Myles Standish used the updated mass in his calculations and determined the perturbations actually didn't exist. The recorded observations matched the predicted orbits when the correct values of mass were put into the equations. Dr. Standish's calculations were the first to use an extremely accurate measurement of Neptune's mass made by Voyager 2 in 1989. His results were published in The Astronomical Journal in May of 1993.
VOYAGER PROBES IN FUNDING CRISIS Nasa's twin Voyager probes may have to close down in October to save money, the US space agency has said. Turn it off quick before any more of those gravitational anomalies are found.
SUN HAS BINARY PARTNER, MAY AFFECT THE EARTH The ground-breaking and richly illustrated new book, Lost Star of Myth and Time, marries modern astronomical theory with ancient star lore to make a compelling case for the profound influence on our planet of a companion star to the sun. Author and theorist Walter Cruttenden presents the evidence that this binary orbit relationship may be the cause of a vast cycle causing the Dark and Golden Ages common in the lore of ancient cultures. This movement of the solar system occurs because the Sun has a companion star; both stars orbit a common center of gravity, as is typical of most double star systems. The grand cycle - the time it takes to complete one orbit - is called a "Great Year," a term coined by Plato. Cruttenden explains the effect on Earth with an analogy: "Just as the spinning motion of the earth causes the cycle of day and night, and just as the orbital motion of the earth around the sun causes the cycle of the seasons, so too does the binary motion cause a cycle of rising and falling ages over long periods of time, due to increasing and decreasing electromagnetic effects generated by our sun and other nearby stars."
THE DEATH OF NEMESIS: THE SUN'S DISTANT, DARK COMPANION - 07/12/2010 The data that once suggested the Sun is orbited by a distant dark companion now raises even more questions Nemesis is dead, long live Vulcan. SUN'S RUMORED HIDDEN COMPANION MAY NOT EXIST AFTER ALL - 20 July 2010 No Nemesis? However, far from helping to confirm the Nemesis hypothesis, Melott and Bambach argue their findings suggest Nemesis couldn't possibly exist. "If Nemesis existed and had this kind of an orbit, its orbit would not be regular," Melott told SPACE.com. "Calculations indicate its orbit would change by 20 to 50 percent due to the gravitational attraction of stars as they pass by us, and the movement of the sun in the galaxy." Thus, a celestial body like Nemesis couldn't explain such a long-standing, steady cycle, because its orbit itself would not be steady over such a long period of time. Melott said his data basically puts the final nail in the coffin of the Nemesis idea. But others aren't so sure. Some in the field question whether the fossil record is really accurate enough to establish a cycle going back that far. "To that I would say, yes we can, now that the accuracy has improved so much," Melott said. "And even if there were errors in the timing, that wouldn't cause something to appear so clockwork and regular, it would smear the signal out." Melott and Bambach said it leaves the question of what's causing the extinction cycle totally open. "For me, it's a complete head-scratcher," Melott said. Vulcan exists, Nemesis does not. THE NEMESIS CONJECTURE: IS AN UNSEEN BINARY COMPANION OF THE SUN SENDING COMETS TOWARDS EARTH? - September 02, 2010 Some scientists believe that something could be hidden beyond the edge of our solar system a distance of about 50,000 to 100,000 AU (about 1-2 light years), somewhat beyond the Oort cloud. Named "Nemesis" or "The Death Star," this undetected object could be a red or brown dwarf star, or an even darker presence several times the mass of Jupiter. If our Sun were part of a binary system in which two gravitationally-bound stars orbit a common center of mass, their interaction could disturb the Oort Cloud on a periodic basis, sending comets whizzing towards us. STAR SYSTEMS HINT AT POSSIBILITY OF SUN'S NEMESIS The stars are about 60 light-years away, and the shape of their disks have astronomers pondering the long-debated possibility that our own Sun might have an as-yet unfound companion dubbed Nemesis. "Kalas and Graham speculate that stars also having sharp outer edges to their debris disks have a companion—a star or brown dwarf—that keeps the disk from spreading outward, similar to how Saturn's moons shape the edges of some of the planet's rings. "The story of how you make a ring around a planet could be the same as the story of making rings around a star," Kalas said. Perhaps a passing star ripped off the edges of the original planetary disk, but a star-sized companion, remaining in place, would be necessary to keep the remaining disk material from spreading outward, he figures. The scenario has Kalas and his colleagues thinking that the Sun might also have a companion that keeps the Kuiper Belt confined within a sharp boundary. U.C. Berkeley physics professor Richard Muller has proposed such a star, which he calls Nemesis, but no evidence has been found for one". 
NEMESIS: THE MILLION DOLLAR QUESTION Like a thorn in the side of mainstream researchers, Muller's Nemesis theory - - that our Sun has a companion star responsible for recurring episodes of wholesale death and destruction here on Earth - seems to reemerge periodically like microbes after a mass extinction. TWO NEW DUSTY PLANETARY DISKS MAY BE ASTROPHYSICAL MIRRORS OF OUR KUIPER BELT Kalas and Graham speculate that stars also having sharp outer edges to their debris disks have a companion - a star or brown dwarf, perhaps - that keeps the disk from spreading outward, similar to the way that Saturn's moons shape the edges of many of the planet's rings. If true, that would mean that the sun also has a companion keeping the Kuiper Belt confined within a sharp boundary. Though a companion star has been proposed before - most recently by UC Berkeley physics professor Richard Muller, who dubbed the companion Nemesis - no evidence has been found for such a companion. Probably the companion star lies within the invariable plane (the angular momentum plane of the solar system) inclined to the ecliptic by 1.5 degrees. We believe our binary counterpart may lay between 848.5 AU and 1515 AU depending on its mass (0.06 and 6 solar mass). We are predicting that our binary companion will be found in an elliptical patch centered around 17hr 45 minutes and declination ?2 degrees. The possible Vulcan related IRAS point is at 17 hr. 32 minutes 51.4 seconds, Declination 23 degrees 56 minutes 3 seconds. LOST STAR OF MYTH AND TIME A Book Lost Star of Myth and Time examines this new theory of precession and the possibility our Earth may be subjected to a greater variety of stellar influences than heretofore imagined VULCAN AKA THE GREAT RED DRAGON AKA PLANET X ENVIRONMENTAL RECORD Is There a New Mega-Massive Planet in the Outer Reaches of the Solar System? When I used Bode's 240-year-old formula to calculate the Bode hypothetical distance from the sun to a possible planet located beyond Pluto, the Bode distance turned out to be 77.2AU, which is 7.2 billion miles and roughly near the center of the large void. A huge ultra-massive planet that has been orbiting there for millions or billions of years certainly could have gravitationally attracted all the KBOs that are missing from the enormous void. THE BINARY RESEARCH INSTITUTE EVIDENCE MOUNTS FOR SUN'S COMPANION STAR The Binary Research Institute (BRI) has found that orbital characteristics of the recently discovered planetoid, "Sedna", demonstrate the possibility that our sun might be part of a binary star system. A binary star system consists of two stars gravitationally bound orbiting a common center of mass. Once thought to be highly unusual, such systems are now considered to be common in the Milky Way galaxy. FIRST PHOTO TAKEN OF OBJECT AROUND SUN-LIKE STAR, SCIENTISTS SAY - Dec 3, 2009 similar breakthrough was announced last year, when astronomers unveiled direct images of a single-planet and multiple-planet system. However, the host stars of such systems are stellar giants that are much more massive than the sun. The images of this newly identified object were taken in May and August during early test runs of a new planet-hunting instrument on the Hawaii-based Subaru Telescope. The object called GJ 758 B orbits a parent star that is comparable in mass and temperature to our own sun, said study team member Michael McElwain of Princeton University. The star lies 300 trillion miles (480 trillion km), or about 50 light-years, from Earth. 
Scientists aren't sure if the object is a large planet or a brown dwarf, a cosmic misfit also known as a failed star. They estimate its mass to be 10 to 40 times that of Jupiter. Objects above 13 Jupiters (and below the mass needed to ignite nuclear reactions in stars) are considered to be brown dwarfs. "Brown dwarf companions to solar-type stars are extremely rare," he told SPACE.com. "It's exciting to find something that is so cool and so low mass with a separation similar to our solar system around a nearby star." The planet-like object is currently at least 29 times as far from its star as the Earth is from the sun, or about the distance between the sun and Neptune. The scientists say telescope images have revealed a possible second companion to the star, which they are calling GJ 758 C, though more observations are needed to confirm whether it is actually nearby or just looks that way. A REALITY CHECK FOR 2012 AND PLANET X ?FOLLOW THE MONEY I put forth the proposition that we live in a binary system and that Sol's unborn twin is a brown dwarf. This proposition is based on current astronomical data, plus the ancient historical accounts and warnings contained in The Kolbrin Bible. I NEED A SCIENTIST TO EVALUATE THIS WEBSITE "[1] It is a dark star and hasn't been recognized (I believe he claims on his website that it was seen in the IRAS in 1983)." It is either that IRAS object or within the specified number of degrees in RA and Dec. of it. "[2] He has worked out orbital elements (available on the website, complete with RA and Dec.)" The most recent estimations for Vulcan's mass, orbit elements and location are found in the Synopsis section, Earth's Bleak Future. The orbit elements have changed slightly since the first (1997) paper was put on the web. "[3] It is 141-165 Earth masses and a magnitude 22 object." At different points in the "papers" that appear on his website, Warmkessel cites different figures for Vulcan's mass. Just like Pluto's mass, Vulcan's mass has decreased precipitously based on new data. The most reliable Estimates for Vulcan's mass come from Crop Circle T367 and the Akkadian seal and have remained within a factor of two or less since the 1999 Paper. The mass based on Akkadian Seal's value is favored because slight variances in the comet swarm pass times (based on geo-climatological data) seems to more accurately reflect Vulcan's mass's influence on the comet swarms return periods when they are far from the Sun. "[4] Geoclimatological data (ie, dendrochronology and Greenland ice cores) show prior comet strikes, (which is probably true anyway, but is it relevant to his particular claims?)." Geo-climatological data offers a way to validate Vulcan's theoretical orbit (which has been deduced in a non standard way). Vulcan causes predicted comet swarms to occur in 3:2 resonate orbits. So once Vulcan's orbital period is specified, the period of the comet swarms can be deduced. When the comet swarms pass, one or more often impact Earth (or explode in its atmosphere). These usually cause weather changes. By examining the historical geo- climatological data, Vulcan's mass, orbital period and eccentricity can be verified. "[5] It has a period of 4969 years which he states are corroborated by calculations done by astronomer George Forbes in 1880." Forbe's predicted two planets (not comets) in solar orbit by using comet aphelion data computed from observed comets. One set of orbital parameters he found is very similar to Vulcan's. 
I can only surmise that Vulcan is his idea of some sort of periodic cometary perturber, hasn't this idea been pretty much laid to rest? This is a pretty fair assessment of Vulcan. Matese et. al. employed a similar technique to postulate the existence of Nemesis (Icarus; 19 May 1999). (6) A 'scientist known' as 'leaping lizard' searched hard to find any reference to our Orbit Analyst However but failed. However they ignored the 30-60 Google references to the astronomical research works published by our Astronomical Support individual. PERSISTENT EVIDENCE OF A JOVIAN MASS SOLAR COMPANION IN THE OORT CLOUD - 26 Apr 2010 We present an updated dynamical and statistical analysis of outer Oort cloud cometary evidence suggesting the sun has a wide-binary Jovian mass companion. The results support a conjecture that there exists a companion of mass ~ 1-4 M_Jup orbiting in the innermost region of the outer Oort cloud. Our most restrictive prediction is that the orientation angles of the orbit normal in galactic coordinates are centered on the galactic longitude of the ascending node Omega = 319 degree and the galactic inclination i = 103 degree (or the opposite direction) with an uncertainty in the normal direction subtending ~ 2% of the sky. PERSISTENT EVIDENCE OF A JOVIAN MASS SOLAR COMPANION IN THE OORT CLOUD EARTH UNDER ATTACK FROM DEATH STAR - 12 Mar 2010 AN invisible star may be circling the Sun and causing deadly comets to bombard the Earth, scientists said yesterday. The brown dwarf - up to five times the size of Jupiter - could be to blame for mass extinctions that occur here every 26 million years. The star - nicknamed Nemesis by NASA scientists - would be invisible as it only emits infrared light and is incredibly distant. Nemesis is believed to orbit our solar system at 25,000 times the distance of the Earth to the Sun. Scientists' first clue to the existence of Nemesis was the bizarre orbit of a dwarf planet called Sedna. Boffins believe its unusual, 12,000-year-long oval orbit could be explained by a massive celestial body Mike Brown, who discovered Sedna in 2003, said: "Sedna is a very odd object - it shouldn't be there. "The only way to get on an eccentric orbit is to have some giant body kick you - so what is out there?" The 12,000 year orbital period of Sedena is no where near the anticipated 26,000,000 year revisit time of a threatening comet cluster. DARK JUPITER MAY HAUNT EDGE OF SOLAR SYSTEM - November 29, 2010 After examining the orbits of more than 100 comets in the Minor Planet Center database, the researchers concluded that 80 percent of comets born in the Oort Cloud were pushed out by the galaxy's gravity. The left over 20 percent, but, needed a nudge from a distant object in this area 1.4 times the mass of Jupiter. SUN'S NEMESIS PELTED EARTH WITH COMETS, STUDY SUGGESTS - 11 March 2010 A dark object may be lurking near our solar system, occasionally kicking comets in our direction. Nicknamed "Nemesis" or "The Death Star," this undetected object could be a red or brown dwarf star, or an even darker presence several times the mass of Jupiter. Why do scientists think something could be hidden beyond the edge of our solar system? Originally, Nemesis was suggested as a way to explain a cycle of mass extinctions on Earth. The paleontologists David Raup and Jack Sepkoski claim that, over the last 250 million years, life on Earth has faced extinction in a 26-million-year cycle. Astronomers proposed comet impacts as a possible cause for these catastrophes. 
The Oort Cloud is thought to extend about 1 light year from the Sun. Matese estimates Nemesis is 25,000 AU away (or about one-third of a light year). The next-closest known star to the Sun is Proxima Centauri, located 4.2 light years away. INVISIBLE STAR 'SHOOTING COMETS AT EARTH' - March 12, 2010 Brown dwarf could have caused mass extinctions Star is invisible and a long way away But heat-seeking telescope may find it Maybe the IRAS satellite already found it. AN invisible star responsible for the extinction of dinosaurs may be circling the Sun and causing comets to bombard the Earth, scientists said. The brown dwarf - up to five times the size of Jupiter - could be to blame for mass extinctions that occur here every 26 million years, The star - nicknamed Nemesis by NASA scientists - would be invisible as it only emits infrared light and is incredibly distant. Nemesis is believed to orbit our solar system at 25,000 times the distance of the Earth to the Sun. As it spins through the galaxy, its gravitational pull drags icy bodies out of the Oort Cloud - a vast sphere of rock and dust twice as far away as Nemesis. The Wide-Field Infrared Survey Explorer - expected to find a thousand brown dwarf stars within 25 light years of the Sun - has already sent back a photo of a comet possibly dislodged from the Oort Cloud. Scientists' first clue to the existence of Nemesis was the bizarre orbit of a dwarf planet called Sedna. Scientists believe its unusual, 12,000-year-long oval orbit could be explained by a massive celestial body. Finding Earth threatening comets is the real mission of WISE. FAILED STAR FOUND IN THE NEIGHBORHOOD - August 29, 2011 NASA's WISE satellite has found a Y dwarf star, cool enough to touch, that is the hub of the seventh closest star system to us. THE SUN'S NEW EXOTIC NEIGHBOUR - March 22, 2006 Using ESO's Very Large Telescope in Chile, an international team of researchers discovered a brown dwarf belonging to the 24th closest stellar system to the Sun. Brown dwarfs are intermediate objects that are neither stars nor planets. This object is the third closest brown dwarf to the Earth yet discovered, and one of the coolest, having a temperature of about 750 degrees Centigrade. It orbits a very small star at about 4.5 times the mean distance between the Earth and the Sun. Its mass is estimated to be somewhere between 9 and 65 times the mass of Jupiter. EXTREMELY ECCENTRIC MINOR PLANET TO VISIT INNER SOLAR SYSTEM THIS DECADE - Jun 19, 2021 The object in question is designated 2014 UN271, and it was only recently identified in data from the Dark Energy Survey captured between 2014 and 2018. Size estimates place it anywhere between 100 and 370 km (62 and 230 miles) wide. If it's a comet, it's quite a big one, especially for one coming from the outer solar system. And it turns out, astronomers are about to witness the closest pass of this incredible round trip. Currently, 2014 UN271 is about 22 Astronomical Units (AU) from the Sun (for reference, Earth is 1 AU from the Sun). That means it's already closer than Neptune, at 29.7 AU. And it's not stopping there – it's already traveled 7 AU in the last seven years, and at its closest in 2031, it's expected to pass within 10.9 AU of the Sun, almost reaching the orbit of Saturn. Before then, it's expected to develop the characteristic coma and tail of a comet, as icy material on its surface vaporizes from the heat of the Sun. This close pass would give astronomers an unprecedented close look at Oort cloud objects. 9TH PLANET DISCOVERED? 
RESEARCHERS FIND EVIDENCE OF 'LOST PLANET' IN THE SOLAR SYSTEM - Nov 24, 2020 Experts believe an 'ice giant' was 'was kicked out ... of the solar system by unknown forces' Though researchers continue to hunt for the mysterious Planet 9, experts have discovered evidence that another planet, residing between Uranus and Saturn, "escaped" billions of years ago. * MISSING 9TH PLANET EXISTED IN OUR SOLAR SYSTEM, BUT 'KICKED OUT' INTO DISTANT SPACE, SAY ASTRONOMERS - November 7, 2020 A massive ice giant planet is believed to have existed between Uranus and Saturn, and was 'cast out' into the edges of the solar system 'LOST' WORLD'S REDISCOVERY IS STEP TOWARD FINDING HABITABLE PLANETS - JULY 21, 2020 The rediscovery of a lost planet could pave the way for the detection of a world within the habitable "Goldilocks zone" in a distant solar system. The planet, the size and mass of Saturn with an orbit of thirty-five days, is among hundreds of "lost" worlds that University of Warwick astronomers are pioneering a new method to track down and characterize in the hope of finding cooler planets like those in our solar system, and even potentially habitable planets. * SOMETHING HUGE IS LURKING OUT THERE AT THE EDGE OF THE SOLAR SYSTEM... IT'S WARPING GRAVITY FIELDS - July 9, 2017 * GOODBYE PLANET NINE, HELLO PLANET TEN - ONE UNIVERSE AT A TIME - July 2, 2017 * FORGET PLANET 9—THERE'S EVIDENCE OF A TENTH PLANET LURKING AT THE EDGE OF THE SOLAR SYSTEM - 06/23/2017 * UNSEEN 'PLANETARY MASS OBJECT' SIGNALLED BY WARPED KUIPER BELT - June 22, 2017 In other words, the effect is most likely a real signal rather than a statistical fluke. According to the calculations, an object with the mass of Mars orbiting roughly 60 AU from the sun on an orbit tilted by about eight degrees (to the average plane of the known planets) has sufficient gravitational influence to warp the orbital plane of the distant KBOs within about 10 AU to either side. "The observed distant KBOs are concentrated in a ring about 30 AU wide and would feel the gravity of such a planetary mass object over time," Volk said, "so hypothesizing one planetary mass to cause the observed warp is not unreasonable across that distance." * FORGET PLANET 9—THERE'S EVIDENCE OF A TENTH PLANET LURKING AT THE EDGE OF THE SOLAR SYSTEM - June 23, 2017 Another explanation for the weird KBO orbits could be that a star traveling passed our solar system at some point in the past knocked them out of alignment. "Once the star is gone, all the KBOs will go back to precessing around their previous plane," Malhotra said. "That would have required an extremely close passage at about 100 AU, and the warp would be erased within 10 million years, so we don't consider this a likely scenario." They said the launch of the Large Synoptic Survey Telescope, a new telescope that will survey the sky, should help identify the planet—if it exists. * PLANET 10? ANOTHER EARTH-SIZE WORLD MAY LURK IN THE OUTER SOLAR SYSTEM - June 22, 2017 * SCIENTISTS SAY THERE MAY BE YET ANOTHER UNSEEN PLANET AT SOLAR SYSTEM'S EDGE - June 21, 2017 Kat Volk and Renu Malhotra of the University of Arizona's Lunar and Planetary Laboratory say their analysis points to an eight-degree tilt in the average planes of orbits for the most distant objects in the Kuiper Belt, a ring of icy mini-worlds that lie beyond the orbit of Neptune. Volk and Malhotra say the gravitational influence of a Mars-size object at a distance of 60 astronomical units, or 60 AU, could explain the orbital warp. 
ORBITAL DATA FOR THE PLANETS & DWARF PLANETS
NEW DWARF PLANET IN THE OORT CLOUD CHANGES PICTURE OF SOLAR SYSTEM. . . - March 26 2014
!!! PIN THIS SHIT !!! It's the first step to PX: the orbit of the new planet & the orbit of Sedna show clearly that PLANET X EXISTS !!! Why isn't this pinned?
DWARF PLANET STRETCHES SOLAR SYSTEM'S EDGE - 26 March 2014
This orbit diagram shows the paths of Oort cloud objects 2012 VP113 (red) and Sedna (orange), which circle the Kuiper belt (blue) at the Solar System's edge.
NEW WORLD FOUND AT SOLAR SYSTEM'S EDGE - 27 March 2014
"That's the closest it ever comes, it's a very elongated orbit which goes all the way out to 450 AU, which is very distant," says Sheppard. "We know that it's pinkish red in colour, which suggests that it's probably dominated by water ice and frozen methane on its surface."
Vulcan's aphelion is 448 AU. 250^1.5 = 3953-year period. The admitted period goes up to 4590 years; 276^1.5 = 4585 years.
NEW OBJECT OFFERS HINT OF "PLANET X" - March 26, 2014
The discovery of 2012 VP113, a sizable object roughly twice Pluto's distance from the Sun, has dynamicists wondering whether a super-Earth-size perturber lies undiscovered even farther out. This isn't due to some kind of observational bias, note Trujillo and Sheppard, and it's statistically unlikely to be mere coincidence. Importantly, this kind of orbital alignment means there was no close-passing star at the dawn of solar-system history, because the orbits' orientations would have become randomized in the eons since by gravitational nudges from the outer planets. Instead, the observers suggest, this might be the handiwork of a super-Earth-size planet roughly 250 A.U. from the Sun, in what's considered the inner Oort Cloud of comets. This rogue world would have enough mass to perturb objects like 2012 VP113 and Sedna inward.
Orbital analysis of 2012 VP113, a tiny planetoid beyond Pluto, and other bodies indicates an undetected planet, a giant "super-Earth," lurking on the outer edge of our solar system.
How 2012 VP113 and "Super Earth" compare with some of their neighbors. Excluding its rings, the mean diameter of planet Saturn is approximately 120,000 km (74,500 miles), more than 8 times the diameter of Earth. Notice that the diameters of Earth and the super Earth are compared and range from two to ten Earth diameters, implying that the mass (going as the cube of the diameter) varies from eight to a thousand Earth masses.
A SEDNA-LIKE BODY WITH A PERIHELION OF 80 ASTRONOMICAL UNITS - 27 March 2014
Here we report the presence of a second Sedna-like object, 2012 VP113, whose perihelion is 80 AU. The detection of 2012 VP113 confirms that Sedna is not an isolated object; instead, both bodies may be members of the inner Oort cloud, whose objects could outnumber all other dynamically stable populations in the Solar System.
DOES OUR SOLAR SYSTEM HAVE A SUPER EARTH? Cluster Of Rock At Its Edge Hints At Existence Of An Enormous Planet - 26 March 2014
Researchers have discovered a dwarf planet called 2012 VP113. It was spotted orbiting in a similar formation with up to 900 other objects. Discoveries were made on the edge of our solar system in the Oort cloud. The similar orbits suggest a larger planet, dubbed a Super Earth because of its size, may be creating a shepherding effect on these objects.
NEW OORT CLOUD DISCOVERY RENEWS TALK OF PLANET X? - 26 March 2014
Both Sedna and 2012 VP113 were found near their closest approach to the Sun, but they both have orbits that go out to hundreds of AU, at which point they would be too faint to discover. In fact, the similarity in the orbits found for Sedna, 2012 VP113 and a few other objects near the edge of the Kuiper belt suggests that an unknown massive perturbing body may be shepherding these objects into these similar orbital configurations. Sheppard and Trujillo suggest a super Earth or an even larger object at hundreds of AU could create the shepherding effect seen in the orbits of these objects, which are too distant to be perturbed significantly by any of the known planets. "Sedna was unique for about 10 years but it's now clear that Sedna and 2012 VP113 are just the tip of the iceberg." Intriguingly, Sheppard's team also found a strange alignment when they looked at the orbits of 2012 VP113, Sedna and 10 other objects that lie closer to the sun. "It was a big surprise to us," he says. One explanation for the alignment could be the tug of a rocky planet that is 10 times the mass of Earth and orbits the sun at 250 AU, the team calculates. That world would be cold and faint – and would push and pull at the closer objects like a distant but powerful puppeteer. But instruments such as NASA's Kepler space telescope, which has had particular success in finding such exoplanets, would have no chance of spotting a planet like this one.
NEW DWARF PLANET 'BIDEN' MAY OFFER COSMIC SECRETS - DISCOVERY AT SOLAR SYSTEM'S EDGE HINTS AT MORE TO COME? - Mar 26, 2014
2012 VP113
WHY THE SECRET ABOUT THE 'NEW PLANET'?
It is already known that the body was discovered on October 21, 2003, under the designation 2003 UB313. Why is it now stated that it was spotted in "January", implying that the discovery occurred in 2005? "Xena was first spotted in January. Since then scientists have been checking its position and size before making their announcement. They had hoped to hold back for longer, but a secure website containing details of the discovery was recently hacked and the hacker threatened to release the information."
WHERE ARE YOU HIDING PLANET X, DR. BROWN? - Nov 4, 2009
ASTRONOMERS AT PALOMAR OBSERVATORY DISCOVER A 10TH PLANET BEYOND PLUTO
CONSTANT COMET THREAT
Most scientists have presumed that these star crossings will lead to a shower of comets raining down on the Earth and the rest of the inner solar system. Some have even claimed to find evidence of periodic mass extinctions that might be explained by a single (as-yet-unidentified) star in an elliptical orbit around the sun.
ORBITAL PARAMETERS OF VULCAN AND SOME(?) OF ITS RELATED OBJECTS (angles in degrees)
Object              Period (yr)      Ecc                Incl          Asc. Node     Peri. Arg.      Aphelion (AU)  Perihelion (AU)
Vulcan              4969.0 +30/-24   0.537 +0.09/-0.04  48.44 +3/-9   189.0 +/-1.3  257.8 +6/-13.5  448            135
2012 VP113          4590             0.7106887          24.01461      90.89192      291.14923       472.16 (450)   79.85
2000 CR105          3366             0.804              22.758        128.28        316.6           405            44
Sedna (2003 VB12)   11249            0.849              11.932        144.54        311.47          928            76
2003 UB313          557              0.442              44.18         35.88         151.3           97.6           37.8
1996 TL66           755              0.589              23.9          217.8         184.7           135            35
2003 EL61           ?                0.1888             28.19         121.90        239.51          51.524         35.155
Pluto               248.09           0.2485             17.14         110.303       113.8           49.305         29.658
(A quick Kepler's-third-law consistency check of these values is sketched just after the notes below.)
* Astronomers are already postulating that these objects are captured extrasolar planetesimals from low-mass stars or brown dwarfs encountering the Sun.
** Kenyon and Bromley say that only the detection of objects with orbits inclined 40 degrees or more from the plane of the ecliptic will prove the existence of 'alien' planetoids.
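The period, aphelion and perihelion columns in the table above can be cross-checked with Kepler's third law: for a body orbiting the Sun, the period in years is the semi-major axis in AU raised to the 3/2 power, and the semi-major axis is simply the average of aphelion and perihelion. A minimal Python sketch using the table's own numbers:

    def period_years(aphelion_au, perihelion_au):
        # Kepler's third law for a body orbiting the Sun (P in years, a in AU).
        a = (aphelion_au + perihelion_au) / 2.0   # semi-major axis
        return a ** 1.5

    # Aphelion and perihelion values taken from the table above (AU).
    for name, Q, q in [("Vulcan", 448, 135),
                       ("2000 CR105", 405, 44),
                       ("Sedna (2003 VB12)", 928, 76),
                       ("Pluto", 49.305, 29.658)]:
        print(f"{name}: ~{period_years(Q, q):.0f} yr")   # compare with the Period column

Small differences from the tabulated periods simply reflect rounding in the quoted aphelia and perihelia. The same relation is behind statements elsewhere in this compilation such as "250^1.5 = 3953-year period", and it applies equally to the Harrington/Forbes table further down.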
The preceding Kenyon and Bromley note may be why the existence of 2003 UB313 was kept secret. In other words, it associated these bodies with a brown dwarf star, i.e. Vulcan.
ORBIT FIT AND ASTROMETRIC RECORD FOR 03UB313
ORBIT FIT AND ASTROMETRIC RECORD FOR 00CR105
ORBIT FIT AND ASTROMETRIC RECORD FOR 03EL61
A CONVERSATION WITH PLUTO'S KILLER: Q & A WITH ASTRONOMER MIKE BROWN - 24 August 2011
SPACE.com: What do you think is going on with Sedna? Where did it come from, and why is its orbit so weirdly elliptical?
Brown: We have some very precise information [about Sedna]. We have its very precise orbit, and it's telling you something incredible. It never comes close to a giant planet, which means it did not get placed in that orbit by a giant planet. And yet it had to have been placed in that orbit by something. So the zeroth-order piece of information is that, somewhere out there, something perturbed Sedna, and that thing is no longer there. Now, that thing could've been another planet; it could have been a star that came close to the sun; it could have been a lot of stars, if the sun was born in a cluster.
'TENTH PLANET' HUNTED IN WRONG PART OF THE SKY
It turns out that astronomers failed to aim NASA's infrared telescope at 2003 UB313 correctly, so the object could be even bigger than their estimates suggest.
Sure it was an accident.
ROGUE BROWN DWARF LURKS IN OUR COSMIC NEIGHBORHOOD - Apr 7, 2010
UGPSJ0722-05 is all by itself, floating through interstellar space, possibly having formed there on its lonesome, or kicked out of its host star system by an ancient gravitational game of stellar pinball. Oddly, when looking at the spectrum from UGPSJ0722-05, there is an anomalous absorption line (i.e. a particular wavelength in the electromagnetic spectrum that is missing) that cannot be explained by our current understanding of brown dwarfs. Perhaps the UKIRT has discovered a new breed of brown dwarf; a very cool object with some chemical in its atmosphere that absorbs infrared radiation at a wavelength of 1.25 micrometers.
2003 UB313 Orbit
2003 UB313 has an orbital period of about 560 years, and currently lies at almost its maximum possible distance from the Sun (aphelion), about 97 astronomical units away from the Earth. Like Pluto, its orbit is highly eccentric, and brings it to within 35 AU of the Sun at its perihelion. Unlike the terrestrial planets and gas giants, whose orbits all lie roughly in the same plane as the Earth's, 2003 UB313's orbit is inclined at an angle of about 44 degrees to the ecliptic. The reason it had not been noticed until now is because of its steep orbital inclination: most searches for large outer solar system objects concentrate on the ecliptic plane, in which most solar system material is found. 2003 UB313's albedo must be relatively high (greater than 0.5), making it more Pluto-like than any other Kuiper belt object so far discovered.
OBJECT BIGGER THAN PLUTO DISCOVERED, CALLED 10TH PLANET
Brown said 2003 UB313 appears to be surfaced with methane ice, as is Pluto. That's not the case with other large Kuiper Belt objects. His best estimate is that it is 2,100 miles wide, about 1-1/2 times the diameter of Pluto. The object is inclined by a whopping 45 degrees to the main plane of the solar system, where most of the other planets orbit. That's why it eluded discovery: nobody was looking there until now, Brown said. The team had hoped to analyze the data further before announcing the planet but were forced to do so Friday evening because word had leaked out, Brown said.
"Somebody hacked our website," he said, and "they were planning to make [the data] public." DISTANT WORLD TOPS PLUTO FOR SIZE New observations of the object, which goes by the designation 2003 UB313, show it to have a diameter of some 3,000km - about 700km more than Pluto. MICHAEL E. BROWN Small Planets TENTH PLANET DISCOVERED IN OUTER SOLAR SYSTEM Calculations showed it was near the most distant point of its 560-year orbit - in 280 years it will be only 36 times as far from the sun as the Earth is. (A graphic of its orbit can be viewed here.) Its orbit is unusual in being tilted 44?from the orbital plane of the Earth and most other planets. Brown suspects the planet's orbit was warped by a series of encounters with Neptune. Neptune would tug it back into the ecliptic plane! FOMALHAUT B : Fomalhaut (sounds like "foam-a-lot") is a bright, young, star, a short 25 light-years from planet Earth in the direction of the constellation Piscis Austrinus. In this sharp composite from the Hubble Space Telescope, Fomalhaut's surrounding ring of dusty debris is imaged in detail, with overwhelming glare from the star masked by an occulting disk in the camera's coronagraph. Astronomers now identify, the tiny point of light in the small box at the right as a planet about 3 times the mass of Jupiter orbiting 10.7 billion miles from the star (almost 23 times the Sun-Jupiter distance). Designated Fomalhaut b, the massive planet probably shapes and maintains the ring's relatively sharp inner edge, while the ring itself is likely a larger, younger analog of our own Kuiper Belt - the solar system's outer reservoir of icy bodies. The Hubble data represent the first visible-light image of a planet circling another star. HR 8799: DISCOVERY OF A MULTI-PLANET STAR SYSTEM How common are planetary systems like our own Solar System? In the twelve years previous to 2008, over 300 candidate planetary systems have been found orbiting nearby stars. None, however, were directly imaged, few showed evidence for multiple planets, and many had a Jupiter-sized planet orbiting inside the orbit of Mercury. Last week, however, together with recent images of Fomalhaut b, the above picture was released showing one of first confirmed images of planets orbiting a distant Sun-like star. HR 8799 has a mass about 1.5 times that of our own Sun, and lies about 130 light years from the Sun -- a distance similar to many stars easily visible in the night sky. Pictured above, a 10-meter Keck telescope in Hawaii captured in infrared light three planets orbiting an artificially obscured central star. The 8-meter Gemini North telescope captured a similar image. Each planet likely contains several times the mass of Jupiter, but even the innermost planet, labelled d, has an orbital radius near the equivalent of the Sun- Neptune distance. Although the HR 8799 planetary system has significant differences with our Solar System, it is a clear demonstration that complex planetary systems exist, systems that could conceivable contain an Earth-like planet. Gravity Probe B 'Results to Date' Does It Show Solar System Motion? It was recently reported by NewScientist that Gravity Probe B received an "F?from the U.S. Government and the project would receive no more funding. But after netting out the spacecraft and earth orbit motions the remaining signal was far larger than anyone expected. In fact, it is so large it either means there is some unforeseen problem with the gyros or that our sun is part of a binary star system. 
Yes, that's right, if the data is correct our solar system is curving through space (carrying the earth and spacecraft with it of course) so rapidly that the only way to explain it is if our sun is gravitationally bound to another nearby star. When I met with the GP-B team at Stanford last fall they were still in the early process of analyzing the data but openly discussed the idea of an unknown companion to our sun, including the possibility of a not too distant black hole.
Solar system a bit squashed, not nicely round
The solar system may not be a nice round shape, but rather a bit squashed and oblong, according to data from the Voyager 2 spacecraft exploring the solar system's outer limits, scientists said on Wednesday.
Bridging Heaven & Earth Show # 215 w/ Walter Cruttenden & "The Great Year"
The Great Year = Vulcan's period, about 4969 years, not 24,000 years.
NEMESIS: DOES THE SUN HAVE A 'COMPANION'?
Note references to two Vulcan web pages.
PLANET X AND THE POLE SHIFT
A look at the Science behind Planet X
IS THERE A PLANET X? - 31 January 2009
That is by no means a general consensus. An early, slow outward migration of the giant planets (see "How was the solar system built?") could also explain some of these strange KBO orbits - although it has difficulty explaining all of the belt's observed properties. The discovery of a further planet would be thrilling, he says. The only explanation for its presence there would be that large bodies coalesced very early in the solar system's history, only to be ejected by the gravity of the giant planets later on. That would firm up our ideas about how the solar system must have developed, and perhaps be a stepping stone towards its even more distant recesses.
It takes two stellar bodies to form a solar system (or two Kerr holes to form the seeds for a star's planets).
THE LOCATION OF PLANET X - by Harrington, R. S., Astronomical Journal (ISSN 0004-6256), vol. 96, Oct. 1988, p. 1476-1478
Sounds like one of Forbes' planets (see Table 4A). This also implies Forbes' Vulcan-like planet was taken seriously. Note that a follow-up search for Planet X was conducted, but results were negative or not reported. Note also the depiction of Planet X at about 50 AU in a picture from the 1987 New Science and Invention Encyclopedia illustrated above. NOTE ALSO THAT PLANET X IS NOT A COMPANION TO THE SUN AS IS THE DEAD STAR DEPICTED IN THE ABOVE ILLUSTRATION.
HARRINGTON'S NEW AND FORBES' OLD PLANET X (angles in degrees)
                      Harrington's planet   Forbes' planet
Period (years)        1019                  1076
Eccentricity          0.411                 0.167
Inclination           32.4                  52
Asc. Node             275.4                 247
Perihelion Arg.       208.5                 115
Perihelion Epoch      6 Aug. 1789           1702
Aphelion (AU)         142.8                 122
Perihelion (AU)       59.6                  87
SEARCH FOR PLANET X - Harrington, Robert S.
The observation of the region of the sky in which it is believed Planet X should now be, based on perturbations observed in the motions of Uranus and Neptune, was determined, and there was no reason to update that determination. A limited area of that region was photographed, and that will be continued. . . . Blinking will be done as soon as the plates are received in Washington.
Was blinking done for the IRAS objects in question? I have no evidence that it was.
Planet Projected at Solar System's Edge
"But it would be the first time to discover a celestial body of this size, which is much larger than Pluto," Mukai said.
Japanese scientists eye new planet
This illustration released by Kobe University shows a planet -- half the size of Earth -- which is believed to be in the outer reaches of the solar system. The researchers at Kobe University have said that their theoretical calculations using computer simulations lead them to conclude it was only a matter of time before the long-awaited "Planet X" was found. Planet X -- so called by scientists as it is yet unfound -- would have an oblong elliptical solar orbit and circle the sun every thousand years, the team said, estimating its radius was 15 to 26 billion kilometres.
(13) MOON OR CAPTURED ASTEROID?
The orbit of the escaped Neptunian moon often resembled that of Pluto's present-day orbit, even with its roughly 3:2 resonance of orbital periods, which turns out to be moderately insensitive to initial conditions. Interestingly, in these trials it was easily possible to get two Neptunian moons to escape. When that happened, the escapes were in the same general direction and with similar velocities. But once Pluto recedes far from Neptune, its own gravitational sphere of influence (within which it can hold moons of its own) expands to about 10 million kilometers. This means that any body closer than that to Pluto and roughly co-moving with it will become permanently trapped as a moon of Pluto. Tidal forces would then circularize the orbit of such a moon, bringing it closer to Pluto, and melting and increasing the density of Pluto and its moon in the process. The final state would be just like what we observe: Charon is just such an unusually large moon of Pluto with quite high angular momentum, as if its orbit around Pluto were once quite a bit larger than it is today. Our scenario predicts that both bodies were former, independent moons of Neptune, stripped away by Planet X; and that Charon passed from Neptune's sphere of influence directly into Pluto's, without ever being in a solar orbit of its own.
This 5-Earth-mass planet is what this site considers to be Septimus.
A POSSIBLE ORBIT OF A 10TH PLANET
The solar system may have a tenth planet lurking beyond the orbit of Pluto, calculations by astronomers in Britain and Argentina indicate. "Planet X" could lie 60 times further from the Sun than the Earth. The new planet, thought to be the same size as Earth, would lie on the inside edge of the Kuiper Belt, a distant region of the solar system principally composed of small pieces of rock and interstellar leftovers from the creation of the solar system. Dr Melita and Dr Adrian Brunini, of the University of La Plata in Argentina, suggested the sharp edge of the Kuiper Belt was caused by a similar sweeping, and that had, over time, created a planet-sized object whose orbit of the sun would be almost circular, but angled by 20 degrees to that of the inner planets.
A 60 AU orbit suggests a 464-year period, assuming that it is not very eccentric. This is similar to Forbes' anticipated planet with a 1,076-year period that could be the infamous Septimus.
SIGNS OF A HIDDEN PLANET? - 30 March 2001
The most exciting possibility is that a mid-sized planet at some 10 billion kilometers (58 AU) from the sun caused 2000 CR105's orbit. And because such a planet would not be very vulnerable to orbit disruptions, it could still be there, the team says.
Possibly Septimus?
A HISTORY OF GREAT DEATHS
Two ideas for how to perturb the Oort Cloud have been proposed:
1. A dark sister sun called Nemesis: Our sun formed as a binary, with a secondary star that never began to shine (too small for fusion). Interaction of the orbits of the two stars could regularly cause gravitational perturbation of the Oort Cloud. Systematic search for Nemesis has not yet revealed such a dark star.
2. Galactic Plane Oscillation: Our solar system is not static within the galaxy, but oscillates up and down through the main symmetry plane of the Galaxy with about a 60 million year periodicity. Each passage through the plane could lead to the surrounding Oort cloud being perturbed by increased interstellar mass in the galactic plane. This would release a hail of comets, some of which would strike the Earth. The timing is not quite perfect, but it is on the right scale. We are now pretty much in the middle of a cycle, so would not expect another mass extinction for millions of years.
A MYSTERY REVOLVES AROUND THE SUN
Astronomers postulate a distant massive planet may draw comets from the Oort cloud.
A MYSTERY REVOLVES AROUND THE SUN - link down?
In another paper, Matese et al. postulate that a 1.5 to 6 Jupiter-mass object, possibly a dark star or our Sun's binary companion, exists in the Oort cloud at about 25,000 +/- 5,000 AU. A normal to its orbital plane is within 90 +/- 5 degrees of the North Galactic Pole.
COMETARY EVIDENCE OF A MASSIVE BODY IN THE OUTER OORT CLOUD - J. J. Matese, P. G. Whitman, and D. P. Whitman. Accepted for publication in Icarus as of 19 May 1999.
DOES THE SUN HAVE A DOOMSDAY TWIN?
(2) THE EFFECTS ON THE EKB OF A LARGE DISTANT TENTH PLANET
We investigate the orbital evolution of both real and hypothetical Edgeworth-Kuiper Objects in order to determine whether any conclusions can be drawn regarding the existence, or otherwise, of the tenth planet postulated by Murray (1999). We find no qualitative difference in the orbital evolution, and so conclude that the hypothetical planet has been placed on an orbit at such a large heliocentric distance that no evidence for the existence, or non-existence, can be found from a study of the known Edgeworth-Kuiper Objects.
THE THEORIZED COMPANION STAR
Through its gravitational pull, it unleashes a furious storm of comets in the inner solar system lasting from 100,000 to two million years. "Several of these comets strike the earth. Heavy snows are driven and fall from the world's four corners; the murder frost prevails. The Sun is darkened at noon; it sheds no gladness; devouring tempests bellow and never end. In vain do men await the coming of summer. Thrice winter follows winter over a world which is snow-smitten, frost-fettered, and chained in ice." - "Fimbul Winter" from the Norse saga, Twilight of the Gods
BRITON FINDS CLUES TO ROGUE PLANET - Tim Radford, Science Editor, Guardian Unlimited, Friday October 8, 1999
ASTRONOMERS CAPTURE PHOTO OF EXTRASOLAR PLANET
The system is young, so the planet is rather warm, like a bun fresh out of the oven. That warmth made it comparatively easier to see in the glare of its host star compared with more mature planets. Also, the planet is very far from the star -- about 100 times the distance between Earth and the Sun, another factor in helping to separate the light between the two objects. The planet is about 3,140 degrees Fahrenheit (2000 Kelvin) -- not the sort of place that would be expected to support life. Neuhaeuser's team has also detected water in the planet's atmosphere. The world is expected to be gaseous, like Jupiter. It is about twice the diameter of Jupiter.
The mass estimate - one to two times that of Jupiter - is "somewhat uncertain," Neuhaeuser said. The planet is three times farther from GQ Lupi than Neptune is from our Sun. "We should expect that the planet orbits around the star, but at its large separation one orbital period [a year] is roughly 1,200 years, so that orbital motion is not yet detected."
It seems rather Vulcan-like, but its orbital elements are not known.
CHANGING ORBIT IS SIMPLE, REALLY
But somewhere out there, at around 30,000 AUs, along the outer band of a massive cosmic debris field called the Oort Cloud, something very big and very weird is going on.
THE SHIVA HYPOTHESIS - PERIODIC MASS EXTINCTIONS
One hypothesis is that this corresponds to the solar system oscillating through the galactic plane as it orbits the Milky Way. Rampino notes that the last crossing of the galactic plane occurred a few million years ago and it has been suggested that this led to a disturbance of comets in the Oort Cloud, some of which could now be approaching the inner solar system.
This implies that solar system oscillations are responsible for the comets, suggesting Matese's planet.
TERRESTRIAL RECORD OF THE SOLAR SYSTEM'S OSCILLATION ABOUT THE GALACTIC PLANE
(2) SUN'S HIDDEN TWIN STALKS PLANET EARTH
(3) GALACTIC OSCILLATIONS AS A SOURCE OF PERIODIC IMPACTS
The Rampino theory is not viable, since the oscillations of the sun in the Galaxy are quite small, and the variations in density experienced by the solar system are too small to affect impacts...
(10) COMETS, ASTEROIDS & THE GALACTIC PLANE - 15 March 2001
What if these extinction collisions are caused by asteroids on the one hand, and by comets on the other? Is it possible that the gravitational consequences of the passage through the galactic plane have altered the orbits of both classes of objects on a roughly 26 million year basis? Chances of major orbital disturbances will vary according to the velocity with which our Solar System crosses the plane... First the Oort Cloud comets in the forefront, then Kuiper Belt Objects, followed by planets, moons, and in time asteroids, will switch from accelerating with the Sun to decelerating relative to it - while all those coming along behind, including the Sun, are still accelerating. Once the Sun itself crosses, then it is decelerating upwards with the forward objects, but the rearward objects are still accelerating downwards towards it from behind. The velocity change from acceleration to deceleration may not amount to much, but it is the fact that there is a change that will cause a resulting mishmash of movement that in turn will almost certainly cause gravitational events that again in turn may well result in some of those bodies ... of all types diving into the inner Solar System. Collisions would not be inevitable, but certainly much more likely.
A GENOCIDAL ORBIT? - THE SOLAR SYSTEM'S JOURNEY THROUGH THE MILKY WAY - May 07, 2009
Many of the ricocheted rocks collide with planets on their way through our system, including Earth. Impact craters recorded worldwide show correlations with the ~37-million-year cycle of these journeys through the galactic plane - including the vast impact craters thought to have put an end to the dinosaurs two cycles ago.
VARIABLE OORT CLOUD FLUX DUE TO THE GALACTIC TIDE - Matese et al.
The last crossing was at around 1.5 Myr ago and the next is 1 Myr in the future, based on crater flux peaks.
The perigalactic period is estimated to be 180 Myr.
DYNAMICS IN DISK GALAXIES
Our period about the galactic center is 2.2 x 10^8 yr, or 220 Myr.
COMETS/ASTEROIDS/BROWN DWARFS
ASTRONOMY AND SPACE SITES
SCIENTISTS TRACE METEOR SHOWERS BACK TO LONG-PERIOD COMETS - JUNE 19TH, 2021
The researchers analyzed a total of 89 major geological events that have occurred on Earth in the past 260 million years, accounting for 6 percent of the total age of the planet. They discovered two important facts: first, that these global geological events have a recurrence of 27.5 million years. Second, that the last critical moment of the planet took place 7 million years ago, which means that there are still 20 million years to go until the next geological upheaval.
SCIENTISTS TRACE METEOR SHOWERS BACK TO LONG-PERIOD COMETS - Jun 19, 2021
THE DINOSAUR KILLING ASTEROID HIT EARTH AT DEADLIEST POSSIBLE ANGLE - May 27, 2020
"Our simulations provide compelling evidence that the asteroid struck at a steep angle, perhaps 60 degrees above the horizon, and approached its target from the north-east. We know that this was among the worst-case scenarios for the lethality on impact, because it put more hazardous debris into the upper atmosphere and scattered it everywhere - the very thing that led to a nuclear winter." The results are published in Nature Communications. The simulations, which used a 17-km diameter asteroid with a density of 2,630 kg/m^3 and a speed of 12 km/s, were performed on the Science and Technology Facilities Council (STFC) DiRAC High Performance Computing Facility.
* ALL COMETS IN THE SOLAR SYSTEM MIGHT COME FROM THE SAME PLACE - Sep 9, 2019
All comets might share their place of birth, new research says. For the first time ever, astronomer Christian Eistrup applied chemical models to fourteen well-known comets, surprisingly finding a clear pattern. His publication has been accepted in the journal Astronomy & Astrophysics.
Cometary Compositions Compared With Protoplanetary Disk Midplane Chemical Evolution. An Emerging Chemical Evolution Taxonomy For Comets - 25 Jul 2019
With a growing number of molecules observed in many comets, and an improved understanding of chemical evolution in protoplanetary disk midplanes, comparisons can be made between models and observations that could potentially constrain the formation histories of comets. A chi-squared method was used to determine maximum-likelihood surfaces for 14 different comets that formed at a given time (up to 8 Myr) and place (out to beyond the CO iceline) in the pre-solar nebula midplane. This was done using observed volatile abundances for the 14 comets and the evolution of volatile abundances from chemical modelling of disk midplanes. Considering all parent species (ten molecules) in a scenario that assumed reset initial chemistry, the chi-squared likelihood surfaces show a characteristic trail in the parameter space with high likelihood of formation around 30 AU at early times and 12 AU at later times for ten comets. This trail roughly traces the vicinity of the CO iceline in time. The formation histories for all comets were thereby constrained to the vicinity of the CO iceline, assuming that the chemistry was partially reset early in the pre-solar nebula. This is found both when considering carbon-, oxygen-, and sulphur-bearing molecules (ten in total), and when only considering carbon- and oxygen-bearing molecules (seven in total).
Since these 14 comets did not previously fall into the same taxonomical categories together, this chemical constraint may be proposed as an alternative taxonomy for comets. Based on the most likely time for each of these comets to have formed during the disk chemical evolution, a formation time classification for the 14 comets is suggested.
NIBIRU DETAILS LEAKED BY INSIDER! JUST RELEASED, A MUST WATCH... - Mar. 17, 2019
Interesting Files on Nibiru - Planet X - Mar 15, 2019
Mentions an infrared binary twin star that has (@05:25) seven planets and two debris trails with (@05:50) rock and icy bodies in them. The seventh planet's earliest pass by Earth (@07:26), at 0.015 AU, is put at 2023 to 2024.
* ASTRONOMY AND SPACE NEWS - ASTRO WATCH: HEAVY STELLAR TRAFFIC, DEFLECTED COMETS, AND A CLOSER LOOK AT THE TRIGGERS OF COSMIC DISASTER - September 1, 2017
* EXPLORING THE TRIGGERS OF COSMIC DISASTER - August 31, 2017
SCIENTIST: TONS OF DANGEROUS COMETS EXIST - July 29, 2017
The final reason researchers think LPCs are dangerous is their relative stealthiness compared to other space rocks. Detecting an LPC on a collision course with Earth would be more difficult than spotting a more conventional near-Earth asteroid. "The larger distance of comets, and the long orbital periods affect the warning time more than higher velocities: the generally larger distance of comets make the tracking observations less effective (since they are angular measurements), and the longer orbital periods mean that we don't have multiple opportunities to see these objects at closer ranges," Chodas said. "The distance at which a comet is discovered depends largely on the activity level of the comet."
CIVILIZATION-DESTROYING COMETS ARE MORE COMMON THAN WE THOUGHT - July 26, 2017
Comets are a different story. Some comets have orbits that take them billions of miles away, out to the very edges of the solar system. Some of these comets only venture near the Earth once every few centuries as they come in close to loop around the sun before slingshotting back out. These long-period orbits make the comets nearly impossible to detect. According to data from WISE, there may be seven times as many large, long-period comets as we thought. These are rocks measuring at least 1 kilometer, large enough to cause a cataclysmic loss of life if they strike. WISE used its infrared detector to map the number of comets out there, and it found that they were both larger and more numerous than scientists previously believed. This translates into a greater risk of a comet impact.
OUR SOLAR SYSTEM IS FULL OF HUGE, RECENTLY DISCOVERED COMETS - July 25, 2017
"Our results mean there's an evolutionary difference between Jupiter family and long-period comets," Bauer said. If there really are a lot of long-period comets, there probably have been many that are already gone after crashing into planets. Those collisions would have brought some material to the planets' surfaces — which could be exciting for the scientists who theorize that comets delivered the necessary ingredients for life on Earth, including water, atmospheric gases and organic material. And whether comets brought life to Earth or not, further understanding them could help keep life here in the future. "Comets travel much faster than asteroids, and some of them are very big," study co-author Amy Mainzer said in the NASA statement. "Studies like this will help us define what kind of hazard long-period comets may pose."
NEW COMET: C/2017 O1 - 25 Jul 2017
CBET nr. 4414, issued on 2017 July 24, announces the discovery of a comet (magnitude ~15.3) in the course of the "All-Sky Automated Survey for Supernovae" (ASASSN) program, from images taken with the 14-cm "Cassius" survey telescope at Cerro Tololo on July 19.32 UT. The new comet has been designated C/2017 O1. Visual estimates have the comet at mag. ~10 on July 24, 2017. Syuichi Nakano, Sumoto, Japan, notes on CBET 4414 that this comet could reach total visual magnitude 7 during September-November.
RED STAR KACHINA APPEARS IN THE SKY OVER EAST WENATCHEE, WASHINGTON (Video) - June 21, 2017
According to the Hopi Elders, the red katchina will bring the day of purification. On this day the Earth, her creatures and all life as we know it will change forever. There will be messengers that will precede this coming of the purifier. They will leave messages to those on Earth who remember the old ways.
* NASA INFRARED TELESCOPE SHOWS PLANET X, NIBIRU VERY CLEARLY - June 18, 2017
WISE scanned the entire sky between 2010 and 2011, producing the most comprehensive survey at mid-infrared wavelengths currently available. With the completion of its primary mission, WISE was shut down in 2011. It was then reactivated in 2013 and given a new mission assisting NASA's efforts to identify potentially hazardous near-Earth objects (NEOs), which are asteroids and comets on orbits that bring them into the vicinity of Earth's orbit. The mission was renamed the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE).
NASA seems worried about NEOs, and justifiably so.
* THE SUN MAY HAVE HAD AN 'EVIL TWIN' THAT WIPED OUT THE DINOSAURS - June 15, 2017
It's likely that the sun's twin could have been significantly far apart from its sibling, perhaps as much as 46.5 billion miles away, or 17 times the distance between the sun and Neptune.
THE KUIPER CLIFF MYSTERY - WHY DOES THE KUIPER BELT SUDDENLY END? - June 12, 2017
Supporters of this theory state this is evidence of the mythical Planet X or Planet 9 that astronomers are still trying to find. Some years ago, astronomers ran several simulations, and discovered that a small planet, around half the size of the Earth, could have formed inside Neptune's orbit. It could have been tossed into a bigger orbit by Neptune, and then knocked around the orbits of the ice balls, distorting their orbits and creating the Kuiper Cliff. The birth of this hypothetical planet could have occurred during a time when there was plenty of material in the early solar system.
MYSTERY OF DISAPPEARING ASTEROIDS SOLVED - 22-Feb-2016
Ever since it was realized that asteroid and comet impacts are a real and present danger to the survival of life on Earth, it was thought that most of those objects end their existence in a dramatic final plunge into the Sun. A new study published on Thursday in the journal Nature finds instead that most of those objects are destroyed in a drawn-out, long hot fizzle, much farther from the Sun than previously thought. This surprising new discovery explains several puzzling observations that have been reported in recent years.
SURPRISE METEOR SHOWER ON NEW YEAR'S EVE - February 24, 2016
A new network of video surveillance cameras in New Zealand has detected a surprise meteor shower on New Year's Eve. The shower is called the Volantids, named after the constellation Volans, the flying fish, from which the meteoroids appear to stream towards us. "The parent body of this stream still eludes us," says Soja.
"It may not be active now and the high inclination may make it difficult to spot." The more intelligent doomsayers claim that Planet Nine's gravity well might slingshot asteroids toward the Earth, resulting in potentially devastating meteor strikes. Scientifically, this theory carries more weight: the gravitational effects of Planet Nine (or whatever's out there) are documented. After all, Fatty was hypothesized in the first place because of the apparent effects of its gravity well on small, rocky objects. So it's within the realm of possibility that one or two of those objects could slingshot their way toward Earth. But it's still not all that likely—remember that space is still very, very big. Even after an object was thrown back toward our neighborhood, it would still have to actually hit Earth, instead of just continuing on into the vast, surrounding emptiness. It's possible, but it's far from likely. Astronomer Scott Sheppard has said that Planet Nine could "throw a few small objects into the inner solar system every so often, but [won't] significantly increase the odds for a mass extinction event." PASSING STARS MAY SEND COMETS SMASHING INTO EARTH – EVENTUALLY - Jan 07, 2015 WANDERING DEATH STARS DANGEROUSLY APPROACHING EARTH - Jan 06, 2015 "For any slice of time over the past 500 million years, you'd expect there to be mostly the same number of close encounters, and there's still life on Earth," he explained to NBC. "So however bad this is, generally, it doesn't look lethal." "There have been a lot of mass extinctions, and some fraction of them could have been due to comet impacts...but it wasn't the end of the world," he added. COMET DUST FOUND ON EARTH'S SURFACE FOR FIRST TIME: MICROSCOPIC PARTICLES DISCOVERED 58 FEET UNDER ANCIENT ANTARCTIC ICE - 8 December 2014 Japanese scientists discovered comet particles at Tottuki Point, Antarctica They found more than 40 spherical fragments which contained glass-like material, iron, magnesium, potassium, sulphur, calcium and nickel Dust is similar to comet fragments collected by Nasa's Stardust mission Comet particles were not thought to be able to survive on Earth's surface FIRST OBSERVATIONS OF THE SURFACES OF OBJECTS FROM THE OORT CLOUD - November 10, 2014 When the object was observable again in the spring, the team used the Gemini North telescope to obtain a spectrum of the surface, which showed that it was very red, completely different from comet or asteroid surfaces, and more like the surface of an ultra-red Kuiper belt object. "We had never seen a naked (inactive) Oort cloud comet, but Jan Oort hypothesized their existence back in 1950 when he inferred the existence of what we now call the Oort cloud. Oort suggested that these bodies might have a layer of "volatile frosting" left over from 4.5 billion years of space radiation that disappears after their first pass through the inner solar system. COMET SIDING SPRING INTERACTION WITH MARS EXPLAINED!! Update fast moving clouds causes effect, claims astronomer, who took the pictures! DID SIDING SPRING DISCHARGE WITH MARS? MASSIVE EXPLOSION ON MARS AS COMET COMET SIDING SPRING PASSES BY MARS (Video) - October 20, 2014 COMET ISON FELL APART EARLIER THAN REALIZED - JULY 21, 2014 COMET ISON WAS DESTROYED IN ONE FINAL VIOLENT OUTBURST - 20 July, 2014 COMET ISON OUTBURST! ISON WITH 3 TAILS & SECONDARY OBJECT NEAR COMA! - Nov 13, 2013 The newest findings show that Comet ISON Has formed 3 tails , has had an outburst that shows brightening and a Secondary Object Found Near the Coma of ISON! 
ISON, ENCKE AND LOVEJOY/SOLAR UPDATE - Nov 13, 2013
BIZARRE SIX-TAILED ASTEROID LOOKS LIKE A COMET, OBJECT ORBIT SURPRISES ASTRONOMERS - November 7, 2013
RUSSIAN METEOR BLAST A WARNING FOR EARTH ACCORDING TO EXPERTS - November 6, 2013
RUSSIA METEOR STRIKE WAS 'WAKE-UP CALL' - NOVEMBER 07, 2013
A METEOR that exploded over Russia in February was 20 metres in diameter and caused a blast equivalent to 600,000 tons of TNT, according to scientists studying the event. Professor Qing-Zhu Yin, from the Department of Earth and Planetary Sciences at the University of California at Davis, US, said the meteor strike was a "wake-up call". Three quarters of the rock evaporated in the explosion, said the researchers, whose findings are reported in the journal Science. Most of the rest of the object became a glowing orange dust cloud and only a small fraction - still weighing 4,000 to 6,000 kg - fell to the ground. The largest single fragment, weighing about 650 kg, was recovered from the bed of Lake Chebarkul in October. The object may have come from the Flora asteroid family in the asteroid belt between the orbits of Mars and Jupiter. But the chunk that exploded over Chelyabinsk is not thought to have originated in the asteroid belt itself, the experts believe.
NEW SUPER HI VISION FOOTAGE OF COMET ISON! - November 6, 2013
It's expected to reach its closest approach to the sun on Thanksgiving Day, Nov. 28. This will be Comet ISON's first trip around the sun. Scientists say it could be broken apart by the sun's gravity and baked by its heat, or there could be a bright and beautiful sky-show.
NEW ISON CLOSE UPS/MULTIPLE OBJECTS - November 4, 2013
RED ALERT! COMET ISON ENTERS DANGER ZONE - November 4, 2013
ISON/MULTIPLE OBJECTS IN COMA, "PECULIAR STRUCTURE IN INNER TAIL"
This big, dirty snowball is outgassing from the side that is facing the Sun! And there's also something peculiar! And it was copyrighted peculiar! That's peculiar too!
NEW HUBBLE IMAGE OF ISON. MULTIPLE OBJECTS. - Oct 17, 2013
ANOTHER EXPERT AGREES WITH DARK COMET THEORY - February 21, 2013
ORBIT FIT AND ASTROMETRIC RECORD FOR 00CR105 - Aug 11 02:19:17 2005
  Epoch: 2451580.5000 = 2000/02/06
  Mean Anomaly: 3.72869 +/- 0.022
  Argument of Peri: 316.58338 +/- 0.047
  Long of Asc Node: 128.27893 +/- 0.000
  Inclination: 22.75794 +/- 0.001
  Eccentricity: 0.80380591 +/- 0.0008
  Semi-Major Axis: 224.62610637 +/- 0.8689
  Time of Perihelion: 2438844.2316 +/- 4.0
  Perihelion: 44.07031413 +/- 0.2557
  Aphelion: 405.18189861 +/- 1.5788
  Period (y): 3366.6545 +/- 19.53
KUIPER BELT: NEW HORIZONS GETS FIRST GLIMPSE OF PLUTO
(82158) 2001 FP185 - ORBITAL ELEMENTS
METEOR SHOWER REVEALS NEW COMET
NEO, OCTOBER CAMELOPARDALIDS and BREAKING COMET NEWS
COMET'S COURSE HINTS AT MYSTERY PLANET
The giant comet, known as 2000 CR105, measures some 400 kilometers across. Current wisdom holds that they have been scattered into their eccentric trajectories by the gravitational pull of a giant planet, probably Neptune. If so, basic orbital mechanics dictates that these "scattered disk objects" should swing nearest the sun at perihelion points close to Neptune's orbit, some 4.5 billion kilometers from the sun. But comet 2000 CR105, first discovered in February 2000, doesn't follow this script. The most exciting possibility is that a planet-sized body still hides in the outer solar system. "A Mars-sized body [at an average distance of some 15 billion kilometers] could scatter a body like 2000 CR105 to its present orbit," Gladman and his colleagues write in their Icarus paper.
Unlike Mars, the planet would consist mainly of ice. Because its high mass would protect it from orbital disruptions, the astronomers say, it could still be around.
Vulcan is about 150 earth masses, but currently about 41 billion miles away, and comet swarms passing through the inner solar system, with some hitting the earth, indicate that it is still around.
Large or small, astronomers agree that whatever nudged 2000 CR105 into its large, distant orbit is bound to have done the same to other TNOs. "Finding more would give us a better idea of how they got there," Levison says.
A COMET'S ODD ORBIT HINTS AT HIDDEN PLANET
Such an oblong orbit is usually a sign that an object has come under the gravitational influence of a massive body.
IS THERE A LARGE PLANET ORBITING BEYOND NEPTUNE?
The team also suggests that the perturbation of 2000 CR105 could be caused by a "resident planet" of a size somewhere between our Moon and Mars that formed in these outer regions, well beyond the orbit of Pluto, at a distance of 10 billion kilometers from the sun.
WHEN COMETS BREAK APART
The unexpected breakup of comets, some at considerable distances from the Sun, has long baffled comet researchers. Marsden wrote: "Although most of the comets observed to split have done so for no obvious reason, one really does require an explanation when the velocity of separation is some 20% of the velocity of the comet itself! A collision with some asteroidal object at 200 A.U. from the sun, and 100 A.U. above the ecliptic plane, even though it would only have to happen once, is scarcely worthy of serious consideration."
Vulcan's semi-major axis is about 200 A.U. and is inclined about 48 degrees to the ecliptic.
SCENARIOS FOR THE ORIGIN OF THE ORBITS OF THE TRANS-NEPTUNIAN OBJECTS 2000 CR105 AND 2003 VB12 - 15 Mar 2004
In this paper we explore four seemingly promising mechanisms for explaining the origin of the orbit of this peculiar object: (i) the passage of Neptune through a high-eccentricity phase, (ii) the past existence of massive planetary embryos in the Kuiper belt or the scattered disk, (iii) the presence of a massive trans-Neptunian disk at early epochs which exerted tides on scattered disk objects, and (iv) encounters with other stars. Of all these mechanisms, the only one giving satisfactory results is the passage of a star. Indeed, our simulations show that the passage of a solar mass star at about 800 AU only perturbs objects with semi-major axes larger than roughly 200 AU to large perihelion distances.
2000 CR105's Comet-Like Orbit
The comet known as 2000 CR105 is the second most distant known object in the solar system and circles the sun in a highly eccentric orbit every 3175 years at an average distance of 224 AU. One theory states that they were pulled from their original positions by a passing star or a very distant and undiscovered giant planet.
This comet tends to verify Vulcan's existence. Vulcan's aphelion is close to 448 AU, and the comet swarms are believed to form at a nominal aphelion of 444 AU with a theoretical 3314.7 +/- 18.3-year period (from impact data, a 3332.6 +/- 119-year period).
PERIHELION 76 AU, APHELION 990 AU, PERIOD 12300 YEARS: THE FIRST BODY RESIDING IN THE OORT CLOUD?
2003 VB12 stays way outside the Kuiper Belt at all times / Red color, high albedo, diameter 1300 to 1800 km / Could be the first of many such objects at the inner edge of the Oort cloud
OORT CLOUD & SOL b?
Although inclined by only around 11.9 degrees from the ecliptic where the eight major planets orbit, Sedna's distant orbit is extremely elliptical, indicating that its formation and orbit may have been influenced by a passing nearby star during the early years of the Solar System, when Sol formed out of a molecular cloud with many other close-by stars around 4.6 billion years ago. Like 2000 CR105, Sedna may have been perturbed by a Solar-mass star at around 800 AUs from Sol more than 100 million years after its birth, given today's observed numbers of Oort Cloud comets. Hence, 2000 CR105 and Sedna are less likely to be members of the scattered disk that had their perihelion distances "increased by chaotic diffusion", or the result of other hypotheses.
THE DARK STAR: Could the reason why we are finding more and more evidence for its existence be that it really does exist?
The researchers thought up another improbable scenario that managed to explain Sedna's orbit remarkably well. Sedna could have been born around a brown dwarf about 20 times less massive than the Sun and captured by our Solar System when the brown dwarf approached. "What's striking about this idea is how efficient it is," says Levison, whose calculations suggest about half of the material orbiting the dwarf would have gone into orbit around the Sun.
THE DEEP ECLIPTIC SURVEY
Exploring the outer solar system in search of trans-Neptunian objects
NATURAL CATASTROPHES DURING BRONZE AGE CIVILISATIONS
The final paper in the section on archaeology, geology and climatology is by Euan MacKie, who begins by warning that astronomers will have to produce clear evidence of comet swarms or the likelihood of large impacts at specific dates before most archaeologists will be willing to re-examine their data with this in mind.
NEW 'MOON' FOUND AROUND EARTH
It was soon realized, however, that far from passing us, it was in fact in a 50-day orbit around the Earth. The American Jet Propulsion Laboratory in California says it must have just arrived or it would have been easily detected long ago.
MAJOR NEWS ABOUT MINOR OBJECTS
LIST OF TRANSNEPTUNIAN OBJECTS
HUGE ROCK-ICE BODY CIRCLES SUN
It is in a so-called resonance orbit with Neptune. This means that it completes two orbits of the Sun for every three completed by the eighth planet. Such orbits are stable as they allow the object to approach Neptune's orbit without any possibility of collision. Pluto, currently the most distant true planet, is in such an orbit. Since the first Kuiper Belt Object was discovered in 1992, several hundred have been found, and many of them are in the Neptune resonance condition, too.
PLUTINOS
A surprising fraction - 40% - of Kuiper Belt objects have orbital periods close to Pluto's. 246 years is 3/2 of Neptune's period of 164 years and is a stable resonance that allows the object to avoid being perturbed by Neptune. In the asteroid belt, similarly, there are gaps where Jupiter would have 2, 5/2, or 3 times the asteroid's period, but a cluster of asteroids where Jupiter has 3/2 the asteroid's period.
CASSINI PICTURES SPONGY HYPERION
Much of the interior of Hyperion is empty space, suggesting it is little more than a pile of space rubble.
Like a Kuiper Belt object.
THE KUIPER BELT AND THE OORT CLOUD
Several Kuiper Belt objects have been discovered recently, including 1992 QB1 and 1993 SC (above). They appear to be small icy bodies similar to Pluto and Triton (but smaller). There are more than 300 known trans-Neptunian objects (as of mid 2000); see the MPC's list.
Many orbit in 3:2 resonance with Neptune (as does Pluto). Color measurements of some of the brightest have shown that they are unusually red.
LIST OF CENTAURS AND SCATTERED-DISK OBJECTS
RECENT RESEARCH - Kuiper Belt Objects
SCIENTISTS FIND NEW PLANET BEYOND ORBIT OF NEPTUNE - 5 June 1997
Astronomers have discovered a mini-planet at the edge of the solar system which may change our thinking on how the planets evolved. More than 300 miles in diameter, the planetesimal is the brightest object to be found beyond the orbit of Neptune since the discovery of Pluto in 1930. Given the designation 1996 TL66, the new object is probably one of many, according to its discoverers.
ASTRONOMERS FIND DISTANT 'DOUBLE PLANET' - 19 April 2001
In January, astronomers attempted to find a specific object among this ring of comet-like objects. This body, designated 1998 WW31, was first seen a few years ago.
(5) PLUTO HAS BIG SHINY COLLEAGUE - 29 May 2001
Varuna has a diameter of 900 km, Jewitt's team also calculates. This makes it the third largest known KBO, after Pluto (2,200 km) and Charon (1,200 km).
TWO NEW MOONS FOUND AROUND PLUTO
If confirmed, it would bring Pluto's tally of satellites to three; Charon, the only known moon of Pluto, was discovered by astronomers in 1978. Confirmation of two new moons would shed light on the evolution of the Kuiper Belt, the vast region containing icy objects beyond Neptune's orbit. All the candidate moons seem to orbit Pluto in an anti-clockwise direction.
PLUTO GETS MORE COMPETITION - 28 August 2001
In this schematic diagram the relative sizes of the largest Kuiper Belt Objects (KBO) are illustrated. The newly discovered object, 2001 KX76 (diameter about 1200 km), is the largest known KBO and is even larger than Pluto's moon, Charon. For comparison, Pluto's diameter is about 2300 km. (The diagram includes 1996 TO66, which could be 1996 TL66.)
LARGE WORLD FOUND NEAR PLUTO - 30 July 2001
Only planets are larger than this new object, dubbed 2001 KX76. The icy, reddish world is over a thousand kilometres across and astronomers say there may be even larger objects, bigger than planet Pluto itself, awaiting discovery. 2001 KX76 could be as large as 1,270 km (788 miles) across, bigger than Ceres, the largest known asteroid (an object that orbits the Sun between Mars and Jupiter). It is even larger than Pluto's moon Charon, which has an estimated diameter of 1,200 km (744 miles).
(1) DID PLUTO TAKE A PUNCH? LARGE IMPACT ONLY 100 YEARS AGO?
(8) RE: DID PLUTO TAKE A PUNCH?
STRANGE EVENTS ON DISTANT PLUTO
Although it is receding from the Sun, its atmosphere is getting thicker, puzzling astronomers who expect it to "freeze out" and contract in about 10 years.
Earth and the other planets are warming from the inside after Vulcan passed aphelion.
ANOTHER CANDIDATE FOR "PLANET X" SPOTTED - 4 December 2000
As of 2000 December 1, the MPC's orbit suggests that this object is 43 times farther from the Sun than the Earth is, and is presently 42 times farther from Earth than the Earth is from the Sun. With an apparent magnitude of 20 at those distances, the object would be the brightest of all 346 known Trans-Neptunian Objects other than Pluto. If it has a reflectivity comparable to other minor planets, its diameter would be between 330 and 750 miles. This can be compared to the diameters of the largest known asteroid, (1) Ceres, of 570 miles, or (4) Vesta, of 320 miles. Pluto is at a distance comparable to that of 2000 WR106, and is 1,470 miles in diameter.
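A note on how size ranges like the "330 and 750 miles" quoted for 2000 WR106 are produced: for a given absolute magnitude H, the standard minor-planet relation D(km) = 1329 / sqrt(albedo) x 10^(-H/5) turns an assumed reflectivity into a diameter, so the darker the assumed surface, the bigger the body has to be. The Python sketch below uses an illustrative H of 3.7 and an illustrative albedo range of 0.04 to 0.2; both numbers are assumptions for demonstration, not values taken from the article.

    import math

    def diameter_km(abs_mag_h, albedo):
        # Standard minor-planet size relation: D(km) = 1329 / sqrt(p) * 10^(-H/5)
        return 1329.0 / math.sqrt(albedo) * 10.0 ** (-abs_mag_h / 5.0)

    H = 3.7                      # illustrative absolute magnitude (assumed)
    for albedo in (0.04, 0.20):  # dark, comet-like vs. brighter surface (assumed range)
        d_km = diameter_km(H, albedo)
        print(f"albedo {albedo:.2f}: {d_km:.0f} km ({d_km / 1.609:.0f} miles)")

With those assumptions the two albedos bracket roughly 540 to 1,210 km (about 340 to 750 miles), which is why early size estimates for unresolved trans-Neptunian objects are always quoted as a range: the brightness is measured, but the reflectivity has to be guessed.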
ASTRONOMERS DISCOVER 'NEW PLANET'
Observations show it is about 2,000 km across and it may even be larger than Pluto, which is 2,250 km across.
UNIVERSITY TEAM'S THEORY CAUSES STIR
"The answer I think is that it's blacker than black and the only object it could be is a comet, which makes up some of the darkest objects in the solar system." "And if Sedna has captured such an object, there must be hundreds of other large comet objects in the vicinity."
DISTANT SEDNA RAISES POSSIBILITY OF ANOTHER EARTH-SIZED PLANET IN OUR SOLAR SYSTEM
Marsden favors an object closer in, a "planetary object," he told SPACE.com, perhaps at between 400 and 1,000 AU.
Vulcan's aphelion is at about 448 AU.
90377 SEDNA
  Semi-major axis: 502.040 AU
  Perihelion: 76.032 AU
  Aphelion: 928.048 AU
  Orbital period: 11249.05 years
  Inclination: 11.932 deg
  Longitude of the ascending node: 144.544 deg
  Argument of perihelion: 311.468 deg
  Mean anomaly: 357.713 deg
SEDNA (2003 VB12)
The coldest, most distant place known in the solar system; possibly the first object in the long-hypothesized Oort cloud
WEIRD OBJECT BEYOND PLUTO GETS STRANGER
DISTANT PLANETOID SEDNA GIVES UP MORE SECRETS or DISTANT PLANETOID SEDNA GIVES UP MORE SECRETS
The distant planetoid Sedna appears to be covered in a tar-like sludge that gives it a distinctly red hue, a new study reveals. The findings suggest the dark crust was baked on by the Sun and has been untouched by other objects for millions of years. A similar "space weathering" process occurs on a 200-kilometre-wide object called Pholus, which lies near Saturn and is also very red. Astronomers have struggled to explain such an extreme orbit, but many believe a star passing by the Sun about 4 billion years ago yanked the planetoid off its original, circular course.
This is typical of Kuiper Belt objects, and Vulcan may also be covered with the same material, making it hard to see.
MPEC 2004-E45 : 2003 VB12
IT'S ANOTHER WORLD . . . BUT IS IT OUR 10TH PLANET?
The discovery of Sedna - 10 billion kilometres from Earth - is a testament to the new generation of high-powered telescopes.
'NEW PLANET' MAY HAVE A MOON
At its most distant, Sedna is 130 billion km (84 billion miles) from the Sun, which is 900 times Earth's solar distance (149 million km or 93 million miles).
NEW SOLAR SYSTEM WORLD HAS A MOON
More importantly, observations of the satellite's 49-day orbit allowed Brown to precisely calculate the masses of both 2003 EL61 and its moon.
Pluto and 2003 EL61 both have satellites.
MYSTERIOUS SEDNA
The elliptical orbit of Sedna is unlike anything previously seen by astronomers. It resembles the orbits of objects predicted to lie in the hypothetical Oort cloud - a distant reservoir of comets. But Sedna is 10 times closer than the predicted distance of the Oort cloud. Brown speculated that this "inner Oort cloud" might have been formed billions of years ago when a rogue star passed by the sun, nudging some of the comet-like bodies inward. Other notable features of Sedna include its size and reddish color. After Mars, it is the second reddest object in the solar system.
SEDNA: A CLUE TO NIBIRU
LARGE WORLD (QUAOAR) FOUND BEYOND PLUTO
The object is about one-tenth the diameter of Earth and circles the Sun every 288 years.
COMETS AND ASTEROIDS
APL: FRAGMENTING COMET REVEALS INNER SELF
Scientists were able to observe the telltale chemical signatures of water. They also find a large amount of hydrogen cyanide, a simple compound containing a single atom each of hydrogen, carbon and nitrogen.
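Referring back to the 90377 Sedna element listing above: as a worked example of what such a listing encodes, the minimal Python sketch below converts the quoted semi-major axis, eccentricity and mean anomaly into a heliocentric distance by solving Kepler's equation. The listing gives no epoch, so treat the result only as a rough check; it comes out near 89 AU, not far outside the 76 AU perihelion.

    import math

    # 90377 Sedna elements as listed above (the listing gives no epoch).
    a, e = 502.040, 0.849               # semi-major axis (AU), eccentricity
    M = math.radians(357.713)           # mean anomaly

    # Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E.
    E = M
    for _ in range(50):                 # Newton iterations, plenty to converge
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))

    r = a * (1.0 - e * math.cos(E))     # heliocentric distance in AU
    print(f"heliocentric distance: about {r:.0f} AU")

The remaining elements in the listing (inclination, ascending node, argument of perihelion) orient the orbit in space; with them, the same machinery gives a full three-dimensional position rather than just a Sun-centred distance.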
COMET PARTICLES CONFOUND SCIENTISTS Particles from any icy comet that were collected and returned to Earth aboard a Nasa science satellite show dozens of minerals that form only in extreme heat - a finding that complicates theories about how the solar system formed, scientists say. Prevailing theories about the solar system's formation cannot explain how this high-temperature material ended up in the frigid regions beyond Neptune's orbit, the so-called Kuiper Belt where comets formed. COMETS: THE LOOSE THREAD The facts are apt to be more stubborn than the theoreticians: Deep Impact kicked up ten times more dust than expected and stimulated the comet's activity a magnitude less than expected. The dust was not a conglomeration of sizes as expected but was consistently powder-fine. The nucleus of the comet was covered with sharply delineated features, two of which were circular enough to be called impact craters. This was not expected for a dirty snowball or a snowy dirtball or even a powdery fluffball. The craters, of course, weren't actually called impact craters. They must have been caused by subsurface explosions, because they had flat floors and terraced walls, despite the myriad of other craters on rocky planets and moons with flat floors and terraced walls that are called impact craters. All the other circular depressions with flat floors and terraced walls weren't craters because they had "unusual shapes." The hard times began with Comet Halley. Theory expected more or less uniform sublimation of the surface as the nucleus rotated in the sun, much as you would expect of a scoop of ice cream on a rotisserie. But Halley had jets. Less than 15% of the surface was sublimating, and the ejecta was shooting away in thin beams. The theory was adjusted to introduce hot spots, chambers below the surface in which pressure could build up and erupt through small holes to produce the jets. It went unmentioned that the holes must have been finely machined, like the nozzle of a rocket engine, in order to produce the collimation of the jets: Just any rough hole would result in a wide spray of gases. Borrelly made the hard times harder. It was dry. And black. Theoreticians tinkered with the dirty snowball theory until they got the dirt to cover the outside and to hide the snow inside. Somehow they got the dirt, which ordinarily is an insulator, to conduct heat preferentially into the rocket chambers to keep the jets going. Wild 2 defied them. Its jets were not just around the sub-solar point, where the Sun's heat would be greatest. This comet sported jets on the night side. The rocket chambers now had to store heat for half a "comet day". And something was needed to keep the jets coherent over great distances and to gather their emissions into a stream of clumps: Clusters of particles repeatedly struck the spacecraft. If you pull one electrified comet out of the well-knit structure of accepted theories, the entire garment will become unacceptable. Either the universe is an agglomeration of isolated, gravitating, non-electrical bodies, or else it is a network of bodies connected by and interacting through electrical circuits. Either the universe is a gravity universe or it is an Electric Universe. And comets are the loose thread. WHERE DO COMETS COME FROM? - 28 January 2009 Few cosmic apparitions have inspired such awe and fear as comets. Vulcan draws them in from the Kuiper Belt. 
'DARK' COMETS COULD HIT EARTH WITHOUT WARNING - February 13, 2009 "There is a case to be made that dark, dormant comets are a significant but largely unseen hazard," said Bill Napier at Cardiff University. 'DARK' COMETS MAY POSE THREAT TO EARTH - 11 February 2009 In previous work, Napier and Janaki Wickramasinghe, also at Cardiff, have suggested that when the solar system periodically passes through the galactic plane, it nudges comets in our direction (New Scientist, 19 April 2008, p 10). These periodic comet showers appear to correlate with the dates of ancient impact craters found on Earth, which would suggest that most impactors in the past were comets, not asteroids. Now Napier and Asher warn that some of these comets may still be zipping around the solar system. Other observations support their case. The rate at which bright comets enter the solar system implies there should be around 3000 of them buzzing around, and yet only 25 are known. COMETS 'ARE BORN OF FIRE AND ICE' Comets are born of fire as well as ice, the first results from the US space agency's (Nasa) Stardust mission show. The high-temperature minerals found in the Stardust samples may have formed in the inner part, where temperatures exceeded 1,000C. "When these grains formed, they were incandescent - they were red or white hot." One of these minerals known as forsterite, which melts at 2,000C and condenses at 1,127C, has been detected in a comet before. NASA SCIENTISTS HAVE NEW MYSTERY TO SOLVE Some of the material brought back by the Stardust probe was 'kind of a shock'. The samples include minerals such as anorthite, which is made up of calcium, sodium, aluminum and silicate; and diopside, made of calcium, magnesium and silicate. Such minerals only form at very high temperatures. "That's a big surprise. People thought comets would just be cold stuff that formed out ... where things are very cold," said NASA curator Michael Zolensky. "It was kind of a shock to not just find one but several of these, which implies they are pretty common in the comet." The discovery raises questions about where the materials in comets form, he added. Comets may be debris left over from Vulcan's tiny exploded planets, according to the ASTRO-METRIC concept of how our solar system was formed. Posted 03/16/2006. Distant planets, seeded by PBHs ejected from Vulcan's PBH, may have experienced detonations of these tiny PBHs because they were so very small. These detonations may have produced many fragments of primordial heavy element matter which became hydrogenated as the Sun grew to its present size. These primordial fragments are the likely source of cometary bodies found in both the inner and outer Oort cloud. Thus, cometary material constitutes the remains of the crust or mantles of these primordial planets. Such material may often show evidence of prior melting, such as that found in the Murchison meteorite. Therefore, comets are representative of the early universe, especially as far as the originally formed prototype solar system is concerned. This concept is consistent with many of the current theories concerning comets, which describe their genesis as primordial. WHAT IS A COMET MADE OF? 'LAZARUS COMETS' EXPLAIN SOLAR SYSTEM MYSTERY - 02 Aug. 2013 Not so, says the new study, which describes it as containing "an enormous graveyard of ancient dormant and extinct rocky comets" whose surface ice has been stripped away by years of exposure to solar rays.
The comets can get reactivated when they pass relatively close to Jupiter, the biggest planet of the Solar System, and the shape of their orbits is tugged. This can decrease the distance between the comet and the Sun, resulting in a tiny rise in average temperature which in turn warms the subsurface ice and the gases it contains. ASTEROID OR COMET - SAME BLOKE, DIFFERENT HAIRCUT! Brownlee wrote, "Bodies in the Kuiper belt are not comets, because they do not exhibit cometary activity, but they would become comets if perturbed closer to the Sun." Additionally, in the plasma concept, Kuiper belt objects do not become comets necessarily just from heating by the sun, but from a change in electrical properties of their environment. The results from recent missions suggest that comets are constructed of low- temperature and high-temperature materials that formed in totally disparate environments. The frozen volatiles were formed at the edge of the solar system or in presolar environments, but most of the rocky materials were white hot at the time of their formation and appear to have formed by violent processes in the hot regions of the solar system. Kuiper belt objects may be from detonated planetoids according to the ASTRO-METRIC theory. THE CREATION OF THE ASTEROID BELT: A REMOTE-VIEWING STUDY - Jul 2, 2011 Drawing data from a remote-viewing project conducted at The Farsight Institute, Courtney Brown, Ph.D., presents CRV and HRVG remote-viewing data that investigates the origin of the Asteroid Belt. The project tests two hypotheses as the origin of the Asteroid Belt: (1) the solar nebula hypothesis and (2) the exploding planet hypothesis. The exploding planet hypothesis was supported by the now late astronomer, Dr. Tom Van Flandern, former head of his department at the U.S. Naval Observatory, while the solar nebula hypothesis is currently supported by virtually all other mainstream astronomers. The remote-viewing data strongly support the exploding planet hypothesis as the origin of the Asteroid Belt. Dr. Brown explains why leaders in the mainstream science community need to come forward now to acknowledge the reality of the remote-viewing phenomenon. Dr. Brown presented these data on 11 June 2011 at the annual meeting for the Society for Scientific Exploration in Boulder, Colorado, and also on 18 June 2011 at the annual meeting of the International Remote Viewers Association in Las Vegas, Nevada. STRANGE METAL ASTEROID TARGETED IN FAR-OUT NASA MISSION CONCEPT - January 15, 2014 Scientists think the object is the nearly naked core of a protoplanet whose overlying rock layers were blasted off by massive collisions long ago. FLOATING PILE OF RUBBLE A PRISTINE RECORD OF SOLAR SYSTEM'S HISTORY A small, near-Earth asteroid named Itokawa is just a pile of floating rubble, probably created from the breakup of an ancient planet, according to a University of Michigan researcher was part of the Japanese space mission Hayabusa. The existence of very large boulders and pillars suggests that an earlier "parent" asteroid was shattered by a collision and then re-formed into a rubble pile, the researchers conclude in the paper. It's likely that most asteroids have a similar past, Scheeres said. "Analysis of the asteroid samples will give us a snapshot of the early solar system, and provide valuable clues on how the planets were formed." 
Also, knowing if an asteroid is a single, big rock or a pile of rubble will have a major influence on how to nudge it off course, Scheeres said, should its orbit be aimed at Earth. An asteroid collision with Earth, while unlikely, could have disastrous consequences. It's widely thought that an asteroid collision caused the mass extinction of dinosaurs 65 million years ago, so some have discussed ways to demolish or steer an approaching asteroid, should we see one coming. Another striking finding, Scheeres said, is that regions of Itokawa's surface are smooth, "almost like a sea of desert sand," and others are very rugged. This indicates that the surfaces of asteroids are, in some sense, active, with material being moved from one region to another. Gravity holds the mass of rubble together. LARGEST ASTEROID MIGHT CONTAIN MORE FRESH WATER THAN EARTH Ceres has long been considered one of the tens of thousands of asteroids that make up the asteroid belt between Mars and Jupiter. At 580 miles (930 km) in diameter, about the size of Texas, it's the largest asteroid in the belt, accounting for about 25 percent of the belt's total mass. "The most likely scenario from the knowledge we have on how other objects form, it probably has a rocky core and a mantle. That mantle is probably some watery, icy mix, with other dirt and constituents. That mantle could be as much as ? of the whole object," study coauthor Joel Parker of the Southwest Research Institute told SPACE.com. "Even though it's a small object compared to Earth, there could be a lot of water." ASTEROID PROBE ON CLOSE APPROACH THREE TROJAN ASTEROIDS SHARE NEPTUNE ORBIT COMET COLLISION 'ARMAGEDDON' UNLIKELY THEY SING THE COMET ELECTRIC 'ELECTRIC COMET' COULD BURN THE HOUSE OF SCIENCE COMETS, GRAVITY, AND ELECTRICITY DEEP IMPACT SPACE COLLISION REVEALS COMETS TO BE FLUFFY BALLS OF POWDER Most striking is that the comet is not made up of very much at all. "It's mostly empty," said Prof A'Hearn. The fine particles of dust and ice are held together extremely loosely, with pores thought to run throughout. "We have deduced that around 75% to 80% of the nucleus is empty and that tells me there is probably no solid nucleus. That is a significant advance in our understanding," said Prof A'Hearn. Comment from Signs Of The Times 09/08/2005: There are two problems with this report. The first is that one cannot immediately assume that all comets have the same composition. Certainly the new data is interesting, but shouldn't the new theory be verified? The second problem is that scientists involved in Deep Impact previously claimed that there was little chance the impact would disrupt the orbit of Tempel 1. Now they say they were quite interested in learning if the impact would disrupt the orbit so that a similar experiment could intentionally alter the orbit of a comet headed for Earth in the future. COMET SHAKES CONVENTIONAL WISDOM Researchers analyzing the data were intrigued to find the comet is mostly made up of loose powder-like particles, laced with carbon. Spectrometers, cameras and other instruments revealed that the comet in fact has the consistency of a snowdrift. "The comet is mostly empty, mostly porous," said Dr A'Hearn. "Probably all the way in, there is no bulk of ice. The ice is all in the form of tiny grains. The material is unbelievably fragile." Some experts say such molecules could have kick-started life on Earth.
Under the "pan-spermia" idea, comets pounded the early Earth billions of years ago, bringing organic molecules that reacted with the Sun's light and heat, creating a rich chemical soup within which life began. The theory, which was initially slammed as "outlandish" by some scientists, is slowly gaining ground. Comets were once believed to be of either the stony or metal core variety. Perhaps there is a third kind, the 'snow drift' variety. These may explode should they impact Earth's atmosphere, causing it to rain 'for forty days'. ICE LAYERS RECORD COMET CREATION As the cometesimals hit the surface of a growing comet nucleus, they "flowed" on to the surface, researchers believe. But temperature data from Tempel 1's nucleus suggests the material must be lost from only a few centimetres below the surface. MINI-COMETS APPROACHING EARTH - 03.24.2006 A cometary "string-of-pearls" will fly past Earth in May closer than any comet has come in almost 80 years. 'STARDUST' SHATTERS COMET THEORY - Part 1 ALIEN PARTICLES FOUND IN 'COMET RAIN' PUT UNDER MICROSCOPE AT WELSH UNIVERSITY WELSH scientists have been spearheading the hunt for alien life that may have fallen to Earth in a shower of "red rain". Astrobiologists will today continue to examine traces of matter that poured its blood-red deluge over the Indian state of Kerala for two whole months in 2001. Chandra Wickramasinghe, of Cardiff University, is investigating claims made by one Indian researcher that the phenomenon may have been caused by a passing comet depositing extraterrestrial organisms over our planet. CELESTIAL FIREWORKS IN THE ANCIENT SKY Plasma scientists are now comparing electrical discharge formations in the laboratory to rock art images around the world. Results in 2005 should confirm that immense and terrifying plasma configurations were seen in the sky of our ancestors. Add to this the results posted in "THE ULTIMATE TIME MACHINE"; Joseph McMoneagle; pg. 216. Sometime in the year 2016, an asteroid will bypass Earth, missing our globe by less than 1.3 million miles. It will be large enough to cause a measurable electromagnetic effect on the Earth's surface. This will be the first of four that will visit our neighborhood over the next hundred years, The second will pass around 2030, the third in 2044, and the fourth in 2071. none of these rocks will strike our planet." These dates also appear in the analysis of the Bible Code data. CELESTIAL FIREWORKS IN THE ANCIENT SKY - Part 1 MYSTERY OF THE COSMIC THUNDERBOLT(1) How did the story of a heaven-altering contest find its way into so many cultures? In the ritual of the Babylonian Akitu Festival, the enemy is the dragon Tiamat, subdued by the god Marduk. For the Egyptians it was the dragon Apep, defeated by Ra or his agent Horus. For the Greeks it was the fiery serpents Typhon or Python, vanquished respectively by Zeus and Apollo. Hindu accounts similarly recalled the attack of the sky-darkening serpent Vritra, felled by Indra. But these are only a few of hundreds of such accounts preserved around the world. ARMAGEDDON - Antimatter Comets Thesis - Will be a living rain of fire and terror from the sky ARMAGEDDON - Antimatter Comets Thesis - Billions of people will parish. SUN APPROACHING COMETS POTENTIALLY HAZARDOUS ASTEROIDS ASTEROID 3200 PHAETHON (1983 TB) VULCAN FIRES UP Vulcanoids would need to be in near-circular orbits, each avoiding the others, if they were to have survived the billions of years since the solar system formed. 
If they exist, how could they be found, perennially in the solar glare? ASTEROID 4179 TOUTATIS (1989 AC) ASTRONOMERS TAKE SEARCH FOR EARTH-THREATENING SPACE ROCKS TO SOUTHERN SKIES SEARCH TO FIND DANGEROUS ASTEROIDS NEARLY COMPLETE NEAR EARTH OBJECTS - A pretty quiet topic on Asteroids, Comets, Meteorites and Near-Earth Objects (NEOs) HUBBLE SPIES COMET TEMPEL 1 BELCHING DUST COMETS IN ANCIENT CULTURES COMET BORRELLY PUZZLE: DARKEST OBJECT IN THE SOLAR SYSTEM Comet Borrelly reflects less than 3 percent of all the sunlight that hits it. It was assumed that Vulcan reflected about 10%. But if it is as dark as comets, Vulcan, estimated to be a 21st magnitude object, could be three times dimmer. 73P/SCHWASSMANN-WACHMANN 3 The next predicted perihelion date is 2006 June 7 and the comet will pass 0.0735 AU from Earth on May 13, being only slightly farther away than during the original discovery apparition of 1930. STRANGE COMET UNLIKE ANYTHING KNOWN Brownlee is also intrigued by the utter lack of similarities between Wild 2 and Phoebe, a fairly small moon of Saturn recently imaged up close by the Cassini spacecraft. Phoebe is thought to be a captured object, having originated -- like Wild 2 -- beyond Neptune. But Phoebe's gently sloping craters, which are riddled with boulders, resemble those seen on asteroids. And Phoebe has many small craters embedded in larger, older craters. "It's fascinating that they're so different," Brownlee said in a telephone interview. COMET 'DIRTY SNOWBALL' THEORY IS DEAD Comet 'Wild 2' Looks Like An Asteroid Comets - Not What We Expected NASA SPACECRAFT REVEALS SURPRISING ANATOMY OF A COMET Stardust gathered the images on Jan. 2, 2004, when it flew 236 kilometers (about 147 miles) from Wild 2. The flyby yielded the most detailed, high-resolution comet images ever. Another big surprise was the abundance and behavior of jets of particles shooting up from the comet's surface. We expected a couple of jets, but saw more than two dozen in the brief flyby. The violent jets may form when the Sun shines on icy areas near or just below the comet's surface. The solid ice becomes a gas without going through a liquid phase. Escaping into the vacuum of space, the jets blast out at hundreds of kilometers per hour. COMET WILD 2 SURFACE NOT SATELLITE OR ASTEROID SURFACE STARDUST MISSION PHOTOS STARDUST TARGETS LIGHTNING RETURN Among the bizarre features are two depressions with flat floors and nearly vertical walls that resemble giant footprints. They aren't structured like typical impact craters. The features have been named Left Foot and Right Foot in a new map of the comet, which is roughly 3 miles (5 kilometers) wide. Comet Wild 2 probably gathered itself together 4.5 billion years ago, just after the Sun was born, in a region beyond Neptune known as the Kuiper Belt. CICLOPS - Cassini Imaging Central Laboratory for Operations - Phoebe Pictures And Analysis PHOEBE'S SURFACE REVEALS CLUES TO ITS ORIGIN "Based on our images, some of us are leaning towards the view that has been promoted recently, that Phoebe is probably ice-rich and may be an object originating in the outer solar system, more related to comets and Kuiper Belt objects than to asteroids."
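The "three times dimmer" remark about Vulcan under the Comet Borrelly item above can be quantified. Assuming brightness simply scales with reflectivity for a body of fixed size and distance (an assumption made here for the estimate, not stated in the item):
\[
\Delta m = 2.5\log_{10}\!\left(\frac{0.10}{0.03}\right) \approx 1.3\ \text{magnitudes},
\]
so an object assumed to be of magnitude 21 at 10 percent reflectivity would appear near magnitude 22.3 if it were as dark as Borrelly, i.e. roughly a factor of three fainter in flux.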
PHOEBE'S SURPRISE (see crater images) SATURN'S MYSTERY MOON (PHOEBE) SHOWS EXPOSED WATER ICE, SAYS CORNELL RESEARCHER Phoebe is likely a primordial mixture of ice, rock and carbon-containing compounds similar to material seen on Pluto and Neptune's moon Triton. Buratti told today's press conference that both carbon dioxide and a simple hydrocarbon have been detected on Phoebe. The water ice, she said, seems to be associated with a very bright material and there are bright craters and areas that seem to be rich with ice. "There seem to be minerals found with the water, and also an unidentified mystery material tied up with dark material." Phoebe's mass was determined from precise tracking of the spacecraft and optical navigation, combined with an accurate volume estimate from images. The measurements yield a density of about 1.6 grams per cubic centimeter (100 pounds per cubic foot), much lighter than most rocks, but heavier than pure ice at approximately 0.93 grams per cubic centimeter (58 pounds per cubic foot). Torrence Johnson, a Cassini imaging team member from JPL, said that this density "tells us automatically this is not a pure iceball with a thin dark layer. You would never get a density of 1.6 from something like that. Inside, Phoebe has to be fairly complex and has to include some processes to produce these clean layers underneath the surface, which is a deep carbonaceous and rock-rich surface." SATURN'S ODD MOON OUT Many have long suspected that Phoebe originated in the outer solar system and was somehow captured by Saturn's gravitational pull. But until recently, the only evidence they had was the fact that the moon reflected less light than the planet's other moons, and that it orbited Saturn in the opposite direction. The new studies, which appear in the May 5 issue of Nature, show the link even more clearly. In the density study, Lunine and Jet Propulsion Laboratory imaging specialist Torrence Johnson reveal that Phoebe has more rocks and less ice than the rest of Saturn's moons. In fact, its density matches that of Neptune's moon Triton and the planet Pluto. The former is thought to have been captured from a region of the outer solar system known as the Kuiper Belt, a disk of icy debris left over from the formation of the planets. Pluto still orbits there. In the second study, U.S. Geological Survey astrophysicist Roger Clark and a dozen colleagues show how some of the materials detected on Phoebe's surface, like various cyanide compounds, resemble those found on a comet -- again pointing to an origin in the outer solar system. Perhaps more importantly, the researchers found that Phoebe's surface is more diverse in composition than any solar system body ever studied, with the exception of Earth. If Phoebe was indeed formed in the Kuiper Belt, then this material could be some of the most primitive in the solar system, according to Clark. PHOEBE MOON MAY BE CAPTURED COMET "Phoebe has a long journey behind it. It comes from the outer Solar System and probably rounded the Sun a few times before it was captured by Saturn's orbit. But we really don't know." Notice Phoebe's unusual craters. Wonder if they were made by the impact of a small antimatter beam causing water explosions to move Phoebe into its current orbit? SECRETS OF REMOTE ICY WORLDS REVEALED - March 28th, 2012 VILLAIN IN DISGUISE: JUPITER'S ROLE IN IMPACTS ON EARTH - 03/12/12 Jupiter is often credited for shielding Earth from catastrophic asteroid and comet impacts.
But new simulations of the influence of gas giant planets in solar systems casts doubt on Jupiter's reputation as Earth's protector. ASTEROIDS NEAR JUPITER ARE REALLY COMETS Observations indicate two orbiting bodies are mostly water ice AVOIDING AN ASTEROID COLLISION - September 13, 2010 PAIRS OF 'RUBBLE PILE' ASTEROIDS EXIST - September 13, 2010 Instead of a solid mountain colliding with earth's surface, says Dr. Brosch, the planet would be pelted with the innumerable pebbles and rocks that comprise it, like a shotgun blast instead of a single cannonball As a result, asteroid pairs are formed, characterized by the trajectory of their rotation around the sun. Though they may be millions of miles apart, the two asteroids share the same orbit. Dr. Brosch says this demonstrates that they come from the same original asteroid source. UFO-CONTACT FROM PLANET IARGA Q: "Why do they have to be so streamlined, since space is surely empty?" A: "We wish that were true! For spaceships that travel at relative speeds, space is not empty enough and not only streamlining but armor plating is also necessary. You have seen our ship and can see that armor is not a useless luxury. They have no windows; they are heavy, armored projectiles, whose strength comes from the discus form. ( A: "When our radar warns us of dust or material, we make the banking maneuver that you have just seen. This then presents the smallest possible surface area to the danger. Nevertheless, each particle of dust makes burn marks on the plating. For this reason we always fly in line formation. The command consists of five ships and the lead ship is always unmanned, because this one runs the greatest risk. Q: "Yes, fine, thank you - ,but didn't you say something about a protective weapon which you could use if material threatened to cross the path of the spacecraft?" A: "The antimatter ray, Stef, is a defense against larger blocks of material which only rarely occur in space. The use of this ray demands not only enormous quantities of energy, but it is controlled by strong restrictions to prevent disturbing the natural balance. We are only justified in its use when no other methods are possible. This weapon cannot replace the armor plating of our ships." ORBIT ANALYST COMMENT ONE ORBIT ANALYST COMMENT TWO
Distributionally chaotic families of operators on Fréchet spaces
J. Alberto Conejero (Dept. Matemàtica Aplicada and IUMPA, Universitat Politècnica de València, València, 46022), Marko Kostić (Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia), Pedro J. Miana (Departamento Matemáticas e I.U.M.A., Universidad de Zaragoza, Zaragoza, Spain), Marina Murillo-Arcila (Instituto Universitario de Matemática Pura y Aplicada, Universitat Politècnica de València, València, Spain)
Received June 2015, Revised May 2016, Published July 2016.
Abstract: The existence of distributional chaos and distributional irregular vectors has been recently considered in the study of linear dynamics of operators and $C_0$-semigroups. In this paper we extend some previous results on both notions to sequences of operators, $C_0$-semigroups, $C$-regularized semigroups, and $\alpha$-times integrated semigroups on Fréchet spaces. We also add a study of rescaled distributionally chaotic $C_0$-semigroups. Some examples are provided to illustrate all these results.
Keywords: $C$-regularized semigroups, hypercyclicity, well-posedness, distributional chaos, Fréchet spaces, cosine functions, abstract time-fractional equations, integrated semigroups.
Mathematics Subject Classification: 47A16, 47D03, 47D06, 47D9.
Citation: J. Alberto Conejero, Marko Kostić, Pedro J. Miana, Marina Murillo-Arcila. Distributionally chaotic families of operators on Fréchet spaces. Communications on Pure & Applied Analysis, 2016, 15 (5): 1915-1939. doi: 10.3934/cpaa.2016022
Modeling technique in the P-Graph framework for operating units with flexible input ratios
András Éles (ORCID: orcid.org/0000-0002-5000-3395), István Heckl & Heriberto Cabezas
Central European Journal of Operations Research, volume 29, pages 463–489 (2021)

The P-Graph framework is an efficient tool that deals with the solution of Process Network Synthesis (PNS) problems. The model uses a bipartite graph of material and operating unit nodes, with arcs representing material flow. The framework includes combinatorial algorithms to identify solution structures, and an underlying linear model to be solved by the Accelerated Branch and Bound algorithmic method. An operating unit node in a P-Graph consumes its input materials and produces its products in a fixed ratio of operation volume. This makes it inadequate in modeling such real-world operations where input composition may vary, and may also be subject to specific constraints. Recent works address such cases by directly manipulating the generated mathematical model with linear programming constraints. In this work, a new general method is introduced which allows the modeling of operations with flexible input ratios and linear constraints in general, solely by tools provided by the P-Graph framework itself. This includes representing the operation with ordinary nodes and setting up their properties correctly. We also investigate how our method affects the solution structures for the PNS problem, which is crucial for the performance of algorithms in the framework. The method is demonstrated in a case study where sustainable energy generation for a plant is present, and the different types of available biomass introduce a high level of flexibility, while consumption limitations may still apply.

Finding sustainable energy supply methods is of crucial importance in today's society. This is a necessary condition in establishing and maintaining conditions on Earth which allow civilized human life in the long term. It is also a challenging task, as human population is rapidly increasing, along with total consumption and energy demands (Worldwatch Institute 2018). Manufacturing on its own consumes an enormous amount of 2.2 EJ energy annually (Lan et al. 2016; US Energy Information Administration 2018), and is still strongly relying on fossil fuels (International Energy Agency 2019).

Designing a process network for sustainable energy supply in general can be difficult due to the number of options to choose from. Originally, decisions about which technologies to be used, what capacities to invest into, which material sources to be utilized, and how to schedule workflow were made by experts intuitively. However, systematic modeling and optimization tools can help reduce or eliminate human errors, as computational power can be utilized to consider a huge number of possibilities. Common objectives for optimization include profit or throughput maximization, or makespan, total cost and resource usage minimization.

There are combinatorial algorithms available for specific optimization problems, where the exact step by step manipulation of data and construction of the final solution is explicitly formalized. The advantage of combinatorial algorithms is that their performance can be estimated well while they are potentially very fast, because the structure of the underlying problem to be solved is captured in the logic of the algorithm.
The disadvantage is that development and extension of such methods is often difficult. It is a common scenario that a new circumstance arises which shall be incorporated into finding the most suitable solution, but extending a simple method to a larger problem class can be tedious or impossible. An alternative to combinatorial algorithms is the usage of mathematical programming models. This is a fundamentally different approach because it is not the solution procedure which is formalized, but the rules describing the situation. The freedom of choice is represented in variables, the rules are represented as constraints, and the goal of optimization is represented as an objective function. It is not the designer of the mathematical model who evaluates it, but a general purpose model solver software. The difficulty of solving a general model depends on what kinds of variables and constraints are present, for this reason model classes are distinguished. A common class is Linear Programming (LP), in which variables are continuous, the constraints and the objective are all linear. Another, very popular model class is Mixed-Integer Linear Programming (MILP), which extends LP by allowing integer variables. MILP models are a good compromise between describing power of the model and acceptable solver performance. The advantage of mathematical programming in general is that it is easier and faster to develop, maintain and extend with additional constraints if needed. Complex and unique manufacturing and scheduling models may not have general algorithms existing, but may be modeled with mathematical programming. The drawbacks of mathematical programming are the difficulty of verification, solution performance is usually limited, and is highly dependent on the model itself, therefore modeling requires great expertise. Another problem is that because the solution procedure is done by a solver tool, usually there is a little control over it. The P-Graph framework, originally introduced by Friedler et al. 1992, tackles Process Network Syntheses (PNS) problems. The framework consists of both combinatorial and mathematical programming parts. The term P-Graph has a twofold meaning: it is the name of the representation and also the name of the framework. The P-Graph framework can effectively be used to graphically model and optimize complex process networks. It has the generality and ease of development experienced for mathematical programming models. Meanwhile the rigorous mathematical formulation allows precise and proven algorithmic methods for solution. Solving optimization problems with PNS therefore combines advantages of both combinatorial algorithms and mathematical programming. For example, an important advantage of the P-Graph framework is that the best \( n \) solutions all can be computed easily. A P-Graph model consists of material and operating unit nodes, with arcs between them representing the material flows the operating units may perform. The methodology was adapted to problems other than process networks network synthesis, for example scheduling problems, separation network synthesis, heat-exchanger network synthesis, etc. There is one difficulty in modeling with P-Graphs which is in focus of our work: operating unit nodes must have fixed ratios for input material consumption and output material production. This reflects reality in a lot of cases, for example if the operating unit represents a product with fixed recipe, requiring exact amount of materials to be produced. 
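To make the fixed-ratio property concrete, the behaviour of a single operating unit can be written down as follows. The notation is introduced here for illustration and is not taken from the framework's own documentation: \( v_{o} \) denotes the operation volume of unit \( o \), and \( r_{m,o} \), \( r_{o,m'} \) are the constant ratios attached to its input and output arcs.
\[
\text{consumption of input } m = r_{m,o}\, v_{o}, \qquad
\text{production of output } m' = r_{o,m'}\, v_{o}, \qquad
v_{o} \ge 0 .
\]
Every input and output therefore scales together with the single variable \( v_{o} \); this is exactly the restriction that the flexible-input operations discussed below relax.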
But in other cases an operation step, which can be any process, technological step, transformation, transportation or storage, may be more complex and would allow variable ratios of input materials to be used there. Inputs can even be independent in some cases. These operations cannot be represented by a single operating unit node in a P-Graph. Recent methods tackled the problem of flexible inputs by manually altering the model with MILP constraints added to the solution of the P-Graph model (Szlama et al. 2016). Besides being an ad hoc solution for a particular case study, with this method the original framework cannot be used on its own. Therefore, adequate methods for modeling different kinds of operations with flexible inputs were investigated and described. The point is that we only use the tools of the P-Graph framework itself to obtain the models. That means, the model works by the introduction of additional nodes and arcs in the P-Graph and properly adjusting their parameters. A range of different operating unit models are presented that can be used in general in PNS problems. We also examine how our models affect the solution structures of the PNS problem, which is an important factor for the existing combinatorial algorithms in the framework. A case study is also presented which optimizes sustainable energy supply. The study involves different types of biomass, which can be used in any combinations—a typical scenario for flexible inputs. The pelletizer and biogas plant equipment units in the problem therefore require adequate modeling. Also, we introduce another type of constraint into the model which limits relative usage of energy grass, to demonstrate the different kinds of general flexible input handling methods we presented. Our work is structured as follows. Section 2 is a literature review. The basics of P-Graphs and PNS problems, along with our motivation and problem statement in Sect. 3 are introduced. In Sect. 4, we describe the model elements proposed. Finally, a case study is presented in Sect. 5, followed by our conclusions. The P-Graph methodology is based on the bipartite Process Graph, or in short, P-Graph model. It is able to unambiguously describe flow networks, and correctly model PNS problems (Friedler et al. 1992). Note that P-Graphs can be applied to similar network-based problems as Petri nets, but while Petri nets are discrete and target simulation, P-Graphs represent PNS problems with continuous nature and their main purpose is synthesis and optimization. The theoretical method described in the first publication was soon followed by the main algorithms of the framework, Maximal Structure Generation (Friedler et al. 1993), Solution Structure Generation (Friedler et al. 1995), and the Accelerated Branch and Bound method (Friedler et al. 1996), which is used until today for solving PNS problems. The P-Graph Studio (2019) (available at www.p-graph.org) is a software tool supporting modeling and solving PNS problems with P-Graphs, with all the benefits of the framework. This software allows a graphical editor for P-Graphs, similar to graphical programming languages like LabView or Scratch. Solution algorithms are still tuned and new features are introduced to improve performance (Bartos and Bertok 2018). One important feature is that not only the best, but the first \( n \) best solution structures can be reported for a PNS (Varga et al. 2010), which is in contrast with LP/MILP models. The framework was extended to Time-Constrained PNS problems (Kalauz et al. 
2012), where scheduling decisions can be associated with the materials and operating units of an ordinary PNS. Another extension was the multi-periodic modeling scheme (Heckl et al. 2015), which was also included in the framework (Bertok and Bartos 2018). This scheme allows demands and/or supplies that strongly fluctuate in time to be adequately modeled, especially for estimating required equipment capacities. The P-Graph framework was originally targeted at chemical engineering, but the method was widely adapted since to a wide range of problems. It turned out that it is an alternative to mathematical programming tools, having notable advantages. The framework is also useful for teaching purposes (Lam et al. 2016; Promentilla et al. 2017). Reviews of potential usages are done by Lam 2013, Klemeš and Varbanov 2015. There are numerous applications for projects targeting sustainable operation (Cabezas et al. 2018). With the sheer modeling potential of P-Graphs, problems having a strong combinatorial nature can also be tackled, for example special kinds of Vehicle Routing Problems (Barany et al. 2011), scheduling problems (Frits and Bertok 2014), assigning workforce to jobs (Aviso et al. 2019b), and production line balancing (Bartos and Bertok 2019). Separation Network Synthesis involves separation of chemicals with multiple options of predefined technological steps, which is also addressed by the framework (Heckl et al. 2010). It is possible to estimate and take into account the risks of a system to be optimized (Süle et al. 2019; Benjamin 2018). This is important because systems addressed as PNS problems usually consist of a set of interconnected elements, therefore a possible failure may pose threat to the whole system, which is hard to intuitively measure. Reliability analysis can be included into the solution algorithms (Orosz et al. 2018), and it is also possible to propose reactions for equipment failure based on the found PNS solutions (Tan et al. 2014). Sustainability is a key goal when decisions are to be made about supply chains, for which general solution methods are published (Saavedra et al. 2018; Nemet et al. 2016), some involving smaller to larger scales (Kalaitzidou et al. 2016). Purely mathematical programming methods exist, as well as hybrid P-Graph and mathematical programming approaches, addressing for example biomass supply chain optimization (How et al. 2016; Lam et al. 2010). Some economical scenarios include cogeneration of heat and electricity in the same facility, sometimes even incorporating cooling other products (polygeneration). Integration of production and industrial demands can improve energy efficiency, and a need for adequate modeling tools is arising. Operating costs can be minimized by a P-Graph model of the system, while other factors like carbon footprint can also be taken into account (Cabezas et al. 2015; Vance et al. 2015). Okusa et al. 2016 proposed a method to address sustainable polygeneration system design. For such systems, the multi-periodic scheme can also be applied (Aviso et al. 2017). Biomass supply chains were also investigated with the methodology to find bottleneck factors in the system (Lam et al. 2017; How et al. 2018). Carbon Management Networks (CMN) can also be modeled and optimized with P-Graphs (Tan et al. 2017; Aviso et al. 2019a). Managing the inputs of operations is in focus of this work. It shall be noted that operating units with flexible input were modeled in case studies with heat and electricity generation (Szlama et al. 
2016), by the manipulation of the mathematical programming model generated by the P-Graph. Cogeneration plant modeling may also include smaller models within the framework itself, for example a P-Graph model for fuzzy constraints (Aviso and Tan 2018). Note that it is also possible to model independent inputs with the tools of the framework itself, proven in previous works for a case study of sustainable energy supply of a manufacturing plant (Éles et al. 2018, 2019).

Here we first introduce the main concepts of the P-Graph framework and corresponding solution methods, followed by our motivation of work and exact definition of the problem we intend to solve.

The P-Graph

The bipartite graph model, the P-Graph itself, has two sets of nodes: material nodes (\( M \)) and operating unit nodes (\( O \)). Material nodes represent states of the model, or simply materials. These can be actual materials, workforce, funds, and more. States available at different times, places, roles or conditions may be modeled as different material nodes depending on the modeling goal. Usually, each material node stands for something from which a positive quantity may be utilized. Operating unit nodes represent different kinds of transformations of some set and amount of materials to others. Examples include technological steps in some production equipment, storage and transportation. Some material and operating unit nodes may be present for modeling purposes only and do not resemble physical activities and actual materials.

Arcs connect operating unit nodes to material nodes and vice versa, describing the operating units' capabilities as follows. An arc from a material node to an operating unit node represents that the material is an input to the modeled operation, which is consumed as the operation takes place. An arc from an operating unit node to a material node represents that the material is an output from the modeled operation, which is produced as the operation takes place. Usually an operating unit models some material manipulation step that has an operation volume, a quantity which scales the amount of inputs utilized as well as products produced. The P-Graph is described by sets \( M \), \( O \) and arcs between these nodes.

PNS problems with P-Graphs

A PNS problem description consists of the P-Graph of all the available materials and operations (see Fig. 1), and additional data. There are three kinds of material nodes in a PNS.

Fig. 1 Example of a P-Graph with different kinds of material nodes

A product material node is one which is a target for PNS. All solutions must include at least one way to produce it by some operating units. A raw material node is one which is available a priori. It cannot be an output of an operating unit node. Any other nodes are intermediate material nodes. These can only be obtained by operating units in the PNS, but can be used as input for other operating units. If we do not explicitly refer to a node as being a raw material or product material node, we assume it is an intermediate material node.

The PNS problem can be described as a \( \left( {P, R, O} \right) \) triplet. The first term is the set of products, the second is the set of raw materials, and the third is the set of all operating units that can be involved. The goal is a working solution for the network, in which all products are actually produced and the system is feasible. A solution structure is a subset of a P-Graph that represents the solution of a PNS defined on it. Therefore, it is itself a P-Graph. It shows which operating units are actually utilized, and what materials are involved in the solution.
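The \( \left( {P, R, O} \right) \) description can be mirrored directly in a small data structure, which also makes the product/raw/intermediate classification explicit. The sketch below is illustrative only: it is not the input format of P-Graph Studio or of any existing P-Graph library, and the class names, example units and numbers are invented for this example.

```python
from dataclasses import dataclass, field


@dataclass
class OperatingUnit:
    """An operating unit with fixed ratios per unit of operation volume."""
    name: str
    inputs: dict[str, float] = field(default_factory=dict)   # material -> consumption ratio
    outputs: dict[str, float] = field(default_factory=dict)  # material -> production ratio


@dataclass
class PNSProblem:
    """A PNS instance given as the (P, R, O) triplet."""
    products: set[str]
    raw_materials: set[str]
    operating_units: list[OperatingUnit]

    def materials(self) -> set[str]:
        """All material nodes implied by the products, raw materials and unit arcs."""
        mats = set(self.products) | set(self.raw_materials)
        for unit in self.operating_units:
            mats |= set(unit.inputs) | set(unit.outputs)
        return mats

    def intermediates(self) -> set[str]:
        """Materials that are neither final products nor raw materials."""
        return self.materials() - self.products - self.raw_materials


# A toy instance with two alternative ways of producing the single product "heat".
# The ratios are arbitrary illustrative numbers, not data from the case study.
pns = PNSProblem(
    products={"heat"},
    raw_materials={"wood_chips", "natural_gas"},
    operating_units=[
        OperatingUnit("boiler", inputs={"wood_chips": 1.0}, outputs={"heat": 3.5}),
        OperatingUnit("gas_burner", inputs={"natural_gas": 1.0}, outputs={"heat": 9.0}),
    ],
)
print(pns.intermediates())  # -> set(), this tiny example has no intermediate materials
```

Additional problem data, such as costs, capacity bounds and arc flow rates discussed next, would be attached to these same nodes and arcs.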
It shows which operating units are actually utilized, and what materials are involved in the solution. There are five axioms which precisely describe what P-Graphs are the solution structures of a PNS (Friedler et al. 1992). A subset of a process network fulfilling these conditions is also called combinatorially feasible. (S1) All final products must be included in a solution structure. (S2) Any material, which is not a raw material, must be the output of some operating unit. (S3) Only such operating units can be selected that are defined in the PNS. (S4) Each operating unit must have a path leading to a final product. (S5) Any material must be an input from or an output to at least one operating unit. Fundamental algorithms There are fundamental algorithms based on P-Graphs, which help in solving PNS problems. The Maximal Structure Generation (MSG) algorithm (Friedler et al. 1993) finds the so-called maximal structure of a PNS problem. In polynomial time, MSG eliminates redundant operating units and materials which can be a priori known not to be useful. The resulting structure is called maximal structure, and is proven to be itself a solution structure. The MSG algorithm is commonly used as an initial preprocessing step for other algorithms. The Solution Structure Generation (SSG) algorithm (Friedler et al. 1995) systematically lists all solution structures of a PNS problem. Note that the work required can be very large, as it is proportional to the number of solution structures. Combinatorial explosion can also be observed for P-Graphs. Nevertheless, SSG can be used as part of sophisticated solution algorithms for PNS. Problem data The parts of the P-Graph mentioned so far capture the structure of a PNS or other optimization problem well, but it must also be supported by additional problem data in order to fully describe a problem instance. These may include, but are not limited to the following. For material nodes, maximum and minimum available amounts, costs and measurement units, can be provided. For operating unit nodes, yearly investment and operational costs (both fixed and proportional), as well as minimum and maximum operation volume, and payback period can be provided. For arcs, flow rates have to be provided as ratios. These ratios determine the flow rates in terms of the operation volume of the corresponding operating unit, regardless of being an input or output. The basis of a solution for a PNS problem is a solution structure. However, a complete solution also includes additional decisions, including operation volumes for units, and material balances for each material (see Fig. 2). The objective of optimization can also be varied, but cost minimization and throughput maximization are the most common. Sample PNS problem with its P-Graph add flow rates also depicted The addition of the aforementioned constraints and other factors into a PNS problem cause that some of the solution structures may even turn out to be infeasible. Solving a PNS problem would be possibly done by an implementation of a MILP model, however, there are effective methods utilizing the P-Graph framework. One simple method is the combination of SSG and LP relaxations. Alternatively there is an Accelerated Branch and Bound (ABB) method specifically designed for PNS problems modeled by a P-Graph (Friedler et al. 1996). One major advantage of solving a PNS problem by a P-Graph model with the above methods is that we can find and report the solutions for the \( n \) best solutions structures. 
This means not only the optimal, but the first \( n \) best solutions are reported, in order. This is a great advantage for decision makers, and is in contrast with MILP models where usually only a single optimal or suboptimal solution is to be found. One fundamental property of operating unit nodes is that the ratios of their total inputs and outputs are fixed. This is a natural consequence of the fact that the operation volume, a single continuous variable, can describe material flows for all inputs and outputs of the operating unit. Historically, PNS problems were first formulated for chemical engineering problems, where this was a good assumption. This kind of operating unit model would work well for some cases, for example chemical reactions, where the exact proportion of inputs and outputs are known a priori. However, in other cases we have to model technological steps or equipment units with a greater variability in inputs. For example, a furnace can be fed with different materials, in any composition, and would produce different amounts of heat based on our choice. Or, a storage tank can hold any amounts of different materials at once. Intuitively, the total freedom of choice can be described by not a single variable operation volume, but a variable per each of the input materials. This is what flexible inputs mean in our interpretation: complex operating unit models, where the ratio of input material amounts can vary. One operating unit node in a P-Graph is not suitable to model these instances. There have been efforts to achieve the optimization goal where the P-Graph included some production steps with flexible input ratios. This was done by manipulating the solution of the model by LP tools (Szlama et al. 2016). However we propose to use several operating unit and/or material nodes to obtain an equivalent P-Graph model for such cases. This means that flexible inputs can be implemented within the P-Graph framework itself. This is essential because the original algorithmic and solution framework can be used on such problems without further manipulation of the solution procedure. Therefore, some specific cases of operating units with flexible inputs in general were investigated, and the results were applied on a case study where a sustainable energy supply of a manufacturing plant is in question, and multiple possible biomass sources are included. We formally define what kind of operations our work focuses on. Suppose that the inputs are materials \( A_{1} , A_{2} \ldots A_{n} \) and the outputs are materials \( B_{1} ,B_{2} \ldots B_{k} \) already modeled in the P-Graph. Assumptions are the following. Input material amounts of each \( A_{i} \), from now denoted by \( x_{i} \), and output amounts of each \( B_{j} \) unambiguously describe a situation. These amounts, regardless of quantity, are always nonnegative. Negative consumption or negative production are rarely a case and are not covered here. Note that, in such cases the roles of being an input or output can be simply switched. Or, if both positive and negative directions of material flow are possible, these can be modeled by two separate operating units simultaneously with nonnegative flows each. Output material amounts of each \( B_{j} \) are determined as linear combinations of the input material amounts. This means that the total freedom regarding an operating unit with flexible input in this work relies solely on the amounts of inputs used. Linear constraints can be formulated on input amounts. 
These constraints are independent of each other. In each constraint, any amounts of \( A_{i} \) multiplied by a positive coefficient may appear as a term in either (but not both) side of a non-strict inequality. There can be a positive constant term also, at either (but not both) sides. Actually the description of a constraint can simply be something like \( \sum\nolimits_{i = 1}^{n} {\lambda_{i} x_{i} \le C} \) where \( x_{i} \) are input material amounts, \( \lambda_{i} \) are coefficients, and \( C \) is the constant term. However, the above definition relies only positive (or zero, as missing) coefficients and terms and therefore introduces several distinct cases for flexible inputs. This prepares for modeling with the P-Graph framework. We also investigate how such complex operating units may appear in solution structures. For simple operations that can be modeled by a single operating unit node, this is easy: it can either be part of a solution structure or not. If it is not part of any solution structure, then the MSG algorithm excludes it, possibly with some material nodes. However, when a complex operating unit in a P-Graph is modeled via a set of operating units, then it is a valid question that which subset of those operating units may appear in a solution structure, and how many redundant solutions may exist. Modeling cases for flexible inputs Instead of showing the final P-Graph for modeling the general case of flexible inputs, we start from simple instances and go towards generality, illustrating each step with examples. For the sake of simplicity, we assume there is a single output material. Later we will see that multiple output materials can be implemented in parallel and independent of each other, so we do not lose generality by considering only one output at a time. A small nomenclature is provided here for common symbols in the model descriptions. \( a_{i} \) Input flow rate for input material \( A_{i} \) in a single operating unit node. \( b \) Output flow rate for the output material in a single operating unit node. \( \lambda_{i} \) Constant coefficient for input material \( A_{i} \) in imposed linear constraints. \( n \) Number of input materials for the operation. \( v_{i} \) Linear contribution factor of input material \( A_{i} \) to the output amounts. \( x \) Operating volume of a single operating unit node. \( x_{i} \) Operating volume of the operating unit node introduced for input material \( A_{i} \), which equals the amount consumed by the modeled operation. \( y \) Output material amount produced. Single operating unit node As a starting point, let us see what happens when an operation is modeled with a single operating unit node in a P-Graph. This is the traditional interpretation of an operating unit: the input materials are consumed in a fixed ratio, and the output is a linear expression of the input amounts. A single variable \( x \) for volume of operation is sufficient to express the input and output amounts as follows. $$ \begin{aligned} \forall i: x_{i} & = a_{i} x \\ y & = bx \\ \end{aligned} $$ Note that \( a_{i} \) and \( b \) are not only constants describing the operation, these are also the flow ratios on the arcs if a P-Graph operating unit is used for modeling (see Fig. 3). Single operating unit node having three inputs and single output, with flow rates depicted Real-world situations where a product has a fixed recipe and therefore requires a fixed amount per unit production can be modeled with a single operating unit node. 
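For such a fixed-recipe operating unit, every flow follows from the single operating volume via \( x_{i} = a_{i} x \) and \( y = bx \). A minimal sketch of this relation, with the flow rates \( a_{i} \) and \( b \) being made-up numbers:

```python
# Fixed-ratio operating unit: one operating volume x scales all flows.
# Flow rates a_i (inputs) and b (output) are illustrative numbers only.

def unit_flows(x, a, b):
    """Return (input_amounts, output_amount) for operating volume x."""
    inputs = {name: rate * x for name, rate in a.items()}   # x_i = a_i * x
    output = b * x                                          # y = b * x
    return inputs, output

a = {"A1": 2.0, "A2": 0.5, "A3": 1.0}   # input flow rates per unit volume
b = 4.0                                  # output flow rate per unit volume

print(unit_flows(1.0, a, b))   # ({'A1': 2.0, 'A2': 0.5, 'A3': 1.0}, 4.0)
print(unit_flows(2.5, a, b))   # ({'A1': 5.0, 'A2': 1.25, 'A3': 2.5}, 10.0)
```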
Solution structures may either include this node or not, based on whether the operation is needed in some nonzero volume in the solution or not at all. Independent inputs In some circumstances, the inputs of an operation are independent of each other. Provided that our assumptions still hold and the output is a linear expression of the inputs, the following equation describes the operation. $$ y = \mathop \sum \limits_{i = 1}^{n} v_{i} x_{i} $$ Contrary to the single node case, there is not a single free variable, but one for each input material, and they differ at the contribution ratio \( v_{i} \) for the output material amount \( y \). The model of such an operating unit is still straightforward. We can introduce a single operating unit node for each of the inputs, which produce \( v_{i} \) units of the output material per \( 1 \) unit of input material \( A_{i} \) consumed (see Fig. 4). Modeling operation with three completely independent input materials This operation can be the case when a desired product is to be obtained from different and independent sources. For example, if a furnace is to be modeled, and its output is heating, then its inputs can be different combustible materials, each having a unique heating value \( v_{i} \) contributing to production. There are \( 2^{n} \) possible ways such an operation may appear in a solution structure, as either input may be chosen to be present or not. If the operation is not used at all, then none of its inputs are active, and therefore none of them are present. If there is some production, then there are \( 2^{n} - 1 \) different possibilities. Note that each input method may have own fixed or proportional investment and operating costs, and also capacities associated. These details usually make some of the input methods financially more favorable than others. A possible extension from the cases of independent inputs is when the operation with flexible inputs to be modeled has some constraint for the output material amount. For example, a capacity, which is an upper bound. For example, the furnace as in the previous example, or an electric converter may have several feeds, buts its total throughput is bounded. The bound can either be fixed, or can be scaled as an investment parameter. Equation (2) describes the situation formally, but there is a \( y \le C_{max} \) capacity constraint added. The problem is that \( y \le C_{max} \) is directly supported by the modeling software for a single operating unit only. This is the case for the P-Graph Studio where \( C_{max} \) is the capacity of operating volume, it can either be fixed, or can be scaled by investment costs linearly. But how can the same bound be implemented if the operating unit is itself modeled as a set of operating unit nodes in a P-Graph, as in the case of independent inputs? One key observation about this case is that the constraint itself only relies on information about the final output amount itself, which is \( y \), and it does not matter which inputs it comes from. This fact is a serious distinction from future cases for flexible input. In this case, a single operating unit after the original output material can be introduced. This shall consume the original output material and produce the actual one (see Fig. 5). Any parameters about the output volume that may impose restrictions, for example capacity, investment and operating costs, shall be provided to this new unit. 
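A sketch of the two constructions just described, one operating unit node per independent input plus a collector unit that carries the output capacity, is given below using plain dictionaries. The contribution factors \( v_{i} \), the capacity value, and the dictionary keys are all invented for illustration and do not correspond to any particular software format.

```python
# Independent inputs: one operating unit node per input material A_i,
# each producing v_i units of a raw (pre-capacity) output per unit of A_i.
# A collector unit then converts the raw output 1:1 into the real output
# and carries the capacity C_max (and, in practice, the investment costs).

def build_independent_inputs(output, v, c_max=None):
    units = {}
    raw_output = output + "_raw" if c_max is not None else output
    for material, vi in v.items():
        units["feed_" + material] = {
            "inputs": {material: 1.0},     # consumes 1 unit of A_i per unit volume
            "outputs": {raw_output: vi},   # contributes v_i to the output
            "max_volume": None,
        }
    if c_max is not None:
        units["collector_" + output] = {
            "inputs": {raw_output: 1.0},   # gathers all contributions
            "outputs": {output: 1.0},      # 1:1 conversion to the real output
            "max_volume": c_max,           # the y <= C_max bound lives here
        }
    return units

v = {"wood": 4.1, "straw": 3.2, "pellets": 4.8}   # illustrative heating values
for name, unit in build_independent_inputs("heat", v, c_max=100.0).items():
    print(name, unit)
```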
Modeling operation with three independent inputs and a maximal capacity for the total output amount produced The solution structures in this case are quite similar as before. The newly introduced operating unit has to be present whenever any of the input methods are utilized. Unique input capacity Suppose that the operating unit with flexible inputs is modeled, but now it is not the output which has a capacity restriction but some other, general linear combination of the inputs. Formally as Eq. (2) with the following constraint added. $$ \mathop \sum \limits_{i = 1}^{n} \lambda_{i} x_{i} \le C_{max} $$ Here \( \lambda_{i} \) are coefficients different from \( v_{i} \), therefore we need a new constraint to express the upper limit \( C_{max} \). Note that we suppose that \( \lambda_{i} \ge 0 \), so each input actually contributes to the capacity. This can be a real world situation for example in case of a pelletizer, where different types of biomass can be the source of the pellets, and the heating value of the pellets is the output. The problem is that the pelletizer has some capacity which is usually expressed in terms of total mass or volume of the biomass used, but has nothing to do with the heating power of the product. It is likely not an option to model the pelletizing processes individually for each input, as the same pelletizer equipment may be used anyways for all purposes, so inputs are no way independent anymore. The solution is that a logical capacity operating unit is introduced which produces a logical capacity material. This material node is then fed to each of the individual operating unit nodes of the input materials, in their own ratios (see Fig. 6). Modeling operation with a capacity based on inputs independent of the output amount This way, the capacity available can be limited by the logical capacity operating unit. Moreover, the ratios of capacity consumption may be set so that the unique constraint for capacity is obtained, independent of the outputs. If we investigate this structure, we can see that whenever an input method is used in a solution structure, then the logical capacity material must be included, which implies that the logical capacity operating unit must also be present. The number of possible solution structures is the same as in the case of output capacity constraints. Note that this logical capacity operating unit must be implemented in this direction, that is, as a logical input. It might be possible to implement the capacity as a logical output, and a logical operating unit that consumes the "arising" capacity. In PNS problems there can usually be a zero material balance set for material nodes, so this might seem equivalent to the previous case (see Fig. 7). Modeling a unique capacity constraint as logical outputs instead of logical inputs However, the MSG algorithm, based on axiom (S4) of solution structures may eliminate the logical part in this case, as it seemingly does not contribute to products. This is certainly not what we want, so logical inputs instead are the better choice. Minimum input usage We could investigate the case when the constraint does not impose an upper, but a lower bound for input material usage, the same way, as Eq. (2) describing the output amounts plus the following constraint. $$ C_{min} \le \mathop \sum \limits_{i = 1}^{n} \lambda_{i} x_{i} $$ We suppose \( \lambda_{i} \ge 0 \), so each input actually contribute to reach the minimum goal of \( C_{min} \), or is independent of it if \( \lambda_{i} = 0 \). 
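Before turning to the minimum-usage case, here is a rough sketch of the logical-capacity construction for the upper bound described above: a logical capacity material is produced by one bounded logical operating unit and consumed by every input unit in its own \( \lambda_{i} \) ratio. Names and numbers are again purely illustrative.

```python
# Unique input capacity: sum(lambda_i * x_i) <= C_max, where the capacity is
# NOT proportional to the output (e.g. a pelletizer limited by total mass fed).
# One logical operating unit "provides" the capacity material (bounded by C_max),
# and each input unit consumes lambda_i units of it per unit of operating volume.

def add_input_capacity(units, lam, c_max, tag="capacity"):
    cap_material = "logical_" + tag
    # Logical unit that produces the available capacity, up to C_max.
    units["provide_" + tag] = {
        "inputs": {},
        "outputs": {cap_material: 1.0},
        "max_volume": c_max,
    }
    # Each input unit also consumes lambda_i units of capacity per unit volume.
    for material, li in lam.items():
        units["feed_" + material]["inputs"][cap_material] = li
    return units

units = {
    "feed_wood":  {"inputs": {"wood": 1.0},  "outputs": {"heat": 4.1}, "max_volume": None},
    "feed_straw": {"inputs": {"straw": 1.0}, "outputs": {"heat": 3.2}, "max_volume": None},
}
lam = {"wood": 1.0, "straw": 1.0}   # capacity counted in total mass fed
add_input_capacity(units, lam, c_max=50.0)
for name, unit in units.items():
    print(name, unit)
```

The lower-bound constraint introduced just above calls for a different construction.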
This can be a real-world case where the owner of a furnace is somehow obliged to consume some input materials for example by law, regulation or contract, and this restriction is not implicitly achieved elsewhere. The solution is that now we can introduce a product material node in the P-Graph, which is produced as outputs by the operating unit nodes for each individual input, in the corresponding \( \lambda_{i} \) ratios (see Fig. 8). The new product shall be set a minimum demand of \( C_{min} \). Modeling of minimum capacity for inputs utilized, independent of the output amount Note that in this case, the product material node enforces the usage of at least one input method in the solution structures. This is a good behavior, because the constraint does not only have an effect on solutions for each solution structure, but affect the structures themselves. Cases where the input materials are not used are infeasible because of \( C_{min} \) and are eliminated early by the MSG algorithm. Ratio constraints So far only such constraints were investigated where there was one constant term present, implying either a minimum or a maximum. If it is not the case, then each amount appears in either side of the inequality as a linear term with a positive coefficient. This means the constraint in general is similar to the following. $$ \mathop \sum \limits_{i \in L} \lambda_{i} x_{i} \le \mathop \sum \limits_{i \in R} \lambda_{i} x_{i} $$ There \( \lambda_{i} \) are all positive, and \( L \) and \( R \) are disjoint index sets. Note that any linear inequality without a constant term can be arranged to such a form. There is no point in having any material amount present at both sides. Also, because of all material amounts being nonnegative, such a constraint makes sense only if there are terms at both sides. One important property of such constraints is that all material amounts can be simultaneously scaled, that is, multiplied by the same positive constant factor, and it does not change whether the constraint is fulfilled or not. For this reason we call Inequality (5) as ratio constraint throughout this work, because only the ratio of the amounts is important. Many ratio constraints at the same time can be present to describe what compositions of input materials are valid for an operation. A practical example for a ratio constraint is the burning of wood together with coal in a coal furnace. A coal furnace is designed to handle coal but wood can be used for a certain degree. If the limit is overstepped special maintenance is needed. A ratio constraint can make sure that, for example, at most 20% wood can be used and the rest must be coal. Convex sums One candidate in modeling ratio constraints is the application of convex sums. This is a method that focuses on implementing multiple ratio constraints at the same time. We still assume that the constraints are linear, which makes the set of possible compositions a convex polytope in \( {\mathbb{R}}^{n} \) for \( n \) input materials. The idea is that we can define a set of vectors that are edge cases, so that any possible composition can be obtained as a convex sum of these vectors. A convex sum is a sum with nonnegative coefficients. Without providing the general method for this, we show two examples. Suppose that there are two input materials, \( A_{1} \) and \( A_{2} \), and it is a constraint that none of the input materials can contribute to more than 80% of the total inputs. 
This can be a possible real-world scenario for any of the mentioned examples so far, as relying too much on a single source would impose a risk. If \( x_{1} \) and \( x_{2} \) are the used amounts, then the two constraints are \( x_{1} \le 0.8\left( {x_{1} + x_{2} } \right) \) and \( x_{2} \le 0.8\left( {x_{2} + x_{1} } \right) \). These can be simplified to ratio constraints \( x_{1} \le 4x_{2} \) and \( x_{2} \le 4x_{1} \) respectively. Now instead of modeling these constraints independently, consider these as edge cases: \( 1 \) unit of \( A_{1} \) and \( 4 \) units of \( A_{2} \) is the first vector, \( \left( {1,4} \right) \), and \( 4 \) units of \( A_{1} \) and \( 1 \) unit of \( A_{2} \) is the second, \( \left( {4,1} \right) \). The observation is that any feasible \( \left( {x_{1} ,x_{2} } \right) \) can be expressed as a convex sum of \( \left( {1,4} \right) \) and \( \left( {4,1} \right) \). Without providing a precise proof, the reason is that the possible ratios are on a segment in \( {\mathbb{R}}^{2} \) ending in \( \left( {\frac{1}{5},\frac{4}{5}} \right) \) and \( \left( {\frac{4}{5},\frac{1}{5}} \right) \), and each point on a segment can be obtained as a weighted average of its endpoints. This makes modeling with P-Graph possible with only a few nodes (see Fig. 9). The two extreme cases can be modeled with an operating unit node each. Output(s) can always be assigned for each of the introduced operating units because it is a linear function of input amounts, so it is also a linear function of the vectors used. Application of the convex sums method for modeling an 80% capacity constraint for both inputs at the same time Note that if the complex operation is not used in a solution structure, then operating unit nodes are all omitted. If it is used, then for the two edge cases, there is one possible structure: the one containing only the operating unit for that particular extreme case. Non-extreme ratios require both operating units. So in this case, there is no much redundancy in the model. However, one drawback of this convex sum method is that the number of extreme cases can be large, and then a high level of redundancy is introduced structurally in the model. Consider the case when there are three materials \( A_{1} \), \( A_{2} \), \( A_{3} \), and none of them can be over 70%, which is only a slightly larger problem. The edge case vectors are \( \left( {7,3,0} \right) \), \( \left( {7,0,3} \right) \), \( \left( {3,7,0} \right) \), \( \left( {0,7,3} \right) \), \( \left( {3,0,7} \right) \), \( \left( {0,3,7} \right) \), and the design requires six operating units (see Fig. 10). Fig. 10 Application of the convex sums method for modeling a 70% capacity constraint for three inputs at the same time However, in this case, any three of the edge case vectors may be used to compose a given non-edge ratio, that gives rise to \( \left( {\begin{array}{*{20}c} 6 \\ 3 \\ \end{array} } \right) = 20 \) different solution structures. Moreover, composition of a non-edge ratio of edge cases can be ambiguous, more than three edge cases can be used. So this design contains a lot of redundancy, even structurally. Therefore, it is not advisable to use convex sums, only in very simple situations like the first case with two inputs only. Individual ratio constraints Instead of using convex sums, a single ratio constraint can be implemented at a time. 
For now, we suppose again that for each input material, its own operating unit node is already introduced, the operation volume of which represents the amount of input material consumed. We have also seen how to implement lower and upper bound constraints into this design. Now suppose we have a ratio constraint as before, in Inequality (5). Now the left-hand side (LHS) and right-hand side (RHS) can be considered as two distinct logical materials and be connected as follows (see Fig. 11). Modeling operation with four independent inputs, and an added example constraint of \( \lambda_{1} x_{1} + \lambda_{3} x_{3} \le \lambda_{2} x_{2} + \lambda_{4} x_{4} \). This is the long version The LHS is consumed by the input operating units, for each \( A_{i} \) in \( \lambda_{i} \) ratio. This ensures that there must be sufficient amount of the logical material for the LHS present. The RHS is produced by the input operating units, for each \( A_{i} \) in \( \lambda_{i} \) ratio. This ensures that there are the amounts of the RHS material produced in the actual value of the RHS of the constraint. The two logical material nodes are connected by a logical operating unit that consumes the RHS and produces the LHS, in a \( 1{:}1 \) ratio. This completes a loop of logical material flow from the operating units introduced for the input materials, ending in themselves. Observe that by the material flow, the RHS of the constraint is the only source of the logical material for the LHS, which is consumed in the exact amount of the LHS. Therefore, the introduced parts of the P-Graph ensure that the constraints are always true, but it also allows full freedom beyond this, because not all of the logical RHS material is required to be consumed. One property of ratio constraints is that all the \( \lambda_{i} \) flow ratios associated with the arcs can be scaled by any positive factor. This design allows us to implement some additional features. For example, a fixed cost can be introduced to the logical operating unit, which would reflect additional costs. However, if this is not needed, the design can be simplified into a single material node as follows. The two logical materials and the logical operating units are merged into a single logical material for the constraint (see Fig. 12). Modeling operation with four independent inputs, and an added example constraint of \( \lambda_{1} x_{1} + \lambda_{3} x_{3} \le \lambda_{2} x_{2} + \lambda_{4} x_{4} \). This is the simplified version with a single material node for the constraint Note that there are arcs from the logical material from and to the operating units of the inputs if those inputs appear in the right and left hand side of the constraints, respectively. If we investigate any of the two designs in terms of solution structures, we can observe that the usage of any input material present in the LHS of the constraint forces the logical material node for the constraint to be included in the solution structure, which also forces at least one of the input materials present in the RHS to be produced in the solution structure. This is again a good behavior, as infeasible solutions are structurally excluded, namely where some LHS materials are present, but RHS materials are not at all. One drawback of modeling flexible inputs shall be mentioned. The introduced logical nodes and operating units, if visible, may severely affect the readability of the graphical representation of the resulting P-Graph. 
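The simplified single-node version just described can be written as a small, hypothetical helper: the logical material of the constraint is consumed by the units of the left-hand-side materials and produced by the units of the right-hand-side materials, each in its own \( \lambda_{i} \) ratio. All names and coefficients below are invented for illustration.

```python
# Ratio constraint  sum_{i in L} lambda_i*x_i <= sum_{i in R} lambda_i*x_i,
# simplified form: one logical material node per constraint.
# Units of LHS materials CONSUME it, units of RHS materials PRODUCE it,
# so the RHS side is the only source of what the LHS side needs.

def add_ratio_constraint(units, lhs, rhs, tag="ratio"):
    logical = "logical_" + tag
    for material, li in lhs.items():
        units["feed_" + material]["inputs"][logical] = li    # LHS consumes
    for material, li in rhs.items():
        units["feed_" + material]["outputs"][logical] = li   # RHS produces
    return units

units = {
    "feed_A1": {"inputs": {"A1": 1.0}, "outputs": {"out": 1.0}},
    "feed_A2": {"inputs": {"A2": 1.0}, "outputs": {"out": 1.5}},
    "feed_A3": {"inputs": {"A3": 1.0}, "outputs": {"out": 0.8}},
    "feed_A4": {"inputs": {"A4": 1.0}, "outputs": {"out": 2.0}},
}
# Enforce lambda1*x1 + lambda3*x3 <= lambda2*x2 + lambda4*x4 (coefficients invented).
add_ratio_constraint(units, lhs={"A1": 2.0, "A3": 1.0}, rhs={"A2": 1.0, "A4": 3.0})
for name, unit in units.items():
    print(name, unit)
```

The construction costs nothing in the objective, but it does add one more node and several arcs to the drawn graph.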
This should be kept in mind, because one benefit of using the P-Graph framework over MILP models is the easier readability, especially for decision makers not familiar with optimization tools. General constraints So far examples were presented for constant terms, but only with linear terms on one side of the constraint. Ratio constraints were also treated, where there are no constant terms but linear terms at both sides. Now we can conclude the methods for the most general case of constraints, which is the following. $$ C_{min} + \mathop \sum \limits_{i \in L} \lambda_{i} x_{i} \le \mathop \sum \limits_{i \in R} \lambda_{i} x_{i} + C_{max} $$ Note that here \( C_{min} \) and \( C_{max} \) are positive constants, but for the sake of simplicity we may assume that at least one of them is zero. We also assume that the constraint is not trivially redundant or cause an infeasibility of the whole operation. The starting point is the case of independent inputs, where an operating unit for each input material \( A_{i} \) is already introduced, with unit consumption ratio from \( A_{i} \) and corresponding output ratio to the output material. The method for modeling with P-Graphs is the following. If \( C_{min} \) is present, introduce a logical material node as a product node in the P-Graph, with \( C_{min} \) as its minimum demand. If \( C_{max} \) is present, introduce a logical material node as an intermediate material, and a logical operating unit which outputs the logical material in a maximum amount of \( C_{max} \). If neither \( C_{min} \) nor \( C_{max} \) are present, introduce a single logical material node for the constraint. If an input material is present in the LHS of the constraint, add the logical material node as an input to the individual operating unit node introduced for that particular input material, with a \( \lambda_{i} \) ratio. If an input material is present in the RHS of the constraint, add the logical material node as an output from the individual operating unit node introduced for that particular input material, again with a \( \lambda_{i} \) ratio. The importance of a logical operating unit introduced is that it can have parameters supported by PNS software. For example, fixed and proportional investment and operating costs based on operating unit capacity can be provided, which makes \( C_{max} \) not only a constant but a tunable model variable. So far we assumed there is a single output material. However, it does not matter how many outputs are there, as each can be modeled the same way. Produced amounts depend directly and linearly on the input materials used, the formula is \( y = \mathop \sum \nolimits_{i = 1}^{n} v_{i} x_{i} \). This can be implemented easily, the material node for the output must be connected to each of the operating unit nodes introduced for input materials \( A_{i} \) with the corresponding ratio of \( v_{i} \). The same procedure must be repeated if there are multiple output materials. We now demonstrate the general method of modeling flexible input materials in a case study of the sustainable energy supply of a manufacturing plant. The plant has indoors heating and electricity demands throughout the year, which are originally bought from the public service provider. Decision makers considered the investment into the usage of renewable energy sources to improve energy efficiency of the plant, and achieve an overall reduction of annual costs in the long term. 
Considered alternatives for a sustainable energy supply are solar (photovoltaic) power plant for electricity and possibly heat production, biogas combined heat and power (CHP) plant and biogas furnace based on seven different sources: corn cobs, energy grass, wood, saw dust, wood chips, sunflower stem and vine stem (see Table 1). The former four of these seven types of biomass are needed to be pelletized first which implies an investment into a pelletizer machine. The pellets and other biomass can be fed into a biogas plant producing biogas, which is then used for producing heat and power. Altogether, there are five possible investments into equipment (see Table 2). The business as usual solution, which is purchasing gas from the service provider and burn it in a furnace, and purchasing electricity from the grid, are also kept in the model as possibilities. Table 1 Possible energy sources for the manufacturing plant: biomass, solar power, and purchase Table 2 Possible investments into equipment units considered for the case study This scenario had been studied for single-periodic (Éles et al. 2018), and extended for multi-periodic demands (Éles et al. 2019), as a PNS problem modeled and optimized by the P-Graph framework. The importance of this study is that the pelletizer and the biogas plant units have flexible inputs. One additional constraint is included in the scenario and the PNS model is resolved with the P-Graph framework in the present case study, while flexible input implementations are examined. These models were all implemented in P-Graph Studio v5.2, and solved by the ABB algorithm, on a Lenovo Y50-70 computer with Intel i7-4720HQ CPU and 8 GB RAM. The most straightforward P-Graph model for the case study is shown in Fig. 13. Both the biogas plant and the pelletizer are already modeled with the independent inputs method. However, as equipment capacities are also modeling variables, the output capacities method was also used. The biomass types are immediately converted to heating powers in the modeling point of view by their respective operating units, and therefore the capacity constraints are expressed in terms of total heating power. Most straightforward P-Graph model for the case study, working by implementation of completely independent inputs, and output capacity for both the pelletizer and the biogas plant The more adequate modeling requires that the biomass input capacities are expressed in terms of their total mass instead of heating power. This requires a unique input capacity constraint, and is done by the proposed method (see Fig. 14). A logical capacity material node and a logical operating unit producing this capacity material are introduced for both units. The logical operating unit has the data of the modeled equipment associated with it, namely fixed and proportional operating costs. This is a case when the upper bound \( C_{max} \) is itself a modeling variable, because the P-Graph Studio software supports the capacity of the operating unit to be scaled, and have a proportional cost. Independent input capacities implemented for both the biogas plant and the pelletizer. Note that the implementations are done jointly for the two equipment units, resulting in only seven inputs for both equipment units in total This scenario has a specialty which is apparent at the P-Graph. That is, the outputs of the pelletizer, which are the different types of pellets, have only one usage: being fed into the biogas plant. 
For this reason, those four input materials of the biogas plant are merged with the four output materials of the pelletizer, which results in a single operating unit node for each biomass type, serving both equipment units at the same time. Note that the investment and operating costs are implemented at the logical capacity production operating units in the PNS problem. The product of the biogas plant (or the pelletizer followed by the biogas plant) is biogas, expressed in terms of heating value, for all seven types of biomass. This model is extended by an additional constraint for demonstration purposes. For safety reasons, the management requested that the sustainable energy supply should not rely strongly on a single source of energy. One possible mitigation of this risk is that the ordinary options of purchasing gas and electricity from the grid are still present in the PNS. Another method, however, is to declare constraints explicitly. We therefore introduce a constraint on the biomass composition. In the optimal solution of the model without this constraint, 72 % of the input amount, by total mass, turned out to be energy grass. Hence we introduce a constraint that limits the share of energy grass to at most 70 %, and in a second variant to at most 50 %, of the total biomass used. This results in three models, including the original one, which we implemented, tested, and compared. Consider the 70 % ratio constraint, which can be written and then rearranged as follows. $$ \begin{aligned} x_{eg} & \le 0.7\left( {x_{wc} + x_{sd} + x_{ss} + x_{ws} + x_{cc} + x_{eg} + x_{w} } \right) \\ 3x_{eg} & \le 7x_{wc} + 7x_{sd} + 7x_{ss} + 7x_{ws} + 7x_{cc} + 7x_{w} \\ \end{aligned} $$ If the maximal contribution of energy grass to the input mass is reduced to 50 %, the constraint becomes the following. Any maximum share could be implemented in the same way. $$ x_{eg} \le x_{wc} + x_{sd} + x_{ss} + x_{ws} + x_{cc} + x_{w} $$ According to the general model for ratio constraints, the limit on energy grass can be implemented as a single logical material node (see Fig. 15). It is an input to the operating unit node for energy grass, and an output of the operating unit nodes for the other six types of biomass. In case of the 70 % limit, the flow rate for energy grass is 3, and for all other biomass types it is 7. In case of the 50 % limit, the flow rates are all 1. Model of the case study with the additional constraint limiting energy grass usage relative to other biomass types. Flow rates of the arcs depend on the maximum share allowed Note that with the constraint, each solution structure containing energy grass production must also contain at least one of the other six biomass types, due to the introduced logical material node of the constraint. We can compare the optimal solutions of the three cases, for the best solution structures (see Table 3). The optimal solution in all three cases relies on the Biogas CHP plant together with the required Biogas plant, but not the pelletizer, with the rest of the required electricity purchased from the grid. This gives an annual cost of 220.709 M HUF. For the subsequent solution structures, either a pelletizer or a solar power plant investment is required, which causes the jump in the objective values. Table 3 Optimal objective values for the three cases, in M HUF/year total costs We can observe that the solutions worsen slightly as the constraint becomes stricter.
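The rearrangement used above, from a share limit such as \( x_{eg} \le 0.7\left( \cdots \right) \) to the integer flow rates 3 and 7, works for any maximum share; a small sketch of that arithmetic follows (the helper name is invented here, not part of any P-Graph tool).

```python
# Deriving the arc flow rates for "material m is at most a fraction p of the
# total input mass": x_m <= p * (x_m + sum(others)) rearranges to
# (1 - p) * x_m <= p * sum(others). Scaling both sides gives integer ratios:
# p = 0.7 yields 3 : 7 and p = 0.5 yields 1 : 1, as used in the case study.

from fractions import Fraction
from math import gcd

def share_limit_ratios(p):
    """Return (lambda_for_limited_material, lambda_for_each_other_material)."""
    frac = Fraction(p).limit_denominator(1000)
    lhs, rhs = 1 - frac, frac                    # (1-p)*x_m <= p*sum(others)
    scale = lhs.denominator * rhs.denominator    # clear both denominators
    a, b = int(lhs * scale), int(rhs * scale)
    g = gcd(a, b)
    return a // g, b // g

print(share_limit_ratios(0.7))   # (3, 7)
print(share_limit_ratios(0.5))   # (1, 1)
print(share_limit_ratios(0.8))   # (1, 4) -- the earlier two-input example
```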
Using energy grass is still the most beneficial way of producing energy from biomass, however the limits force the input amounts of other types of biomass to increase, in other solution structures as well (see Fig. 16). Solution structures of the case study with the energy grass maximizing constraint added. a Optimal solution structure for all three cases, with different objective values. b Example solution structure involving other biomass types. A pelletizer is needed in this case The slight differences between the objective values are due to the fact that solution structures #2–#9 in all three cases vary mostly in the biomass type used, which is a smaller factor than equipment costs. Possible errors in the estimation of material costs therefore have a larger impact on the optimal structures than the actual objective values. Equipment costs are more significant. However, it just turns out that using either a pelletizer or a solar power plant in addition results in roughly the same objective as well. Results show that utilizing local biomass can be financially beneficial in the long term for the manufacturing plant. The model adequately represents the pelletizer and biogas plant technologies to handle flexible inputs, and we can also include additional constraints. The P-Graph methodology is capable of evaluating energy supply scenarios, provided that costs can be estimated and modeled with fixed and proportional components. Decision making is further supported by reporting near-optimal solution structures which can be manually investigated to discover alternatives. Note that a very long payback period of 20 years is considered for the investments. If a shorter payback period of 5 years is assumed, which is still considered long term in practice for investments, then the business as usual solution outperforms any kinds of investment strategies. The business as usual solution is the same solution structure in all scenarios, which uses the existing furnace for gas purchased and electricity is also obtained from the grid. In short, investment into solar and biomass energy supplies is beneficial in the long term, if the options and costs in the case study are assumed. A general method was presented to adequately model operations with flexible inputs in the P-Graph framework. The most important fact is that no external tools or modifications are used, only ordinary material and operating unit nodes. Therefore, the whole algorithmic framework designed upon P-Graphs can be directly applied on our modeling methods. Note that although the terminology uses the terms materials and operating units in a P-Graph, a wide range of optimization problems can also be modeled, including heat-exchanger network synthesis, separation network synthesis, scheduling, storage and transportation optimization problems. Several cases for complex operations with flexible inputs were presented here from simpler to more complicated ones. The first case was the completely independent input materials. The main idea at this step was the introduction of individual operating unit nodes for each input material, which is a design utilized for the other cases as well. Later, a total output limit was added. We also described how individual limits on the total inputs independent of the total output can be implemented, for both lower and upper limits. These required the introduction of logical intermediate material and operating unit nodes, or a product node based on being an upper or lower bound, respectively. 
Ratio constraints, where only the ratios of input materials are constrained, are also investigated. Expressing an arbitrary ratio with a convex sum of some predefined compositions is an elegant modeling tool for small cases, but turned out to have undesirable difficulties and redundancy for several inputs and complex search spaces. Instead, the introduction of a single logical material node was proposed to implement an arbitrary linear ratio constraint. Finally, a guide is established for modeling the general case for operations with flexible inputs, provided that certain assumptions can be made, for example linearity. The mentioned constraints are proven to be applicable in parallel, independent of each other in the same P-Graph model. Similarly to constraints, multiple different outputs of the operation can be provided. We also examined how the model affects solution structures of the PNS problem, and found that the proposed methods either do not introduce unwanted redundancy into the model, or even excludes some solution structures that are infeasible. This is a strong advantage, because the basic combinatorial algorithms MSG and SSG are also enhanced by the new circumstances. The usability of the flexible input modeling tools is demonstrated on a case study. Energy supplies often rely on a variety of sources, therefore input compositions are rarely fixed a priori. This is especially true for biomass and renewables. Therefore, sustainable energy supply problems are a good example for operations with flexible input material ratios. If modeled and optimized using the P-Graph framework, the proposed modeling techniques can be directly used. In the study, mass-based capacity constraints were implemented for the pelletizer and biogas plant equipment units, which are independent of the energy output. Also, the contribution of energy grass to the total usage was limited, which demonstrates how ratio constraints can be implemented. The tools provided here could be of good use in future applications, when P-Graph models are used for PNS problems. There is still work to be made to describe modeling tools for other operations, or possibly give alternative solutions for already described methods. These research directions could also pinpoint possible new features in the P-Graph Studio to be implemented. Aviso KB, Tan RR (2018) Fuzzy P-Graph for optimal synthesis of cogeneration and trigeneration systems. Energy 154:258–268 Aviso KB, Lee JY, Dulatre JC, Madria VR, Okusa J, Tan RR (2017) A P-Graph model for multi-period optimization of sustainable energy systems. J Clean Prod 161:1338–1351 Aviso KB, Belmonte BA, Benjamin MFD, Arogo JIA, Coronel ALO, Janairo CMJ, Foo DCY, Tan RR (2019a) Synthesis of optimal and near-optimal biochar-based carbon management networks with P-Graph. J Clean Prod 214:893–901 Aviso KB, Chiu ASF, Demeterio FPA III, Lucas RIG, Tseng ML, Tan RR (2019b) Optimal human resource planning with P-graph for universities undergoing transition. J Clean Prod 224:811–822 Barany M, Bertok B, Kovacs Z, Friedler F, Fan LT (2011) Solving vehicle assignment problems by process-network synthesis to minimize cost and environmental impact of transportation. Clean Technol Environ 13:637–642 Bartos A, Bertok B (2018) Parameter tuning for a cooperative parallel implementation of process-network synthesis algorithms. Cent Eur J Oper Res 27:551 Bartos A, Bertok B (2019) Production line balancing by P-Graphs. 
Optim Eng 21:567–584 Benjamin MFD (2018) Multi-disruption criticality analysis in bioenergy-based eco-industrial parks via the P-Graph approach. J Clean Prod 186:325–334 Bertok B, Bartos A (2018) Algorithmic process synthesis and optimisation for multiple time periods including waste treatment: latest developments in P-Graph studio software. Chem Eng Trans 70:97–102 Cabezas H, Heckl I, Bertok B, Friedler F (2015) Use the P-Graph framework to design supply chains for sustainability. Chem Eng Prog 111:41–47 Cabezas H, Argoti A, Friedler F, Mizsey P, Pimentel J (2018) Design and engineering of sustainable process systems and supply chains by the P-Graph framework. Environ Prog 37:624–636 Éles A, Halász L, Heckl I, Cabezas H (2018) Energy consumption optimization of a manufacturing plant by the application of the P-Graph framework. Chem Eng Trans 70:1783–1788 Éles A, Halász L, Heckl I, Cabezas H (2019) Evaluation of the energy supply options of a manufacturing plant by the application of the P-Graph framework. Energies 12:1484 Friedler F, Tarjan K, Huang YW, Fan LT (1992) Graph-theoretic approach to process synthesis: axioms and theorems. Chem Eng Sci 47:1973–1988 Friedler F, Tarjan K, Huang YW, Fan LT (1993) Graph-theoretic approach to process synthesis: polynomial algorithm for the maximal structure generation. Comput Chem Eng 17:929–942 Friedler F, Varga BJ, Fan LT (1995) Decision-mapping: a tool for consistent and complete decisions in process synthesis. Chem Eng Sci 50:1755–1768 Friedler F, Varga BJ, Fehér E, Fan LT (1996) Combinatorially accelerated branch-and-bound method for solving the MIP model of process network synthesis. In: Floudas CA, Pardalos PM (eds) State of the art in global optimization. Kluwer Academic Publishers, Dordrecht, pp 609–626 Frits M, Bertok B (2014) Process scheduling by synthesizing time constrained process-networks. Comput Aided Chem Eng 33:1345–1350 Heckl I, Friedler F, Fan LT (2010) Solution of separation-network synthesis problems by the P-Graph methodology. Comput Chem Eng 34:700–706 Heckl I, Halasz L, Szlama A, Cabezas H, Friedler F (2015) Process synthesis involving multi-period operations by the P-Graph framework. Comput Chem Eng 83:157–164 How BS, Hong BH, Lam HL, Friedler F (2016) Synthesis of multiple biomass corridor via decomposition approach: a P-Graph application. J Clean Prod 130:45–57 How BS, Yeoh TT, Tan TK, Chong HK, Ganga D, Lam HL (2018) Debottlenecking of sustainability performance for integrated biomass supply chain: P-Graph approach. J Clean Prod 193:720–733 International Energy Agency (2019) Electricity statistics. www.iea.org/statistics/electricity. Accessed 20 Feb 2019 Kalaitzidou MA, Georgiadis MC, Kopanos GM (2016) A general representation for the modeling of energy supply chains. Comput Aided Chem Eng 38:781–786 Kalauz K, Süle Z, Bertok B, Friedler F, Fan LT (2012) Extending process-network synthesis algorithms with time bounds for supply network design. Chem Eng Trans 29:259–264 Klemeš JJ, Varbanov P (2015) Spreading the message: P-Graph enhancements: implementations and applications. Chem Eng Trans 45:1333–1338 Lam HL (2013) Extended P-Graph applications in supply chain and process network synthesis. Curr Opin Chem Eng 2:475–486 Lam HL, Varbanov P, Klemeš JJ (2010) Optimisation of regional energy supply chains utilising renewables: P-Graph approach. 
Comput Chem Eng 34:782–792 Lam HL, Tan RR, Aviso KB (2016) Implementation of P-Graph modules in undergraduate chemical engineering degree programs: experiences in Malaysia and the Philippines. J Clean Prod 136:254–265 Lam HL, Chong KH, Tan TK, Ponniah GD, Tin YT, How BS (2017) Debottlenecking of the integrated biomass network with sustainability index. Chem Eng Trans 61:1615–1620 Lan J, Malik A, Lenzen M, McBain D, Kanemoto K (2016) A structural decomposition analysis of global energy footprints. Appl Energ 163:436–451 Nemet A, Klemeš JJ, Duić N, Yan J (2016) Improving sustainability development in energy planning and optimisation. Appl Energy 184:1241–1245 Okusa JS, Dulatre JCR, Madria VRF, Aviso KB, Tan RR (2016) P-Graph approach to optimization of polygeneration systems under uncertainty. In: Proceedings of the DLSU research congress, Manila, Philippines, 7–9 March 2016 Orosz A, Kovacs Z, Frieder F (2018) Processing systems synthesis with embedded reliability consideration. Comput Aided Chem Eng 43:869–874 P-Graph Studio (2019) www.pgraph.org. Accessed 31 Jan 2019 Promentilla MAB, Lucas RIG, Aviso KB, Tan RR (2017) Problem-based learning of process systems engineering and process integration concepts with metacognitive strategies: the case of P-Graphs for polygeneration systems. Appl Therm Eng 127:1317–1325 Saavedra MR, Fontes CHO, Freires FGM (2018) Sustainable and renewable energy supply chain: a system dynamics overview. Renew Sustain Energy Rev 82:247–259 Süle Z, Baumgartner J, Gy D, Abonyi J (2019) P-graph-based multi-objective risk analysis and redundancy allocation in safety-critical energy systems. Energy 179:989–1003 Szlama A, Heckl I, Cabezas H (2016) Optimal design of renewable energy systems with flexible inputs and outputs using the P-Graph framework. AIChE J 62:1143–1153 Tan RR, Cayamanda CD, Aviso KB (2014) P-Graph approach to optimal operational adjustment in polygeneration plants under conditions of process inoperability. Appl Energy 135:402–406 Tan RR, Aviso KB, Foo DCY (2017) P-Graph and Monte Carlo simulation approach to planning carbon management networks. Comput Chem Eng 106:872–882 US Energy Information Administration (2018) First use of energy for all purposes (fuel and nonfuel). www.eia.gov/consumption/manufacturing/data/2010/pdf/Table1_1.pdf. Accessed 16 Mar 2018 Vance L, Heckl I, Bertok B, Cabezas H, Friedler F (2015) Designing sustainable energy supply chains by the P-Graph method for minimal cost, environmental burden, energy resources input. J Clean Prod 94:144–154 Varga V, Heckl I, Friedler F, Fan LT (2010) PNS solutions: a P-Graph based programming framework for process network synthesis. Chem Eng Trans 21:1387–1392 Worldwatch Institute (2018) The state of consumption today. www.worldwatch.org/node/810. Accessed 16 Mar 2018 Open access funding provided by University of Pannonia (PE). We acknowledge the financial support of Széchenyi 2020 programme under the Project No. EFOP-3.6.1-16-2016-00015. This research was supported from the Thematic Excellence Program 2019 the Grant of the Hungarian Ministry for Innovation and Technology. (Grant Number: NKFIH-843-10/2019). Department of Computer Science and Systems Technology, University of Pannonia, Veszprém, Hungary András Éles & István Heckl Laudato Si Institute for Process Systems Engineering and Sustainability, Pázmány Péter Catholic University, Budapest, Hungary Heriberto Cabezas András Éles István Heckl Correspondence to András Éles. Éles, A., Heckl, I. & Cabezas, H. 
Modeling technique in the P-Graph framework for operating units with flexible input ratios. Cent Eur J Oper Res 29:463–489 (2021). https://doi.org/10.1007/s10100-020-00683-9. Issue date: June 2021. Keywords: P-Graph framework; Process network synthesis; Operating units; Flexible inputs.
Volume and its relationship to cardiac output and venous return S. Magder Critical Care volume 20, Article number: 271 (2016) An Erratum to this article was published on 26 January 2017 Volume infusions are one of the commonest clinical interventions in critically ill patients, yet the relationship of volume to cardiac output is not well understood. Blood volume has a stressed and an unstressed component, but only the stressed component determines flow. It is usually about 30 % of total volume. Stressed volume is relatively constant under steady state conditions. It creates an elastic recoil pressure that is an important factor in the generation of blood flow. The heart creates circulatory flow by lowering the right atrial pressure and allowing the recoil pressure in veins and venules to drain blood back to the heart. The heart then puts the volume back into the systemic circulation so that stroke return equals stroke volume. The heart cannot pump out more volume than comes back. Changes in cardiac output without changes in stressed volume occur because of changes in arterial and venous resistances, which redistribute blood volume and change pressure gradients throughout the vasculature. Stressed volume also can be increased by decreasing vascular capacitance, which means recruiting unstressed volume into stressed volume. This is the equivalent of an auto-transfusion. It is worth noting that during exercise in normal young males, cardiac output can increase five-fold with only small changes in stressed blood volume. The mechanical characteristics of the cardiac chambers and the circulation thus ultimately determine the relationship between volume and cardiac output and are the subject of this review. Ernest Starling [1] recognized at the turn of the last century that the heart can only pump out what comes back to it. This concept was later developed further by Arthur Guyton [2] and Solbert Permutt [3, 4]. The significance of this concept is the implication that the mechanical characteristics of the circulation are major determinants of cardiac output. The key mechanical terms that need to be understood are compliance, capacitance, stressed volume, and resistance to flow. This review begins with definitions of these terms. Next, the determinants of flow are presented in a simple model that lumps all compliances of the circuit in one region, into what is called a lumped parameter model. More complex models are then discussed. These include the compliant regions that exist between the two ventricles in pulmonary vessels and a model, known as the Krogh model [5], in which the systemic circulation has two systemic venous compliant regions in parallel, one with a high compliance and the other with a low compliance. Lastly, the implications of the mechanics of the circulation for the clinical use of fluids and drugs to support the circulation are discussed. These concepts have been reviewed previously [6, 7], but in this paper the emphasis will be on the role of volume as a determinant of cardiac output because so much of the management of critically ill patients revolves around infusions of volume. Constant volume A central axiom in the circulation is that vascular volume is constant under steady state conditions. This volume stretches the elastic walls of vascular structures and creates an elastic recoil force that is present even when there is no flow but is also a key determinant of flow [4, 8].
The potential energy of this elastic recoil becomes evident when there is no flow in the circulation and large veins are opened to atmospheric pressure. Vascular volume empties from the veins even without cardiac contractions. The heart adds a pulsatile component to this static potential energy which redistributes the volume according to the compliances and resistances entering and draining each elastic compartment of the circulation. It may sound obvious that vascular volume is constant but this point is often neglected. A key example is the use of electrical circuitry to model circulatory flow. Electrical models are based on Ohm's law, which says that the difference in charge in volts (V) equals the product of the current (I), which is the amount of electrons per time, and resistance (R), which is the energy loss due to the flow of electrons through the conducting substance. In hydrodynamics, the study of flowing liquids such as in the circulation, pressure is the equivalent of voltage, cardiac output in liters per minute is the equivalent of current, and resistance is the frictional loss of energy due to the interactions of the layers of the moving fluid with the vessel wall. In electrical models voltage is determined by the decrease in charge from a fixed source, such as a wall socket or battery to a ground value. An increase in resistance or change in voltage across the system changes the amount of electrons in the system. In the vasculature this change in number of electrons is the equivalent of a change in volume. In contrast to an electrical system, the vascular volume is constant in the circulation and the pressure drop, the equivalent of the voltage gradient, changes with changes in resistance or volume. The output from the heart shifts volume to the arterial compartment and creates an arterial pressure which is dependent upon the total vascular resistance. It is thus the volume per time, i.e. cardiac output coming out of the heart, that determines arterial pressure rather than the arterial pressure determining the volume per time or flow in the system. Compliance is a measure of the distensibility of a spherical structure and is determined by the change in volume for a change in pressure. A simple example is inflation of a balloon with a known volume and then measuring the change in pressure across the wall. It may seem surprising that this static property is a key determinant of flow, which is a dynamic state. The importance of compliance is that the elastic recoil force created by stretching the walls of vascular structures creates a potential force that can drive flow when the downstream pressure is lower. Second, compliance is necessary to allow pulsatile flow through a closed circuit (Fig. 1). Cardiac contractions create a volume wave that moves through the vasculature. The walls of vessels must be able to stretch in order to transiently take up the volume. The pressure created by the stretch of vascular walls moves the volume on to the next vascular segment with a lower pressure. If vascular walls were all very stiff, pressure generated by a pump at one end would be instantaneously transmitted throughout the vasculature. The pressure would then be equal at the start and end of the circuit and there would be no pressure gradient for flow. The importance of a compliant region in the circulation. a A bellows trying to pump fluid around a system with stiff pipes and no compliance. 
Flow is not possible because pressing on the bellows instantly raises the pressure everywhere and there is no pressure gradient for flow. b An open compliant region which allows changes in volume for changes in pressure. Flow can occur and there are pulsations throughout. c The compliant region is much larger than in (b). The pulsations are markedly dampened and only produce ripples on the surface of the compliant region.

The total compliance of a series of compliant regions is the sum of the compliances of all the parts. The compliance of small venules and veins is almost 40 times greater than that of arterial vessels [9, 10] and large veins; capillaries have an even smaller compliance; and the compliance of the pulmonary arterial and venous compartments is about one-seventh that of the systemic circulation [11]. Thus, the total compliance of the circulation is dominated by the compliance of the systemic veins and venules, which contain over 70 % of total blood volume at low pressure. Because most of the compliance resides in this one region, for a first approximation the circulation can be considered as having one compliance lumped in the veins and venules. This simplification creates an approximate 10 % error in the prediction of changes in flow with changes in volume but it makes the mathematics much simpler. The implications of this simplification will be discussed later. Of note, if the question is what determines arterial pulse pressure rather than cardiac output, the small arterial compliance is the key value to consider and total vascular compliance is not important.

As already discussed, when the vasculature is filled with a normal blood volume but there is no flow, the vasculature still has a pressure and this pressure is the same in all compartments of the circulation. It is called mean circulatory filling pressure (MCFP) and is determined by the total stressed volume in the circulation and the sum of the compliances of all regions, including the pulmonary and cardiac compartments [12].

By reducing the volume in the right heart and lowering diastolic pressure, the beating heart allows the elastic recoil pressure in veins and venules to drain blood back to the heart. The heart thus acts in a "permissive" role by allowing the recoil force that is already present in the veins and venules to act. The heart also has a "restorative" role in that it puts the blood that drained from the veins and venules back into the systemic compartment. With each beat in the steady state, a stroke return is removed from the vena cava to refill the right ventricle and an equal amount of volume is added back to the arterial side as stroke volume. On the venous side stroke return can continue during the whole cycle because the atria take up volume even when the ventricles are ejecting, but on the arterial side the stroke volume only occurs during systole. The stroke volume is all the volume that moves through the system on each beat.

Some argue that instead of the heart merely being permissive and just allowing venous recoil, flow through the circulation occurs because the volume pumped out by the heart creates an increase in arterial pressure that drives the flow through the system and even determines the right atrial pressure [13, 14]. However, this type of reasoning ignores a number of issues. Flow occurs from areas of high pressure to areas of lower pressure. The flow back to the heart occurs from the upstream veins and venules.
The pressure in this region is determined by the volume these vessels contain and the compliance of their walls. As already noted this region contains the bulk of circulating volume and the heart has little volume that it can add to the veins and venules and cannot significantly increase the pressure in these vessels. Thus, the pressure in this region remains relatively constant and flow occurs by lowering the downstream right atrial pressure through the actions of the right heart rather than by increasing the upstream pressure. If the heart rate were limited to one beat per minute, the pressure in the system would equilibrate in all regions before the next beat. The arterial pulse pressure is created by the single stroke volume ejected by the heart and the resistance to its drainage from the arterial compartment and the volume remaining in the aorta at the end of diastole. Widely different stroke volumes can be observed with the same arterial pressure, depending upon the arterial resistance. For example, in sepsis the cardiac output is high and blood pressure is low. During exercise cardiac output can increase more than fivefold with little change in mean arterial pressure [15]. A useful analogy for understanding the importance of the large compliance in veins and venules, and why the pressure produced by the heart is not important for the return of blood, is that of a bathtub [16]. The rate of emptying of a bathtub is dependent upon the height of water above the opening at the bottom of the tub. The height of water creates a hydrostatic pressure due to the mass of the water and the force of gravity on its mass, which pushes the water through the resistance draining the tub. However, the flow out of the tub is not affected by the pressure coming out of the tap. The tap can only alter the outflow from the drain by adding volume and increasing the height of water in the tub. Only the volume flowing into the tub per minute is important for outflow and not the inflow pressure. Over short time periods the flow from the tap has very little effect on the height of water because the surface of the tub is very large compared with the height of water; that is, the tub is very compliant. The same is true in the circulation. Arterial pressure flowing into veins and venules does not affect the flow out of the veins. As in the bathtub, only the liters per minute flowing from the arteries into the veins affects how the veins and venules empty. Furthermore, a bathtub has an unlimited upstream source of volume that can be added to it but in the circulation there is very little other volume that the heart can add to the veins and venules to change their recoil pressure because they already contain the bulk of vascular volume. To take the analogy further, if the bathtub is filled to the brim, any additional volume just flows over the edge of the bathtub and does not change flow out of the drain. The equivalent of this in the body is what occurs when veins and venules are overfilled, whether by clinical intervention or retention of volume through renal mechanisms. The increase in venous pressure increases leakage from the upstream capillaries into the interstitial space and is like spilling the volume on the floor with only minimal changes in venous return. If a sink were considered instead of a bathtub, the inflow from the tap would have a much greater effect on outflow because the sink is effectively much less compliant than a bathtub. 
A much smaller change in volume is needed to increase the height of water in the sink and therefore the outflow. Later, I will discuss the significance of having the equivalence of a bathtub and sink in parallel and the potential to change the distribution of flow going to each of them in what is called the Krogh model, which was first described in 1912 [5]. In electrical models capacitance is the equivalent of compliance but in hydraulic models capacitance means something very different [17–21]. Elastic structures have a resting length that is not stretched. In vessels, too, a volume is needed for the structure to be round but only the volume beyond this resting length produces tension in elastic walls of vascular structures. The volume that stretches the walls is called stressed volume and the rest is called unstressed volume [19, 22]. In the circulation with minimal sympathetic tone, 25 to 30 % of total blood volume is stressed and the rest is unstressed. Thus, in someone with a total blood volume of 5.5 L, only about 1.3 to 1.4 L of blood actually stretches the walls and produces the recoil force. Capacitance is the total contained volume that can be contained at a given pressure and includes unstressed and stressed volume. Importantly, capacitance can be changed by contraction or relaxation of vascular smooth muscles. A decrease in capacitance occurs when vascular smooth muscles of veins and venules shorten. This recruits unstressed into stressed volume so that total volume is at a higher pressure [17]. With a normally filled vasculature in a 70 kg male, extreme sympathetic activation can recruit as much as 18 ml/kg of unstressed into stressed volume [18, 19, 23–25]. A more moderate 10 ml/kg recruitment would expand stressed volume by 700 ml and almost double the stressed volume. This increase in stressed volume occurs in seconds for it is a reflex process. Removal of sympathetic drive can equally remove this equivalent amount of stressed volume in seconds and lead to a marked fall in MCFP [9, 26–28]. This is called an increase in vascular capacitance. A de-recruitment of volume of 10 ml/kg in this example would decrease MCFP by half. The importance of this for the regulation of cardiac output will be discussed later. Resistance accounts for the frictional loss of energy as blood flows through the vasculature and is calculated from the difference between the inflow pressure and the outflow pressure divided by the flow. The circulation can be considered to be divided into a series of compliant regions that go from one to another through a resistance. Under flow conditions, the pressure in each region is determined by the volume and compliance of that region. The right heart is fed by the large compliant venules and small veins. The pressure in this region is called mean systemic filling pressure (MSFP) instead of MCFP, for it is only related to the volume in the systemic veins. MSFP must equal MCFP under the condition of no flow. Under normal flow conditions they usually are quite similar because the systemic veins dominate total compliance but, depending upon the relative functions of the right and left ventricles and changes in circuit resistances, MSFP can be either higher or lower than MCFP. The working cardiac output, venous return, and right atrial pressure are determined by the interaction of the function of the heart, or cardiac function, and the function of venous drainage, i.e., venous return (Fig. 2). 
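The stressed-volume arithmetic described above can be made concrete with a short worked example. The numbers simply restate the figures quoted in the text (a 5.5 L blood volume of which roughly a quarter is stressed, and a 10 ml/kg recruitment in a 70 kg subject); they are illustrative rather than measured values:

$$ V_{stressed}\approx 0.25\times 5.5\ \mathrm{L}\approx 1.4\ \mathrm{L} $$

$$ \Delta V_{recruited}=10\ \mathrm{ml/kg}\times 70\ \mathrm{kg}=700\ \mathrm{ml},\qquad V_{stressed}\rightarrow \approx 2.1\ \mathrm{L} $$

Because MCFP is stressed volume divided by total compliance, recruiting unstressed volume with an unchanged total blood volume raises MCFP in proportion, which is why a decrease in capacitance behaves like an auto-transfusion. With these magnitudes in mind, the working point set by the interaction of the cardiac and return functions can be found graphically.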
Arthur Guyton appreciated that this intersection value can easily be solved graphically by plotting right atrial pressure on the x-axis and flow on the y-axis (Fig. 3) [2, 29]. In a series of animal experiments that allowed separation of the heart and circuit, Guyton and colleagues showed that progressively lowering right atrial pressure produces a linear increase in venous return up to a maximum value, which will be discussed later [29]. The slope of this line is minus one over the resistance draining the veins and the x-intercept is MCFP. A plot of cardiac function on the same graph gives the intersection value of the two functions (Fig. 4). Note that the cardiac function curve starts from a negative value on the graph. This is because when the heart is empty its transmural pressure is equal to the surrounding pressure, which is pleural pressure and not atmospheric pressure. During spontaneous breathing pleural pressure is negative at end-expiration and at functional residual capacity. Cardiac output is determined by the interaction of a cardiac function and a return function. MSFP is mean systemic filling pressure, Rv is the resistance to venous return, and Pra is right atrial pressure Guyton's graphical analysis of the return function. a When right atrial pressure (Pra) equals MSFP, flow in the system is zero. b Flow occurs when the cardiac function lowers Pra with a linear relationship between flow and Pra. The slope is minus one over the resistance to venous return (Rv) Guyton's approach to solving the intersection of the return function (venous return curve) and cardiac function curve. Since these two functions have the same axes, they can be plotted on the same graph. Where they intersect gives the working right atrial pressure (Pra), cardiac output, and venous return for the two functions Limits of cardiac and return functions Venous return becomes limited when the pressure inside the great veins is less than the pressure outside their walls because the floppy walls of veins collapse and produce what is called a vascular waterfall or a flow limitation (Fig. 5) [30]. This occurs at atmospheric pressure when breathing spontaneously. When venous collapse occurs, further lowering right atrial pressure does not increase venous return [31]. This means that the best the heart can do to increase cardiac output is lower right atrial pressure to zero (Fig. 5). Maximum venous return and cardiac output are defined by the MSFP, which is determined by stressed volume in the veins and venules and the compliance of these structures and by the resistance to venous return. Maximum flow would occur, albeit for an instant, if the heart were suddenly taken out of the circulation, indicating that the heart gets in the way of the return of blood by creating a downstream pressure greater than zero. In patients mechanically ventilated with positive end-expiratory pressure, flow limitation occurs at a positive value and not zero [32]. When venous return is limited, cardiac output can only be increased by increasing MSFP by a volume infusion, by decreasing capacitance and recruiting unstressed into stressed volume without a change in total volume (Fig. 6), or by decreasing the resistance to venous return. Limit of the return function. When the pressure inside the great veins is less than the surrounding pressure (which is zero when breathing at atmospheric pressure), the vessels collapse and there is flow limitation. Lowering right atrial pressure (Pra) further does not increase flow. 
Maximum venous return (VRmax) is then dependent upon MSFP and venous resistance (Rv). The heart cannot create a flow higher than this value.

Change in cardiac output and venous return with a decrease in capacitance. A decrease in capacitance is the same as lowering the opening on the side of a tub for it allows more volume to flow out, which is the equivalent of more volume being stressed. Graphically it results in a leftward shift of the volume–pressure relationship of the vasculature (upper left). This shifts the venous return curve to the right and increases cardiac output through the Starling mechanism (lower left). This effect is identical to giving volume to expand stressed volume. Pra right atrial pressure

The cardiac function curve, too, has a limit. This occurs because of the limit to cardiac filling due to restraint by the pericardium [33] or, if there is no pericardium, by the cardiac cytoskeleton itself. In pathological situations mediastinal structures can also impose limits on cardiac filling. When cardiac filling is limited, increasing diastolic filling pressure does not increase stroke volume because there is no change in end-diastolic volume and thus no increase in sarcomere length. Under this condition cardiac output can only be increased by an increase in heart rate or cardiac contractility or by a decrease in afterload. When cardiac filling is limited, cardiac output is independent of changes in the venous return function.

Cardiac contractions and flow of blood

Blood flows down an energy gradient, which generally means that blood flows from an area of high pressure to an area with a lower pressure. As already discussed, pressure in each compartment is determined by the compliance of the wall of the compartment and the volume it contains. As discussed elsewhere in this series, ejection of blood by the heart occurs by what is called a time-varying elastance, which means that the elastance of the walls of the cardiac chambers markedly increases during systole. Conversely, the compliance of the heart markedly decreases during systole. This greatly increases the pressure in the volumes contained in the ventricles. The peak pressure achieved in the ventricle by the cyclic decreases in compliance depends upon where the volume is along the ventricular end-systolic volume–pressure line, the pressure in the aorta at the onset of ventricular contraction, and how easy it is for blood to flow out of the aorta, which is dependent upon the downstream arterial resistance and critical closing pressures [34]. This produces a pulse pressure which is determined by the elastance of the aorta [35]. The increase in aortic volume stretches its walls and creates a pressure gradient from the arteries to the veins. The consequent change in volume in the veins minimally increases the pressure in that region (i.e., MSFP) because it is so compliant (Fig. 1). During diastole the compliances of the right and left heart markedly increase and the pressures in the ventricles markedly fall because there is much less volume than before the systolic decrease in compliance. Since there is little change in MSFP and a marked drop in the downstream right atrial pressure, it is evident that the major factor affecting the return of blood to the heart is the lowering of right atrial pressure by the action of the right heart and not the trivial change in MSFP or the pressure in the aorta.
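The graphical intersection described above can also be found numerically. Below is a minimal Python sketch, not taken from the article, that intersects a linear venous return curve with a simple saturating cardiac function curve; the curve shapes and every parameter value (MSFP, the venous resistance Rv, and the cardiac-curve constants) are assumptions chosen only to illustrate the method.

import math

def venous_return(pra, msfp, rv=1.6):
    # Guyton venous return curve (L/min): linear in Pra, flow-limited below Pra = 0
    return (msfp - max(pra, 0.0)) / rv

def cardiac_function(pra, q_max=9.0, k=4.0, pra0=-2.0):
    # Saturating Starling-type cardiac function curve (L/min); shape is illustrative
    return 0.0 if pra <= pra0 else q_max * (1.0 - math.exp(-(pra - pra0) / k))

def working_point(msfp, lo=-2.0, hi=20.0, tol=1e-6):
    # Bisection on the difference of the two curves; returns (Pra, cardiac output)
    f = lambda p: cardiac_function(p) - venous_return(p, msfp)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    pra = 0.5 * (lo + hi)
    return pra, cardiac_function(pra)

for msfp in (10.0, 12.0):  # mmHg; 12 mmHg mimics an increase in stressed volume
    pra, out = working_point(msfp)
    print(f"MSFP {msfp:.0f} mmHg -> Pra {pra:.2f} mmHg, output {out:.2f} L/min")

With these assumed values, raising MSFP from 10 to 12 mmHg shifts the venous return line to the right and moves the working point to a slightly higher right atrial pressure and a higher output, which is the behaviour the text attributes to a volume infusion or to a decrease in capacitance.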
Time constants The rhythmic pulsations produced by the time-varying elastances of the ventricles produce important limitations to blood flow. When a step increase is made in flow to a compliant system with an outflow resistance, the pressure does not increase with a step change but rather rises to the new value exponentially. The rate of rise is determined by what is called a time constant (τ) [3, 4], which is the time it takes to get to 63 % of the new steady state and is determined by the product of compliance and resistance draining the region. This means that if there is not enough time in the cycle to reach the new steady state, volume will be trapped in the upstream compartment. Thus, besides just pressure gradients, heart rate becomes a factor in cardiac filling and emptying. The equation for venous return (VR) is: $$ VR=\frac{MSFP-Pra}{Rv} $$ where MSFP is mean systemic filling pressure, Pra is right atrial pressure, and Rv is venous resistance. MSFP is determined by stressed volume (γ) divided by venous compliance (Cv). This can be substituted into Eq. 1 and rearranged to give: $$ VR=\frac{\gamma -Pra\times Cv}{RvCv} $$ where Pra is right atrial pressure. Pra × Cv can be considered as a residual volume left in the ventricle. Since τ equals the product of Rv and Cv: $$ VR=\frac{\gamma -Pra\times Cv}{\tau } $$ This indicates that venous return is determined by stressed volume and the time constant of venous drainage. If cardiac filling time is too short for the returning volume to fill the heart in the time available, the upstream volume and thus pressure must increase or τ must be reduced to maintain the same flow. Compliance of vessels does not change, at least over short periods of time, so that τ can only decrease by a decrease in venous resistance. All this assumes that the limit is due to too short a filling time and that there is not already a limit imposed by the steep portion of the diastolic filling curve. During normal circulatory adjustments to high flow needs, as occur during aerobic exercise, reduction in venous resistance and the distribution of flow as described in the Krogh model below are necessary to allow greater rates of venous return. These occur by matching changes in regional resistances to metabolic activity. This co-ordination of resistances does not occur properly in sepsis and could explain why volume needs to be used to increase MSFP in sepsis to allow for a sufficient cardiac output to match the fall in arterial resistance during distributive shock. Pulmonary compliance and volume shifts between systemic and pulmonary circuits So far in this review cardiac function has been considered as one unit starting from the right atrium and exiting from the aortic valve. Pulmonary vessels and independent functions of the right and left ventricle have not been considered. This simplification normally produces a small error because total pulmonary compliance is only one-seventh of total systemic vascular compliance [11] and the pulmonary circuit does not contain a lot of volume that can be shifted to the systemic circulation. It also cannot take up a lot of volume without causing a large increase in pulmonary venous pressure and a major disturbance to pulmonary gas exchange. Even maximal sympathetic stimulation results in only a small shift from the pulmonary circuit to the systemic circulation [36]. 
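Returning for a moment to the venous return equations above: the two forms give the same flow for any consistent set of numbers, and illustrative values (assumed here purely for arithmetic, not measured) make the magnitudes concrete. With a stressed volume γ of 1.4 L, a venous compliance Cv of 0.14 L/mmHg, a right atrial pressure of 2 mmHg, and a venous resistance Rv of 1.6 mmHg·min/L:

$$ MSFP=\frac{\gamma }{Cv}=\frac{1.4}{0.14}=10\ \mathrm{mmHg},\qquad \tau =Rv\times Cv=1.6\times 0.14=0.224\ \mathrm{min}\approx 13\ \mathrm{s} $$

$$ VR=\frac{MSFP-Pra}{Rv}=\frac{10-2}{1.6}=5\ \mathrm{L/min}=\frac{\gamma -Pra\times Cv}{\tau }=\frac{1.4-0.28}{0.224} $$

The same compliance also shows why large boluses have a limited effect: adding 1 L of stressed volume to this system would raise MSFP by 1/0.14, roughly 7 mmHg, a point taken up again under volume therapy below. Shortening τ, which in practice means lowering venous resistance, is the other way a given stressed volume can sustain a higher flow. These numbers describe the lumped systemic circuit; as just noted, the pulmonary vessels hold comparatively little volume that can be shifted into it.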
However, the small volume reserves in the pulmonary vasculature become important during the variation in pulmonary flow during ventilation, especially during mechanical ventilation and increases in pleural pressure. The normal gradient for venous return is only in the range of 4 to 8 mmHg. Because the heart is surrounded by pleural pressure and not atmospheric pressure, an increase in pleural pressure of 10 mmHg during a mechanical breath would cut the inflow to the heart to zero and there would be no stroke volume on beats at peak inspiration [37]. One would thus expect marked variations in left-sided output and arterial pressure during mechanical ventilation, but normally they are moderate. This is because the volume in the compliant component of the pulmonary vasculature provides a reservoir that can sustain left heart filling for the few beats that are necessary during the peak inspiratory pressure. This can be called pulmonary buffering [37].

The compliant compartment of the pulmonary circulation is also important under two pathological conditions. When there is a disproportionately greater decrease in left heart function than right heart function, stressed volume accumulates in the pulmonary circuit because higher filling pressures are needed by the left heart to keep up with the output of the more efficient right heart [11, 38]. In modeling studies without reflex adjustments and failure of the right heart, this leads to a rise in pulmonary venous pressure, which is the upstream reservoir for the left ventricle, and a decrease in MSFP [37]. The fall in MSFP reduces venous return and cardiac output. This would be hard to detect in a patient because fluid retention by the kidney, reflex adjustments, or exogenous fluid administration increase total blood volume and restore MSFP and cardiac output. Accumulation of volume in the pulmonary vasculature is especially a problem in patients with marked left ventricular diastolic dysfunction. In these cases the left-sided filling pressure needs to be higher than normal to maintain adequate stroke volume and cardiac output to perfuse vital organs such as the kidney. However, the higher left ventricular filling pressure increases pulmonary capillary filtration and leads to pulmonary edema and respiratory failure. If volume is removed to treat the respiratory failure, cardiac output decreases and the kidneys fail. If volume is then added to improve renal perfusion, respiratory failure occurs. There is no obvious solution to this clinical problem.

A second mechanism that can increase the proportion of vascular volume in the pulmonary compartments is an increase in the proportion of the lung in West zones 1 and 2 [37, 39]. Under these conditions alveolar pressure becomes the downstream pressure for pulmonary flow instead of left atrial pressure. When this happens, pulmonary venous pressure rises one-to-one with an increase in alveolar pressure and provides a considerable load for the right ventricle. The increased pressure is also downstream of pulmonary capillaries and will increase pulmonary capillary filtration.

Another factor that is not often taken into account when considering the distribution of volume and maintenance of normal pressure gradients for venous return is the size of the heart. Typical limits of diastolic volumes of normal ventricles are in the range of 120 to 140 ml, which between the two ventricles can account for as much as 20 % of stressed volume.
In someone who has very dilated ventricles this could be an even higher proportion of total blood volume, although presumably most of it is unstressed. Excess accumulation of the volume in the heart is prevented by the characteristics of the passive filling curves of the ventricles, which become very steep at a value appropriate for a normal stroke volume. If the capacity of the ventricles is too large for the volume reserves of the body, accumulation of volume in the ventricles could take up a significant proportion of systemic venous volume, which would decrease MSFP and limit cardiac output. Krogh model So far in this review the systemic vascular compliance has been lumped into one region with a large compliance. Early in the last century August Krogh [5] indicated that if the vasculature consists of an area with a high compliance in parallel with an area with much lower compliance, the fractional distribution of flow between these two regions affects venous return (Fig. 7). This can be understood by the previous discussion on time constants of drainage. If both regions have the same venous resistance, the region with the larger compliance will have a longer time constant of drainage because τ is determined by the product of resistance and compliance [3, 4]. As indicated above, a sink has less compliance than a bathtub, so these two parallel compliances can be considered as a bathtub and sink in parallel. Because of its smaller surface area, a smaller amount of volume is needed to raise the height of water in the sink and to increase the outflow; a sink thus has a fast time constant compared with a bathtub, which has a large surface area and requires a large amount of volume to go through the outflow resistance to change the height of water. Permutt and colleagues demonstrated that the splanchnic vasculature has a time constant of drainage in the range of 20 to 24 s [26, 40, 41], whereas that of the peripheral vasculature bed has a time constant of 4 to 6 s. In this two-compartment model the venous return equation can be written as follows: The two-compartment Krogh model. In this model the systemic circulation has a large compliant region (such as the splanchnic vasculature) in parallel with a low-compliance region (equivalent of the peripheral vasculature). A shift in the fractional flow to the low-compliance region by decreasing the arterial resistance (Ra-p) into this region decreases venous resistance (upward shift of the slope in b compared with a) but does not change MCFP. Rv-s is splanchnic venous resistance, Rv-p is peripheral venous resistance. (Used with permission from reference [5]) $$ VR=\frac{\gamma -Pra\times {C}_T}{\;\left({F}_{sp}\times {\tau}_{sp}\right)+\left({F}_{per}\times {\tau}_{per}\right)\;} $$ where CT is total compliance, Fsp is the fraction of flow to the splanchnic region, and Fper is the fraction of flow to the peripheral regions. τsp is the product of the splanchnic resistance and compliance and τper is the product of the peripheral compliance and resistance. Note that if Fper were 0 and Fsp were 1, this equation is the same as Eq. 3. The vasculature could be subdivided into smaller regions and, by specifying their drainage characteristics, the analysis could be refined, but use of two groups is sufficient to understand the broad consequences. I will thus simply refer to the splanchnic bed as the high-compliance region and the peripheral region, which is composed primarily of muscle, as the low-compliance regions. 
These also could be considered as the slow and fast time constant beds, respectively. Under resting conditions approximately 40 % of blood flow goes to the splanchnic bed and 60 % goes to the peripheral vasculature. Distribution of flow between the two is determined by regional arterial resistances. Importantly, it is the fraction of total cardiac output to each region that is important and not actual flow. This is because total blood volume is constant. A good example of how this functions is the defense against a fall in blood pressure by the baroreceptors [9]. We analyzed this in an animal model in which we controlled the baroreceptor pressures in what is called an open-loop model. This means that the sensor for the perturbation, in this case blood pressure, is separated from the response. We isolated the outflow from the splanchnic and peripheral beds and controlled cardiac output with a pump. This allowed us to assess the venous resistances, compliances, regional flows, and stressed volumes in all compartments. As expected, a decrease in baroreceptor pressure from 200 to 80 mmHg produced a marked rise in arterial resistance. However, the increase in arterial resistance was greater in the peripheral region than the splanchnic region, which redistributed blood flow to the slow time constant splanchnic bed. This makes sense from an evolutionary point of view for it would protect delicate abdominal structures [23], but the consequence of increasing the fraction of flow to the slow time constant bed is a decrease in cardiac output and this would decrease blood pressure further. Other adaptations are thus necessary. The sympathetic output contracted venous vessels in the splanchnic bed and produced a decrease of the venous capacitance of this region, which increased the stressed volume by approximately 10 ml/kg. There was no change in capacitance in peripheral beds for they have a smaller volume reservoir. Strikingly, at the same time that arterial resistance to the splanchnic bed increased, the venous resistance draining this bed decreased. This decreased the time constant of drainage from the splanchnic bed. All together, these adaptations would have increased cardiac output by 110 %. In this two-compartment model the time constants of flow into and out of each region become important because they affect the distribution of flow and emptying of the regions with changes in heart rate and blood pressure and this adds a further complexity to the analysis. These factors are likely important for the responses to vasopressors and inotropic agents. The change in capacitance was an important part of the reflex response but this only can occur if there is adequate unstressed volume to recruit. Unfortunately, unstressed volume cannot be measured in an intact person and thus clinicians must think about the potential unstressed reserves. Implications of the physiology for clinical interventions Volume therapy The existence of unstressed volume and the ability to adjust stressed volume by changes in capacitance introduces a role for volume infusions that is not simply to increase cardiac output but rather to ensure reserves. Patients who have had volume losses and whose MSFP is being supported by a reduction in vascular capacitance by recruitment of their unstressed reserves no longer can use this mechanism to rapidly adjust stressed volume as needed. 
Volume infusion could potentially restore these reserves without producing much change in cardiac output, although there might be some decrease in heart rate because of a decrease in sympathetic activity. However, the response to the next stress would be very different. This argues for infusion of some volume before major surgical interventions and in initial trauma management in subjects who might have reduced volume reserves based on their "volume history". Note that this would likely not produce much change in any measurable hemodynamic values, including ventilation-induced variations in arterial pressure or stroke volume.

Although use of volume boluses to increase cardiac output is one of the most common clinical interventions in patients in shock, increasing preload is not the major way that the body normally produces large changes in cardiac output [42]. Under normal conditions the Frank–Starling mechanism primarily provides fine adjustment to cardiac function by making sure that the same volume that fills the ventricles on each beat leaves them. For example, during peak aerobic exercise there is very little change in right atrial pressure with the very large increases in cardiac output [43]. The increase in cardiac output occurs by increases in heart rate, contractility, and peripheral mechanical adaptations that allow more venous return.

This is not to say that fluids should not be used for resuscitation of patients in shock. Use of fluids can avoid the need for central venous cannulation and the need for drug infusion but it is necessary to understand the limits of what fluids can do. As already discussed, stressed volume is normally only in the range of 20 to 22 ml/kg. In a 70 kg man with a stressed volume of 1400 ml and an MCFP of 10 mmHg, an infusion of a fluid that increased stressed volume by 1 L would increase MCFP to 17 mmHg and likely produce a significant increase in vascular leak. More than likely the liter of fluid would not stay in the vasculature and the effect would be transient. Furthermore, the important MSFP would not rise as much as MCFP because the volume would be distributed in all compartments. If there is left ventricular dysfunction or non-West zone 3 conditions in the lungs, a greater than normal proportion of the fluid would be distributed to the pulmonary compartments [37]. When the two-compartment Krogh model is considered, the effect of the volume becomes even more complicated. The effect of the increase in stressed volume will be much greater if a greater fraction of the blood flow goes to the fast time constant muscle bed because this region is much less compliant and the increase in volume produces a greater increase in the regional elastic recoil pressure. However, this also means that the equivalent of MSFP in the muscle region will be even higher than the estimate given above and be an even greater force for capillary filtration.

Adrenergic drugs

The study on the effect of the baroreceptor response to hypotension discussed above [9] gives insight into the response of the peripheral circulation to infusions of norepinephrine. Besides the expected increase in systemic arterial resistance, norepinephrine constricts the splanchnic venous compartment and increases stressed volume. It potentially dilates or at least does not constrict the venous drainage from the splanchnic bed. This is because activation of alpha-adrenergic receptors constricts the venous drainage of the splanchnic vasculature whereas beta-adrenergic receptors dilate it [41].
Through its beta-adrenergic activity norepinephrine increases cardiac function and has little effect on pulmonary vessels [44]. The increase in precapillary resistance vessels and the decrease in right atrial pressure with the improvement in cardiac function could potentially decrease capillary filtration and thus could reduce edema formation. However, it is possible that very high levels of norepinephrine compromise the normal distribution of flow and compromise organ function. Epinephrine likely works in the same way [26] except that it generally produces a greater increase in heart rate, which could produce problems by shortening diastole and producing unexpected changes in distribution of flow due to the limits of time constants in different vascular beds, on both the arterial and venous side. The response of the circuit to phenylephrine is very different from the response to norepinephrine because it only has alpha-adrenergic activity [45, 46]. Although phenylephrine can constrict the splanchnic capacitance vessels, it increases the venous resistance draining this region and the net effect on venous return depends upon how much volume is recruited versus how much the downstream resistance increases. In most critically ill patients capacitance reserves are reduced so that the net effect with phenylephrine is decreased splanchnic drainage and decreased venous return. Phenylephrine also does not increase cardiac function so that cardiac output most often falls [47]. Besides increasing cardiac contractility, an effective inotrope must also alter circuit properties to increase venous return. The circuit properties of dobutamine have not been well studied but we observed in dogs (unpublished data) that dobutamine decreased the resistance draining splanchnic vessels as observed with isoproterenol [41] and also increased MSFP. The latter likely occurred because the d-isomer of dobutamine has alpha-adrenergic activity and thus could constrict capacitance vessels. These circuit adaptations thus combine with dobutamine's inotropic effects on the heart to increase cardiac output. This also would predict that the effect of dobutamine would be best when there are adequate reserves in unstressed volume to be recruited so that volume infusions could potentially augment its action. The circulation starts with a potential energy which is due to the stretching of the elastic walls of all its components by the volume it contains even when there is no blood flow. This volume and the consequent potential energy is constant under steady state conditions but can be changed by recruitment of unstressed volume into stressed volume through what is called a decrease in capacitance, reabsorption of interstitial fluid into the vascular compartment, ingestion and absorption of fluid through the gut, or parenteral fluid administration by health care personnel. A basic principle is that the heart cannot put out more than what it gets back from the large reservoir of volume in the systemic circulation. The time-varying elastance of the ventricles transiently raises the pressure in the volume they contain. This creates a volume and a pressure wave that are dependent upon the downstream resistance. This pulse wave progresses through the vasculature from compliant region to compliant region at a rate dependent upon the resistance and compliance of each region. 
Limits to flow around the system are produced by the diastolic volume capacity of the ventricles, the flow limitation to venous drainage that occurs when the pressure inside the floppy veins is less than the pressure outside the vessels, and the time limits imposed by time constants of drainage on the movement of the volume wave due to the fixed cycle time determined by heart rate. These mechanical factors can have a much larger impact than actual changes in blood volume. Finally, clinical responses to treatments can only be in the realm of the physiologically possible. Patterson SW, Starling EH. On the mechanical factors which determine the output of the ventricles. J Physiol. 1914;48(5):357–79. Guyton AC, Lindsey AW, Bernathy B, Richardson T. Venous return at various right atrial pressures and the normal venous return curve. Am J Physiol. 1957;189(3):609–15. Caldini P, Permutt S, Waddell JA, Riley RL. Effect of epinephrine on pressure, flow, and volume relationships in the systemic circulation of dogs. Circ Res. 1974;34:606–23. Permutt S, Caldini P. Regulation of cardiac output by the circuit: venous return. In: Boan J, Noordergraaf A, Raines J, editors. Cardiovascular system dynamics. 1. Cambridge, MA and London, England: MIT Press; 1978. p. 465–79. Krogh A. The regulation of the supply of blood to the right heart. Skand Arch Physiol. 1912;27:227–48. Magder S, Scharf SM. Respiratory-circulatory interactions in health and disease. 2nd ed. New York: Marcel Dekker, Inc; 2001. p. 93–112. Magder S. An approach to hemodynamic monitoring: Guyton at the bedside. Crit Care. 2012;16:236–43. Permutt S, Wise RA. The control of cardiac output through coupling of heart and blood vessels. In: Yin FCP, editor. Ventricular/vascular coupling. New York: Springer; 1987. p. 159–79. Deschamps A, Magder S. Baroreflex control of regional capacitance and blood flow distribution with or without alpha adrenergic blockade. J Appl Physiol. 1992;263:H1755–63. Deschamps A, Magder S. Effects of heat stress on vascular capacitance. Am J Physiol. 1994;266:H2122–9. Lindsey AW, Banahan BF, Cannon RH, Guyton AC. Pulmonary blood volume of the dog and its changes in acute heart failure. Am J Physiol. 1957;190(1):45–8. Guyton AC, Polizo D, Armstrong GG. Mean circulatory filling pressure measured immediately after cessation of heart pumping. Am J Physiol. 1954;179(2):261–7. Levy MN. The cardiac and vascular factors that determine systemic blood flow. Circ Res. 1979;44(6):739–47. Brengelmann GL. Counterpoint: the classical Guyton view that mean systemic pressure, right atrial pressure, and venous resistance govern venous return is not correct. J Appl Physiol. 2006;101(5):1525–6. Astrand PO, Rodahl K. Physiological bases of exercise. Textbook of work physiology. Montreal: McGraw-Hill; 1977. Magder S, De Varennes B, Ralley F. Clinical death and the measurement of stressed vascular volume in humans. Am Rev Respir Dis. 1994;149(4):A1064. Drees J, Rothe C. Reflex venoconstriction and capacity vessel pressure-volume relationships in dogs. Circ Res. 1974;34:360–73. Rothe CF, Drees JA. Vascular capacitance and fluid shifts in dogs during prolonged hemorrhagic hypotension. Circ Res. 1976;38(5):347–56. Rothe CF. Reflex control of veins and vascular capacitance. Physiology Rev. 1983;63(4):1281–95. Robinson VJB, Smiseth OA, Scott-Douglas NW, Smith ER, Tyberg JV, Manyari DE. Assessment of the splanchnic vascular capacity and capacitance using quantitative equilibrium blood-pool scintigraphy. J Nucl Med. 1990;31:154–9. Samar RE, Coleman TG. 
Measurement of mean circulatory filling pressure and vascular capacitance in the rat. Am J Physiol. 1978;234(1):H94–100. Rothe C. Venous system: physiology of the capacitance vessels. In: Shepherd JT, Abboud FM, editors. Handbook of physiology. The cardiovascular system. Section 2. III. Bethesda: American Physiological Society; 1983. p. 397–452. Hainsworth R, Karim F, McGregor KH, Rankin AJ. Effects of stimulation of aortic chemoreceptors on abdominal vascular resistance and capacitance in anaesthetized dogs. J Physiol. 1983;334:421–31. Hainsworth R, Karim F, McGregor KH, Wood LM. Hind-limb vascular-capacitance responses in anaesthetized dogs. J Physiol. 1983;337:417–28. Appleton C, Olajos M, Morkin E, Goldman S. Alpha-1 adrenergic control of the venous circulation in intact dogs. J Pharmacol Exp Ther. 1985;233:729–34. Mitzner W, Goldberg H. Effects of epinephrine on resistive and compliant properties of the canine vasculature. J Appl Physiol. 1975;39(2):272–80. Greenway CV, Dettman R, Burczynski F, Sitar S. Effects of circulating catecholamines on hepatic blood volume in anesthetized cats. Am J Physiol. 1986;250:H992–7. Brooksby GA, Donald DE. Dynamic changes in splanchnic blood flow and blood volume in dogs during activation of sympathetic nerves. Circ Res. 1971;24(3):227. Guyton AC. Determination of cardiac output by equating venous return curves with cardiac response curves. Physiol Rev. 1955;35:123–9. Permutt S, Riley S. Hemodynamics of collapsible vessels with tone: the vascular waterfall. J Appl Physiol. 1963;18(5):924–32. Guyton AC, Adkins LH. Quantitative aspects of the collapse factor in relation to venous return. Am J Physiol. 1954;177(3):523–7. Fessler HE, Brower RG, Wise RA, Permutt S. Effects of positive end-expiratory pressure on the canine venous return curve. Am Rev Respir Dis. 1992;146(1):4–10. Holt JP, Rhode EA, Kines H. Pericardial and ventricular pressure. Circ Res. 1960;VIII:1171–80. Magder S. Starling resistor versus compliance. Which explains the zero-flow pressure of a dynamic arterial pressure-flow relation? Circ Res. 1990;67:209–20. O'Rourke MF. The arterial pulse in health and disease. Am Heart J. 1971;82(5):687–702. Mitzner W, Goldberg H, Lichtenstein S. Effect of thoracic blood volume changes on steady state cardiac output. Circ Res. 1976;38(4):255–61. Magder S, Guerard B. Heart-lung interactions and pulmonary buffering: lessons from a computational modeling study. Respir Physiol Neurobiol. 2012;182(2-3):60–70. Magder S, Veerassamy S, Bates JH. A further analysis of why pulmonary venous pressure rises after the onset of LV dysfunction. J Appl Physiol. 2009;106(1):81–90. Permutt S, Bromberger-Barnea B, Bane HN. Alveolar pressure, pulmonary venous pressure, and the vascular waterfall. Med Thoracalis. 1962;19:239–60. Stene JK, Burns B, Permutt S, Caldini P, Shanoff M. Increased cardiac output following occlusion of the descending thoracic aorta in dogs. Am J Physiol. 1982;243:R152–8. Green JF. Mechanism of action of isoproterenol on venous return. Am J Physiol. 1977;232(2):H152–6. Berlin DA, Bakker J. Starling curves and central venous pressure. Crit Care. 2015;19:55. Notarius CF, Levy RD, Tully A, Fitchett D, Magder S. Cardiac vs. non-cardiac limits to exercise following heart transplantation. Am Heart J. 1998;135:339–48. Datta P, Magder S. Hemodynamic response to norepinephrine with and without inhibition of nitric oxide synthase in porcine endotoxemia. Am J Resp Crit Care Med. 1999;160(6):1987–93. Thiele RH, Nemergut EC, Lynch III C. 
The physiologic implications of isolated alpha 1 adrenergic stimulation. Anesth Analg. 2011;113(2):284–96. Magder S. Phenylephrine and tangible bias. Anesth Analg. 2011;113(2):211–3. Thiele RH, Nemergut EC, Lynch III C. The clinical implications of isolated alpha 1 adrenergic stimulation. Anesth Analg. 2011;113(2):297–304.

Competing interests: The author declares that he has no competing interests.

Author information: Department of Critical Care, McGill University Health Centre, 1001 Decarie Blvd., Montreal, Quebec, H4A 3J1, Canada. Correspondence to S. Magder.

An erratum to this article is available at http://dx.doi.org/10.1186/s13054-016-1571-3.

Citation: Magder, S. Volume and its relationship to cardiac output and venous return. Crit Care 20, 271 (2016). https://doi.org/10.1186/s13054-016-1438-7

Keywords: Venous return; Circulatory filling pressure; Mean systemic filling pressure; Stressed volume; Physiology of the circulation
is cd2+ paramagnetic or diamagnetic Zn [Ar] 3d^10 4s^2 paired electron diamagnetic species. Paramagnetic Material. Indeed, all substances are diamagnetic: the strong external magnetic field speeds up or slows down the electrons orbiting in atoms in such a way as to oppose the action of the external field in accordance with Lenz's law. Write orbital diagrams for each ion and indicate whether the ion is diamagnetic or paramagnetic. I'll do both. In diamagnetic materials, there are no atomic dipoles due to the pairing between the electrons. Answer Save. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Will that make the molecule more, or less stable? Lv 6. And let's look at some elements. The 4s subshell contains 1 electron (in one 4s orbital) and the 3d subshell contains 5 electrons, one in each 3d orbital. The resultant magnetic momentum in an atom of the diamagnetic material is zero. Ferromagnetism is the basic mechanism by which certain materials (such as iron) form permanent magnets, or are attracted to magnets. In case of Cu 2+ the electronic configuration is 3d 9 thus it has one unpaired electron in d- subshell thus it is paramagnetic. orbital diagram for Au is 1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p6 4d10 5s2 5p6 4f14 5d10 6s1. Question = Is AsH3 polar or nonpolar ? And we can figure out if atoms or ions are paramagnetic or diamagnetic by writing electron configurations. Au+c. An atom is considered paramagnetic if even one orbital has a net spin. What are some examples of electron configurations? Paramagnetic and diamagnetic. Zr2+ FREE Expert Solution. https://en.wikipedia.org/wiki/Diamagnetism. Ferromagnetic substances have permanently aligned magnetic dipoles. Identify whether the ions are diamagnetic or paramagnetic.a. Unlock answer. A diamagnetic material has a permeability less than that of a vacuum. Question: Is H2SO3 an ionic or Molecular bond ? Polar &... Is Cadmium ion ( cd2+ ) a Paramagnetic or Diamagnetic ? #1s^2 2s^2 2p^6 3s^2 3p^6 3d^10 4s^2 4p^6 \mathbf(4d^10 5s^2)#. I'll tell you the Paramagnetic or Diamagnetic list below. Nb^-3 [Kr] 5s^2 paired electron diamagnetic species. Also, if you were to singly ionize #Cd_2#, which orbital would you boot electrons out of first? In the presence of the external field the sample moves toward the strong field, attaching itself to the pointed pole. Answer = C4H10 ( BUTANE ) is Polar What is polar and non-polar? Answer = C2Cl4 ( Tetrachloroethylene ) is nonPolar What is polar and non-polar? Paramagnetism is a form of magnetism whereby certain materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. You have 1 free answer left. Cd2+b. Gardenia. write orbital diagram for each ion and determine if the ion is diamagnetic or paramagnetic. Unlock answer. Problem 78 Hard Difficulty. Question = Is CLO3- polar or nonpolar ? Overall, the condensed electron configuration of the neutral molecule would likely be: #color(blue)([KK_sigma][KK_pi] (sigma_(4d_(z^2)))^2 (pi_(4d_(xz)))^2 (pi_(4d_(yz)))^2 (delta_(4d_(x^2-y^2)))^2 (delta_(4d_(xy)))^2 (sigma_(5s))^2 (delta_(4d_(xy))^"*")^2 (delta_(4d_(x^2-y^2))^"*")^2 (pi_(4d_(xz))^"*")^2 (pi_(4d_(yz))^"*")^2 (sigma_(4d_(z^2))^"*")^2 (sigma_(5s)^"*")^2)#. If you want to quickly find the word you want to search, use Ctrl + F, then type the word you want to search. 
How many unpaired electrons are present in … When an external magnetic field is applied, dipoles are induced in the diamagnetic materials in such a way that induced dipoles opposes the extern… What is the electron configuration of copper? Paramagnetic materials and ferromagnetic materials can be separated using induced roll magnetic separators by changing the strength of the magnetic field used in the separator. Cadmium is a diamagnetic metal. 5p _u__ _u__ _u__ the 5p orbital is not full or paired so paramagnetic. If you want to quickly find the word you want to search, use Ctrl + F, then type the word you want to search. A material is called diamagnetic if the value of $\mu = 0$. If you want to quickly find the word you want to search, use Ctrl + F, then type the word you want to search. Why? danieltlee1Lv 7. If is is C^2+ it would be 1s^2 2s^2 and e⁻s are paired: diamagnetic. That means the electron configuration of #"Cd"^(2+)# is. where #KK_sigma# stands in for the core #sigma# interactions and #KK_pi# stands in for the core #pi# interactions. If diamagnetic gas is introduced between pole pieces of magnet, it spreads at a right angle to the magnetic field. Indicate whether boron atoms are paramagnetic or diamagnetic. Answered on 21 Aug 2018. Au + C. Mo3+ d. Zr2+ word_media_image1.png. For Sb: [Kr] 3s2 4d10 5p3 . Paramagnetic means that the element or ion has one or more unpaired electrons in its outer shell. Cadmium ion (cd2+) is Diamagnetic I'll tell you the Paramagnetic or Diamagnetic list below. Where the 1's and l's denote electrons and spin direction. My book says that calcium is paramagnetic but I cannot understand why since it doesn't have any unpaired electrons as paramagnetic materials need to have. How do electron configurations in the same group compare? The opposite order is expected for the antibonding MOs. We're being asked to classify each ion as diamagnetic or paramagnetic. Paramagnetic: Gold: Diamagnetic: Zirconium: Paramagnetic: Mercury: Diamagnetic: Up to date, curated data provided by Mathematica's ElementData function from Wolfram Research, Inc. Click here to buy a book, photographic periodic table poster, card deck, or 3D print based on the images you see here! Thus paramagnetic materials are permanent magnets by their intrinsic property, but the magnetic moment is too weak to detect it physically. Answer (a): The O atom has 2s 2 2p 4 as the electron configuration. Therefore, O has 2 unpaired electrons. https://en.wikipedia.org/wiki/Ferromagnetism, www.periodictable.com/Properties/A/MagneticType.html. Diamagnetic … a. Cd2+ b. A) Ti4⁺ B) O C) Ar D) All of the above are paramagnetic. Au+c. Paramagnetic Paramagnetism is a form of magnetism whereby certain materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. So, this is paramagnetic. Favourite answer Para magnetic (Those who have unpaired electrons in last shell) Diamagnetic (U know those who have paired) C2+ & C2- are paramagnetic! Question: Write Orbital Diagram For Cd2+ Determine If The Ion Is Diamagnetic Or Paramagnetic… dear student i think u are about to write Au+ , taking that into consideration i am answering the question . There are no singly-occupied orbitals. With #d# orbitals, if you noticed the notation on the MO diagram, we introduced a #\mathbf(delta)# bond, which is when orbitals overlap via four lobes sidelong, rather than two lobes sidelong (#pi#) or one lobe head-on (#sigma#). 
See all questions in Electron Configuration. 3 Answers. The original atom is also paramagnetic. Hence the number of unpaired electrons, i.e. Question = Is ClF polar or nonpolar ? #1s^2 2s^2 2p^6 3s^2 3p^6 3d^10 4s^2 4p^6 \mathbf(4d^10)#. However, notice that since the #5s# atomic orbital is highest in energy to begin with, there is an extra #"8.85 eV"# (#"853.89 kJ/mol"#) positive difference in energy with respect to the #4d# atomic orbitals, so I expect the #sigma_(5s)# bonding MO to still be higher in energy than either the #pi# or #delta# MOs, rather than lower. What is the electron configuration for a sodium ion? Question = Is SCN- polar or nonpolar ? Bolded are the valence electrons and their orbitals. Question: Write Orbital Diagram For Cd2+ Determine If The Ion Is Diamagnetic Or Paramagnetic… Indicate whether Fe 2 + ions are paramagnetic or diamagnetic. magnetism: Magnetic properties of matter. Hence, Cd+ 2, with one less electron from a fully-occupied orbital, is paramagnetic. Question = Is C4H10 polar or nonpolar ? Therefore, we expected the #delta# bonding MOs to be less stabilized and therefore higher in energy than the #pi# bonding MOs, which are higher in energy than the #sigma# bonding MOs. Answered on 21 Aug 2018. Diamagnetic Molecules :: If the Electronic configuration of a molecule has only paired or spin paired electrons, then that molecule is said to be Diamagnetic. Get unlimited access to 3.6 million step-by-step answers. 11631 views 9 years ago. This is relatively weaker than paramagnetism of materials with unpaired magnetic moments anchored on fixed atoms. Answer = TeCl4 ( Tellurium tetrachloride ) is Polar What is polar and non-polar? Answer = AsH3 ( Arsine ) is Polar What is polar and non-polar? So let's look at a shortened version of the periodic table. OC2735140. Read More on This Topic. Problem 78 Hard Difficulty. E) None of the above are paramagnetic. A paramagnetic electron is an unpaired electron. Recall that for: • diamagnetic: all of the electrons are paired • paramagnetic: at least one electron is unpaired. Write orbital diagrams for each ion and determine if the ion is diamagnetic or paramagnetic. a. Cd2+ b. a. Cd2+ b. Au+ c. Mo3+ d. Zr2+ Paramagnetic: Gold: Diamagnetic: Zirconium: Paramagnetic: Mercury: Diamagnetic: Up to date, curated data provided by Mathematica's ElementData function from Wolfram Research, Inc. Click here to buy a book, photographic periodic table poster, card deck, or 3D print based on the images you see here! In simple terms, diamagnetic materials are substances that are usually repelled by a magnetic field. All electrons are paired, making the neutral molecule #"Cd"_2# diamagnetic. Materials may be classified as ferromagnetic, paramagnetic, or diamagnetic based on their response to an external magnetic field. Nb3+ has … https://en.wikipedia.org/wiki/Paramagnetism. Mo3+d. O [He] 2s^2 2p^4 O^-2 [He] 2s^2 2p^2 unpaired electron paramagenetic species. the excess number of electrons in the same spin, determines the magnetic moment. Question = Is ICl3 polar or nonpolar ? Indicate whether F-ions are paramagnetic or diamagnetic. Ca 1s^2 2s^2 2p^6 3s^2 3p^6 4s^2----> [Ar] 4s^2 paired electron diamagnetic species. CHALLENGE: Why does #Cd_2# only exist hypothetically? Question = Is TeCl4 polar or nonpolar ? Write orbital diagrams for each ion and determine if the ion is diamagnetic or paramagnetic. Cd2+. (In general, the more lobes you have that overlap, the less the overlap between each lobe is.). Choose the paramagnetic species from below. 
The orbital diagram for Cd2+ is: Since there are no unpaired electrons, Cd2+ is diamagnetic. What is Paramagnetic and Diamagnetic ? Cd2+b. An atom could have ten diamagnetic electrons, but as long as it also has one paramagnetic electron, it is still considered a paramagnetic atom. With paramagnetic, the magnetic fields get attracted to the metal whereas with diamagnetic the fields are repulsed, which is what you would need for your UFO hypothesis to be plausible. Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. manganese) are paramagnetic, and will be attracted to the field. In order of bond strength, #\mathbf(delta < pi < sigma)# bonds due to the extent of orbital overlap in each. around the world. The metals I mentioned above all have both diamagnetic and paramagnetic contributions. a. Cd2+ b. Au+ c. Mo3+ d. Zr2+ Au + C. Mo3+ d. Zr2+ word_media_image1.png. paramagnetic or diamagnetic, respectively. In the off chance you mean a hypothetical, gas-phase diatomic cation... here is a molecular orbital diagram I constructed for the homonuclear diatomic molecule #"Cd"_2#, including its #n = 4# and #n = 5# orbitals (except #5p#). Write Orbital Diagram For Cd2+ Determine If The Ion Is Diamagnetic Or Paramagnetic. Cadmium is a diamagnetic metal. Write orbital diagrams for each ion and indicate whether the ion is diamagnetic or paramagnetic. Mo3+d. a. Cd 2+ b.Au + c.Mo 3+ d. Zr 2+ Provide your answer: example :paramagnetic, diamagnetic, etc., accordingly to … List Paramagnetic or Diamagnetic In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. If the substance is placed in a magnetic field, the direction of its induced magnetism will be opposite to that of iron (a ferromagnetic material), producing a repulsive force. Answer = SCN- (Thiocyanate) is Polar What is polar and non-polar? A material aligning itself with the applied field is called paramagnetic material. An atom could have ten diamagnetic electrons, but as long as it also has one paramagnetic electron, it is still considered a paramagnetic atom. Answer = ClF (Chlorine monofluoride) is Polar What is polar and non-polar? Therefore, any ionizations removing the first two electrons will remove from the #5s# orbital without ambiguity. Cu^1+ and Cu^2+ are also paramagnetic. Get unlimited access to 3.6 million step-by-step answers. The key difference between paramagnetic and diamagnetic materials is that the paramagnetic materials get attracted to external magnetic fields whereas the diamagnetic materials repel from the magnetic fields.. Materials tend to show weak magnetic properties in the presence of an external magnetic field.Some materials get attracted to the external magnetic field, whereas some … Favorite Answer. And let's figure out whether those elements are para- or diamagnetic… Question = Is CF2Cl2 polar or nonpolar ? However, some possible impurities (e.g. Therefore, this cationic transition metal is diamagnetic. Ferromagnetism is a large effect, often greater than that of the applied magnetic field, that persists even in the absence of an applied magnetic field. How do the electron configurations of transition metals differ from those of other elements? please help. 
A few further points usually come up alongside this question. Whether a species is paramagnetic or diamagnetic is decided entirely by its electron pairing: a molecule whose electronic configuration contains only spin-paired electrons is diamagnetic, while a molecule with any unpaired electrons is paramagnetic. Electrons in an atom also revolve around the nucleus and therefore possess orbital angular momentum in addition to spin. Paramagnetic substances are weakly magnetized in the same direction as an applied field and have a relative permeability slightly greater than that of free space (for air, mu_r = 1.0000004); diamagnetic substances acquire an induced magnetization opposite to the applied field, giving a small negative susceptibility and a permeability slightly less than that of a vacuum, so they are weakly repelled and can even be separated from other materials on that basis. Ferromagnetism is a much larger effect, often greater than the applied field itself, and it persists even in the absence of an applied magnetic field.

In the metals mentioned above there are both diamagnetic and paramagnetic contributions. The paramagnetism of the conduction electrons comes from their spin and is strongly limited by the Pauli exclusion principle ("Pauli paramagnetism"), which makes it much weaker than the paramagnetism of materials whose unpaired magnetic moments are anchored on fixed atoms. It is even possible for the diamagnetic and paramagnetic contributions to cancel almost exactly, although that would be an unusual coincidence. Pure gold, for example, is weakly diamagnetic overall, while possible impurities (e.g. manganese) are paramagnetic and are attracted to the field.

Worked configurations: copper is 3d10 4s1, so Cu+ is 3d10 with a completely filled d shell and is diamagnetic, whereas Cu2+ (3d9) has one unpaired electron and is paramagnetic. Neutral cadmium has every orbital full (5s ↑↓, 4d ↑↓ ↑↓ ↑↓ ↑↓ ↑↓) and is diamagnetic, as is the Cd2+ ion discussed above. Calcium, [Ar] 4s2, has no unpaired electrons and would be classed as diamagnetic by the counting rule; if a textbook nevertheless lists calcium metal as paramagnetic, the reason is the Pauli paramagnetism of its conduction electrons described above, not any unpaired electron in the free atom. For the hypothetical diatomic Cd2, all of the electrons are paired, so the neutral molecule is diamagnetic; hence Cd2+, with one electron removed from a fully-occupied orbital, is paramagnetic.
Worked set: write orbital diagrams for each ion and determine whether it is diamagnetic or paramagnetic: (a) Cd2+ (b) Au+ (c) Mo3+ (d) Zr2+.
• Cd2+: [Kr] 4d10. All ten 4d electrons are paired, so Cd2+ is diamagnetic.
• Au+: neutral gold is 1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p6 4d10 5s2 5p6 4f14 5d10 6s1; removing the single 6s electron leaves a completely filled configuration, so Au+ is diamagnetic (the isolated Au atom, with its lone 6s electron, counts as paramagnetic, although bulk gold is weakly diamagnetic, as noted above).
• Mo3+: [Kr] 4d3. The d subshell is neither full nor paired, with three unpaired electrons, so Mo3+ is paramagnetic.
• Zr2+: [Kr] 4d2, two unpaired electrons, paramagnetic.
Other examples from the same exercise set: Cu2+ is 3d9, with one unpaired electron in the d subshell, and is paramagnetic; Sb is [Kr] 4d10 5s2 5p3 with three unpaired 5p electrons, paramagnetic; Fe2+ ([Ar] 3d6) is paramagnetic.

In a diamagnetic material the individual electronic moments cancel, so the resultant magnetic moment of the atom is zero (mu = 0). When such a substance, hydrogen gas for instance, is introduced between the pole pieces of a magnet it spreads at right angles to the field, whereas a paramagnetic sample is drawn toward the stronger field near the poles. Ferromagnetic materials (such as iron) form permanent magnets, or are strongly attracted to magnets, by their intrinsic ordering rather than through an induced response. The practical rule remains the one used throughout: draw the orbital diagram, count the unpaired electrons, and if there are none the species is diamagnetic; the resulting magnetic moment may be too weak to detect physically, but the classification follows from the pairing alone. A code sketch applying this rule to the four ions above follows.
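Here is the usage side of the earlier counting sketch, applied to the ions in the worked set. The d-electron counts are the ones derived above; the helper function is the illustrative one introduced earlier and is repeated here so the snippet runs on its own.

```python
def unpaired_electrons(n_electrons, n_orbitals):
    """Hund's-rule count of unpaired electrons in one subshell."""
    return n_electrons if n_electrons <= n_orbitals else 2 * n_orbitals - n_electrons

# d-electron counts for the ions worked out above (a d subshell has 5 orbitals)
ions = {"Cd2+": 10, "Au+": 10, "Mo3+": 3, "Zr2+": 2, "Cu2+": 9, "Fe2+": 6}
for ion, n_d in ions.items():
    n = unpaired_electrons(n_d, 5)
    print(f"{ion:5s}: {n} unpaired -> {'paramagnetic' if n else 'diamagnetic'}")
```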
Simple Beam Theory Assumptions

This section reviews the background to simple beam theory and the assumptions on which it rests. The supposition made in the theory of simple bending is as follows:
• The material of the beam is stressed within the elastic limit and obeys Hooke's law.
• The value of Young's modulus is the same in tension and compression.
• The beam is initially straight, and all the longitudinal filaments bend into circular arcs with a common centre of curvature.
• Transverse sections remain plane, which implies that there is zero shear strain and no shear stress associated with the bending deformation.
In the notation of J. N. Reddy, Euler-Bernoulli beam theory (EBT) is based on the assumptions of (1) straightness, (2) inextensibility and (3) normality of the cross-section to the deformed axis. The so-called simple beam theory assumptions can also be examined against elasticity theory to yield beam geometry ratios that result in minimum error; for a short, deep member both a 2D plane-stress elasticity analysis and a thin elastic beam analysis may be performed and compared. (The GATE 2019 Mechanical syllabus covers these assumptions for the ordinary bending of beams.)

Using the formula from the simple theory of bending, the maximum working stress follows directly from the applied bending moment and the section properties; a sketch of that calculation is given below.

In steel construction the related design idealisation is the "simple connection". The four principal forms of simple connection are:
• double angle web cleats
• flexible end-plates (header plates)
• fin plates
• column splices
To comply with the design assumptions, simple connections must allow adequate end rotation of the beam as it takes up its simply supported deflected profile, and must accommodate practical lack of fit. Frame structures combine beams, columns and slabs to resist lateral and gravity loads.
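As a minimal illustration of the working-stress calculation above, the sketch below applies the simple bending formula sigma = M*c/I to a rectangular section. The section dimensions and the bending moment are assumed values chosen only for the example; they do not come from the text.

```python
# Maximum bending stress from the simple theory of bending (sketch; assumed values).
b = 0.100   # section width, m (assumed)
h = 0.200   # section depth, m (assumed)
M = 25.0e3  # bending moment, N*m (assumed)

I = b * h**3 / 12          # second moment of area of a rectangle
c = h / 2                  # distance from neutral axis to extreme fibre
sigma_max = M * c / I      # maximum bending stress, Pa

print(f"I = {I:.3e} m^4, sigma_max = {sigma_max/1e6:.1f} MPa")
```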
Kinematically, the common beam theories differ in how the cross-sections are allowed to rotate. In the classical (Euler-Bernoulli) theory the transverse sections which are plane before bending remain plane after bending and stay perpendicular to the beam axis; the out-of-plane shear strain is assumed negligible, gamma_xz = du/dz + dw/dx = 0. In Timoshenko-type theories the plane sections remain plane but are not necessarily perpendicular to the centreline of the beam. The classical assumption is generally valid for bending beams unless the beam experiences significant shear or torsional stresses relative to the bending (axial) stresses. These assumptions are covered in the "Assumption in the Theory of Pure Bending" lecture of the Stresses in Beams chapter of Strength of Materials. The analogous assumption for plates is the basis of the Kirchhoff-Love (classical) plate theory, treated in texts such as "Thin Plates and Shells: Theory, Analysis, and Applications". For built-up or composite cross-sections, sectional stiffnesses can first be computed in detail and then used within the framework of an Euler-Bernoulli beam theory based on these far simpler kinematic assumptions; the generalised beam theory extends the idea further (see Leach, P., 1989, "The generalised beam theory with finite difference applications", PhD thesis, University of Salford).

The same assumptions carry over to other support conditions, for example a beam on an elastic foundation, where both the beam and the soil are taken to be linearly elastic, homogeneous and isotropic. They also underlie the familiar textbook results, such as the deflection equation for a simple cantilever with a point load at the end. As a worked example, the text considers a beam of length 6 m with a Young's modulus of 120 GPa loaded by a force of magnitude P = 10 kN; a sketch of a deflection calculation for such a beam, treated as a cantilever loaded at its tip, is given below. (One of the cited experimental studies uses a typical test beam 30 mm long and 5 mm wide.) Beam theory methods also offer a potentially simple way of modelling the deformation of the adherends in bonded joints and of predicting failure loads using linear elastic fracture mechanics, with a linear beam or beam-column theory describing the relative deformation of the adherends. Finally, for dynamics, the natural frequency of a simple harmonic oscillator is f = sqrt(k/m)/(2*pi), so once an equivalent stiffness k has been extracted from beam theory the fundamental frequency of a beam-plus-mass system can be estimated directly.
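Below is a minimal sketch of that calculation under the stated assumption that the 6 m beam acts as a cantilever with the 10 kN load at its free end; the rectangular cross-section and the tip mass used for the frequency estimate are assumed values, not taken from the text.

```python
import math

# Cantilever with an end point load (sketch; values partly assumed).
E = 120e9      # Young's modulus, Pa (from the example in the text)
L = 6.0        # length, m (from the example in the text)
P = 10e3       # end load, N (from the example in the text)
b, h = 0.100, 0.200          # assumed rectangular cross-section, m
I = b * h**3 / 12            # second moment of area

delta = P * L**3 / (3 * E * I)     # tip deflection of a cantilever
k = 3 * E * I / L**3               # equivalent tip stiffness

m = 500.0                               # assumed lumped tip mass, kg
f = math.sqrt(k / m) / (2 * math.pi)    # f = sqrt(k/m) / (2*pi)

print(f"tip deflection = {delta*1000:.1f} mm, k = {k/1e3:.1f} kN/m, f = {f:.2f} Hz")
```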
The development of the curved beam theory by Saint-Venant (1843) and later of the thin-walled beam theory by Vlasov (1965) marked the birth of specialised beam models beyond the elementary one; the accuracy of a structural analysis is always dependent upon the choice of a particular method and its assumptions. Euler-Bernoulli beam theory (also known as engineer's beam theory or classical beam theory) is a simplification of the linear theory of elasticity which provides a means of calculating the load-carrying and deflection characteristics of beams. Geometrically it presumes a slender member: one dimension (the axial direction) is considerably larger than the other two. The standard interview question states the central kinematic idea: in simple bending theory, plane sections before bending remain plane after bending, which again implies zero transverse shear strain and no shear stress from bending.

The load-carrying capacity of a beam is directly proportional to its geometric moment of inertia; for a rectangular section of thickness t and depth d, Iz = t*d^3/12, so capacity grows with the cube of the depth (a sketch illustrating this is given below). For economy in steel design, the minimum connection adequate for the load is selected. A typical sizing exercise of the same kind: assuming the maximum normal stress the material can sustain is sigma_max = 100 ksi, determine the required beam height and the number of laminae required. Deep beams fall outside these assumptions: they are structural elements loaded as simple beams in which a significant amount of the load is carried to the supports by a compression force combining the load and the reaction, so plane-section theory no longer applies. Likewise, in Timoshenko theory plane sections perpendicular to the neutral axis before deformation remain plane but are not necessarily perpendicular to the neutral axis after deformation. The simple ideal beam in axial compression resting on an elastic foundation also models practical applications in which the subgrade acts one-way. Typical learning objectives for a unit on the subject are to review simple beam theory, generalise it to three dimensions and general cross-sections, consider the combined effects of bending, shear and torsion, and study the case of shell beams.
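The proportionality between capacity and second moment of area is easy to check numerically. The sketch below compares doubling the width with doubling the depth of an assumed rectangular section; the dimensions are illustrative only.

```python
# Load-carrying capacity scales with I = t*d^3/12 for a rectangle (sketch).

def I_rect(t, d):
    """Second moment of area of a t-wide, d-deep rectangle about its centroid."""
    return t * d**3 / 12

t, d = 0.05, 0.20                      # assumed base section, m
print(f"base section:    I = {I_rect(t, d):.3e} m^4")
print(f"double width:    I = {I_rect(2*t, d):.3e} m^4  (x2)")
print(f"double depth:    I = {I_rect(t, 2*d):.3e} m^4  (x8)")
```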
Note that the non-dimensionalised maximum deflection of a beam is independent of the Young's modulus; the stiffness only sets the scale. This concept is important in structural engineering because it can be used to calculate where, and by how much, bending will occur when forces are applied, starting from the load due to the dead weight of the structure itself. Structural analysis in this sense is the determination of the effects of loads on physical structures and their components, and the structures subjected to it include anything that must withstand loads: buildings, bridges, aircraft and ships. In simple terms, the axial deformation of the longitudinal fibres is what is called the bending of a beam. (Historically, da Vinci lacked Hooke's law and calculus to complete the theory, whereas Galileo was held back by an incorrect assumption he made.)

For simple beams in elastic bending, the assumptions used to calculate the bending stress are:
• the beam is initially straight and has a constant cross-section;
• the material is homogeneous, with a uniform composition and the same mechanical properties in all directions (isotropic);
• the stress-strain relationship is linear and elastic, so Hooke's law applies;
• Young's modulus is the same in tension as in compression.
In practice, allowable stresses are often calibrated experimentally: wood beams with knots and imperfections, for example, are subjected to bending tests that provide the value of the maximum allowable bending stress, although this stress may not actually exist anywhere in the beam. Classical beam theories of this kind also form the basis of most analytical developments for beam dynamics, and finite element formulations of the same problem go back at least to Mason and Herrmann [6]. A plate, by contrast, is defined as a structure with two of its dimensions (length and width) considerably larger than the third (thickness).

For composite beams made of two materials, the usual device is to produce an equivalent (transformed) section based on one of the materials, for example an equivalent section based on aluminium, and then apply the ordinary bending theory to that section; a sketch is given below. The finite element method for non-linear versions of these problems admits many ways (infinitely many, in principle) of defining the basis functions.
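The sketch below illustrates the transformed-section idea for a hypothetical two-material rectangular beam (an aluminium body with a steel cover plate). All dimensions and material properties are assumed for the example. The steel is converted to an equivalent width of aluminium using the modular ratio n = E_steel/E_alu before the composite second moment of area is computed.

```python
# Transformed (equivalent) section for a two-material beam (sketch; assumed data).
E_alu, E_steel = 70e9, 200e9
n = E_steel / E_alu                      # modular ratio

# geometry, m: aluminium rectangle with a steel cover plate on top (assumed)
b_a, h_a = 0.100, 0.150                  # aluminium part
b_s, h_s = 0.100, 0.010                  # steel plate

# transform the steel into equivalent aluminium by scaling its width by n
parts = [
    # (width, height, centroid height measured from the bottom)
    (b_a,     h_a, h_a / 2),             # aluminium
    (n * b_s, h_s, h_a + h_s / 2),       # transformed steel
]

A_tot = sum(b * h for b, h, _ in parts)
y_bar = sum(b * h * y for b, h, y in parts) / A_tot          # composite centroid
I_tr = sum(b * h**3 / 12 + b * h * (y - y_bar)**2            # parallel-axis theorem
           for b, h, y in parts)

print(f"n = {n:.2f}, centroid = {y_bar*1000:.1f} mm, I_transformed = {I_tr:.3e} m^4")
# Bending stresses: sigma_alu = M*y/I_tr and sigma_steel = n*M*y/I_tr at the same fibre.
```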
As mentioned above, the Galerkin method utilises the same set of functions for the basis functions and the test functions, which is what distinguishes it from more general weighted-residual schemes. Applied to beam bending it follows the pattern used throughout the theory of thin plates and shells: choose kinematically admissible shape functions for the load function and the deflection, insert them into the weak form of the equilibrium equations, and solve for the generalised coordinates; some examples are given next, and a one-term numerical example is sketched below. Tables of area moments of inertia, deflections and volumes of beams provide the closed-form results against which such approximations can be checked, and the determination of the static quantities for a single-span beam (reactions, shear forces and bending moments) follows from the load function and the support conditions. The same ingredients reappear in the membrane theory of shells, where the practical issues of representing the loading arise in exactly the same way. The assumptions in simple bending theory stated earlier, a homogeneous and isotropic material with transverse sections that remain plane, carry over to all of these settings unchanged.
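As a concrete illustration of the Galerkin/Ritz idea, the sketch below approximates the deflection of a simply supported beam under a uniform load with a single sine shape function and compares the result with the exact midspan value 5qL^4/(384EI). All numerical values are assumed for the example.

```python
import math

# One-term Ritz/Galerkin approximation, w(x) = a*sin(pi*x/L), for a simply
# supported beam under a uniform load q (sketch; values assumed).
E, I = 200e9, 8.0e-6      # Pa, m^4 (assumed)
L, q = 4.0, 5.0e3         # m, N/m (assumed)

# Minimising the potential energy with this shape gives a = 4*q*L^4/(pi^5*E*I).
a = 4 * q * L**4 / (math.pi**5 * E * I)
w_exact = 5 * q * L**4 / (384 * E * I)

print(f"one-term Ritz midspan deflection: {a*1000:.3f} mm")
print(f"exact (Euler-Bernoulli):          {w_exact*1000:.3f} mm")
# The single sine term is already within about 0.4% of the exact value.
```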
Unlike the Euler-Bernoulli beam equation, some related structural equations (for membranes and cables, for example) contain no term like the area moment of inertia to take care of the geometrical considerations, and each case has to be built up from first principles with the assumptions appropriate to the arrangement. Within beam theory itself there are of course more complex models, such as the Timoshenko beam theory, but the Bernoulli-Euler assumptions typically provide answers that are "good enough" for design in most cases; a natural check is to ask how FEA predictions for these models compare with Euler-Bernoulli beam theory and with a fully three-dimensional FEA analysis, and a numerical comparison of the two beam theories is sketched below. Extensions include beam theories for laminated composites with application to torsion problems, variational approaches to beam theory, and formulations based on the principle of virtual powers, which in continuum mechanics is the standard tool for obtaining the balance laws of simple and second-gradient materials (recall that, for small strains, B is approximately I + 2*epsilon).

Restating the theory of simple bending assumptions in this context: the material of the beam is homogeneous and isotropic, so E is constant in all directions; Young's modulus is constant in compression and tension, which simplifies the analysis; transverse sections which are plane before bending remain plane after bending; and the material is linearly elastic, so that Hooke's law applies. The last two assumptions are exactly the kinematic requirements of the Euler-Bernoulli theory, and the simple single-variable shear-deformation theories proposed in the literature are, in turn, special cases of Timoshenko beam theory. The limits of these geometrical assumptions matter in practice: classic beam theory is clearly invalidated for irregular long-bone morphology in biomechanics, and the consequences show up as a discrepancy between FEA and beam-theory predictions. In ship structures the corresponding simple assumption is that the hull-girder bending stress is constant across horizontal decks and varies linearly in the sides. Simple beam models also appear in control problems such as the classic ball-and-beam system.
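The sketch below makes the Euler-Bernoulli versus Timoshenko comparison concrete for a simply supported beam with a central point load, using the standard midspan deflections delta_EB = PL^3/(48EI) and delta_T = PL^3/(48EI) + PL/(4*kappa*G*A). All numerical values, including the shear correction factor kappa = 5/6 for a rectangle, are assumed for the example.

```python
# Euler-Bernoulli vs Timoshenko midspan deflection (sketch; assumed values).
E, nu = 200e9, 0.3
G = E / (2 * (1 + nu))            # shear modulus
b, h = 0.10, 0.30                 # rectangular section, m (assumed)
A, I = b * h, b * h**3 / 12
kappa = 5.0 / 6.0                 # shear correction factor for a rectangle
P = 50e3                          # central point load, N (assumed)

for L in (1.0, 3.0, 9.0):         # span lengths, m
    d_bend = P * L**3 / (48 * E * I)          # Euler-Bernoulli part
    d_shear = P * L / (4 * kappa * G * A)     # extra Timoshenko shear term
    print(f"L = {L:3.0f} m: EB = {d_bend*1e3:7.3f} mm, "
          f"Timoshenko = {(d_bend + d_shear)*1e3:7.3f} mm, "
          f"shear share = {d_shear/(d_bend + d_shear):.1%}")
```

For the short span the shear term is a sizeable fraction of the total, while for the slender span it is negligible, which is exactly why the elementary theory is usually "good enough".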
Pure bending occurs only under a constant bending moment M, since the shear force V, which equals dM/dx, then has to be zero; no torsional or axial loads are present. Simple bending, or pure bending, is thus defined as the development of stresses along the length of the beam due to the action of a bending moment exclusively, and when a beam is subjected to such a loading system, or to a force couple acting in a plane passing through its axis, the beam deforms. In a simple bending theory one of the assumptions is that plane sections before bending remain plane after bending; the follow-up question of what this assumption means has the answer that the bending strain is proportional to the distance from the neutral axis (and, for a linearly elastic material, so is the bending stress), not that stress or strain is uniform over the cross-section. The beam is made of homogeneous material with a longitudinal plane of symmetry, and the resulting theory involves only one fourth-order governing differential equation. Simple beam theory predicts the existence of a neutral axis, or neutral plane, at the centroid of the beam cross-section.

The classical syllabus follows from here: derivation of the bending equation M/I = f/y = E/R, location of the neutral axis, determination of bending stresses, section modulus of rectangular and circular sections (solid and hollow) and of I, T, angle and channel sections, and the design of simple beam sections. Beam loads may be uniform, varying along the length, single point loads, or combinations of these, and theories of beams with variable flexural rigidity extend the same ideas. Closely related topics treated in the same framework include deriving the expression for the critical load of columns with one end fixed and the other end free, and computing the moment of inertia of a circular section; a sketch of both is given below. Practical extensions include cracked beams carrying a test mass (used for vibration-based damage detection) and load assumptions for the fatigue design of structures.
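The sketch below collects two of the small calculations mentioned above: the second moment of area of a solid circular section, I = pi*d^4/64, and Euler's critical buckling load P_cr = pi^2*E*I/(K*L)^2 for the standard idealised end conditions (K = 2.0 for one end fixed and the other free). The column dimensions and material are assumed values.

```python
import math

# Euler buckling of a solid circular column (sketch; assumed values).
E = 200e9                 # Pa (assumed steel)
d = 0.060                 # diameter, m (assumed)
L = 3.0                   # column length, m (assumed)

I = math.pi * d**4 / 64   # second moment of area of a circle

# Effective-length factors K for the idealised end conditions.
end_conditions = {
    "pinned-pinned": 1.0,
    "fixed-free":    2.0,     # one end fixed, the other end free
    "fixed-pinned":  0.699,
    "fixed-fixed":   0.5,
}

for name, K in end_conditions.items():
    P_cr = math.pi**2 * E * I / (K * L) ** 2
    print(f"{name:13s}: P_cr = {P_cr/1e3:8.1f} kN")
```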
In plane models of beams a further assumption is often added: the width direction (y) is taken to be stress-free, so the plane-stress assumption is used. The finite element kinematic assumptions for beam bending can then be stated compactly:
• the deflection w is independent of z, so all points of a cross-section undergo the same deflection in the z-direction, w = w(x);
• planar cross-sections remain planar: a cross-section undergoes a deflection w and a rotation, so that u = psi(x)*z;
• cross-sections that are orthogonal to the beam axis remain orthogonal (in the classical theory).
Strains, displacements and rotations are all assumed small. Based on assumptions of this kind for the displacement field, and exploiting the principle of minimum potential energy, beam and triangular finite elements are constructed; these kinematic assumptions are standard across all three of the common beam theories, although each theory also carries a few assumptions of its own. The same machinery covers the analysis of a beam on an elastic foundation and the use of Fourier-series representations of the loading. The loads themselves are the usual structural loads: dead loads (for which the mass per metre, kg/m, of the beam itself must be recorded), live loads, wind loads and seismic loads.

Because the curvature of the beam is very small, the triangles bcd and Oba in the classical derivation are treated as similar triangles, the member does not curve out of its own plane, and the length of the beam should be at least about 20 times its thickness for the slender-beam assumptions to hold. Deflections can then be found by double integration of the elastic curve, by Macaulay's method, by superposition, or by the area-moment and conjugate-beam methods (see the sketch below for a numerical double integration); a calibration plot of load against deflection should be a straight line through the origin, with instrumental or observational errors among the possible sources of scatter in the lab. Timoshenko-type theories additionally consider rotary inertia, shear and extensional effects, which matter for thick beams and higher-frequency dynamics; single-variable shear-deformation theories for isotropic rectangular beams, and novel applications of GBT kinematic assumptions, continue to be developed, with comparisons between finite-element and beam-theory results the standard way of assessing them. A beam may also carry a pre-stress compressive load from manufacturing while its ends are kept at a fixed distance in space, which couples bending with axial force in the same way as a beam-column.
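The sketch below carries out the double integration of the elastic curve numerically for a simply supported beam under a uniform load and checks the midspan value against the closed form 5wL^4/(384EI). numpy is used for the integration; all input values are assumed.

```python
import numpy as np

# Numerical double integration of the elastic curve, EI*v'' = M(x),
# for a simply supported beam under a uniform load w (sketch; assumed values).
E, I = 200e9, 2.0e-5        # Pa, m^4 (assumed)
L, w = 6.0, 8.0e3           # m, N/m (assumed)

x = np.linspace(0.0, L, 2001)
M = w * x * (L - x) / 2.0            # bending moment of a simply supported beam
kappa = M / (E * I)                  # curvature = M / (EI)

def cumtrapz(y, x):
    """Cumulative trapezoidal integral, starting at zero."""
    dx = np.diff(x)
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

theta = cumtrapz(kappa, x)           # slope, still missing the constant C1
v = cumtrapz(theta, x)               # deflection with v(0) = 0, missing C1*x
C1 = -v[-1] / L                      # choose C1 so that v(L) = 0
v = v + C1 * x

# With this sign convention the computed v is negative (downward); report magnitude.
v_num = np.abs(v).max()
v_exact = 5 * w * L**4 / (384 * E * I)
print(f"numerical max deflection: {v_num*1000:.3f} mm")
print(f"closed form 5wL^4/384EI:  {v_exact*1000:.3f} mm")
```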
A cracked simple beam carrying a test mass (Figure 2 in the source paper) is a common benchmark for vibration-based damage detection, and the two basic assumptions of the supporting theory are that the deformations remain small and that the material law stays linear; together with the rigid-body degrees of freedom, these assumptions on the motion of the beam and the material law make it possible to formulate statements that are not easily accessible for a more general configuration. Compared with the Euler-Bernoulli kinematics described above, plane sections perpendicular to the neutral axis before deformation remain plane in Timoshenko-type models but are not necessarily perpendicular to the neutral axis after deformation, and the recently proposed single-variable shear-deformation theories for isotropic beams of rectangular cross-section are refinements of the same idea. The bending stress distribution in curved beams is hyperbolic rather than linear, which is one of the cases where the simple straight-beam assumptions break down. Nevertheless, for the bulk of design work the Euler-Bernoulli and Timoshenko beam theories are still widely used; the more elaborate models add rotary inertia, shear and extensional effects where they are needed, and generalised beam theory (GBT) kinematic assumptions support novel applications for thin-walled members. For all of them the basic restrictions stay the same: the response depends on the beam and the manner of loading, the member should be slender (length at least about 20 times the thickness), and the deflections should remain small.
The approximate "portal" analysis of building frames rests on a similar set of simplifying assumptions: points of inflection (zero bending moment) are assumed to occur at the midpoints of all members, and exterior columns are assumed to take half as much shear as interior columns. Introducing assumptions of this kind into a statically indeterminate structure, equal in number to its degree of indeterminacy while keeping the structure in stable equilibrium, enables all moments and shears throughout the building frame to be computed by the laws of equilibrium alone. For the calculation of the internal forces and moments at any section cut of a beam, a consistent sign convention is necessary. In this setting the Generalized Beam Theory (GBT) is used as the main tool for analysing the mechanics of thin-walled beams, while ordinary simple beams with constant properties (symmetrical, constant cross-section) may be modelled with CBAR or CBEAM elements in a finite element code.

When a real system is approximated by a simply supported beam for vibration analysis, the usual modelling assumptions for the undamped case are that the mass m of the whole system is lumped at the middle of the beam and that deformation occurs without energy loss, so that in theory the mass rebounds forever; a sketch of the resulting frequency estimate is given below.
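A minimal sketch of that lumped-mass estimate: the midspan stiffness of a simply supported beam is k = 48EI/L^3, and the undamped natural frequency follows from f = sqrt(k/m)/(2*pi). The beam properties and the lumped mass are assumed values.

```python
import math

# Lumped-mass natural frequency of a simply supported beam (sketch; assumed values).
E, I = 200e9, 1.0e-5     # Pa, m^4 (assumed)
L = 5.0                  # span, m (assumed)
m = 800.0                # mass lumped at midspan, kg (assumed)

k = 48 * E * I / L**3               # midspan stiffness of a simply supported beam
f = math.sqrt(k / m) / (2 * math.pi)
print(f"k = {k/1e6:.2f} MN/m, f = {f:.2f} Hz")
```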
Timoshenko beam theory (TBT) provides shear deformation and rotatory inertia corrections to the classic Euler-Bernoulli theory [1]. One line of work proceeds with an alternate solution by simply expanding the problem in powers of the load parameter q, without relying on any additional physical assumptions. In either case the corrections become negligible as the beam becomes more slender, which is why the elementary theory remains the default for routine design.
First, a brief overview of the general shell theory is given, along with a simple example: the theory is organised around its definitions and assumptions, the equilibrium equations, the strain-displacement relationships and the membrane theory, exactly as for beams but in two dimensions, and the thermal stress analysis of beams, plates and shells follows the same pattern. Several failure and stability criteria sit on top of the beam and shell stress analysis: the concept of von Mises stress arises from the distortion-energy failure theory, another criterion is based upon a combination of the Mohr theory of strength and the Coulomb equation, and buckling appears when compressing a long, thin object such as a yardstick, which produces no bending or displacement until the compressive force reaches a certain critical amount. In reinforced concrete design the corresponding simplifying assumption, in simple words, is that there is no concrete in the tension zone, because concrete is weak in tension.
A simple beam calculator solves statically determinate and indeterminate beams and provides the support reactions, shear force, bending moment, deflection and stress diagrams; the first step is to organise the model (in a spreadsheet or a short script) and determine the static quantities for the single span, and a sketch of such a calculation is given below. While this approach works well for simple cross-sections made of homogeneous material, inaccurate predictions may result for realistic configurations such as thin-walled sections or sections comprising several materials. The Timoshenko-Ehrenfest model, by contrast, takes into account shear deformation and rotational bending effects, making it suitable for describing the behaviour of thick beams, sandwich composite beams, or beams subject to high-frequency excitation when the wavelength approaches the thickness of the beam. For the elementary calculator the assumptions are the familiar ones: the material of the beam is perfectly homogeneous (of the same kind throughout) and isotropic (the same elastic properties in all directions), and a cross-section perpendicular to the undeformed beam remains perpendicular to the deflection curve when deformed. Two useful checks on any output are that the shear force of a simply supported beam carrying a central point load changes sign at its midpoint, and that the computed load-deflection relation is linear. Still, Strength of Materials of this elementary kind can be applied to a large number of different problems encountered in practice.
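A minimal sketch of such a calculator for the determinate case: a simply supported beam with a single point load, returning the reactions and sampling the shear force and bending moment along the span. The span, load and load position are assumed example values.

```python
import numpy as np

# Simply supported beam with a point load P at distance a from the left support
# (sketch of a tiny "beam calculator"; values assumed).
L, P, a = 8.0, 20e3, 3.0        # m, N, m

R1 = P * (L - a) / L            # left reaction
R2 = P * a / L                  # right reaction

x = np.linspace(0.0, L, 9)
V = np.where(x < a, R1, R1 - P)                      # shear force diagram
M = np.where(x < a, R1 * x, R1 * x - P * (x - a))    # bending moment diagram

print(f"R1 = {R1/1e3:.1f} kN, R2 = {R2/1e3:.1f} kN, Mmax = {R1*a/1e3:.1f} kN*m")
for xi, Vi, Mi in zip(x, V, M):
    print(f"x = {xi:4.1f} m   V = {Vi/1e3:7.1f} kN   M = {Mi/1e3:7.1f} kN*m")
```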
A Module For Teaching Fundamentals Of Finite Element Theory. The effect of Shear stresses is neglected. The first part of the sentence states the independent variable and the second part states the dependent. For example, imagine a very simple test of the hypothesis that substance A stops bacterial growth. The theory of grammatical opposition is very popular in grammar studies, because it lies at the base of all established grammatical categories. This works out to $\frac{9. Abstract:We present a rigorous, but mathematically relatively simple and elegant, theory of first-order spatio-temporal distortions, that is, couplings between spatial (or spatial-frequency) and temporal (or frequency) coordinates, of Gaussian pulses and beams. 7 2 Beams Simple Beam Theory, Derivation of Euler Bernoulli and Bending Stress Formulae YouTube Euler Bernoulli Equation for Beam Theory - Finite Element Methods - Duration: 13:47. This means we can apply statistics to our solutions. a : reflecting a transaction (such as a merger) or other development as if it had been or will be in effect for a past or future period a pro forma balance sheet. He based his theory on the fact that it unclear as to how humans acquired the ability to speak a language. Reactants are in constant equilibrium with the transition state structure. This step has a key assumption built into it that there are no lateral. Thermal Stress Analysis of Beams, Plates and Shells. First, a brief overview of CS is given, along with a simple example. The Timoshenko-Ehrenfest beam theory was developed by Stephen Timoshenko and Paul Ehrenfest early in the 20th century. The constraints put on the geometry would form the assumptions: 1. Deformation occurs without energy loss, so in theory the mass rebounds forever. This post deals describe the model of the ball and beam. But what does the word "strength" mean? "Strength" can have many meanings, so let us take a closer look at what is meant by the strength of a material. Therefore, both a 2D plane stress elasticity analysis and a thin elastic beam analysis will be performed. Some possible sources of errors in the lab includes instrumental or observational errors. The transverse sections which are plane before bending, remain plane after bending also. Euler-Bernoulli beam theory (also known as engineer's beam theory or classical beam theory) is a simplification of the linear theory of elasticity and provides a means of calculating the load-carrying and deflection characteristics of beams. 2 The First Method for Finding beta. – Beam equations in local coordinates. The fundamental aim of IC analysis is to segment a set of lexical units into two maximally independent sequences or ICs thus revealing the. The material of the beam is isotropic and homogeneous and follows Hooke's law and has the same value of Young's Modulus in tension and compression. Simple beams in elastic and plastic bending are treated in Sections 1. Ann-Louise T. Physical Properties of Gaussian Beams. Both classic beam theory and FEA allow the biomechanical behaviour of long bones to be. 1: State assumptions made in establishing a specific mathematical model ƒ AC 1. What is a Beam?. The shear force of a simply supported beam carrying a central point load changes sign at its midpoint. Derive the expression for columns with both ends fixed. Galileo Galilei is often credited with the first published theory of the strength of beams in bending, but with the discovery of "The…. Professor A. The maximum deflection lies at. 
Classical beam theory, (a simplification of the linear theory of elasticity) can be used to calculate the load-carrying and deflection characteristics of beams2. 1 Simple Beams in Bending. Now we do several simple manipulations that will become second nature. FEMA P-751, NEHRP Recommended Provisions: Design Examples 5-6 5. Positivist theories aim to replicate the methods of the natural sciences by analysing the impact of material forces. The highly non-linearterms are includedinthe coordinatetransformation of the displacement components. It differs from the typical brazing operation in that no capillary action occurs. It covers the case for small deflections of a beam that are subjected to lateral loads only. 2 GENERAL CONCEPTS OF ZIGZAG BEAM THEORY. Hence the theory of pure bending states that the amount by which a layer in a beam subjected to pure bending, increases or decreases in length, depends upon the position of the layer w. This step has a key assumption built into it that there are no lateral. It is found that all three theories are close to the elasticity solution for "soft" cores with Ec 1 =E f 1 <0:001. Live loads, 3 Wind loads 4. The camber curve may be in a readily visible color to contrast with the color of an arc of a circle, a. Distillers Active Dry Yeast (DADY)-1lb A specially selected strain of Saccharomyces Cerevisae designed for distiller's use in grain mash fermentations for ethanol. Resultant of the applied loads lies in the plane of symmetry. qx() fx() Strains, displacements, and rotations are small 90. 3, respectively, while the possibility of lateral instability of deep beams in bending is treated in Section 1. What is the beam of a vessel and how is it measured? learn about the origin of how beams are used on ships and how they are used today. These stresses formed in the material due to bending can be calculated using certian assumption, they are. The assumptions in simple bending theory are: The material of the beam is homogeneous and isotropic The transverse section of the beam remains plane before and after bending. Referring to Fig. Hence, it is unclear whether thin beam theory will accurately predict the response of the beam. Methods of deformation and strength for structure in plastic range discussed above can be summirized as follows: a. The most widely adopted is the Euler-Bernoulli beam theory, also called classical beam theory. This post deals describe the model of the ball and beam. Thus, every real is a complex, and sympy adhers to this. Repeat the analysis in the tutorial replacing the end simple-support boundary conditions on nodes located at the beam neutral axis. Previous; Products. Tests on reinforced concrete members have indicated that this assumption is very nearly correct at all stages of loading up to flexural failure. The HBM derives from psychological and behavioral theory with the foundation that the two components of health-related behavior are 1) the desire to avoid illness, or conversely get well if already ill; and, 2) the belief that a specific health action will prevent, or cure, illness. 3, respectively, while the possibility of lateral instability of deep beams in bending is treated in Section 1. However, the characteristics of market systems programmes have specific implications for the way the theory of change is defined and used. Sypersyntactic. Likewise, the second and third assumptions are not fulfilled because the hull girder is not. com Account? 
Simple & engaging videos to help you learn; Unlimited access to 79,000+ lessons The lowest-cost way to earn college credit. In an agricultural market, farmers have to decide how much to produce a year in advance - before they know what the market. The theory of sound symbolism is based on the assumption that separate sounds due to their articulatory. This assumption means that the. The assumption of plane sections remaining plane (Bernoulli's principle) means that strains above and below the neutral axis NA are proportional to the distance from the neutral axis, Fig. This could. We have now placed Twitpic in an archived state. There are 4 types of comparative analysis used in the modern theory of translation: comparing the translation text with its original, comparing several translations of one and the same text prepared by different translators, comparing translations with original texts in the language. Then was discovered diffraction of neutrons, protons, atomic beams and molecular beams. An Analysis of Nonlinear Elastic Deformations for a Homogeneous Beam at Varying Tip Loads and Pitch Angles. These methods usually make use of beam on elastic foundation models in order to describe the deformation of the adherends, from which the strain energy release rates can then be calculated. ■ Kinematic assumption: a plane section originally normal to the centroid remains plane, but in addition also shear deformations occur. 1 Simple Beams in Bending. An H-section beam with unequal flanges is subjected to a vertical load P (Fig. Despite its simplicity, the calculation of the moments of inertia for different objects requires knowledge of the integrals, these To simplify the task, a table was created with inertia calculations for simple geometric shapes: circle, square, cylinder, etc. hooks law applies. Since root may be a floating point number, we repeat above steps while difference between a and b is. 10) Why wasn't the car either locked or put into the garage?. Basic assumptions. it will not curve out-of-its-own-plane as shown in the lower right image within Figure 5. Simple Beam (Japanese: シンプルビーム Simple Beam) is a non-damaging Normal-type move introduced in Generation V. Thus, the equation is valid only for beams that are not stressed beyond the elastic limit. Integer linear programming formulations and greedy algorithms are proposed for solving the discrete frequency assignment problem. The beam properties are assumed to vary continuously from the lower surface to the upper surface of the beam. Combinatorics. ANNA UNIVERSITY CHENNAI :: CHENNAI 600 025 AFFILIATED INSTITUTIONS REGULATIONS – 2008 CURRICULUM AND SYLLABI FROM VI TO VIII SEMESTERS AND E. This model assumes that nations have the. In the present work, a new fractional nonlocal model has been proposed, which has a simple form and can be used in different problems due to the simple form of numerical solutions. In general, both normal and shearing stresses occur. When coupled with the Euler-Bernoulli theory, we can then integrate the expression for bending moment to find the equation for deflection. Resultant of the applied loads lies in the plane of symmetry. The development of beam theory by Euler, who generally modeled beams as elastic lines that resist bending, as well as by several members of the Bernoulli family and by Coulomb, remains among the most immediately useful aspects of solid mechanics. Define slenderness ratio. 
The solutions for these simple beams can be derived by integrating the moment equation or load-deflection equation. The strain is in the reinforcement is equal to the strain in the concrete at the same level. Hence the theory of pure bending states that the amount by which a layer in a beam subjected to pure bending, increases or decreases in length, depends upon the position of the layer w. Structural analysis is the determination of the effects of loads on physical structures and their components. Equation 1. Self-concept is believed to develop as a person grows old. (TNG: "The Ensigns of Command") 1 History 1. Recently I had need to prove the beam deflection equation for a simple cantilever beam with a point load at the end. Once the assumptions have been verified and the calculations are complete, all that remains is to determine whether the results provide sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis. 4 Strain -Displacement Relationships 19 2. Theory of Simple Bending Simple Bending Theory or Theory of Flexure for Initially Straight Beams. The best theory for explaining the subatomic world got its start in 1928 when theorist Paul Dirac combined quantum mechanics with special relativity to explain the behavior of the electron. The sklearn. Divide the H-beam into three positive areas. Matt Boro UK #4 high wind Photo #1 - Driver got out of his car, realized he was driving a Ford, and kicked the door in self-defense. The assumptions made in the Theory of Simple Bending are as follows: The material of the beam that is subjected to bending is homogenous (same composition throughout) and isotropic (same elastic properties in all directions). Assumptions: The constraints put on the geometry would form the assumptions: 1. The Division of Construction is a results driven engineering organization that prides itself on timely project completion. Ø Extensional Ø Flexural & Ø Twisting modes of Deformation. Synthetic Aperture Radar. Note that the non-dimensionalized maximum deflection is independent of the Young's modulus. Theory of Simple Bending Simple Bending Theory or Theory of Flexure for Initially Straight Beams. Design procedures. Namely, the length of the beam should be at least 20 times of the thickness of it. 75 (G-H) | 2. The best theory for explaining the subatomic world got its start in 1928 when theorist Paul Dirac combined quantum mechanics with special relativity to explain the behavior of the electron. It is found that the deflection of the beam changes linearly with the load and as the beam thickness increases, the beam deflection decreases. For details, see Use Assumptions on Symbolic Variables. What is Emotional Intelligence? Research and Studies into the Theory of EQ. Assumptions of Theory X are based on manager's perception of the nature of employees or workers in the workplace the assumptions of Theory X are Generally, there are many controversial opinions regarding Theory X assumptions. e S is the simulated strain. An Analysis of Nonlinear Elastic Deformations for a Homogeneous Beam at Varying Tip Loads and Pitch Angles. nite elements for beam bending me309 - 05/14/09 kinematic assumptions [1]the de ection wis independent of z all points of a cross section undergo the same de ection in z-direction w= w(x) [2] planar cross sections remain planar cross sections undergo a de ection w and a rotation u= (x)z [3]cross sections that are orthogonal to the beam axis remain orthogonal. It is thus a special case of Timoshenko beam theory. 
The deflection δ at some point B of a simply supported beam can be obtained by the following steps Chapter 01 - Simple Stresses. Simple beam bending is often analyzed with the Euler–Bernoulli beam equation. Since SHAP computes Shapley values, all the advantages of Shapley values apply: SHAP has a solid theoretical foundation in game theory. Simply being a good listener can be enough to inspire trust and resolve hurt feelings. Bibliography. Any vibration textbook contains the material necessary; Reference 1 was used as the reference for the material presented herein. Thermo-mechanical vibration analysis of sandwich beams with functionally graded carbon nanotube-reinforced composite face sheets based on a higher-order shear deformation beam theory. Suggested video Simple harmonic motion (SHM) examples and formulas. According to this sound. Theory X and Theory Y explains how your perceptions can affect your management style. Jones was already snoring. Euler-Bernoulli. The simple beam theory can be used to calculate the bending stresses in the transformed section. Torsuyev, for example, wrote that in a phrase a number of words and consequently a number of syllables can be pronounced with a. J Implicit theories of intelligence and of the relationship of intelligence to society perhaps need to be considered more carefully than they have been because they often. 5 Estimating stiffness. Assumptions of elastic theory of torsion. The displacement filed , based on Bernoulli-Euler theory, along the coordinate directions is expressed by ,,, 0, ,(1). The two key thoughts of this beam width are Half Power Beam Width (HPBW) and First Null Beam Width (FNBW). normal stress remains constant in. 1 Degrees of Freedom of a Rigid Body 4. The empiricists believe that the actual experience is the source of ideas. The simple beam theory can be used to calculate the bending stresses in the transformed section. Correct; B. Let be the length of an element of the neutral surface in the undeformed state. These can be decisions, assumptions or predictions, etc. Undeformed Beam. Each assumes that the manager's role is to organize resources, including people, to best benefit the company. Liberalism is important to understand, since the theory is the foundation of belief for those who favor international organizations such as the United Nations in the. Ask students to observe and then explain the changes in terms of particle movement in scenarios such as melting wax or plastic, mothballs (nap hthalene) vanishing in a cupboard and the. Learning Goals. There is not much to say for pros and cons of the algorithm - perhaps there is not. This paper proposes a simple single variable shear deformation theory for an isotropic beam of rectangular cross-section. Repeat the analysis in the tutorial replacing the end simple-support boundary conditions on nodes located at the beam neutral axis. A linear beam theory or beam-column theory has often been used for describing the relative de-formation. Because of the assumptions, a general rule of thumb is that for most configurations, the equations for flexural stress and transverse shear stress are accurate to within. The load factor at collapse is 1. — There is a need for simple and efficient analysis v Euler-bernoulli beam theory. This step has a key assumption built into it that there are no lateral. The fundamental aim of IC analysis is to segment a set of lexical units into two maximally independent sequences or ICs thus revealing the. 
qx() fx() Strains, displacements, and rotations are small 90. The two basic assumptions of the theory are: the deformations remain small the cross sections of the beam under deformation, remain normal to the deflected axis (aka elastic curve). Use the followi. Chart x the information in combination form the dough into a familiar scene Spain can be easily observed KW:car insurance just for rental cars Address will not effectuate a settlement agreement amount was 120e for the rental bill Very bottom of the men KW:tru auto insurance belle glade fl Luckily for my hire car firm bought dacia in romania Citizens insurance agents of transfreight and the.
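As a small illustration of the deflection formulas that follow from Euler-Bernoulli theory, the sketch below evaluates the standard midspan deflection δ_max = PL³/(48EI) for a simply supported beam with a central point load. The numerical values are assumptions chosen for the example, not data from the text.

```python
# Minimal sketch: midspan deflection of a simply supported rectangular beam
# under a central point load, using the Euler-Bernoulli result P*L**3/(48*E*I).
# All numerical values below are illustrative assumptions.

def second_moment_rectangle(b: float, h: float) -> float:
    """Second moment of area of a b x h rectangular section about its neutral axis."""
    return b * h**3 / 12.0

def max_deflection_point_load(P: float, L: float, E: float, I: float) -> float:
    """Midspan deflection of a simply supported beam with a central point load P."""
    return P * L**3 / (48.0 * E * I)

if __name__ == "__main__":
    E = 200e9          # Young's modulus of steel, Pa (assumed)
    b, h = 0.05, 0.10  # section width and depth, m (assumed)
    L = 2.0            # span, m (assumed)
    P = 5e3            # central point load, N (assumed)
    I = second_moment_rectangle(b, h)
    delta = max_deflection_point_load(P, L, E, I)
    print(f"I = {I:.3e} m^4, delta_max = {delta*1e3:.2f} mm")
```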
Zerocoin: Anonymous Distributed E-Cash from Bitcoin

Authors: Ian Miers, Christina Garman, Matthew Green, and Aviel D. Rubin. Published 2013. Web site: Johns Hopkins University Information Security Institute. Download: original.

Abstract: Bitcoin is the first e-cash system to see widespread adoption. While Bitcoin offers the potential for new types of financial interaction, it has significant limitations regarding privacy. Specifically, because the Bitcoin transaction log is completely public, users' privacy is protected only through the use of pseudonyms. In this paper we propose Zerocoin, a cryptographic extension to Bitcoin that augments the protocol to allow for fully anonymous currency transactions. Our system uses standard cryptographic assumptions and does not introduce new trusted parties or otherwise change the security model of Bitcoin. We detail Zerocoin's cryptographic construction, its integration into Bitcoin, and examine its performance both in terms of computation and impact on the Bitcoin protocol.

Introduction

Digital currencies have a long academic pedigree. As of yet, however, no system from the academic literature has seen widespread use. Bitcoin, on the other hand, is a viable digital currency with a market capitalization valued at more than $100 million [1] and between $2 and $5 million USD in transactions a day [2]. Unlike many proposed digital currencies, Bitcoin is fully decentralized and requires no central bank or authority. Instead, its security depends on a distributed architecture and two assumptions: that a majority of its nodes are honest and that a substantive proof-of-work can deter Sybil attacks. As a consequence, Bitcoin requires neither legal mechanisms to detect and punish double spending nor trusted parties to be chosen, monitored, or policed.

This decentralized design is likely responsible for Bitcoin's success, but it comes at a price: all transactions are public and conducted between cryptographically binding pseudonyms. While relatively few academic works have considered the privacy implications of Bitcoin's design [2], [3], the preliminary results are not encouraging. In one example, researchers were able to trace the spending of 25,000 bitcoins that were allegedly stolen in 2011 [3], [4]. Although tracking stolen coins may seem harmless, we note that similar techniques could also be applied to trace sensitive transactions, thus violating users' privacy. Moreover, there is reason to believe that sophisticated results from other domains (e.g., efforts to de-anonymize social network data using network topology [5]) will soon be applied to the Bitcoin transaction graph. Since all Bitcoin transactions are public, anonymous transactions are necessary to avoid tracking by third parties even if we do not wish to provide the absolute anonymity typically associated with e-cash schemes.
On top of such transactions, one could build mechanisms to partially or explicitly identify participants to authorized parties (e.g., law enforcement). However, to limit this information to authorized parties, we must first anonymize the underlying public transactions. The Bitcoin community generally acknowledges the privacy weaknesses of the currency. Unfortunately, the available mitigations are quite limited. The most common recommendation is to employ a laundry service which exchanges different users' bitcoins. Several of these are in commercial operation today [6], [7]. These services, however, have severe limitations: operators can steal funds, track coins, or simply go out of business, taking users' funds with them. Perhaps in recognition of these risks, many services offer short laundering periods, which lead to minimal transaction volumes and hence to limited anonymity. Our contribution. In this paper we describe Zerocoin, a distributed e-cash system that uses cryptographic techniques to break the link between individual Bitcoin transactions without adding trusted parties. To do this, we first define the abstract functionality and security requirements of a new primitive that we call a decentralized e-cash scheme. We next propose a concrete instantiation and prove it secure under standard cryptographic assumptions. Finally, we describe the specific extensions required to integrate our protocol into the Bitcoin system and evaluate the performance of a prototype implementation derived from the original open- source bitcoind client. We are not the first to propose e-cash techniques for solving Bitcoin's privacy problems. However, a common problem with many e-cash protocols is that they rely fundamentally on a trusted currency issuer or "bank," who creates electronic "coins" using a blind signature scheme. One solution (attempted unsuccessfully with Bitcoin [8]) is to simply appoint such a party. Alternatively, one can distribute the responsibility among a quorum of nodes using threshold cryptography. Unfortunately, both of these solutions introduce points of failure and seem inconsistent with the Bitcoin network model, which consists of many untrusted nodes that routinely enter and exit the network. Moreover, the problem of choosing long-term trusted parties, especially in the legal and regulatory grey area Bitcoin operates in, seems like a major impediment to adoption. Zerocoin eliminates the need for such coin issuers by allowing individual Bitcoin clients to generate their own coins — provided that they have sufficient classical bitcoins to do so. Figure 1. Two example block chains. Chain (a) illustrates a normal Bitcoin transaction history, with each transaction linked to a preceding transaction. Chain (b) illustrates a Zerocoin chain. The linkage between mint and spend (dotted line) cannot be determined from the block chain data. Intuition behind our construction. To understand the intuition behind Zerocoin, consider the following "pencil and paper" protocol example. Imagine that all users share access to a physical bulletin board. To mint a zerocoin of fixed denomination $1, a user Alice first generates a random coin serial number S, then commits to S using a secure digital commitment scheme. The resulting commitment is a coin, denoted C, which can only be opened by a random number r to reveal the serial number S. Alice pins C to the public bulletin board, along with $1 of physical currency. 
All users will accept C provided it is correctly structured and carries the correct sum of currency. To redeem her coin C, Alice first scans the bulletin board to obtain the set of valid commitments (C_1, ..., C_N) that have thus far been posted by all users in the system. She next produces a non-interactive zero-knowledge proof π for the following two statements: (1) she knows a C ∈ (C_1, ..., C_N) and (2) she knows a hidden value r such that the commitment C opens to S. In full view of the others, Alice, using a disguise to hide her identity, posts a "spend" transaction containing (S, π). The remaining users verify the proof π and check that S has not previously appeared in any other spend transaction. If these conditions are met, the users allow Alice to collect $1 from any location on the bulletin board; otherwise they reject her transaction and prevent her from collecting the currency.

This simple protocol achieves some important aims. First, Alice's minted coin cannot be linked to her retrieved funds: in order to link the coin C to the serial number S used in her withdrawal, one must either know r or directly know which coin Alice proved knowledge of, neither of which are revealed by the proof. Thus, even if the original dollar bill is recognizably tainted (e.g., it was used in a controversial transaction), it cannot be linked to Alice's new dollar bill. At the same time, if the commitment and zero-knowledge proof are secure, then Alice cannot double-spend any coin without re-using the serial number S and thus being detected by the network participants.

Of course, the above protocol is not workable: bulletin boards are a poor place to store money and critical information. Currency might be stolen or serial numbers removed to allow double spends. More importantly, to conduct this protocol over a network, Alice requires a distributed digital backing currency. The first and most basic contribution of our work is to recognize that Bitcoin answers all of these concerns, providing us with a backing currency, a bulletin board, and a conditional currency redemption mechanism. Indeed, the core of the Bitcoin protocol is the decentralized calculation of a block chain which acts as a trusted, append-only bulletin board that can both store information and process financial transactions. Alice can add her commitments and escrow funds by placing them in the block chain while being assured that strict protocol conditions (and not her colleagues' scruples) determine when her committed funds may be accessed.

Of course, even when integrated with the Bitcoin block chain, the protocol above has another practical challenge. Specifically, it is difficult to efficiently prove that a commitment C is in the set (C_1, ..., C_N). The naive solution is to prove the disjunction (C = C_1) ∨ (C = C_2) ∨ ... ∨ (C = C_N). Unfortunately such "OR proofs" have size O(N), which renders them impractical for all but small values of N. Our second contribution is to solve this problem, producing a new construction with proofs that do not grow linearly as N increases. Rather than specifying an expensive OR proof, we employ a "public" one-way accumulator to reduce the size of this proof.
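To make the commit/reveal step of the pencil-and-paper protocol above concrete, the following toy sketch mints a "coin" as a hash commitment to a random serial number. This is for illustration only: the actual construction in Section IV uses a Pedersen commitment, and in the real protocol Alice never reveals r, she only proves knowledge of it in zero knowledge. All names here are assumptions.

```python
# Toy sketch of the commit/reveal step, using a hash commitment purely for
# illustration (the paper's construction uses a Pedersen commitment instead).
import os, hashlib

def mint_toy_coin():
    S = os.urandom(32)                   # random coin serial number
    r = os.urandom(32)                   # commitment randomness (the coin's trapdoor)
    C = hashlib.sha256(S + r).digest()   # the "coin" pinned to the bulletin board
    return C, (S, r)

def open_toy_coin(C, S, r) -> bool:
    """Check that commitment C opens to serial number S with randomness r."""
    return hashlib.sha256(S + r).digest() == C

C, (S, r) = mint_toy_coin()
assert open_toy_coin(C, S, r)
```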
One-way accumulators [9], [10], [11], [12], [13], first proposed by Benaloh and de Mare [9], allow parties to combine many elements into a constant-sized data structure, while efficiently proving that one specific value is contained within the set. In our construction, the Bitcoin network computes an accumulator A over the commitments (C_1, ..., C_N), along with the appropriate membership witnesses for each item in the set. The spender need only prove knowledge of one such witness. In practice, this can reduce the cost of the spender's proof to O(log N) or even constant size. Our application requires specific properties from the accumulator. With no trusted parties, the accumulator and its associated witnesses must be publicly computable and verifiable (though we are willing to relax this requirement to include a single, trusted setup phase in which parameters are generated). Moreover, the accumulator must bind even the computing party to the values in the set. Lastly, the accumulator must support an efficient non-interactive witness-indistinguishable or zero-knowledge proof of set membership. Fortunately such accumulators do exist. In our concrete proposal of Section IV we use a construction based on the Strong RSA accumulator of Camenisch and Lysyanskaya [11], which is in turn based on an accumulator of Baric and Pfitzmann [10] and Benaloh and de Mare [9].

Outline of this work. The rest of this paper proceeds as follows. In Section II we provide a brief technical overview of the Bitcoin protocol. In Section III we formally define the notion of decentralized e-cash and provide correctness and security requirements for such a system. In Section IV we give a concrete realization of our scheme based on standard cryptographic hardness assumptions including the Discrete Logarithm problem and Strong RSA. Finally, in Sections V, VI, and VII, we describe how we integrate our e-cash construction into the Bitcoin protocol, discuss the security and anonymity provided, and detail experimental results showing that our solution is practical.

Overview of Bitcoin

In this section we provide a short overview of the Bitcoin protocol. For a more detailed explanation, we refer the reader to the original specification of Nakamoto [14] or to the summary of Barber et al. [2].

The Bitcoin network. Bitcoin is a peer-to-peer network of nodes that distribute and record transactions, and clients used to interact with the network. The heart of Bitcoin is the block chain, which serves as an append-only bulletin board maintained in a distributed fashion by the Bitcoin peers. The block chain consists of a series of blocks connected in a hash chain. Every Bitcoin block memorializes a set of transactions that are collected from the Bitcoin broadcast network. Bitcoin peers compete to determine which node will generate the next canonical block. This competition requires each node to solve a proof of work based on identifying specific SHA-256 preimages, specifically a block B such that SHA256(SHA256(B)) = (0^ℓ || {0,1}^(256−ℓ)). The value ℓ is selected by a periodic network vote to ensure that on average a block is created every 10 minutes. When a peer generates a valid solution, a process known as mining, it broadcasts the new block to all nodes in the system.
If the block is valid (i.e., all transactions validate and a valid proof of work links the block to the chain thus far), then the new block is accepted as the head of the block chain. The process then repeats. Bitcoin provides two separate incentives to peers that mine new blocks. First, successfully mining a new block (which requires a non-trivial computational investment) entitles the creator to a reward, currently set at 25 BTC. Second, nodes who mine blocks are entitled to collect transaction fees from every transaction they include. The fee paid by a given transaction is determined by its author (though miners may exclude transactions with insufficient fees or prioritize high-fee transactions).

Bitcoin transactions. A Bitcoin transaction consists of a set of outputs and inputs. Each output is described by the tuple (a, V) where a is the amount, denominated in Satoshi (one bitcoin = 10^8 Satoshi), and V is a specification of who is authorized to spend that output. This specification, denoted scriptPubKey, is given in Bitcoin script, a stack-based non-Turing-complete language similar to Forth.

Figure 2. Example Bitcoin transaction. The output script specifies that the redeeming party provide a public key that hashes to the given value and that the transaction be signed with the corresponding private key.

Transaction inputs are simply a reference to a previous transaction output, as well as a second script, scriptSig, with code and data that when combined with scriptPubKey evaluates to true. Coinbase transactions, which start off every block and pay its creator, do not include a transaction input. To send d bitcoins to Bob, Alice embeds the hash of Bob's ECDSA public key pk_b, the amount d, and some script instructions in scriptPubKey as one output of a transaction whose referenced inputs total at least d bitcoins (see Figure 2). Since any excess input is paid as a transaction fee to the node who includes it in a block, Alice typically adds a second output paying the surplus change back to herself. Once the transaction is broadcast to the network and included in a block, the bitcoins belong to Bob. However, Bob should only consider the coins his once at least five subsequent blocks reference this block. Bob can spend these coins in a transaction by referencing it as an input and including in scriptSig a signature on the claiming transaction under sk_b and the public key pk_b.

Anonymity. Anonymity was not one of the design goals of Bitcoin [3], [14], [15]. Bitcoin provides only pseudonymity through the use of Bitcoin identities (public keys or their hashes), of which a Bitcoin user can generate an unlimited number. Indeed, many Bitcoin clients routinely generate new identities in an effort to preserve the user's privacy. Regardless of Bitcoin's design goals, Bitcoin's user base seems willing to go through considerable effort to maintain their anonymity, including risking their money and paying transaction fees. One illustration of this is the existence of laundries that (for a fee) will mix together different users' funds in the hopes that shuffling makes them difficult to trace [2], [6], [7]. Because such systems require the users to trust the laundry to both (a) not record how the mixing is done and (b) give the users back the money they put in to the pot, use of these systems involves a fair amount of risk.
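The sketch below illustrates the double-SHA256 proof of work described above: a header "solves" the puzzle when its double hash begins with ℓ zero bits. The serialization and difficulty value are illustrative assumptions, not Bitcoin's actual encoding.

```python
# Minimal sketch of the SHA256(SHA256(B)) proof of work (illustrative only).
import hashlib, itertools

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def leading_zero_bits(digest: bytes) -> int:
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def mine(header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce so the double hash has the required leading zero bits."""
    for nonce in itertools.count():
        candidate = header + nonce.to_bytes(8, "big")
        if leading_zero_bits(double_sha256(candidate)) >= difficulty_bits:
            return nonce

nonce = mine(b"example block header", difficulty_bits=16)  # small difficulty for demo
```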
Decentralized e-cash

Our approach to anonymizing the Bitcoin network uses a form of cryptographic e-cash. Since our construction does not require a central coin issuer, we refer to it as a decentralized e-cash scheme. In this section we define the algorithms that make up a decentralized e-cash scheme and describe the correctness and security properties required of such a system.

Notation. Let λ represent an adjustable security parameter, let poly(·) represent some polynomial function, and let ν(·) represent a negligible function. We use 𝒞 to indicate the set of allowable coin values.

Definition 3.1 (Decentralized E-Cash Scheme): A decentralized e-cash scheme consists of a tuple of possibly randomized algorithms (Setup, Mint, Spend, Verify).

- Setup(1^λ) → params. On input a security parameter, output a set of global public parameters params and a description of the set 𝒞.
- Mint(params) → (c, skc). On input parameters params, output a coin c ∈ 𝒞, as well as a trapdoor skc.
- Spend(params, c, skc, R, C) → (π, S). Given params, a coin c, its trapdoor skc, some transaction string R ∈ {0,1}*, and an arbitrary set of coins C, output a coin spend transaction consisting of a proof π and serial number S if c ∈ C ⊆ 𝒞. Otherwise output ⊥.
- Verify(params, π, S, R, C) → {0, 1}. Given params, a proof π, a serial number S, transaction information R, and a set of coins C, output 1 if C ⊆ 𝒞 and (π, S, R) is valid. Otherwise output 0.

We note that the Setup routine may be executed by a trusted party. Since this setup occurs only once and does not produce any corresponding secret values, we believe that this relaxation is acceptable for real-world applications. Some concrete instantiations may use different assumptions. Each coin is generated using a randomized minting algorithm. The serial number S is a unique value released during the spending of a coin and is designed to prevent any user from spending the same coin twice. We will now formalize the correctness and security properties of a decentralized e-cash scheme. Each call to the Spend algorithm can include an arbitrary string R, which is intended to store transaction-specific information (e.g., the identity of a transaction recipient).
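As a reading aid for Definition 3.1, here is a Python interface skeleton for the four algorithms. The bodies are placeholders only; the concrete instantiation appears in Section IV, and the type names below are assumptions, not part of the paper.

```python
# Interface sketch of Definition 3.1 (type signatures only; not the construction).
from dataclasses import dataclass
from typing import Optional, Set, Tuple

@dataclass(frozen=True)
class Params:          # global public parameters output by Setup
    description: str

@dataclass(frozen=True)
class Coin:            # a public coin value c
    value: int

@dataclass(frozen=True)
class CoinSecret:      # the trapdoor skc, here (serial number S, randomness r)
    S: int
    r: int

def setup(security_parameter: int) -> Params: ...
def mint(params: Params) -> Tuple[Coin, CoinSecret]: ...
def spend(params: Params, c: Coin, skc: CoinSecret, R: bytes,
          coins: Set[Coin]) -> Optional[Tuple[bytes, int]]:
    """Return (proof pi, serial number S), or None (i.e. output ⊥) if c is not in coins."""
def verify(params: Params, pi: bytes, S: int, R: bytes, coins: Set[Coin]) -> bool: ...
```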
Correctness. Every decentralized e-cash scheme must satisfy the following correctness requirement. Let params ← Setup(1^λ) and (c, skc) ← Mint(params). Let C ⊆ 𝒞 be any valid set of coins, where |C| ≤ poly(λ), and assign (π, S) ← Spend(params, c, skc, R, C). The scheme is correct if, over all C, R, and random coins used in the above algorithms, the following equality holds with probability 1 − ν(λ):

Verify(params, π, S, R, C ∪ {c}) = 1.

Security. The security of a decentralized e-cash system is defined by the following two games: Anonymity and Balance. We first describe the Anonymity experiment, which ensures that the adversary cannot link a given coin spend transaction (π, S) to the coin associated with it, even when the attacker provides many of the coins used in generating the spend transaction.

Definition 3.2 (Anonymity): A decentralized e-cash scheme Π = (Setup, Mint, Spend, Verify) satisfies the Anonymity requirement if every probabilistic polynomial-time (p.p.t.) adversary A = (A_1, A_2) has negligible advantage in the following experiment.

Anonymity(Π, A, λ):
  params ← Setup(1^λ)
  For i ∈ {0, 1}: (c_i, skc_i) ← Mint(params)
  (C, R, z) ← A_1(params, c_0, c_1); b ← {0, 1}
  (π, S) ← Spend(params, c_b, skc_b, R, C ∪ {c_0, c_1})
  b′ ← A_2(z, π, S)

We define A's advantage in the above game as |Pr[b = b′] − 1/2|.

The Balance property requires more consideration. Intuitively, we wish to ensure that an attacker cannot spend more coins than she mints, even when she has access to coins and spend transactions produced by honest parties. Note that to strengthen our definition, we also capture the property that an attacker might alter valid coins, e.g., by modifying their transaction information string R. Our definition is reminiscent of the "one-more forgery" definition commonly used for blind signatures. We provide the attacker with a collection of valid coins and an oracle O_spend that she may use to spend any of them. Ultimately A must produce m coins and m + 1 valid spend transactions such that no transaction duplicates a serial number or modifies a transaction produced by the honest oracle.

Definition 3.3 (Balance): A decentralized e-cash scheme Π = (Setup, Mint, Spend, Verify) satisfies the Balance property if for all N ≤ poly(λ) every p.p.t. adversary A has negligible advantage in the following experiment.

Balance(Π, A, N, λ):
  params ← Setup(1^λ)
  For i = 1 to N: (c_i, skc_i) ← Mint(params)
  (c′_1, ..., c′_m, S_1, ..., S_m, S_(m+1)) ← A^(O_spend(·,·,·))(params, c_1, ..., c_N)

The oracle O_spend operates as follows: on the j-th query O_spend(c_j, C_j, R_j), the oracle outputs ⊥ if c_j ∉ {c_1, ..., c_N}.
Otherwise it returns (π_j, S_j) ← Spend(params, c_j, skc_j, R_j, C_j) to A and records (S_j, R_j) in the set T. We say that A wins (i.e., she produces more spends than minted coins) if for all s ∈ {S_1, ..., S_m, S_(m+1)}, where s = (π′, S′, R′, C′):

- Verify(params, π′, S′, R′, C′) = 1.
- C′ ⊆ {c_1, ..., c_N, c′_1, ..., c′_m}.
- (S′, R′) ∉ T.
- S′ appears in only one tuple from {S_1, ..., S_m, S_(m+1)}.

We define A's advantage as the probability that A wins the above game.

Decentralized e-cash from strong RSA

In this section we describe a concrete instantiation of a decentralized e-cash scheme. We first define the necessary cryptographic ingredients.

Cryptographic Building Blocks

Zero-knowledge proofs and signatures of knowledge. Our protocols use zero-knowledge proofs that can be instantiated using the technique of Schnorr [16], with extensions due to e.g., [17], [18], [19], [20]. We convert these into non-interactive proofs by applying the Fiat-Shamir heuristic [21]. In the latter case, we refer to the resulting non-interactive proofs as signatures of knowledge as defined in [22]. When referring to these proofs we will use the notation of Camenisch and Stadler [23]. For instance, NIZKPoK{(x, y) : h = g^x ∧ c = g^y} denotes a non-interactive zero-knowledge proof of knowledge of the elements x and y that satisfy both h = g^x and c = g^y. All values not enclosed in ()'s are assumed to be known to the verifier. Similarly, the extension ZKSoK[m]{(x, y) : h = g^x ∧ c = g^y} indicates a signature of knowledge on message m.
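To illustrate the notation above, the following is a minimal sketch of a Schnorr-style signature of knowledge ZKSoK[m]{(x) : h = g^x} made non-interactive with the Fiat-Shamir heuristic. The modulus and generator are illustrative assumptions rather than parameters from the paper, and the response is left unreduced so the check holds without fixing a group order; a real implementation would use properly generated parameters.

```python
# Sketch of a Fiat-Shamir Schnorr proof of knowledge of x with h = G^x (mod P).
import hashlib, secrets

P = 2**255 - 19          # illustrative large prime modulus (assumed)
G = 4                    # illustrative generator (assumed)

def challenge(*parts) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big")

def zksok_sign(m: bytes, x: int, h: int):
    """Prove knowledge of x such that h = G^x (mod P), bound to message m."""
    k = secrets.randbits(512)            # commitment randomness (wider than x)
    t = pow(G, k, P)                     # commitment
    c = challenge(G, h, t, m)            # Fiat-Shamir challenge
    s = k + c * x                        # response (unreduced, for simplicity)
    return c, s

def zksok_verify(m: bytes, h: int, sig) -> bool:
    c, s = sig
    t = (pow(G, s, P) * pow(h, -c, P)) % P   # recompute G^s * h^(-c) = G^k
    return c == challenge(G, h, t, m)

x = secrets.randbits(256); h = pow(G, x, P)
assert zksok_verify(b"tx hash", h, zksok_sign(b"tx hash", x, h))
```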
Accumulators. Our construction uses an accumulator based on the Strong RSA assumption. The accumulator we use was first proposed by Benaloh and de Mare [10] and later improved by Baric and Pfitzmann [11] and Camenisch and Lysyanskaya [12]. We describe the accumulator using the following algorithms:

- AccumSetup(λ) → params. On input a security parameter, sample primes p, q (with polynomial dependence on the security parameter), compute N = pq, and sample a seed value u ∈ QR_N, u ≠ 1. Output (N, u) as params.
- Accumulate(params, C) → A. On input params (N, u) and a set of prime numbers C = {c_1, ..., c_i | c ∈ [𝒜, ℬ]}, compute the accumulator A as u^(c_1 c_2 ··· c_n) mod N.
- GenWitness(params, v, C) → w. On input params (N, u), a set of prime numbers C as described above, and a value v ∈ C, the witness w is the accumulation of all the values in C besides v, i.e., w = Accumulate(params, C \ {v}).
- AccVerify(params, A, v, ω) → {0, 1}. On input params (N, u), an element v, and witness ω, compute A′ ≡ ω^v mod N and output 1 if and only if A′ = A, v is prime, and v ∈ [𝒜, ℬ] as defined previously.

For simplicity, the description above uses the full calculation of A. Camenisch and Lysyanskaya [11] observe that the accumulator may also be incrementally updated, i.e., given an existing accumulator A_n it is possible to add an element x and produce a new accumulator value A_(n+1) by computing A_(n+1) = A_n^x mod N. We make extensive use of this optimization in our practical implementation. Camenisch and Lysyanskaya [11] show that the accumulator satisfies a strong collision-resistance property if the Strong RSA assumption is hard. Informally, this ensures that no p.p.t. adversary can produce a pair (v, ω) such that v ∉ C and yet AccVerify is satisfied. Additionally, they describe an efficient zero-knowledge proof of knowledge that a committed value is in an accumulator. We convert this into a non-interactive proof using the Fiat-Shamir transform and refer to the resulting proof using the following notation:

NIZKPoK{(v, ω) : AccVerify((N, u), A, v, ω) = 1}.
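The sketch below mirrors the four accumulator algorithms just defined, using a tiny hard-coded modulus purely for illustration; AccumSetup would normally sample fresh primes p, q and a seed u ∈ QR_N, and the primality and range checks on accumulated values are omitted here.

```python
# Sketch of the Strong-RSA accumulator algorithms (toy parameters, assumed).
from functools import reduce

N = 3233          # toy RSA modulus 61 * 53 (insecure, illustration only)
u = 4             # toy seed value, a quadratic residue mod N (assumed)

def accumulate(coins):
    """A = u^(c_1 * c_2 * ... * c_n) mod N over the prime coin values."""
    exponent = reduce(lambda acc, c: acc * c, coins, 1)
    return pow(u, exponent, N)

def gen_witness(v, coins):
    """Witness for v is the accumulation of every coin except v."""
    return accumulate([c for c in coins if c != v])

def acc_verify(A, v, w) -> bool:
    """Check w^v = A mod N (primality / range checks on v omitted in this sketch)."""
    return pow(w, v, N) == A

def add_incrementally(A_n, x):
    """Incremental update A_(n+1) = A_n^x mod N described above."""
    return pow(A_n, x, N)

coins = [3, 5, 7, 11]                 # coin values must be primes in the allowed range
A = accumulate(coins)
assert acc_verify(A, 7, gen_witness(7, coins))
assert add_incrementally(A, 13) == accumulate(coins + [13])
```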
Our Construction

We now describe a concrete decentralized e-cash scheme. Our scheme is secure assuming the hardness of the Strong RSA and Discrete Logarithm assumptions, and the existence of a zero-knowledge proof system. We now describe the algorithms:

- Setup(1^λ) → params. On input a security parameter, run AccumSetup(1^λ) to obtain the values (N, u). Next generate primes p, q such that p = 2^ω q + 1 for ω ≥ 1. Select random generators g, h such that G = ⟨g⟩ = ⟨h⟩ and G is a subgroup of Z_q^*. Output params = (N, u, p, q, g, h).
- Mint(params) → (c, skc). Select S, r ← Z_q^* and compute c ← g^S h^r mod p such that c is prime and c ∈ [𝒜, ℬ]. Set skc = (S, r) and output (c, skc).
- Spend(params, c, skc, R, C) → (π, S). If c ∉ C output ⊥. Compute A ← Accumulate((N, u), C) and ω ← GenWitness((N, u), c, C). Output (π, S) where π comprises the following signature of knowledge:

  π = ZKSoK[R]{(c, w, r) : AccVerify((N, u), A, c, w) = 1 ∧ c = g^S h^r}

- Verify(params, π, S, R, C) → {0, 1}. Given a proof π, a serial number S, and a set of coins C, first compute A ← Accumulate((N, u), C). Next verify that π is the aforementioned signature of knowledge on R using the known public values. If the proof verifies successfully, output 1, otherwise output 0.

Our protocol assumes a trusted setup process for generating the parameters. We stress that the accumulator trapdoor (p, q) is not used subsequent to the Setup procedure and can therefore be destroyed immediately after the parameters are generated. Alternatively, implementers can use the technique of Sander for generating so-called RSA UFOs for accumulator parameters without a trapdoor [24].
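The sketch below illustrates the Mint algorithm above: sample (S, r) and retry until the Pedersen commitment g^S h^r mod p is prime. The tiny group parameters and the trial-division primality test are illustrative assumptions, and the range check on c is omitted.

```python
# Sketch of Mint: retry until the Pedersen commitment c = g^S * h^r mod p is prime.
import secrets

p, q = 23, 11          # toy primes with p = 2q + 1 (illustrative only)
g, h = 4, 9            # toy generators of the order-q subgroup (assumed)

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def mint():
    """Return (c, skc) with c = g^S h^r mod p prime and skc = (S, r)."""
    while True:
        S = secrets.randbelow(q - 1) + 1
        r = secrets.randbelow(q - 1) + 1
        c = (pow(g, S, p) * pow(h, r, p)) % p
        if is_prime(c):              # range check on c omitted in this sketch
            return c, (S, r)

c, (S, r) = mint()
```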
Security Analysis

We now consider the security of our construction.

Theorem 4.1: If the zero-knowledge signature of knowledge is computationally zero-knowledge in the random oracle model, then Π = (Setup, Mint, Spend, Verify) satisfies the Anonymity property.

We provide a proof sketch for Theorem 4.1 in Appendix A. Intuitively, the security of our construction stems from the fact that the coin commitment C is a perfectly-hiding commitment and the signature proof π is at least computationally zero-knowledge. These two facts ensure that the adversary has at most negligible advantage in guessing which coin was spent.

Theorem 4.2: If the signature proof π is sound in the random oracle model, the Strong RSA problem is hard, and the Discrete Logarithm problem is hard in G, then Π = (Setup, Mint, Spend, Verify) satisfies the Balance property.

A proof of Theorem 4.2 is included in Appendix A. Briefly, this proof relies on the binding properties of the coin commitment, as well as the soundness and unforgeability of the ZKSoK and collision-resistance of the accumulator. We show that an adversary who wins the Balance game with non-negligible advantage can be used to either find a collision in the commitment scheme (allowing us to solve the Discrete Logarithm problem) or find a collision in the accumulator (which leads to a solution for Strong RSA).

Integrating with Bitcoin

While the construction of the previous section gives an overview of our approach, we have yet to describe how our techniques integrate with Bitcoin. In this section we address the specific challenges that come up when we combine a decentralized e-cash scheme with the Bitcoin protocol. The general overview of our approach is straightforward. To mint a zerocoin c of denomination d, Alice runs Mint(params) → (c, skc) and stores skc securely. She then embeds c in the output of a Bitcoin transaction that spends d + fees classical bitcoins. Once a mint transaction has been accepted into the block chain, c is included in the global accumulator A, and the currency cannot be accessed except through a Zerocoin spend, i.e., it is essentially placed into escrow. To spend c with Bob, Alice first constructs a partial transaction ptx that references an unclaimed mint transaction as input and includes Bob's public key as output. She then traverses all valid mint transactions in the block chain, assembles the set of minted coins C, and runs Spend(params, c, skc, hash(ptx), C) → (π, S). Finally, she completes the transaction by embedding (π, S) in the scriptSig of the input of ptx. The output of this transaction could also be a further Zerocoin mint transaction, a feature that may be useful to transfer value between multiple Zerocoin instances (i.e., of different denomination) running in the same block chain. When this transaction appears on the network, nodes check that Verify(params, π, S, hash(ptx), C) = 1 and check that S does not appear in any previous transaction. If these conditions hold and the referenced mint transaction is not claimed as an input into a different transaction, the network accepts the spend as valid and allows Alice to redeem d bitcoins.

Computing the accumulator. A naive implementation of the construction in the section "Decentralized e-cash from strong RSA" requires that the verifier recompute the accumulator A with each call to Verify(...). In practice, the cost can be substantially reduced. First, recall that the accumulator in our construction can be computed incrementally, hence nodes can add new coins to the accumulation when they arrive. To exploit this, we require any node mining a new block to add the zerocoins in that block to the previous block's accumulator and store the resulting new accumulator value in the coinbase transaction at the start of the new block. We call this an accumulator checkpoint. Peer nodes validate this computation before accepting the new block into the blockchain.
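The sketch below shows the accumulator checkpoint idea just described: each new block extends the previous block's accumulator with the zerocoins it contains, and the result is stored per block. The toy modulus and block contents are illustrative assumptions reused from the earlier accumulator sketch.

```python
# Sketch of per-block accumulator checkpoints via incremental updates.
N, u = 3233, 4                       # toy accumulator parameters (assumed)

def checkpoint(prev_checkpoint: int, zerocoins_in_block) -> int:
    """Fold the block's (prime) zerocoin values into the running accumulator."""
    acc = prev_checkpoint
    for c in zerocoins_in_block:
        acc = pow(acc, c, N)         # incremental update A <- A^c mod N
    return acc

blocks = [[3, 5], [], [7, 11]]       # zerocoins minted in each block (illustrative)
checkpoints, acc = [], u
for coins in blocks:
    acc = checkpoint(acc, coins)
    checkpoints.append(acc)          # stored in each block's coinbase transaction
```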
Provided that this verification occurs routinely when blocks are added to the chain, some clients may choose to trust the accumulator in older (confirmed) blocks rather than re-compute it from scratch. With this optimization, Alice need no longer compute the accumulator A and the full witness ω for c. Instead she can merely reference the current block's accumulator checkpoint and compute the witness starting from the checkpoint preceding her mint (instead of starting at T_0), since computing the witness is equivalent to accumulating C \ {c}.

New transaction types. Bitcoin transactions use a flexible scripting language to determine the validity of each transaction. Unfortunately, Bitcoin script is (by design) not Turing-complete. Moreover, large segments of the already-limited script functionality have been disabled in the Bitcoin production network due to security concerns. Hence, the existing script language cannot be used for sophisticated calculations such as verifying zero-knowledge proofs. Fortunately for our purposes, the Bitcoin designers chose to reserve several script operations for future expansion. We extend Bitcoin by adding a new instruction: ZEROCOIN_MINT. Minting a zerocoin constructs a transaction with an output whose scriptPubKey contains this instruction and a coin c. Nodes who receive this transaction should validate that c is a well-formed coin. To spend a zerocoin, Alice constructs a new transaction that claims as input some Zerocoin mint transaction and has a scriptSig field containing (π, S) and a reference to the block containing the accumulator used in π. A verifier extracts the accumulator from the referenced block and, using it, validates the spend as described earlier. Finally, we note that transactions must be signed to prevent an attacker from simply changing who the transaction is paid to. Normal Bitcoin transactions include an ECDSA signature by the key specified in the scriptPubKey of the referenced input. However, for a spend transaction on an arbitrary zerocoin, there is no ECDSA public key. Instead, we use the ZKSoK π to sign the transaction hash that normally would be signed using ECDSA.

Statekeeping and side effects. Validating a zerocoin changes Bitcoin's semantics: currently, Bitcoin's persistent state is defined solely in terms of transactions and blocks of transactions. Furthermore, access to this state is done via explicit reference by hash. Zerocoin, on the other hand, because of its strong anonymity requirement, deals with existentials: the coin is in the set of thus-far-minted coins and its serial number is not yet in the set of spent serial numbers. To enable these types of qualifiers, we introduce side effects into Bitcoin transaction handling. Processing a mint transaction causes a coin to be accumulated as a side effect. Processing a spend transaction causes the coin serial number to be added to a list of spent serial numbers held by the client. For coin serial numbers, we have little choice but to keep a full list of them per client and incur the (small) overhead of storing that list and the larger engineering overhead of handling all possible ways a transaction can enter a client. The accumulator state is maintained within the accumulator checkpoints, which the client verifies for each received block.
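A minimal sketch of the client-side statekeeping just described: processing a mint accumulates the coin as a side effect, and processing a spend adds its serial number to the spent-serial list. The data layout is an assumption for illustration, not bitcoind's actual structures.

```python
# Sketch of Zerocoin statekeeping side effects (illustrative data layout).
N, u = 3233, 4                                   # toy accumulator parameters (assumed)

class ZerocoinState:
    def __init__(self):
        self.accumulator = u                     # running accumulator checkpoint
        self.spent_serials = set()               # serial numbers seen in spends

    def process_mint(self, coin_value: int):
        """Side effect of a mint transaction: accumulate the coin."""
        self.accumulator = pow(self.accumulator, coin_value, N)

    def process_spend(self, serial_number: int) -> bool:
        """Side effect of a spend: record the serial number; reject double spends."""
        if serial_number in self.spent_serials:
            return False
        self.spent_serials.add(serial_number)
        return True

state = ZerocoinState()
state.process_mint(7)
assert state.process_spend(12345) and not state.process_spend(12345)
```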
Proof optimizations. For reasonable parameter sizes, the proofs produced by $\mathrm{Spend}(\cdots)$ exceed Bitcoin's 10 KB transaction size limits. Although we can simply increase this limit, doing so has two drawbacks: (1) it drastically increases the storage requirements for Bitcoin, since current transactions are between 1 and 2 KB, and (2) it may increase memory pressure on clients that store transactions in memory. In our prototype implementation we store our proofs in a separate, well-known location (a simple server). A full implementation could use a Distributed Hash Table or non-block-chain-backed storage in Bitcoin. While we recommend storing proofs in the block chain, these alternatives do not increase the storage required for the block chain.

Suggestions for Optimizing Proof Verification
The complexity of the proofs will also lead to longer verification times than expected with a standard Bitcoin transaction. This is magnified by the fact that a Bitcoin transaction is verified once when it is included in a block and again by every node when that block is accepted into the block chain. Although the former cost can be accounted for by charging transaction fees, it would obviously be ideal for these costs to be as low as possible. One approach is to distribute the cost of verification over the entire network and not make each node verify the entire proof. Because the ZKSoK we use utilizes cut-and-choose techniques, it essentially consists of $n$ repeated iterations of the same proof (reducing the probability of forgery to roughly $2^{-n}$). We can simply have nodes randomly select which iterations of the proofs they verify. By distributing this process across the network, we should achieve approximately the same security with less duplication of effort. This optimization involves a time-space tradeoff, since the existing proof is verified by computing a series of (at a minimum) 1024-bit values $T_1, \ldots, T_n$ and hashing the result. A naive implementation would require us to send $T_1, \ldots, T_n$ fully computed, greatly increasing the size of the proof, since the client will only compute some of them but needs all of them to verify the hash. We can avoid this issue by replacing the standard hash with a Merkle tree whose leaves are the hashed $T_i$ values and whose root is the challenge hash used in the proof. We can then send the 160-bit or 256-bit intermediate nodes instead of the 1024-bit $T_i$ values, allowing the verifier to compute only a subset of the $T_i$ values and yet still validate the proof against the challenge without drastically increasing the proof size (a sketch of this sampled verification appears below).

Limited Anonymity and Forward Security
A serious concern in the Bitcoin community is the loss of wallets due to poor endpoint security. In traditional Bitcoin, this results in the theft of coins [4]. However, in the Zerocoin setting it may also allow an attacker to deanonymize Zerocoin transactions using the stored $sk_c$. The obvious solution is to securely delete $sk_c$ immediately after a coin is spent. Unfortunately, this provides no protection if $sk_c$ is stolen at some earlier point. One solution is to generate the spend transaction immediately (or shortly) after the coin is minted, possibly using an earlier checkpoint for calculating $\mathbf{C}$.
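Returning to the proof-verification optimization above, here is a minimal sketch of the Merkle-tree sampling it describes: leaves are hashes of the $T_i$ values, the root stands in for the challenge hash, and a verifier spot-checks only a random subset of leaves via their authentication paths. The encoding and sizes below are illustrative, not the paper's exact construction.

```python
# Sampled verification of cut-and-choose proof values via a Merkle tree (toy sketch).
import hashlib
import os
import random

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Return all levels, from leaves up to the root, padding to a power of two."""
    level = leaves[:]
    while len(level) & (len(level) - 1):
        level.append(h(b"pad"))
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels: list[list[bytes]], index: int) -> list[bytes]:
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify_leaf(root: bytes, leaf: bytes, index: int, path: list[bytes]) -> bool:
    node = leaf
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Toy usage: the prover commits to hashed T_i values; the verifier checks a few of them.
T = [os.urandom(128) for _ in range(16)]      # stand-ins for the 1024-bit T_i values
levels = build_tree([h(t) for t in T])
root = levels[-1][0]                          # plays the role of the challenge hash
for i in random.sample(range(16), 4):         # verify a random subset only
    assert verify_leaf(root, h(T[i]), i, auth_path(levels, i))
```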
Spending a coin early in this way greatly reduces the user's anonymity by decreasing the number of coins in $\mathbf{C}$ and leaking some information about when the coin was minted. However, no attacker who compromises the wallet can link any zerocoins in it to their mint transactions.

For our implementation, we chose to modify bitcoind, the original open-source Bitcoin C++ client. This required several modifications. First, we added instructions to the Bitcoin script for minting and spending zerocoins. Next, we added transaction types and code for handling these new instructions, as well as maintaining the list of spent serial numbers and the accumulator. We used the Charm cryptographic framework [25] to implement the cryptographic constructions in Python, and we used Boost's Python utilities to call that code from within bitcoind. This introduces some performance overhead, but it allowed us to rapidly prototype and leave room for implementing future constructions as well.

Incremental Deployment
As described above, Zerocoin requires changes to the Bitcoin protocol that must happen globally: while transactions containing the new instructions will be validated by updated servers, they will fail validation on older nodes, potentially causing the network to split when a block is produced that validates for some, but not all, nodes. Although this is not the first time Bitcoin has faced this problem, and there is precedent for a flag-day type upgrade strategy [26], it is not clear how willing the Bitcoin community is to repeat it. As such, we consider the possibility of an incremental deployment. One way to accomplish this is to embed the above protocol as comments in standard Bitcoin scripts. For non-Zerocoin-aware nodes, this data is effectively inert, and we can use Bitcoin's $n$-of-$k$ signature support to specify that such comment-embedded zerocoins are valid only if signed by some subset of the Zerocoin processing nodes. Such Zerocoin-aware nodes can parse the comments and charge transaction fees for validation according to the proofs embedded in the comments, thus providing an incentive for more nodes to provide such services. Since this only changes the validation mechanism for Zerocoin, the Anonymity property holds, as does the Balance property if no more than $n - 1$ Zerocoin nodes are malicious. Some care must be taken when electing these nodes to prevent a Sybil attack. Thankfully, if we require that such a node also produce blocks in the Bitcoin block chain, we have a decent deterrent. Furthermore, because any malfeasance of these nodes is readily detectable (since they signed an invalid Zerocoin transaction), third parties can audit these nodes and potentially hold funds in escrow to deter fraud.

Real world security and parameter choice
Anonymity of Zerocoin
Definition 3.2 states that given two Zerocoin mints and one spend, one cannot do much better than guess which minted coin was spent. Put differently, an attacker learns no more from our scheme than they would from observing the mints and spends of some ideal scheme. However, even an ideal scheme imposes limitations. For example, consider a case where $N$ coins are minted, then all $N$ coins are subsequently spent. If another coin is minted after this point, the size of the anonymity set for the next spend is $k = 1$, not $k = N + 1$, since it is clear to all observers that the previous coins have been used.
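A toy illustration of this bookkeeping (an assumption-level model for intuition, not part of the protocol): each spend can only hide among coins that have been minted and not already accounted for by earlier spends.

```python
# Toy model: effective anonymity-set size observed at each spend in a transcript.

def anonymity_set_sizes(events: list[str]) -> list[int]:
    """events is a sequence of 'mint'/'spend'; returns the set size seen by each spend."""
    minted = spent = 0
    sizes = []
    for e in events:
        if e == "mint":
            minted += 1
        else:  # 'spend'
            sizes.append(minted - spent)
            spent += 1
    return sizes

# N coins minted and spent, then one more mint: the final spend hides among only 1 coin.
N = 5
print(anonymity_set_sizes(["mint"] * N + ["spend"] * N + ["mint", "spend"]))
# -> [5, 4, 3, 2, 1, 1]
```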
We also stress that, as in many anonymity systems, privacy may be compromised by an attacker who mints a large fraction of the active coins. Hence, a lower bound on the anonymity provided is the number of coins minted by honest parties between a coin's mint and its spend. An upper bound is the total set of minted coins. We also note that Zerocoin reveals the number of minted and spent coins to all users of the system, which provides a potential source of information to attackers. This is in contrast to many previous e-cash schemes, which reveal this information primarily to merchants and the bank. However, we believe this may be an advantage rather than a loss, since the bank is generally considered an adversarial party in most e-cash security models. The public model of Zerocoin actually removes an information asymmetry by allowing users to determine when such conditions might pose a problem. Lastly, Zerocoin does not hide the denominations used in a transaction. In practice, this problem can be avoided by simply fixing one or a small set of coin denominations and exchanging coins until one has those denominations, or by simply using Zerocoin to anonymize bitcoins.

Generally, cryptographers specify security in terms of a single, adjustable security parameter $\lambda$. Indeed, we have used this notation throughout the previous sections. In reality, however, there are three distinct security choices for Zerocoin which affect either the system's anonymity, its resilience to counterfeiting, or both. These are: (1) the size of the Schnorr group used in the coin commitments; (2) the size of the RSA modulus used in the accumulator; and (3) $\lambda_{zkp}$, the security level of the zero-knowledge proofs.

Commitments. Because Pedersen commitments are information-theoretically hiding for any Schnorr group whose order is large enough to fit the committed values, the size of the group used does not affect the long-term anonymity of Zerocoin. The security of the commitment scheme does, however, affect counterfeiting: an attacker who can break the binding property of the commitment scheme can mint a zerocoin that opens to at least two different serial numbers, resulting in a double spend. As a result, the Schnorr group must be large enough that such an attack cannot feasibly be mounted in the lifetime of a coin. On the other hand, the size of the signature of knowledge $\pi$ used in coin spends increases linearly with the size of the Schnorr group. One solution is to minimize the group size by announcing fresh parameters for the commitment scheme periodically and forcing old zerocoins to expire unless exchanged for new zerocoins minted under the fresh parameters. Since all coins being spent on the network at time $t$ are spent with the current parameters, and all previous coins can be converted to fresh ones, this does not decrease the anonymity of the system. It does, however, require users to convert old zerocoins to fresh ones before the old parameters expire. For our prototype implementation, we chose to use 1024-bit parameters on the assumption that commitment parameters could be regenerated periodically. We explore the possibility of extensions to Zerocoin that might enable smaller groups in section "Conclusion and future work".

Accumulator RSA key. Because generating a new accumulator requires either a new trusted setup phase or generating a new RSA UFO [24], we cannot re-key very frequently.
As a result, the accumulator is long-lived, and thus we truly need long-term security. Therefore we currently propose an RSA key of at least 3072 bits. We note that this does not greatly affect the size of the coins themselves, and, because the proof of accumulator membership is efficient, it does not have a large adverse effect on the overall coin-spend proof size. Moreover, although re-keying the accumulator is expensive, it need not reduce the anonymity of the system, since the new parameters can be used to re-accumulate the existing coin set and hence anonymize spends over that whole history.

Zero-knowledge proof security $\lambda_{zkp}$. This parameter affects the anonymity and security of the zero-knowledge proof. It also greatly affects the size of the spend proof. Thankfully, since each proof is independent, it applies per proof and therefore per spend. As such, a dishonest party would have to expend roughly $2^{\lambda_{zkp}}$ effort to forge a single coin, or could link a single coin mint to a spend with probability roughly $1/2^{\lambda_{zkp}}$. As such we pick $\lambda_{zkp} = 80$ bits.

To validate our results, we conducted several experiments using the modified bitcoind implementation described in Section V. We ran our experiments with three different parameter sizes, where each corresponds to a length of the RSA modulus $N$: 1024 bits, 2048 bits, and 3072 bits.

Figure 3. Zerocoin performance as a function of parameter size.

We conducted two types of experiments: (1) microbenchmarks that measure the performance of our cryptographic constructions and (2) tests of our whole modified Bitcoin client measuring the time to verify Zerocoin-carrying blocks. The former gives us a reasonable estimate of the cost of minting a single zerocoin, spending it, and verifying the resulting transaction. The latter gives us an estimate of Zerocoin's impact on the existing Bitcoin network and the computational cost that will be borne by each node that verifies Zerocoin transactions. All of our experiments were conducted on an Intel Xeon E3-1270 V2 (3.50 GHz quad-core processor with hyperthreading) with 16 GB of RAM, running 64-bit Ubuntu Server 11.04 with Linux kernel 2.6.38.

Microbenchmarks
To evaluate the performance of our Mint, Spend, and Verify algorithms in isolation, we conducted a series of microbenchmarks using the Charm (Python) implementation. Our goal in these experiments was to provide a direct estimate of the performance of our cryptographic primitives.

Experimental setup. One challenge in conducting our microbenchmarks is the accumulation of coins in $\mathbf{C}$ for the witness in $\mathrm{Spend}(\cdots)$ or for the global accumulator in both $\mathrm{Spend}(\cdots)$ and $\mathrm{Verify}(\cdots)$. This is problematic for two reasons. First, we do not know how large $\mathbf{C}$ will be in practice. Second, in our implementation accumulations are incremental. To address these issues we chose to break our microbenchmarks into two separate experiments. The first experiment simply computes the accumulator for a number of possible sizes of $\mathbf{C}$, ranging from 1 to 50,000 elements. The second experiment measures the runtime of the $\mathrm{Spend}(\cdots)$
and $\mathrm{Verify}(\cdots)$ routines with a precomputed accumulator and witness $(A, \omega)$. We conducted our experiments on a single thread of the processor, using all three parameter sizes. All experiments were performed 500 times, and the results given represent the average of these runs. Figure 3a shows the measured times for the coin operations, Figure 3b shows the resulting proof sizes for each security parameter, and Figure 3c shows the resulting times for computing the accumulator. We stress that accumulation in our system is incremental, typically over at most the 200–500 transactions in a block (which takes at worst eight seconds), and hence the cost of computing the global accumulator is amortized. The only time one might accumulate 50,000 coins at once would be when generating the witness for a very old zerocoin.

Block Verification
How Zerocoin affects network transaction processing determines its practicality and scalability. Like all transactions, Zerocoin spends must be verified first by the miner, to make sure he is not including invalid transactions in a block, and then again by the network, to make sure it is not including an invalid block in the block chain. In both cases, this entails checking that $\mathrm{Verify}(\cdots) = 1$ for each Zerocoin transaction and computing the accumulator checkpoint. We need to know the impact of this for two reasons. First, the Bitcoin protocol specifies that a new block should be created on average once every 10 minutes. If verification takes longer than 10 minutes for blocks with a reasonable number of zerocoins, then the network cannot function. Second, while the cost of generating these blocks and verifying their transactions can be offset by transaction fees and coin mining, the cost of verifying blocks prior to appending them to the block chain is only offset for mining nodes (who can view it as part of the cost of mining a new block). This leaves anyone else verifying the block chain with an uncompensated computational cost.

Experimental setup. To measure the effect of Zerocoin on block verification time, we measure how long it takes our modified bitcoind client to verify externally loaded test blocks containing 200, 400, and 800 transactions, where 0, 10, 25, 75, or 100 percent of the transactions are Zerocoin transactions (half of which are mints and half spends). We repeat this experiment for all three security parameters. Our test data consist of two blocks. The first contains $z$ Zerocoin mints that must exist for any spends to occur. The second block is our actual test vector. It contains, in a random order, $z$ Zerocoin spends of the coins in the previous block, $z$ Zerocoin mints, and $s$ standard Bitcoin sendToAddress transactions. We measure how long the processblock call of the bitcoind client takes to verify the second block containing the mix of Zerocoin and classical Bitcoin transactions. For accuracy, we repeat these measurements 100 times and average the results. The results are presented in Figure 3d. Our results show that Zerocoin scales beyond current Bitcoin transaction volumes.
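The measurement procedure just described might look roughly like the following sketch. The two callables stand in for hypothetical helpers (building a test block and invoking the client's block verification); this is the shape of the experiment, not the authors' actual harness.

```python
# Rough sketch of the block-verification timing experiment (illustrative only).
import statistics
import time

def time_block_verification(make_test_block, process_block,
                            num_tx: int, zerocoin_fraction: float, runs: int = 100) -> float:
    """Average wall-clock time to verify a block with the given Zerocoin mix."""
    z = int(num_tx * zerocoin_fraction) // 2   # half of the Zerocoin txs are mints...
    s = num_tx - 2 * z                         # ...half are spends; the rest are standard
    samples = []
    for _ in range(runs):
        block = make_test_block(mints=z, spends=z, standard=s)
        start = time.perf_counter()
        process_block(block)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# e.g. time_block_verification(make_test_block, process_block, 800, 0.10)
# for an 800-transaction block with a 10% Zerocoin mix.
```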
Though we require significant computational effort, verification does not fundamentally threaten the operation of the network: even with a block containing 800 Zerocoin transactions (roughly double the current average size of a Bitcoin block), verification takes less than five minutes. This is under the unreasonable assumption that all Bitcoin transactions are supplanted by Zerocoin transactions. In fact, we can scale well beyond Bitcoin's current average of between 200 and 400 transactions per block [27] if Zerocoin transactions are not the majority of transactions on the network. If, as the graph suggests, we assume that verification scales linearly, then we can support a 50% transaction mix out to 350 transactions per minute (3,500 transactions per block) and a 10% mix out to 800 transactions per minute (8,000 per block).

One remaining question is at what point we start running a risk of coin serial number collisions causing erroneous double spends. Even for our smallest serial numbers (160 bits), the collision probability is small, and for the 256-bit serial numbers used with the 3072-bit accumulator, our collision probability is at worst equal to the odds of a collision on a normal Bitcoin transaction, which uses SHA-256 hashes.

We stress several caveats about the above data. First, our prototype system does not exploit any parallelism, either for verifying multiple Zerocoin transactions or in validating an individual proof. Since the only serial dependency for either of these tasks is the (fast) duplicate serial number check, this offers the opportunity for substantial improvement. Second, the above data is not an accurate estimate of the financial cost of Zerocoin for the network: (a) it is an overestimate of a mining node's extra effort when verifying proposed blocks, since in practice many transactions in a received block will already have been received and validated by the node as it attempts to construct its own contribution to the block chain; (b) execution time is a poor metric in the context of Bitcoin, since miners are concerned with actual monetary operating cost; (c) since mining is typically performed using GPUs and, to a lesser extent, FPGAs and ASICs, which are far more efficient at computing hash collisions, the CPU cost measured here is likely insignificant. Finally, our experiment neglects the load on a node both from processing incoming transactions and from solving the proof of work. Again, we contend that most nodes will probably use GPUs for mining, and as such the latter is not an issue. The former, however, remains an unknown. At the very least it seems unlikely to disproportionately affect Zerocoin performance.

E-Cash and Bitcoin
Electronic cash has long been a research topic for cryptographers. Many cryptographic e-cash systems focus on user privacy and typically assume the existence of a semi-trusted coin issuer or bank. E-cash schemes largely break down into online schemes, where users have contact with a bank or registry, and offline schemes, where spending can occur even without a network connection. Chaum introduced the first online cryptographic e-cash system [28] based on RSA signatures, later extending this work to the offline setting [29] by de-anonymizing users who double-spent. Many subsequent works improved upon these techniques while maintaining the requirement of a trusted bank: for example, by making coins divisible [30], [31] and reducing wallet size [32].
One exception to the rule above comes from Sander and Ta-Shma [33], who presciently developed an alternative model that is reminiscent of our proposal: the central bank is replaced with a hash chain and signatures with accumulators. Unfortunately, the accumulator was not practical, a central party was still required, and no real-world system existed to compute the chain. Bitcoin's primary goal, on the other hand, is not anonymity. It has its roots in a non-academic proposal by Wei Dai for a distributed currency based on solving computational problems [34]. In Dai's original proposal anyone could create currency, but all transactions had to be broadcast to all clients. A second variant limited currency generation and transaction broadcast to a set of servers, which is effectively the approach Bitcoin takes. This is a marked distinction from most, if not all, other e-cash systems, since there is no need to select one or more trusted parties. There is a general assumption that a majority of the Bitcoin nodes are honest, but anyone can join a node to the Bitcoin network, and anyone can obtain the entire transaction graph. An overview of Bitcoin and some of its shortcomings was presented by Barber et al. in [2]. Numerous works have shown that "pseudonymized" graphs can be re-identified even under passive analysis. Narayanan and Shmatikov [5] showed that real-world social networks can be passively de-anonymized. Similarly, Backstrom et al. [35] constructed targeted attacks against anonymized social networks to test for relationships between vertices. Previously, Narayanan and Shmatikov de-anonymized users in the Netflix Prize data set by correlating data from IMDb [36]. Bitcoin itself came into existence in 2009 and is now beginning to receive scrutiny from privacy researchers. De-anonymization techniques were applied effectively to Bitcoin even at its relatively small 2011 size by Reid and Harrigan [3]. Ron and Shamir examined the general structure of the Bitcoin network graph [1] after its nearly three-fold expansion. Finally, we have been made privately aware of two other early-stage efforts to examine Bitcoin anonymity.

Conclusion and future work
Zerocoin is a distributed e-cash scheme that provides strong user anonymity and coin security under the assumption that there is a distributed, online, append-only transaction store. We use Bitcoin to provide such a store and the backing currency for our scheme. After providing general definitions, we proposed a concrete realization based on RSA accumulators and non-interactive zero-knowledge signatures of knowledge. Finally, we integrated our construction into Bitcoin and measured its performance.

Our work leaves several open problems. First, although our scheme is workable, the need for a double-discrete-logarithm proof leads to large proof sizes and verification times. We would prefer a scheme with both smaller proofs and greater speed. This is particularly important when it comes to reducing the cost of third-party verification of Zerocoin transactions. There are several promising constructions in the cryptographic literature, e.g., bilinear accumulators and mercurial commitments [12], [37]. While we were not able to find an analogue of our scheme using alternative components, it is possible that further research will lead to other solutions. Ideally such an improvement could produce a drop-in replacement for our existing implementation.
Second, Zerocoin currently derives both its anonymity and its security against counterfeiting from strong cryptographic assumptions, at the cost of substantially increased computational complexity and size. As discussed in section VI-B, anonymity is relatively cheap, and this cost is principally driven by the anti-counterfeiting requirement, manifesting itself through the size of the coins and the proofs used. In Bitcoin, counterfeiting a coin is not computationally prohibitive, it is merely computationally costly, requiring the user to obtain control of at least 51% of the network. This provides a possible alternative to our standard cryptographic assumptions: rather than the strong assumption that computing discrete logs is infeasible, we might construct our scheme on the weaker assumption that there is no financial incentive to break our construction, as the cost of computing a discrete log exceeds the value of the resulting counterfeit coins. For example, if we require spends to prove that fresh and random bases were used in the commitments for the corresponding mint transaction (e.g., by selecting the bases for the commitment from the hash of the coin serial number and proving that the serial number is fresh), then it appears that an attacker can only forge a single zerocoin per discrete log computation. Provided the cost of computing such a discrete log is greater than the value of a zerocoin, forging a coin is not profitable. How small this allows us to make the coins is an open question. There is relatively little work comparing the asymptotic difficulty of solving multiple distinct discrete logs in a fixed group, and it is not clear how theory translates into practice. We leave these questions, along with the security of the above proposed construction, as issues for future work.

Finally, we believe that further research could lead to different tradeoffs between security, accountability, and anonymity. A common objection to Bitcoin is that it can facilitate money laundering by circumventing legally binding financial reporting requirements. We propose that additional protocol modifications (e.g., the use of anonymous credentials [38]) might allow users to maintain their anonymity while demonstrating compliance with reporting requirements.

Acknowledgements. We thank Stephen Checkoway, George Danezis, and the anonymous reviewers for their helpful comments. The research in this paper was supported in part by the Office of Naval Research under contract N00014-11-1-0470, and DARPA and the Air Force Research Laboratory (AFRL) under contract FA8750-11-2-0211.

References
[1] D. Ron and A. Shamir, "Quantitative analysis of the full Bitcoin transaction graph," Cryptology ePrint Archive, Report 2012/584, 2012, http://eprint.iacr.org/.
[2] S. Barber, X. Boyen, E. Shi, and E. Uzun, "Bitter to better – how to make Bitcoin a better currency," in Financial Cryptography 2012, vol. 7397 of LNCS, 2012, pp. 399–414.
[3] F. Reid and M. Harrigan, "An analysis of anonymity in the Bitcoin system," in Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on Social Computing (SOCIALCOM). IEEE, 2011, pp. 1318–1326.
[4] T. B. Lee, "A risky currency?
Alleged $500,000 Bitcoin heist raises questions," Available at http://arstechnica.com/, June 2011.
[5] A. Narayanan and V. Shmatikov, "De-anonymizing social networks," in Security and Privacy, 2009 30th IEEE Symposium on. IEEE, 2009, pp. 173–187.
[6] "Bitcoin fog company," http://www.bitcoinfog.com/.
[7] "The Bitcoin Laundry," http://www.bitcoinlaundry.com/.
[8] "Blind Bitcoin," Information at https://en.bitcoin.it/wiki/Blind Bitcoin Transfers.
[9] J. Benaloh and M. de Mare, "One-way accumulators: a decentralized alternative to digital signatures," in EUROCRYPT '93, vol. 765 of LNCS, 1994, pp. 274–285.
[10] N. Barić and B. Pfitzmann, "Collision-free accumulators and fail-stop signature schemes without trees," in EUROCRYPT '97, vol. 1233 of LNCS, 1997, pp. 480–494.
[11] J. Camenisch and A. Lysyanskaya, "Dynamic accumulators and application to efficient revocation of anonymous credentials," in CRYPTO '02, 2002, pp. 61–76.
[12] L. Nguyen, "Accumulators from bilinear pairings and applications," in Topics in Cryptology – CT-RSA 2005, vol. 3376 of LNCS, 2005, pp. 275–292.
[13] J. Camenisch, M. Kohlweiss, and C. Soriente, "An accumulator based on bilinear maps and efficient revocation for anonymous credentials," in PKC '09, vol. 5443 of LNCS, 2009, pp. 481–500.
[14] S. Nakamoto, "Bitcoin: A peer-to-peer electronic cash system," 2009. [Online]. Available: http://www.bitcoin.org/bitcoin.pdf
[15] European Central Bank, "Virtual currency schemes," Available at http://www.ecb.europa.eu/pub/pdf/other/virtualcurrencyschemes201210en.pdf, October 2012.
[16] C.-P. Schnorr, "Efficient signature generation for smart cards," Journal of Cryptology, vol. 4, no. 3, pp. 239–252, 1991.
[17] R. Cramer, I. Damgård, and B. Schoenmakers, "Proofs of partial knowledge and simplified design of witness hiding protocols," in CRYPTO '94, vol. 839 of LNCS, 1994, pp. 174–187.
[18] J. Camenisch and M. Michels, "Proving in zero-knowledge that a number n is the product of two safe primes," in EUROCRYPT '99, vol. 1592 of LNCS, 1999, pp. 107–122.
[19] J. L. Camenisch, "Group signature schemes and payment systems based on the discrete logarithm problem," Ph.D. dissertation, ETH Zürich, 1998.
[20] S. Brands, "Rapid demonstration of linear relations connected by boolean operators," in EUROCRYPT '97, vol. 1233 of LNCS, 1997, pp. 318–333.
[21] A. Fiat and A. Shamir, "How to prove yourself: Practical solutions to identification and signature problems," in CRYPTO '86, vol. 263 of LNCS, 1986, pp. 186–194.
[22] M. Chase and A. Lysyanskaya, "On signatures of knowledge," in CRYPTO '06, vol. 4117 of LNCS, 2006, pp. 78–96.
[23] J. Camenisch and M. Stadler, "Efficient group signature schemes for large groups," in CRYPTO '97, vol. 1296 of LNCS, 1997, pp. 410–424.
[24] T. Sander, "Efficient accumulators without trapdoor (extended abstract)," in Information and Communication Security, vol. 1726 of LNCS, 1999, pp. 252–262.
[25] J. A. Akinyele, C. Garman, I. Miers, M. W. Pagano, M. Rushanan, M. Green, and A. D. Rubin, "Charm: A framework for rapidly prototyping cryptosystems," to appear, Journal of Cryptographic Engineering, 2013. [Online]. Available: http://dx.doi.org/10.1007/s13389-013-0057-3
[26] [Online]. Available: https://en.bitcoin.it/wiki/BIP 0016
[27] [Online]. Available: http://blockchain.info/charts/n-transactions-per-block
[28] D. Chaum, "Blind signatures for untraceable payments," in CRYPTO '82. Plenum Press, 1982, pp. 199–203.
[29] D. Chaum, A. Fiat, and M.
Naor, "Untraceable electronic cash," in CRYPTO 88, 1990, vol. 403 of LNCS, pp. 319–327. ↑ T. Okamoto and K. Ohta, "Universal electronic cash," in CRYPTO 91, 1992, vol. 576 of LNCS, pp. 324–337. ↑ T. Okamoto, "An efficient divisible electronic cash scheme," in Crypt '95, 1995, vol. 963 of LNCS, pp. 438–451. ↑ J. Camenisch, S. Hohenberger, and A. Lysyanskaya, "Compact e-cash," in EUROCRYPT '05, 2005, vol. 3494 of LNCS, pp. 566–566. ↑ T. Sander and A. Ta-Shma, "Auditable, anonymous electronic cash (extended abstract)," in CRYPTO '99, vol. 1666 of LNCS, 1999, pp. 555–572. ↑ W. Dai. B-money proposal. [Online]. Available: http: //www.weidai.com/bmoney.txt ↑ L. Backstrom, C. Dwork, and J. Kleinberg, "Wherefore art thou r3579x?: Anonymized social networks, hidden patterns, and structural steganography," in Proceedings of the 16th international conference on World Wide Web, ser. WWW '07. New York, NY, USA: ACM, 2007, pp. 181–190. ↑ A. Narayanan and V. Shmatikov, "Robust de-anonymization of large sparse datasets," in IEEE Symposium on Security and Privacy. IEEE, 2008, pp. 111–125. ↑ M. Chase, A. Healy, A. Lysyanskaya, T. Malkin, and L. Reyzin, "Mercurial commitments with applications to zero-knowledge sets," in EUROCRYPT '05, vol. 3494, 2005, pp. 422–439. ↑ J. Camenisch and A. Lysyanskaya, "An efficient system for non-transferable anonymous credentials with optional anonymity revocation," in EUROCRYPT '01, vol. 2045 of LCNS, 2001, pp. 93–118. Retrieved from "https://en.bmstu.wiki/index.php?title=Zerocoin_Anonymous_Distributed_E-Cash_from_Bitcoin&oldid=6101"
CommonCrawl
Algebra & Number Theory Algebra Number Theory Volume 9, Number 2 (2015), 317-370. Noncommutative Hilbert modular symbols Ivan Horozov More by Ivan Horozov Full-text: Open access PDF File (1152 KB) The main goal of this paper is to construct noncommutative Hilbert modular symbols. However, we also construct commutative Hilbert modular symbols. Both the commutative and the noncommutative Hilbert modular symbols are generalizations of Manin's classical and noncommutative modular symbols. We prove that many cases of (non)commutative Hilbert modular symbols are periods in the Kontsevich–Zagier sense. Hecke operators act naturally on them. Manin defined the noncommutative modular symbol in terms of iterated path integrals. In order to define noncommutative Hilbert modular symbols, we use a generalization of iterated path integrals to higher dimensions, which we call iterated integrals on membranes. Manin examined similarities between noncommutative modular symbol and multiple zeta values in terms of both infinite series and of iterated path integrals. Here we examine similarities in the formulas for noncommutative Hilbert modular symbol and multiple Dedekind zeta values, recently defined by the current author, in terms of both infinite series and iterated integrals on membranes. Algebra Number Theory, Volume 9, Number 2 (2015), 317-370. Revised: 17 September 2014 First available in Project Euclid: 16 November 2017 https://projecteuclid.org/euclid.ant/1510842284 doi:10.2140/ant.2015.9.317 Primary: 11F41: Automorphic forms on GL(2); Hilbert and Hilbert-Siegel modular groups and their modular and automorphic forms; Hilbert modular surfaces [See also 14J20] Secondary: 11F67: Special values of automorphic $L$-series, periods of modular forms, cohomology, modular symbols 11M32: Multiple Dirichlet series and zeta functions and multizeta values modular symbols Hilbert modular groups Hilbert modular surfaces iterated integrals Horozov, Ivan. Noncommutative Hilbert modular symbols. Algebra Number Theory 9 (2015), no. 2, 317--370. doi:10.2140/ant.2015.9.317. https://projecteuclid.org/euclid.ant/1510842284 A. Ash and A. Borel, "Generalized modular symbols", pp. 57–75 in Cohomology of arithmetic groups and automorphic forms (Luminy-Marseille, 1989), edited by J.-P. Labesse and J. Schwermer, Lecture Notes in Math. 1447, Springer, Berlin, 1990. Mathematical Reviews (MathSciNet): MR92e:11058 A. Ash and L. Rudolph, "The modular symbol and continued fractions in higher dimensions", Invent. Math. 55:3 (1979), 241–250. Mathematical Reviews (MathSciNet): MR82g:12011 Digital Object Identifier: doi:10.1007/BF01406842 L. Berger, G. B öckle, L. Dembélé, M. Dimitrov, T. Dokchitser, and J. Voight, Elliptic curves, Hilbert modular forms and Galois deformations, Birkhäuser/Springer, Basel, 2013. B. J. Birch, "Elliptic curves over $Q$: a progress report", pp. 396–400 in 1969 Number Theory Institute (Stony Brook, 1969), edited by D. J. Lewis, Proc. Symp. Pure Math. 20, Amer. Math. Soc., Providence, R.I., 1971. Mathematical Reviews (MathSciNet): MR47:3395 F. Borceux, Handbook of categorical algebra I: Basic category theory, Encyclopedia of Mathematics and its Applications 50, Cambridge University Press, 1994. Mathematical Reviews (MathSciNet): MR96g:18001a A. Borel and J.-P. Serre, "Corners and arithmetic groups", Comment. Math. Helv. 48 (1973), 436–491. K. S. Brown, Cohomology of groups, Graduate Texts in Mathematics 87, Springer, Berlin, 1982. Mathematical Reviews (MathSciNet): MR83k:20002 J. H. Bruinier, G. van der Geer, G. 
CommonCrawl
‣ Aproximação de funções contínuas e de funções diferenciáveis; Approximation of continuous functions and of differentiable functions
Maria Angélica Araujo
Source: Biblioteca Digital da Unicamp; Publisher: Biblioteca Digital da Unicamp; Type: Master's thesis; Format: application/pdf
#Continuous functions#Differentiable functions#Approximation theory
The aim of this dissertation is to present and prove some theorems of mathematical analysis, namely the Weierstrass Approximation Theorem, the Kakutani-Stone Theorem, the Stone-Weierstrass Theorems and the Nachbin Theorem. To prove them we recall some basic definitions and results of analysis and topology and we discuss the other tools that are necessary for their respective proofs.

‣ On the Hausdorff Dimension of Continuous Functions Belonging to Hölder and Besov Spaces on Fractal d-Sets
Carvalho, A.; Caetano, A.
Source: Springer; Publisher: Springer
#Besov spaces#Box counting dimension#Continuous functions#d-Sets#Fractals#Hausdorff dimension#Hölder spaces#Wavelets#Weierstrass function
The Hausdorff dimension of the graphs of the functions in Hölder and Besov spaces (in this case with integrability p ≥ 1) on fractal d-sets is studied. Denoting by s in (0,1] the smoothness parameter, the sharp upper bound min{d+1-s, d/s} is obtained. In particular, when passing from d ≥ s to d

‣ Numerical calculations of Hölder exponents for the Weierstrass functions with (min, +)-wavelets
Gondran, M.; Kenoufi, A.
Source: Sociedade Brasileira de Matemática Aplicada e Computacional; Publisher: Sociedade Brasileira de Matemática Aplicada e Computacional; Type: Journal article; Format: text/html
#(min, +)-wavelets#Hölder exponents#Weierstrass functions
We recall, for any function f : ℝ^n → ℝ, the so-called (min, +)-wavelets, which are lower and upper hulls built from (min, +) analysis [12, 13]. We show that this analysis can be applied numerically to the Weierstrass and Weierstrass-Mandelbrot functions, and that one recovers their theoretical Hölder exponents and fractal dimensions.

‣ The Weierstrass-Laguerre Transform
Srivastava, H. M.
#Physical Sciences: Mathematics
An elegant expression is obtained for the product of the inverse Weierstrass-Laguerre transforms of two functions in terms of their convolution. It is also shown how the main result can be extended to hold for the product of the inverse Weierstrass-Laguerre transforms of several functions.
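Several of the entries above and below concern Hölder exponents and fractal dimensions of the classical Weierstrass function; as background (a standard definition recalled here, not taken from the listed abstracts), it is usually written as

\[
W_{\lambda,b}(x) \;=\; \sum_{n=0}^{\infty} \lambda^{n} \cos\!\left( 2\pi b^{n} x \right),
\qquad 0 < \lambda < 1,\quad b > 1,\quad \lambda b > 1,
\]

which is known to be continuous everywhere, nowhere differentiable, of Hölder exponent $-\log\lambda/\log b$, and whose graph has box-counting dimension $2 + \log\lambda/\log b$.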
‣ O Teorema de Stone-Weierstrass e aplicações; The Stone-Weierstrass Theorem and applications
Lopes, Wanda Aparecida
Source: Universidade Federal de Uberlândia; Publisher: Universidade Federal de Uberlândia; Type: Master's dissertation
#Mathematics#Approximation#Continuous functions#Infinitely differentiable functions#Compact spaces#Separable spaces#Metrizable spaces#Functional analysis#Approximation theory
The aim of this dissertation is to prove and apply the Weierstrass Approximation Theorem, on the approximation of continuous functions on bounded closed intervals by polynomials, and the Stone-Weierstrass Theorem, on the approximation of continuous functions defined on compact topological spaces. As applications of the Weierstrass Approximation Theorem we deal with the moment problem for continuous functions and the approximation of continuous functions on the line by infinitely differentiable functions. As applications of the Stone-Weierstrass Theorem we prove that the space C(K) of continuous functions on the compact space K is separable if and only if K is metrizable, and also the existence of a compact space K such that C(K) is isometrically isomorphic to the space ℓ∞ of bounded sequences. (Master's dissertation, Universidade Federal de Uberlândia...)

‣ The Bolzano-Weierstrass Theorem is the Jump of Weak König's Lemma
Brattka, Vasco; Gherardi, Guido; Marcone, Alberto
Source: Universidade Cornell; Publisher: Universidade Cornell
#Mathematics - Logic#Computer Science - Logic in Computer Science
We classify the computational content of the Bolzano-Weierstrass Theorem and variants thereof in the Weihrauch lattice. For this purpose we first introduce the concept of a derivative or jump in this lattice and we show that it has some properties similar to the Turing jump. Using this concept we prove that the derivative of closed choice of a computable metric space is the cluster point problem of that space. By specialization to sequences with a relatively compact range we obtain a characterization of the Bolzano-Weierstrass Theorem as the derivative of compact choice. In particular, this shows that the Bolzano-Weierstrass Theorem on real numbers is the jump of Weak König's Lemma. Likewise, the Bolzano-Weierstrass Theorem on the binary space is the jump of the lesser limited principle of omniscience LLPO, and the Bolzano-Weierstrass Theorem on natural numbers can be characterized as the jump of the idempotent closure of LLPO. We also introduce the compositional product of two Weihrauch degrees f and g as the supremum of the composition of any two functions below f and g, respectively.
We can express the main result such that the Bolzano-Weierstrass Theorem is the compositional product of Weak König's Lemma and the Monotone Convergence Theorem. We also study the class of weakly limit computable functions...

‣ Some New Addition Formulae for Weierstrass Elliptic Functions
Eilbeck, J. Chris; England, Matthew; Ônishi, Yoshihiro
#Mathematics - Algebraic Geometry#Mathematics - Number Theory#11G05 (Primary) 33E05, 14H45, 14H52, 37K20 (Secondary)
We present new addition formulae for the Weierstrass functions associated with a general elliptic curve. We prove the structure of the formulae in n variables and give the explicit addition formulae for the 2- and 3-variable cases. These new results were inspired by new addition formulae found in the case of an equianharmonic curve, which we can now observe as a specialisation of the results here. The new formulae, and the techniques used to find them, also follow the recent work for the generalisation of Weierstrass' functions to curves of higher genus.; Comment: 20 pages

‣ A new construction of Eisenstein's completion of the Weierstrass zeta function
Rolen, Larry
#Mathematics - Number Theory
In the theory of elliptic functions and elliptic curves, the Weierstrass $\zeta$ function (which is essentially an antiderivative of the Weierstrass $\wp$ function) plays a prominent role. Although it is not an elliptic function, Eisenstein constructed a simple (non-holomorphic) completion of this form which is doubly periodic. This theorem has begun to play an important role in the theory of harmonic Maass forms, and was crucial to work of Guerzhoy as well as Alfes, Griffin, Ono, and the author. In particular, this simple completion of $\zeta$ provides a powerful method to construct harmonic Maass forms of weight zero which serve as canonical lifts under the differential operator $\xi_{0}$ of weight 2 cusp forms, and this has been shown to have deep applications to determining vanishing criteria for central values and derivatives of twisted Hasse-Weil $L$-functions for elliptic curves. Here we offer a new and motivated proof of Eisenstein's theorem, relying on the basic theory of differential operators for Jacobi forms together with a classical identity for the first quasi-period of a lattice. A quick inspection of the proof shows that it also allows one to easily construct more general non-holomorphic elliptic functions.; Comment: 3 pages...

‣ Multi-Dimensional Sigma-Functions
Buchstaber, V. M.; Enolski, V. Z.; Leykin, D. V.
#Mathematical Physics#Nonlinear Sciences - Exactly Solvable and Integrable Systems#Algebraic curves, theta and sigma-functions, completely integrable equations
In 1997 the present authors published a review (Ref. BEL97 in the present manuscript) that recapitulated and developed the classical theory of Abelian functions realized in terms of multi-dimensional sigma-functions. This approach, originated by K. Weierstrass and F. Klein, was aimed at extending to higher genera the Weierstrass theory of elliptic functions based on the Weierstrass $\sigma$-functions. Our development was motivated by recent achievements of mathematical physics and the theory of integrable systems that were based on results of the classical theory of multi-dimensional theta functions. Both theta and sigma-functions are entire and quasi-periodic functions, but it is worth remarking on the fundamental difference between them.
While theta-functions are defined in terms of the Riemann period matrix, the sigma-function can be constructed from the coefficients of the polynomial defining the curve. Note that the relation between the periods and the coefficients of the polynomial defining the curve is transcendental. Since the publication of our 1997 review many new results in this area have appeared (see the list of recent references below), which prompted us to submit this draft to arXiv without waiting for the publication of a well-prepared book. We have complemented the review with a list of articles published after 1997 that develop the theory of $\sigma$-functions presented here. Although the main body of this review is devoted to hyperelliptic functions, the method can be extended to an arbitrary algebraic curve; unless the opposite is stated, the new material we added does not assume hyperellipticity of the curve considered.; Comment: 267 pages...

‣ Fractional Weierstrass function by application of Jumarie fractional trigonometric functions and its analysis
Ghosh, Uttam; Sarkar, Susmita; Das, Shantanu
#Mathematics - Classical Analysis and ODEs
The classical example of a nowhere differentiable but everywhere continuous function is the Weierstrass function. In this paper we define the fractional-order Weierstrass function in terms of Jumarie fractional trigonometric functions. The Hölder exponent and box dimension of this function are calculated here. It is established that the Hölder exponent and box dimension of this fractional-order Weierstrass function are the same as in the original Weierstrass function, independent of incorporating the fractional trigonometric function. This is a new development in generalizing the classical Weierstrass function by using fractional trigonometric functions, studying its character, and taking the fractional derivative of the fractional Weierstrass function via the Jumarie fractional derivative, establishing that the roughness index is invariant under this generalization.; Comment: 17 pages, 2 figures, submitted to Physics Letters A

‣ Some addition formulae for Abelian functions for elliptic and hyperelliptic curves of cyclotomic type
Eilbeck, J. C.; Matsutani, S.; Onishi, Y.
#Mathematics - Algebraic Geometry#14H45, 14H52
We discuss a family of multi-term addition formulae for Weierstrass functions on specialized curves of genus one and two with many automorphisms. In the genus one case we find new addition formulae for the equianharmonic and lemniscate cases, and in genus two we find some new addition formulae for a number of curves, including the Burnside curve.; Comment: 19 pages. We have extended the Introduction, corrected some typos and tidied up some proofs, and inserted extra material on genus 3 curves

‣ The Application of Weierstrass elliptic functions to Schwarzschild Null Geodesics
Gibbons, G. W.; Vyska, M.
#General Relativity and Quantum Cosmology#High Energy Physics - Theory#Mathematical Physics
In this paper we focus on analytical calculations involving null geodesics in some spherically symmetric spacetimes. We use Weierstrass elliptic functions to fully describe null geodesics in Schwarzschild spacetime and to derive analytical formulae connecting the values of radial distance at different points along the geodesic.
We then study the properties of light triangles in Schwarzschild spacetime and give the expansion of the deflection angle to the second order in both $M/r_0$ and $M/b$ where $M$ is the mass of the black hole, $r_0$ the distance of closest approach of the light ray and $b$ the impact parameter. We also use the Weierstrass function formalism to analyze other more exotic cases such as Reissner-Nordstr\om null geodesics and Schwarzschild null geodesics in 4 and 6 spatial dimensions. Finally we apply Weierstrass functions to describe the null geodesics in the Ellis wormhole spacetime and give an analytic expansion of the deflection angle in $M/b$.; Comment: Latex file, 19 pages 4 figures references and two comments added ‣ Boundary values of harmonic gradients and differentiability of Zygmund and Weierstrass functions Donaire, Juan J.; Llorente, Jose G.; Nicolau, Artur We study differentiability properties of Zygmund functions and series of Weierstrass type in higher dimensions. While such functions may be nowhere differentiable, we show that, under appropriate assumptions, the set of points where the incremental quotients are bounded has maximal Hausdorff dimension. ‣ Hausdorff dimension of the graphs of the classical Weierstrass functions Shen, Weixiao #Mathematics - Dynamical Systems We obtain the explicit value of the Hausdorff dimension of the graphs of the classical Weierstrass functions, by proving absolute continuity of the SRB measures of the associated solenoidal attractors.; Comment: 42 pages ‣ An entropy formula for a non-self-affine measure with application to Weierstrass-type functions Otani, Atsuya #Mathematics - Dynamical Systems#37D20, 37D25, 37C45, 37D45 Let $ \tau : [0,1] \rightarrow [0,1] $ be a piecewise expanding map with full branches. Given $ \lambda : [0,1] \rightarrow (0,1) $ and $ g : [0,1] \rightarrow \mathbb{R} $ satisfying $ \tau ' \lambda > 1 $, we study the Weierstrass-type function \[ \sum _{n=0} ^\infty \lambda ^n (x) \, g (\tau ^n (x)), \] where $ \lambda ^n (x) := \lambda(x) \lambda (\tau (x)) \cdots \lambda (\tau ^{n-1} (x)) $. Under certain conditions, Bedford proved that the box counting dimension of its graph is given as the unique zero of the topological pressure function \[ s \mapsto P ((1-s) \log \tau ' + \log \lambda) . \] We give a sufficient condition under which the Hausdorff dimension also coincides with this value. We adopt a dynamical system theoretic approach which was originally used to investigate special cases including the classical Weierstrass functions. For this purpose we prove a new Ledrappier-Young entropy formula, which is a conditional version of Pesin's formula, for non-invertible dynamical systems. Our formula holds for all lifted Gibbs measures on the graph of the above function, which are generally not self-affine. ‣ Canonical Weierstrass Representation of Minimal and Maximal Surfaces in the Three-dimensional Minkowski Space Ganchev, Georgi #Mathematics - Differential Geometry#53A35#53B30 We prove that any minimal (maximal) strongly regular surface in the three-dimensional Minkowski space locally admits canonical principal parameters. Using this result, we find a canonical representation of minimal strongly regular time-like surfaces, which makes more precise the Weierstrass representation and shows more precisely the correspondence between these surfaces and holomorphic functions (in the Gauss plane). 
We also find a canonical representation of maximal strongly regular space-like surfaces, which makes more precise the Weierstrass representation and shows more precisely the correspondence between these surfaces and holomorphic functions (in the Lorentz plane). This allows us to describe locally the solutions of the corresponding natural partial differential equations.; Comment: 15 pages ‣ Cubic Algebraic Equations in Gravity Theory, Parametrization with the Weierstrass Function and Non-Arithmetic Theory of Algebraic Equations Dimitrov, Bogdan G. #High Energy Physics - Theory#General Relativity and Quantum Cosmology A cubic algebraic equation for the effective parametrizations of the standard gravitational Lagrangian has been obtained without applying any variational principle. It was suggested that such an equation may find application in gravity theory, brane, string and Randall-Sundrum theories. The obtained algebraic equation was brought by means of a linear-fractional transformation to a parametrizable form, expressed through the elliptic Weierstrass function, which was proved to satisfy the standard parametrizable form, but with $g_{2}$ and $g_{3}$ functions of a complex variable instead of definite complex numbers (as known from the usual arithmetic theory of elliptic functions and curves). The generally divergent (two) infinite sums of the inverse first and second powers of the poles in the complex plane were shown to be convergent in the investigated particular case, and the case of the infinite point of the linear-fractional transformation was investigated. Some relations were found which ensure the parametrization of the cubic equation in its general form with the Weierstrass function.; Comment: v.2; submitted to Journ.Math.Phys.(October 2001); Latex (Sci.Word, amsmath style), 77 pages, no figures, 4 appendixes; Sect.III rewritten for a clearer derivation of the cubic algebraic equation; clarifying comments in Sect.VI and in the Introduction; new Sect.VII added; 2 references corrected; acknowledgments added ‣ The Commutativity of Integrals of Motion for Quantum Spin Chains and Elliptic Functions Identities Dittrich, J.; Inozemtsev, V. I. #Mathematical Physics We prove the commutativity of the first two nontrivial integrals of motion for quantum spin chains with an elliptic form of the exchange interaction. We also show their linear independence for numbers of spins larger than 4. As a byproduct, we obtained several identities between elliptic Weierstrass functions of three and four arguments.; Comment: 13 pages ‣ A primer on elliptic functions with applications in classical mechanics Brizard, Alain J. #Physics - Classical Physics The Jacobi and Weierstrass elliptic functions used to be part of the standard mathematical arsenal of physics students. They appear as solutions of many important problems in classical mechanics: the motion of a planar pendulum (Jacobi), the motion of a force-free asymmetric top (Jacobi), the motion of a spherical pendulum (Weierstrass), and the motion of a heavy symmetric top with one fixed point (Weierstrass). The problem of the planar pendulum, in fact, can be used to construct the general connection between the Jacobi and Weierstrass elliptic functions.
The easy access to mathematical software by physics students suggests that they might reappear as useful tools in the undergraduate curriculum.; Comment: 17 pages, 20 figures ‣ Weierstrass mock modular forms and elliptic curves Alfes, Claudia; Griffin, Michael; Ono, Ken; Rolen, Larry Mock modular forms, which give the theoretical framework for Ramanujan's enigmatic mock theta functions, play many roles in mathematics. We study their role in the context of modular parameterizations of elliptic curves $E/\mathbb{Q}$. We show that mock modular forms which arise from Weierstrass $\zeta$-functions encode the central $L$-values and $L$-derivatives which occur in the Birch and Swinnerton-Dyer Conjecture. By defining a theta lift using a kernel recently studied by H\"ovel, we obtain canonical weight 1/2 harmonic Maass forms whose Fourier coefficients encode the vanishing of these values for the quadratic twists of $E$. We employ results of Bruinier and the third author, which builds on seminal work of Gross, Kohnen, Shimura, Waldspurger, and Zagier. We also obtain $p$-adic formulas for the corresponding weight 2 newform using the action of the Hecke algebra on the Weierstrass mock modular form.; Comment: To appear in Research in Number Theory
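For reference, the objects recurring in the abstracts above are the classical Weierstrass functions attached to a lattice $\Lambda \subset \mathbb{C}$. The following are standard textbook definitions, added here only for orientation and not drawn from any of the abstracts themselves: \[ \wp(z;\Lambda) = \frac{1}{z^2} + \sum_{\omega \in \Lambda \setminus \{0\}} \left( \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \right), \qquad \zeta(z;\Lambda) = \frac{1}{z} + \sum_{\omega \in \Lambda \setminus \{0\}} \left( \frac{1}{z-\omega} + \frac{1}{\omega} + \frac{z}{\omega^2} \right), \] \[ \sigma(z;\Lambda) = z \prod_{\omega \in \Lambda \setminus \{0\}} \left( 1 - \frac{z}{\omega} \right) e^{z/\omega + z^2/(2\omega^2)}, \qquad \text{with } \zeta'(z) = -\wp(z), \quad \frac{\sigma'(z)}{\sigma(z)} = \zeta(z), \quad (\wp')^2 = 4\wp^3 - g_2\,\wp - g_3. \] Here $\wp$ is doubly periodic (elliptic), whereas $\zeta$ and $\sigma$ are only quasi-periodic, which is precisely why completions and higher-genus generalisations of the latter two are the subject of several of the papers listed.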
Rafael Novella1 & Javier Olivera2,3,4 The recent emergence and expansion of non-contributory pension programmes across low- and middle-income countries responds and contributes to a larger attention towards the population of elderly individuals in developing countries. These programmes are intended to reduce poverty in old age by providing monetary transfers in mean-tested schemes. However, little is known about the most salient characteristics of this population, particularly health outcomes and their relationship with socioeconomic demographics. The aim of this paper is to provide evidence about this relationship in the specific case of cognitive functioning. We exploit the baseline sample of the Peru's non-contributory pension programme Pension 65 and find significant relationships between cognitive functioning and retirement, education, nutrition, ethnicity and sex. JEL Classification: J14, J24 Elderly individuals with more cognitive impairments are less autonomous and can represent a major public health problem in the context of ageing societies. Cognitive impairment or dementia is associated with lower quality of life and increased disability and higher health expenditure (Bonsang et al. 2012). It has also been shown that having good cognitive functioning in old age is important for people to make better financial decisions and for preventing larger public spending on healthcare for the elderly (Lei et al. 2012; Bonsang et al. 2012; Banks et al. 2015). Furthermore, elderly individuals in rural areas play an important role in passing on traditions, dialects, customs and memories, and therefore, it is also important to have healthy individuals who can contribute to and preserve the social capital of the community. Due to the generally lower participation in social security and larger credit constraints in developing countries, elderly individuals may face a larger burden than individuals living in developed economies. Elderly individuals in developing countries also tend to keep working at an advanced age or even never retire. To deal with this, many developing countries have implemented non-contributory pension schemes targeted to the elderly poor.Footnote 1 However, not much is known about the relationship between cognitive functioning and retirement and other socioeconomic characteristics for this particular population. In this paper, we aim to improve the understanding of the relationship between individuals' socioeconomic characteristics and cognitive functioning among the elderly poor. In terms of human capital, cognitive functioning may be regarded as a measure of accumulated capital. This capital has a certain depreciation rate which speed can be affected through some cognitive maintenance and repair activities (McFadden 2008). Rohwedder and Willis (2010) have proposed the 'mental retirement' effect, which indicates that individuals not only retire from work but also suffer accelerated cognitive decline due to insufficient cognitive stimulation in retirement (see also Mazzona and Peracchi 2012; Bingley and Martinello 2013; Coe and Zamarro 2011 and Bonsang et al. 2012). This paper analyses the cognitive functioning of old persons living in poverty and addresses the role of important variables such as education, retirement, ethnicity, objective nutritional status in the short and long term and variables at the community level. 
For long-term nutritional status, the analysis employs arm span as a proxy for the quality of nutrition acquired in childhood, which positively affects the development of cognitive ability (Case and Paxson 2008; Guven and Lee 2013, 2015). Therefore, our paper follows a large body of recent research documenting the importance of accounting for parental input and schooling at early ages in the formation of cognitive skills (Todd and Wolpin 2003; Cunha et al. 2006; Cunha et al. 2010; Cunha and Heckman 2007, 2008). For short-term nutritional status, the analysis utilizes individuals' haemoglobin levels. Recent evidence suggests that poor nutritional status is associated with an increase in the risk of dementia (Hong et al. 2013). The issue of potential ceiling effects in the measurement of cognitive functioning is also tackled in this study. For the purposes of the paper, we use Peru's Survey of Health and Wellbeing of the Elderly (ESBAM), which samples elderly individuals living in households officially classified as poor and contains a comprehensive set of biomarkers and socioeconomic variables for the elderly individuals and household information. ESBAM differs from other large-scale surveys examining old age in the sense that this survey has been specially designed to collect information from the population of elderly and poor individuals. The rest of the paper is organised as follows: in Section 2, we describe the data and in Section 3, we present the methods. The results are reported and discussed in Section 4, and a conclusion is provided in Section 5. The Survey of Health and Wellbeing of the Elderly ESBAM (Encuesta de Salud y Bienestar del Adulto Mayor) is a unique survey collected by the National Institute of Statistics of Peru (INEI) between November and December 2012. It includes a detailed questionnaire for 65–80-year-old individuals, which includes information about their socioeconomic position, well-being, beliefs and several subjective and objective health variables. ESBAM also contains socioeconomic questions about the household and its members. The information was collected in face-to-face interviews by INEI's interviewers, while the anthropometrical measures, blood samples and arterial pressure measurements were collected by specialised technicians during the fieldwork. The data was gathered in 12 departmentsFootnote 2 (half of the total in Peru), where the Ministry of Development and Social Inclusion (MIDIS) had already completed the census of socioeconomic variables intended to update its household targeting score system Sistema de Focalización de Hogares (SISFOH). The goal of ESBAM is to serve as a baseline for the programme Pension 65, which is the non-contributory pension scheme implemented in Peru at the end of 2011. The population of the survey are 65–80-year-old individuals living in households classified as poor.Footnote 3 The sampling selection was probabilistic, independent in each department, stratified in rural/urban areas and carried out in two steps. In the first step, the primary sampling units (PSU) were census units in urban areas and villages in rural areas with at least four households living in poverty and with elderly members. The selection of PSU was made by probability proportional to size (PPS) according to the total number of households. In the second step, four households were randomly drawn from each PSU for interview and two for replacements. The initial sample size comprises 4151 individuals who completed the survey questionnaire themselves. 
This was reduced to 3884 individuals for our purposes. We dropped 194 individuals with missing information for some variables, 20 individuals whose mother tongue is a foreign language or an unspecified indigenous language and 53 unemployed people. The sample contains a large number of retirees and working individuals at later ages, which allows us to observe cognitive differentials between working and non-working people in their later years. This is different from data collected in industrialised countries, where cognitive functioning is very difficult to observe in working individuals at advanced ages because most people stop working at statutory retirement ages. In our sample, 1615, 1272 and 997 individuals are 65–69, 70–74 and 75–80 years old, respectively. The cognitive score ESBAM uses a shortened version of the Mini-Mental State Examination (MMSE) (Folstein et al. 1975) to evaluate cognitive functioning. This is similar to the version used in the Survey on Health, Well-Being, and Aging (SABE) implemented during the early 2000s in seven capital cities of Latin America and the Caribbean (Maurer 2010). Given the low literacy rates among elderly Latin American individuals, SABE employed a reduced version of the MMSE in order to minimise the strong bias produced by education on performing the test (Fillenbaum et al. 1988; Herzog and Wallace 1997). This is also relevant for our sample of elderly poor individuals, who report very low levels of education (28.5% are illiterate and only 20.9% have at most completed primary education). The score for cognitive functioning is computed using five questions. The first question is about orientation and asks the day of the week, day of the month, month and year. Each correct answer is given one point. In the second question, the interviewer reads three words that the individual must recall immediately in any order. This question measures immediate memory recall. The respondent is asked for these words again later (the fourth question) in order to measure delayed memory recall. A point is given for each word correctly answered. The third question is a command comprising three actions that the respondent has to complete in order: 'I will give you a piece of paper. Take this in your right hand, fold it in half with both hands and place it on your legs'. One point is given for each correct action. Lastly, a point is added for respondents who are able to duplicate a picture of two intersecting circles, provided that the circles do not cross more than half way. This measurement (drawing) captures the intactness of visual-spatial abilities. The cognitive score is the result of adding up the points obtained for these five questions. Table 1 shows the distribution of points for each type of question. Table 1 Distribution of cognitive score by question The total score for cognition can range from 0 to 14 points. Similar to Lei and colleagues (2012; 2014), we also consider two distinct components of cognitive functioning: episodic memory and mental intactness. The first component is the sum of the two memory scores (0–6 points), and the second is the sum of orientation, command and drawing (0–8 points). The questions concerning retirement and employment status in ESBAM follow conventional questions in household surveys to evaluate whether the individual is working, retired or unemployed.Footnote 4 Retirement is introduced in the analysis as a dummy variable. 
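To make the scoring concrete, the sketch below shows how the 0–14 total and the two sub-scores described above could be assembled from item-level responses. It is only an illustration: the column names are hypothetical, not the actual ESBAM variable names, and each item is assumed to be coded as a 0/1 indicator.

import pandas as pd

def cognitive_scores(items: pd.DataFrame) -> pd.DataFrame:
    """Build the total cognition score (0-14) and its two components from 0/1 item indicators."""
    out = pd.DataFrame(index=items.index)
    # Orientation: day of week, day of month, month, year (0-4 points)
    orientation = items[["ori_day_week", "ori_day_month", "ori_month", "ori_year"]].sum(axis=1)
    # Immediate and delayed recall of three words (0-3 points each)
    immediate = items[["rec_now_1", "rec_now_2", "rec_now_3"]].sum(axis=1)
    delayed = items[["rec_later_1", "rec_later_2", "rec_later_3"]].sum(axis=1)
    # Three-stage paper command (0-3 points) and copying intersecting circles (0-1 point)
    command = items[["cmd_take", "cmd_fold", "cmd_place"]].sum(axis=1)
    drawing = items["draw_circles"]
    out["episodic_memory"] = immediate + delayed                  # 0-6
    out["mental_intactness"] = orientation + command + drawing    # 0-8
    out["cognition"] = out["episodic_memory"] + out["mental_intactness"]  # 0-14
    return out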
The analysis includes age, age squared and dummies of age, gender, literacy, area (urban or rural) and ethnicity, which is assessed by the mother tongue of the individual (Quechua, Aymara or Spanish). The inclusion of ethnicity is aimed at accounting for cumulative deprivation experienced by indigenous groups in many dimensions, which might affect cognition and go beyond education and health.Footnote 5 In addition, we include haemoglobin and arm span measurements as variables assessing short- and long-term nutritional status. Haemoglobin is measured from a blood sample taken from each respondent and corrected for the altitude of the district where the individual lives, in accordance with WHO norms (WHO 2011). This variable controls for the effect of anaemia, which has been linked to an increase in the risk of dementia through low oxygen levels affecting brain functions and damaging neurons, and hence reducing memory and thinking abilities (Hong et al. 2013). Anaemia can affect an important number of the elderly, because old age is associated with diet monotony, less intestinal mobility and lower intake of energy. According to WHO norms, haemoglobin levels should be between approximately 120 and 160 g/L. In our sample, the mean for haemoglobin is 132. 21.6% of respondents have less than 120 g/L, and 5.4% have more than 160 g/L. Cognitive performance in later age is positively related with nutrition quality acquired in childhood. Case and Paxson (2008) find a strong correlation between height in early life and adulthood and indicate that an adult's height can be a proxy for the quality of nutrition and health in childhood. Guven and Lee (2013, 2015) and Lei and colleagues (2012; 2014) also use respondents' height and find that better nutrition in childhood is positively associated with the development of cognitive ability. Height is not measured in ESBAM, because of well-known limitations concerning this measurement for elderly individuals (for example height loss and difficulty in standing straight). Instead, arm span is used, as this is considered a better measurement for elderly individuals and is highly correlated with height (Kwok and Whitelaw 1991; Kwok et al. 2002; De Lucia et al. 2002). The analysis also includes variables at the level of the district where the individuals live (the sample comprises individuals living in 422 districts). As stressed in Lei et al. (2014), communities are institutions that can have important effects on their members, particularly on health outcomes. There is a large disparity in the level of socioeconomic development and infrastructure among Peruvian localities, and therefore, it is important to control for this heterogeneity when analysing health outcomes. The variables for the district are taken from the National Institute of Statistics and the 2012 National Registry of Municipalities (Registro Nacional de Municipalidades, RENAMU), which is a census of municipalities. We include the altitude of the district's capital and the standard deviation from the district's mean altitude of the altitude of village, as a measurement of elevation and terrain ruggedness, because these are the principal determinants of climate and crop choice in Peru (Dell 2010). With regard to infrastructure, we include the number of social assistance centres, hospitals and social centres for elderly individuals in the district, whether 50% or more of the district's capital households are covered by electricity and water networks and whether the district has a sewage system. 
Lastly, the official monetary poverty rate for the district in the period 2012–2013 is also included (INEI 2013). Table 2 shows the summary statistics and unconditional means tests by retirement status for all the variables used in the analysis. Retired individuals have lower cognitive functioning than working individuals and are more likely to be female, older, illiterate, non-indigenous and living in urban areas, and have lower levels of haemoglobin and a shorter arm span. Table 2 Summary statistics The score for cognitive functioning shows a left-skewed distribution and can range from 0 to 14 points. As indicated in Maurer (2010, 2011)—who also uses a reduced version of the MMSE with 0–19 points in samples of Latin American cities—the score can suffer from ceiling effects because it is right-censored for some individuals. We address this issue with a Tobit regression model. This model assumes that the dependent variable is a latent variable \( C^{\ast} \) censored at \( \overline{C} \) (in our case at 14). The data reports \( \overline{C} \) when \( C^{\ast} \ge \overline{C} \), but the Tobit model can account for this issue with the following specification: $$ C^{\ast} = \alpha + X\delta + \varepsilon, \qquad \varepsilon \sim N(0,\sigma), $$ $$ C = \begin{cases} C^{\ast} & \text{if } C^{\ast} < \overline{C} \\ \overline{C} & \text{if } C^{\ast} \ge \overline{C} \end{cases} $$ \( C^{\ast} \) is the latent cognition score, X is a vector of variables at the individual and district level and ε is a normally distributed error term. Vector X includes retirement and other covariates such as short- and long-term nutritional status. The models are estimated by maximum likelihood, the standard errors are robust and clustered at the level of the department and all models include dummy variables for the department of the respondent. Given that ESBAM is a cross-sectional dataset, we cannot control for individual heterogeneity or address the potential reverse causality between cognition and retirement with convincing instruments, as has been done in other studies focused on developed countries where social security participation is widespread (for example Rohwedder and Willis 2010).Footnote 6 Differences in retirement laws are generally used as instruments in those studies, but in developing countries such as Peru, social security coverage for poor individuals is almost non-existent. For all these reasons, we should regard our results as associations instead of causal effects.
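As a concrete illustration of the estimation just described, the following is a minimal sketch of a maximum-likelihood Tobit with right censoring at 14, written directly from the specification above. It is not the authors' estimation code (which additionally clusters standard errors by department and includes department dummies); the design matrix X is assumed to already contain a constant and the covariates, and y is the observed 0–14 score as a NumPy array.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, c=14.0):
    """Negative log-likelihood of a Tobit model with right censoring at c."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)          # keep sigma positive
    xb = X @ beta
    censored = y >= c
    ll = np.empty_like(y, dtype=float)
    # Uncensored observations: normal density of the residual
    ll[~censored] = norm.logpdf(y[~censored], loc=xb[~censored], scale=sigma)
    # Censored observations: probability that the latent score is at least c
    ll[censored] = norm.logsf(c, loc=xb[censored], scale=sigma)
    return -ll.sum()

# Usage sketch:
# start = np.zeros(X.shape[1] + 1)
# fit = minimize(tobit_negloglik, start, args=(X, y), method="BFGS")
# beta_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])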
This section describes the results of two groups of estimations: (i) models where the dependent variable is the total cognition score (Table 3) and (ii) models for each component of cognition (Table 4). Table 3 Tobit model estimations of cognition in old age Table 4 Tobit estimates for dimensions of cognition in old age As expected, the dummy 'retirement' is negatively associated with the cognitive score in every specification of Table 3. The coefficient of retirement indicates that being retired is associated with a loss of 0.41 points in the cognition score before including any regional-level variable (see column 7). Given that the mean for cognition is 11.7 points, being retired is associated with a reduction in cognition of approximately 3.5% on average. One extra year of age is associated with a reduction of approximately 0.1 points in the cognition score. We also introduce age in quadratic form, but the coefficients of age are non-significant. This is possibly because the range of age in our sample (65 to 80) is not large enough in comparison with other studies that find a significant coefficient for polynomials of age. For instance, Bonsang et al. (2012) use individuals aged 50+ and Lei and colleagues (2014) use individuals aged 45+. Adding the cubic form of age makes the coefficients of age statistically significant, but the interpretation is more difficult. Substituting age with dummies of age brackets also indicates a negative relationship between age and cognition: these coefficients show a sharp decrease of cognition with age in the first age groups, and then a smoother fall. Education is a very important predictor of cognitive functioning. We find that being illiterate is associated with 1.7 fewer points in the cognition score, which is equivalent to a 14.5% reduction of the score on average. Having an indigenous mother tongue is negatively associated with the cognitive score (with the exception of the last model specification, which employs district fixed effects). This negative relationship is likely to reflect long-term disparities in access to education and other public services for indigenous populations in Peru. In our preferred model (column 8 of Table 3), cognition is reduced by 0.31 points when the individual's mother tongue is Quechua instead of Spanish, and it falls by over three times more (0.95 points) when the mother tongue is Aymara instead of Spanish. The sharp difference between these two indigenous groups in cognitive performance is puzzling, although it could be the result of more severe cumulative deprivations suffered by Aymara individuals than by their Quechua counterparts.Footnote 7 With regard to the effects of nutritional status, we find that both short-term (haemoglobin) and long-term (arm span) nutritional status have a statistically significant influence on cognitive functioning. In model 8, the score for cognition increases by 0.12 points for each extra 10 cm of arm span, while it increases by 0.06 points for each additional 10 g/L of haemoglobin. These results lend support to the suggestion that the quality of nutrition acquired during childhood and the quality of current nutrition both have an impact on cognition in old age, even in a selected sample of poor elderly individuals. We also find that being male is associated with reduced cognitive functioning (in model 8), although the statistical significance is at the 10% level. This result is similar to that of Guven and Lee (2015), who also find a negative association between being male and cognitive functioning (for verbal fluency, immediate and delayed recall and a summary cognitive score, but not for numeracy) in a sample of elderly Europeans. In addition, cognition increases by 0.25 points for individuals living in urban areas, which is about half the size of the coefficient of retirement. This suggests that disparities between urban and rural areas (for example, labour market conditions and access to services) have a considerable impact on cognitive functioning levels. Model 9 in Table 3 shows the results after including variables for the district where the individual lives. The coefficients do not change drastically after this inclusion. By contrast, model 10 includes district fixed effects and some results change.
For example, gender is no longer significant and having Quechua as a mother tongue is positively associated with cognition, perhaps reflecting correlations between the district fixed effects and the concentration of indigenous individuals in some districts. The association between cognition and Aymara mother tongue is also reduced. The same occurs with the associations of education and retirement. Given that most of the coefficients keep their direction and their magnitudes do not change drastically, it seems that unobserved heterogeneity across the districts in our sample plays a marginal role. As an additional analysis, we estimate the predictors of the two dimensions of cognition described in Section 2.2. Table 4 shows that retirement is associated with a decrease in both memory and mental intactness. Being female is positively associated with memory, but negatively associated with mental intactness. Age is a more important (negative) predictor for memory than for mental intactness. Retirement is associated with a reduction in the scores for memory and mental intactness of 0.32 and 0.33 points, respectively, which represents a fall of about 6.2 and 5.0% for the average individual. The negative association of being illiterate is much larger for mental intactness than for memory. According to the corresponding coefficients of Table 4 and the means of the cognitive dimensions, illiteracy is associated with a reduction in the scores for memory and mental intactness of 11 and 23%, respectively. In addition, long-term nutrition quality (arm span) does not matter for memory, but it does for mental intactness. Current nutrition quality (haemoglobin) has a statistically significant effect on both types of cognitive measurement. There is limited availability of survey data measuring cognition at later ages and other health outcomes in developing countries, and even more so among the poor elderly population. The Peruvian survey ESBAM, which focuses on the elderly poor, offers a rare opportunity to study the relation between retirement and other socioeconomic characteristics and cognitive functioning among the poor. The recent and growing popularity of non-contributory pension schemes targeting the elderly poor in low- and middle-income countries, most prominently in Latin America, represents a shift in the strategy to deal with social protection and poverty in old age. However, not much is known about the most salient characteristics of elderly poor persons, in particular about their cognitive functioning and other health outcomes. Our study provides evidence about this. Recall that cognitive impairment or dementia is associated with a lower quality of life, more disability and higher health expenditure, and can compromise the resources of other family members. We find that retirement is associated with a loss of half a point in the score for cognition, which means that cognitive functioning decreases by approximately 4.4% on average. This result is stable across different specifications. In addition, both short-term (haemoglobin) and long-term (arm span) nutritional status have a statistically significant relationship with cognition. For each extra 10 cm of arm span, the score for cognition increases by 0.13 points, while for each additional 10 g/L of haemoglobin, the score increases by 0.06 points. These results, even in a selected sample of poor elderly individuals, are in line with other empirical results in the literature, which argue for the positive impact of high-quality nutrition on cognition in later life.
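The percentage figures quoted above and throughout the results are simply coefficients expressed relative to the sample mean of the cognition score (11.7 points). A minimal check of that arithmetic, using values as reported in the text, could look like this:

# Coefficients as quoted in the text; the conclusion quotes roughly half a point for retirement.
mean_score = 11.7
effects = {"retirement (model without regional variables)": -0.41,
           "illiteracy": -1.7,
           "arm span, +10 cm": 0.12,
           "haemoglobin, +10 g/L": 0.06}
for name, coef in effects.items():
    print(f"{name}: {coef:+.2f} points = {100 * coef / mean_score:+.1f}% of the average score")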
We also show that education—in our sample, the variable is literacy—has an important protective effect on cognition. Being illiterate is associated with a drop of about 15% in the cognitive score. The inclusion of variables at the district level does not produce any significant effect, and controlling for district fixed effects produces only small changes in the described results. It seems that the covariates included in our models capture a great deal of the individual variability. It is expected that the recent emergence of non-contributory pension schemes aimed at alleviating poverty in old age will induce a significant number of elderly individuals to enter into retirement. De Carvalho-Filho (2008) estimates that about 40% of recipients retired completely on receiving a non-contributory pension in rural Brazil, with the rest of the recipients drastically reducing their working hours. Latin America in particular has experienced a boom in new non-contributory pension schemes during the last decade (Olivera and Zuluaga 2014). Nowadays, about 19 million individuals are recipients of a non-contributory pension in Latin America, which represents 32% of the population aged 60 and over in this region. This figure is computed with data extracted on 15 May 2016 from http://www.pension-watch.net/ and the United Nations' World Population Prospects 2015 revision. Most of the data refers to the years 2012 and 2013. The department is the first political and territorial division in Peru, the second one is the province and the third one is the district. Some districts, particularly in rural areas, are further divided into villages (centros poblados). Pension 65 targets individuals aged 65 and over who are not covered by social security and live in a household officially classified as extremely poor. A household can be classified as non-poor, non-extreme poor and extreme poor according to the government's household targeting system, SISFOH. ESBAM includes the same questions to evaluate retirement that are used in the Peruvian National Household Survey (ENAHO), which is the leading survey in Peru to study living conditions. For example, Dell (2010) illustrates the long-term effects of mandatory mining work in Peru's highlands on the current health of indigenous people. In the particular case of the generation in our sample, other severe limitations suffered are that the illiterate were not allowed to vote in political elections before 1980 and that the Agrarian Reform Bill (Reforma Agraria) was only implemented during the early 1970s. This major redistribution of land represented the end of the Haciendas system, in which an impoverished labour force (of peasants) was attached to rural states. In order to deal with the potential endogeneity (e.g. reverse causality) of retirement and cognitive functioning, we also estimated an Instrumental Variable (IV) model. As instruments for retirement, we use the number of months since Pension 65 has been operating in the district of the respondent and the total number of Pension 65 beneficiaries in the district in October–November 2012, just before the ESBAM data was collected. Both variables are obtained from administrative data of Pension 65. We exploit the variation in the timing of implementation of the programme across districts and argue that observing a stronger presence of Pension 65 in the district increases the expectation of an individual to receive the transfer. 
Results from this analysis, at the first stage, show that a stronger presence of the Pension 65 programme in the district is statistically associated with an increase in the probability of retirement. At the second stage, the results show that the error terms of the reduced form and main equation are uncorrelated and there is no endogeneity problem. This suggests that applying a Tobit model is appropriate and brings unbiased estimates. The interviewers for ESBAM conducted the survey in the language of the respondent if needed. In order to address the issue of potential interviewer bias, we added dummy variables for the interviewers and ran the model 8 of Table 3 again. The coefficient estimates for Quechua and Aymara did not change notably (−0.25 and −0.88, respectively). Banks J, Crawford R, Tetlow G. Annuity choices and income drawdown: evidence from the decumulation phase of defined contribution pensions in England. J Pension Econ Financ. 2015;14(4):412–38. Bingley P, Martinello A. Mental retirement and schooling. Eur Econ Rev. 2013;63:292–8. Bonsang E, Adam S, Perelman S. Does retirement affect cognitive functioning? J Health Econ. 2012;31:490–501. Case A, Paxson C. Height, health and cognitive function at older ages". Am Econ Rev Pap Proc. 2008;98:463–7. Coe NB, Zamarro G. Retirement effects on health in Europe. J Health Econ. 2011;30:77–86. Cunha F, Heckman J. The technology of skill formation. Am Econ Rev. 2007;97:31–47. Cunha F, Heckman J. Formulating, identifying and estimating the technology of cognitive and noncognitive skill formation. J Hum Resour. 2008;43:738–82. Cunha F, Heckman JJ, Lochner L, Masterov DV. Interpreting the evidence on life cycle skill formation. In: Hanushek EA, Welch F, editors. Handbook of the economics of education. Amsterdam: North-Holland; 2006. p. 697–812. Cunha F, Heckman JJ, Schennach SM. Supplement to 'estimating the technology of cognitive and noncognitive skill formation: appendix'. Econometrica. 2010;78(3):883–931. De Carvalho-Filho IE. Old-age benefits and retirement decisions of rural elderly in Brazil. J Dev Econ. 2008;86(1):129–46. De Lucia E, Lemma F, Tesfaye F, Demisse T, Ismail S. The use of armspan measurement to assess the nutritional status of adults in four Ethiopian ethnic groups. Eur J Clin Nutr. 2002;56(2):91–5. Dell M. The persistent effects of Peru's mining mita. Econometrica. 2010;78(6):1863–903. Fillenbaum GG, Hughes DC, Heyman A, George LK, Blazer DG. Relationship of health and demographic characteristics to Mini-Mental State Examination score among community residents. Psychol Med. 1988;18:719–26. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state": a practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189–98. Guven C, Lee WS. Height and cognitive function at older ages: is height a useful summary measure of early childhood experiences? Health Econ. 2013;22:224–33. Guven C, Lee WS. Height, ageing and cognitive abilities across Europe. Econ Hum Biol. 2015;16:16–29. Herzog AR, Wallace RB. Measures of cognitive functioning in the AHEAD study. J Gerontol Series B. 1997;52B(Special Issue):37–48. Hong CH, Falvey C, Harris TB, Simonsick EM, Satterfield S, Ferrucci L, Metti AL, Patel KV, Yaffe K. Anemia and risk of dementia in older adults. Findings from the Health ABC study. Neurology. 2013;81(6):528–33. INEI. Mapa de Pobreza Provincial y Distrital 2013. Lima, Peru: Instituto Nacional de Estadísticas e Informática; 2013. Kwok T, Whitelaw MN. 
The use of armspan in nutritional assessment of the elderly. J Am Geriatr Soc. 1991;39(5):492–6. Kwok T, Lau E, Woo J. The prediction of height by armspan in older Chinese people. Ann Hum Biol. 2002;29(6):649–56. Lei X, Hu Y, McArdle JJ, Smith JP, Zhao Y. Gender differences in cognition among older adults in China. J Hum Resour. 2012;47(4):951–71. Lei X, Smith JP, Sun X, Zhao Y. Gender differences in cognition in China and reasons for change over time: evidence from CHARLS. J Econ Ageing. 2014;4:6–55. Maurer J. Height, education and cognitive function at older ages: international evidence from Latin America and the Caribbean. Econ Hum Biol. 2010;8:168–76. Maurer J. Education and male-female differences in later-life cognition: international evidence from Latin America and the Caribbean. Demography. 2011;48:915–30. Mazzonna F, Peracchi F. Aging, cognitive abilities, and retirement. Eur Econ Rev. 2012;56:691–710. McFadden D. Human capital accumulation and depreciation. Rev Agric Econ. 2008;30:379–85. Olivera J, Zuluaga B. The ex-ante effects of non-contributory pensions in Colombia and Peru. J Int Dev. 2014;26(7):949–73. Rohwedder S, Willis RJ. Mental retirement. J Econ Perspect. 2010;24(1):1–20. Todd P, Wolpin KI. On the Specification and Estimation of the Production Function for Cognitive Achievement. Econ. J. 2003;113:F3–F33. World Health Organization. Haemoglobin concentrations for the diagnosis of anaemia and assessment of severity. Vitamin and Mineral Nutrition Information System. Geneva: WHO/NMH/NHD/MNM/11.1; 2011. We thank the seminar participants at the University of Luxembourg, University of Leuven, Inter-American Development, and the conference participants at the 28th European Society for Population Economics (ESPE) and at the Peruvian Economic Association conference for helpful comments. There are not sources of funding for the research that need to be declared for this paper. Inter-American Development Bank (IDB), Washington, DC, USA Rafael Novella Luxembourg Institute of Socio-Economic Research (LISER), Esch-sur-Alzette, Luxembourg Javier Olivera Department of Economics, KU Leuven, Leuven, Belgium Department of Economics, Pontificia Universidad Catolica del Peru, Lima, Peru Search for Rafael Novella in: Search for Javier Olivera in: Correspondence to Javier Olivera. Novella, R., Olivera, J. Cognitive functioning among poor elderly persons: evidence from Peru. IZA J Develop Migration 7, 19 (2017) doi:10.1186/s40176-017-0103-5 Cognitive functioning Old-age poverty
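As an aside on the instrumental-variables check described in footnote 6 above (instruments: months since Pension 65 began operating in the district and the number of beneficiaries in the district), the two-stage logic can be illustrated with a by-hand linear 2SLS. This is only a sketch of the general idea, not the authors' procedure, and every variable name here is hypothetical.

import numpy as np

def two_stage_least_squares(y, X_exog, d, Z):
    """Linear 2SLS with one endogenous regressor d (the retirement dummy).
    X_exog: exogenous covariates; Z: excluded instruments (district-level programme exposure)."""
    n = len(y)
    const = np.ones((n, 1))
    # First stage: regress retirement on the instruments and the exogenous covariates
    W1 = np.hstack([const, X_exog, Z])
    d_hat = W1 @ np.linalg.lstsq(W1, d, rcond=None)[0]
    # Second stage: regress cognition on fitted retirement and the exogenous covariates
    W2 = np.hstack([const, X_exog, d_hat.reshape(-1, 1)])
    coefs, *_ = np.linalg.lstsq(W2, y, rcond=None)
    return coefs  # the last element is the 2SLS coefficient on retirement

# Note: second-stage standard errors from a plain OLS on the fitted values are not valid and
# would need the usual 2SLS correction; the paper itself reports that the exogeneity test did
# not reject, so it retains the Tobit estimates.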
Topics by WorldWideScience.org Sample records for colimadores micro multi-laminas Dosimetry of the stereotactic radiosurgery with linear accelerators equipped with micro multi-blades collimators; Dosimetria dos sistemas de radiocirurgia estereotaxica com aceleradores lineares equipados com colimadores micro multi-laminas Energy Technology Data Exchange (ETDEWEB) Vieira, Andre Mozart de Miranda In this work, absorbed dose to water produced by the radiation beam of a clinical linear accelerator - CLINAC 600C{sup TM} (Varian), with a photon beam of 6 MV, were evaluated both theoretically and experimentally. This determination includes square and circular field configurations, the last one obtained with a micro multi leaf collimator - mMLC m3{sup TM} (Brain Lab). Theoretical evaluation was performed throughout Monte Carlo method. Experimental measurements of Percentage Depth Dose - PDD and derived Tissue Maximum Ratio - TMR curves from CLINAC 600C were validated by comparison with reference values as well as with measurements using different detectors. The results indicate local differences smaller than 5% and average differences smaller than 1,5% for each evaluated field, if they are compared to the previous commissioning values (made in 1999) and to the values of literature. Comparisons of ionization chamber and diode result in an average local difference of -0,6% for PDD measurements, and within 1% for lateral dose profiles, at depth, in the flat region. Diode provides measurements with better spatial resolution. Current output factors of open fields agree with reference values within 1,03% of discrepancy level. Current absorbed dose distributions in water are, now, considered reference values and allow characterization of this CLINAC for patient dose calculation. The photon spectra resulting from simulations with PENELOPE and MCNP codes agree approximately in 80% of the sampled points, in what average energies of (1,6 {+-} 0,3)MeV, with MCNP, and of (1,72 {+-} 0,08)MeV, with PENELOPE, are coincident. The created simple source model of the CLINAC 600C, using the PENELOPE code, allows one to calculate dose distributions in water, for open fields, with discrepancies of the order of {+-} 1,0% in dose and of {+-} 0,1 cm in position, if they are compared to experimental measurements. These values met the initial proposed criteria to validate the simulation Modeling of a collimator micro-multilayers in the Pinnacle planning system; Modelado de un colimador micromultilaminas en el sistema de planificacion Pinnacle Garcia Hernandez, T.; Brualla Gonzalez, L.; Vicedo Gonzalez, A.; Rosello Ferrando, J.; Granero Cabanero, D. To model and validate, in the system of planning and calculation Pinnacle, a micro-multilayers collimator mounted on an accelerator Siemens Primus. The objective is to take advantage of the improvements offered by the algorithm of convolution of cone collapsed and the capacity of the system of modeling the rounded end of the blades. (Author) The Design and Performance of a Large High resolution Focusing Collimator; Etude d'un Grand Collimateur a Focalisation et Fort Pouvoir de Resolution: Resultats; Konstruktsiya i kharakteristika krupnogo fokusiruyushchego kollimatora s vysokoj stepen'yu razresheniya; Diseno y Funcionamiento de un Gran Colimador Enfocado de Alto Poder de Resolucion Harris, C. C.; Bell, P. R.; Satterfield, M. M.; Ross, D. A.; Jordan, J. C. [Oak Ridge National Laboratory, TN (United States) solide, en vue d'augmenter l'efficacite de comptage. 
Des scintigrammes de l'organisme entier ont ete obtenus au moyen de '{sup 198}Au et de {sup 131}I. Ces scintigrammes montrent que le collimateur de 14 cm a une plus grande efficacite et une plus grande resolution. Sur un scintigramme ({sup 198}Au) de chien obtenu au moyen du collimateur de 16,5 cm, on a discerne nettement une zone active d'un diametre apparent inferieur a 6 mm; l'activite n'etait que cinq fois superieure au bruit de fond du aux tissus, mais la zone active etait nettement separee par un espace d 0,6 cm d'une source environ cinquante fois plus intense ayant un diametre de 1,8 cm. (author) [Spanish] Los 'colimadores enfocados' permiten utilizar detectores de grandes dimensiones que dan un indice de recuento mayor sin perdida del poder de resolucion espacial. Como estos colimadores tienen muchos canales dirigidos hacia un mismo punto se obtiene con ellos una especie de efecto 'de enfoque'. Los tabiques de esc' canales son necesariamente delgados y, por consiguiente, le' rayos gamma los atraviesan en detrimento de la resolucion espacial. A fin de reducir esa penetracion se han empleado, para detectores de 3 pulg de diametro, colimadores de wolframio y oro, de preferencia a los de plomo. Si se reduce la penetracion empleando un colimador de plomo mas largo se malogra la finalidad perseguida ya que el indice de recuento disminuye, a no ser que se emplee tambien un detector de mayores dimensiones. Los autores han disenado y construido un colimador de plomo de 91 canales para emplearlo con un detector de Nal(Tl) de 5,25 pulg (13,5 cm) por 3 pulg (7,5 cm). El foco esta situado a 9,5 pulg (24 cm) del detector, y el angulo solido de admision es igual al de los colimadores enfocados que se emplean corrientemente con detectores de 3 pulg. La longitud maxima es de 6,5 pulg (16 ,5 cm); la normal es de 5,5 pulg (14 cm). Para estas longitudes, los 'circulos de resolucion' opticos son, respectivamente, de 0.476 pulg (1 A Depth-Focusing Collimator for the Investigation of the Brain Cortex; Collimateur a Focalisation Profonde pour l'Exploration de la Substance Corticale du Cerveau; Kollimator s glubinnoj fokusirovkoj dlya''issledovaniya kory golovnogo mozga; Colimador de Enfoque Profundo para Estudios de la Corteza Cerebral Glasswestern, H. I. [Regional Hospital Board, Glasgow (United Kingdom) mesurer, a l'aide de {sup 133}Xe, des flux sanguins localises dans la substance corticale du cerveau. Toutefois, l'appareil permet aussi de detecter, avec des produits chimiques marques par {sup 125}l, des tumeurs et des hemorragies de la substance corticale. (author) [Spanish] El autor describe un detector que consiste en un cristal de yoduro de sodio, de 5 pulg de diametro y i de pulg de espesor, y un colimador de enfoque profundo. En principio, el colimador fue diseflado para el estudio de la circulacion sanguinea en la corteza cerebral mediante el {sup 133}Xe, pero puede utilizarse tambien con otros emisores de rayos gamma blandos, por ejemplo, el {sup 125}I. El colimador, de plomo, tiene Inverted-Question-Mark de pulg de espesor y es de tipo multicanal. El foco esta situado a 1,75 cm de la cara anterior. La respuesta en aire a una fuente puntiforme que se mueva a lo largo del eje central del colimador decrece hasta el {sup 125}I del valor maximo a {+-} 0,75 cm del foco. La respuesta en tejidos para campos alejados es considerablemente mejor que en aire debido a que los tejidos atenuan mucho la radiacion gamma blanda. 
El autor describe la respuesta del colimador a una fuente puntiforme situada en el aire y dentro de un craneo. El colimador se ha empleado para medir, con ayuda de {sup 133}Xe, la circulacion sanguinea en la corteza cerebral; pero mediante productos quimicos marcados con {sup 125}I, permite tambien detectar hemorragias y tumores corticales. (author) [Russian] Opisyvaetsja detektor , sostojashhij iz kristalla iodistogo natrija dimetrom 12,5 sm i tolshhinoj 6,2 mm kollimatora s glubinnoj fokusirovkoj. Kollimator pervonachal'no prednaznachalsja dlja issledovanija krovoobrashhenija v kore golovnogo m o zga s pomoshh'ju ks en on a -133 , no mozhno takzhe ispol'zovat' i drugie mjagkie gamma-izluchajushhie i zo topy, naprimer j o d -125. Kollimator so svincovym korpusom tolshhinoj 6m m javljaetsja mnogokanal'nym. Fokus nahoditsja na rasstojanii 1,75 sm ot Micro Engineering DEFF Research Database (Denmark) Alting, Leo; Kimura, F.; Hansen, Hans Nørgaard The paper addresses the questions of how micro products are designed and how they are manufactured. Definitions of micro products and micro engineering are discussed and the presentation is aimed at describing typical issues, possibilities and tools regarding design of micro products. The implica......The paper addresses the questions of how micro products are designed and how they are manufactured. Definitions of micro products and micro engineering are discussed and the presentation is aimed at describing typical issues, possibilities and tools regarding design of micro products... Micro Vision Ohba, Kohtaro; Ohara, Kenichi In the field of the micro vision, there are few researches compared with macro environment. However, applying to the study result for macro computer vision technique, you can measure and observe the micro environment. Moreover, based on the effects of micro environment, it is possible to discovery the new theories and new techniques. Micro Manufacturing Hansen, Hans Nørgaard Manufacturing deals with systems that include products, processes, materials and production systems. These systems have functional requirements, constraints, design parameters and process variables. They must be decomposed in a systematic manner to achieve the best possible system performance....... If a micro manufacturing system isn't designed rationally and correctly, it will be high-cost, unreliable, and not robust. For micro products and systems it is a continuously increasing challenge to create the operational basis for an industrial production. As the products through product development...... processes are made applicable to a large number of customers, the pressure in regard to developing production technologies that make it possible to produce the products at a reasonable price and in large numbers is growing. The micro/nano manufacturing programme at the Department of Manufacturing... Micro Programming Spanjersberg , Herman International audience; In the 1970s a need arose to perform special arithmetic operations on minicomputers much more quickly than had been possible in the past. This paper tells the story of why micro programming was needed for special arithmetic operations on mini computers in the 1970s and how it was implemented. The paper tells how the laboratory in which the first experiment took place had a PDP-9 minicomputer from Digital Equipment Corporation and how the author, with several colleagues... MicroED data collection and processing Hattne, Johan; Reyes, Francis E.; Nannenga, Brent L.; Shi, Dan; Cruz, M. 
Jason de la [Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147 (United States); Leslie, Andrew G. W. [Medical Research Council Laboratory of Molecular Biology, Cambridge (United Kingdom); Gonen, Tamir, E-mail: [email protected] [Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147 (United States) The collection and processing of MicroED data are presented. MicroED, a method at the intersection of X-ray crystallography and electron cryo-microscopy, has rapidly progressed by exploiting advances in both fields and has already been successfully employed to determine the atomic structures of several proteins from sub-micron-sized, three-dimensional crystals. A major limiting factor in X-ray crystallography is the requirement for large and well ordered crystals. By permitting electron diffraction patterns to be collected from much smaller crystals, or even single well ordered domains of large crystals composed of several small mosaic blocks, MicroED has the potential to overcome the limiting size requirement and enable structural studies on difficult-to-crystallize samples. This communication details the steps for sample preparation, data collection and reduction necessary to obtain refined, high-resolution, three-dimensional models by MicroED, and presents some of its unique challenges. Micro club CERN Multimedia Opération NEMO  Pour finir en beauté les activités spéciales que le CMC a réalisé pendant cette année 2014, pour commémorer le 60ème anniversaire du CERN, et le 30ème du Micro Club, l' Opération NEMO aura cette année un caractère très particulier. Nous allons proposer 6 fabricants de premier ordre qui offriront chacun deux ou trois produits à des prix exceptionnels. L'opération débute le lundi 17 novembre 2014. Elle se poursuivra jusqu'au samedi 6 décembre inclus. Les délais de livraison seront de deux à trois semaines, selon les fabricants. Donc les commandes faites la dernière semaine, du 1 au 6 décembre, risquent d'arriver qu'au début du mois de janvier 2015. Liste de fabricants part... Jeudi 18 septembre 2014 à 18h30 au Bât. 567 R-029 Le CERN MICRO CLUB organise un Atelier sur la sécurité informatique. La Cyber-sécurité : Ce qui se passe vraiment, comment ne pas en être victime ! Orateur : Sebastian Lopienski Adjoint au Computer Security Officer du Département IT. Sujet : Cet exposé vous présentera les modes de sécurité actuels et les problèmes touchants les applications logicielles des ordinateurs, les réseaux ainsi que leurs utilisateurs. Cela inclus des informations sur les nouveaux types de vulnérabilité, les vecteurs d'attaque récents et une vue d'ensemble sur le monde de la cyber-sécurité en 2014. Biographie : Sebastian Lopienski travaille au CERN depuis 2001. Il est actuellement adjoint au Computer Security Officer et s'occupe de la protection de... Micro robot bible Yoon, Jin Yeong This book deals with micro robot, which tells of summary of robots like entertainment robots and definition of robots, introduction of micro mouse about history, composition and rules, summary of micro controller with its history, appearance and composition, introduction of stepping motor about types, structure, basic characteristics, and driving ways, summary of sensor section, power, understanding of 80C196KC micro controller, basic driving program searching a maze algorithm, smooth turn and making of tracer line. 
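Several of the micro-mouse and micro-robot records above mention maze-search methods. As a generic illustration of the kind of algorithm involved (not taken from any of the books themselves), the flood-fill distance map commonly used by micro-mouse robots can be computed with a breadth-first search; the cell and wall representations here are assumptions of this sketch.

from collections import deque

def flood_fill_distances(walls, goal):
    """walls[r][c] is the set of blocked directions ('N', 'S', 'E', 'W') recorded for that cell
    (assumed symmetric); returns the number of steps from every cell to the goal cell."""
    rows, cols = len(walls), len(walls[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    moves = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
    while queue:
        r, c = queue.popleft()
        for direction, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and direction not in walls[r][c] and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

# A micro-mouse would then repeatedly step into a neighbouring cell with a smaller distance
# value, re-running the flood fill whenever a new wall is discovered.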
Micro intelligence robot Jeon, Yon Ho This book gives descriptions of micro robot about conception of robots and micro robot, match rules of conference of micro robots, search methods of mazes, and future and prospect of robots. It also explains making and design of 8 beat robot like making technique, software, sensor board circuit, and stepping motor catalog, speedy 3, Mr. Black and Mr. White, making and design of 16 beat robot, such as micro robot artist, Jerry 2 and magic art of shortening distances algorithm of robot simulation. Micro rapid prototyping system for micro components Li Xiaochun; Choi Hongseok; Yang Yong Similarities between silicon-based micro-electro-mechanical systems (MEMS) and Shape Deposition Manufacturing (SDM) processes are obvious: both integrate additive and subtractive processes and use part and sacrificial materials to obtain functional structures. These MEMS techniques are two-dimensional (2-D) processes for a limited number of materials while SDM enables the building of parts that have traditionally been impossible to fabricate because of their complex shapes or of their variety in materials. This work presents initial results on the development of a micro rapid prototyping system that adapts SDM methodology to micro-fabrication. This system is designed to incorporate microdeposition and laser micromachining. In the hope of obtaining a precise microdeposition, an ultrasonic-based micro powder-feeding mechanism was developed in order to form thin patterns of dry powders that can be cladded or sintered onto a substrate by a micro-sized laser beam. Furthermore, experimental results on laser micromachining using a laser beam with a wavelength of 355 nm are also presented. After further improvement, the developed micro manufacturing system could take computer-aided design (CAD) output to reproduce 3-D heterogeneous micro-components from a wide selection of materials Micro-turbines Tashevski, Done In this paper a principle of micro-turbines operation, type of micro-turbines and their characteristics is presented. It is shown their usage in cogeneration and three generation application with the characteristics, the influence of more factors on micro-turbines operation as well as the possibility for application in Macedonia. The paper is result of the author's participation in the training program 'Micro-turbine technology' in Florida, USA. The characteristics of different types micro-turbines by several world producers are shown, with accent on US micro-turbines producers (Capstone, Elliott). By using the gathered Author's knowledge, contacts and the previous knowledge, conclusions and recommendations for implementation of micro-turbines in Macedonia are given. (Author) Micro-propulsion and micro-combustion; Micropropulsion microcombustion Ribaud, Y.; Dessornes, O. The AAAF (french space and aeronautic association) organized at Paris a presentation on the micro-propulsion. The first part was devoted to the thermal micro-machines for micro drones, the second part to the micro-combustion applied to micro-turbines. (A.L.B.) Micro-Organ Device Science.gov (United States) Gonda, Steve R. (Inventor); Chang, Robert C. (Inventor); Starly, Binil (Inventor); Culbertson, Christopher (Inventor); Holtorf, Heidi L. 
(Inventor); Sun, Wei (Inventor); Leslie, Julia (Inventor) A method for fabricating a micro-organ device comprises providing a microscale support having one or more microfluidic channels and one or more micro-chambers for housing a micro-organ and printing a micro-organ on the microscale support using a cell suspension in a syringe controlled by a computer-aided tissue engineering system, wherein the cell suspension comprises cells suspended in a solution containing a material that functions as a three-dimensional scaffold. The printing is performed with the computer-aided tissue engineering system according to a particular pattern. The micro-organ device comprises at least one micro-chamber each housing a micro-organ; and at least one microfluidic channel connected to the micro-chamber, wherein the micro-organ comprises cells arranged in a configuration that includes microscale spacing between portions of the cells to facilitate diffusion exchange between the cells and a medium supplied from the at least one microfluidic channel. 'Micro-8' micro-computer system Yagi, Hideyuki; Nakahara, Yoshinori; Yamada, Takayuki; Takeuchi, Norio; Koyama, Kinji The micro-computer Micro-8 system has been developed to organize a data exchange network between various instruments and a computer group including a large computer system. Used for packet exchangers and terminal controllers, the system consists of ten kinds of standard boards including a CPU board with an INTEL-8080 one-chip processor. CPU architecture, BUS architecture, interrupt control, and standard-board functions are explained in circuit block diagrams. Operations of the basic I/O device, digital I/O board and communication adapter are described with definitions of the interrupt ramp status, I/O command, I/O mask, data register, etc. In the appendixes are circuit drawings, INTEL-8080 micro-processor specifications, BUS connections, I/O address mappings, jumper connections for address selection, and interface connections. (author) Search for Bs0 → μ+ μ− and B0 → μ+ μ− decays with 2 fb-1 of pp̄ collisions.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; 
Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyria, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, 
B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S We have performed a search for Bs0 → μ+ μ− and B0 → μ+ μ− decays in pp̄ collisions at √s = 1.96 TeV using 2 fb-1 of integrated luminosity collected by the CDF II detector at the Fermilab Tevatron Collider. The observed number of Bs0 and B0 candidates is consistent with background expectations. The resulting upper limits on the branching fractions are B(Bs0 → μ+ μ−) < 5.8 x 10-8 and B(B0 → μ+ μ−) < 1.8 x 10-8 at 95% C.L. Micro Electro Mechanical Systems Yun, Jun Bo; Jo, Il Ju; Choi, Yoon Seok This book consists of seven chapters: the shift of the age from the macro world to the micro world; what MEMS is; semiconductors, micro machining and MEMS; where MEMS is heading; how to make MEMS; MEMS in the future; and getting to know MEMS better. The book is written to be easy and fun to read. It deals with MEMS in IT, BT, NT and ST, micro robot technology, basic processes for making MEMS such as bulk micromachining, surface micromachining and LIGA technology, DARPA, organizations at home and abroad, and academic societies and journals related to MEMS. Micro-educational reproduction Andrade, Stefan Bastholm; Thomsen, Jens Peter This study analyzes the persistence of educational inequality in advanced industrialized societies with expanding and differentiated education systems. Using Denmark as a case, we investigate changes in immobility patterns for cohorts born 1960–1981 and develop a new micro-educational classificat...... forms of reproduction. In addition, the micro-educational approach far better explains the immobility of sons than it explains that of daughters, revealing important gender differences in the immobility patterns for sons and daughters......., in particular for sons. We also find great variation in immobility for specific micro-educations within the university level. Studies of educational immobility would therefore benefit from paying attention to micro-educational classifications, because they capture patterns of multidimensional, disaggregated... Micro metal forming Micro Metal Forming, i.e. forming of parts and features with dimensions below 1 mm, is a young area of research in the wide field of metal forming technologies, expanding the limits for applying metal forming towards micro technology. The essential challenges arise from the reduced geometrical size and the increased lot size. In order to enable potential users to apply micro metal forming in production, information about the following topics is given: tribological behavior (friction between tool and work piece as well as tool wear); mechanical behavior (strength and formability of the work piece material, durability of the work pieces); size effects (a basic description of effects occurring because the quantitative relation between different features changes with decreasing size); process windows and limits for forming processes; tool making methods; numerical modeling of processes and process chains; and quality assurance and metrology. All topics are discussed with respect to the questions relevant to micro...
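Returning briefly to the Bs0 → μ+ μ− search summarized above: limits of that kind come from a counting experiment in which the observed number of candidates is compared with the expected background. The Python sketch below (assuming SciPy is available) shows a simple classical Poisson upper limit for such a counting experiment; it is only an illustration of the general idea, not the statistical procedure actually used by CDF, and the observed count, background estimate and single-event sensitivity are invented numbers.

from scipy.stats import poisson
from scipy.optimize import brentq

def poisson_upper_limit(n_obs, background, cl=0.95):
    """Smallest signal mean s such that P(N <= n_obs | s + b) = 1 - cl."""
    f = lambda s: poisson.cdf(n_obs, s + background) - (1.0 - cl)
    return brentq(f, 0.0, 100.0)

# Invented numbers purely for illustration.
s_up = poisson_upper_limit(n_obs=2, background=1.5)
single_event_sensitivity = 1.0e-9   # assumed branching fraction per signal event
print(s_up * single_event_sensitivity)  # illustrative 95% C.L. branching-fraction limit

The yield limit is turned into a branching-fraction limit by multiplying by the single-event sensitivity, i.e. the branching fraction that would correspond to one expected signal event in the data set.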
Micro-mixer/combustor KAUST Repository Badra, Jihad Ahmad; Masri, Assaad Rachid A micro-mixer/combustor to mix fuel and oxidant streams into combustible mixtures where flames resulting from combustion of the mixture can be sustained inside its combustion chamber is provided. The present design is particularly suitable Micro-surgical endodontics. Eliyas, S; Vere, J; Ali, Z; Harris, I Non-surgical endodontic retreatment is the treatment of choice for endodontically treated teeth with recurrent or residual disease in the majority of cases. In some cases, surgical endodontic treatment is indicated. Successful micro-surgical endodontic treatment depends on the accuracy of diagnosis, appropriate case selection, the quality of the surgical skills, and the application of the most appropriate haemostatic agents and biomaterials. This article describes the armamentarium and technical procedures involved in performing micro-surgical endodontics to a high standard. Micro Calorimeter for Batteries Santhanagopalan, Shriram [National Renewable Energy Laboratory (NREL), Golden, CO (United States) As battery technology forges ahead and consumer demand for safer, more affordable, high-performance batteries grows, the National Renewable Energy Laboratory (NREL) has added a patented Micro Calorimeter to its existing family of R&D 100 Award-winning Isothermal Battery Calorimeters (IBCs). The Micro Calorimeter examines the thermal signature of battery chemistries early on in the design cycle using popular coin cell and small pouch cell designs, which are simple to fabricate and study. Urban micro-grids Faure, Maeva; Salmon, Martin; El Fadili, Safae; Payen, Luc; Kerlero, Guillaume; Banner, Arnaud; Ehinger, Andreas; Illouz, Sebastien; Picot, Roland; Jolivet, Veronique; Michon Savarit, Jeanne; Strang, Karl Axel ENEA Consulting published the results of a study on urban micro-grids conducted in partnership with the Group ADP, the Group Caisse des Depots, ENEDIS, Omexom, Total and the Tuck Foundation. This study offers a vision of the definition of an urban micro-grid, the value brought by a micro-grid in different contexts based on real case studies, and the upcoming challenges that micro-grid stakeholders will face (regulation, business models, technology). The electric production and distribution system, as the backbone of an increasingly urbanized and energy dependent society, is urged to shift towards a more resilient, efficient and environment-friendly infrastructure. Decentralisation of electricity production into densely populated areas is a promising opportunity to achieve this transition. A micro-grid enhances local production through clustering electricity producers and consumers within a delimited electricity network; it has the ability to disconnect from the main grid for a limited period of time, offering an energy security service to its customers during grid outages for example. However: The islanding capability is an inherent feature of the micro-grid concept that leads to a significant premium on electricity cost, especially in a system highly reliant on intermittent electricity production. 
In this case, a smart grid, with local energy production and no islanding capability, can be customized to meet the relevant sustainability and cost-savings goals at lower cost. For industrials, urban micro-grids can be economically profitable in the presence of a high share of reliable energy production and thermal energy demand. Micro-grids face strong regulatory challenges that should be overcome for further development. Whether or not islanding is implemented into the system, end-user demand for greener, more local, cheaper and more reliable energy, as well as for additional services to the grid, is a strong driver for local production and consumption. In some specific cases Barbed micro-spikes for micro-scale biopsy Byun, Sangwon; Lim, Jung-Min; Paik, Seung-Joon; Lee, Ahra; Koo, Kyo-in; Park, Sunkil; Park, Jaehong; Choi, Byoung-Doo; Seo, Jong Mo; Kim, Kyung-ah; Chung, Hum; Song, Si Young; Jeon, Doyoung; Cho, Dongil Single-crystal silicon planar micro-spikes with protruding barbs are developed for micro-scale biopsy and the feasibility of using the micro-spike as a micro-scale biopsy tool is evaluated for the first time. The fabrication process utilizes a deep silicon etch to define the micro-spike outline, resulting in protruding barbs of various shapes. Shanks of the fabricated micro-spikes are 3 mm long, 100 µm thick and 250 µm wide. Barbs protruding from micro-spike shanks facilitate the biopsy procedure by tearing off and retaining samples from target tissues. Micro-spikes with barbs successfully extracted tissue samples from the small intestines of the anesthetized pig, whereas micro-spikes without barbs failed to obtain a biopsy sample. Parylene coating can be applied to improve the biocompatibility of the micro-spike without deteriorating the biopsy function of the micro-spike. In addition, to show that the biopsy with the micro-spike can be applied to tissue analysis, samples obtained by micro-spikes were examined using immunofluorescent staining. Nuclei and F-actin of cells which are extracted by the micro-spike from a transwell were clearly visualized by immunofluorescent staining. Tunable micro-optics Duppé, Claudia Presenting state-of-the-art research into the dynamic field of tunable micro-optics, this is the first book to provide a comprehensive survey covering a varied range of topics including novel materials, actuation concepts and new imaging systems in optics. Internationally renowned researchers present a diverse range of chapters on cutting-edge materials, devices and subsystems, including soft matter, artificial muscles, tunable lenses and apertures, photonic crystals, and complete tunable imagers. Special contributions also provide in-depth treatment of micro-optical characterisation, scanners, and the use of natural eye models as inspiration for new concepts in advanced optics. With applications extending from medical diagnosis to fibre telecommunications, Tunable Micro-optics equips readers with a solid understanding of the broader technical context through its interdisciplinary approach to the realisation of new types of optical systems. This is an essential resource for engineers in industry and academia,... A novel micro wiggler Liu Qingxiang; Xu Yong A novel structure of the micro-wiggler is presented. The authors developed a simplified theoretical model of the micro-wiggler. According to the model, an analytic formula of the magnetic field in two dimensions is obtained. A calculation program (PWMW-I) is developed from the formula.
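The record does not reproduce the analytic field expression itself. As a rough illustration of what a two-dimensional planar-wiggler field typically looks like, the Python sketch below evaluates the textbook form By(y, z) = B0·cosh(ku·y)·cos(ku·z), which satisfies Laplace's equation between the pole faces; this generic expression is not necessarily the formula implemented in PWMW-I, and B0 and the period are placeholder values.

import math

def wiggler_field(y, z, b0=1.0, period=3e-3):
    """Generic 2-D planar wiggler field component B_y (tesla).
    y: distance from the mid-plane (m), z: position along the axis (m)."""
    k_u = 2.0 * math.pi / period          # wiggler wavenumber
    return b0 * math.cosh(k_u * y) * math.cos(k_u * z)

# On-axis peak (y = 0, z = 0) and the field 0.2 mm off-axis, placeholder B0 = 1 T.
print(wiggler_field(0.0, 0.0), wiggler_field(0.2e-3, 0.0))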
PWMW-I can calculate the on-axis and off-axis field for the chosen number of periods N, including the entrance and exit of the micro-wiggler. Three models with different periods (10 mm, 5 mm and 3 mm) are designed with the program. A 5 T peak field for the 3 mm period at a gap of 1 mm is obtained Micro Mobility Marketing Hosbond, Jens Henrik; Skov, Mikael B. , in our case a medium-sized retail supermarket. Two prototypes based on push and pull marketing strategies are implemented and evaluated. Taking its outset in a synthesis of central issues in contemporary research on mobile marketing, we discuss their role in micro mobility marketing to point to similarities......Mobile marketing refers to marketing of services or goods using mobile technology and mobile marketing holds potentially great economic opportunities. Traditionally, mobile marketing has been viewed as mobility in the large taking place virtually anywhere, anytime. Further, research shows...... considerable number of studies on push-based SMS mobile marketing campaigns. This paper explores a related yet different form of mobile marketing, namely micro mobility marketing. Micro mobility marketing denotes mobility in the small, meaning that promotion of goods takes place within a circumscribed location... Methods and systems for micro machines Stalford, Harold L. A micro machine may be in or less than the micrometer domain. The micro machine may include a micro actuator and a micro shaft coupled to the micro actuator. The micro shaft is operable to be driven by the micro actuator. A tool is coupled to the micro shaft and is operable to perform work in response to at least motion of the micro shaft. Briand, Danick; Roundy, Shad With its inclusion of the fundamentals, systems and applications, this reference provides readers with the basics of micro energy conversion along with expert knowledge on system electronics and real-life microdevices. The authors address different aspects of energy harvesting at the micro scale with a focus on miniaturized and microfabricated devices. Along the way they provide an overview of the field by compiling knowledge on the design, materials development, device realization and aspects of system integration, covering emerging technologies, as well as applications in power management, e Micro/Nano manufacturing Tosello, Guido Micro- and nano-scale manufacturing has been the subject of an increasing amount of interest and research effort worldwide in both academia and industry over the past 10 years. Traditional (MEMS) manufacturing, but also precision manufacturing technologies have been developed to cover micro......-scale dimensions and accuracies. Furthermore, these fundamentally different technology ecosystems are currently combined in order to exploit strengths of both platforms. One example is the use of lithography-based technologies to establish nanostructures that are subsequently transferred to 3D geometries via... Micro-RNAs Taipaleenmäki, H.; Hokland, L. B.; Chen, Li Osteoblast differentiation and bone formation (osteogenesis) are regulated by transcriptional and post-transcriptional mechanisms. Recently, a novel class of regulatory factors termed microRNAs has been identified as playing an important role in the regulation of many aspects of osteoblast biology...... including proliferation, differentiation, metabolism and apoptosis. Also, preliminary data from animal disease models suggest that targeting miRNAs in bone can be a novel approach to increase bone mass.
This review highlights the current knowledge of microRNA biology and their role in bone formation... Lectures in Micro Meteorology Larsen, Søren Ejling This report contains the notes from my lectures on Micro scale meteorology at the Geophysics Department of the Niels Bohr Institute of Copenhagen University. In the period 1993-2012, I was responsible for this course at the University. At the start of the course, I decided that the text books...... available in meteorology at that time did not include enough of the special flavor of micro meteorology that characterized the work of the meteorology group at Risø (presently of the Institute of wind energy of the Danish Technical University). This work was focused on Boundary layer flows and turbulence... Micro-manufacturing: design and manufacturing of micro-products National Research Council Canada - National Science Library Koç, Muammer; Özel, Tuğrul .... After addressing the fundamentals and non-metallic-based micro-manufacturing processes in the semiconductor industry, it goes on to address specific metallic-based micro-manufacturing processes... Micro-Avionics Multi-Purpose Platform (MicroAMPP) Data.gov (United States) National Aeronautics and Space Administration — The Micro-Avionics Multi-Purpose Platform (MicroAMPP) is a common avionics architecture supporting microsatellites, launch vehicles, and upper-stage carrier... Structure of catalase determined by MicroED Nannenga, Brent L; Shi, Dan; Hattne, Johan; Reyes, Francis E; Gonen, Tamir MicroED is a recently developed method that uses electron diffraction for structure determination from very small three-dimensional crystals of biological material. Previously we used a series of still diffraction patterns to determine the structure of lysozyme at 2.9 Å resolution with MicroED (Shi et al., 2013). Here we present the structure of bovine liver catalase determined from a single crystal at 3.2 Å resolution by MicroED. The data were collected by continuous rotation of the sample under constant exposure and were processed and refined using standard programs for X-ray crystallography. The ability of MicroED to determine the structure of bovine liver catalase, a protein that has long resisted atomic analysis by traditional electron crystallography, demonstrates the potential of this method for structure determination. DOI: http://dx.doi.org/10.7554/eLife.03600.001 PMID:25303172 Cold Gas Micro Propulsion NARCIS (Netherlands) Louwerse, M.C. This thesis describes the development of a micro propulsion system. The trend of miniaturization of satellites requires small sized propulsion systems. For particular missions it is important to maintain an accurate distance between multiple satellites. Satellites drift apart due to differences in Tolerances in micro manufacturing Hansen, Hans Nørgaard; Zhang, Yang; Islam, Aminul This paper describes a method for analysis of tolerances in micro manufacturing. It proposes a mapping of tolerances to dimensions and compares this with currently available international standards. The analysis documents that tolerances are not scaled down as the absolute dimension. In practice... Micro-Scale Thermoacoustics Offner, Avshalom; Ramon, Guy Z. Thermoacoustic phenomena - conversion of heat to acoustic oscillations - may be harnessed for construction of reliable, practically maintenance-free engines and heat pumps. Specifically, miniaturization of thermoacoustic devices holds great promise for cooling of micro-electronic components.
However, as device size is pushed down to the micro-meter scale it is expected that non-negligible slip effects will exist at the solid-fluid interface. Accordingly, new theoretical models for thermoacoustic engines and heat pumps were derived, accounting for a slip boundary condition. These models are essential for the design process of micro-scale thermoacoustic devices that will operate under ultrasonic frequencies. Stability curves for engines - representing the onset of self-sustained oscillations - were calculated with both no-slip and slip boundary conditions, revealing improvement in the performance of engines with slip at the resonance frequency range applicable for micro-scale devices. Maximum achievable temperature difference curves for thermoacoustic heat pumps were calculated, revealing the negative effect of slip on the ability to pump heat up a temperature gradient. The authors acknowledge the support from the Nancy and Stephen Grand Technion Energy Program (GTEP). Fertilizer micro-dosing International Development Research Centre (IDRC) Digital Library (Canada) Localized application of small quantities of fertilizer (micro-dosing), combined with improved planting pits for rainwater harvesting, has generated greater profits and food security for women farmers in the Sahel. • Women are 25% more likely to use combined applications, and have expanded areas of food crops (cowpea,. Micro- and Nanoengineering Schroen, C.G.P.H. There are two overall themes, micro- and nanotechnology, which are capable of changing the future of food considerably. In microtechnology, production of foods and food ingredients is investigated at small scale; the results are such that larger scale production is considered through operating many MicroRNA pharmacogenomics Rukov, Jakob Lewin; Shomron, Noam polymorphisms, copy number variations or differences in gene expression levels of drug metabolizing or transporting genes and drug targets. In this review paper, we focus instead on microRNAs (miRNAs): small noncoding RNAs, prevalent in metazoans, that negatively regulate gene expression in many cellular... Programming the BBC micro Ferguson, John D; Macari, Louie; Williams, Peter H Programming the BBC Micro is a 12-chapter book that begins with a description of the BBC microcomputer, its peripherals, and faults. Subsequent chapters focus on practice in programming, program development, graphics, words, numbers, sound, bits, bytes, and assembly language. The interfacing, file handling, and a detailed description of the BBC microcomputer are also shown. Pyramid solar micro-grid Huang, Bin-Juine; Hsu, Po-Chien; Wang, Yi-Hung; Tang, Tzu-Chiao; Wang, Jia-Wei; Dong, Xin-Hong; Hsu, Hsin-Yi; Li, Kang; Lee, Kung-Yen A novel pyramid solar micro-grid is proposed in the present study. All the members within the micro-grid can mutually share excess solar PV power with each other through a binary-connection hierarchy. The test results of a 2+2 pyramid solar micro-grid consisting of 4 individual solar PV systems for self-consumption are reported. Micro Information Systems Ulslev Pedersen, Rasmus; Kühn Pedersen, Mogens such as medical and manufacturing. These new sensor applications have implications for information systems (IS) and the authors visualize this new class of information systems as fractals growing from an established class of systems; namely that of information systems (IS). The identified applications...... and implications are used as an empirical basis for creating a model for these small new information systems.
Such sensor systems are called embedded systems in the technical sciences, and the authors want to couple them with general IS. They call the merger of these two important research areas (IS and embedded...... systems) micro information systems (micro-IS). It is intended as a new research field within IS research. An initial framework model is established, which seeks to capture both the possibilities and constraints of this new paradigm, while looking simultaneously at the fundamental IS and ICT aspects... CERN MicroClub The CERN Micro Club (in partnership with Google Education and EU Code Week) is organizing an exceptional educational event around three science kits based on the Raspberry Pi mini-computer: the "Poppy Ergo Jr" robotic arm, designed by the Flowers project team (Inria Bordeaux Sud-Ouest research centre, ENSTA Paris Tech); the "Muon Hunter" cosmic-ray detection kit, designed in partnership between Mr Mihaly Vadai and the members of the CERN Micro Club; and the "GianoPi" programmable Wifi radio-controlled car, designed in partnership with the "La Chataigneraie" campus, for the International School of Geneva.  Friday 7 October (from 6 p.m. to 8 p.m.): a free lecture, open to all (limited to 100 people), during which... Micro dynamics in mediation Boserup, Hans The author has identified a number of styles in mediation, which lead to different processes and different outcomes. Through discourse and conversation analysis he examines the micro dynamics in three of these, the postmodern styles: systemic, transformative and narrative mediation. The differences between the three mediation ideologies and practices are illustrated through role play scripts enacted in each style. Mediators and providers of mediation and trainers in mediation are encouraged to a... Rectenna session: Micro aspects Gutmann, R. J. Two micro aspects of rectenna design are discussed: evaluation of the degradation in net rectenna RF to DC conversion efficiency due to power density variations across the rectenna (power combining analysis) and design of Yagi-Uda receiving elements to reduce rectenna cost by decreasing the number of conversion circuits (directional receiving elements). The first of these involves resolving a fundamental question of efficiency potential with a rectenna, while the second involves a design modification with a large potential cost saving. Flexible micro flow sensor for micro aerial vehicles Zhu, Rong; Que, Ruiyi; Liu, Peng This article summarizes our studies on micro flow sensors fabricated on a flexible polyimide circuit board by a low-cost hybrid process of thin-film deposition and circuit printing. The micro flow sensor has merits of flexibility, structural simplicity, easy integrability with circuits, and good sensing performance. The sensor, which adheres to an object surface, can detect the surface flow around the object. In our study, we install the fabricated micro flow sensors on micro aerial vehicles (MAVs) to detect the surface flow variation around the aircraft wing and deduce the aerodynamic parameters of the MAVs in flight. Wind tunnel experiments using the sensors integrated with the MAVs are also conducted. Remote micro hydro The micro-hydro project, built on a small tributary of Cowley Creek, near Whitehorse, Yukon, is an important step in the development of alternative energy sources and in conserving expensive diesel fuel.
In addition to demonstrating the technical aspects of harnessing water power, the project paved the way for easier regulatory procedures. The power will be generated by a 9 meter head and a 6 inch crossflow turbine. The 36 V DC power will be stored in three 12 V batteries and converted to ac on demand by a 3,800 watt inverter. The system will produce 1.6 kW or 14,016 kWh per year with a firm flow of 1.26 cfs. This is sufficient to supply electricity for household needs and a wood working shop. The project is expected to cost about $18,000 and is more economical than tying into the present grid system, or continuing to use a gasoline generator. An environmental study determined that any impact of the project on the stream would be negligible. It is expected that no other water users will be affected by the project. This pilot project in micro-hydro applications will serve as a good indicator of the viability of this form of alternate energy in the Yukon. The calculations comparing the micro-hydro and grid systems indicate that the micro-hydro system is a viable source of inflation-proof power. Higher heads and larger flows resulting in ac generation in excess of 10 kW would yield much better returns than this project. 3 tabs. micro strip gas chamber About 16 000 Micro Strip Gas Chambers like this one will be used in the CMS tracking detector. They will measure the tracks of charged particles to a hundredth of a millimetre precision in the region near the collision point where the density of particles is very high. Each chamber is filled with a gas mixture of argon and dimethyl ether. Charged particles passing through ionise the gas, knocking out electrons which are collected on the aluminium strips visible under the microscope. Such detectors are being used in radiography. They give higher resolution imaging and reduce the required dose of radiation. Review of Micro Magnetic Generator Lin DU; Gengchen SHI; Jingjing ZHAO This paper discusses the research progress of micro magnetic generator systems. Micro magnetic generator systems convert energy from the environment to electric energy, with advantages such as high reliability, high power density and long lifetime, and can be applied in extreme environments. This paper summarizes methods for improving the generator performance of micro magnetic generators, including rotational magnetic generators, vibrational magnetic generators and hybrid magnetic generators, analyzes and compares their design and performance, and concludes with key technologies and ongoing challenges for further progress. The paper is instructive and meaningful for research work in related fields. Micro manufacturing techniques and applications Du, Ruxu; Li, Zifu Micro/meso-scale manufacturing has been developed in research fields of machining, forming, materials and others, but its potential to industries is yet to be fully realized. The theme of the current volume was to build a bridge joining academic research and industrial needs in micro manufacturing. Among the 12 papers selected for publication are three keynote addresses on micro and desktop factories for micro/meso-scale manufacturing applications and future visions, tissue cutting mechanics and applications for needle core biopsy and guidance, and micro-texturing onto amorphous carbon materials Nozzle fabrication for Micro Propulsion of a Micro-Satellite Louwerse, M.C.; Jansen, Henricus V.; Groenendijk, M.N.W.; Elwenspoek, Michael Curt To enable formation flying of micro satellites, small sized propulsion systems are required. Our research focuses on the miniaturization of a feeding and thruster system by means of micro system technology (MST).
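As a quick cross-check of the micro-hydro figures quoted above (9 m head, a firm flow of 1.26 cfs and a 1.6 kW output), the Python sketch below works through the usual hydro power estimate P ≈ η·ρ·g·Q·H. The overall efficiency is an assumed value chosen only to show that the stated output is plausible; it is not a number taken from the report.

RHO_WATER = 1000.0      # kg/m^3
G = 9.81                # m/s^2

head_m = 9.0                     # stated head
flow_m3s = 1.26 * 0.0283168      # 1.26 cfs converted to m^3/s
efficiency = 0.50                # assumed overall turbine/generator efficiency

power_w = efficiency * RHO_WATER * G * flow_m3s * head_m
annual_kwh = 1.6 * 8760          # the stated 1.6 kW over the 8,760 hours of a year

print(round(power_w), annual_kwh)   # roughly 1575 W, and 14016 kWh as quoted

The 14,016 kWh per year figure is simply the 1.6 kW rating multiplied by the 8,760 hours in a year, i.e. it assumes continuous operation at firm flow.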
Three fabrication methods have been investigated to make a conical converging-diverging A micro-coupling for micro mechanical systems Li, Wei; Zhou, Zhixiong; Zhang, Bi; Xiao, Yunya The error motions of micro mechanical systems, such as micro-spindles, increase with increasing rotational speed, which not only decreases the rotational accuracy, but also promotes instability and limits the maximum operational speed. One effective way to deal with this is to use micro-flexible couplings between the drive and driven shafts so as to reduce error motions of the driven shaft. But the conventional couplings, such as diaphragm couplings, elastomeric couplings, bellows couplings, and grooved couplings, etc., cannot be directly used because of their large and complicated structures. This study presents a novel micro-coupling that consists of a flexible coupling and a shape memory alloy (SMA)-based clamp for micro mechanical systems. It is monolithic and can be directly machined from a shaft. The study performs design optimization and provides manufacturing considerations, including thermo-mechanical training of the SMA ring for the desired Two-Way-Shape-Memory effect (TWSMe). A prototype micro-coupling and a prototype micro-spindle using the proposed coupling are fabricated and tested. The testing results show that the prototype micro-coupling can bear a torque of above 5 N·mm and an axial force of 8.5 N and be fitted with an SMA ring for clamping action at room temperature (15 °C) and unclamping action below −5 °C. At the same time, the prototype micro-coupling can work at a rotational speed of above 200 kr/min when applied to a high-speed precision micro-spindle. Moreover, the radial runout error of the artifact, as a substitute for the micro-tool, is less than 3 μm while that of the turbine shaft is above 7 μm. It can be concluded that the micro-coupling successfully accommodates misalignment errors of the prototype micro-spindle. This research proposes a micro-coupling which is featured with an SMA ring, and it is designed to clamp two shafts, and has smooth transmission, simple assembly, compact structure, zero-maintenance and Applying a foil queue micro-electrode in micro-EDM to fabricate a 3D micro-structure Xu, Bin; Guo, Kang; Wu, Xiao-yu; Lei, Jian-guo; Liang, Xiong; Guo, Deng-ji; Ma, Jiang; Cheng, Rong Applying a 3D micro-electrode in micro electrical discharge machining (micro-EDM) can fabricate a 3D micro-structure with an up-and-down reciprocating method. However, this processing method has some shortcomings, such as a low success rate and a complex process for fabrication of 3D micro-electrodes. Focusing on these shortcomings, this paper proposes a novel 3D micro-EDM process based on the foil queue micro-electrode. Firstly, a 3D micro-electrode is discretized into several foil micro-electrodes and these foil micro-electrodes constitute a foil queue micro-electrode. Then, based on the planned process path, the foil micro-electrodes are applied in micro-EDM sequentially and the micro-EDM results of each foil micro-electrode superimpose to form the 3D micro-structure. However, a step effect will occur on the 3D micro-structure surface, which has an adverse effect on the 3D micro-structure. To tackle this problem, this paper proposes to reduce this adverse effect by rounded corner wear at the end of the foil micro-electrode and studies the impact of machining parameters on rounded corner wear and the step effect on the micro-structure surface.
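To make the discretisation step described above more concrete, the Python sketch below slices an assumed target shape into the cross-sections that a queue of foil electrodes of a given thickness would machine one after another. The hemispherical cavity, the foil thickness and the function names are invented for illustration; this is a sketch of the general idea, not the slicing procedure used in the paper.

import math

def foil_profiles(shape_depth, width, foil_thickness, n_samples=9):
    """Discretise a 3-D electrode shape into foil cross-sections.
    shape_depth(x, y) gives the target depth at a point; each foil takes the
    cross-section at the centre of its slab along x."""
    n_foils = math.ceil(width / foil_thickness)
    ys = [j * width / (n_samples - 1) for j in range(n_samples)]
    profiles = []
    for k in range(n_foils):
        x_mid = (k + 0.5) * foil_thickness
        profiles.append([shape_depth(x_mid, y) for y in ys])
    return profiles

# Invented example: hemispherical cavity of radius 0.5 mm centred at (0.5, 0.5), 0.1 mm foils.
def hemisphere_depth(x, y, r=0.5, cx=0.5, cy=0.5):
    d2 = r**2 - (x - cx)**2 - (y - cy)**2
    return math.sqrt(d2) if d2 > 0 else 0.0

for k, profile in enumerate(foil_profiles(hemisphere_depth, width=1.0, foil_thickness=0.1)):
    print(k, [round(v, 2) for v in profile])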
Finally, using a wire cutting voltage of 80 V, a current of 0.5 A and a pulse width modulation ratio of 1:4, the foil queue micro-electrode was fabricated by wire electrical discharge machining. Also, using a pulse width of 100 ns, a pulse interval of 200 ns, a voltage of 100 V and a workpiece material of 304# stainless steel, the foil queue micro-electrode was applied in micro-EDM for processing of a 3D micro-structure with hemispherical features, which verified the feasibility of this process. New micro-organism Takakuwa, Masayoshi; Hashimoto, Gotaro The invention relates to a new micro-organism for coal liquefaction and desulfurization. This micro-organism sporulates well on a culture medium which contains coal as the only carbon source. It belongs to Penicillium and is named Penicillium MT-6001, registered at the Fermentation Research Institute as No. 8463. Coal powder is thrown into a reaction vessel containing a culture solution of this organism, and the surface of the solution is covered with liquid paraffin; the coal powder undergoes liquefaction treatment for about 5 hours under anaerobic conditions with slow agitation, forming a transparent solution layer on the surface of the reactor together with the liquid paraffin. The liquefied product shows an analysis pattern similar to naphthenic petroleum containing a lipid with a polar radical. (2 figs) MicroProteins Eguen, Teinai Ebimienere; Straub, Daniel; Graeff, Moritz MicroProteins (miPs) are short, usually single-domain proteins that, in analogy to miRNAs, heterodimerize with their targets and exert a dominant-negative effect. Recent bioinformatic attempts to identify miPs have resulted in a list of potential miPs, many of which lack the defining characteristics of a miP. In this opinion article, we clearly state the characteristics of a miP as evidenced by known proteins that fit the definition; we explain why modulatory proteins misrepresented as miPs do not qualify as true miPs. We also discuss the evolutionary history of miPs, and how the miP concept... Intronic microRNAs Ying, S.-Y.; Lin, S.-L. MicroRNAs (miRNAs), small single-stranded regulatory RNAs capable of interfering with intracellular mRNAs that contain partial complementarity, are useful for the design of new therapies against cancer polymorphism and viral mutation. MiRNA was originally discovered in the intergenic regions of the Caenorhabditis elegans genome as native RNA fragments that modulate a wide range of genetic regulatory pathways during animal development. However, neither RNA promoter nor polymerase responsible for miRNA biogenesis was determined. Recent findings of intron-derived miRNA in C. elegans, mouse, and human have inevitably led to an alternative pathway for miRNA biogenesis, which relies on the coupled interaction of Pol-II-mediated pre-mRNA transcription and intron excision, occurring in certain nuclear regions proximal to genomic perichromatin fibrils. Micro thrust and heat generator Garcia, E.J. A micro thrust and heat generator has a means for providing a combustion fuel source to an ignition chamber of the micro thrust and heat generator. The fuel is ignited by an ignition means within the micro thrust and heat generator's ignition chamber where it burns and creates a pressure. A nozzle formed from the combustion chamber extends outward from the combustion chamber and tapers down to a narrow diameter and then opens into a wider diameter where the nozzle then terminates outside of said combustion chamber.
The pressure created within the combustion chamber accelerates as it leaves the chamber through the nozzle, resulting in pressure and heat escaping from the nozzle to the atmosphere outside the micro thrust and heat generator. The micro thrust and heat generator can be microfabricated from a variety of materials, e.g. polysilicon, on one wafer using surface micromachining batch fabrication techniques or high aspect ratio micromachining techniques (LIGA). 30 figs. Sustainable Micro-Manufacturing of Micro-Components via Micro Electrical Discharge Machining Directory of Open Access Journals (Sweden) Valeria Marrocco Full Text Available Micro-manufacturing emerged in recent years as a new engineering area with the potential of increasing people's quality of life through the production of innovative micro-devices to be used, for example, in the biomedical, micro-electronics or telecommunication sectors. The possibility of decreasing energy consumption makes micro-manufacturing extremely appealing in terms of environmental protection. However, despite the common belief that the micro-scale implies a higher sustainability compared to traditional manufacturing processes, recent research shows that some factors can make micro-manufacturing processes not as sustainable as expected. In particular, the use of rare raw materials and the need for higher process purity, to preserve product quality and manufacturing equipment, can be a source of additional environmental burden and process costs. Consequently, research is needed to optimize micro-manufacturing processes in order to guarantee the minimum consumption of raw materials, consumables and energy. In this paper, the experimental results obtained by the micro-electrical discharge machining (micro-EDM) of micro-channels made on Ni–Cr–Mo steel are reported. The aim of this investigation is to shed light on the relation and dependence between the material removal process, identified in the evaluation of material removal rate (MRR) and tool wear ratio (TWR), and some of the most important technological parameters (i.e., open voltage, discharge current, pulse width and frequency), in order to experimentally quantify the material waste produced and optimize the technological process in order to decrease it. Wear of micro end mills Bissacco, Giuliano; Hansen, Hans Nørgaard; De Chiffre, Leonardo This paper addresses the important issue of wear on micro end mills considering relevant metrological tools for its characterization and quantification. Investigation of wear on micro end mills is particularly difficult and no data are available in the literature. Small worn volumes cause large...... part. For this investigation 200 micron end mills are considered. Visual inspection of the micro tools requires high magnification and depth of focus.
3D reconstruction based on scanning electron microscope (SEM) images and stereo-pair technique is foreseen as a possible method for quantification... MicroRNA and cancer Jansson, Martin D; Lund, Anders H biological phenomena and pathologies. The best characterized non-coding RNA family consists in humans of about 1400 microRNAs for which abundant evidence has demonstrated fundamental importance in normal development, differentiation, growth control and in human diseases such as cancer. In this review, we...... summarize the current knowledge and concepts concerning the involvement of microRNAs in cancer, which have emerged from the study of cell culture and animal model systems, including the regulation of key cancer-related pathways, such as cell cycle control and the DNA damage response. Importantly, micro... Badra, Jihad Ahmad A micro-mixer/combustor to mix fuel and oxidant streams into combustible mixtures where flames resulting from combustion of the mixture can be sustained inside its combustion chamber is provided. The present design is particularly suitable for diffusion flames. In various aspects the present design mixes the fuel and oxidant streams prior to entering a combustion chamber. The combustion chamber is designed to prevent excess pressure from building up within the combustion chamber, a build-up that can cause instabilities in the flame. A restriction in the inlet to the combustion chamber from the mixing chamber forces the incoming streams to converge while introducing minor pressure drop. In one or more aspects, heat from combustion products exhausted from the combustion chamber may be used to provide heat to at least one of fuel passing through the fuel inlet channel, oxidant passing through the oxidant inlet channel, the mixing chamber, or the combustion chamber. In one or more aspects, an ignition strip may be positioned in the combustion chamber to sustain a flame without preheating. Micro transport phenomena during boiling Peng, Xiaofeng "Micro Transport Phenomena During Boiling" reviews the new achievements and contributions in recent investigations at microscale. It presents some original research results and discusses topics at the frontier of thermal and fluid sciences. Micro Mercury Ion Clock (MMIC) National Aeronautics and Space Administration — Demonstrate a micro clock based on trapped Hg ions with more than a 10x reduction in size and power; fractional frequency stability at the parts-in-10^14 level, adequate for... Micro-gen metering solutions Elland, J.; Dickson, J.; Cranfield, P. This report summarises the results of a project to investigate the regulation of domestic electricity metering work and identify the most economic options for micro-generator installers to undertake work on electricity meters. A micro-generation unit is defined as an energy conversion system converting non-electrical energy into electrical energy and can include technologies such as photovoltaic systems, small-scale wind turbines, micro-hydroelectric systems, and combined heat and power systems. Details of six tasks are given and cover examination of the existing framework and legal documentation for metering work, the existing technical requirements for meter operators, meter operator personnel accreditation, appraisal of options for meter changes and for micro-generation installation, document change procedures, industry consultation, and a review of the cost implications of the options. Micro creep mechanisms of tungsten Levoy, R.; Hugon, I.; Burlet, H.; Baillin, X.; Guetaz, L.
Due to its high melting point (3410 deg C), tungsten offers good mechanical properties at elevated temperatures for several applications in non-oxidizing environments. The creep behavior of tungsten is well known between 1200 and 2500 deg C and 10^-3 to 10^-1 strain. However, in some applications where dimensional stability of components is required, these strains are excessive and it is necessary to know the creep behavior of the material for micro-strains (between 10^-4 and 10^-6). Methods and devices used to measure creep micro-strains are presented, and creep equations (Norton and Chaboche laws) were developed for wrought, annealed and recrystallized tungsten. The main results obtained on tungsten under low stresses are: a stress exponent of 1, symmetry of micro-strains in creep-tension and creep-compression, inverse creep (threshold stress), etc. TEM, SEM and EBSD studies allow interpretation of the micro-creep mechanism of tungsten under low stresses and low temperature (∼0.3 Tm), like Harper-Dorn creep. In Harper-Dorn creep, micro-strains are associated with the density and the distribution of dislocations existing in the crystals before creep. At 975 deg C, the initial dislocation structure moves differently whether or not a stress is applied. To improve the micro-creep behavior of tungsten, a heat treatment is proposed to create the optimum dislocation structure. (authors) Micro watt thermocurrent generator Bustard, T.; Goslee, D.; Barr, H. This nuclear thermocurrent generator, intended to power a cardiac pacemaker, should have a higher life expectancy and reliability than was previously achieved. For this purpose a gettering arrangement is connected, in a heat-conducting manner, immediately adjacent to the nuclear fuel arrangement in an evacuated casing. The gettering arrangement can be operated to activate at as high a temperature as possible, from 121 °C to preferably about 204 °C, so that a high vacuum is maintained. The current-generating thermal column works at a temperature difference of 55.6 °C. As the cold end of the column is connected to the outer casing, and should be held to a mean body temperature of 37.8 °C, the hot side of the thermal column may only be heated to 93.4 °C. The temperature jump from 121 °C or 204 °C to 93.4 °C is produced by a thermal resistance inserted between the hot side of the thermal column and the fuel arrangement. It may consist of a spacer made of stainless steel or of a gap. While in this first arrangement the nuclear heat generator is situated between the gettering arrangement and the thermal column, another arrangement shows the gettering arrangement enclosed in the fuel arrangement and thermal column. Here the heat flows in one direction only; the required temperature gradient is produced by suitable construction of the heat contacts between the three elements. Detailed constructional and manufacturing data are given for both models. Plutonium oxide is welded into a double casing as the heat generator; for example, the casing is made of a nickel alloy. 1/10 gram of plutonium supplies a thermal energy of 50 mW, which produces a thermal current of 300 to 400 microwatts at 0.3 V. (RW) Micro-droplet formation via 3D printed micro channel Jian, Zhen; Zhang, Jiaming; Li, Erqiang; Thoroddsen, Sigurdur T. A low-cost, fast-designed and fast-fabricated 3D micro channel was used to create micro-droplets. A capillary with an outer diameter of 1.5 mm and an inner diameter of 150 μm was inserted into a 3D printed cylindrical channel with a diameter of 2 mm.
Flow rate of the two inlets, insert depth, liquid properties (density, viscosity and surface tension) and solid properties (roughness, contact angle) all play a role in the droplet formation. Different regimes - dripping, jetting, unstable state - were observed in the micro-channel on varying these parameters. With certain parameter combinations, successive formation of micro-droplets with equal size was observed, and their size can be much smaller than the smallest channel size. Based on our experimental results, the droplet formation via a 3D printed micro T-junction was investigated through direct numerical simulations with a code called Gerris. Reynolds numbers Re = ρUL/μ and Weber numbers We = ρU²L/σ of the two liquids were introduced to quantify the liquid effect. The parameter regime where different physical dynamics occur was studied and the regime transition was observed at certain threshold values. Qualitative and quantitative comparisons between simulations and experiments were performed as well. Micro Machining Enhances Precision Fabrication Advanced thermal systems developed for the Space Station Freedom project are now in use on the International Space Station. These thermal systems employ evaporative ammonia as their coolant, and though they employ the same series of chemical reactions as terrestrial refrigerators, the space-bound coolers are significantly smaller. Two Small Business Innovation Research (SBIR) contracts between Creare Inc. of Hanover, NH and Johnson Space Center developed an ammonia evaporator for thermal management systems aboard Freedom. The principal investigator for Creare Inc. formed Mikros Technologies Inc. to commercialize the work. Mikros Technologies then developed an advanced form of micro-electrical discharge machining (micro-EDM) to make tiny holes in the ammonia evaporator. Mikros Technologies has had great success applying this method to the fabrication of micro-nozzle array systems for industrial ink jet printing systems. The company is currently the world leader in fabrication of stainless steel micro-nozzles for this market, and in 2001 the company was awarded two SBIR research contracts from Goddard Space Flight Center to advance micro-fabrication and high-performance thermal management technologies. Cavitational micro-particles: plasma formation mechanisms Bica, Ioan Cavitational micro-particles are a class to which the micro-spheres, the micro-tubes and the octopus-shaped micro-particles belong. The cavitational micro-particles (micro-spheres, micro-tubes and octopus-shaped micro-particles) are obtained at ambient pressure. The micro-spheres, the micro-tubes and the ligaments of the octopus-shaped micro-particles are produced in the argon plasma and are formed of vapors with low values of the molar concentration in comparison with the molar density of the gas and vapor mixture, the first one on the unstable and the last two on the stable movement of the vapors. The ligaments of the octopus-shaped micro-particles are open at the top for well-chosen values of the sub-cooling of the vapor and gas cylinders. The nitrogen in the air favors the formation of pores in the wall of the micro-spheres. In this paper we present the cavitational micro-particles, their production in the plasma and some mechanisms for their formation in the plasma. (author) MicroPRIS user's guide MicroPRIS is a new service of the IAEA Power Reactor Information System (PRIS) for the Member States of IAEA.
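To make the dimensionless groups in the droplet-formation record above explicit, the Python sketch below computes the Reynolds number Re = ρUL/μ and the Weber number We = ρU²L/σ for a liquid stream. The property values are placeholders (a water-like liquid in a 150 μm capillary), not the fluids used in the experiments.

def reynolds(density, velocity, length, viscosity):
    """Re = rho * U * L / mu (inertial vs. viscous forces)."""
    return density * velocity * length / viscosity

def weber(density, velocity, length, surface_tension):
    """We = rho * U^2 * L / sigma (inertial vs. surface-tension forces)."""
    return density * velocity**2 * length / surface_tension

# Placeholder values: water-like liquid in a 150-micron capillary at 0.5 m/s.
rho, mu, sigma = 1000.0, 1.0e-3, 0.072     # kg/m^3, Pa*s, N/m
u, d = 0.5, 150e-6                         # m/s, m
print(reynolds(rho, u, d, mu), weber(rho, u, d, sigma))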
MicroPRIS makes the IAEA database on nuclear power plants and their operating experience available to Member States on computer diskettes in a form readily accessible by standard commercially available personal computer packages. The aim of this publication is to provide the users of the PC version of PRIS data with a description of the subset of the full PRIS database contained in MicroPRIS (release 1990), a description of files and file structures, field descriptions and definitions, an extraction and selection guide, and the method of calculation of a number of important performance indicators used by the IAEA. Dimensional micro and nano metrology Hansen, Hans Nørgaard; da Costa Carneiro, Kim; Haitjema, Han The need for dimensional micro and nano metrology is evident, and as critical dimensions are scaled down and the geometrical complexity of objects is increased, the available technologies appear not sufficient. Major research and development efforts have to be undertaken in order to answer these challenges. The developments have to include new measuring principles and instrumentation, tolerancing rules and procedures as well as traceability and calibration. The current paper describes issues and challenges in dimensional micro and nano metrology by reviewing typical measurement tasks and available… Micro-Mechanical Temperature Sensors Larsen, Tom Temperature is the most frequently measured physical quantity in the world. The field of thermometry is therefore constantly evolving towards better temperature sensors and better temperature measurements. The aim of this Ph.D. project was to improve an existing type of micro-mechanical temperature sensor or to develop a new one. Two types of micro-mechanical temperature sensors have been studied: bilayer cantilevers and string-like beam resonators. Both sensor types utilize thermally generated stress. Bilayer cantilevers are frequently used as temperature sensors at the micro-scale, and the goal… The reduced sensitivity was due to initial bending of the cantilevers and poor adhesion between the two cantilever materials. No further attempts were made to improve the sensitivity of bilayer cantilevers. The concept of using string-like resonators as temperature sensors has, for the first time, been… Micro-machined calorimetric biosensors Doktycz, Mitchel J.; Britton, Jr., Charles L.; Smith, Stephen F.; Oden, Patrick I.; Bryan, William L.; Moore, James A.; Thundat, Thomas G.; Warmack, Robert J. A method and apparatus are provided for detecting and monitoring micro-volumetric enthalpic changes caused by molecular reactions. Micro-machining techniques are used to create very small thermally isolated masses incorporating temperature-sensitive circuitry. The thermally isolated masses are provided with a molecular layer or coating, and the temperature-sensitive circuitry provides an indication when the molecules of the coating are involved in an enthalpic reaction. The thermally isolated masses may be provided singly or in arrays and, in the latter case, the molecular coatings may differ to provide qualitative and/or quantitative assays of a substance.
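The MicroPRIS entry above mentions the calculation of important performance indicators from the PRIS operating-experience data. As a minimal illustration, the sketch below computes two widely cited indicators, the load factor and the energy availability factor, from hypothetical monthly figures; every number is invented for the example, and the authoritative indicator definitions should be taken from the PRIS documentation itself.

# Hedged sketch: load factor and energy availability factor for one reporting
# period, in the spirit of the PRIS performance indicators. All inputs are
# hypothetical; consult the PRIS documentation for the authoritative formulas.

def load_factor(net_energy_mwh: float, reference_power_mw: float, hours: float) -> float:
    """Net electrical energy supplied divided by the energy that would have been
    produced at reference unit power over the same period, in percent."""
    return 100.0 * net_energy_mwh / (reference_power_mw * hours)

def energy_availability_factor(available_energy_mwh: float,
                               reference_power_mw: float, hours: float) -> float:
    """Energy the unit was capable of supplying, relative to the reference-power
    energy for the period, in percent."""
    return 100.0 * available_energy_mwh / (reference_power_mw * hours)

if __name__ == "__main__":
    hours_in_month = 30 * 24          # 720 h reporting period
    p_ref = 900.0                     # MW(e), hypothetical reference unit power
    e_net = 580_000.0                 # MWh actually supplied (hypothetical)
    e_available = 640_000.0           # MWh the unit could have supplied (hypothetical)
    print(f"Load factor:                {load_factor(e_net, p_ref, hours_in_month):.1f} %")
    print(f"Energy availability factor: {energy_availability_factor(e_available, p_ref, hours_in_month):.1f} %")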
Automated Micro Hall Effect measurements Petersen, Dirch Hjorth; Henrichsen, Henrik Hartmann; Lin, Rong With increasing complexity of processes and variety of materials used for semiconductor devices, stringent control of the electronic properties is becoming ever more relevant. Collinear micro four-point probe (M4PP) based measurement systems have become high-end metrology methods for characteriza......With increasing complexity of processes and variety of materials used for semiconductor devices, stringent control of the electronic properties is becoming ever more relevant. Collinear micro four-point probe (M4PP) based measurement systems have become high-end metrology methods... Improvement of micro endmill geometry for micro hard milling application Li, P.; Oosterling, J.A.J.; Hoogstrate, A.M.; Langen, H.H. One of the applications of the micromilling technology is to machine micro features on moulds by direct machining of hardened tool steels. However at this moment, this process is not industrial applicable because of the encountered problems, such as the big tool deflection, severe tool wear, and On-chip micro-power: three-dimensional structures for micro-batteries and micro-supercapacitors Beidaghi, Majid; Wang, Chunlei With the miniaturization of portable electronic devices, there is a demand for micro-power source which can be integrated on the semiconductor chips. Various micro-batteries have been developed in recent years to generate or store the energy that is needed by microsystems. Micro-supercapacitors are also developed recently to couple with microbatteries and energy harvesting microsystems and provide the peak power. Increasing the capacity per footprint area of micro-batteries and micro-supercapacitors is a great challenge. One promising route is the manufacturing of three dimensional (3D) structures for these micro-devices. In this paper, the recent advances in fabrication of 3D structure for micro-batteries and micro-supercapacitors are briefly reviewed. Micro-machined resonator oscillator Koehler, Dale R.; Sniegowski, Jeffry J.; Bivens, Hugh M.; Wessendorf, Kurt O. A micro-miniature resonator-oscillator is disclosed. Due to the miniaturization of the resonator-oscillator, oscillation frequencies of one MHz and higher are utilized. A thickness-mode quartz resonator housed in a micro-machined silicon package and operated as a "telemetered sensor beacon" that is, a digital, self-powered, remote, parameter measuring-transmitter in the FM-band. The resonator design uses trapped energy principles and temperature dependence methodology through crystal orientation control, with operation in the 20-100 MHz range. High volume batch-processing manufacturing is utilized, with package and resonator assembly at the wafer level. Unique design features include squeeze-film damping for robust vibration and shock performance, capacitive coupling through micro-machined diaphragms allowing resonator excitation at the package exterior, circuit integration and extremely small (0.1 in. square) dimensioning. A family of micro-miniature sensor beacons is also disclosed with widespread applications as bio-medical sensors, vehicle status monitors and high-volume animal identification and health sensors. The sensor family allows measurement of temperatures, chemicals, acceleration and pressure. A microphone and clock realization is also available. Hybrid cycles for micro generation Campanari, S. 
This paper deals with the main features of two emerging technologies in the field of small-scale power generation, micro turbines and Solid Oxide Fuel Cells, discussing the extremely high potential of their combination into hybrid cycles and their possible role for distributed cogeneration [it Micro-Encapsulation of Probiotics Meiners, Jean-Antoine Micro-encapsulation is defined as the technology for packaging with the help of protective membranes particles of finely ground solids, droplets of liquids or gaseous materials in small capsules that release their contents at controlled rates over prolonged periods of time under the influences of specific conditions (Boh, 2007). The material encapsulating the core is referred to as coating or shell. Micro RNAs in animal development. Plasterk, R.H.A. Micro RNAs (miRNAs) are approximately 22 nucleotide single-stranded noncoding RNA molecules that bind to target messenger RNAs (mRNAs) and silence their expression. This Essay explores the importance of miRNAs in animal development and their possible roles in disease and evolution. Micro Coriolis Gas Density Sensor Sparreboom, Wouter; Ratering, Gijs; Kruijswijk, Wim; van der Wouden, E.J.; Groenesteijn, Jarno; Lötters, Joost Conrad In this paper we report on gas density measurements using a micro Coriolis sensor. The technology to fabricate the sensor is based on surface channel technology. The measurement tube is freely suspended and has a wall thickness of only 1 micron. This renders the sensor extremely sensitive to changes Contamination Study of Micro Pulsed Plasma Thruster Kesenek, Ceylan .... Micro-Pulsed Plasma Thrusters (PPTs) are highly reliable and simple micro propulsion systems that will offer attitude control, station keeping, constellation flying, and drag compensation for such satellites... Micro-Rockets for the Classroom. Huebner, Jay S.; Fletcher, Alice S.; Cato, Julia A.; Barrett, Jennifer A. Compares micro-rockets to commercial models and water rockets. Finds that micro-rockets are more advantageous because they are constructed with inexpensive and readily available materials and can be safely launched indoors. (CCM) Equivalent Simplification Method of Micro-Grid Cai Changchun; Cao Xiangqin The paper concentrates on the equivalent simplification method for the micro-grid system connection into distributed network. The equivalent simplification method proposed for interaction study between micro-grid and distributed network. Micro-grid network, composite load, gas turbine synchronous generation, wind generation are equivalent simplification and parallel connect into the point of common coupling. A micro-grid system is built and three phase and single phase grounded faults are per... Development of micro pattern cutting simulation software Lee, Jong Min; Song, Seok Gyun; Choi, Jeong Ju; Novandy, Bondhan; Kim, Su Jin; Lee, Dong Yoon; Nam, Sung Ho; Je, Tae Jin The micro pattern machining on the surface of wide mold is not easy to be simulated by conventional software. In this paper, a software is developed for micro pattern cutting simulation. The 3d geometry of v-groove, rectangular groove, pyramid and pillar patterns are visualized by c++ and OpenGL library. The micro cutting force is also simulated for each pattern The MicroObservatory Net Brecher, K.; Sadler, P. 
A group of scientists, engineers and educators based at the Harvard-Smithsonian Center for Astrophysics (CfA) has developed a prototype of a small, inexpensive and fully integrated automated astronomical telescope and image processing system. The project team is now building five second generation instruments. The MicroObservatory has been designed to be used for classroom instruction by teachers as well as for original scientific research projects by students. Probably in no other area of frontier science is it possible for a broad spectrum of students (not just the gifted) to have access to state-of-the-art technologies that would allow for original research. The MicroObservatory combines the imaging power of a cooled CCD, with a self contained and weatherized reflecting optical telescope and mount. A microcomputer points the telescope and processes the captured images. The MicroObservatory has also been designed to be used as a valuable new capture and display device for real time astronomical imaging in planetariums and science museums. When the new instruments are completed in the next few months, they will be tried with high school students and teachers, as well as with museum groups. We are now planning to make the MicroObservatories available to students, teachers and other individual users over the Internet. We plan to allow the telescope to be controlled in real time or in batch mode, from a Macintosh or PC compatible computer. In the real-time mode, we hope to give individual access to all of the telescope control functions without the need for an "on-site" operator. Users would sign up for a specific period of time. In the batch mode, users would submit jobs for the telescope. After the MicroObservatory completed a specific job, the images would be e-mailed back to the user. At present, we are interested in gaining answers to the following questions: (1) What are the best approaches to scheduling real-time observations? (2) What criteria should be used Isolation of microRNA targets using biotinylated synthetic microRNAs Ørom, Ulf Andersson; Lund, Anders H MicroRNAs are small regulatory RNAs found in multicellular organisms where they post-transcriptionally regulate gene expression. In animals, microRNAs bind mRNAs via incomplete base pairings making the identification of microRNA targets inherently difficult. Here, we present a detailed method...... for experimental identification of microRNA targets based on affinity purification of tagged microRNAs associated with their targets. Udgivelsesdato: 2007-Oct... Integration of micro milling highspeed spindle on a microEDM-milling machine set-up De Grave, Arnaud; Hansen, Hans Nørgaard; Andolfatto, Loic In order to cope with repositioning errors and to combine the fast removal rate of micro milling with the precision and small feature size achievable with micro EDM milling, a hybrid micro-milling and micro-EDM milling centre was built and tested. The aim was to build an affordable set-up, easy...... by micro milling. Examples of test parts are shown and used as an experimental validation.... Voltammetry at micro-mesh electrodes Wadhawan Jay D. Full Text Available The voltammetry at three micro-mesh electrodes is explored. It is found that at sufficiently short experimental durations, the micro-mesh working electrode first behaves as an ensemble of microband electrodes, then follows the behaviour anticipated for an array of diffusion-independent micro-ring electrodes of the same perimeter as individual grid-squares within the mesh. 
During prolonged electrolysis, the micro-mesh electrode follows that behaviour anticipated theoretically for a cubically-packed partially-blocked electrode. Application of the micro-mesh electrode for the electrochemical determination of carbon dioxide in DMSO electrolyte solutions is further illustrated. Micro processors for plant protection McAffer, N.T.C. Micro computers can be used satisfactorily in general protection duties with economic advantages over hardwired systems. The reliability of such protection functions can be enhanced by keeping the task performed by each protection micro processor simple and by avoiding such a task being dependent on others in any substantial way. This implies that vital work done for any task is kept within it and that any communications from it to outside or to it from outside are restricted to those for controlling data transfer. Also that the amount of this data should be the minimum consistent with satisfactory task execution. Technology is changing rapidly and devices may become obsolete and be supplanted by new ones before their theoretical reliability can be confirmed or otherwise by field service. This emphasises the need for users to pool device performance data so that effective reliability judgements can be made within the lifetime of the devices. (orig.) [de Epigenetic microRNA Regulation Wiklund, Erik Digman MicroRNAs (miRNAs) are small non-coding RNAs (ncRNAs) that negatively regulate gene expression post-transcriptionally by binding to complementary sequences in the 3'UTR of target mRNAs in the cytoplasm. However, recent evidence suggests that certain miRNAs are enriched in the nucleus, and their t......MicroRNAs (miRNAs) are small non-coding RNAs (ncRNAs) that negatively regulate gene expression post-transcriptionally by binding to complementary sequences in the 3'UTR of target mRNAs in the cytoplasm. However, recent evidence suggests that certain miRNAs are enriched in the nucleus... Combinatorial microRNA target predictions Krek, Azra; Grün, Dominic; Poy, Matthew N. MicroRNAs are small noncoding RNAs that recognize and bind to partially complementary sites in the 3' untranslated regions of target genes in animals and, by unknown mechanisms, regulate protein production of the target transcript1, 2, 3. Different combinations of microRNAs are expressed...... in different cell types and may coordinately regulate cell-specific target genes. Here, we present PicTar, a computational method for identifying common targets of microRNAs. Statistical tests using genome-wide alignments of eight vertebrate genomes, PicTar's ability to specifically recover published micro......RNA targets, and experimental validation of seven predicted targets suggest that PicTar has an excellent success rate in predicting targets for single microRNAs and for combinations of microRNAs. We find that vertebrate microRNAs target, on average, roughly 200 transcripts each. Furthermore, our results... Micro Grid: A Smart Technology Naveenkumar, M; Ratnakar, N Distributed Generation (DG) is an approach that employs small-scale technologies to produce electricity close to the end users of power. Todays DG technologies often consist of renewable generators, and offer a number of potential benefits. This paper presents a design of micro grid as part of Smart grid technologies with renewable energy resources like solar, wind and Diesel generator. The design of the microgrid with integration of Renewable energy sources are done in PSCAD/EMTDC.This paper... 
Introduction to micro- and nanooptics Jahns, Jürgen This first textbook on both micro- and nanooptics introduces readers to the technological development, physical background and key areas.The opening chapters on the physics of light are complemented by chapters on refractive and diffractive optical elements. The internationally renowned authors present different methods of lithographic and nonlithographic fabrication of microoptics and introduce the characterization and testing of microoptics.The second part of the book is dedicated to optical microsystems and MEMS, optical waveguide structures and optical nanostructures, including pho The useful micro-organism Can man survive civilization? Academician Ivan Malek, Director of the Institute of Microbiology in Prague, a member of the Agency's Scientific Advisory Committee and for many years an adviser to the Food and Agriculture Organization, the World Health Organization and UNESCO, believes he can, But he also considers that if man is to survive he must study and use all the resources at his disposal - including the micro-organisms of the planet earth. (author) The Micro Trench Gas Counter Schmitz, J. A novel design is presented for a gas avalanche chamber with micro-strip gas readout. While existing gaseous microstrip detectors (Micro-strip Gas Counters, Knife edge chambers) have a minimum anode pitch of the order of 100 μm, the pitch of the discussed Micro Trench Gas Counter goes down to 30-50 μm. This leads to a better position resolution and two track separation, and a higher radiation resistivity. Its efficiency and signal speed are expected to be the same as the Microstrip Gas Counter. The energy resolution of the device is expected to be equal to or better than 10 percent for the 55 Fe peak. Since the anode strip dimensions are larger than those in a MSGC, the device may be not as sensitive to discharges and mechanical damage. In this report production of the device is briefly described, and predictions on its operation are made based on electric field calculations and experience with the Microstrip Gas Counter. The authors restrict themselves to the application in High Energy Physics. (author). 10 refs.; 9 figs Application of flexible micro temperature sensor in oxidative steam reforming by a methanol micro reformer. Lee, Chi-Yuan; Lee, Shuo-Jen; Shen, Chia-Chieh; Yeh, Chuin-Tih; Chang, Chi-Chung; Lo, Yi-Man Advances in fuel cell applications reflect the ability of reformers to produce hydrogen. This work presents a flexible micro temperature sensor that is fabricated based on micro-electro-mechanical systems (MEMS) technology and integrated into a flat micro methanol reformer to observe the conditions inside that reformer. The micro temperature sensor has higher accuracy and sensitivity than a conventionally adopted thermocouple. Despite various micro temperature sensor applications, integrated micro reformers are still relatively new. This work proposes a novel method for integrating micro methanol reformers and micro temperature sensors, subsequently increasing the methanol conversion rate and the hydrogen production rate by varying the fuel supply rate and the water/methanol ratio. Importantly, the proposed micro temperature sensor adequately controls the interior temperature during oxidative steam reforming of methanol (OSRM), with the relevant parameters optimized as well. 
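The flexible micro temperature sensor abstract above evaluates the reformer in terms of the methanol conversion rate and the hydrogen production rate as the fuel supply rate and water/methanol ratio are varied. As a rough aid to reading such numbers, the sketch below computes conversion and a nominal hydrogen yield from assumed inlet and outlet molar flows; the flow values and the idealised steam-reforming stoichiometry (CH3OH + H2O → CO2 + 3H2) are illustrative assumptions only, not data from the paper, and real OSRM chemistry also involves partial oxidation.

# Hedged sketch: methanol conversion and an upper-bound hydrogen yield for a
# micro methanol reformer. Flow rates are invented for illustration; the
# 3 mol H2 per mol CH3OH figure is an idealised steam-reforming bound.

def methanol_conversion(n_meoh_in: float, n_meoh_out: float) -> float:
    """Fraction of methanol converted, from inlet/outlet molar flows (mol/min)."""
    return (n_meoh_in - n_meoh_out) / n_meoh_in

def ideal_h2_production(n_meoh_in: float, conversion: float,
                        h2_per_meoh: float = 3.0) -> float:
    """Upper-bound H2 production rate (mol/min) assuming ideal stoichiometry."""
    return n_meoh_in * conversion * h2_per_meoh

if __name__ == "__main__":
    n_in, n_out = 0.010, 0.002        # mol/min methanol at inlet / outlet (hypothetical)
    x = methanol_conversion(n_in, n_out)
    print(f"Methanol conversion: {100 * x:.0f} %")
    print(f"Ideal H2 production: {ideal_h2_production(n_in, x) * 1000:.1f} mmol/min")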
Application of Flexible Micro Temperature Sensor in Oxidative Steam Reforming by a Methanol Micro Reformer Yi-Man Lo Full Text Available Advances in fuel cell applications reflect the ability of reformers to produce hydrogen. This work presents a flexible micro temperature sensor that is fabricated based on micro-electro-mechanical systems (MEMS technology and integrated into a flat micro methanol reformer to observe the conditions inside that reformer. The micro temperature sensor has higher accuracy and sensitivity than a conventionally adopted thermocouple. Despite various micro temperature sensor applications, integrated micro reformers are still relatively new. This work proposes a novel method for integrating micro methanol reformers and micro temperature sensors, subsequently increasing the methanol conversion rate and the hydrogen production rate by varying the fuel supply rate and the water/methanol ratio. Importantly, the proposed micro temperature sensor adequately controls the interior temperature during oxidative steam reforming of methanol (OSRM, with the relevant parameters optimized as well. Micro tooling technologies for polymer micro replication: direct, indirect and hybrid process chains Tosello, Guido; Hansen, Hans Nørgaard The increasing employment of micro products, of products containing micro parts and of products with micro-structured surfaces calls for mass fabrication technologies based on replication processes. In many cases, a suitable solution is given by the use of polymer micro products, whose production...... and performance of the corresponding micro mould. Traditional methods of micro tooling, such as various machining processes (e.g. micro milling, micro electrical discharge machining) have already reached their limitations with decreasing dimensions of mould inserts and cavities. To this respect, tooling process...... chains based on combination of micro manufacturing processes (defined as hybrid tooling) have been established in order to obtain further features miniaturization and increased accuracy. In this paper, examples and performance of different hybrid tooling approaches as well as challenges, opportunities... Thermal performance of a micro-combustor for micro-gas turbine system Cao, H.L.; Xu, J.L. Premixed combustion of hydrogen gas and air was performed in a stainless steel based micro-annular combustor for a micro-gas turbine system. Micro-scale combustion has proved to be stable in the micro-combustor with a gap of 2 mm. The operating range of the micro-combustor was measured, and the maximum excess air ratio is up to 4.5. The distribution of the outer wall temperature and the temperature of exhaust gas of the micro-combustor with excess air ratio were obtained, and the wall temperature of the micro-combustor reaches its maximum value at the excess air ratio of 0.9 instead of 1 (stoichiometric ratio). The heat loss of the micro-combustor to the environment was calculated and even exceeds 70% of the total thermal power computed from the consumed hydrogen mass flow rate. Moreover, radiant heat transfer covers a large fraction of the total heat loss. Measures used to reduce the heat loss were proposed to improve the thermal performance of the micro-combustor. The optimal operating status of the micro-combustor and micro-gas turbine is analyzed and proposed by analyzing the relationship of the temperature of the exhaust gas of the micro-combustor with thermal power and excess air ratio. 
The investigation of the thermal performance of the micro-combustor is helpful to design an improved micro-combustor Micro-PNT and Comprehensive PNT YANG Yuanxi Full Text Available Comprehensive or integrated positioning, navigation and timing is an obvious developing trend following the global navigation satellite system.This paper summarizes the current status of micro-PNT and its developing requirements. The related key technologies are described and the relationship between comprehensive PNT and micro-PNT is analyzed. It is stressed that the comprehensive PNT needs massive infrastructure construction and investment, however, the micro-PNT aims at the integrated applications of high-tech micro sensors. It is different from the current opinions appeared in the literatures, micro-PNT should include multi GNSS integration and micro components of navigation and timing in order to make the PNT outputs refer to a unified coordinate datum and time scale. Micro-PNT focuses on the personalized micro terminal applications. Except for the miniaturization of each PNT component, micro-PNT aims at the deep integration of the micro sensors, adaptive data fusion and self calibration of each component. Bottoming micro-Rankine cycles for micro-gas turbines Invernizzi, Costante; Iora, Paolo; Silva, Paolo This paper investigates the possibility of enhancing the performances of micro-gas turbines through the addition of a bottoming organic Rankine cycle which recovers the thermal power of the exhaust gases typically available in the range of 250-300 o C. The ORC cycles are particularly suitable for the recovery of heat from sources at variable temperatures, and for the generation of medium to small electric power. With reference to a micro-gas turbine with a size of about 100 kWe, a combined configuration could increase the net electric power by about 1/3, yielding an increase of the electrical efficiency of up to 40%. A specific analysis of the characteristics of different classes of working fluids is carried out in order to define a procedure to select the most appropriate fluid, capable of satisfying both environmental (ozone depletion potential, global warming potential) and technical (flammability, toxicity, fluid critical temperature and molecular complexity) concerns. Afterwards, a thermodynamic analysis is performed to ascertain the most favourable cycle thermodynamic conditions, from the point of view of heat recovery. Furthermore, a preliminary design of the ORC turbine (number of stages, outer diameter and rotational speed) is carried out Non-Photolithographic Manufacturing Processes for Micro-Channels Functioned by Micro-Contact-Printed SAMs Saigusa, Hiroki; Suga, Yasuo; Miki, Norihisa In this paper we propose non-photolithographic fabrication processes of micro-fluid channels with patterned SAMs (Self-Assembled-Monolayers). SAMs with a thiol group are micro-contact printed on a patterned Au/Ti layer, which is vapor-deposited through a shadow mask. Ti is an adhesion layer. Subsequently, the micro-channels are formed by bonding surface-activated PDMS onto the silicon substrate via a silanol group, producing a SAMs-functioned bottom wall of the micro-channel. No photolithographic processes are necessary and thus, the proposed processes are very simple, quick and low cost. The micro-reactors can have various functions associated with the micro-contact-printed SAMs. We demonstrate successful manufacturing of micro-reactors with two types of SAMs. 
The micro-reactor with patterned AUT (11-amino-1-undecanethiol) successfully trapped nano-particles with a carboxylic acid group, indicating that micro-contact-printed SAMs remain active after the manufacturing processes of the micro-reactor. AUT -functioned micro-channels are applicable to bioassay and to immobilize proteins for DNA arrays. ODT (1-octadecanethiol) makes surfaces hydrophobic with the methyl terminal group. When water was introduced into the micro-reactor with ODT-patterned surfaces, water droplets remained only in the hydrophilic areas where ODT was not patterned. ODT -functioned micro-channels are applicable to fluid handling. Macro-Micro Interlocked Simulator Sato, Tetsuya Simulation Science is now standing on a turning point. After the appearance of the Earth Simulator, HEC is struggling with several severe difficulties due to the physical limit of LSI technologies and the so-called latency problem. In this paper I would like to propose one clever way to overcome these difficulties from the simulation algorithm viewpoint. Nature and artificial products are usually organized with several nearly autonomously working internal systems (organizations, or layers). The Earth Simulator has gifted us with a really useful scientific tool that can deal with the entire evolution of one internal system with a sufficient soundness. In order to make a leap jump of Simulation Science, therefore, it is desired to design an innovative simulator that enables us to deal with simultaneously and as consistently as possible a real system that evolves cooperatively with several internal autonomous systems. Three years experience of the Earth Simulator Project has stimulated to come up with one innovative simulation algorithm to get rid of the technological barrier standing in front of us, which I would like to call 'Macro-Micro Interlocked Algorithm', or 'Macro-Micro Multiplying Algorithm', and present a couple of such examples to validate the proposed algorithm. The first example is an aurora-arc formation as a result of the mutual interaction between the macroscopic magnetosphere-ionosphere system and the microscopic field-aligned electron and ion system. The second example is the local heavy rain fall resulting from the interaction between the global climate evolution and the microscopic raindrop growth process. Based on this innovative feasible algorithm, I came up with a Macro-Micro Multiplying Simulator Wafer integrated micro-scale concentrating photovoltaics Gu, Tian; Li, Duanhui; Li, Lan; Jared, Bradley; Keeler, Gordon; Miller, Bill; Sweatt, William; Paap, Scott; Saavedra, Michael; Das, Ujjwal; Hegedus, Steve; Tauke-Pedretti, Anna; Hu, Juejun Recent development of a novel micro-scale PV/CPV technology is presented. The Wafer Integrated Micro-scale PV approach (WPV) seamlessly integrates multijunction micro-cells with a multi-functional silicon platform that provides optical micro-concentration, hybrid photovoltaic, and mechanical micro-assembly. The wafer-embedded micro-concentrating elements is shown to considerably improve the concentration-acceptance-angle product, potentially leading to dramatically reduced module materials and fabrication costs, sufficient angular tolerance for low-cost trackers, and an ultra-compact optical architecture, which makes the WPV module compatible with commercial flat panel infrastructures. 
The PV/CPV hybrid architecture further allows the collection of both direct and diffuse sunlight, thus extending the geographic and market domains for cost-effective PV system deployment. The WPV approach can potentially benefit from both the high performance of multijunction cells and the low cost of flat-plate Si PV systems. MicroRNA function in Drosophila melanogaster. Carthew, Richard W; Agbu, Pamela; Giri, Ritika Over the last decade, microRNAs have emerged as critical regulators in the expression and function of animal genomes. This review article discusses the relationship between microRNA-mediated regulation and the biology of the fruit fly Drosophila melanogaster. We focus on the roles that microRNAs play in tissue growth, germ cell development, hormone action, and the development and activity of the central nervous system. We also discuss the ways in which microRNAs affect robustness. Many gene regulatory networks are robust; they are relatively insensitive to the precise values of reaction constants and concentrations of molecules acting within the networks. MicroRNAs involved in robustness appear to be nonessential under uniform conditions used in conventional laboratory experiments. However, the robust functions of microRNAs can be revealed when environmental or genetic variation otherwise has an impact on developmental outcomes. Micro plate fission chamber development Wang Mei; Wen Zhongwei; Lin Jufang; Jiang Li; Liu Rong; Wang Dalun To conduct the measurement of neutron flux and the fission rate distribution at several positions in assemblies, a micro plate fission chamber was designed and fabricated. Since the requirement of smaller volume and less structural material was taken into consideration, it is convenient, commercial and practical to use a fission chamber to measure neutron flux under specific conditions. In this paper, the structure of the fission chamber and the fabrication process are introduced and performance test results are presented. The detection efficiency is 91.7%. (authors) Northern micro-grid project Curtis, David; Singh, Bob The electrical distribution system for the Kasabonika Lake First Nation in northern Ontario (Canada) consumed 1.2 million liters of diesel fuel in 2008, amounting to 3,434 tonnes of CO2 emissions. The Northern Micro-Grid Project, supported by seven partners, involves integrating renewable generation & storage into the Kasabonika Lake distribution system. Through R&D and demonstration, the objectives are to reduce the amount of diesel consumed, support the distribution system exclusively on renewable resources during light loads, and engage and impart knowledge/training to better position the community for future opportunities. The paper will discuss challenges, opportunities and future plans associated with the project. Micro-optomechanical trampoline resonators Pepper, Brian; Kleckner, Dustin; Sonin, Petro; Jeffrey, Evan; Bouwmeester, Dirk Recently, micro-optomechanical devices have been proposed for implementation of experiments ranging from non-demolition measurements of phonon number to creation of macroscopic quantum superpositions. All have strenuous requirements on optical finesse, mechanical quality factor, and temperature. We present a set of devices composed of dielectric mirrors on Si3N4 trampoline resonators. We describe the fabrication process and present data on finesse and quality factor.
The authors gratefully acknowledge support from NSF PHY-0804177 and Marie Curie EXT-CT-2006-042580. Micro-transactions for concurrent data structures Meawad, Fadi; Iyer, Karthik; Schoeberl, Martin … implementation of transactional memory that we call micro-transactions. In particular, we argue that hardware support for micro-transactions allows us to efficiently implement certain data structures. Those data structures are difficult to realize with the atomic operations provided by stock hardware and provide…, atomic instructions, and micro-transactions. Our results suggest that transactional memory is an interesting alternative to traditional concurrency control mechanisms… Fabrication of Micro Components by Electrochemical Deposition Tang, Peter Torben The main issue of this thesis is the combination of electrochemical deposition of metals and micro machining. Processes for electroplating and electroless plating of nickel and nickel alloys have been developed and optimised for compatibility with microelectronics and silicon-based micromechanics… of electrochemical machining and traditional machining is compared to micro machining techniques as performed in the field of microelectronics. Various practical solutions and equipment for electrochemical deposition of micro components are demonstrated, as well as the use and experience obtained utilising… Challenging the sustainability of micro products development De Grave, Arnaud; Olsen, Stig Irving Environmental aspects are one of the biggest concerns regarding the future of manufacturing and product development sustainability. Furthermore, micro products and micro technologies are often seen as the next big thing in terms of a possible mass market trend and boom. Many questions are raised… and the intermediate parts which can be created in-process. Possible future trends for micro products development schemes involving environmental concerns are given… Development of 3d micro-nano hybrid patterns using anodized aluminum and micro-indentation Shin, Hong Gue; Kwon, Jong Tae; Seo, Young Ho; Kim, Byeong Hee We developed a simple and cost-effective method of fabricating 3D micro-nano hybrid patterns in which micro-indentation is applied on the anodized aluminum substrate. Nano-patterns were formed first on the aluminum substrate, and then micro-patterns were fabricated by deforming the nano-patterned aluminum substrate. Hemispherical nano-patterns with a 150 nm diameter on an aluminum substrate were fabricated by an anodizing and alumina removal process. Then, micro-pyramid patterns with a side length of 50 μm were formed on the nano-patterns using micro-indentation. To verify the 3D micro-nano hybrid patterns, we replicated them by a hot-embossing process. 3D micro-nano hybrid patterns may be used in nano-photonic devices and nano-biochip applications.
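The electroplating thesis abstract above combines electrochemical deposition of nickel with micro machining. A first-order way to reason about such deposition processes is Faraday's law of electrolysis, which relates the charge passed to the deposited mass and hence the layer thickness. The sketch below estimates nickel thickness for an assumed current density, plating time and current efficiency; the process numbers are illustrative assumptions, not values from the thesis.

# Hedged sketch: nickel layer thickness from Faraday's law of electrolysis.
# Material constants are standard; current density, time and current
# efficiency are assumed for illustration only.

F = 96485.0          # C/mol, Faraday constant
M_NI = 58.69         # g/mol, molar mass of nickel
RHO_NI = 8.908       # g/cm^3, density of nickel
Z_NI = 2             # electrons per Ni(2+) ion reduced

def plated_thickness_um(current_density_a_cm2: float, time_s: float,
                        efficiency: float = 0.95) -> float:
    """Deposited thickness in micrometres for a given current density (A/cm^2),
    plating time (s) and cathodic current efficiency."""
    mass_per_area = M_NI * current_density_a_cm2 * time_s * efficiency / (Z_NI * F)  # g/cm^2
    return mass_per_area / RHO_NI * 1e4   # cm -> um

if __name__ == "__main__":
    j = 0.02        # A/cm^2, i.e. 2 A/dm^2 (hypothetical)
    t = 3600.0      # s, one hour of plating
    print(f"Estimated Ni thickness: {plated_thickness_um(j, t):.1f} um")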
Shin, Hong Gue; Kwon, Jong Tae; Seo, Young Ho; Kim, Byeong Hee We developed a simple and cost-effective method of fabricating 3D micro-nano hybrid patterns in which micro-indentation is applied on the anodized aluminum substrate. Nano-patterns were formed first on the aluminum substrate, and then micro-patterns were fabricated by deforming the nano-patterned aluminum substrate. Hemispherical nano-patterns with a 150 nm-diameter on an aluminum substrate were fabricated by anodizing and alumina removing process. Then, micro-pyramid patterns with a side-length of 50 μm were formed on the nano-patterns using micro-indentation. To verify 3D micro-nano hybrid patterns, we replicated 3D micro-nano hybrid patterns by a hot-embossing process. 3D micro-nano hybrid patterns may be used in nano-photonic devices and nano-biochips applications Photolithography and Micro-Fabrication/ Packaging Laboratories Federal Laboratory Consortium — The Photolithography and Micro-Fabrication/Packaging laboratories provide research level semiconductor processing equipment and facilities that do not require a full... Levitating Micro-Actuators: A Review Kirill V. Poletkin Full Text Available Through remote forces, levitating micro-actuators completely eliminate mechanical attachment between the stationary and moving parts of a micro-actuator, thus providing a fundamental solution to overcoming the domination of friction over inertial forces at the micro-scale. Eliminating the usual mechanical constraints promises micro-actuators with increased operational capabilities and low dissipation energy. Further reduction of friction and hence dissipation by means of vacuum leads to dramatic increases of performance when compared to mechanically tethered counterparts. In order to efficiently employ the benefits provided by levitation, micro-actuators are classified according to their physical principles as well as by their combinations. Different operating principles, structures, materials and fabrication methods are considered. A detailed analysis of the significant achievements in the technology of micro-optics, micro-magnets and micro-coil fabrication, along with the development of new magnetic materials during recent decades, which has driven the creation of new application domains for levitating micro-actuators is performed. Micro-Lid For Sealing Sample Reservoirs of micro-Extraction Systems National Aeronautics and Space Administration — We propose to develop a proof-of-concept micro-Lid (µLid) to tightly seal a micro-sampler or micro-extraction system. Fabrication of µLid would be conducted in the... [Researches on biomechanics of micro-implant-bone interface and optimum design of micro implant's neck]. Deng, Feng; Zhang, Lei; Zhang, Yi; Song, Jin-lin; Fan, Yuboa To compare and analyze the stress distribution at the micro-implant-bone interface based on the different micro-implant-bone conditioned under orthodontic load, and to optimize the design of micro implant's neck. An adult skull with all tooth was scanned by spiral CT, and the data were imported into computer for three-dimensional reconstruction with software Mimics 9.0. The three dimensional finite element models of three micro-implant-bone interfaces(initial stability, full osseointegration and fibrous integration) were analyzed by finite element analysis software ABAQUS6.5. The primary stress distributions of different micro-implant-bone conditions were evaluated when 2N force was loaded. 
Then the diameter less than 1.5 mm of the micro implant's neck was added with 0.2 mm, to compare the stress distribution of the modified micro-implant-bone interface with traditional type. The stress mostly concentrated on the neck of micro implant and the full osseointegration interface in all models showed the lowest strain level. Compared with the traditional type, the increasing diameter neck of the micro implant obviously decreased the stress level in all the three conditions. The micro-implant-bone interface and the diameter of micro implant's neck both are the important influence factors to the stress distribution of micro implant. Integrated Micro Product and Technology Development The paper addresses the issues of integrated micro product and technology development. The implications of the decisions in the design phase on the subsequent manufacturing processes are considered vital. A coherent process chain is a necessary prerequisite for the realisation of the industrial...... potential of micro technology.... Can micro-volunteering help in Africa? CSIR Research Space (South Africa) Butgereit, L Full Text Available is convenient to the micro-volunteer, and in small pieces of time (bitesized). This paper looks at a micro-volunteering project where participants can volunteer for five to ten minutes at a time using a smart phone and assist pupils with their mathematics.... Micro-incubator for bacterial biosensing applications Clasen, E Full Text Available The presence of Escherichia coli (E. coli) is a commonly used indicator micro-organism to determine whether water is safe for human consumption. This paper discusses the design of a micro-incubator that can be applied to concentrate bacteria prior... A novel differential frequency micro-gyroscope Nayfeh, A. H.; Abdel-Rahman, E. M.; Ghommem, M. We present a frequency-domain method to measure angular speeds using electrostatic micro-electro-mechanical system actuators. Towards this end, we study a single-axis gyroscope made of a micro-cantilever and a proof-mass coupled to two fixed Wafer-scale micro-optics fabrication Voelkel, Reinhard Micro-optics is an indispensable key enabling technology for many products and applications today. Probably the most prestigious examples are the diffractive light shaping elements used in high-end DUV lithography steppers. Highly-efficient refractive and diffractive micro-optical elements are used for precise beam and pupil shaping. Micro-optics had a major impact on the reduction of aberrations and diffraction effects in projection lithography, allowing a resolution enhancement from 250 nm to 45 nm within the past decade. Micro-optics also plays a decisive role in medical devices (endoscopes, ophthalmology), in all laser-based devices and fiber communication networks, bringing high-speed internet to our homes. Even our modern smart phones contain a variety of micro-optical elements. For example, LED flash light shaping elements, the secondary camera, ambient light and proximity sensors. Wherever light is involved, micro-optics offers the chance to further miniaturize a device, to improve its performance, or to reduce manufacturing and packaging costs. Wafer-scale micro-optics fabrication is based on technology established by the semiconductor industry. Thousands of components are fabricated in parallel on a wafer. This review paper recapitulates major steps and inventions in wafer-scale micro-optics technology. The state-of-the-art of fabrication, testing and packaging technology is summarized. 
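The differential frequency micro-gyroscope abstract above relies, like other vibratory MEMS gyroscopes, on the Coriolis coupling between a driven proof mass and an applied rotation. The sketch below evaluates the Coriolis force F = 2·m·Ω·v for an assumed proof mass, drive amplitude and rotation rate, to give a feel for how small the signal forces are; all numbers are invented for illustration and are not taken from the abstract.

# Hedged sketch: peak Coriolis force on the proof mass of a vibratory MEMS
# gyroscope, F = 2 * m * Omega * v. All parameter values are hypothetical.
import math

def coriolis_force(mass_kg: float, rate_rad_s: float, velocity_m_s: float) -> float:
    """Magnitude of the Coriolis force for velocity perpendicular to the rotation axis."""
    return 2.0 * mass_kg * rate_rad_s * velocity_m_s

if __name__ == "__main__":
    m = 1e-9                              # kg, ~1 microgram proof mass (assumed)
    f_drive = 10e3                        # Hz, drive frequency (assumed)
    amp = 5e-6                            # m, drive amplitude (assumed)
    v_peak = 2 * math.pi * f_drive * amp  # peak drive velocity
    omega = math.radians(1.0)             # 1 deg/s input rotation rate
    f_c = coriolis_force(m, omega, v_peak)
    print(f"Peak drive velocity: {v_peak:.3f} m/s")
    print(f"Peak Coriolis force: {f_c * 1e12:.1f} pN")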
Addressing Youth Employment Through Micro- and Small ... ... Employment Through Micro- and Small-Enterprise Development in Ethiopia. Youth unemployment has emerged as a key challenge facing developing and ... to youth to start small businesses and to youth-led micro- and small-enterprises. ... the barriers and challenges young Ethiopian men and women face in the labour ... A thermally self-sustained micro-power plant with integrated micro-solid oxide fuel cells, micro-reformer and functional micro-fluidic carrier Scherrer, Barbara; Evans, Anna; Santis-Alvarez, Alejandro J.; Jiang, Bo; Martynczuk, Julia; Galinski, Henning; Nabavi, Majid; Prestat, Michel; Tölke, René; Bieberle-Hütter, Anja; Poulikakos, Dimos; Muralt, Paul; Niedermann, Philippe; Dommann, Alex; Maeder, Thomas; Heeb, Peter; Straessle, Valentin; Muller, Claude; Gauckler, Ludwig J. Low temperature micro-solid oxide fuel cell (micro-SOFC) systems are an attractive alternative power source for small-size portable electronic devices due to their high energy efficiency and density. Here, we report on a thermally self-sustainable reformer-micro-SOFC assembly. The device consists of a micro-reformer bonded to a silicon chip containing 30 micro-SOFC membranes and a functional glass carrier with gas channels and screen-printed heaters for start-up. Thermal independence of the device from the externally powered heater is achieved by exothermic reforming reactions above 470 °C. The reforming reaction and the fuel gas flow rate of the n-butane/air gas mixture controls the operation temperature and gas composition on the micro-SOFC membrane. In the temperature range between 505 °C and 570 °C, the gas composition after the micro-reformer consists of 12 vol.% to 28 vol.% H2. An open-circuit voltage of 1.0 V and maximum power density of 47 mW cm-2 at 565 °C is achieved with the on-chip produced hydrogen at the micro-SOFC membranes. Classification of assembly techniques for micro products Hansen, Hans Nørgaard; Tosello, Guido; Gegeckaite, Asta of components and level of integration are made. This paper describes a systematic characterization of micro assembly methods. This methodology offers the opportunity of a cross comparison among different techniques to gain a choosing principle of the favourable micro assembly technology in a specific case... Micromachining of buried micro channels in silicon de Boer, Meint J.; Tjerkstra, R.W.; Berenschot, Johan W.; Jansen, Henricus V.; Burger, G.J.; Burger, G.J.; Gardeniers, Johannes G.E.; Elwenspoek, Michael Curt; van den Berg, Albert A new method for the fabrication of micro structures for fluidic applications, such as channels, cavities, and connector holes in the bulk of silicon wafers, called buried channel technology (BCT), is presented in this paper. The micro structures are constructed by trench etching, coating of the Investigation on the micro injection molding process of an overmolded multi-material micro component Baruffi, Federico; Calaon, Matteo; Tosello, Guido and difficult assembly steps, being the plastic molded directly on a metal substrate. In this scenario, an investigation on the fully automated micro overmolding manufacturing technology of a three-material micro component for acoustic applications has been carried out. Preliminary experiments allowed......Micro injection molding (μIM) is one of the few technologies capable of meeting the increasing demand of complex shaped micro plastic parts. 
This process, combined with the overmolding technique, allows a fast and cost-efficient production of multi-material micro components, saving numerous... MicroRNAs and Presbycusis. Hu, Weiming; Wu, Junwu; Jiang, Wenjing; Tang, Jianguo Presbycusis (age-related hearing loss) is the most universal sensory degenerative disease in elderly people caused by the degeneration of cochlear cells. Non-coding microRNAs (miRNAs) play a fundamental role in gene regulation in almost every multicellular organism, and control the aging processes. It has been identified that various miRNAs are up- or down-regulated during mammalian aging processes in tissue-specific manners. Most miRNAs bind to specific sites on their target messenger-RNAs (mRNAs) and decrease their expression. Germline mutation may lead to dysregulation of potential miRNAs expression, causing progressive hair cell degeneration and age-related hearing loss. Therapeutic innovations could emerge from a better understanding of diverse function of miRNAs in presbycusis. This review summarizes the relationship between miRNAs and presbycusis, and presents novel miRNAs-targeted strategies against presbycusis. An electromagnetic micro-undulator Nassiri, A.; Turner, L.R. Microfabrication technology using the LIGA (a German acronym for Lithography, Electroforming, and Molding) process offers an attractive alternative for fabricating precision devices with micron-sized features. One such device is a mm-sized micro-undulator with potential applications in a table-top synchrotron light source for medical and other industrial uses. The undulator consists of a silver conductor embedded in poles and substrate of nickel-iron. Electromagnetic modeling of the undulator is done using the eddy current computer code ELEKTRA. Computations predict a field pattern of appropriate strength and quality if the current can be prevented from being shunted from silver by the nickel-iron poles either through insulation or through slotted poles. The design of the undulator along with the computational results are discussed MicroComputed Tomography: Methodology and Applications Stock, Stuart R. Due to the availability of commercial laboratory systems and the emergence of user facilities at synchrotron radiation sources, studies of microcomputed tomography or microCT have increased exponentially. MicroComputed Technology provides a complete introduction to the technology, describing how to use it effectively and understand its results. The first part of the book focuses on methodology, covering experimental methods, data analysis, and visualization approaches. The second part addresses various microCT applications, including porous solids, microstructural evolution, soft tissue studies, multimode studies, and indirect analyses. The author presents a sufficient amount of fundamental material so that those new to the field can develop a relative understanding of how to design their own microCT studies. One of the first full-length references dedicated to microCT, this book provides an accessible introduction to field, supplemented with application examples and color images. Biologically Inspired Micro-Flight Research Raney, David L.; Waszak, Martin R. Natural fliers demonstrate a diverse array of flight capabilities, many of which are poorly understood. NASA has established a research project to explore and exploit flight technologies inspired by biological systems. 
One part of this project focuses on dynamic modeling and control of micro aerial vehicles that incorporate flexible wing structures inspired by natural fliers such as insects, hummingbirds and bats. With a vast number of potential civil and military applications, micro aerial vehicles represent an emerging sector of the aerospace market. This paper describes an ongoing research activity in which mechanization and control concepts for biologically inspired micro aerial vehicles are being explored. Research activities focusing on a flexible fixed- wing micro aerial vehicle design and a flapping-based micro aerial vehicle concept are presented. MicroRNA involvement in glioblastoma pathogenesis Novakova, Jana; Slaby, Ondrej; Vyzula, Rostislav; Michalek, Jaroslav MicroRNAs are endogenously expressed regulatory noncoding RNAs. Altered expression levels of several microRNAs have been observed in glioblastomas. Functions and direct mRNA targets for these microRNAs have been relatively well studied over the last years. According to these data, it is now evident, that impairment of microRNA regulatory network is one of the key mechanisms in glioblastoma pathogenesis. MicroRNA deregulation is involved in processes such as cell proliferation, apoptosis, cell cycle regulation, invasion, glioma stem cell behavior, and angiogenesis. In this review, we summarize the current knowledge of miRNA functions in glioblastoma with an emphasis on its significance in glioblastoma oncogenic signaling and its potential to serve as a disease biomarker and a novel therapeutic target in oncology. Precision moulding of polymer micro components The present research work contains a study concerning polymer micro components manufacturing by means of the micro injection moulding (µIM) process. The overall process chain was considered and investigated during the project, including part design and simulation, tooling, process analysis, part...... optimization, quality control, multi-material solutions. A series of experimental investigations were carried out on the influence of the main µIM process factors on the polymer melt flow within micro cavities. These investigations were conducted on a conventional injection moulding machine adapted...... to the production of micro polymer components, as well as on a micro injection moulding machine. A new approach based on coordinate optical measurement of flow markers was developed during the project for the characterization of the melt flow. In-line pressure measurements were also performed to characterize... Micro-Scale Regenerative Heat Exchanger Moran, Matthew E.; Stelter, Stephan; Stelter, Manfred A micro-scale regenerative heat exchanger has been designed, optimized and fabricated for use in a micro-Stirling device. Novel design and fabrication techniques enabled the minimization of axial heat conduction losses and pressure drop, while maximizing thermal regenerative performance. The fabricated prototype is comprised of ten separate assembled layers of alternating metal-dielectric composite. Each layer is offset to minimize conduction losses and maximize heat transfer by boundary layer disruption. A grating pattern of 100 micron square non-contiguous flow passages were formed with a nominal 20 micron wall thickness, and an overall assembled ten-layer thickness of 900 microns. Application of the micro heat exchanger is envisioned in the areas of micro-refrigerators/coolers, micropower devices, and micro-fluidic devices. 
Laser beam micro-milling of micro-channels in aerospace alloys Ahmed, Naveed; Al-Ahmari, Abdulrahman This volume is greatly helpful to micro-machining and laser engineers as it offers obliging guidelines about the micro-channel fabrications through Nd:YAG laser beam micro-milling. The book also demonstrates how the laser beam micro-milling behaves when operating under wet conditions (under water), and explores what are the pros and cons of this hybrid technique. From the predictive mathematical models, the readers can easily estimate the resulting micro-channel size against the desired laser parametric combinations. The book considers micro-channels in three highly important research materials commonly used in aerospace industry: titanium alloy Ti-6Al-4V, nickel alloy Inconel 718 and aluminum alloy AA 2024. Therefore, the book is highly practicable in the fields of micro-channel heat exchangers, micro-channel aerospace turbine blades, micro-channel heat pipes, micro-coolers and micro-channel pulsating heat plates. These are frequently used in various industries such as aerospace, automotive, biomedical and m... Three-dimensional micro assembly of a hinged nickel micro device by magnetic lifting and micro resistance welding Chang, Chun-Wei; Hsu, Wensyang The three-dimensional micro assembly of hinged nickel micro devices by magnetic lifting and micro resistance welding is proposed here. By an electroplating-based surface machining process, the released nickel structure with the hinge mechanism can be fabricated. Lifting of the released micro structure to different tilted angles is accomplished by controlling the positions of a magnet beneath the device. An in situ electro-thermal actuator is used here to provide the pressing force in micro resistance welding for immobilizing the tilted structure. The proposed technique is shown to immobilize micro devices at controlled angles ranging from 14° to 90° with respect to the substrate. Design parameters such as the electro-thermal actuator and welding beam width are also investigated. It is found that there is a trade-off in beam width design between large contact pressure and low thermal deformation. Different dominated effects from resistivity enhancement and contact area enlargement during the welding process are also observed in the dynamic resistance curves. Finally, a lifted and immobilized electro-thermal bent-beam actuator is shown to displace upward about 27.7 µm with 0.56 W power input to demonstrate the capability of electrical transmission at welded joints by the proposed 3D micro assembly technique Manufacturing and application of micro computer for control Park, Seung Man; Heo, Gyeong; Yun, Jun Young This book deals with machine code and assembly program for micro computer. It composed of 20 chapters, which are micro computer system, practice of a storage cell, manufacturing 1 of micro computer, manufacturing 2 of micro computer, manufacturing of micro computer AID-80A, making of machine language, interface like Z80-PIO and 8255A(PPI), counter and timer interface, exercise of basic command, arithmetic operation, arrangement operation, an indicator control, music playing, detection of input of PIO. control of LED of PIO, PIO mode, CTC control by micro computer, SIO control by micro computer and application by micro computer. Clasen, Estine; Land, Kevin; Joubert, Trudi-Heleen The presence of Escherichia coli (E. 
coli) is a commonly used indicator micro-organism to determine whether water is safe for human consumption. This paper discusses the design of a micro-incubator that can be applied to concentrate bacteria prior to environmental water quality screening tests. High sensitivity and a rapid test time are essential, and there is a great need for these tests to be implemented on-site without the use of laboratory infrastructure. In the light of these requirements, a mobile micro-incubator was designed, manufactured and characterised. A polydimethylsiloxane (PDMS) receptacle has been designed to house the 1-5 ml cell culture sample. A nano-silver printed electronics micro-heater has been designed to incubate the bacterial sample, with an array of temperature sensors implemented to accurately measure the sample temperature at various locations in the cell culture well. The micro-incubator limits the incubation temperature range to 37 ± 3 °C in order to ensure near optimal growth of the bacteria at all times. The incubation time is adjustable between 30 minutes and 9 hours with a maximum rise time of 15 minutes to reach the set-point temperature. The surface area of the printed nano-silver heating element is 500 mm². Electrical and COMSOL Multiphysics simulations are included in order to give insight into micro-incubator temperature control. The design and characterization of this micro-incubator allows for further research in biosensing applications. Updating the Micro-Tom TILLING platform. Okabe, Yoshihiro; Ariizumi, Tohru; Ezura, Hiroshi The dwarf tomato variety Micro-Tom is regarded as a model system for functional genomics studies in tomato. Various tomato genomic tools in the genetic background of Micro-Tom have been established, such as mutant collections, genome information and a metabolomic database. Recent advances in tomato genome sequencing have brought about a significant need for reverse genetics tools that are accessible to the larger community, because a great number of gene sequences have become available from public databases. To meet the requests from the tomato research community, we have developed the Micro-Tom Targeting-Induced Local Lesions IN Genomes (TILLING) platform, which comprises more than 5000 EMS-mutagenized lines. The platform serves as a reverse genetics tool for efficiently identifying mutant alleles in parallel with the development of Micro-Tom mutant collections. The combination of Micro-Tom mutant libraries and the TILLING approach enables researchers to accelerate the isolation of desirable mutants for unraveling gene function or breeding. To upgrade the genomic tools of Micro-Tom, the development of a new mutagenized population is underway. In this paper, the current status of the Micro-Tom TILLING platform and its future prospects are described. Fabrication of micro metallic valve and pump Yang, Ming; Kabasawa, Yasunari; Ito, Kuniyoshi Fabrication of micro devices by using micro metal forming was proposed by the authors. We developed a desktop servo-press machine with a precise tooling system. Precise press forming processes, including micro forging and micro joining, have been carried out in a progressive die. In this study, a micro metallic valve and pump were fabricated by using precise press forming. The components are made of sheet metals and assembled into a unit in the progressive die. A micro check-valve with a diameter of 3 mm and a length of 3.2 mm was fabricated, and its flow resistance was evaluated.
The results show that the check valve provides good leakage-proof performance. Since the valve is a unit part with dimensions of several millimeters, it has the advantage of being adaptable to various pump designs. Here, two kinds of micro pumps with the check-valves were fabricated. One is a diaphragm pump actuated by vibration of the diaphragm, and the other is a tube-shaped pump actuated by resonance. The flow quantities of the pumps were evaluated, and the results show that both pumps have high pumping performance. Aluminum Templates of Different Sizes with Micro-, Nano- and Micro/Nano-Structures for Cell Culture Ming-Liang Yen Full Text Available This study investigates the results of cell cultures on aluminum (Al) templates with flat-structures, micro-structures, nano-structures and micro/nano-structures. An Al template with flat-structure was obtained by electrolytic polishing; an Al template with micro-structure was obtained by micro-powder blasting; an Al template with nano-structure was obtained by aluminum anodization; and an Al template with micro/nano-structure was obtained by micro-powder blasting and then anodization. Osteoblast-like cells were cultured on aluminum templates with various structures. The microculture tetrazolium test assay was utilized to assess the adhesion, elongation, and proliferation behaviors of cultured osteoblast-like cells on aluminum templates with flat-structures, micro-structures, nano-structures, and micro/nano-structures. The results showed that the surface of the micro/nano-structured aluminum template was superhydrophilic, and also revealed that an aluminum template with micro/nano-structure could provide the most suitable growth conditions for cell culture. Formability of Micro-Tubes in Hydroforming Hartl, Christoph; Anyasodor, Gerald; Lungershausen, Joern Micro-hydroforming is a down-scaled metal forming process, based on the expansion of micro-tubes by internal pressurization within a die cavity. The objective of micro-hydroforming is to provide a technology for the economic mass production of complex-shaped hollow micro-components. The influence of size effects in metal forming processes increases as metal parts are scaled down. Investigations into the change in formability of micro-tubes due to metal part scaling down constituted an important subject within the conducted fundamental research work. Experimental results are presented concerning the analysis of the formability of micro-tubes made from stainless steel AISI 304 with an outer diameter of 800 μm and a wall thickness of 40 μm. An average ratio of tube wall thickness to grain size of between 1.54 and 2.56 was analyzed. Miniaturised mechanical standard methods as well as bulge tests with internal hydrostatic pressurization of the tubular specimens were applied to analyze the influence of size-dependent effects. A test device was developed for the bulge experiments which enabled the pressurization of micro-tubes with internal pressures up to 4000 bar. To determine the maximum achievable expansion ratio, the tubes were pressurized in the bulge tests with increasing internal pressure until instability due to necking and subsequent bursting occurred. Comparisons with corresponding tests of macro-tubes, made from the material investigated here, showed a change in formability of micro-tubes which was attributed to the scaling down of the hydroforming process.
In addition, a restricted applicability of existing theoretical correlations for the determination of the maximum pressure at bursting was observed for down-scaled micro-hydroforming. Progress in micro-pattern gas detectors Bellazzini, Ronaldo Micro-Pattern Gas Detectors are position-sensitive proportional counters whose sense electrodes are constructed using micro-electronics , thin-film or advanced PCB techniques.The feature size attainable using these methods is of the order of a few microns and the detectors demonstrate excellent spatial resolution and fast charge collection. I will review recent progress on Micro patterned Gas Detectors for tracking and other cross-disciplinary applications.I will focus on the design principles,performance capability and limitations. A short list of interesting applications will be discussed Micro Nano Replication Processes and Applications Kang, Shinill This book is an introduction to the fundamentals and processes for micro and nano molding for plastic components. In addition to the basics, the book covers applications details and examples. The book helps both students and professionals to understand and work with the growing tools of molding and uses for micro and nano-sized plastic parts.Provides a comprehensive presentation on fundamentals and practices of manufacturing for micro / nano sized plastics partsCovers a relatively new but fast-growing field that is impacting any industry using plastic parts in their products (electronics, tele Neutrino Interactions in MicroBooNE Del Tutto, Marco MicroBooNE is a liquid-argon-based neutrino experiment, which began collecting data in Fermilab's Booster neutrino beam in October 2015. Physics goals of the experiment include probing the source of the anomalous excess of electron-like events in MiniBooNE. In addition to this, MicroBooNE is carrying out an extensive cross section physics program that will help to probe current theories on neutrino-nucleon interactions and nuclear effects. These proceedings summarise the status of MicroBooNE'... MR of pituitary micro-adenomas Le Marec, E.; Ait Ameur, A.; David, H.; Pharaboz, C. Most of the time, rationales to look for pituitary micro-adenomas are based on endocrinal disorder. MRI is often helpful to confirm diagnosis. It gives information about micro-adenomas size and localisation. If conventional sequence are inadequate, a dynamic sequence has then to be performed after Gadolinium injection. Any disorder observed from the pituitary gland must be correlated with the clinical observation and results from biochemistry analysis. False positive happens quite open because of gland morphological variation, incidentalomas and partial volumes. MRI offers the possibility to follow-up treated micro-adenomas evolution especially to detect recurrence. (author) A Micro-Grid Battery Storage Management Mahat, Pukar; Escribano Jiménez, Jorge; Moldes, Eloy Rodríguez An increase in number of distributed generation (DG) units in power system allows the possibility of setting-up and operating micro-grids. In addition to a number of technical advantages, micro-grid operation can also reduce running costs by optimally scheduling the generation and/or storage...... systems under its administration. This paper presents an optimized scheduling of a micro-grid battery storage system that takes into account the next-day forecasted load and generation profiles and spot electricity prices. Simulation results show that the battery system can be scheduled close to optimal... 
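The battery-scheduling idea summarised in the micro-grid storage abstract above can be illustrated with a small sketch. The rule below is a deliberately simple price-threshold heuristic, not the optimisation used by the authors: charge during the cheapest forecast hours and discharge during the most expensive ones, within assumed capacity and power limits. All prices, limits and the rule itself are hypothetical.

# Illustrative price-threshold scheduling for a micro-grid battery.
# Hypothetical data and limits; not the paper's optimisation method.

prices = [30, 28, 25, 24, 27, 35, 50, 62, 55, 48, 45, 40,
          38, 36, 35, 37, 42, 58, 70, 65, 52, 44, 38, 33]  # spot price forecast, 24 h
capacity_kwh = 100.0   # usable storage
max_power_kw = 25.0    # charge/discharge limit per hour
soc = 50.0             # initial state of charge, kWh

low = sorted(prices)[len(prices) // 4]        # cheapest quartile -> charge
high = sorted(prices)[3 * len(prices) // 4]   # priciest quartile -> discharge

schedule = []
for p in prices:
    if p <= low and soc < capacity_kwh:
        action = min(max_power_kw, capacity_kwh - soc)   # charge
        soc += action
        schedule.append(("charge", action, p))
    elif p >= high and soc > 0:
        action = min(max_power_kw, soc)                  # discharge
        soc -= action
        schedule.append(("discharge", action, p))
    else:
        schedule.append(("idle", 0.0, p))

for hour, (mode, kwh, price) in enumerate(schedule):
    print(f"{hour:02d}:00  {mode:9s} {kwh:5.1f} kWh at {price} per MWh")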
MICRO AUTO GASIFICATION SYSTEM: EMISSIONS ... A compact, CONEX-housed waste to energy unit, Micro Auto Gasification System (MAGS), was characterized for air emissions from burning of military waste types. The MAGS unit is a dual chamber gasifier with a secondary diesel-fired combustor. Eight tests were conducted with multiple waste types in a 7-day period at the Kilauea Military Camp in Hawai'i. The emissions characterized were chosen based on regulatory emissions limits as well as their ability to cause adverse health effects on humans: particulate matter (PM), mercury, heavy metals, volatile organic compounds (VOCs), polyaromatic hydrocarbons (PAHs), and polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs). Three military waste feedstock compositions reflecting the variety of wastes to be encountered in theatre were investigated: standard waste (SW), standard waste with increased plastic content (HP), standard waste without SW food components but added first strike ration (FSR) food and packaging material (termed FSR). A fourth waste was collected from the Kilauea dumpster that served the dining facility and room lodging (KMC). Limited scrubber water and solid ash residue samples were collected to obtain a preliminary characterization of these effluents/residues.Gasifying SW, HP, and KMC resulted in similar PCDD/PCDF stack concentrations, 0.26-0.27 ng TEQ/m3 at 7% O2, while FSR waste generated a notably higher stack concentration of 0.68 ng TEQ/m3 at 7% O2. The PM emission An on-line monitoring system for a micro electrical discharge machining (micro-EDM) process Liao, Y S; Chang, T Y; Chuang, T J A pulse-type discriminating system to monitor the process of micro electrical discharge machining (micro-EDM) is developed and implemented. The specific features are extracted and the pulses from a RC-type power source are classified into normal, effective arc, transient short circuit and complex types. An approach to discriminate the pulse type according to three durations measured at three pre-determined voltage levels of a pulse is proposed. The developed system is verified by using simulated signals. Discrimination of the pulse trains in actual machining processes shows that the pulses are mainly the normal type for micro wire-EDM and micro-EDM milling. The pulse-type distribution varies during the micro-EDM drilling process. The percentage of complex-type pulse increases monotonically with the drilling depth. It starts to drop when the gap condition is seriously deteriorated. Accordingly, an on-line monitoring strategy for the micro-EDM drilling process is proposed Modeling High Pressure Micro Hollow Cathode Discharges Boeuf, Jean-Pierre; Pitchford, Leanne This report results from a contract tasking CPAT as follows: The Grantee will perform theoretical modeling of point, surface, and volume high-pressure plasmas created using Micro Hollow Cathode Discharge sources... The development of micro-gyroscope technology Liu, Kai; Zhang, Weiping; Chen, Wenyuan; Li, Kai; Dai, Fuyan; Cui, Feng; Wu, Xiaosheng; Ma, Gaoyin; Xiao, Qijun This review reports an overview and development of micro-gyroscope. The review first presents different types of micro-gyroscopes. Micro-gyroscopes in this review are categorized into Coriolis gyroscope, levitated rotor gyroscope, Sagnac gyroscope, nuclear magnetic resonance (NMR) gyroscope according to the working principle. Different principles, structures, materials, fabrications and control technologies of micro-gyroscopes are analyzed. 
This review compares different classes of gyroscopes in the aspects such as fabrication method, detection axis, materials, size and so on. Finally, the review evaluates the key technologies on how to improve the precision and anti-jamming ability and to extend the available applications of the gyroscopes in the market and patents as well. (topical review) Micro Resistojet for Small Satellites, Phase II National Aeronautics and Space Administration — Micro-resistojets offer an excellent combination of simplicity, performance and wet system mass for small satellites (<100 kg, <50 watts) requiring mN level... MicroCar 2003. Abstracts of papers Mechatronics for automotive applications is an important trend, combining mechanics, electronics and information technology. Micro- and nanomechatronics, particularly innovative research disciplines, will help create, in combination with advanced solutions from microsystem technologies, e.g. the electronic packaging, a range of entirely new developments in automobiles. Under the umbrella of the Automotive Suppliers Fair Z2003, it is happening for the very first time now that a momentous scientific conference, such as the MicroCar 2003, brings together representatives from car manufacturers and the electronics industry, as well as a large number of experts from technical colleges, universities and research institutes to discuss the new potentials and possibilities presented by the use of micro and nanomaterials in Leipzig. With the presentation of 50 papers, speakers will inform about their latest research results as well as current trends in micro and nanotechnologies for automotives. This issue is publishing the abstracts of this scientific event. Micro- and nanodevices integrated with biomolecular probes. Alapan, Yunus; Icoz, Kutay; Gurkan, Umut A Understanding how biomolecules, proteins and cells interact with their surroundings and other biological entities has become the fundamental design criterion for most biomedical micro- and nanodevices. Advances in biology, medicine, and nanofabrication technologies complement each other and allow us to engineer new tools based on biomolecules utilized as probes. Engineered micro/nanosystems and biomolecules in nature have remarkably robust compatibility in terms of function, size, and physical properties. This article presents the state of the art in micro- and nanoscale devices designed and fabricated with biomolecular probes as their vital constituents. General design and fabrication concepts are presented and three major platform technologies are highlighted: microcantilevers, micro/nanopillars, and microfluidics. Overview of each technology, typical fabrication details, and application areas are presented by emphasizing significant achievements, current challenges, and future opportunities. Copyright © 2015 Elsevier Inc. All rights reserved. Micro Resistojet for Small Satellites, Phase I National Aeronautics and Space Administration — Micro-resistojets offer the best combination of simplicity, performance, wet system mass and power consumption for small satellites (<100kg, <50Watts)... Micro benchtop optics by bulk silicon micromachining Lee, Abraham P.; Pocha, Michael D.; McConaghy, Charles F.; Deri, Robert J. 
Micromachining of bulk silicon utilizing the parallel etching characteristics of bulk silicon and integrating the parallel etch planes of silicon with silicon wafer bonding and impurity doping, enables the fabrication of on-chip optics with in situ aligned etched grooves for optical fibers, micro-lenses, photodiodes, and laser diodes. Other optical components that can be microfabricated and integrated include semi-transparent beam splitters, micro-optical scanners, pinholes, optical gratings, micro-optical filters, etc. Micromachining of bulk silicon utilizing the parallel etching characteristics thereof can be utilized to develop miniaturization of bio-instrumentation such as wavelength monitoring by fluorescence spectrometers, and other miniaturized optical systems such as Fabry-Perot interferometry for filtering of wavelengths, tunable cavity lasers, micro-holography modules, and wavelength splitters for optical communication systems. Hollow Micro-/Nanostructures: Synthesis and Applications Lou, Xiong Wen (David); Archer, Lynden A.; Yang, Zichao for Portland cement, to produce concrete with enhanced strength and durability. This review is devoted to the progress made in the last decade in synthesis and applications of hollow micro-nanostructures. We present a comprehensive overview of synthetic Aerial photogrammetry procedure optimized for micro uav T. Anai Full Text Available This paper proposes the automatic aerial photogrammetry procedure optimized for Micro UAV that has ability of autonomous flight. The most important goal of our proposed method is the reducing the processing cost for fully automatic reconstruction of DSM from a large amount of image obtained from Micro UAV. For this goal, we have developed automatic corresponding point generation procedure using feature point tracking algorithm considering position and attitude information, which obtained from onboard GPS-IMU integrated on Micro UAV. In addition, we have developed the automatic exterior orientation and registration procedure from the automatic generated corresponding points on each image and position and attitude information from Micro UAV. Moreover, in order to reconstruct precise DSM, we have developed the area base matching process which considering edge information. In this paper, we describe processing flow of our automatic aerial photogrammetry. Moreover, the accuracy assessment is also described. Furthermore, some application of automatic reconstruction of DSM will be desired. Micro Learning: A Modernized Education System Omer Jomah Full Text Available Learning is an understanding of how the human brain is wired to learning rather than to an approach or a system. It is one of the best and most frequent approaches for the 21st century learners. Micro learning is more interesting due to its way of teaching and learning the content in a small, very specific burst. Here the learners decide what and when to learn. Content, time, curriculum, form, process, mediality, and learning type are the dimensions of micro learning. Our paper will discuss about micro learning and about the micro-content management system. The study will reflect the views of different users, and will analyze the collected data. Finally, it will be concluded with its pros and cons. A CAMAC and FASTBUS engineering test environment supported by a MicroVAX/MicroVMS system Logg, C.A. A flexible, multiuser engineering test environment has been established for the engineers in SLAC's Electronic Instrumentation Engineering group. 
The system hardware includes a standard MicroVAX II and MicroVAX I with multiple CAMAC, FASTBUS, and GPIB instrumentation buses. The system software components include MicroVMS licenses with DECNET/SLACNET, FORTRAN, PASCAL, FORTH, and a versatile graphical display package. In addition, there are several software utilities available to facilitate FASTBUS and CAMAC prototype hardware debugging. 16 refs., 7 figs Micro Loudspeaker Behaviour versus 6½" Driver, Micro Loudspeaker Parameter Drift Pedersen, Bo Rohde This study tested micro loudspeaker behavior from the perspective of loudspeaker parameter drift. The main difference between traditional transducers and micro loudspeakers, apart from their size, is their suspension construction. The suspension generally is a loudspeaker's most unstable parameter......, and the study investigated temperature drift and signal dependency. There is investigated three different micro loudspeakers and compared their behavior to that of a typical bass mid-range loudspeaker unit. There is measured all linear loudspeaker parameters at different temperatures.... Challenges in high accuracy surface replication for micro optics and micro fluidics manufacture Tosello, Guido; Hansen, Hans Nørgaard; Calaon, Matteo Patterning the surface of polymer components with microstructured geometries is employed in optical and microfluidic applications. Mass fabrication of polymer micro structured products is enabled by replication technologies such as injection moulding. Micro structured tools are also produced...... by replication technologies such as nickel electroplating. All replication steps are enabled by a high precision master and high reproduction fidelity to ensure that the functionalities associated with the design are transferred to the final component. Engineered surface micro structures can be either... The micro hydraulic power, a sure value The micro hydroelectricity is a proven technology which has now reached maturity. Ideal for electrification of remote sites, it also serves a complement to national electric production. A recent study carried out by the ESHA estimates the potential which still available in terms of micro hydraulic power plants at 5939 MW. The small hydro power capacity (>10 MW) installed in European union and the available potential of small hydro power are presented. (A.L.B.) The micro-step motor controller Hong, Kwang Pyo; Lee, Chang Hee; Moon, Myung Kook; Choi, Bung Hun; Choi, Young Hyun; Cheon, Jong Gu The developed micro-step motor controller can handle 4 axes stepping motor drivers simultaneously and provide high power bipolar driving mechanism with constant current mode. It can be easily controlled by manual key functions and the motor driving status is displayed by the front panel VFD. Due to the development of several kinds of communication and driving protocol, PC can operate even several micro-step motor controllers at once by multi-drop connection Concordian Economics: Beyond Micro and Macroeconomics Gorga, Carmine In Concordian economics there is no distinction between micro and macro economics, because the economic process is the same for the individual person, the city, the nation, or the world, What changes is the scale, but not the structure of the process. When micro and macro economics are seen as one, it makes no sense to add monetary wealth to real wealth. It becomes then evident that monetary wealth is not wealth; monetary wealth is a legal representation of real wealth. 
Micro-battery Development using beta radioisotope Jung, H. K.; Cheong, Y. M.; Lee, N. H.; Choi, Y. S.; Joo, Y. S.; Lee, J. S.; Jeon, B. H. A nuclear battery, which uses beta radiation sources emitting low-penetration radiation energy from a radioisotope, can be applied as a long-term (more than 10 years) micro power source in MEMS and nano components. This report describes the basic concept and principles of the nuclear micro-battery and its fabrication for space and military fields. In particular, the direct conversion method is described by investigating electron-hole generation and recombination in the p-n junction of silicon betavoltaics under beta radiation. Geometry and surface damage in micro electrical discharge machining of micro-holes Ekmekci, Bülent; Sayar, Atakan; Tecelli Öpöz, Tahsin; Erden, Abdulkadir Geometry and subsurface damage of blind micro-holes produced by micro electrical discharge machining (micro-EDM) is investigated experimentally to explore the relational dependence with respect to pulse energy. For this purpose, micro-holes are machined with various pulse energies on plastic mold steel samples using a tungsten carbide tool electrode and a hydrocarbon-based dielectric liquid. Variations in the micro-hole geometry, micro-hole depth and over-cut in micro-hole diameter are measured. Then, unconventional etching agents are applied on the cross sections to examine micro structural alterations within the substrate. It is observed that the heat-damaged segment is composed of three distinctive layers, which have relatively high thicknesses and vary noticeably with respect to the drilling depth. Crack formation is identified on some sections of the micro-holes even by utilizing low pulse energies during machining. It is concluded that the cracking mechanism is different from cracks encountered on the surfaces when machining is performed by using the conventional EDM process. Moreover, an electrically conductive bridge between work material and debris particles is possible at the end tip during machining, which leads to electric discharges between the piled segments of debris particles and the tool electrode during discharging. Modulation of microRNA activity by semi-microRNAs (smiRNAs) Isabelle ePlante Full Text Available The ribonuclease Dicer plays a central role in the microRNA pathway by catalyzing the formation of 19 to 24-nucleotide (nt) long microRNAs. Subsequently incorporated into Ago2 effector complexes, microRNAs are known to regulate messenger RNA (mRNA) translation. Whether shorter RNA species derived from microRNAs exist and play a role in mRNA regulation remains unknown. Here, we report the serendipitous discovery of a 12-nt long RNA species corresponding to the 5' region of the microRNA let-7, and tentatively termed semi-microRNA, or smiRNA. Using a smiRNA derived from the precursor of miR-223 as a model, we show that 12-nt long smiRNA species are devoid of any direct mRNA regulatory activity, as assessed in a reporter gene activity assay in transfected cultured human cells. However, smiR-223 was found to modulate the ability of the microRNA from which it derives to mediate translational repression or cleavage of reporter mRNAs. Our findings suggest that smiRNAs may be generated along the microRNA pathway and participate in the control of gene expression by regulating the activity of the related full-length mature microRNA in vivo. Laser 3D micro-manufacturing Piqué, Alberto; Auyeung, Raymond C Y; Kim, Heungsoo; Charipar, Nicholas A; Mathews, Scott A Laser-based materials processing techniques are gaining widespread use in micro-manufacturing applications. The use of laser microfabrication techniques enables the processing of micro- and nanostructures from a wide range of materials and geometries without the need for the masking and etching steps commonly associated with photolithography. This review aims to describe the broad applications space covered by laser-based micro- and nanoprocessing techniques and the benefits offered by the use of lasers in micro-manufacturing processes. Given their non-lithographic nature, these processes are also referred to as laser direct-write and constitute some of the earliest demonstrations of 3D printing or additive manufacturing at the microscale. As this review will show, the use of lasers enables precise control of the various types of processing steps—from subtractive to additive—over a wide range of scales with an extensive materials palette. Overall, laser-based direct-write techniques offer multiple modes of operation, including the removal (via ablative processes) and addition (via photopolymerization or printing) of most classes of materials using the same equipment in many cases. The versatility provided by these multi-function, multi-material and multi-scale laser micro-manufacturing processes cannot be matched by photolithography nor by other direct-write microfabrication techniques, and it offers unique opportunities for current and future 3D micro-manufacturing applications. (topical review) Micro Credit and Gender: A Critical Assessment Özlem BALKIZ Full Text Available Micro credit programs, which are based on lending money on interest and encouraging savings, were first used in Southern countries and are now being implemented worldwide.
Mainly aimed at the rural poor, particularly poor women, micro credit programs seek to ensure sustainable economic development in line with the requirements of global capitalism and to include women in the productive activities of the market. Micro credit has been made institutionalized based on three main paradigms, namely financial sustainability, poverty alleviation and women's empowerment. In micro credit programs, where the emphasis on women's empowerment is strong, the lack of a social gender perspective is striking. In fact, women may face patriarchal pressure and restrictions at the start in access to loans, loan usage models, participation to the productive activities in the market and during loan repayment. Thus the allegation that by way of micro credit, women will be empowered in terms of economic, social and political means in the family and society becomes questionable. This article, by problematizing women's relationship with micro credit, will discuss social gender relationships which prevent them from making use of these programs as they wish and from achieving the results they intend MicroRNAs in right ventricular remodelling. Batkai, Sandor; Bär, Christian; Thum, Thomas Right ventricular (RV) remodelling is a lesser understood process of the chronic, progressive transformation of the RV structure leading to reduced functional capacity and subsequent failure. Besides conditions concerning whole hearts, some pathology selectively affects the RV, leading to a distinct RV-specific clinical phenotype. MicroRNAs have been identified as key regulators of biological processes that drive the progression of chronic diseases. The role of microRNAs in diseases affecting the left ventricle has been studied for many years, however there is still limited information on microRNAs specific to diseases in the right ventricle. Here, we review recently described details on the expression, regulation, and function of microRNAs in the pathological remodelling of the right heart. Recently identified strategies using microRNAs as pharmacological targets or biomarkers will be highlighted. Increasing knowledge of pathogenic microRNAs will finally help improve our understanding of underlying distinct mechanisms and help utilize novel targets or biomarkers to develop treatments for patients suffering from right heart diseases. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: [email protected]. Micro-Doppler classification of riders and riderless horses Tahmoush, David Micro-range Micro-Doppler can be used to isolate particular parts of the radar signature, and in this case we demonstrate the differences in the signature between a walking horse versus a walking horse with a rider. Using micro-range micro-Doppler, we can distinguish the radar returns from the rider as separate from the radar returns of the horse. Advanced Micro Turbine System (AMTS) -C200 Micro Turbine -Ultra-Low Emissions Micro Turbine Capstone Turbine Corporation In September 2000 Capstone Turbine Corporation commenced work on a US Department of Energy contract to develop and improve advanced microturbines for power generation with high electrical efficiency and reduced pollutants. 
The Advanced MicroTurbine System (AMTS) program focused on: (1) The development and implementation of technology for a 200 kWe scale high efficiency microturbine system (2) The development and implementation of a 65 kWe microturbine which meets California Air Resources Board (CARB) emissions standards effective in 2007. Both of these objectives were achieved in the course of the AMTS program. At its conclusion prototype C200 Microturbines had been designed, assembled and successfully completed field demonstration. C65 Microturbines operating on natural, digester and landfill gas were also developed and successfully tested to demonstrate compliance with CARB 2007 Fossil Fuel Emissions Standards for NOx, CO and VOC emissions. The C65 Microturbine subsequently received approval from CARB under Executive Order DG-018 and was approved for sale in California. The United Technologies Research Center worked in parallel to successfully execute a RD&D program to demonstrate the viability of a low emissions AMS which integrated a high-performing microturbine with Organic Rankine Cycle systems. These results are documented in AMS Final Report DOE/CH/11060-1 dated March 26, 2007. C. elegans microRNAs. Vella, Monica C; Slack, Frank J MicroRNAs (miRNAs) are small, non-coding regulatory RNAs found in many phyla that control such diverse events as development, metabolism, cell fate and cell death. They have also been implicated in human cancers. The C. elegans genome encodes hundreds of miRNAs, including the founding members of the miRNA family lin-4 and let-7. Despite the abundance of C. elegans miRNAs, few miRNA targets are known and little is known about the mechanism by which they function. However, C. elegans research continues to push the boundaries of discovery in this area. lin-4 and let-7 are the best understood miRNAs. They control the timing of adult cell fate determination in hypodermal cells by binding to partially complementary sites in the mRNA of key developmental regulators to repress protein expression. For example, lin-4 is predicted to bind to seven sites in the lin-14 3' untranslated region (UTR) to repress LIN-14, while let-7 is predicted to bind two let-7 complementary sites in the lin-41 3' UTR to down-regulate LIN-41. Two other miRNAs, lsy-6 and mir-273, control left-right asymmetry in neural development, and also target key developmental regulators for repression. Approximately one third of the C. elegans miRNAs are differentially expressed during development indicating a major role for miRNAs in C. elegans development. Given the remarkable conservation of developmental mechanism across phylogeny, many of the principles of miRNAs discovered in C. elegans are likely to be applicable to higher animals. STUDY & ANALYSIS OF MICRO NEEDLE MATERIAL BY ANSYS Santosh Kumar Singh*, Prabhat Sinha, N.N. Singh, Nagendra Kumar In this research the concept of design and analysis, silicon and stainless steel based on hollow micro-needles for transdermal drug delivery(TDD) have been evaluated by Using ANSYS & computational fluid dynamic (CFD), structural. Micro fluidic analysis has performed to ensure the micro-needles design suitability for Drug delivery. The effect of axial and transverse load on single and micro-needle array has investigated with the mechanical properties of micro-needle. The analysis predicte... Selective wetting-induced micro-electrode patterning for flexible micro-supercapacitors. 
Kim, Sung-Kon; Koo, Hyung-Jun; Lee, Aeri; Braun, Paul V Selective wetting-induced micro-electrode patterning is used to fabricate flexible micro-supercapacitors (mSCs). The resulting mSCs exhibit high performance, mechanical stability, stable cycle life, and hold great promise for facile integration into flexible devices requiring on-chip energy storage. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Elementary particles as micro-universes or micro-black holes Rodrigues Junior, W.A. The idea that elementary particles can be presented as micro-universes and/or micro-black holes (Lorentzian manifolds) is presented and the fundamental mathematical problem associated with the simplest world manifold that 'contains' both the macrocosm and the microcosmes is discussed. (Author) [pt Micro Engineering: Experiments conducted on the use of polymeric materials in micro injection moulding Griffiths, Christian; Tosello, Guido; Nestler, Joerg To advance micro injection moulding it is necessary to study systematically the factors affecting process and tooling reliability. This paper reviews the main findings of Cardiff Universities 4M and SEMOFS research in this field. In particular, the factors affecting the manufacturability of micro... Concept of subsurface micro-sensing; Chika joho no micro sensing Niitsuma, H [Tohoku University, Sendai (Japan). Faculty of Engineering This paper describes concept of subsurface micro-sensing. It is intended to achieve an epoch-making development of subsurface engineerings by developing such technologies as micro measurement of well interior, micro measurement while drilling (MWD), and micro intelligent logging. These technologies are supported by development of micro sensors and micro drilling techniques using micro machine technologies. Micronizing the subsurface sensors makes mass production of sensors with equivalent performance possible, and the production cost can be reduced largely. The sensors can be embedded or used disposably, resulting in increased mobility in measurement and higher performance. Installing multiple number of sensors makes high-accuracy measurement possible, such as array measurement. The sensors can be linked easily with photo-electronics components, realizing remote measurement at low price and high accuracy. Control in micro-drilling and MWD also become possible. Such advantages may also be expected as installing the sensors on the outer side of wells in use and monitoring subsurface information during production. Expectation on them is large as a new paradigm of underground exploration and measurement. 1 fig. Fabrication of Biochips with Micro Fluidic Channels by Micro End-milling and Powder Blasting Dong Sam Park Full Text Available For microfabrications of biochips with micro fluidic channels, a large number of microfabrication techniques based on silicon or glass-based Micro-Electro-Mechanical System (MEMS technologies were proposed in the last decade. In recent years, for low cost and mass production, polymer-based microfabrication techniques by microinjection molding and micro hot embossing have been proposed. These techniques, which require a proper photoresist, mask, UV light exposure, developing, and electroplating as a preprocess, are considered to have some problems. In this study, we propose a new microfabrication technology which consists of micro end-milling and powder blasting. This technique could be directly applied to fabricate the metal mold without any preprocesses. 
The metal mold with micro-channels is machined by micro end-milling, and then, burrs generated in the end-milling process are removed by powder blasting. From the experimental results, micro end-milling combined with powder blasting could be applied effectively for fabrication of the injection mold of biochips with micro fluidic channels. On the performance of micro injection moulding process simulations of TPE micro rings , a case study based on the micro injection moulding process of thermoplastic elastomer (TPE) micro rings (volume: 1.5 mm3, mass: 2.2 mg) for sensors application is treated. Injection moulding process simulations using Autodesk Moldflow Insight 2016® were applied with the aim of accomplishing two main... Investigation on the Acoustic Absorption of Flexible Micro-Perforated Panel with Ultra-Micro Perforations Li, Guoxin; Tang, Xiaoning; Zhang, Xiaoxiao; Qian, Y. J.; Kong, Deyi Flexible micro-perforated panel has unique advantages in noise reduction due to its good flexibility compared with traditional rigid micro-perforated panel. In this paper, flexible micro-perforated panel was prepared by computer numerical control (CNC) milling machine. Three kinds of plastics including polyvinylchloride (PVC), polyethylene terephthalate (PET), and polyimide (PI) were taken as the matrix materials to prepare flexible micro-perforated panel. It has been found that flexible micro-perforated panel made of PET possessing good porosity and proper density, elastic modulus and poisson ratio exhibited the best acoustic absorption properties. The effects of various structural parameters including perforation diameter, perforation ratio, thickness and air gap have also been investigated, which would be helpful to the optimization of acoustic absorption properties. Micro injection moulding process validation for high precision manufacture of thermoplastic elastomer micro suspension rings Calaon, M.; Tosello, G.; Elsborg Hansen, R. Micro injection moulding (μIM) is one of the most suitable micro manufacturing processes for flexible mass-production of multi-material functional micro components. The technology was employed in this research used to produce thermoplastic elastomer (TPE) micro suspension rings identified...... main μIM process parameters (melt temperature, injection speed, packing pressure) using the Design of Experiment statistical technique. Measurements results demonstrated the importance of calibrating mould´s master geometries to ensure correct part production and effective quality conformance...... on the frequency in order to improve the signal quality and assure acoustic reproduction fidelity. Production quality of the TPE rings drastically influence the product functionality. In the present study, a procedure for μIM TPE micro rings production optimization has been established. The procedure entail using... Micro-Expression Recognition Using Color Spaces. Wang, Su-Jing; Yan, Wen-Jing; Li, Xiaobai; Zhao, Guoying; Zhou, Chun-Guang; Fu, Xiaolan; Yang, Minghao; Tao, Jianhua Micro-expressions are brief involuntary facial expressions that reveal genuine emotions and, thus, help detect lies. Because of their many promising applications, they have attracted the attention of researchers from various fields. Recent research reveals that two perceptual color spaces (CIELab and CIELuv) provide useful information for expression recognition. 
This paper is an extended version of our International Conference on Pattern Recognition paper, in which we propose a novel color space model, tensor independent color space (TICS), to help recognize micro-expressions. In this paper, we further show that CIELab and CIELuv are also helpful in recognizing micro-expressions, and we indicate why these three color spaces achieve better performance. A micro-expression color video clip is treated as a fourth-order tensor, i.e., a four-dimension array. The first two dimensions are the spatial information, the third is the temporal information, and the fourth is the color information. We transform the fourth dimension from RGB into TICS, in which the color components are as independent as possible. The combination of dynamic texture and independent color components achieves a higher accuracy than does that of RGB. In addition, we define a set of regions of interests (ROIs) based on the facial action coding system and calculated the dynamic texture histograms for each ROI. Experiments are conducted on two micro-expression databases, CASME and CASME 2, and the results show that the performances for TICS, CIELab, and CIELuv are better than those for RGB or gray. Micro-optics for microfluidic analytical applications. Yang, Hui; Gijs, Martin A M This critical review summarizes the developments in the integration of micro-optical elements with microfluidic platforms for facilitating detection and automation of bio-analytical applications. Micro-optical elements, made by a variety of microfabrication techniques, advantageously contribute to the performance of an analytical system, especially when the latter has microfluidic features. Indeed the easy integration of optical control and detection modules with microfluidic technology helps to bridge the gap between the macroscopic world and chip-based analysis, paving the way for automated and high-throughput applications. In our review, we start the discussion with an introduction of microfluidic systems and micro-optical components, as well as aspects of their integration. We continue with a detailed description of different microfluidic and micro-optics technologies and their applications, with an emphasis on the realization of optical waveguides and microlenses. The review continues with specific sections highlighting the advantages of integrated micro-optical components in microfluidic systems for tackling a variety of analytical problems, like cytometry, nucleic acid and protein detection, cell biology, and chemical analysis applications. Behavior of Cell on Vibrating Micro Ridges Haruka Hino Full Text Available The effect of micro ridges on cells cultured at a vibrating scaffold has been studied in vitro. Several parallel lines of micro ridges have been made on a disk of transparent polydimethylsiloxane for a scaffold. To apply the vibration on the cultured cells, a piezoelectric element was attached on the outside surface of the bottom of the scaffold. The piezoelectric element was vibrated by the sinusoidal alternating voltage (Vp-p < 16 V at 1.0 MHz generated by a function generator. Four kinds of cells were used in the test: L929 (fibroblast connective tissue of C3H mouse, Hepa1-6 (mouse hepatoma, C2C12 (mouse myoblast, 3T3-L1 (mouse fat precursor cells. The cells were seeded on the micro pattern at the density of 2000 cells/cm2 in the medium containing 10% FBS (fetal bovine serum and 1% penicillin/ streptomycin. 
After the adhesion of cells in several hours, the cells are exposed to the ultrasonic vibration for several hours. The cells were observed with a phase contrast microscope. The experimental results show that the cells adhere, deform and migrate on the scaffold with micro patterns regardless of the ultrasonic vibration. The effects of the vibration and the micro pattern depend on the kind of cells. Role of microRNAs in sepsis. Kingsley, S Manoj Kumar; Bhat, B Vishnu MicroRNAs have been found to be of high significance in the regulation of various genes and processes in the body. Sepsis is a serious clinical problem which arises due to the excessive host inflammatory response to infection. The non-specific clinical features and delayed diagnosis of sepsis has been a matter of concern for long time. MicroRNAs could enable better diagnosis of sepsis and help in the identification of the various stages of sepsis. Improved diagnosis may enable quicker and more effective treatment measures. The initial acute and transient phase of sepsis involves excessive secretion of pro-inflammatory cytokines which causes severe damage. MicroRNAs negatively regulate the toll-like receptor signaling pathway and regulate the production of inflammatory cytokines during sepsis. Likewise, microRNAs have shown to regulate the vascular barrier and endothelial function in sepsis. They are also involved in the regulation of the apoptosis, immunosuppression, and organ dysfunction in later stages of sepsis. Their importance at various levels of the pathophysiology of sepsis has been discussed along with the challenges and future perspectives. MicroRNAs could be key players in the diagnosis and staging of sepsis. Their regulation at various stages of sepsis suggests that they may have an important role in altering the outcome associated with sepsis. Heat and power from MicroGen This paper reports on the design of a domestic gas-fired cogeneration system developed to replace the central heating boiler. Technical details of the MicroGen demonstration unit are given, and the use of a Linear Free Piston Stirling Engine as the prime mover, and the results of modelling studies of energy demand indicating cost savings compared to conventional boilers are discussed. The enhancement of the benefits of micro-cogeneration through use of thermal and power storage and energy demand management, and the impact of micro-cogeneration on energy use in the home are considered. The UK and European Commission's targets for increased cogeneration capacity are noted. Micro CHP: implications for energy companies Harrison, Jeremy [EA Technology (United Kingdom); Kolin, Simon; Hestevik, Svein [Sigma Elektroteknisk A/S (Norway) This article explains how micro combined heat and power (CHP) technology may help UK energy businesses to maintain their customer base in the current climate of liberalisation and competition in the energy market The need for energy companies to adopt new technologies and adapt to changes in the current aggressive environment, the impact of privatisation, and the switching of energy suppliers by customers are discussed. Three potential routes to success for energy companies are identified, namely, price reductions, branding and affinity marketing, and added value services. 
Details are given of the implementation of schemes to encourage energy efficiency, the impact of the emissions targets set at Kyoto, the advantages of micro CHP generation, business opportunities for CHP, business threats from existing energy companies and others entering the field, and the commercial viability of micro CHP. Replication of micro and nano surface geometries Hansen, Hans Nørgaard; Hocken, R.J.; Tosello, Guido The paper describes the state-of-the-art in replication of surface texture and topography at micro and nano scale. The description includes replication of surfaces in polymers, metals and glass. Three different main technological areas enabled by surface replication processes are presented......: manufacture of net-shape micro/nano surfaces, tooling (i.e. master making), and surface quality control (metrology, inspection). Replication processes and methods as well as the metrology of surfaces to determine the degree of replication are presented and classified. Examples from various application areas...... are given including replication for surface texture measurements, surface roughness standards, manufacture of micro and nano structured functional surfaces, replicated surfaces for optical applications (e.g. optical gratings), and process chains based on combinations of repeated surface replication steps.... Micro Products - Product Development and Design Innovation within the field of micro and nano technology is to a great extent characterized by cross-disciplinary skills. The traditional disciplines like e.g. physics, biology, medicine and engineering are united in a common development process that can only take place in the presence of multi......-disciplinary competences. One example is sensors for chemical analysis of fluids, where chemistry, biology and flow mechanics all influence the design of the product and thereby the industrial fabrication of the product [1]. On the technological side the development has moved very fast, primarily driven by the need...... of the electronics industry to create still smaller chips with still larger capacity. Therefore the manufacturing technologies connected with micro/nano products in silicon are relatively highly developed compared to the technologies used for manufacturing micro products in metals, polymers and ceramics. For all... MicroRNAs, epigenetics and disease Silahtaroglu, Asli; Stenvang, Jan Epigenetics is defined as the heritable chances that affect gene expression without changing the DNA sequence. Epigenetic regulation of gene expression can be through different mechanisms such as DNA methylation, histone modifications and nucleosome positioning. MicroRNAs are short RNA molecules...... which do not code for a protein but have a role in post-transcriptional silencing of multiple target genes by binding to their 3' UTRs (untranslated regions). Both epigenetic mechanisms, such as DNA methylation and histone modifications, and the microRNAs are crucial for normal differentiation...... diseases. In the present chapter we will mainly focus on microRNAs and methylation and their implications in human disease, mainly in cancer.... Laser based micro forming and assembly. 
MacCallum, Danny O' Neill; Wong, Chung-Nin Channy; Knorovsky, Gerald Albert; Steyskal, Michele D.; Lehecka, Tom (Pennsylvania State University, Freeport, PA); Scherzinger, William Mark; Palmer, Jeremy Andrew It has been shown that thermal energy imparted to a metallic substrate by laser heating induces a transient temperature gradient through the thickness of the sample. In favorable conditions of laser fluence and absorptivity, the resulting inhomogeneous thermal strain leads to a measurable permanent deflection. This project established parameters for laser micro forming of thin materials that are relevant to MESA generation weapon system components and confirmed methods for producing micrometer displacements with repeatable bend direction and magnitude. Precise micro forming vectors were realized through computational finite element analysis (FEA) of laser-induced transient heating that indicated the optimal combination of laser heat input relative to the material being heated and its thermal mass. Precise laser micro forming was demonstrated in two practical manufacturing operations of importance to the DOE complex: micrometer gap adjustments of precious metal alloy contacts and forming of meso scale cones. The Aarhus Ion Micro-Trap Project Miroshnychenko, Yevhen; Nielsen, Otto; Poulsen, Gregers As part of our involvement in the EU MICROTRAP project, we have designed, manufactured and assembled a micro-scale ion trap with integrated optical fibers. These prealigned fibers will allow delivering cooling laser light to single ions. Therefore, such a trap will not require any direct optical...... and installed in an ultra high vacuum chamber, which includes an ablation oven for all-optical loading of the trap [2]. The next steps of the project are to demonstrate the operation of the micro-trap and the cooling of ions using fiber-delivered light. [1] D. Grant, Development of Micro-Scale Ion traps, Master...... Thesis (2008). [2] R.J. Hendricks, D.M. Grant, P.F. Herskind, A. Dantan and M. Drewsen, An all-optical ion-loading technique for scalable microtrap architectures, Applied Physics B, 88, 507 (2007).... Determination of Tongue and Groove parameters for multileaf collimators Castro, Aluisio; Almeida, Carlos E. de, E-mail: [email protected] [Universidade Estadual do Rio de Janeiro (UERJ), RJ (Brazil). Laboratorio de Ciencias Radiologicas; Nguyen, Bihn [Prowess Inc., Concord, CA (United States) The Tongue and Groove effect (TandG) is characterized by an additional attenuation between adjacent and opposing leaves on multileaf collimators (MLCs) in adjacent or complementary fields. This is a typical situation in intensity-modulated radiotherapy treatments. The aim of this study was to measure the width and transmission of the TandG effect for two commercial MLCs: Varian Millennium 120 (6 MV and 16 MV beams) and BrainLab m3 (only for 6 MV). The methodology used was based on the creation of MLC shapes that emphasize the TandG effect, the irradiation of these fields on radiochromic film and the sensitometric evaluation of the films in order to determine the TandG width and transmission. The TandG widths for the studied MLCs were 2.5, 1.8 and 2 mm, respectively, with TandG transmission values of 87, 90 and 85%. (author)
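The film-based evaluation described in the Tongue and Groove study above can be sketched in code: given a one-dimensional dose profile measured across the leaf junction, the TandG transmission is the depth of the dip relative to the open-field level, and the width is the extent of the region that stays below a half-depth threshold. The profile values, the sampling step and the half-depth criterion below are assumptions for illustration, not the authors' sensitometric procedure.

# Hypothetical dose profile (arbitrary units) sampled every 0.25 mm
# across an abutting-leaf junction; the dip represents the TandG underdose.
profile = [1.00, 1.00, 0.99, 0.97, 0.93, 0.89, 0.87, 0.88,
           0.92, 0.96, 0.99, 1.00, 1.00]
step_mm = 0.25

open_field = max(profile)        # surrounding open-field level
dip = min(profile)               # deepest point of the underdose
transmission = dip / open_field  # e.g. ~0.87 -> 87 %

# Width taken between the points where the profile crosses half the dip depth.
threshold = open_field - 0.5 * (open_field - dip)
below = [i for i, d in enumerate(profile) if d < threshold]
width_mm = (below[-1] - below[0] + 1) * step_mm if below else 0.0

print(f"TandG transmission ~ {100 * transmission:.0f} %")
print(f"TandG width        ~ {width_mm:.2f} mm")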
A computer program to automatically control the multileaf collimator
Sanchez Galiano, P.; Crelgo Alonso, D.; Gonzalez Sancho, J.M.; Fernandez Garcia, J.; Vivanco Parellada, J.
A computer program to automatically analyze strip test images for MLC leaf positioning quality assurance was developed and assessed. The program is fed with raw individual segment images in DICOM format supplied by the accelerator software, and it automatically carries out all the steps in the leaf positioning quality control test (image merging, image analysis, storing and reporting). A comprehensive description of the software, which allows a relatively easy implementation, is given. To check the performance of the program, a series of test fields with intentionally introduced errors were used. The measurement uncertainty obtained for any individual leaf position was lower than 0.15 mm with the gantry at 0 degrees. At other gantry angles (90, 180 and 270 degrees) the dispersion of the measurements was larger, especially towards the outer leaf positions, probably due to a slight rotation of the EPID caused by gravity. That reduces the useful area of the MLC that can be controlled when gantry angles other than 0 degrees are used. In conclusion, the technique is fast enough to be carried out on a daily basis while also being very precise and reliable.
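The leaf-positioning QA record above ultimately reduces to locating field edges in each strip-segment image. A minimal sketch of that edge-finding step is shown below; it is not the published program, and the synthetic penumbra profile, the pixel pitch and the 50%-of-maximum criterion are illustrative assumptions.

```python
# Minimal sketch (not the published QA program): locating a leaf-field edge in a
# strip-test profile at 50% of the open-field signal, with sub-pixel linear
# interpolation. The synthetic penumbra and pixel pitch are illustrative.
import numpy as np

pixel_mm = 0.25
x = np.arange(400) * pixel_mm                              # 100 mm profile
edge_true_mm = 42.3
profile = 1.0 / (1.0 + np.exp(-(x - edge_true_mm) / 0.8))  # smooth penumbra

def edge_position_mm(x_mm, signal, level=0.5):
    """Return the first crossing of level*max(signal), linearly interpolated."""
    threshold = level * signal.max()
    i = int(np.argmax(signal >= threshold))                # first pixel above threshold
    if i == 0:
        return float(x_mm[0])
    frac = (threshold - signal[i - 1]) / (signal[i] - signal[i - 1])
    return float(x_mm[i - 1] + frac * (x_mm[i] - x_mm[i - 1]))

print(f"detected edge at {edge_position_mm(x, profile):.2f} mm (true {edge_true_mm} mm)")
```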
Optical nano and micro actuator technology
Knopf, George K
In Optical Nano and Micro Actuator Technology, leading engineers, material scientists, chemists, physicists, laser scientists, and manufacturing specialists offer an in-depth, wide-ranging look at the fundamental and unique characteristics of light-driven optical actuators. They discuss how light can initiate physical movement and control a variety of mechanisms that perform mechanical work at the micro- and nanoscale. The book begins with the scientific background necessary for understanding light-driven systems, discussing the nature of light and the interaction between light and NEMS/MEMS …

Micro- and nanotechnology in cardiovascular tissue engineering
Zhang Boyang; Xiao Yun; Hsieh, Anne; Thavandiran, Nimalan; Radisic, Milica
While in nature the formation of complex tissues is gradually shaped by the long journey of development, in tissue engineering the construction of complex tissues relies heavily on our ability to directly manipulate and control the micro-cellular environment in vitro. Not surprisingly, advancements in both microfabrication and nanofabrication have powered the field of tissue engineering in many aspects. Focusing on cardiac tissue engineering, this paper highlights the applications of fabrication techniques in various aspects of tissue engineering research: (1) cell responses to micro- and nanopatterned topographical cues, (2) cell responses to patterned biochemical cues, (3) controlled 3D scaffolds, (4) patterned tissue vascularization and (5) electromechanical regulation of tissue assembly and function.

Rectenna session: Micro aspects [energy conversion]
Two micro aspects of the rectenna design are addressed: evaluation of the degradation in net rectenna RF-to-DC conversion efficiency due to power density variations across the rectenna (power combining analysis), and design of Yagi-Uda receiving elements to reduce rectenna cost by decreasing the number of conversion circuits (directional receiving elements). The first of these micro aspects involves resolving a fundamental question of efficiency potential with a rectenna, while the second involves a design modification with a large potential cost saving.

Injection moulding for macro and micro products
Islam, Mohammad Aminul
The purpose of the literature survey is to investigate injection moulding technology in the macro and micro areas, from the basics to the state of the art. Injection moulding is a versatile production process for the manufacturing of plastic parts and the process is extensively … used for macro products, but over the years it has moved into the micro areas, with machine and process improvements. Extensive research work on injection moulding is going on all over the world. New ideas are flowing into the machines, materials and processes. The technology has made significant …

Micro- and nanoscale phenomena in tribology
Chung, Yip-Wah
Drawn from presentations at a recent National Science Foundation Summer Institute on Nanomechanics, Nanomaterials, and Micro/Nanomanufacturing, Micro- and Nanoscale Phenomena in Tribology explores the convergence of the multiple science and engineering disciplines involved in tribology and the connection from the macro to the nano world. Written by specialists from computation, materials science, mechanical engineering, surface physics, and chemistry, each chapter provides up-to-date coverage of both basic and advanced topics and includes extensive references for further study. After discussing the …

MicroRNAs in Prostate Cancer
Only fragmentary snippets are available for this record, drawn from its reference list and methods: the induction of MicroRNA-155 (O'Connell RM, Taganov KD, Boldin MP, Cheng G, Baltimore D, 2007), a systematic analysis of microRNA expression (Xi Y, Nakajima G, Gavin E, Morris CG, Kudo K, et al., 2007), and extraction of miRNAs from unique sequences by searching against the miRBase database (release 10.0; http://microrna.sanger.ac.uk).

Demoulding force in micro-injection moulding
Griffiths, C.A.; Dimov, S.S.; Scholz, S.
The paper reports an experimental study that investigates part demoulding behavior in micro injection moulding (MIM), with a focus on the effects of pressure (P) and temperature (T) on the demoulding forces. Demoulding of a microfluidics part is conducted and the four processing parameters of melt temperature (Tb), mould temperature (Tm), holding pressure (Ph) and injection speed (Vi) are analysed. The results obtained with different combinations of process parameters were used to identify the best processing conditions with regard to demoulding forces when moulding micro parts.

Micro-Scale Avionics Thermal Management
Moran, Matthew E.
Trends in the thermal management of avionics and commercial ground-based microelectronics are converging, and facing the same dilemma: a shortfall in technology to meet near-term maximum junction temperature and package power projections. Micro-scale devices hold the key to significant advances in thermal management, particularly micro-refrigerators/coolers that can drive cooling temperatures below ambient. A microelectromechanical system (MEMS) Stirling cooler is currently under development at the NASA Glenn Research Center to meet this challenge, with predicted efficiencies that are an order of magnitude better than current and future thermoelectric coolers.
Towards the first generation micro bulk forming system
Arentoft, Mogens; Eriksen, Rasmus Solmer; Hansen, Hans Nørgaard
The industrial demand for micro mechanical components has surged in later years with the constant introduction of more integrated products. The micro bulk forming process holds a promising pledge of delivering high quality micro mechanical components at low cost and high production rates. This work describes a number of prototype system units which collectively form a desktop-sized micro forming production system. The system includes a billet preparation module, an integrated transfer system, a temperature-controlled forming tool, including process simulation, and a dedicated micro forming press. The system is demonstrated on an advanced micro forming case where a dental component is formed in medical grade titanium.

Modeling and simulation for micro DC motor based on Simulink
Shen, Hanxin; Lei, Qiao; Chen, Wenxiang
The micro DC motor has a large market demand but there is a lack of theoretical research on it. Through detailed analysis of the commutation process of the micro DC motor commutator, and based on the micro DC motor electromagnetic torque equation and mechanical torque equation, a triangle-connection micro DC motor simulation model is established with the help of the Simulink toolkit. Using the model, a sample micro DC motor is simulated, and experimental measurements have been carried out on the sample motor. The simulation results are consistent with the theoretical analysis and the experimental results.
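The micro DC motor record above couples an electromagnetic torque equation with a mechanical torque equation. As a rough illustration of such a coupled model, here is a generic brushed DC motor state-space sketch integrated with SciPy; it is not the authors' Simulink model of the triangle-connected commutator, and all parameter values are invented for the example.

```python
# Minimal sketch (generic brushed DC motor model, not the authors' Simulink
# model of the triangle-connected commutator). Electrical equation
# L*di/dt = V - R*i - ke*w coupled with mechanical equation
# J*dw/dt = kt*i - b*w - T_load. All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

R, L = 8.0, 1.2e-3           # armature resistance (ohm) and inductance (H)
kt = ke = 2.1e-3             # torque constant (N*m/A) = back-EMF constant (V*s/rad)
J, b = 1.5e-8, 1.0e-7        # rotor inertia (kg*m^2), viscous friction (N*m*s/rad)
V, T_load = 3.0, 2.0e-4      # supply voltage (V), load torque (N*m)

def rhs(t, y):
    i, w = y
    di = (V - R * i - ke * w) / L
    dw = (kt * i - b * w - T_load) / J
    return [di, dw]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0], max_step=1e-4)
i_ss, w_ss = sol.y[:, -1]
print(f"steady state: ~{i_ss:.3f} A, ~{w_ss * 60 / (2 * np.pi):.0f} rpm")
```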
Gas Mixtures for Welding with Micro-Jet Cooling
Węgrzyn, T.
Welding with micro-jet cooling after welding has so far been tested only for MIG and MAG processes, and only argon, helium and nitrogen have been tested as micro-jet gases. The paper presents information about gas mixtures for micro-jet cooling after welding, including mixtures that could be chosen both for MAG welding and for the micro-jet process. The main information concerns the influence of various micro-jet gas mixtures on the metallographic structure of steel welds. Mechanical properties of the weld are presented in terms of the selection of gas mixtures for micro-jet cooling.

System program for MICRO-CAMAC terminal system
Sasajima, Yoji; Yamada, Takayuki; Yagi, Hideyuki; Ishiguro, Misako
A JAERI on-line network system was developed for on-line data processing of nuclear instrumentation. As terminal systems for the network, systems based on a Micro-8 micro-computer are used. By modifying the control program for the Micro-8 terminal system, a system program has been developed for a MICRO-CAMAC terminal system, which is controlled by a micro-computer built into the CAMAC Crate Controller. This report describes the software specifications of the MICRO-CAMAC terminal system and its operation method.

Fabrication of a Flexible Micro CO Sensor for Micro Reformer Applications
Integration of a reformer and a proton exchange membrane fuel cell (PEMFC) is problematic due to the presence of a slight amount of carbon monoxide in the gas from the reforming process. Carbon monoxide poisons the catalyst of the proton exchange membrane fuel cell, subsequently degrading the fuel cell performance and necessitating purification of the reaction gas before it is supplied to the fuel cells. Based on the use of micro-electro-mechanical systems (MEMS) technology to manufacture flexible micro CO sensors, this study elucidates the relation between a micro CO sensor and different SnO2 thin film thicknesses. Experimental results indicate that the sensitivity increases with temperature in the range 100-300 °C, and that the best sensitivity is obtained at a specific temperature; for instance, the best sensitivity for a SnO2 thin film thickness of 100 nm is 59.3%, obtained at 300 °C. Moreover, a flexible micro CO sensor can be embedded into a micro reformer to determine the CO concentration in each part of the reformer, allowing the inner reactions of a micro reformer to be observed in depth and detected immediately.

A novel integrated multifunction micro-sensor for three-dimensional micro-force measurements
Wang, Weizhong; Zhao, Yulong; Qin, Yafei
An integrated multifunction micro-sensor for three-dimensional micro-force precision measurement under different pressure and temperature conditions is introduced in this paper. The integrated sensor consists of three kinds of sensors: a three-dimensional micro-force sensor, an absolute pressure sensor and a temperature sensor. The integrated multifunction micro-sensor is fabricated on silicon wafers by micromachining technology. Different doping doses of boron ion, and different placements and structures of the resistors, are tested for the force sensor, pressure sensor and temperature sensor to minimize cross interference and optimize the properties. A glass optical fiber with a ladder structure and a sharp tip, etched by buffered oxide etch solution, is glued onto the micro-force sensor chip as the tactile probe. Experimental results show that the minimum force that can be detected by the force sensor is 300 nN; the lateral sensitivity of the force sensor is 0.4582 mV/μN; the probe length is linearly proportional to the lateral sensitivity of the micro-force sensor; the sensitivity of the pressure sensor is 0.11 mV/kPa; and the sensitivity of the temperature sensor is 5.836 × 10⁻³ kΩ/°C. It is thus a cost-effective method to fabricate integrated multifunction micro-sensors with different measurement ranges that could be used in many fields.
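The sensor records above quote calibration sensitivities without showing how a raw reading maps back to a physical quantity. The sketch below illustrates that inversion using the sensitivities quoted in the multifunction micro-sensor abstract; the function names and example readings are invented for illustration and are not from the cited paper.

```python
# Minimal sketch (not from the cited paper): converting raw readings from the
# integrated multifunction micro-sensor into physical quantities using the
# sensitivities quoted in the abstract above. Function names and the example
# readings are invented for illustration.

LATERAL_FORCE_SENS_mV_PER_uN = 0.4582    # micro-force sensor, lateral direction
PRESSURE_SENS_mV_PER_kPa = 0.11          # absolute pressure sensor
TEMP_SENS_kOhm_PER_degC = 5.836e-3       # temperature sensor (resistance change)

def lateral_force_uN(bridge_mV: float) -> float:
    """Force in micronewtons from the lateral bridge output voltage."""
    return bridge_mV / LATERAL_FORCE_SENS_mV_PER_uN

def pressure_kPa(bridge_mV: float) -> float:
    """Absolute pressure in kPa from the pressure bridge output voltage."""
    return bridge_mV / PRESSURE_SENS_mV_PER_kPa

def temperature_change_degC(delta_kOhm: float) -> float:
    """Temperature change in degrees C from the sensing resistance change."""
    return delta_kOhm / TEMP_SENS_kOhm_PER_degC

print(lateral_force_uN(0.137))        # ~0.3 uN, near the quoted detection limit
print(pressure_kPa(11.0))             # ~100 kPa
print(temperature_change_degC(0.29))  # ~50 degrees C
```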
High accuracy and precision micro injection moulding of thermoplastic elastomers micro ring production
Calaon, Matteo; Tosello, Guido; Elsborg, René
The mass-replication nature of the process calls for fast monitoring of process parameters and product geometrical characteristics. In this direction, the present study addresses the possibility of developing a micro manufacturing platform for micro assembly injection moulding with real-time process/product monitoring and metrology. The study represents a new concept yet to be developed, with great potential for high-precision mass-manufacturing of highly functional 3D multi-material (i.e. including metal/soft polymer) micro components. The activities related to the HINMICO project objectives prove the importance …

Micro-masters of glioblastoma biology and therapy: increasingly recognized roles for microRNAs
Floyd, Desiree; Purow, Benjamin
MicroRNAs are small noncoding RNAs encoded in eukaryotic genomes that have been found to play critical roles in most biological processes, including cancer. This is true for glioblastoma, the most common and lethal primary brain tumor, for which microRNAs have been shown to strongly influence cell viability, stem cell characteristics, invasiveness, angiogenesis, metabolism, and immune evasion. Developing microRNAs as prognostic markers or as therapeutic agents is showing increasing promise and has the potential to reach the clinic in the next several years. This succinct review summarizes current progress and future directions in this exciting and steadily expanding field.

Morphology of Major Stone Types, As Shown by Micro Computed Tomography (micro CT)
Jackson, Molly E.; Beuschel, Christian A.; McAteer, James A.; Williams, James C.
Micro CT offers the possibility of providing a non-destructive method of stone analysis that allows visualization of 100% of the stone's volume. For the present study, micro CT analysis was completed on stones of known composition with isotropic voxel sizes of either 7 or 9.1 μm. Each mineral type was distinctive, either by x-ray attenuation values or by morphology. Minor components, such as the presence of apatite in oxalate stones, were easily seen. The analysis of stones by micro CT opens up the possibility of exploring the stone as an encapsulated history of the patient's disease, showing changes in mineral deposition with time.

Design and fabrication of self-powered micro-harvesters: rotating and vibrated micro-power systems
Pan, C.T.; Lin, Liwei; Chen, Ying-Chung
Presents the latest methods for designing and fabricating self-powered micro-generators and energy harvester systems. Design and Fabrication of Self-Powered Micro-Harvesters introduces the latest trends in self-powered generators and energy harvester systems, including the design, analysis and fabrication of micro power systems. Presented in four distinct parts, the authors explore the design and fabrication of: vibration-induced electromagnetic micro-generators; rotary electromagnetic micro-generators; flexible piezo micro-generators with various widths; and PVDF electrospun piezo-energy …

Micro-thermal analysis of polyester coatings
Fischer, H.R.
The application and suitability of micro-thermal analysis to detect changes in the chemical and physical properties of coatings due to ageing, and especially photo-degradation, is demonstrated using a model polyester coating based on neopentyl glycol isophthalic acid.
The changes in chemical structure …

Hollow micro string based calorimeter device
… positions so as to form a free released, double-clamped string in between said two longitudinally distanced positions, said micro-channel string comprising a microfluidic channel having a closed cross section and extending in the longitudinal direction of the hollow string, acoustical means adapted …

Micro Econometric Modelling of Household Energy Use
Leth-Petersen, Søren
Presents a micro econometric analysis of household electricity and natural gas demand for Danish households observed in 1996. Topics include the dependence between demand for gas and demand for electricity; the separability of demand for gas from demand for electricity; and the relation between energy consumption and the age …

MicroRNA mimicry blocks pulmonary fibrosis
Montgomery, Rusty L; Yu, Guoying; Latimer, Paul A; Stack, Christianna; Robinson, Kathryn; Dalby, Christina M; Kaminski, Naftali; van Rooij, Eva
Over the last decade, great enthusiasm has evolved for microRNA (miRNA) therapeutics. Part of the excitement stems from the fact that a miRNA often regulates numerous related mRNAs. As such, modulation of a single miRNA allows for parallel regulation of multiple genes involved in a particular …

Micro-Macro Paradoxes of Entrepreneurship
Søgaard, Villy
The article takes its starting point in the so-called micro-macro paradox from the aid-effectiveness literature and argues that a corresponding issue should be taken into account when assessing, for example, the employment consequences of entrepreneurial activity. Through a review of the literature it also shows …

Thermal property testing technique on micro specimen
Baba, Tetsuya; Kishimoto, Isao; Taketoshi, Naoyuki
This study aims at the further development of testing techniques for advanced basic nuclear research accumulated by the National Research Laboratory of Metrology over ten years. For this purpose, a technique to measure the thermal diffusivity and specific heat capacity of micro specimens of less than 3 mm in diameter and 1 mm in thickness, and a technique to measure the thermal diffusivity of micro areas of less than 1 mm along the cross section of column specimens of less than 10 mm in diameter, were developed to contribute to the common basic technology supporting the nuclear power field. As a result, as an element technology for measuring the thermal diffusivity and specific heat capacity of micro specimens, a holding technique that stably holds a micro specimen 3 mm in diameter was developed. For measuring specific heat capacity by laser flash differential calorimetry, a technique to hold two specimens of 5 mm diameter in close proximity was also developed. In addition, development of a thermal property database capable of storing the thermal property data obtained in this study, with good usability, was promoted, and in the 1998 fiscal year a data input/output program with a graphical user interface was prepared.

Recent advances in micro-vibration isolation
Liu, Chunchuan; Jing, Xingjian; Daley, Steve; Li, Fengming
Micro-vibration caused by disturbance sources onboard spacecraft can severely degrade the working environment of sensitive payloads. Some notable vibration control methods have been developed, particularly for the suppression or isolation of micro-vibration, over recent decades. Usually, passive isolation techniques are deployed in aerospace engineering. Active isolators, however, are often proposed to deal with the low frequency vibration that is common in spacecraft.
Active/passive hybrid isolation has also been effectively used in some spacecraft structures for a number of years. In semi-active isolation systems, the inherent structural performance can be adjusted to deal with variation in the aerospace environment. This latter approach is potentially one of the most practical isolation techniques for micro-vibration isolation tasks. Some emerging advanced vibration isolation methods that exploit the benefits of nonlinearity have also been reported in the literature. This represents an interesting and highly promising approach for solving some challenging problems in the area. This paper serves as a state-of-the-art review of the vibration isolation theory and methods which were developed, mainly over the last decade, specifically for, or which could potentially be used for, micro-vibration control.

Power Talk in DC Micro Grids
Angjelichinoski, Marko; Stefanovic, Cedomir; Popovski, Petar
Power talk is a novel concept for communication among units in a Micro Grid (MG), where information is sent by using power electronics as modems and the common bus of the MG as a communication medium. The technique is implemented by modifying the droop control parameters from the primary control …

Micro and Small Enterprises Incubator - Phase III
The goals of the Mozambique Information and Communication Technology Micro and Small Enterprises Incubator (MICTI Incubator) are twofold: to identify sustainable opportunities for technology-based businesses in priority development areas, and to test the assumption that technology-based businesses can mentor the …

Introducing Micro-finance in Sweden
Barinaga, Ester
The case describes the first year of efforts to introduce microfinance as a tool to work with vulnerable groups in Sweden, more particularly ex-convicts, former drug addicts and long-term unemployed women of immigrant background. The teaching objective is to discuss whether micro-finance can be seen …

OSH management in small and micro enterprises
Zwetsloot, G.I.J.M.
Small and medium-sized enterprises (SMEs) are widely acknowledged as the backbone of the European economy. According to EUROSTAT statistics [1], [2], 29.6% of EU employees work in micro enterprises (<10 employees), while 20.6% are employed in small firms (<50 employees). Indeed, half of the …

Micro-channel plates and vacuum detectors
Gys, T.
A micro-channel plate is an array of miniature electron multipliers that each act as a continuous dynode chain. The compact channel structure results in high spatial and time resolutions and robustness to magnetic fields. Micro-channel plates were originally developed for night vision applications and integrated as an amplification element in image intensifiers. These devices show single-photon sensitivity with very low noise and have been used as such for scintillating fiber tracker readout in high-energy physics experiments. Given their very short transit time spread, micro-channel plate photomultiplier tubes are also being used in time-of-flight and particle identification detectors. The present paper covers the history of micro-channel plate development, basic features, and some of their applications. Emphasis is put on various new manufacturing processes that have been developed over the last few years, which result in significant improvements in efficiency, noise, and lifetime performance.
The Micro-Category Account of Analogy
Green, Adam E.; Fugelsang, Jonathan A.; Kraemer, David J. M.; Dunbar, Kevin N.
Here, we investigate how activation of mental representations of categories during analogical reasoning influences subsequent cognitive processing. Specifically, we present and test the central predictions of the "Micro-Category" account of analogy. This account emphasizes the role of categories in aligning terms for analogical mapping. In a …

Mozambique Information and Communication Technology: Micro …

MicroRNAs in skin tissue engineering
Miller, Kyle J; Brown, David A; Ibrahim, Mohamed M; Ramchal, Talisha D; Levinson, Howard
35.2 million cases annually in the U.S. require clinical intervention for major skin loss. To meet this demand, the field of skin tissue engineering has grown rapidly over the past 40 years. Traditionally, skin tissue engineering relies on the "cell-scaffold-signal" approach, whereby isolated cells are formulated into a three-dimensional substrate matrix, or scaffold, and exposed to the proper molecular, physical, and/or electrical signals to encourage growth and differentiation. However, clinically available bioengineered skin equivalents (BSEs) suffer from a number of drawbacks, including the time required to generate autologous BSEs, poor allogeneic BSE survival, and physical limitations such as mass transfer issues. Additionally, different types of skin wounds require different BSE designs. MicroRNA has recently emerged as a new and exciting field of RNA interference that can overcome the barriers of BSE design. MicroRNA can regulate cellular behavior, change the bioactive milieu of the skin, and be delivered to skin tissue in a number of ways. While it is still in its infancy, the use of microRNAs in skin tissue engineering offers the opportunity to both enhance and expand a field for which there is still a vast unmet clinical need. Here we give a review of skin tissue engineering, focusing on the important cellular processes, bioactive mediators, and scaffolds. We further discuss potential microRNA targets for each individual component, and we conclude with possible future applications.

Meissner-levitated micro-systems
Coombs, T.A.; Samad, I.; Hong, Z.; Eves, D.; Rastogi, A.
Advanced silicon processing techniques developed for the Very Large Scale Integration (VLSI) industry have been exploited in recent years to enable the production of micro-fabricated moving mechanical systems known as Micro Electro Mechanical Systems (MEMS). These devices offer advantages in terms of cost, scalability and robustness over their preceding equivalents. Cambridge University has worked for many years on the investigation of high temperature superconductors (HTS) in flywheel energy storage applications. This experience is now being used to research superconducting micro-bearings for MEMS, whereby circular permanent magnet arrays are levitated and spun above a superconductor to produce bearings suitable for motors and other micron-scale devices. The novelty of the device lies in the fact that the rotor is levitated into position by Meissner flux exclusion, whilst stability is provided by flux pinned within the body of the superconductor.
This work includes: the investigation of the properties of various magnetic materials, their fabrication processes and their suitability for MEMS; and finite element analysis of the interaction between the magnetic materials and YBCO to determine the stiffness and height of levitation. Finally, a micro-motor based on the above principles is currently being fabricated within the group.

3D Programmable Micro Self Assembly
Bohringer, Karl F; Parviz, Babak A; Klavins, Eric
We have developed a "self assembly tool box" consisting of a range of methods for micro-scale self-assembly in 2D and 3D. We have shown physical demonstrations of simple 3D self-assemblies which lead …

MEMS Micro-Valve for Space Applications
Chakraborty, I.; Tang, W.C.; Bame, D.P.; Tang, T.K.
We report on the development of a Micro-ElectroMechanical Systems (MEMS) valve that is designed to meet the rigorous performance requirements for a variety of space applications, such as micropropulsion, in-situ chemical analysis of other planets, or micro-fluidics experiments in micro-gravity. These systems often require very small yet reliable silicon valves with extremely low leak rates and long shelf lives. Also, they must survive the perils of space travel, which include unstoppable radiation, monumental shock and vibration forces, as well as extreme variations in temperature. Currently, no commercial MEMS valve meets these requirements. We at JPL are developing a piezoelectric MEMS valve that attempts to address the unique problems of space. We begin with proven configurations that may seem familiar. However, we have implemented some major design innovations that should produce a superior valve. The JPL micro-valve is expected to have an extremely low leak rate and limited susceptibility to particulates, vibration or radiation, as well as a wide operational temperature range.

Micro-PIXE for single cell analysis
Ortega, Richard
Knowledge of the intracellular distribution of biologically relevant metals is important to understand their mechanisms of action in cells, whether for physiological, toxicological or pathological processes. However, the direct detection of trace metals in single cells is a challenging task that requires sophisticated analytical developments. The combination of micro-PIXE with RBS and STIM (Scanning Transmission Ion Microscopy) allows the quantitative determination of trace metal content within sub-cellular compartments. STIM analysis provides high spatial resolution imaging (<200 nm) and excellent mass sensitivity (<0.1 ng). Application of the STIM-PIXE-RBS methodology is absolutely needed when organic mass loss appears during PIXE-RBS irradiation. This combination provides fully quantitative determination of trace element content, expressed in μg/g, which is a quite unique capability for micro-PIXE compared to other micro-analytical methods such as electron and synchrotron x-ray fluorescence. Examples of micro-PIXE studies for sub-cellular imaging of trace elements in various fields of interest are presented: the pathophysiology of trace elements involved in neurodegenerative diseases such as Parkinson's disease, and the toxicology of metals such as cobalt.

MICRO AUTO GASIFICATION SYSTEM: EMISSIONS CHARACTERIZATION
A compact, CONEX-housed waste-to-energy unit, the Micro Auto Gasification System (MAGS), was characterized for air emissions from the burning of military waste types.
The MAGS unit is a dual chamber gasifier with a secondary diesel-fired combustor. Eight tests were conducted with multiple …

Characterization of cellulose nanofibrillation by micro grinding
Sandeep S. Nair; J.Y. Zhu; Yulin Deng; Arthur J. Ragauskas
A fundamental understanding of the morphological development of cellulose fibers during fibrillation using a micro grinder is essential to develop effective strategies for process improvement and to reduce energy consumption. We demonstrated some simple measures for characterizing cellulose fibers fibrillated at different fibrillation times through the grinder. The …

Targeting of microRNAs for therapeutics
Stenvang, Jan; Lindow, Morten; Kauppinen, Sakari
miRNAs (microRNAs) comprise a class of small endogenous non-coding RNAs that post-transcriptionally repress gene expression by base-pairing with their target mRNAs. Recent evidence has shown that miRNAs play important roles in a wide variety of human diseases, such as viral infections, cancer …

Mechanics over micro and nano scales
Chakraborty, Suman
Discusses the fundamentals of mechanics over micro and nano scales at a level accessible to multi-disciplinary researchers, with a balance of mathematical details and physical principles. Covers life sciences and chemistry for use in emerging applications related to mechanics over small scales. Demonstrates the explicit interconnection between various scale issues and the mechanics of miniaturized systems.

MicroRNAs, Regulatory Networks, and Comorbidities
Russo, Francesco; Belling, Kirstine; Jensen, Anders Boeck
MicroRNAs (miRNAs) are small noncoding RNAs involved in the posttranscriptional regulation of messenger RNAs (mRNAs). Each miRNA targets a specific set of mRNAs. Upon binding, the miRNA inhibits mRNA translation or facilitates mRNA degradation. miRNAs are frequently deregulated in several pathologies …

Microfluidics microFACS for Life Detection
Platt, Donald W.; Hoover, Richard B.
A prototype micro-scale Fluorescence Activated Cell Sorter (microFACS) for life detection has been built and is undergoing testing. A functional miniature microfluidics instrument with the ability to remotely distinguish live or dead bacterial cells from abiotic particulates in the ice or permafrost of icy bodies of the solar system would be of fundamental value to NASA. The use of molecular probes to obtain the bio-signature of living or dead cells could answer the most fundamental question of astrobiology: does life exist beyond Earth? The live-dead fluorescent stains to be used in the microFACS instrument function only with biological cell walls. The detection of the cell membranes of living or dead bacteria (unlike PAHs and many other biomarkers) would provide convincing evidence of present or past life. This miniature device rapidly examines large numbers of particulates from a polar ice or permafrost sample, distinguishes living from dead bacterial cells and biological cells from mineral grains and abiotic particulates, and sorts the cells and particulates based on a staining system. Any sample found to exhibit fluorescence consistent with living cells could then be used in conjunction with a chiral labeled release experiment or video microscopy system to seek additional evidence for cellular metabolism or motility. Results of preliminary testing and calibration of the microFACS prototype instrument system with pure cultures and enrichment assemblages of microbial extremophiles are reported.
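The microFACS record above hinges on separating live cells, dead cells and abiotic particulates from fluorescence readouts. The following is a purely illustrative gating sketch, not flight or instrument code; the channel names, thresholds and event values are assumptions made for the example.

```python
# Purely illustrative gating sketch (not instrument code): classifying detected
# particles as live cells, dead cells or abiotic particulates from two
# fluorescence channels. Channel names, thresholds and event values are
# assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Event:
    green: float  # membrane-permeant "live" stain intensity (arbitrary units)
    red: float    # membrane-impermeant "dead" stain intensity (arbitrary units)

def classify(ev: Event, floor: float = 10.0) -> str:
    """Return 'live', 'dead' or 'abiotic' for one detected particle."""
    if ev.green < floor and ev.red < floor:
        return "abiotic"                 # no stain uptake: mineral grain or debris
    return "dead" if ev.red > ev.green else "live"

events = [Event(120.0, 8.0), Event(15.0, 90.0), Event(2.0, 3.0)]
print([classify(e) for e in events])     # ['live', 'dead', 'abiotic']
```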
The Fail-Safe Micro Research Paper
Saunders, Mary Anne
A key element in a research paper writing assignment modified for students of English as a second language, to assure their success, is teacher control over most of the process. A chronological plan of action for the micro research project includes these steps: creating an awareness of current events and controversial issues, practicing necessary …

AFM plough YBCO micro bridges: substrate effects
Elkaseh, A.
AFM nanolithography was used as a novel cutting technique to define micro-size YBCO superconducting constrictions. Researchers studied the substrate effects on MgO and STO substrates and showed that the observed Shapiro steps from the bridges on STO …

SU-8 micro Coriolis mass flow sensor
Monge, Rosa; Groenesteijn, Jarno; Alveringh, Dennis; Wiegerink, Remco J.; Lötters, Joost Conrad; Fernandez, Luis J.
This work presents the modelling, design, fabrication and test of the first micro Coriolis mass flow sensor fully fabricated in SU-8 by photolithography processes. The sensor consists of a channel with rectangular cross-section, with an inner opening of 100 μm × 100 μm, and is actuated at …

Summary of measurements with MicroVent
Dreau, Jerome Le; Heiselberg, Per Kvols; Jensen, Rasmus Lund
This summary presents the main results when MicroVent is used in the cooling case, without heat recovery. Experiments have thus been performed with relatively low inlet air temperatures (below 15 °C). Different solutions have been compared to decrease the risk of draught in the occupied zone: using …

Special Application Thermoelectric Micro Isotope Power Sources
Heshmatpour, Ben; Lieberman, Al; Khayat, Mo; Leanna, Andrew; Dobry, Ted
Promising design concepts for milliwatt (mW) size micro isotope power sources (MIPS) are being sought for use in various space and terrestrial applications, including a multitude of future NASA scientific missions and a range of military applications. To date, the radioisotope power sources (RPS) used on various space and terrestrial programs have provided power levels ranging from one-half to several hundred watts. In recent years, the increased use of smaller spacecraft, planned new scientific space missions by NASA, and special terrestrial and military applications suggest the need for lower power, including mW level, radioisotope power sources. These power sources have the potential to enable such applications as long-lived meteorological or seismological stations distributed across planetary surfaces, surface probes, deep space micro-spacecraft and sub-satellites, terrestrial sensors, transmitters, and micro-electromechanical systems. The power requirements are in the range of 1 mW to several hundred mW. The primary technical requirements for space applications are long life, high reliability, high specific power, and high power density; those for some special military uses are very high power density and specific power, reliability, low radiologically induced degradation, and very low radiation leakage. Thermoelectric conversion is of particular interest because of its technological maturity and proven reliability. This paper summarizes the thermoelectric, thermal, and radioisotope heat source designs and presents the corresponding performance for a number of mW size thermoelectric micro isotope power sources.
Microfluidic production of polymeric micro- and nanoparticles
Serra, C.; Kahn, I.U.; Cortese, B.; de Croon, M.H.J.M.; Hessel, V.; Ono, T.; Anton, N.; Vandamme, Th.
Polymeric micro- and nanoparticles have attracted wide attention from researchers in various areas such as drug delivery, sensing, imaging, cosmetics, diagnostics, and biotechnology. However, processes with conventional equipment do not always allow precise control of their morphology, size, size …

ACCESS TO MICRO CREDIT AND ECONOMIC EMPOWERMENT
Only search-result fragments are available for this record: … market women have a low socio-economic status due to financial and … market women have little or no access to micro credit schemes largely … industry contributes to its poor performance in servicing the needs of the poor, especially … single; 11.1% are either divorced or separated; 33.3% are widows, whereas a larger …

Micro and nanoplatforms for biological cell analysis
Svendsen, Winnie Edith; Castillo, Jaime; Moresco, Jacob Lange
… studies mimicking the in vivo situation are presented, and an example of surface modification for cellular growth is described. Then, novel electronic sensor platforms are discussed and an example of a nanosensor with electronic readout is given, utilizing both micro- and nanotechnology. Finally, an example …

Nonlinear dynamics of biomimetic micro air vehicles
Hou, Y.; Kong, J.
Flapping-wing micro air vehicles (FMAV) are new conceptual air vehicles that mimic the flying modes of birds and insects. They surpass the research fields of traditional airplane design and aerodynamics in application technologies, and initiate the application of MEMS technologies to aviation. This paper studies a micro flapping mechanism that is based on the insect thorax and actuated by electrostatic force. Because there is strong nonlinear coupling between the two physical domains, electrical and mechanical, the static and dynamic characteristics of this system are very complicated. Firstly, the nonlinear dynamic model of the electromechanical coupling system is set up according to the physical model of the flapping mechanism. The dynamic response of the system under constant voltage is studied by numerical methods. Then the effect of damping and initial conditions on the dynamic characteristics of the system is analyzed in phase space. In addition, the dynamic responses of the system under sinusoidal voltage excitation are discussed. The results of this research are helpful for the design, fabrication and application of the micro flapping mechanism of FMAVs, and also for other micro electromechanical systems actuated by electrostatic force.
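The flapping-mechanism record above studies the nonlinear coupling between an electrostatic drive and a mechanical structure. As a generic stand-in for that class of model (not the authors' FMAV model), the sketch below integrates the standard parallel-plate electrostatic actuator equation under a constant voltage; all parameter values are illustrative and chosen to stay below the pull-in voltage.

```python
# Minimal sketch (generic parallel-plate electrostatic actuator, not the
# authors' FMAV model): m*x'' + c*x' + k*x = eps0*A*V^2 / (2*(g - x)^2),
# integrated under a constant drive voltage. Parameter values are illustrative
# and the voltage is kept below the pull-in limit so the gap never closes.
from scipy.integrate import solve_ivp

EPS0 = 8.854e-12               # vacuum permittivity, F/m
m, c, k = 1e-9, 2e-7, 5.0      # mass (kg), damping (N*s/m), stiffness (N/m)
A, g, V = 1e-6, 2e-6, 0.5      # electrode area (m^2), gap (m), drive voltage (V)

def rhs(t, y):
    x, v = y
    f_es = EPS0 * A * V**2 / (2.0 * (g - x)**2)   # attractive electrostatic force
    return [v, (f_es - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 2e-3), [0.0, 0.0], max_step=1e-6)
print(f"displacement after 2 ms: {sol.y[0, -1] * 1e9:.1f} nm of a {g * 1e9:.0f} nm gap")
```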
Micro-powder injection moulding of tungsten
Zeep, B.
For He-cooled divertors as integral components of future fusion power plants, about 300,000 complex-shaped tungsten components are to be fabricated. Tungsten is the favoured material because of its excellent properties (high melting point, high hardness, high sputtering resistance, high thermal conductivity). However, the material's properties cause major problems for large-scale production of complex-shaped components. Due to the resistance of tungsten to mechanical machining, new fabrication technologies have to be developed. Powder injection moulding, a well established shaping technology for large-scale production of complex or even micro-structured parts, might be a suitable method to produce tungsten components for fusion applications, but is not yet commercially available. The present thesis deals with the development of a powder injection moulding process for micro-structured tungsten components. To develop a suitable feedstock, the powder particle properties, the binder formulation and the solid load were optimised. To meet the requirements for replication of micro-patterned cavities, a special target was to define the smallest powder particle size applicable for micro-powder injection moulding. To investigate the injection moulding performance of the developed feedstocks, experiments were successfully carried out using diverse cavities with structural details of micro dimensions. For debinding of the green bodies, a combination of solvent debinding and thermal debinding was adopted for injection moulded tungsten components. To develop a suitable debinding strategy, the solvent debinding time, the heating rate and the binder formulation were varied. To investigate the thermal consolidation behaviour of tungsten components, sinter experiments were carried out on tungsten powders suitable for micro-powder injection moulding. First mechanical tests of the sintered samples showed promising material properties such as a …

Thermal engineering and micro-technology
Kandlikar, S.; Luo, L.; Gruss, A.; Wautelet, M.; Gidon, S.; Gillot, C.; Therme, J.; Marvillet, Ch.; Vidil, R.; Dutartre, D.; Lefebvre, Ph.; Lallemand, M.; Colin, S.; Joulin, K.; Gad el Hak, M.
This document gathers the abstracts and transparencies of five invited lectures of this SFT congress on heat transfer and micro-technologies: flow boiling in microchannels: non-dimensional groups and heat transfer mechanisms (S. Kandlikar); intensification and multi-scale process units (L. Luo and A. Gruss); macro-, micro- and nano-systems: different physics? (M. Wautelet); micro-heat pipes (M. Lallemand); liquid and gas flows inside micro-ducts (S. Colin). The abstracts of the following presentations are also included: electro-thermal writing of nano-scale memory points in a phase change material (S. Gidon); micro-technologies for cooling in micro-electronics (C. Gillot); the Minatec project (J. Therme); importance and trends of thermal engineering in micro-electronics (D. Dutartre); radiant heat transfer at short length scales (K. Joulain); momentum and heat transfer in micro-electromechanical systems (M. Gad-el-Hak).
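Gas flow inside micro-ducts, one of the topics in the conference record above, is conventionally classified by the Knudsen number. The sketch below uses the generic hard-sphere mean-free-path relation and the usual regime boundaries; it is a textbook illustration, not material from the cited presentations, and the molecular diameter and duct sizes are illustrative.

```python
# Textbook sketch (not from the cited presentations): classifying gas flow in a
# micro-duct by Knudsen number Kn = lambda / L, with the hard-sphere mean free
# path. The molecular diameter and duct sizes below are illustrative.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T_K: float, p_Pa: float, d_molecule_m: float) -> float:
    return K_B * T_K / (math.sqrt(2.0) * math.pi * d_molecule_m**2 * p_Pa)

def flow_regime(kn: float) -> str:
    if kn < 1e-3:
        return "continuum (no-slip Navier-Stokes)"
    if kn < 1e-1:
        return "slip flow"
    if kn < 10.0:
        return "transition"
    return "free molecular"

lam = mean_free_path(300.0, 101325.0, 3.7e-10)   # air-like molecule at 1 atm
for L in (100e-6, 1e-6, 100e-9):                  # duct characteristic sizes, m
    kn = lam / L
    print(f"L = {L * 1e6:g} um  ->  Kn = {kn:.3g}  ({flow_regime(kn)})")
```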
Nonequilibrium fluctuations in micro-MHD effects on electrodeposition
Aogaki, Ryoichi; Morimoto, Ryoichi; Asanuma, Miki
In copper electrodeposition under a magnetic field parallel to the electrode surface, the different roles of two kinds of nonequilibrium fluctuations in micro-magnetohydrodynamic (MHD) effects are discussed: symmetrical fluctuations are accompanied by the suppression of three-dimensional (3D) nucleation by micro-MHD flows (the first micro-MHD effect), whereas asymmetrical fluctuations controlling 2D nucleation yield secondary nodules through larger micro-MHD flows (the second micro-MHD effect). Though 3D nucleation with symmetrical fluctuations is always suppressed by the micro-MHD flows, due to the change in the rate-determining step from electron transfer to mass transfer, 2D nucleation with asymmetrical fluctuations newly turns unstable, generating larger micro-MHD flows. As a result, round semi-spherical deposits, i.e. secondary nodules, are produced. Using computer simulation, the mechanism of the second micro-MHD effect is validated.

Influence of some variable parameters on horizontal elliptic micro … (temidayo; Jul 2, 2013)
Only fragments are available for this record: a Department of Mechanical Engineering, Faculty of Engineering affiliation; the keywords "Ellipse, Micro-channels, internal fins, Heat transfer, Fluid flow"; and a snippet mentioning reaction in micro-scale segmented gas-liquid …

Micro-Combined Heat and Power Device Test Facility
NIST has developed a test facility for micro-combined heat and power (micro-CHP) devices to measure their performance over a range of different operating strategies …

MicroRNAs as regulatory elements in psoriasis
Liu Yuan
Psoriasis is a chronic, autoimmune and complex genetic disorder that affects 2-3% of the European population. The symptoms of psoriatic skin are inflammation and raised, scaly lesions. MicroRNAs, which are short, non-protein-coding regulatory RNAs, play critical roles in psoriasis. MicroRNAs participate in nearly all biological processes, such as cell differentiation, development and metabolism. Recent research reveals that numerous novel microRNAs have been identified in skin. Some of these novel microRNAs act as a class of posttranscriptional gene regulators in skin diseases such as psoriasis. In order to gain insight into the biological functions of microRNAs and to verify microRNA biomarkers, we review diverse references on the characterization, profiling and subtypes of microRNAs. Here we share our opinions about how, and which, microRNAs act as regulators in psoriasis.

ICTs and Urban Micro Enterprises: Maximizing Opportunities for Economic Development
… the use of ICTs in micro enterprises and their role in reducing poverty … in its approach to technological connectivity but bottom-up in relation to …

The micro turbine: the MIT example
Ribaud, Y.
The micro turbine study began a few years ago at MIT, with the participation of specialists from different fields. The purpose is the development of a MEMS (micro electro mechanical systems) based gas micro turbine, 1 cm in diameter. Potential applications include micro drone propulsion, electric power generation for portable power sources to replace heavy lithium batteries, satellite motorization, and distributed surface power for boundary layer suction on plane wings.
The manufacturing constraints at such small scales lead to 2-D extruded shapes. The physical constraints stem from viscous effects and from limitations imposed by the 2-D geometry. The time scales are generally shorter than for conventional machines; on the other hand, material properties are better at such length scales. Transposition from conventional turbomachinery laws is no longer applicable and new design methods must be established. The present paper highlights the project progress and the technology breakthroughs.

Doppler speedometer for micro-organisms
Penkov, F.; Tuleushev, A.; Lisitsyn, V.; Kim, S.; Tuleushev, Yu.
Objective of the investigation: development and creation of a Doppler speedometer for micro-organisms which allows evaluation, in real time, of variations in the state of a water suspension of micro-organisms under the effect of chemical, physical and other external actions. Statement of the problem: the main problem is the absence of reliable, user-accessible and simple-to-apply Doppler speedometers for micro-organisms. Nevertheless, correlation Doppler spectrometry in the regime of heterodyning the supporting and cell-scattered laser radiation is well known. The main idea is that the correlation function of photo-current pulses carries information on averages over the assembly of cell velocities. For solving the biological problems, construction of the auto-correlation function in the real-time regime with delay time values of about 100 μs (10 kHz) or higher is needed. Computers of high class manage this problem using only program software. Due to this, one can simplify application of the proposed technique provided the Doppler speedometer for micro-organisms is created on the basis of a Pentium computer. Expected result: a manufactured, operable mock-up of the Doppler speedometer for micro-organisms in the form of an auxiliary computer block which allows receiving information, in real time, on the results of external effects of various nature on the cell assembly in a transparent medium, with a small volume of the studied cell suspension.

Primary expatiation on micro material evidence and authentication
Yang Mingtai; Wang Wen; Wu Lunqiang; Dai Changsong
Micro material evidence is impersonal and concrete material evidence; although its quantity and volume are small, it plays an important role in judicial litigation. In this paper, the basic character, types and formation mechanism of micro material evidence are analyzed and discussed, and the gist of micro material evidence is summarized. This should be helpful to workers in the field for the recognition and application of micro material evidence.

Brushless DC micro-motor with external rotor
Rizzo, M.; Turowski, J.
The increasing use of high-tech electronic components has led researchers to try new solutions in the field of micro-scale electrical machinery. One such solution, described in this paper, consists of the substitution of a conventional mechanical commutator with an electronic one, so as to allow the conversion of an electromagnetic micro-motor into a brushless version using permanent magnets. The comparison of the two micro-motor alternatives shows the clear superiority of the brushless micro-motor.
Thermal conductivity of microPCMs-filled epoxy matrix composites
Su, J.F.; Wang, X.Y.; Huang, Z.; Zhao, Y.H.; Yuan, X.Y.
Microencapsulated phase change materials (microPCMs) have been widely applied in solid matrices as thermal-storage or temperature-controlling functional composites. The thermal conductivity of these microPCMs/matrix composites is an important property that needs to be considered. In this study, a series of microPCMs were fabricated using in situ polymerization with various core/shell ratios and average diameters; the thermal conductivity of the microPCMs/epoxy composites was investigated in detail …

Replacement power supply with micro-hydropower. Case micro central - Pipinta
Gomez, Jorge I.; Hincapie, Luis A.; Woodcock, Edgar; Arregoces, Alvaro
This paper describes the Pipinta micro-hydroelectric plant through its main components and their parameters. An evaluation of the actual power plant costs and a proposed redesign are also given, with prices referenced to the year 2004. A comparison between the price of electricity supplied by the grid and the cost of electricity from this micro-hydro power plant leads to the conclusion that projects of this type are viable in rural areas of Colombia reached by the grid.

The Model of Optimization of Micro Energy (HOMER)
HOMER, the micro-power optimization model, helps you design off-grid and grid-connected systems. You can use HOMER to carry out analyses exploring a wide range of design questions.

Determination of elemental abundances in impact materials by micro-PIXE and micro-SRXRF methods
Uzonyi, I.; Szabo, Gy.; Kiss, A.Z.; Szoeoer, Gy.; Rozsa, P.
The most famous and well-preserved meteorite crater in the world is the Barringer Meteor Crater (Arizona, USA). The meteorite is supposed to be a fragment of a small asteroid of our solar system. During the impact event the matter of the projectile mixed with that of the target rocks, forming breccias, slag and spherules. For the non-destructive characterization of the impact materials, a combined micro-PIXE and micro-SRXRF technique was applied.
DIANA-microT web server: elucidating microRNA functions through target prediction
Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G
Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users can search for targeted genes using different nomenclatures or functional features, such as a gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction, and provides a signal-to-noise ratio and a precision score that help in evaluating the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT.

MicroRNA-target binding structures mimic microRNA duplex structures in humans
Traditionally, researchers match a microRNA guide strand to mRNA sequences using sequence comparisons to predict its potential target genes. However, many of the predictions can be false positives due to limitations in sequence comparison alone. In this work, we consider the association of two related RNA structures that share a common guide strand: the microRNA duplex and the microRNA-target binding structure. We have analyzed thousands of such structure pairs and found that many of them share high structural similarity. Therefore, we conclude that when predicting microRNA target genes, considering just the microRNA guide strand matches to gene sequences may not be sufficient: the microRNA duplex structure formed by the guide strand and its companion passenger strand must also be considered. We have developed software to translate RNA binding structures into encoded representations, and we have also created novel automatic comparison methods utilizing such encoded representations to determine RNA structure similarity. Our software and methods can be utilized in other RNA secondary structure comparisons as well.

Radar micro-Doppler signatures: processing and applications
Chen, Victor C; Miceli, William J
Radar Micro-Doppler Signatures: Processing and Applications concentrates on the processing and application of radar micro-Doppler signatures in real world situations, providing readers with a good working knowledge of a variety of applications of radar micro-Doppler signatures.

Methods for the measurement of micro material evidence
Micro material evidence has been used successfully in judicial litigation, which must be based on impersonal authentication with advanced and appropriate methods. The basic principles, main traits, applicable ranges, analysis minima and sensitivity limits of twelve types of instruments used in micro material evidence analysis are described. These could supply suitable optional methods for micro material evidence work.

Micro digital sun sensor: system in a package
Boom, C.W. de; Leijtens, J.A.P.; Duivenbode, L.M.H. van; Heiden, N. van der
A novel micro Digital Sun Sensor (μDSS) is under development in the frame of a micro systems technology (MST) development program (Microned) of the Dutch Ministry of Economic Affairs. It uses available micro system technologies in combination with the implementation of a dedicated solar cell for …

Informal Micro-Enterprises and Solid Waste Collection: The Case …
The research used both primary and secondary data sources. One hundred sixty micro-enterprise units were included in the survey. These account for about 35% of the total micro-enterprises available in the city. Stratified random sampling was employed based on the number and type of micro-enterprises available in each …
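The two microRNA target-prediction records above both build on seed matching between the miRNA guide strand and 3' UTR sequences. The sketch below illustrates only that core seed-match step (canonical 7mer sites); it is not the DIANA-microT algorithm, which scores many additional features, and the UTR fragment is invented for the example.

```python
# Illustrative sketch of canonical miRNA seed matching: find positions in a
# 3' UTR whose 7-mer equals the reverse complement of the guide-strand seed
# (nucleotides 2-8). This is NOT the DIANA-microT algorithm, which scores many
# additional features; the UTR fragment below is invented for the example.
COMP = str.maketrans("ACGU", "UGCA")

def seed_sites(guide: str, utr: str) -> list:
    seed = guide[1:8]                        # guide nucleotides 2-8
    site = seed.translate(COMP)[::-1]        # reverse complement on the target
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

guide = "UAGCUUAUCAGACUGAUGUUGA"             # guide strand as listed for miR-21-5p
utr = "AAGCUAAGCUUAAUAAGCUAAGCUAUAAGCUA"     # invented 3' UTR fragment
print(seed_sites(guide, utr))                # e.g. [12, 24]
```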
MicroRNA-target binding structures mimic microRNA duplex structures in humans
Traditionally, researchers match a microRNA guide strand to mRNA sequences using sequence comparisons to predict its potential target genes. However, many of the predictions can be false positives due to limitations in sequence comparison alone. In this work, we consider the association of two related RNA structures that share a common guide strand: the microRNA duplex and the microRNA-target binding structure. We have analyzed thousands of such structure pairs and found that many of them share high structural similarity. Therefore, we conclude that when predicting microRNA target genes, considering just the microRNA guide strand matches to gene sequences may not be sufficient -- the microRNA duplex structure formed by the guide strand and its companion passenger strand must also be considered. We have developed software to translate RNA binding structure into encoded representations, and we have also created novel automatic comparison methods utilizing such encoded representations to determine RNA structure similarity. Our software and methods can be utilized in other RNA secondary structure comparisons as well.

Radar micro-Doppler signatures: processing and applications
Chen, Victor C; Miceli, William J
Radar Micro-Doppler Signatures: Processing and Applications concentrates on the processing and application of radar micro-Doppler signatures in real world situations, providing readers with a good working knowledge of a variety of applications of radar micro-Doppler signatures.

Methods for the measurement of micro material evidence
Micro material evidence has been used successfully in litigation; such use must be based on objective authentication with advanced and appropriate methods. The basic principle, main traits, applicable range, analysis minimum and sensitivity limit of twelve types of instruments used in micro material evidence analysis are described. These could supply suitable optional methods for micro material evidence work. (authors)

Micro digital sun sensor: system in a package
Boom, C.W. de; Leijtens, J.A.P.; Duivenbode, L.M.H. van; Heiden, N. van der
A novel micro digital sun sensor (μDSS) is under development in the frame of a micro systems technology (MST) development program (Microned) from the Dutch Ministry of Economic Affairs. Use of available micro system technologies in combination with the implementation of a dedicated solar cell for ...

Informal Micro-Enterprises and Solid Waste Collection: The Case ...
The research used both primary and secondary data sources. One hundred sixty micro-enterprise units were included in the survey. These account for about 35% of the total micro-enterprises available in the city. Stratified random sampling was employed based on the number and type of micro-enterprises available in each ...

MicroRNAs: role and therapeutic targets in viral hepatitis
van der Ree, Meike H.; de Bruijne, Joep; Kootstra, Neeltje A.; Jansen, Peter Lm; Reesink, Hendrik W.
MicroRNAs regulate gene expression by binding to the 3'-untranslated region (UTR) of target messenger RNAs (mRNAs). The importance of microRNAs has been shown for several liver diseases, for example viral hepatitis. MicroRNA-122 is highly abundant in the liver and is involved in the regulation of ...

Radar Micro-Doppler classification of Mini-UAVs
Harmanny, R.L.A.; Prémel-Cabic, G.; Wit, J.J.M.
The radar micro-Doppler signature of a target depends on its micro-motion, i.e., the motion of parts of a target relative to the motion of the target as a whole. These micro-motions are very characteristic considering different target classes, e.g., the slow pendulum-like motion of a bird's wings ...

Micro-processus et macro-structures (Micro-processes and macro-structures: notes on the articulation between different levels of analysis)
Aaron Victor Cicourel
Traditional sociological approaches have defined social macro-structures as a particular level of social reality, to be distinguished from the micro-episodes of social action. This has led them to conceive of these macro-structures, and to conduct research on them, more or less independently of the observable practices of everyday life. Cicourel argues that (macro-)social facts are not simply given, but emerge from the routine practices of everyday life. The macro, in the sense of summarized, decontextualized, normalized and typified descriptions, is a typical product of the interactive and organizational procedures that transform micro-events into macro-social structures. Thus, a precondition for integrating micro- and macro-social phenomena into our theory and methodology is the identification of the processes that contribute to the creation of macro-structures through routine inferences, interpretations and summarizing procedures. The text also shows that the differences between micro-sociological approaches parallel those between micro and macro approaches. By focusing on small fragments of conversational interaction, some micro-sociological work tends to ignore what informs these conversational interactions for the participants themselves. The decontextualized accounts produced by such methods resemble the decontextualization resulting from macro-sociological aggregation procedures. Against this, Cicourel argues for comparative databases that not only include the context of face-to-face interactions, but also study social phenomena systematically across different contexts.

Micro-CHP Systems for Residential Applications
Timothy DeValve; Benoit Olsommer
Integrated micro-CHP (cooling, heating and power) system solutions represent an opportunity to address all of the following requirements at once: conservation of scarce energy resources, moderation of pollutant release into our environment, and assured comfort for home-owners. The objective of this effort was to establish strategies for development, demonstration, and sustainable commercialization of cost-effective integrated CHP systems for residential applications. A unified approach to market and opportunity identification, technology assessment, specific system designs, and adaptation to modular product-platform component conceptual designs was employed. UTRC's recommendation to the U.S. Department of Energy is to go ahead with the execution of the proposed product development and commercialization strategy plan under Phase II of this effort. Recent indicators show the emergence of micro-CHP. More than 12,000 micro-CHP systems have been sold worldwide so far, around 7,500 of them in 2004. Market projections predict worldwide market growth of over 35% per year. In 2004 the installations were mainly in Europe (73.5%) and in Japan (26.4%); the market in North America is almost non-existent (0.1%). High energy consumption, high energy expenditure, a large spark-spread (i.e., the difference between electricity and fuel costs), large square footage, and high income are the key conditions for market acceptance. Today, these conditions are best found in the states of New York, Pennsylvania, New Jersey, Wisconsin, Illinois, Indiana, Michigan, Ohio, and the New England states. A multiple-stage development plan is proposed to address risk mitigation. These stages include concept development and supplier engagement, component development, system integration, system demonstration, and field trials. A two-stage commercialization strategy is suggested based on two product versions. The first version, a heat and power system named Micro-Cogen, provides the heat and essential electrical power to the ...

Power and hydrogen production from ammonia in a micro-thermophotovoltaic device integrated with a micro-reformer
Um, Dong Hyun; Kim, Tae Young; Kwon, Oh Chae
Power and hydrogen (H2) production by burning and reforming ammonia (NH3) in a micro-TPV (microscale thermophotovoltaic) device integrated with a micro-reformer is studied experimentally. A heat-recirculating micro-emitter with cyclone and helical adapters, which enhance the residence time of the fed fuel-air mixtures and the uniformity of burning, burns H2-added NH3-air mixtures. A micro-reformer that converts NH3 to H2 using ruthenium as a catalyst surrounds the micro-emitter as a heat source. The micro-reformer is surrounded by a chamber, the inner and outer walls of which carry gallium antimonide photovoltaic cells and cooling fins. For the micro-reformer-integrated micro-TPV device, a maximum overall efficiency of 8.1% with electrical power of 4.5 W and a maximum NH3 conversion rate of 96.0% with an H2 production rate of 22.6 W (based on lower heating value) are obtained, indicating that the overall efficiency is remarkably enhanced compared with 2.0% when the micro-TPV device operates alone. This supports the potential of improving the overall efficiency of a micro-TPV device by integrating it with a micro-reformer. Also, the feasibility of using NH3 as a carbon-free fuel for both burning and reforming in practical micro power and H2 generation devices has been demonstrated.
Highlights:
• Performance of a micro-TPV device integrated with a micro-reformer is evaluated.
• Feasibility of using NH3-H2 blends in the integrated system has been demonstrated.
• Integration with a micro-reformer improves the performance of the micro-TPV device.
• A maximum overall efficiency of 8.1% is found, compared with 2.0% without integration.

Study for increasing micro-drill reliability by vibrating drilling
Yang Zhaojun; Li Wei; Chen Yanhong; Wang Lijiang
A study for increasing micro-drill reliability by vibrating drilling is described. Under the experimental conditions of this study it is observed, from reliability testing and the fitting of a life-distribution function, that the lives of micro-drills under ordinary drilling follow the log-normal distribution and the lives of micro-drills under vibrating drilling follow the Weibull distribution. Calculations for reliability analysis show that vibrating drilling can increase the lives of micro-drills and correspondingly reduce the scatter of drill lives. Therefore, vibrating drilling increases the reliability of micro-drills.
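The micro-drill study above selects life distributions (log-normal for ordinary drilling, Weibull for vibrating drilling) by fitting lifetime data. A minimal sketch of that kind of comparison using SciPy's standard estimators; the drill lives below are invented:

# Fit log-normal and Weibull life distributions and compare by log-likelihood.
import numpy as np
from scipy import stats

lives = np.array([118., 135., 142., 160., 171., 188., 205., 240.])  # hypothetical drill lives

lognorm_params = stats.lognorm.fit(lives, floc=0)        # log-normal, location fixed at 0
weibull_params = stats.weibull_min.fit(lives, floc=0)    # two-parameter Weibull

ll_lognorm = np.sum(stats.lognorm.logpdf(lives, *lognorm_params))
ll_weibull = np.sum(stats.weibull_min.logpdf(lives, *weibull_params))
print(f"log-likelihood: log-normal {ll_lognorm:.2f}, Weibull {ll_weibull:.2f}")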
Micro-jet cooling after welding has so far been tested only for MIG and MAG processes, and only argon, helium and nitrogen have been tested as micro-jet gases. This paper presents information about gas mixtures for micro-jet cooling after welding. Information is given about gas mixtures that could be chosen both for MAG welding and for the micro-jet process, together with the main findings about the influence of various micro-jet gas mixtures on the metallographic structure of steel welds. Mechani...

MicroRNA expression profiling of the porcine developing brain
Podolska, Agnieszka; Kaczkowski, Bogumil; Busk, Peter Kamp
MicroRNAs are small, non-coding RNA molecules that regulate gene expression at the post-transcriptional level and play an important role in the control of developmental and physiological processes. In particular, the developing brain contains an impressive diversity of microRNAs. Most micro... and the growth curve when compared to humans. Considering these similarities, studies examining microRNA expression during porcine brain development could potentially be used to predict the expression profile and role of microRNAs in the human brain...

Sequential micro and ultrafiltration of distillery wastewater
Vasić Vesna M.
Water reuse and recycling, wastewater treatment, drinking water production and environmental protection are the key challenges for the future of our planet. Membrane separation technologies for the removal of all suspended solids and a fraction of dissolved solids from wastewaters are becoming more and more promising. These processes are also playing a major role in wastewater purification systems because of their high potential for recovery of water from many industrial wastewaters. The aim of this work was to evaluate the application of micro and ultrafiltration for distillery wastewater purification in order to produce water suitable for reuse in the bioethanol industry. The results of the analyses of the permeate obtained after micro and ultrafiltration showed that the content of pollutants in distillery wastewater was significantly reduced. The removal efficiency for chemical oxygen demand, dry matter and total nitrogen was 90%, 99.2% and 99.9%, respectively. Suspended solids were completely removed from the stillage.

Verification of Tolerance Chains in Micro Manufacturing
Gasparin, Stefania
...1 – 200 μm). Finally, an optical component is investigated with the purpose of suggesting a quality control approach for the micro-manufacturing process through control of the product. It is a useful method to adopt when the aim is to detect and quantify inconsistencies or incompatibilities during a process ... on dimensional and geometrical metrology. If the measurement uncertainty is large compared to the tolerance interval, a small conformance zone is left for process variation. Therefore particular attention has to be paid to the instrument capabilities in order to reduce the measurement uncertainty. Different ... chain. In this way the process parameters can be adjusted in order to fulfil the requirements of the final micro-product...

Vision-guidance of micro- and nanorobots
Wortmann, T.; Dahmen, C.; Fatikow, S.
This paper presents a selection of image processing methods and algorithms which are needed to enable the reliable automation of robotic tasks at the micro and nanoscale. Application examples are the automatic assembly of new nanoscale electronic elements or automatic testing of material properties. Due to the very small object dimensions targeted here, the scanning electron microscope is the appropriate image sensor. The methods described in this paper can be categorized into procedures for object recognition and object tracking. Object recognition deals with the problem of finding and labeling nanoscale objects in an image scene, whereas tracking is the process of continuously following the movement of a specific object. Both methods carried out subsequently enable fully automated robotic tasks at the micro- and nanoscale. A selection of algorithms is demonstrated and found suitable.

Micro-system inertial sensing technology overview
Allen, James Joe
The purpose of this report is to provide an overview of micro-system technology as it applies to inertial sensing. Transduction methods are reviewed, with capacitive and piezoresistive being the most often used in COTS micro-electro-mechanical system (MEMS) inertial sensors. Optical transduction is the most recent transduction method having a significant impact on improving sensor resolution. A few other methods are mentioned which are at an R&D stage and may eventually allow MEMS inertial sensors to become viable as navigation-grade sensors. The accelerometer, gyroscope and gravity gradiometer are the types of inertial sensors reviewed in this report. Their methods of operation and a sampling of COTS sensors and grades are reviewed as well.

Energy Conversion at Micro and Nanoscale
Gammaitoni, Luca
Energy management is considered a task of strategic importance in contemporary society. It is a common fact that the most successful economies of the planet are the economies that can transform and use large quantities of energy. In this talk we will discuss the role of energy with specific attention to the processes that happen at the micro and nanoscale. The description of energy conversion processes at these scales requires approaches that go well beyond the standard equilibrium thermodynamics of macroscopic systems. In this talk we will address from a fundamental point of view the physics of the dissipation of energy and will focus our attention on the energy transformation processes that take place in modern micro and nano information and communication devices.

MIDN: A spacecraft micro-dosimeter mission
Pisacane, V. L.; Ziegler, J. F.; Nelson, M. E.; Caylor, M.; Flake, D.; Heyen, L.; Youngborg, E.; Rosenfeld, A. B.; Cucinotta, F.; Zaider, M.; Dicello, J. F.
MIDN (Micro-dosimetry instrument) is a payload on the MidSTAR-I spacecraft (Midshipman Space Technology Applications Research) under development at the United States Naval Academy. MIDN is a solid-state system being designed and constructed to measure micro-dosimetric spectra to determine radiation quality factors for space environments. Radiation is a critical threat to the health of astronauts and to the success of missions in low-Earth orbit and space exploration. The system will consist of three separate sensors, one external to the spacecraft, one internal and one embedded in polyethylene. Design goals are a mass below 3 kg and power below 2 W. The MidSTAR-I mission in 2006 will provide an opportunity to evaluate a preliminary version of this system. Its low power and mass make it useful for the International Space Station and manned and unmanned interplanetary missions as a real-time system to assess and alert astronauts to enhanced radiation environments. (authors)

Micro Z - in nuclear power generation
Micro-Z microprocessor boiler control systems have been adopted by the French state electricity utility for their next group of 1300 MW units, replacing the analogue system used in earlier units. Micro-Z is a fully distributed digital control and regulation system and, in contrast to earlier digital systems with centralised data processing, processing is carried out in localised sub-units such as regulators and actuators. In the system adopted by EdF for the 1300 MW units, digital communication with centralised and common control stations and a supervisory computer was not required. The system is made up of control cards, interface modules, separate stations, and configuration and control hardware and software. There are no fundamental differences in general system design between analogue and digital systems; the differences are found in the digitization of signals and the concentration of processing at board level. (U.K.)

Lignin from Micro- to Nanosize: Applications
Stefan Beisl
Micro- and nanosize lignin has recently gained interest due to improved properties compared to the standard lignin available today. As the second most abundant biopolymer after cellulose, lignin is readily available but used for rather low-value applications. This review focuses on the application of micro- and nanostructured lignin in final products or processes that all show potential for high added value. The fields of application range from improvement of the mechanical properties of polymer nanocomposites, bactericidal and antioxidant properties and impregnations, to hollow lignin drug carriers for hydrophobic and hydrophilic substances. Also, carbonization of lignin nanostructures can lead to high-value applications such as use in supercapacitors for energy storage. The properties of the final product depend on the surface properties of the nanomaterial and, therefore, on factors like the lignin source, extraction method, and production/precipitation methods, as discussed in this review.

Mini and Micro Propulsion for Medical Swimmers
JianFeng
Mini and micro robots, which can swim in an underwater environment, have drawn widespread research interest because of their potential applicability to the medical or biological fields, including delivery and transportation of bio-materials and drugs, bio-sensing, and bio-surgery. This paper reviews recent ideas and developments for these types of self-propelling devices, ranging from the millimeter scale down to the micro and even the nano scale. Specifically, this review article emphasizes various propulsion principles, including methods utilizing smart actuators, external magnetic/electric/acoustic fields, bacteria, chemical reactions, etc. In addition, we compare the propelling speed range, directional control schemes, and advantages of the above principles.

Biological stoichiometry in tumor micro-environments
Irina Kareva
Tumors can be viewed as evolving ecological systems, in which heterogeneous populations of cancer cells compete with each other and with somatic cells for space and nutrients within the ecosystem of the human body. According to the growth rate hypothesis (GRH), increased phosphorus availability in an ecosystem, such as the tumor micro-environment, may promote selection within the tumor for a more proliferative and thus potentially more malignant phenotype. The applicability of the GRH to tumor growth is evaluated using a mathematical model, which suggests that limiting phosphorus availability might promote intercellular competition within a tumor, and thereby delay disease progression. It is also shown that a tumor can respond differently to changes in its micro-environment depending on the initial distribution of clones within the tumor, regardless of its initial size. This suggests that the composition of the tumor as a whole needs to be evaluated in order to maximize the efficacy of therapy.

MICRO-CHP System for Residential Applications
Joseph Gerstmann
This is the final report of progress under Phase I of a project to develop and commercialize a micro-CHP system for residential applications that provides electrical power, heating, and cooling for the home. This is the first phase of a three-phase effort in which the residential micro-CHP system will be designed (Phase I), developed and tested in the laboratory (Phase II), and further developed and field tested (Phase III). The project team consists of Advanced Mechanical Technology, Inc. (AMTI), responsible for system design and integration; Marathon Engine Systems, Inc. (MES), responsible for design of the engine-generator subsystem; AO Smith, responsible for design of the thermal storage and water heating subsystems; Trane, a business of American Standard Companies, responsible for design of the HVAC subsystem; and AirXchange, Inc., responsible for design of the mechanical ventilation and dehumidification subsystem.

Micro-separation toward systems biology
Liu, Bi-Feng; Xu, Bo; Zhang, Guisen; Du, Wei; Luo, Qingming
Current biology is experiencing a transformation in logic or philosophy that forces us to reevaluate the concept of the cell, tissue or entire organism as a collection of individual components. Systems biology, which aims at understanding biological systems at the systems level, is an emerging research area that involves interdisciplinary collaboration among the life sciences, computational and mathematical sciences, systems engineering, and analytical technology. For analytical chemistry, developing innovative methods to meet the requirements of systems biology represents new challenges as well as opportunities and responsibilities. In this review, systems biology-oriented micro-separation technologies are introduced for comprehensive profiling of the genome, proteome and metabolome, characterization of biomolecular interactions and single cell analysis, such as capillary electrophoresis, ultra-thin layer gel electrophoresis, micro-column liquid chromatography, and their multidimensional combinations, parallel integrations, microfabricated formats, and nano technology involvement. Future challenges and directions are also suggested.

Cavity Pressure Behaviour in Micro Injection Moulding
Process monitoring of micro injection moulding (µIM) is of crucial importance for analysing the effect of different parameter settings on the process and for assessing its quality. Quality factors related to cavity pressure can provide useful information directly connected with the dynamics of the process ... as well as with the filling of the cavity by the polymer melt. In this paper, two parameters derived from cavity pressure over time (i.e. pressure work) ... The influence of four µIM parameters (melt temperature, mould temperature, injection speed, and packing pressure) on the two pressure-related outputs ... has been investigated by moulding a micro fluidic component in three different polymers (PP, ABS, PC) using the design of experiments approach. Similar trends, such as the effect of a higher injection speed in decreasing the pressure work and of a lower temperature in decreasing the pressure rate, have been ...
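A minimal sketch related to the cavity-pressure abstract above: from a sampled cavity-pressure trace, compute a "pressure work" value (here taken as the time integral of pressure) and a peak pressure rate (maximum dp/dt). The exact definitions used in the study may differ, and the trace below is synthetic:

# Pressure work and peak pressure rate from a sampled cavity-pressure curve.
import numpy as np

t = np.linspace(0.0, 0.5, 501)                    # time [s]
p = 60e5 * np.exp(-((t - 0.1) / 0.08) ** 2)       # synthetic cavity pressure [Pa]

pressure_work = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))   # trapezoidal integral of p(t) [Pa*s]
pressure_rate = np.max(np.gradient(p, t))                     # steepest rise [Pa/s]
print(f"pressure work = {pressure_work:.3e} Pa*s, peak rate = {pressure_rate:.3e} Pa/s")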
Micro-generation network connection (renewables)
Thornycroft, J.; Russell, T.; Curran, J.
The drive to reduce emissions of carbon dioxide will result in an increase in the number of small generation units seeking connection to the electric power distribution network. The objectives of this study were to consider connection issues relating to micro-generation from renewables and their integration into the UK distribution network. The document is divided into two sections. The first section describes the present system, which includes input from micro-generation, the technical impacts and the financial considerations. The second part discusses technical, financial and governance options for the future. A summary of preferred options and recommendations is given. The study was carried out by the Halcrow Group Ltd under contract to the DTI.

Forming of Polymeric Tubular Micro-components
Qin, Yi; Zhao, Jie; Anyasodor, Gerald
This chapter is intended to provide an overview of three nontraditional shaping technologies for the forming of polymeric micro-tubes, which are hot embossing, blow molding, and cross rolling, as well as the realization of a process chain and the integration of a modular machine-based manufacturing ... platform for the production of functional polymeric tubular micro-components. The chapter gives background on the current market and process development trends, followed by a description of materials, process configuration, tool design and machine development for each processing technology, as well ... as a strategy for integration of the technologies and equipment into a common platform. Finally, potential applications of the technologies and facilities developed are highlighted...

Deformations in micro extrusion of metals
J. Piwnik
Production technologies for metallic elements of small dimensions have been known for a long time. Such parts are produced by machining methods: turning, milling, polishing. Recently, methods for manufacturing small details by forming have been developed - microforming. This process is characterized by high dimensional accuracy, surface smoothness of the received items and a high production rate. When a forming process is scaled down to micro dimensions, the microstructure of the workpiece, the surface topology of the workpiece and that of the tooling remain unchanged, and a size effect appears. This paper analyses the specifics of metal extrusion at the micro scale. To determine the impact of tool surface roughness on the deformation process, a numerical model of roughness as a triangular wave was developed. The paper describes the influence of the wave presence on material flow, and the impact of the forming conditions on extrusion forces is also characterized.

Payment mechanisms for micro-generation
Choudhury, W.; Andrews, S.
The Department of Trade and Industry commissioned a study into payment options for the increasing number of micro-generators (in domestic dwellings and where the generating capacity is not more than 5 kW, i.e. essentially micro-CHP and photovoltaics) supplying power to the national distribution network. It is shown that small generators will impact on all aspects of the industry and that connection will need to be simplified. The network will call for a more actively managed regime. Metering will need to be more sophisticated, and agreement will have to be reached on how costs should be allocated when demand is low and surplus electricity is exported to the system. Three options were considered to be viable.

Machine learning for micro-tomography
Parkinson, Dilworth Y.; Pelt, Daniël M.; Perciano, Talita; Ushizima, Daniela; Krishnan, Harinarayan; Barnard, Harold S.; MacDowell, Alastair A.; Sethian, James
Machine learning has revolutionized a number of fields, but many micro-tomography users have never used it for their work. The micro-tomography beamline at the Advanced Light Source (ALS), in collaboration with the Center for Applied Mathematics for Energy Research Applications (CAMERA) at Lawrence Berkeley National Laboratory, has now deployed a series of tools to automate data processing for ALS users using machine learning. This includes new reconstruction algorithms, feature extraction tools, and image classification and recommendation systems for scientific images. Some of these tools run in automated pipelines that operate on data as it is collected, or as stand-alone software. Others are deployed on computing resources at Berkeley Lab, from workstations to supercomputers, and made accessible to users through either scripting or easy-to-use graphical interfaces. This paper presents a progress report on this work.

Micro-inverter solar panel mounting
Morris, John; Gilchrist, Phillip Charles
Processes, systems, devices, and articles of manufacture are provided. Each may include adapting micro-inverters initially configured for frame-mounting to mounting on a frameless solar panel. This securement may include using an adaptive clamp or several adaptive clamps secured to a micro-inverter or its components, and using compressive forces applied directly to the solar panel to secure the adaptive clamp and the components to the solar panel. The clamps can also include compressive spacers and safeties for managing the compressive forces exerted on the solar panels. Friction zones may also be used for managing slipping between the clamp and the solar panel during or after installation. Adjustments to the clamps may be carried out through various means and by changing the physical size of the clamps themselves.

High-speed micro-electro-discharge machining
Chandrasekar, Srinivasan (School of Industrial Engineering, West Lafayette, IN); Moylan, Shawn P. (School of Industrial Engineering, West Lafayette, IN); Benavides, Gilbert Lawrence
When two electrodes are in close proximity in a dielectric liquid, application of a voltage pulse can produce a spark discharge between them, resulting in a small amount of material removal from both electrodes. Pulsed application of the voltage at discharge energies in the range of micro-Joules results in the continuous material removal process known as micro-electro-discharge machining (micro-EDM). Spark erosion by micro-EDM provides significant opportunities for producing small features and micro-components such as nozzle holes, slots, shafts and gears in virtually any conductive material. If the speed and precision of micro-EDM processes can be significantly enhanced, then they have the potential to be used for a wide variety of micro-machining applications, including the fabrication of microelectromechanical system (MEMS) components. Toward this end, a better understanding of the impact the various machining parameters have on material removal has been established through a single-discharge study of micro-EDM and a parametric study of small hole making by micro-EDM. The main avenues for improving the speed and efficiency of the micro-EDM process are in the areas of more controlled pulse generation in the power supply and more controlled positioning of the tool electrode during the machining process. Further investigation of the micro-EDM process in three dimensions leads to important design rules, specifically the smallest feature size attainable by the process.

microRNAs in mycobacterial disease: friend or foe?
Manali D Mehta
As the role of microRNA in all aspects of biology continues to be unraveled, the interplay between microRNAs and human disease is becoming clearer. It should come as no surprise that microRNAs play a major part in the outcome of infectious diseases, since early work implicated microRNAs as regulators of the immune response. Here, we provide a review of how microRNAs influence the course of mycobacterial infections, which cause two of humanity's most ancient infectious diseases: tuberculosis and leprosy. Evidence derived from profiling and functional experiments suggests that regulation of specific microRNAs during infection can either enhance the immune response or facilitate pathogen immune evasion. Now, it remains to be seen whether the manipulation of host cell microRNA profiles can be an opportunity for therapeutic intervention in these difficult-to-treat diseases.

Experimental and numerical study of micro deep drawing
Luo Liang
Micro forming is a key technology for the industrial miniaturisation trend, and micro deep drawing (MDD) is a typical micro forming method. It has great advantages compared to other micro manufacturing methods, such as net forming ability, mass production potential, high product quality and the capacity to fabricate complex 3D metal products. Meanwhile, it faces difficulties, for example the so-called size effects, once scaled down to the micro scale. To investigate and to solve the problems in MDD, a combined micro blanking-drawing machine is employed and an explicit-implicit micro deep drawing model with a Voronoi blank model is developed. Through heat treatment different grain sizes can be obtained, which affect the material's properties and, consequently, the drawing process parameters, as well as the quality of the produced cups. Further, a Voronoi model can provide detailed material information in simulation, and numerical simulation results are in accordance with experimental results.
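A minimal sketch of the idea behind a "Voronoi blank model" as mentioned in the micro deep drawing abstract above: tessellate a circular blank into Voronoi cells that stand in for individual grains. This only illustrates the geometry step under assumed dimensions, not the authors' finite-element implementation:

# Voronoi tessellation of a circular blank as a simple grain map.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
blank_radius = 0.5                                  # mm, hypothetical blank size
n_grains = 200

# sample grain seed points uniformly inside the circular blank
r = blank_radius * np.sqrt(rng.random(n_grains))
theta = 2 * np.pi * rng.random(n_grains)
seeds = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

vor = Voronoi(seeds)                                # Voronoi cells = grains
print(f"{len(vor.point_region)} grains, {len(vor.vertices)} cell vertices")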
Dynamics of micro-bubble sonication inside a phantom vessel
Qamar, Adnan; Samtaney, Ravi; Bull, Joseph L.
A model for sonicated micro-bubble oscillations inside a phantom vessel is proposed. The model is not a variant of the conventional Rayleigh-Plesset equation and is obtained from reduced Navier-Stokes equations. The model relates the micro-bubble oscillation dynamics with geometric and acoustic parameters in a consistent manner. It predicts micro-bubble oscillation dynamics as well as micro-bubble fragmentation when compared to the experimental data. For large micro-bubble radius to vessel diameter ratios, predictions are damped, suggesting a breakdown of the inherent modeling assumptions for these cases. The micro-bubble response to acoustic parameters is consistent with experiments and provides physical insight into the micro-bubble oscillation dynamics.

Fabrication of high aspect ratio micro electrode by using EDM
Elsiti, Nagwa Mejid; Noordin, M.Y.; Alkali, Adam Umar
The electrical discharge machining (EDM) process inherits characteristics that make it a promising micro-machining technique. Micro electrical discharge machining (micro-EDM) is a derived form of EDM, which is commonly used to manufacture micro and miniature parts and components using conventional electrical discharge machining fundamentals. Moving block electro discharge grinding (moving BEDG) is one of the processes that can be used to fabricate micro-electrodes. In this study, a conventional die-sinker EDM machine was used to fabricate the micro-electrode. Modifications were made to the moving BEDG, which included changing the direction of movements and controlling the gap in one electrode. Consequently, current was controlled through the use of roughing, semi-finishing and finishing parameters. Finally, a high aspect ratio micro-electrode with a diameter of 110.49 μm and a length of 6000 μm was fabricated. (paper)

Surface Micro Topography Replication in Injection Moulding
Arlø, Uffe Rolf; Hansen, Hans Nørgaard; Kjær, Erik Michael
The surface micro topography of injection moulded plastic parts can be important for aesthetic and technical reasons. The quality of replication of mould surface topography onto the plastic surface depends, among other factors, on the process conditions. A study of this relationship has been ... carried out with rough EDM (electrical discharge machining) mould surfaces, a PS grade, and by applying established three-dimensional topography parameters. Significant quantitative relationships between process parameters and topography parameters were established. It further appeared that replication ...

Computer Controlled Chemical Micro-Reactor
Mechtilde, Schaefer; Eduard, Stach; Adreas, Foitzik
Chemical reactions or chemical equilibria can be influenced and controlled by several parameters. The ratio of two liquid ingredients, the so-called reactants or educts, plays an important role in determining the end product and its yield. In the conventional batch mode, the reactants must be weighed and mixed accordingly. If the reaction is done in a microreactor or in several parallel working micro-reactors, units for allotting the educts in appropriate quantities are required. In this report we present a novel micro-reactor that allows constant monitoring of the chemical reaction via Raman spectroscopy. Such monitoring enables appropriate feedback on the steering parameters for the PC-controlled micro-pumps, setting the educt flow rates of both liquids to obtain optimised ratios of ingredients at an optimised total flow rate. The micro-reactors are the core pieces of the design; they are easily removable and can therefore be changed at any time to adapt to the requirements of the chemical reaction. One type of reactor consists of a stainless steel base containing small-scale milled channels covered with anodically bonded Pyrex glass. Another type of reactor has a base of anisotropically etched silicon, also covered with anodically bonded Pyrex glass. The glass window allows visual observation of the initial phase interface of the two educts in the reaction channels by optical microscopy and does not affect, in contrast to infrared spectroscopy, the Raman spectroscopic signal used for detection of the reaction kinetics. On the basis of a test reaction, we present non-invasive and spatially highly resolved in-situ reaction analysis using Raman spectroscopy measured at different locations along the reaction channel.

MYRIADE: CNES Micro-Satellite Program
Thoby, Michel
CNES is currently leading the development of a program of micro-satellites, which has now been blessed with a name in line with its ambition: MYRIADE. The intention is primarily to fulfill the needs of national scientific research in small space missions. Technology experiments as well as demonstration flights for new mission concepts shall however not be forgotten. The main objective is to make access to space much easier and affordable. The first five scientific and technological mixed ...

Encapsulated Ball Bearings for Rotary Micro Machines
We report on the first encapsulated rotary ball bearing mechanism using silicon microfabrication and stainless steel balls... The method of capturing stainless steel balls within a silicon race to support a silicon rotor both axially and radially is developed for rotary micro ... occurrence as well as the overall tribological properties of the bearing mechanism. Firstly, the number of stainless steel balls influences not only the load ... (stacks.iop.org/JMM/17/S224)

Micro-Texture Synthesis by Phase Randomization
Bruno Galerne
This contribution is concerned with texture synthesis by example, the process of generating new texture images from a given sample. The Random Phase Noise algorithm presented here synthesizes a texture from an original image by simply randomizing its Fourier phase. It is able to reproduce textures which are characterized by their Fourier modulus, namely the random phase textures (or micro-textures).
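A minimal sketch of the phase-randomization idea described in the micro-texture abstract above: keep the Fourier modulus of the sample image and replace its phase with the (antisymmetric) phase of a white-noise image, which yields a real-valued synthetic micro-texture. This illustrates the principle only, not the authors' reference implementation:

# Micro-texture synthesis by randomizing the Fourier phase of a sample image.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.random((128, 128))            # stand-in for a grayscale texture sample

spectrum = np.fft.fft2(sample)
noise_phase = np.angle(np.fft.fft2(rng.standard_normal(sample.shape)))

randomized = np.abs(spectrum) * np.exp(1j * noise_phase)
randomized[0, 0] = spectrum[0, 0]          # preserve the mean (DC coefficient)

texture = np.real(np.fft.ifft2(randomized))
print(texture.shape, float(texture.mean()))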
Geometrical characterization of micro end milling tools
Borsetto, Francesca; Bariani, Paolo; Bissacco, Giuliano
The performance of the milling process is directly affected by the accuracy of the tool geometry. The development of methods suitable for dimensional characterization of such tools, with low measurement uncertainties, is therefore of relevance. The present article focuses on the geometrical characterization ... of a flat micro end milling tool with a nominal mill diameter of 200 microns. An experimental investigation was carried out involving two different non-contact systems ...

Micro and smart devices and systems
Ananthasuresh, G; Pratap, Rudra; Krupanidhi, S
The book presents cutting-edge research in the emerging fields of micro, nano, and smart devices and systems from experts who have worked in these fields over the last decade. Most of the contributors have built devices or systems or developed processes or algorithms in these areas. The book is a unique collection of chapters from different areas with a common theme and is immensely useful to academic researchers and practitioners in the industry who work in this field.

Notes on Matrix and Micro Strings
Dijkgraaf, Robbert; Verlinde, Herman L.
We review some recent developments in the study of M-theory compactifications via Matrix theory. In particular we highlight the appearance of IIA strings and their interactions, and explain the unifying role of the M-theory five-brane for describing the spectrum of the T^5 compactification and its duality symmetries. The 5+1-dimensional micro-string theory that lives on the fivebrane world-volume takes a central place in this presentation.

Mechanism of laser micro-adjustment
Shen Hong
Miniaturization is a requirement in engineering to produce competitive products in the optical and electronic industries. Laser micro-adjustment is a new and promising technology for sheet metal actuator systems. Efforts have been made to understand the mechanisms of metal plate forming using a laser heating source. Three mechanisms have been proposed for describing laser forming processes in different scenarios, namely the temperature gradient mechanism (TGM), the buckling mechanism and the upsetting mechanism (UM). However, none of these mechanisms can fully describe the deformation mechanisms involved in laser micro-adjustment. Based on thermal and elastoplastic analyses, a coupled TGM and UM is presented in this paper to illustrate the thermal mechanical behaviours of two-bridge actuators when applying a laser forming process. To validate the proposed coupling mechanism, numerical simulations are carried out and the corresponding results demonstrate the proposed mechanism. The mechanism of laser micro-adjustment can be taken as a supplement to the laser forming process.

Micro Wire-Drawing: Experiments and Modelling
Berti, G. A.; Monti, M.; Bietresato, M.; D'Angelo, L.
In the paper, the authors propose to adopt micro wire-drawing as a key process for investigating models of micro forming processes. The reason for this choice lies in the fact that this process can be considered a quasi-stationary process in which tribological conditions at the interface between the material and the die can be assumed to be constant during the whole deformation. Two different materials have been investigated: i) a low-carbon steel and ii) a nonferrous metal (copper). The micro hardness and tensile tests performed on each drawn wire show a thin hardened layer (more evident than in macro wires) on the external surface of the wire, and hardening decreases rapidly from the surface layer to the center. For the copper wire this effect is reduced, and a traditional material constitutive model seems to be adequate to predict the experiments. For the low-carbon steel a modified constitutive material model has been proposed and implemented in an FE code, giving better agreement with the experiments.

Advances on micro-RWELL gaseous detector
Morello, Gianfranco; Benussi, L; De Simone, P; Felici, G; Gatta, M; Poli Lener, M; De Oliveira, R; Ochi, A; Borgonovi, L; Giacomelli, P; Ranieri, A; Valentino, V; Ressegotti, M; Vai, I
The R&D on the micro-Resistive-WELL (μ-RWELL) detector technology aims at developing a new scalable, compact, spark-protected, single amplification stage Micro-Pattern Gas Detector (MPGD) for large area HEP applications as a tracking and calorimetry device, as well as for industrial and medical applications as an X-ray and neutron imaging gas pixel detector. The novel micro-structure, exploiting several solutions and improvements achieved in recent years for MPGDs, in particular for GEMs and Micromegas, is an extremely simple detector allowing easy engineering with a consequent technological transfer toward the photolithography industry. Large area detectors (up to 1 x 2 m2) can be realized by splicing μ-RWELL PCB tiles of smaller size (about 0.5 x 1 m2, the typical PCB industrial size). The detector, composed of a few basic elements such as the readout PCB embedded with the amplification stage (through the resistive layer) and the cathode defining the gas drift-conversion gap, has been largel...

Solvent-assisted polymer micro-molding
HAN LuLu; ZHOU Jing; GONG Xiao; GAO ChangYou
The micro-molding technology has played an important role in the fabrication of polymer micro-patterns and the development of functional devices. In such a process, a suitable solvent can swell or dissolve the polymer films to decrease their glass transition temperature (Tg) and viscosity and thereby improve their flowing ability. Consequently, it is easy to obtain 2D and 3D patterns with high fidelity by solvent-assisted micro-molding. Compared with high-temperature molding, this technology overcomes some shortcomings such as shrinking after cooling, degradation at high temperature, and difficulty in processing some functional materials having a high Tg. It can be applied to making patterns not only on polymer monolayers but also on polyelectrolyte multilayers. Moreover, the compression-induced patterns on the multilayers are chemically homogeneous but physically heterogeneous. In this review, the factors controlling pattern quality are also discussed, including the mold material, solvent, pressure, temperature and pattern density.

Gendering dynamic capabilities in micro firms
Yevgen Bogodistov
Gender issues are well researched in the general management literature, particularly in studies on new ventures. Unfortunately, gender issues have been largely ignored in the dynamic capabilities literature. We address this gap by analyzing the effects of gender diversity on dynamic capabilities among micro firms. We consider the gender of managers and personnel in 124 Ukrainian tourism micro firms. We examine how a manager's gender affects the firm's sensing capacities and investigate how it moderates the impact of team gender diversity on sensing capacities. We also investigate how personnel composition impacts seizing and reconfiguration capacities. We find that female managers have several shortcomings concerning a firm's sensing capacity but that personnel gender diversity increases this capacity. Team gender diversity has positive effects on a firm's seizing and reconfiguration abilities. Our study advances research on gender diversity and its impact on firm capabilities and illustrates its relevance for staffing practices in micro firms.

Micro-optical instrumentation for process spectroscopy
Crocombe, Richard A.; Flanders, Dale C.; Atia, Walid
Traditional laboratory ultraviolet/visible/near-infrared spectroscopy instruments are tabletop-sized pieces of equipment that exhibit very high performance, but are generally too large and costly to be widely distributed for process control applications or used as spectroscopic sensors. Utilizing a unique, and proven, micro-optical technology platform originally developed, qualified and deployed in the telecommunications industry, we have developed a new class of spectroscopic micro-instrumentation that has laboratory-quality resolution and spectral range, with superior speed and robustness. The fundamentally lower cost and small form factor of the technology will enable widespread use in process monitoring and control. This disruption in the ground rules of spectroscopic analysis in these processes is enabled by the replacement of large optics and detector arrays with a high-finesse, high-speed micro electro mechanical system (MEMS) tunable filter and a single detector, which enables the manufacture of a high-performance and extremely rugged spectrometer in the footprint of a credit card. Specific process monitoring and control applications discussed in the paper include pharmaceutical, gas sensing and chemical processing applications.

Solidification at the micro-scale
Howe, A.
The experimental determination and computer simulation of the micro-segregation accompanying the solidification of alloys continue to be subjects of much academic and industrial interest. Both are subject to progressively more sophisticated analyses, and a discussion is offered regarding the development and practical use of such studies. Simple steels are particularly difficult targets for such work: solidification does not end conveniently in a eutectic, the rapid diffusion particularly in the delta-ferrite phase obscures most evidence of what had occurred at the micro-scale during solidification, and one or more subsequent solid-state phase transformations further obscure such details. Also, solidification at the micro-scale is inherently variable: the usual dendrite morphologies encountered are, after all, instabilities in growth behaviour, and therefore such variability should be expected. For questions such as the relative susceptibility of different grades to particular problems, it is the average, typical behaviour that is of interest, whereas for other questions, such as the onset of macro-segregation, the local variability is paramount. Depending on the question being asked, and indeed the accuracy with which validatory data are available, simple pseudo-analytical equations employing various limiting assumptions, or sophisticated models which remove the need for most such limitations, could be appropriate. This paper highlights the contribution to such studies of various collaborative research forums within the European Union with which the author is involved. (orig.)

A micro-hydrology computation ordering algorithm
Croley, T.E. II
Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for the analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in the resolution of the watershed. Once these are determined, the watershed is broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error-prone, sometimes storage-intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented node definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies. (orig.)
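The ordering problem described in the micro-hydrology abstract above amounts to running each sub-area's lumped model only after all of its upstream sub-areas, which is a topological sort of the drainage network. A minimal sketch of that idea on a hypothetical network; the original node numbering and coding scheme is not reproduced here:

# Kahn's algorithm: order sub-area computations so upstream areas run first.
from collections import deque

# upstream -> downstream links between sub-areas (hypothetical network)
downstream = {"A": "C", "B": "C", "C": "E", "D": "E", "E": None}

n_upstream = {node: 0 for node in downstream}
for node, dst in downstream.items():
    if dst is not None:
        n_upstream[dst] += 1

ready = deque(n for n, c in n_upstream.items() if c == 0)
order = []
while ready:
    node = ready.popleft()
    order.append(node)                    # run this sub-area's micro-hydrology model
    dst = downstream[node]
    if dst is not None:
        n_upstream[dst] -= 1
        if n_upstream[dst] == 0:
            ready.append(dst)

print("computation order:", order)        # e.g. ['A', 'B', 'D', 'C', 'E']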
Investigation of bluff-body micro-flameless combustion
Hosseini, Seyed Ehsan; Wahid, Mazlan Abdul
Highlights:
• The temperature uniformity of micro-flameless combustion increases when a triangular bluff-body is applied.
• The velocity and temperature of exhaust gases are higher in micro-flameless combustion compared to the conventional mode.
• The rate of fuel-oxidizer consumption in micro-flameless mode is lower than in conventional micro-combustion.
Abstract: Characteristics of lean premixed conventional micro-combustion and the lean non-premixed flameless regime of methane/air are investigated in this paper by solving the three-dimensional governing equations. At a moderate equivalence ratio (φ = 0.5), the standard k-ε model and the eddy-dissipation concept are employed to simulate the temperature distribution and combustion stability of these modes. The effect of a bluff-body on the temperature distribution of both the conventional and the flameless mode is developed. The results show that in premixed conventional micro-combustion the stability of the flame is increased when a triangular bluff-body is applied. Moreover, micro-flameless combustion is more stable when a bluff-body is used. The micro-flameless mode with a bluff-body and 7% O2 concentration (when N2 is used as diluent) showed better performance than the other cases. The maximum temperature in premixed conventional micro-combustion and micro-flameless combustion was recorded as 2200 K and 1520 K respectively, while the flue gas temperature of the conventional mode and flameless combustion was 1300 K and 1500 K respectively. The fluctuation of temperature in the conventional micro-combustor wall has negative effects on the combustor and reduces the lifetime of the micro-combustor. However, in the micro-flameless mode, the wall temperature is moderate and uniform. The consumption of fuel and oxidizer in micro-flameless mode takes a longer time and the period of cylinder recharging is prolonged.

MicroRNA Expression in Laser Micro-dissected Breast Cancer Tissue Samples - a Pilot Study
Seclaman, Edward; Narita, Diana; Anghel, Andrei; Cireap, Natalia; Ilina, Razvan; Sirbu, Ioan Ovidiu; Marian, Catalin
Breast cancer continues to represent a significant public health burden despite outstanding research advances regarding the molecular mechanisms of cancer biology, biomarkers for diagnostics, and the prognostic and therapeutic management of this disease. Studies of microRNAs in breast cancer have underlined their potential as biomarkers and therapeutic targets; however, most of these studies are still done on largely heterogeneous whole breast tissue samples. In this pilot study we investigated the expression of four microRNAs (miR-21, 145, 155, 92) known to be involved in breast cancer, in homogeneous cell populations collected by laser capture microdissection from breast tissue section slides. MicroRNA expression was assessed by real-time PCR, and associations with clinical and pathological characteristics were also explored. Our results confirmed previous associations of miR-21 expression with poor prognosis characteristics of breast cancers such as high stage and large, highly proliferative tumors. No statistically significant associations were found with the other microRNAs investigated, possibly due to the small sample size of our study. Our results also suggest that miR-484 could be a suitable endogenous control for data normalization in breast tissues, a finding needing further confirmation by future studies. In summary, our pilot study showed the feasibility of detecting microRNA expression in homogeneous laser-captured microdissected invasive breast cancer samples, and confirmed some of the previously reported associations with poor prognostic characteristics of breast tumors.
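A minimal sketch of relative quantification for the real-time PCR readout described in the breast-cancer microRNA abstract above, using the common 2^-ddCt approach with miR-484 assumed as the endogenous control. The abstract does not state which normalization method was used, and the Ct values below are invented:

# Relative microRNA expression (tumor vs normal) by the 2^-ddCt method.
tumor_ct =  {"miR-21": 24.1, "miR-484": 27.9}
normal_ct = {"miR-21": 27.3, "miR-484": 28.0}

def relative_expression(target: str, control: str = "miR-484") -> float:
    d_ct_tumor = tumor_ct[target] - tumor_ct[control]
    d_ct_normal = normal_ct[target] - normal_ct[control]
    dd_ct = d_ct_tumor - d_ct_normal
    return 2.0 ** (-dd_ct)

print(f"miR-21 fold change (tumor vs normal): {relative_expression('miR-21'):.2f}")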
Elemental maps of different structures obtained with the micro-NRA technique using the 1.734 MeV resonance were compared with those obtained with micro-PIXE employing a SDD detector equipped with an ultra-thin window. The results show that the use of micro-NRA for carbon at 1.734 MeV resonance provides good results in some cases at the expense of longer beam times. On the other hand, micro-PIXE provides enhanced yields but is limited to surface analysis since soft X-rays are greatly attenuated by matter Micro Climate Simulation in new Town 'Hashtgerd' Sodoudi, S.; Langer, I.; Cubasch, U. One of the objectives of climatological part of project Young Cities 'Developing Energy-Efficient Urban Fabric in the Tehran-Karaj Region' is to simulate the micro climate (with 1m resolution) in 35ha of new town Hashtgerd, which is located 65 km far from mega city Tehran. The Project aims are developing, implementing and evaluating building and planning schemes and technologies which allow to plan and build sustainable, energy-efficient and climate sensible form mass housing settlements in arid and semi-arid regions ("energy-efficient fabric"). Climate sensitive form also means designing and planning for climate change and its related effects for Hashtgerd New Town. By configuration of buildings and open spaces according to solar radiation, wind and vegetation, climate sensitive urban form can create outdoor thermal comfort. To simulate the climate on small spatial scales, the micro climate model Envi-met has been used to simulate the micro climate in 35 ha. The Eulerian model ENVI-met is a micro-scale climate model which gives information about the influence of architecture and buildings as well as vegetation and green area on the micro climate up to 1 m resolution. Envi-met has been run with information from topography, downscaled climate data with neuro-fuzzy method, meteorological measurements, building height and different vegetation variants (low and high number of trees) Through the optimal Urban Design and Planning for the 35ha area the microclimate results shows, that with vegetation the microclimate in streets will be change: • 2 m temperature is decreased by about 2 K • relative humidity increase by about 10 % • soil temperature is decreased by about 3 K • wind speed is decreased by about 60% The style of buildings allows free movement of air, which is of high importance for fresh air supply. The increase of inbuilt areas in 35 ha reduces the heat island effect through cooling caused by vegetation and increase of air humidity which caused by Does multifocal papillary micro-carcinoma require radioiodine ablation? Punda, A.; Markovic, V.; Eterovic, D. Full text of publication follows. Background: the thyroid carcinomas smaller than 1 cm (micro-carcinomas) comprise a significant fraction of papillary carcinomas. Excluding clinical micro-carcinomas, which present as metastatic disease, the micro-carcinomas diagnosed by ultrasound/FNAC or incidentally have very good prognosis. However, whether or not these papillary micro-carcinomas require post-surgical radioiodine ablation remains a matter of debate. Hypothesis: multi-focality is present in majority of clinical papillary micro-carcinomas and this characteristic can be used to identify the subset of non-clinical micro-carcinomas with greater malignant potential. Methods: the data on types of differentiated thyroid carcinomas diagnosed in the period 2008-2011 in the University Hospital Split were collected. 
Results: there were 359 patients with thyroid carcinoma, 329 (92%) of whom had papillary carcinoma. About 61% (202/329) of papillary carcinomas were micro-carcinomas; most of them were diagnosed by ultrasound/FNAC (134/202 = 66%), while the rest were incidentalomas (48/202 = 24%) and clinical micro-carcinomas (20/202 = 10%). Sixty percent (12/20) of patients with clinical micro-carcinoma and 23 patients with non-clinical micro-carcinoma (23/182 = 13%) had multifocal disease. Conclusion: multifocal disease is a frequent characteristic of clinical papillary thyroid micro-carcinomas, suggesting that multi-focality represents an early stage of non-clinical micro-carcinomas with more aggressive behaviour. Thus multifocal, but not unifocal, papillary micro-carcinomas may require radioiodine ablation. (authors)
Process Condition Monitoring of Micro Moulding Using a Two-plunger Micro Injection Moulding Machine Tosello, Guido; Hansen, Hans Nørgaard; Guerrier, Patrick The influence of micro injection moulding (µIM) process parameters (melt and mould temperature, piston injection speed and stroke length) on the injection pressure was investigated using Design of Experiments. Direct piston injection pressure measurements were performed and data collected using...... a micro injection moulding machine equipped with a two-plunger injection unit. Miniaturized dog-bone-shaped specimens of polyoxymethylene (POM) were moulded over a wide range of processing conditions in order to characterize the process and assess its capability. Experimental results obtained under...
MicroRNA signature of the human developing pancreas Correa-Medina Mayrin Background MicroRNAs are non-coding RNAs that regulate gene expression, including differentiation and development, by either inhibiting translation or inducing target degradation. The aim of this study is to determine the microRNA expression signature during human pancreatic development and to identify potential microRNA gene targets by calculating correlations between the signature microRNAs and their corresponding mRNA targets, predicted by bioinformatics, in a genome-wide RNA microarray study. Results The microRNA signature of human fetal pancreatic samples of 10-22 weeks of gestational age (wga) was obtained by PCR-based high-throughput screening with TaqMan Low Density Arrays. This method led to the identification of 212 microRNAs. The microRNAs were classified into 3 groups: group I contains 4 microRNAs with an increasing profile; group II, 35 microRNAs with a decreasing profile; and group III, 173 microRNAs which remain unchanged. We calculated Pearson correlations between the expression profile of microRNAs and target mRNAs, predicted by TargetScan 5.1 and miRBase algorithms, using genome-wide mRNA expression data. Group I correlated with the decreasing expression of 142 target mRNAs and Group II with the increasing expression of 876 target mRNAs. Most microRNAs correlate with multiple targets, just as mRNAs are targeted by multiple microRNAs. Among the identified targets are the genes and transcription factors known to play an essential role in pancreatic development. Conclusions We have determined specific groups of microRNAs in human fetal pancreas that change the degree of their expression throughout development. A negative correlative analysis suggests an intertwined network of microRNAs and mRNAs collaborating with each other.
This study provides information pointing to a potential two-way level of combinatorial control regulating gene expression, through microRNAs targeting multiple mRNAs and, conversely, target mRNAs regulated in
Servo scanning 3D micro EDM for array micro cavities using on-machine fabricated tool electrodes Tong, Hao; Li, Yong; Zhang, Long Array micro cavities are useful in many fields, including micro molds, optical devices, biochips and so on. Array servo scanning micro electro discharge machining (EDM), using array micro electrodes with a simple cross-sectional shape, has the advantage of machining complex 3D micro cavities in batches. In this paper, the machining errors caused by offline-fabricated array micro electrodes are analyzed in particular, and then a machining process of array servo scanning micro EDM is proposed using on-machine fabricated array micro electrodes. The array micro electrodes are fabricated on-machine by combined procedures including wire electro discharge grinding, array reverse copying and electrode end trimming. Nine-array tool electrodes with Φ80 µm diameter and 600 µm length are obtained. Furthermore, the proposed process is verified by several machining experiments achieving nine-array hexagonal micro cavities with a top side length of 300 µm, a bottom side length of 150 µm, and a depth of 112 µm or 120 µm. In the experiments, a chip hump accumulates on the electrode tips, like the built-up edge in mechanical machining, under the conditions of brass workpieces, copper electrodes and a deionized-water dielectric. The accumulated hump can be avoided by replacing the water dielectric with an oil dielectric.
An Overview of Power Topologies for Micro-hydro Turbines Nababan, Sabar; Muljadi, E.; Blaabjerg, Frede This paper is an overview of different power topologies of micro-hydro turbines. The size of a micro-hydro turbine is typically under 100 kW. Conventional micro-hydro power topologies are stand-alone installations used in rural electrical networks in developing countries. Recently, many micro-hydro...... power generation systems are connected to the distribution network through power electronics (PE). These turbines are operated at variable frequency to improve the efficiency of micro-hydro power generation, improve the power quality, and provide ride-through capability. In this paper our...... discussion is limited to distributed generation. Like many other renewable energy sources, the objectives of micro-hydro power generation are to reduce the use of fossil fuel, to improve the reliability of the distribution system (grid), and to reduce the transmission losses. The overview described...
Micro/nano-fabrication technologies for cell biology. Qian, Tongcheng; Wang, Yingxiao Micro/nano-fabrication techniques, such as soft lithography and electrospinning, have been well developed and widely applied in many research fields in the past decade. Due to the low costs and simple procedures, these techniques have become important and popular for biological studies. In this review, we focus on studies integrating micro/nano-fabrication work to elucidate the molecular mechanisms of signal transduction in cell biology. We first describe different micro/nano-fabrication technologies, including techniques generating three-dimensional scaffolds for tissue engineering.
We then introduce the application of these technologies in manipulating the physical or chemical micro/nano-environment to regulate cellular behavior and responses, such as cell life and death, differentiation, proliferation, and cell migration. Recent advancements in integrating micro/nano-technologies and live cell imaging are also discussed. Finally, potential schemes in cell biology involving micro/nano-fabrication technologies are proposed to provide perspectives on future research activities.
Micro-fabricated all optical pressure sensors Havreland, Andreas Spandet; Petersen, Søren Dahl; Østergaard, Christian Optical pressure sensors can operate in certain harsh application areas where electrical pressure sensors cannot. However, the sensitivity is often not as good for the optical sensors. This work presents an all optical pressure sensor, which is fabricated by micro fabrication techniques, where...... the sensitivity can be tuned in the fabrication process. The developed sensor design simplifies the fabrication process, leading to a lower fabrication cost, which can make all optical pressure sensors more competitive with their electrical counterparts. The sensor has shown promising results and a linear...... pressure response has been measured with a sensitivity of 0.6 nm/bar....
Measuring the rebound effect with micro data de Borger, Bruno; Mulalic, Ismir; Rouwendal, Jan by conventional theory) but also estimate versions in which the coefficients for fuel price and fuel efficiency can differ. We use administrative register micro data over the period 2004-2010 to estimate the demand equation. We apply fixed-effect panel-data (first-diff.) techniques. The focus is on car users...... that switch cars during the period of observation. Endogeneity of the fuel efficiency of the new car is an important concern. We deal with endogeneity of car characteristics, following Berry, Levinsohn and Pakes (1995), by instrumenting them using the characteristics of the old car relative to the average...
Business intelligence with MicroStrategy cookbook Moraschi, Davide Written in a cookbook style, this book will teach you through the use of recipes with examples and illustrations. Each recipe contains step-by-step instructions about everything necessary to execute a particular task. This book is intended for both BI and database developers who want to expand their knowledge of MicroStrategy. It is also useful for advanced data analysts who are evaluating different technologies. You do not need to be an SQL master to read this book, yet knowledge of some concepts like foreign keys and many-to-many relationships is assumed. Some knowledge of basic concepts such
Building micro and nanosystems with electrochemical discharges Wuethrich, Rolf [Department of Mechanical and Industrial Engineering, Concordia University, 1455 de Maisonneuve Blvd. West, Montreal, QC (Canada)]; Allagui, Anis [Department of Mechanical and Industrial Engineering, Concordia University, 1455 de Maisonneuve Blvd. West, Montreal, QC (Canada)] Since the discovery of the electrochemical discharge phenomenon by Fizeau and Foucault, several contributions have expanded the wide range of applications associated with this high-current-density electrochemical process. The complexity of the phenomenon, from the macroscopic to the microscopic scales, has since led to experimental and theoretical studies in different research fields.
This contribution reviews the chemical and electrochemical perspectives, and a mechanistic model based on results from the radiation chemistry of aqueous solutions is proposed. In addition, applications to micro-machining and the fabrication of nanoparticles are discussed.
MicroRNAs in mantle cell lymphoma Husby, Simon; Geisler, Christian; Grønbæk, Kirsten Mantle cell lymphoma (MCL) is a rare and aggressive subtype of non-Hodgkin lymphoma. New treatment modalities, including intensive induction regimens with immunochemotherapy and autologous stem cell transplant, have improved survival. However, many patients still relapse, and there is a need...... for novel therapeutic strategies. Recent progress has been made in the understanding of the role of microRNAs (miRNAs) in MCL. Comparisons of tumor samples from patients with MCL with their normal counterparts (naive B-cells) have identified differentially expressed miRNAs with roles in cellular growth...
Micro dosimetry model. An extended version Vroegindewey, C. In an earlier study a relatively simple mathematical model was constructed to simulate energy transfer on a cellular scale and thus gain insight into the fundamental processes of BNCT. Based on this work, a more realistic micro dosimetry model is developed. The new facets of the model are: the treatment of proton recoil, the calculation of the distribution of energy depositions, and the determination of the number of particles crossing the target nucleus, subdivided by place of origin. Besides these extensions, new stopping power tables for the emitted particles are generated and biased Monte Carlo techniques are used to reduce computer time. (orig.)
Micro and Nanotechnologies Enhanced Biomolecular Sensing Tza-Huei Wang This editorial summarizes some of the recent advances in micro- and nanotechnology-based tools and devices for biomolecular detection. These include the incorporation of nanomaterials into a sensor surface or directly interfacing with molecular probes to enhance target detection via more rapid and sensitive responses, and the use of self-assembled organic/inorganic nanocomposites that exhibit exceptional spectroscopic properties to enable facile homogeneous assays with efficient binding kinetics. Discussions also include some insight into microfluidic principles behind the development of an integrated sample preparation and biosensor platform toward a miniaturized and fully functional system for point-of-care applications.
Micro optical sensor systems for sunsensing applications Leijtens, Johan; de Boom, Kees Optimum application of micro system technologies allows building small sensor systems that will alter procurement strategies for spacecraft manufacturers. One example is the decreased size and cost of state-of-the-art sunsensors. Integrated sensor systems are being designed which, through use of microsystem technology, are an order of magnitude smaller than most current sunsensors and which, owing to the high reproducibility of batch manufacturing, hold the promise of drastic price reductions. If the Commercial Off The Shelf (COTS) approach is adopted by satellite manufacturers, this will drastically decrease mass and cost budgets associated with sunsensing applications.
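As a purely illustrative aside to the micro dosimetry entry above, the sketch below shows what a biased (importance-sampled) Monte Carlo tally looks like in its simplest form. It is a generic textbook construction, not the model described by Vroegindewey; every parameter value (mean free path, layer depth, energy per interaction) is a made-up assumption for the demonstration.

```python
# Toy illustration of a biased (importance-sampled) Monte Carlo tally:
# estimate the mean energy deposited in a thin, deep-lying layer by
# particles whose interaction depth is exponentially distributed.
# Sampling the depth from a "stretched" exponential and re-weighting
# each history keeps the estimate unbiased while spending far fewer
# histories on particles that never reach the layer.
import math
import random

MEAN_FREE_PATH = 1.0    # lambda of the true (analog) depth distribution (arbitrary units)
LAYER_START = 6.0       # depth at which the scoring layer begins
LAYER_THICKNESS = 0.2   # thickness of the scoring layer
ENERGY_PER_HIT = 1.0    # energy deposited when an interaction occurs inside the layer

def biased_tally(n_histories: int, biased_mean: float, seed: int = 0) -> float:
    """Importance-sampled estimate of mean energy deposited per source particle."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_histories):
        depth = rng.expovariate(1.0 / biased_mean)  # sample from the biased pdf
        # weight = analog pdf / biased pdf, evaluated at the sampled depth
        weight = (biased_mean / MEAN_FREE_PATH) * math.exp(
            depth / biased_mean - depth / MEAN_FREE_PATH
        )
        if LAYER_START <= depth <= LAYER_START + LAYER_THICKNESS:
            total += weight * ENERGY_PER_HIT
    return total / n_histories

if __name__ == "__main__":
    analytic = ENERGY_PER_HIT * (
        math.exp(-LAYER_START / MEAN_FREE_PATH)
        - math.exp(-(LAYER_START + LAYER_THICKNESS) / MEAN_FREE_PATH)
    )
    print("analytic :", analytic)
    print("biased MC:", biased_tally(200_000, biased_mean=6.0))
```

The re-weighting factor p(x)/q(x) is what keeps the biased estimate unbiased; choosing the biased mean close to the layer depth concentrates histories where they actually score.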
Micro-Cavity Fluidic Dye Laser Helbo, Bjarne; Kristensen, Anders; Menon, Aric Kumaran We have successfully designed, fabricated and characterized a micro-cavity fluidic dye laser with metallic mirrors, which can be integrated with polymer-based lab-on-a-chip microsystems without further processing steps. A simple rate-equation model is used to predict the average pumping power...... threshold for lasing as a function of cavity-mirror reflectance, laser dye concentration and cavity length. The laser device is characterized using the laser dye Rhodamine 6G dissolved in ethanol. Lasing is observed, and the influence of dye concentration is investigated....
Micro string resonators as temperature sensors Larsen, T.; Schmid, S.; Boisen, A. The resonance frequency of strings is highly sensitive to temperature. In this work we have investigated the applicability of micro string resonators as temperature sensors. The resonance frequency of strings is a function of the tensile stress which is coupled to temperature by the thermal...... to the low thermal mass of the strings. A temperature resolution of 2.5×10⁻⁴ °C has been achieved with silicon nitride strings. The theoretical limit for the temperature resolution of 8×10⁻⁸ °C has not been reached yet and requires further improvement of the sensor....
Pulmonary arterio-venous micro fistulae - Diagnostic Ebram, J.C. Four patients with pulmonary arterio-venous micro-fistulae - two of them male (50%) - with ages varying from 10 to 43 (mean = 22.7), were studied at the Cardiology Centre of the 6th Ward of Santa Casa da Misericordia Hospital in Rio de Janeiro. They were all basically suffering from Manson's schistosomiasis, the hepato-splenic form in 3 cases (75%), and Rendu-Osler-Weber disease with juvenile cirrhosis in 1 case (25%). All four of them had portal hypertension. The individual cases were clinically evaluated with X-ray, scintillographic and hemodynamic tests. (author)
MicroRNA regulation of Autophagy Frankel, Lisa B; Lund, Anders H recently contributed to our understanding of the molecular mechanisms of the autophagy machinery, yet several gaps remain in our knowledge of this process. The discovery of microRNAs (miRNAs) established a new paradigm of post-transcriptional gene regulation and during the past decade these small non......RNAs to regulation of the autophagy pathway. This regulation occurs both through specific core pathway components as well as through less well-defined mechanisms. Although this field is still in its infancy, we are beginning to understand the potential implications of these initial findings, both from a pathological...
Sonochemically born proteinaceous micro- and nanocapsules. Vassileva, Elena D; Koseva, Neli S The use of proteins as a substrate in the fabrication of micro- and nanoparticulate systems has attracted the interest of scientists, manufacturers, and consumers. Albumin-derived particles were commercialized as contrast agents or anticancer therapeutics. Food proteins are widely used in formulated dietary products. The potential benefits of proteinaceous micro- and nanoparticles in a wide range of biomedical applications are indisputable. Protein-based particles are highly biocompatible and biodegradable structures that can impart bioadhesive properties or mediate particle uptake by specific interactions with the target cells.
Currently, protein microparticles are engineered as vehicles for covalent attachment and/or encapsulation of bioactive compounds, contrast agents for magnetic resonance imaging, thermometric and oximetric imaging, sonography and optical coherence tomography, etc. Ultrasound irradiation is a versatile technique which is widely used in many different fields such as biology, biochemistry, dentistry, geography, geology, medicine, etc. It is generally recognized as an environmentally friendly, cost-effective method which is easy to scale up. Currently, it is mainly applied for homogenization, drilling, cleaning, etc. in industry, as well as for noninvasive scanning of the human body, treatment of muscle strains, dissolution of blood clots, and cancer therapy. Proteinaceous micro- and nanocapsules can be easily produced in a one-step process by applying ultrasound to an aqueous protein solution. The origin of this process lies in the chemical changes, for example oxidation of sulfhydryl groups, that take place as a result of acoustically generated cavitation. Partial denaturation of the protein most probably occurs, which makes the hydrophobic interactions dominant and responsible for the formation of stable capsules. This chapter aims to present the current state of the art in the field of sonochemically produced protein micro- and nanocapsules.
Circulating microRNAs in breast cancer Hamam, Rimi; Hamam, Dana; Alsaleh, Khalid A. Effective management of breast cancer depends on early diagnosis and proper monitoring of patients' response to therapy. However, these goals are difficult to achieve because of the lack of sensitive and specific biomarkers for early detection and for disease monitoring. Accumulating evidence...... in the past several years has highlighted the potential use of peripheral blood circulating nucleic acids such as DNA, mRNA and micro (mi)RNA in breast cancer diagnosis, prognosis and for monitoring response to anticancer therapy. Among these, circulating miRNA is increasingly recognized as a promising...... circulating miRNAs as diagnostic, prognostic or predictive biomarkers in breast cancer management....
PLA micro- and nano-particles. Lee, Byung Kook; Yun, Yeonhee; Park, Kinam Poly(d,l-lactic acid) (PLA) has been widely used for various biomedical applications for its biodegradable, biocompatible, and nontoxic properties. Various methods, such as emulsion, salting out, and precipitation, have been used to make better PLA micro- and nano-particle formulations. They are widely used as controlled drug delivery systems for therapeutic molecules, including proteins, genes, vaccines, and anticancer drugs. Even though PLA-based particles have challenges to overcome, such as low drug loading capacity, low encapsulation efficiency, and terminal sterilization, continuous innovations in particulate formulations will lead to the development of clinically useful formulations.
MicroRNA Delivery for Regenerative Medicine Peng, Bo; Chen, Yongming; Leong, Kam W. MicroRNA (miRNA) directs post-transcriptional regulation of a network of genes by targeting mRNA. Although relatively recent in development, many miRNAs direct differentiation of various stem cells including induced pluripotent stem cells (iPSCs), a major player in regenerative medicine. An effective and safe delivery of miRNA holds the key to translating miRNA technologies.
Both viral and nonviral delivery systems have seen success in miRNA delivery, and each approach possesses advantages an...
Micro-enterprises as exporters in northern sparsely populated areas Jokela, H. (Harri); Niinikoski, E.-R. (Eija-Riitta); Muhos, M. (Matti) The majority of the total value of exports comes from small, medium-sized, and large companies, for which reason they tend to be the principal target group in public-support actions related to exports. However, micro-sized enterprises are the numerically dominant group in every economy. During recent years, micro-enterprises' barriers to exporting have been lowered by global digitalization. As a result, micro-enterprises' share of total exports has increased rapidly in many countr...
Validation of three-dimensional micro injection molding simulation accuracy Tosello, Guido; Costa, F.S.; Hansen, Hans Nørgaard length, injection pressure profile, molding mass and flow pattern. The importance of calibrated micro molding process monitoring for an accurate implementation strategy of the simulation and its validation has been demonstrated. In fact, inconsistencies and uncertainties in the experimental data must...... be minimized to avoid introducing uncertainties in the simulation calculations. Simulations of bulky sub-100-milligram micro molded parts have been validated and a methodology for accurate micro molding simulations was established....
3D sensors and micro-fabricated detector systems Da Vià, Cinzia Micro-systems based on Micro Electro Mechanical Systems (MEMS) technology have been used in miniaturized low-power and low-mass smart structures in medicine, biology and space applications. Recently, similar features have found their way into high energy physics with applications in vertex detectors for high-luminosity LHC upgrades, with 3D sensors, 3D integration and efficient power management using silicon micro-channel cooling. This paper reports on the state of this development.
Novel methods of ozone generation by micro-plasma concept Fateev, A.; Chiper, A.; Chen, W.; Stamate, E. The project objective was to study the possibilities for new and cheaper methods of generating ozone by means of different types of micro-plasma generators: DBD (Dielectric Barrier Discharge), MHCD (Micro-Hollow Cathode Discharge) and CPED (Capillary Plasma Electrode Discharge). This project supplements another current project where plasma-based DeNOx is being studied and optimised. The results show potential for reducing ozone generation costs by means of micro-plasmas, but further development is needed. (ln)
Ultrasonic Cleaning of Nuclear Steam Generator by Micro Bubble Jeong, Woo Tae [Korea Hydro and Nuclear Power Co., Daejeon (Korea, Republic of)]; Kim, Sang Tae; Yoon, Sang Jung [Sae-An Engineering Co., Seoul (Korea, Republic of)] In this paper, we present ultrasonic cleaning technology for a nuclear steam generator using micro bubbles. We could extend the boundary of ultrasonic cleaning by using micro bubbles in water. The measured ultrasonic energy increased about 5 times after the generation of micro bubbles in water.
Furthermore, the ultrasound energy was measured to be strong enough to create cavitation even though the ultrasound sensor was about 2 meters away from the ultrasonic transducer.
High Throughput Micro-Well Generation of Hepatocyte Micro-Aggregates for Tissue Engineering Gevaert, Elien; Dollé, Laurent; Billiet, Thomas; Dubruel, Peter; van Grunsven, Leo; van Apeldoorn, Aart A.; Cornelissen, Ria The main challenge in hepatic tissue engineering is the fast dedifferentiation of primary hepatocytes in vitro. One successful approach to maintain the hepatocyte phenotype over the longer term is the cultivation of cells as aggregates. This paper demonstrates the use of an agarose micro-well chip for the
Capillary origami of micro-machined micro-objects: Bi-layer conductive hinges Legrain, A.B.H.; Berenschot, Johan W.; Tas, Niels Roelof; Abelmann, Leon Recently, we demonstrated controllable 3D self-folding by means of capillary forces of silicon-nitride micro-objects made of rigid plates connected to each other by flexible hinges (Legrain et al., 2014). In this paper, we introduce platinum electrodes running from the substrate to the plates over
Micro-electro-mechanical systems (MEMS)-based micro-scale direct methanol fuel cell development Yao, S.-C.; Tang Xudong; Hsieh, C.-C.; Alyousef, Yousef; Vladimer, Michael; Fedder, Gary K.; Amon, Cristina H. This paper describes a high-power-density, silicon-based micro-scale direct methanol fuel cell (DMFC) under development at Carnegie Mellon. Major issues in the DMFC design include water management and energy-efficient microfluidic sub-systems. The air flow and the methanol circulation are both at a natural draft, while a passive liquid-gas separator removes CO2 from the methanol chamber. An effective approach for maximizing the DMFC energy density, pumping the excess water back to the anode, is illustrated. The proposed DMFC contains several unique features: a silicon wafer with arrays of etched holes selectively coated with a non-wetting agent for collecting water at the cathode; a silicon membrane micro pump for pumping the collected water back to the anode; and a passive liquid-gas separator for CO2 removal. All of these silicon-based components are fabricated using micro-electro-mechanical systems (MEMS)-based processes on the same silicon wafer, so that interconnections are eliminated, and integration efforts as well as post-fabrication costs are both minimized. The resulting fuel cell has an overall size of one cubic inch, produces a net output of 10 mW, and has an energy density three to five times higher than that of current lithium-ion batteries.
Rapid Generation of MicroRNA Sponges for MicroRNA Inhibition Kluiver, Joost; Gibcus, Johan H.; Hettinga, Chris; Adema, Annelies; Richter, Mareike K. S.; Halsema, Nancy; Slezak-Prochazka, Izabella; Ding, Ye; Kroesen, Bart-Jan; van den Berg, Anke MicroRNA (miRNA) sponges are transcripts with repeated miRNA antisense sequences that can sequester miRNAs from endogenous targets. MiRNA sponges are valuable tools for miRNA loss-of-function studies both in vitro and in vivo. We developed a fast and flexible method to generate miRNA sponges and
Optical micro-metrology of structured surfaces micro-machined by jet-ECM Quagliotti, Danilo; Tosello, Guido; Islam, Aminul A procedure for statistical analysis and uncertainty evaluation is presented with regard to measurements of step height and surface texture.
Measurements have been performed with a focus-variation microscope over jet electrochemical micro-machined surfaces. Traceability has been achieved using a...
MicroCT Analysis of Micro-Nano Titanium Implant Surface on the Osseointegration. Ban, Jaesam; Kang, Seongsoo; Kim, Jihyun; Lee, Kwangmin; Hyunpil, Lim; Vang, Mongsook; Yang, Hongso; Oh, Gyejeong; Kim, Hyunseung; Hwang, Gabwoon; Jung, Yongho; Lee, Kyungku; Park, Sangwon; Yunl, Kwidug This study investigated the effects of a micro-nano titanium implant surface on osseointegration. A total of 36 screw-shaped implants were used. The implant surfaces were classified into 3 groups (n = 12): a machined surface (M group), a nanosurface with nanotube formation on the machined surface (MA group), and a nano-micro surface with nanotube formation on the RBM surface (RA group). Anodic oxidation was performed at 20 V for 10 min with 1 M H3PO4 and 1.5 wt% HF solutions. The implants were installed in the humerus of 6 beagles. After 4 and 12 weeks, morphometric analysis with micro-CT (SkyScan 1172, SkyScan, Antwerpen, Belgium) was performed. The data were statistically analyzed with two-way ANOVA. Bone mineral density and bone volume increased significantly with time. The RA group showed the significantly highest bone mineral density and bone volume at 4 weeks and 12 weeks. This indicates that the nano-micro titanium implant surface produced faster and more mature osseointegration.
Nondestructive Analysis of Astromaterials by Micro-CT and Micro-XRF Analysis for PET Examination Zeigler, R. A.; Righter, K.; Allen, C. C. An integral part of any sample return mission is the initial description and classification of returned samples by the preliminary examination team (PET). The goal of the PET is to characterize and classify returned samples and make this information available to the larger research community, who then conduct more in-depth studies on the samples. The PET tries to minimize the impact their work has on the sample suite, which has in the past limited the PET work to largely visual, nonquantitative measurements (e.g., optical microscopy). More modern techniques can also be utilized by a PET to nondestructively characterize astromaterials in a much more rigorous way. Here we discuss our recent investigations into the applications of micro-CT and micro-XRF analyses with Apollo samples and ANSMET meteorites and assess the usefulness of these techniques in future PET work. Results: The application of micro computerized tomography (micro-CT) to astromaterials is not a new concept. The technique involves scanning samples with high-energy x-rays and constructing 3-dimensional images of the density of materials within the sample. The technique can routinely measure large samples (up to approx. 2700 cu cm) with a small individual voxel size (approx. 30 microns), and has the sensitivity to distinguish the major rock-forming minerals and identify clast populations within brecciated samples. We have recently run a test sample of a terrestrial breccia with a carbonate matrix and multiple igneous clast lithologies. The test results are promising and we will soon analyze an approx. 600 g piece of Apollo sample 14321 to map out the clast population within the sample. Benchtop micro x-ray fluorescence (micro-XRF) instruments can rapidly scan large areas (approx. 100 sq cm) with a small pixel size (approx.
25 microns) and measure the (semi)quantitative composition of largely unprepared surfaces for all elements between Be and U, often with sensitivity on the order of approx. 100 ppm. Our recent
Benchmarking of direct and indirect friction tests in micro forming Eriksen, Rasmus Solmer; Calaon, Matteo; Arentoft, M. The sizeable increase in metal forming friction at the micro scale, due to the existence of size effects, constitutes a barrier to the realization of industrial micro forming processes. In the quest for improved frictional conditions in micro scale forming operations, friction tests are applied...... to qualify the tribological performance of the particular forming scenario. In this work the application of a simulative sliding friction test at the micro scale is studied. The test setup makes it possible to measure the coefficient of friction as a function of the sliding motion. The results confirm a sizeable...... increase in the coefficient of friction when the workpiece size is scaled down.
MicroRNA from tuberculosis RNA: A bioinformatics study Wiwanitkit, Somsri; Wiwanitkit, Viroj The role of microRNA in the pathogenesis of pulmonary tuberculosis is an interesting topic in chest medicine at present. Recently, it was proposed that microRNA can be a useful biomarker for monitoring pulmonary tuberculosis and might be an important part of the pathogenesis of the disease. Here, the authors perform a bioinformatics study to assess the microRNA within known tuberculosis RNA. The microRNA part can be detected, and this can be important key information for further study of the p...
Evaluation of Biomaterials Using Micro-Computerized Tomography Torris, A. T. Arun; Columbus, K. C. Soumya; Saaj, U. S.; Krishnan, Kalliyana V.; Nair, Manitha B. Micro-computed tomography or Micro-CT is a high-resolution, non-invasive, x-ray scanning technique that allows precise three-dimensional imaging and quantification of micro-architectural and structural parameters of objects. Tomographic reconstruction is based on a cone-beam convolution-back-projection algorithm. Micro-architectural and structural parameters such as porosity, surface area to volume ratio, interconnectivity, pore size, wall thickness, anisotropy and cross-section area of biomaterials and bio-specimens such as trabecular bone, polymer scaffolds, bio-ceramics and dental restoratives were evaluated through imaging and computer-aided manipulation of the object scan data sets.
Performance Evaluation of Macro & Micro Mobility in HMIP Networks Osama Ali Abdelgadir Changing the location of a mobile node during transmission or reception of data changes the address of the mobile node, which results in packet loss as well as a delay in the time taken to locate the new address of the mobile node, and therefore a delay in data reception; this problem is known as the micro-mobility issue. Currently, mobile IP is the most promising solution for mobility management in the Internet. Several IP micro-mobility approaches have been proposed to enhance the performance of mobile IP, supporting quality of service, minimum packet loss, limited handoff delay, scalability and power conservation, but they are not scalable for macro mobility. A practical solution would therefore require integration of mobile IP and micro-mobility protocols, where mobile IP handles macro mobility and HMIP, Cellular IP or HAWAII handles micro mobility.
In this paper an integrated mobility management protocol for IP-based wireless networks is proposed and analyzed. The Hierarchical Micro Mobility Protocol is used. To identify the impact of micro-mobility in IP-based wireless networks, the selected micro-mobility model of the Hierarchical Micro Mobility Protocol is implemented in a network simulator for further analysis, measurement results, and performance comparison between macro- and micro-mobility protocol management. Simulation results presented in this paper are based on ns-2.
Micro-jet Cooling by Compressed Air after MAG Welding The material selected for this investigation was a low alloy steel weld metal deposit (WMD) after MAG welding with micro-jet cooling. The present investigation was aimed at the following task: analyzing the impact toughness of the WMD in terms of micro-jet cooling parameters. Weld metal deposit (WMD) was obtained for the first time by MAG welding with micro-jet cooling using compressed air and a gas mixture of argon and air. Until that moment, only argon, helium and nitrogen and their gas mixtures had been tested for micro-jet cooling.
Acoustic trapping in bubble-bounded micro-cavities O'Mahoney, P.; McDougall, C.; Glynne-Jones, P.; MacDonald, M. P. We present a method for controllably producing longitudinal acoustic trapping sites inside microfluidic channels. Air bubbles are injected into a micro-capillary to create bubble-bounded 'micro-cavities'. A cavity mode is formed that shows controlled longitudinal acoustic trapping between the two air/water interfaces, along with the levitation to the centre of the channel that one would expect from a lower-order lateral mode. 7 μm and 10 μm microspheres are trapped at the discrete acoustic trapping sites in these micro-cavities. We show this for several lengths of micro-cavity.
Micro tube heat exchangers for Space, Phase I National Aeronautics and Space Administration — Mezzo fabricates micro tube heat exchangers for a variety of applications, including aerospace, automotive racing, Department of Defense ground vehicles, economizers...
Circuit Design of Surface Acoustic Wave Based Micro Force Sensor Yuanyuan Li Pressure sensors are commonly used in industrial production and mechanical systems. However, resistance strain, piezoresistive, and ceramic capacitive pressure sensors possess limitations, especially in micro force measurement. A surface acoustic wave (SAW) based micro force sensor is designed in this paper, based on the theories of wavelet transform, SAW detection, and Pierce oscillator circuits. Using lithium niobate as the basal material, a mathematical model is established to analyze the frequency, and a peripheral circuit is designed to measure the micro force. The SAW-based micro force sensor is tested to show the soundness of the detection circuit design and the stability of frequency and amplitude.
Micro-EDM process modeling and machining approaches for minimum tool electrode wear for fabrication of biocompatible micro-components Puthumana, Govindan Micro-electrical discharge machining (micro-EDM) is a potential non-contact method for fabrication of biocompatible micro devices. This paper presents an attempt to model the tool electrode wear in the micro-EDM process using multiple linear regression analysis (MLRA) and artificial neural networks...... linear regression model was developed for prediction of TWR in ten steps at a significance level of 90%.
The optimum architecture of the ANN was obtained with 7 hidden layers at an R-sq value of 0.98. The predicted values of TWR using the ANN matched well with the practically measured and calculated values...... (ANN). The governing micro-EDM factors chosen for this investigation were: voltage (V), current (I), pulse on time (Ton) and pulse frequency (f). The proposed predictive models generate a functional correlation between the tool electrode wear rate (TWR) and the governing micro-EDM factors. A multiple...
Evaluation of microRNA alignment techniques Kaspi, Antony; El-Osta, Assam Genomic alignment of small RNA (smRNA) sequences such as microRNAs poses considerable challenges due to their short length (∼21 nucleotides [nt]) as well as the large size and complexity of plant and animal genomes. While several tools have been developed for high-throughput mapping of longer mRNA-seq reads (>30 nt), there are few that are specifically designed for mapping of smRNA reads, including microRNAs. The accuracy of these mappers has not been systematically determined in the case of smRNA-seq. In addition, it is unknown whether these aligners accurately map smRNA reads containing sequence errors and polymorphisms. By using simulated read sets, we determine the alignment sensitivity and accuracy of 16 short-read mappers and quantify their robustness to mismatches, indels, and nontemplated nucleotide additions. These were explored in the context of a plant genome (Oryza sativa, ∼500 Mbp) and a mammalian genome (Homo sapiens, ∼3.1 Gbp). Analysis of simulated and real smRNA-seq data demonstrates that mapper selection impacts differential expression results and interpretation. These results will inform on best practice for smRNA mapping and enable more accurate smRNA detection and quantification of expression and RNA editing. PMID:27284164
Paepe, Michel de; D'Herdt, Peter; Mertens, David Micro-CHP systems are now emerging on the market. In this paper, a thorough analysis is made of the operational parameters of 3 types of micro-CHP systems for residential use. Two types of houses (detached and terraced) are compared with a two-storey apartment. For each building type, the energy demands for electricity and heat are dynamically determined. Using these load profiles, several CHP systems are designed for each building type. Data were obtained for two commercially available gas engines, two Stirling engines and a fuel cell. Using a dynamic simulation, including start-up times, these five system types are compared to the separate energy system of a natural gas boiler and buying electricity from the grid. All CHP systems, if well sized, result in a reduction of primary energy use, though different technologies have very different impacts. Gas engines seem to have the best performance. The economic analysis shows that fuel cells are still too expensive and that even the gas engines only have a small internal rate of return (<5%), and this only occurs in favourable economic circumstances. It can, therefore, be concluded that although the different technologies are technically mature, installation costs should be reduced by at least 50% before CHP systems become interesting for residential use. Condensing gas boilers, now very popular in new homes, prove to be economically more interesting and also have a modest effect on primary energy consumption.
Micro manipulators to handle micro machines: aiming at developing clever and deft robots.
Fujie, M. [Hitachi, Ltd., Tokyo (Japan)] The current state of micro manipulators (MM) for controlling micro machines is described. No MM can be realized with conventional robots, in which finger positions and attitudes are controlled by information on the angle of each joint. To solve this problem, a system based on the concept of a master-slave manipulator with a complex structure is proposed, in which human ways of manipulation are coupled, via a computer, with master arms of simple structure and slave arms that facilitate the work. The MM requires a flexible joint-chaining mechanism when the smallness of the arm structure and the chaining of multiple actuators are taken into consideration. To control the relative positions and attitudes, a control algorithm is required which can learn the command signals given to a large number of actuators driving mechanisms with very many degrees of freedom, together with the relationship between those signals and changes in end-effector movement, rotation and direction, and which can perform work according to precise relative positioning of the objects to be handled. 8 refs., 4 figs.
MicroShield/ISOCS gamma modeling comparison. Sansone, Kenneth R Quantitative radiological analysis attempts to determine the quantity of activity or concentration of specific radionuclide(s) in a sample. Based upon the certified standards that are used to calibrate gamma spectral detectors, geometric similarities between the sample shape and the calibration standards determine whether the analysis results developed are qualitative or quantitative. A sample that does not mimic a calibrated sample geometry must be reported as a non-standard geometry, and thus the results are considered qualitative and not quantitative. MicroShield® or ISOCS® calibration software can be used to model non-standard geometric sample shapes in an effort to obtain a quantitative analytical result. MicroShield® and Canberra's ISOCS® software contain several geometry templates that can provide accurate quantitative modeling for a variety of sample configurations. Included in the software are computational algorithms that are used to develop and calculate energy efficiency values for the modeled sample geometry, which can then be used with conventional analysis methodology to calculate the result. The response of the analytical method and the sensitivity of the mechanical and electronic equipment to the radionuclide of interest must be calibrated, or standardized, using a calibrated radiological source that contains a known and certified amount of activity.
Micro channels in macro thermal management solutions Kosoy Boris V. Modern progress in electronics is associated with increases in computing ability and processing speed, as well as decreases in size. Future applications of electronic devices in aviation, aerospace and the high-performance consumer products industry demand very stringent specifications concerning miniaturization, component density, power density and reliability. Excess heat produces stresses on internal components inside the electronic device, thus creating reliability problems. Thus, a problem of heat generation and its efficient removal arises, and it has led to the development of advanced thermal control systems.
The present research analyses the thermodynamic feasibility of micro capillary heat-pumped networks in the thermal management of electronic systems, considers basic technological constraints and design availability, and identifies promising directions for further studies. Computational Fluid Dynamics studies have been performed on the laminar convective heat transfer and pressure drop of the working fluid in silicon micro channels. Surface roughness is simulated via regular constructal and stochastic models. The three-dimensional numerical solution shows significant effects of surface roughness, in terms of rough-element geometry such as height, size, spacing and channel height, on the velocity and pressure fields.
Modular Power Supply for Micro Resistance Welding Bondarenko Oleksandr The study is devoted to the important issue of enhancing the circuitry and characteristics of power supplies for micro resistance welding machines. The aim of the research is to provide high-quality input current and to increase the energy efficiency of the output pulse generator by means of improving the circuit topologies of the power supply's main blocks. In this study, a principle for constructing the power supply for micro resistance welding is presented which provides high output welding current, high accuracy of welding pulse formation, reduced energy losses, and high quality of the consumed input current. The multiphase topology of the charger with power factor correction based on SEPIC converters is suggested as the most efficient for charging the supercapacitor storage module. The multicell topology of the supercapacitor energy storage with voltage equalizing is presented. The parameters of the converter cells are evaluated. The calculations of energy efficiency of the power supply's input and output converters based on the suggested topologies are carried out and verified in MATLAB Simulink. A power factor greater than 99% is achieved.
Lou, Xiong Wen (David) Hollow micro-nanostructures are of great interest in many current and emerging areas of technology. Perhaps the best-known example is the use of fly-ash hollow particles generated by coal power plants as a partial replacement for Portland cement, to produce concrete with enhanced strength and durability. This review is devoted to the progress made in the last decade in the synthesis and applications of hollow micro-nanostructures. We present a comprehensive overview of synthetic strategies for hollow structures. These strategies are broadly categorized into four themes, which include well-established approaches, such as conventional hard-templating and soft-templating methods, as well as newly emerging methods based on sacrificial templating and template-free synthesis. Success in each has inspired multiple variations that continue to drive the rapid evolution of the field. The Review therefore focuses on the fundamentals of each process, pointing out advantages and disadvantages where appropriate. Strategies for generating more complex hollow structures, such as rattle-type and nonspherical hollow structures, are also discussed. Applications of hollow structures in lithium batteries, catalysis and sensing, and biomedicine are reviewed.
Nayfeh, A. H. We present a frequency-domain method to measure angular speeds using electrostatic micro-electro-mechanical system actuators.
Towards this end, we study a single-axis gyroscope made of a micro-cantilever and a proof-mass coupled to two fixed electrodes. The gyroscope possesses two orthogonal axes of symmetry and identical flexural mode shapes along these axes. We develop the equations of motion describing the coupled bending modes in the presence of electrostatic and Coriolis forces. Furthermore, we derive a consistent closed-form higher-order expression for the natural frequencies of the coupled flexural modes. The closed-form expression is verified by comparing its results to those obtained from numerical integration of the equations of motion. We find that rotations around the beam axis couple each pair of identical bending modes to produce a pair of global modes. They also split their common natural frequency into a pair of closely spaced natural frequencies. We propose the use of the difference between this pair of frequencies, which is linearly proportional to the speed of rotation around the beam axis, as a detector for the angular speed.
Immunomodulating microRNAs of mycobacterial infections. Bettencourt, Paulo; Pires, David; Anes, Elsa MicroRNAs are a class of small non-coding RNAs that have emerged as key regulators of gene expression at the post-transcriptional level by sequence-specific binding to target mRNAs. Some microRNAs block translation, while others promote mRNA degradation, leading to a reduction in protein availability. A single miRNA can potentially regulate the expression of multiple genes and their encoded proteins. Therefore, miRNAs can influence molecular signalling pathways and regulate many biological processes in health and disease. Upon infection, host cells rapidly change their transcriptional programs, including miRNA expression, as a response against the invading microorganism. Not surprisingly, pathogens can also alter the host miRNA profile to their own benefit, which is of major importance to scientists addressing high-morbidity and high-mortality infectious diseases such as tuberculosis. In this review, we present recent findings on the miRNA regulation of the host response against mycobacterial infections, providing new insights into host-pathogen interactions. Understanding these findings and their implications could reveal new opportunities for designing better diagnostic tools, therapies and more effective vaccines.
Customization of Artificial MicroRNA Design. Van Vu, Tien; Do, Vinh Nang RNAi approaches, including the microRNA (miRNA) regulatory pathway, offer great tools for functional characterization of unknown genes. Moreover, the applications of artificial microRNA (amiRNA) in the field of plant transgenesis have also been advanced to engineer pathogen-resistant or trait-improved transgenic plants. Until now, despite the high potency of the amiRNA approach, no commercial plant cultivar expressing amiRNAs with improved traits has been released. Besides the issues of biosafety policies, the specificity and efficacy of amiRNAs are major concerns. Sufficient care should be taken over the specificity and efficacy of amiRNAs due to their potential off-target effects and other issues relating to in vivo expression of pre-amiRNAs. For these reasons, the proper design of amiRNAs with the lowest off-target possibility is very important for successful applications of the approach in plants.
Therefore, there are many studies aiming to improve amiRNA design and amiRNA-expressing backbones to obtain better specificity and efficacy. However, an efficient reference for the design is still needed. In the present chapter, we attempt to summarize and discuss all the major concerns relating to amiRNA design, with the hope of providing a useful guideline for this approach.
Ferrofluid based micro-electrical energy harvesting Purohit, Viswas; Mazumder, Baishakhi; Jena, Grishma; Mishra, Madhusha; Materials Department, University of California, Santa Barbara, CA 93106 Collaboration Innovations in energy harvesting have seen a quantum leap in the last decade. With the introduction of low-energy devices in the market, micro energy harvesting units are being explored with much vigor. One of the recent areas of micro energy scavenging is the exploitation of existing vibrational energy and the use of various mechanical motions for the same, useful for low-power-consumption devices. Ferrofluids are liquids containing magnetic materials having nano-scale permanent magnetic dipoles. The present work explores the possibility of using this property for the generation of electricity. Since the power generation is through a liquid material, it can take any shape as well as respond to small acceleration levels. In this work, an electromagnet-based micropower generator is proposed to utilize the sloshing of the ferrofluid within a controlled chamber which moves at different low frequencies. As compared to permanent magnet units researched previously, ferrofluids can be placed in the smallest of containers of different shapes, thereby giving an output in response to the slightest change in motion. Mechanical motion from 1-20 Hz was able to give an output voltage in the mV range. In this paper, the efficiency and feasibility of such a system are demonstrated.
Croley, Thomas E. Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in the resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error-prone, sometimes storage-intensive and poorly adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
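The computation-ordering problem described in the entry above (each sub-area can be routed only after all of its upstream sub-areas have been computed) can be illustrated with a plain topological sort. The sketch below is a generic illustration under that assumption, not Croley's published node numbering and coding scheme, and the example network is hypothetical.

```python
# Generic sketch of automatic computation ordering for sub-area
# (micro-hydrology) models: each node lists the nodes immediately
# upstream of it, and a node can be computed only after all of its
# upstream nodes. This is a plain topological sort (Kahn's algorithm),
# not the specific numbering/coding scheme described in the abstract.
from collections import deque

def computation_order(upstream: dict[str, list[str]]) -> list[str]:
    """Return a valid processing order for the sub-area network."""
    remaining = {node: len(ups) for node, ups in upstream.items()}
    downstream: dict[str, list[str]] = {node: [] for node in upstream}
    for node, ups in upstream.items():
        for up in ups:
            downstream[up].append(node)

    ready = deque(node for node, count in remaining.items() if count == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for down in downstream[node]:
            remaining[down] -= 1
            if remaining[down] == 0:
                ready.append(down)

    if len(order) != len(upstream):
        raise ValueError("network contains a cycle; no valid ordering exists")
    return order

if __name__ == "__main__":
    # Hypothetical watershed: headwater sub-areas A, B, C drain through
    # D and E to the outlet F.
    network = {"A": [], "B": [], "C": [], "D": ["A", "B"], "E": ["C"], "F": ["D", "E"]}
    print(computation_order(network))  # e.g. ['A', 'B', 'C', 'D', 'E', 'F']
```

Kahn's algorithm also detects the degenerate case of a cyclic network, for which no valid computation order exists.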
Study of squeeze film damping in a micro-beam resonator based on micro-polar theory Mina Ghanbari In this paper, squeeze film damping in a micro-beam resonator based on micro-polar theory has been investigated. The proposed model for this study consists of a clamped-clamped micro-beam bounded between two fixed layers. The gap between the micro-beam and the layers is filled with air. As fluid behaves differently at the micro scale than at the macro scale, the micro-scale fluid field in the gap has been modeled based on micro-polar theory. The equation of motion governing transverse deflection of the micro-beam, based on modified couple stress theory, and the non-linear Reynolds equation of the fluid field, based on micropolar theory, have been non-dimensionalized, linearized and solved simultaneously in order to calculate the quality factor of the resonator. The effect of the micropolar parameters of air on the quality factor has been investigated. The quality factor of the micro-beam resonator for different values of the non-dimensionalized length scale of the beam, the squeeze number and the non-dimensionalized pressure has been calculated and compared to the quality factor values obtained based on classical theory.
Development and performance measurement of micro-power pack using micro-gas turbine driven automotive alternators Sim, Kyuho; Koo, Bonjin; Kim, Chang Ho; Kim, Tae Ho Highlights: ► We develop a micro-power pack using automotive alternators and a micro-gas turbine. ► We measure the rotordynamic and power generation performance of the micro-power pack. ► The micro-power pack shows dramatic increases in mass and volumetric power densities. ► Test results assure the feasibility of the micro-power pack for electric vehicles. -- Abstract: This paper presents the development of a micro-power pack using automotive alternators powered by a micro-gas turbine (MGT) to recharge battery packs, in particular for electric vehicles (EVs). The thermodynamic efficiency for the MGT with the power turbine is estimated from a simple Brayton cycle analysis. The rotordynamic and power generation performance of the MGT-driven alternator was measured during a series of experiments under electrical no-load and load conditions, and with belt-pulley and flexible bellows couplings. The flexible coupling showed superior rotordynamic and power generation performance compared to the belt coupling, due to the enhanced alignment of the alternator rotor and the reduced mechanical friction. Furthermore, the micro-power pack showed dramatic increases in mass and volumetric power densities, by ∼4 times and ∼5 times, respectively, compared with those of a commercial diesel generator with a similar power level. As a result, this paper assures the feasibility of the light-weight micro-power pack using an MGT and automotive alternators for EVs.
Fabrication of a Micro-Lens Array Mold by Micro Ball End-Milling and Its Hot Embossing Peng Gao Hot embossing is an efficient technique for manufacturing high-quality micro-lens arrays. The machining quality is significant for hot embossing of the micro-lens array mold. This study investigates the effects of micro ball end-milling on the machining quality of AISI H13 tool steel used in the micro-lens array mold. The micro ball end-milling experiments were performed under different machining strategies, and the surface roughness and scallop height of the machined micro-lens array mold were measured.
The experimental results showed that a three-dimensional (3D) offset spiral strategy could achieve a higher machining quality in comparison with the other strategies assessed in this study. Moreover, the 3D offset spiral strategy is more appropriate for machining the micro-lens array mold. With an increase of the cutting speed and feed rate, the surface roughness of the micro-lens array mold slightly increases, while a small step-over can greatly reduce the surface roughness. In addition, a hot embossing experiment was undertaken, and the obtained results indicated higher-quality production of the micro-lens array mold by the 3D offset spiral strategy. Characterization of impact materials around Barringer Meteor Crater by micro-PIXE and micro-SRXRF techniques Uzonyi, I. E-mail: [email protected]; Szoeor, Gy.; Rozsa, P.; Vekemans, B.; Vincze, L.; Adams, F.; Drakopoulos, M.; Somogyi, A.; Kiss, A.Z. A combined micro-PIXE and micro-SRXRF method has been tested successfully for the characterization of impact materials collected at the well-known Barringer Meteor Crater. The micro-PIXE technique proved to be sensitive in the Z ≤ 28 atomic number region, while the micro-SRXRF is sensitive above Fe, especially for the siderophile elements. Quantitative analysis has become available for about 40 elements by these complementary methods, providing new perspectives for the interpretation of the formation mechanism of impact-metamorphosed objects. The Effect of Micro Enterprise Financing on Farmers Welfare in Abia ... The Chow test revealed a significant difference between the welfare of the farmers with micro loans and those without micro credit. Micro enterprise farmers who obtained micro credit to finance their business had better welfare status than those that did not. Key words: Micro Enterprise, Financing, Welfare, Abia State, ... Regulation of cardiac microRNAs by serum response factor Wei Jeanne Y Serum response factor (SRF) regulates certain microRNAs that play a role in cardiac and skeletal muscle development. However, the role of SRF in the regulation of microRNA expression and microRNA biogenesis in cardiac hypertrophy has not been well established. In this report, we employed two distinct transgenic mouse models to study the impact of SRF on cardiac microRNA expression and microRNA biogenesis. Cardiac-specific overexpression of SRF (SRF-Tg) led to altered expression of a number of microRNAs. Interestingly, downregulation of miR-1 and miR-133a and upregulation of miR-21 occurred by 7 days of age in these mice, long before the onset of cardiac hypertrophy, suggesting that SRF overexpression impacted the expression of microRNAs which contribute to cardiac hypertrophy. Reducing the cardiac SRF level using the antisense-SRF transgenic approach (Anti-SRF-Tg) resulted in the expression of miR-1, miR-133a and miR-21 changing in the opposite direction. Furthermore, we observed that SRF regulates microRNA biogenesis, specifically the transcription of pri-microRNA, thereby affecting the mature microRNA level. The mir-21 promoter sequence is conserved among mouse, rat and human; one SRF binding site was found in the mir-21 proximal promoter region of all three species. The mir-21 gene is regulated by SRF and its cofactors, including myocardin and p49/Strap. Our study demonstrates that the downregulation of miR-1 and miR-133a and the upregulation of miR-21 can be reversed by a single upstream regulator, SRF. These results may help to develop novel therapeutic interventions targeting microRNA biogenesis.
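The Chow test mentioned in the micro-enterprise financing abstract above compares a pooled regression against separate regressions for the two groups of farmers. The short Python sketch below illustrates the statistic on simulated data; the variables, sample sizes and coefficients are assumptions for illustration, not values from the study.

    # Chow F-statistic: does one regression fit both groups, or do the groups
    # (here, farmers with and without micro-credit) need separate coefficients?
    import numpy as np

    def ols_ssr(X, y):
        """Sum of squared residuals from an ordinary least-squares fit."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    def chow_test(X1, y1, X2, y2):
        """Chow F-statistic for equality of coefficients across two groups."""
        k = X1.shape[1]                              # number of parameters
        n1, n2 = len(y1), len(y2)
        ssr_pooled = ols_ssr(np.vstack([X1, X2]), np.concatenate([y1, y2]))
        ssr_split = ols_ssr(X1, y1) + ols_ssr(X2, y2)
        return ((ssr_pooled - ssr_split) / k) / (ssr_split / (n1 + n2 - 2 * k))

    # hypothetical data: welfare score regressed on an intercept and farm income
    rng = np.random.default_rng(0)
    X1 = np.column_stack([np.ones(40), rng.normal(5, 1, 40)])   # with micro-credit
    X2 = np.column_stack([np.ones(40), rng.normal(5, 1, 40)])   # without micro-credit
    y1 = X1 @ np.array([2.0, 0.8]) + rng.normal(0, 0.5, 40)
    y2 = X2 @ np.array([1.0, 0.5]) + rng.normal(0, 0.5, 40)
    print("Chow F =", round(chow_test(X1, y1, X2, y2), 2))

A large F value relative to the F(k, n1+n2-2k) critical value indicates, as in the abstract, a significant difference between the two groups' regressions.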
Incentive Mechanism of Micro-grid Project Development Yong Long Due to cost and benefit issues, the investment demand and consumption demand for micro-grids are insufficient in the early stages, which leaves all parties lacking the motivation to participate in the development of micro-grid projects and leads to the slow development of micro-grids. In order to promote the development of micro-grids, a corresponding incentive mechanism should be designed to motivate the development of micro-grid projects. Therefore, this paper builds a multi-stage incentive model of micro-grid project development involving the government, the grid corporation, the energy supplier, the equipment supplier, and the user in order to study the incentive problems of micro-grid project development. Through the solution and analysis of the model, this paper deduces the government's optimal subsidy and the energy supplier's optimal cooperation incentive, calculates the optimal pricing strategies of the grid corporation and the energy supplier, and analyzes the influence of relevant factors on the optimal subsidy and incentive. The study reveals that the cost and social benefit of micro-grid development, as well as the technical level and equipment quality of the equipment supplier, have a positive impact on the micro-grid subsidy, and that government subsidies positively adjust the level of cooperation incentives and price incentives. In the end, the validity of the model is verified by numerical analysis, and the incentive strategy of each participant is analyzed. The research of this paper is of great significance for encouraging the project development of micro-grids and promoting their sustainable development. Common features of microRNA target prediction tools Sarah M. Peterson The human genome encodes over 1800 microRNAs, which are short noncoding RNA molecules that function to regulate gene expression post-transcriptionally. Due to the potential for one microRNA to target multiple gene transcripts, microRNAs are recognized as a major mechanism to regulate gene expression and mRNA translation. Computational prediction of microRNA targets is a critical initial step in identifying microRNA:mRNA target interactions for experimental validation. The available tools for microRNA target prediction encompass a range of different computational approaches, from the modeling of physical interactions to the incorporation of machine learning. This review provides an overview of the major computational approaches to microRNA target prediction. Our discussion highlights three tools for their ease of use, reliance on relatively updated versions of miRBase, and range of capabilities, and these are DIANA-microT-CDS, miRanda-mirSVR, and TargetScan. In comparison across all microRNA target prediction tools, four main aspects of the microRNA:mRNA target interaction emerge as common features on which most target prediction is based: seed match, conservation, free energy, and site accessibility. This review explains these features and identifies how they are incorporated into currently available target prediction tools. MicroRNA target prediction is a dynamic field with increasing attention on the development of new analysis tools. This review attempts to provide a comprehensive assessment of these tools in a manner that is accessible across disciplines. Understanding the basis of these prediction methodologies will aid in user selection of the appropriate tools and interpretation of the tool output.
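Of the four common features listed above, the seed match is the simplest to show in code. The sketch below scans a 3'-UTR for perfect complementarity to microRNA positions 2-8; the sequences are illustrative only, and real tools such as TargetScan or miRanda-mirSVR layer conservation, free-energy and site-accessibility scoring on top of a step like this.

    # Find candidate target sites by perfect 7-mer seed complementarity.
    # Sequences below are made up for illustration.
    COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def seed_sites(mirna, utr, seed_len=7):
        """Return 0-based UTR positions with a perfect seed match (miRNA pos. 2-8)."""
        seed = mirna[1:1 + seed_len]
        site = "".join(COMPLEMENT[b] for b in reversed(seed))   # reverse complement
        return [i for i in range(len(utr) - seed_len + 1)
                if utr[i:i + seed_len] == site]

    mirna = "UAGCUUAUCAGACUGAUGUUGA"        # illustrative miR-21-like sequence
    utr = "AGCAUAAGCUAACCGAUAAGCUAGG"       # illustrative 3'-UTR fragment
    print(seed_sites(mirna, utr))           # positions of candidate sites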
The micro thermal analysis of polymers Grandy, David Brian This study is concerned with the development of micro-thermal analysis as a technique for characterising heterogeneous polymers. It is divided into two main parts. In the first part, the use of miniature Wollaston wire near-field thermal probes mounted in an atomic force microscope (AFM) to carry out highly localised thermal analysis (L-TA) of amorphous and semi-crystalline polymers is investigated. Here, the temperature of the probe sensor or tip is scanned over a pre-selected temperature range while in contact with the surface of a sample. It is thereby used to heat a volume of material of the order of several cubic micrometres. The effect of the glass transition, cold crystallisation, melting and degree of crystallinity on L-TA measurements is investigated. The materials used are poly(ethylene terephthalate), polystyrene and fluorocarbon-coated poly(butylene terephthalate). The primary measurements are the micro- or localised analogues of thermomechanical analysis (L-TMA) and differential thermal analysis (L-DTA). The effect of applying a sinusoidal modulation to the temperature of the probe is also investigated. In the second part, conventional ultra-sharp inert AFM probes are used, in conjunction with a variable-temperature microscope stage, to conduct variable-temperature mechanical property-based imaging of phase-separated polymer blends and copolymers. Here, the temperature of the whole sample is varied and the temperature of the probe tip remains essentially the same as that of the sample. The primary AFM imaging mode is pulsed force mode (PFM-AFM). This is an intermittent contact (IC) method in which a mechanical modulation is applied to the probe cantilever. The methodology is demonstrated on a model 50:50 blend of polystyrene and poly(methyl methacrylate) (PS-PMMA) and three segmented polyurethane (SPU) elastomers containing different chain extenders. In doing so, it is shown that PFM-AFM imaging can be carried out successfully over a temperature range Micro mass spectrometer on a chip. Cruz, Dolores Y.; Blain, Matthew Glenn; Fleming, James Grant The design, simulation, fabrication, packaging, electrical characterization and testing analysis of a microfabricated cylindrical ion trap (µCIT) array is presented. Several versions of microfabricated cylindrical ion traps were designed and fabricated. The final design of the individual trap array element consisted of two end cap electrodes, one ring electrode, and a detector plate, fabricated in seven tungsten metal layers by molding tungsten around silicon dioxide (SiO2) features. Each layer of tungsten is then polished back in damascene fashion. The SiO2 was removed using a standard release process to realize a free-hung structure. Five different sized traps were fabricated, with inner radii of 1, 1.5, 2, 5 and 10 µm and heights ranging from 3-24 µm. Simulations examined the effects of ion and neutral temperature, the pressure and nature of the cooling gas, ion mass, trap voltage and frequency, space-charge, fabrication defects, and other parameters on the ability of micrometer-sized traps to store ions. The electrical characteristics of the ion trap arrays were determined. The capacitance was 2-500 pF for the various sized traps and arrays. The resistance was on the order of 1-2 Ω. The inductance of the arrays was calculated to be 10-1500 pH, depending on the trap and array sizes. The ion traps' field emission characteristics were assessed.
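As a rough, order-of-magnitude illustration of why micrometre-scale traps need very high drive frequencies, the sketch below applies the ideal Paul-trap stability parameter q_z = 8eV / (m (r0^2 + 2 z0^2) Ω^2) to a 5 µm trap. This textbook expression is only an approximation for a cylindrical geometry, and the RF amplitude, trap half-height and target q used here are assumed values, not figures taken from the report.

    # Estimate the RF drive frequency needed for a stable trapping parameter q_z,
    # using the ideal 3D quadrupole (Paul trap) formula as an approximation.
    import math

    E_CHARGE = 1.602e-19     # elementary charge, C
    AMU = 1.661e-27          # atomic mass unit, kg

    def drive_freq_for_q(q_target, v_rf, m_amu, r0, z0):
        """RF drive frequency (Hz) that gives the requested q_z."""
        m = m_amu * AMU
        omega = math.sqrt(8 * E_CHARGE * v_rf / (m * (r0**2 + 2 * z0**2) * q_target))
        return omega / (2 * math.pi)

    # assumed: 5-micrometre trap, singly charged toluene ion (92 u), 100 V RF, q_z ~ 0.4
    f = drive_freq_for_q(0.4, 100.0, 92.0, 5e-6, 5e-6)
    print(f"required drive frequency ~ {f / 1e9:.1f} GHz")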
It was determined that the traps could be operated up to 125 V while maintaining field emission currents below 1 × 10⁻¹⁵ A. The testing focused on using the 5-µm CITs to trap toluene (C7H8). Ion ejection from the traps was induced by termination of the RF voltage applied to the ring electrode, and the current measured on the collector electrode suggested trapping of ions in 1-10% of the traps. Improvements to the design of the traps were defined to minimize voltage drop to the substrate, thereby increasing the trapping voltage applied to the ring electrode, and to Fabrication of micro-Ni arrays by electroless and electrochemical ... Indian Academy of Sciences (India) in electroless solution. With the help of the membrane, nickel micro-columns of about 1-2 µm diameter were obtained. The surface-deposited nickel layer served as a substrate for the nickel micro-columns, and the resulting material possessed strong mechanical strength. Electrochemical deposition was operated without ... Improving Financial Service Delivery to Communities through Micro ... ... through Micro-finance Institutions in Uganda; the case of Pride Micro-finance ... This data was analysed qualitatively and the results of the analysis indicated that ... a number of challenges in financial service delivery; like inability to reach out ... Identification of novel components in microProtein signalling Rodrigues, Vandasue Lily characterization of smaller proteins. Using a computational approach, we identified putative microProteins that could target a diverse variety of protein classes. Using a synthetic microProtein approach, we demonstrate that miPs can target a diverse variety of target proteins, which makes them of interest... MicroRNA and gene signature of severe cutaneous drug ... Purpose: To build a microRNA and gene signature of severe cutaneous adverse drug reactions (SCAR), including Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN). Methods: MicroRNA expression profiles were downloaded from the miRNA expression profile of patients' skin suffering from TEN using an ... Integrated sensor array for on-line monitoring micro bioreactors Krommenhoek, E.E. The "Fed-batch on a chip" project, which was carried out in close cooperation with the Technical University of Delft, aims to miniaturize and parallelize micro bioreactors suitable for on-line screening of micro-organisms. This thesis describes an electrochemical sensor array which has been Micro-tensile strength of a welded turbine disc superalloy Oluwasegun, K.M.; Cooper, C.; Chiu, Y.L.; Jones, I.P. [School of Metallurgy and Materials, University of Birmingham, B15 2TT (United Kingdom); Li, H.Y., E-mail: [email protected] [School of Metallurgy and Materials, University of Birmingham, B15 2TT (United Kingdom); Baxter, G. [Rolls-Royce plc., P.O. Box 31, Derby DE24 8BJ (United Kingdom) A micro-tensile testing system coupled with focussed ion beam (FIB) machining was used to characterise the micro-mechanical properties of the weld from a turbine disc alloy. The strength variations between the weld and the base alloy are rationalised via the microstructure obtained. C-stop production by micro injection moulding Islam, Aminul of an engineering micro product which integrates many features, such as a beam snap-fit, annular snap-fit, hinge connection, filter grid, house and lid, in a single product. All the features are at the micro dimensional scale and are manufactured by a single step of injection moulding. This presentation will cover industrial...
Micro-lightguide spectrophotometry for tissue perfusion in ischemic limbs Jørgensen, Lise Pyndt; Schroeder, Torben V To validate micro-lightguide spectrophotometry (O2C) in patients with lower limb ischemia and to compare results with those obtained from toe blood pressure. Study on Micro Wind Generator System for Automobile Fujimoto, Koji; Washizu, Shinsuke; Ichikawa, Tomohiko; Yukita, Kazuto; Goto, Yasuyuki; Ichiyanagi, Katsuhiro; Oshima, Takamitsu; Hayashi, Niichi; Tobi, Nobuo This paper proposes a micro wind generator system for automobiles. The proposed system is composed of a deflector, a micro windmill, a generator, and an electric storage device. Its effectiveness is confirmed through an examination using an air blower. Therefore, new energy can be expected to be obtained by installing this system in a truck. Integration of Polymer Micro-Electrodes for Bio-Sensing Argyraki, Aikaterini; Larsen, Simon Tylsgaard; Tanzi, Simone We present the fabrication of PEDOT and pyrolyzed micro-electrodes for the detection of neurotransmitter exocytosis from single cells. The patterns of the electrodes are defined with photolithography. The micro-electro-fluidic chips were fabricated by bonding two injection molded TOPAS parts. Pol... Advancing three-dimensional MEMS by complimentary laser micro manufacturing Palmer, Jeremy A.; Williams, John D.; Lemp, Tom; Lehecka, Tom M.; Medina, Francisco; Wicker, Ryan B. This paper describes improvements that enable engineers to create three-dimensional MEMS in a variety of materials. It also provides a means for selectively adding three-dimensional, high-aspect-ratio features to pre-existing PMMA micro molds for subsequent LIGA processing. This complementary method involves in situ construction of three-dimensional micro molds in a stand-alone configuration or directly adjacent to features formed by x-ray lithography. Three-dimensional micro molds are created by micro stereolithography (MSL), an additive rapid prototyping technology. Alternatively, three-dimensional features may be added by direct femtosecond laser micro machining. Parameters for optimal femtosecond laser micro machining of PMMA at 800 nanometers are presented. The technical discussion also includes strategies for enhancements in the context of material selection and post-process surface finish. This approach may lead to practical, cost-effective 3-D MEMS with the surface finish and throughput advantages of x-ray lithography. Accurate three-dimensional metal microstructures are demonstrated. Challenges remain in process planning for micro stereolithography and development of buried features following femtosecond laser micro machining. Diet-responsive microRNAs are likely exogenous In a recent report, Title et al. fostered miRNA-375 and miR-200c knock-out pups to wild-type dams and arrived at the conclusion that milk microRNAs are bioavailable in trace amounts at best and that postprandial concentrations of microRNAs are too low to elicit biological effects. Their take home m... Automation of 3D micro object handling process Gegeckaite, Asta; Hansen, Hans Nørgaard Most of the micro objects in industrial production are handled with manual labour or in semiautomatic stations. Manual labour usually makes handling and assembly operations highly flexible, but slow, relatively imprecise and expensive.
Handling of 3D micro objects poses special challenges due to ... Two-component micro injection moulding for hearing aid applications Islam, Aminul; Hansen, Hans Nørgaard; Marhöfer, David Maximilian. The moulding machine was a state-of-the-art 2k micro machine from DESMA. The fabricated micro part was a socket house integrated with a sealing ring for the receiver-in-canal hearing instrument. The test performed on the demonstrator showed the potential of the 2k moulding technology to be able to solve some... Micro-strip sensors based on CVD diamond Adam, W.; Berdermann, E.; Bergonzo, P.; Bertuccio, G.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; D'Angelo, P.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; Eijk, B. van; Fallou, A.; Fizzotti, F.; Foulon, F.; Friedl, M.; Gan, K.K.; Gheeraert, E.; Hallewell, G.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Kass, R.; Koeth, T.; Krammer, M.; Logiudice, A.; Lu, R.; Mac Lynne, L.; Manfredotti, C.; Meier, D. E-mail: [email protected]; Mishina, M.; Moroni, L.; Oh, A.; Pan, L.S.; Pernicka, M.; Peitz, A.; Perera, L.; Pirollo, S.; Procario, M.; Riester, J.L.; Roe, S.; Rousseau, L.; Rudge, A.; Russ, J.; Sala, S.; Sampietro, M.; Schnetzer, S.; Sciortino, S.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R.J.; Tesarek, R.; Trischuk, W.; Tromson, D.; Vittone, E.; Walsh, A.M.; Wedenig, R.; Weilhammer, P.; Wetstein, M.; White, C.; Zeuner, W.; Zoeller, M.; RD42 Collaboration In this article we present the performance of recent chemical vapour deposition (CVD) diamond micro-strip sensors in beam tests. In addition, we present the first comparison of a CVD diamond micro-strip sensor before and after proton irradiation. Development of MicroMegas for a Digital Hadronic Calorimeter Adloff, Catherine; Blaha, Jan; Espargiliere, Ambroise; Karyotakis, Yannis Recent developments on the MicroMegas prototypes built by use of the bulk technology with analog and digital readout electronics are presented. The main test beam results of a stack of several MicroMegas prototypes fully comply with the needs of a hadronic calorimeter for future particle physics experiments. A technical solution for a large scale prototype is also introduced. Signal measurement and estimation techniques for micro and nanotechnology Clévy, Cédric; Rakotondrabe, Micky; Chaillet, Nicolas ..., accelerometers, micro-mirrors, micro-relays, and pressure sensors are among the most known and widespread devices that open cost-effective and highly integrated solutions to the car industry, aeronautics, medicine, biology, energy, and telecommunication domains. One step further, nanotechnologies deal with the technology at the nano... Fabrication of LTCC based Micro Thruster for Precision Controlled Spaceflight Larsen, Jack; Jørgensen, John Leif The paper at hand presents the initial investigations on the development and fabrication of a micro thruster based on LTCC technology, delivering a thrust in the micro Newton regime. Using smaller segments of an observation system distributed on two or more spacecrafts, one can realize an observa... Activity patterns of cultured neural networks on micro electrode arrays Rutten, Wim; van Pelt, J.
A hybrid neuro-electronic interface is a cell-cultured micro electrode array, acting as a neural information transducer for stimulation and/or recording of neural activity in the brain or the spinal cord (ventral motor region or dorsal sensory region). It consists of an array of micro electrodes on A Platform for Manufacturable Stretchable Micro-electrode Arrays Khoshfetrat Pakazad, S.; Savov, A.; Braam, S.R.; Dekker, R. A platform for the batch fabrication of pneumatically actuated Stretchable Micro-Electrode Arrays (SMEAs) by using state-of-the-art micro-fabrication techniques and materials is demonstrated. The proposed fabrication process avoids the problems normally associated with processing of thin film greater than 30 % of the same patients [5]. Nevertheless, the mechanisms of SJS and TEN are not fully elucidated. MicroRNAs or miRs are single-stranded RNAs that are capable of posttranscriptional gene regulation via targeting their mRNA [6]. MicroRNAs are very important regulators in many human diseases, for instance, ... Micro powder-injection moulding of metals and ceramics Development of micro-MIM/-CIM was started at Forschungszentrum Karlsruhe with the aim of creating a process suitable for a wide range of materials as well as for medium-scale and large-scale production of micro components. Using enhanced machine technology and special tempering procedures, this process enables ... Leveraging Innovation Capabilities of Asian Micro, Small and ... Leveraging Innovation Capabilities of Asian Micro, Small and Medium Enterprises through Intermediary Organizations. Micro, small and medium enterprises (MSMEs) are a source of livelihood for billions of poor people worldwide. The current global economic downturn has hit these enterprises particularly hard, putting ... SMART FUEL CELL OPERATED RESIDENTIAL MICRO-GRID COMMUNITY Dr. Mohammad S. Alam (PI/PD) To build on the work of year one by expanding the smart control algorithm developed to a micro-grid of ten houses; to perform a cost analysis; to evaluate alternate energy sources; to study system reliability; to develop the energy management algorithm, and to perform micro-grid software and hardware simulations. Micro-Mechanical Modeling of Fiber Reinforced Concrete Stang, Henrik of Fiber Reinforced Concrete (FRC) on the micro-, the meso-, as well as the macro-level, i.e. modeling aspects of fiber-matrix interaction, overall constitutive modeling and structural modeling. Emphasis is placed on the micro- and meso-aspects; however, some basic results on the macro-level are also... Micro-CAT with redundant electrodes (CATER) Berg, F.D. van den; Eijk, C.W.E. van; Hollander, R.W.; Sarro, P.M. High-rate X-ray or neutron counting introduces the problem of hit multiplicity when 2D position reconstruction is demanded. Implementation of a third readout electrode, having a different angle than the anode or cathode, allows multiplicity problems to be eliminated. We present experimental results of a new type of gas-filled micro-patterned radiation detector, called 'Compteur a Trous a Electrodes Redondantes (CATER)', which provides such an extra readout channel in the form of a ring-shaped electrode positioned between the anode and the cathode. The ionic signal is shared between the ring electrode and the cathode strip in a way that can be controlled by their potential difference.
We observe a strong signal dependence on the drift field, which can be understood by the reduced transparency for the primary charge at high drift fields. MicroSCADA project documentation database Kolam, Karolina This engineering thesis was commissioned by ABB Power Systems, Network Management. The purpose of the thesis was to create a database for documenting information about MicroSCADA projects. A suitable tool for creating reports and writing new data to the database was also to be included. Before this thesis, all information was stored as separate text documents. With a database, all the information could be gathered in one place and archived over a longer period. It simplified... Applications of dewetting in micro and nanotechnology. Gentili, Denis; Foschi, Giulia; Valle, Francesco; Cavallini, Massimiliano; Biscarini, Fabio Dewetting is a spontaneous phenomenon where a thin film on a surface ruptures into an ensemble of separated objects, like droplets, stripes, and pillars. Spatial correlations with characteristic distance and object size emerge spontaneously across the whole dewetted area, leading to regular motifs with long-range order. Characteristic length scales depend on film thickness, which is a convenient and robust technological parameter. Dewetting is therefore an attractive paradigm for organizing a material into structures of well-defined micro- or nanometre size, precisely positioned on a surface, thus avoiding lithographical processes. This tutorial review introduces the reader to the physical-chemical basis of dewetting, shows how the dewetting process can be applied to different functional materials with relevance in technological applications, and highlights the possible strategies to control the length scales of the dewetting process. Bioinspiration From Nano to Micro Scales Methods in bioinspiration and biomimicking have been around for a long time. However, due to current advances in modern physical, biological sciences, and technologies, our understanding of the methods has evolved to a new level. This is due not only to the identification of mysterious and fascinating phenomena but also to the understanding of the correlation between the structural factors and the performance based on the latest theoretical, modeling, and experimental technologies. Bioinspiration: From Nano to Micro Scale provides readers with a broad view of the frontiers of research in the area of bioinspiration from the nano to macroscopic scales, particularly in the areas of biomineralization, antifreeze protein, and antifreeze effect. It also covers such methods as the lotus effect and superhydrophobicity, structural colors in the animal kingdom and beyond, as well as behavior in ion channels. A number of international experts in related fields have contributed to this book, which offers a comprehensive an... Micro-navigation in complex periodic environments Chamolly, Alexander; Ishikawa, Takuji; Lauga, Eric Natural and artificial small-scale swimmers may often self-propel in environments subject to complex geometrical constraints. While most past theoretical work on low-Reynolds-number locomotion addressed idealised geometrical situations, not much is known about the motion of swimmers in heterogeneous environments. We investigate theoretically and numerically the behaviour of a single spherical micro-swimmer located in an infinite, periodic body-centred cubic lattice consisting of rigid inert spheres of the same size as the swimmer.
We uncover a surprising and complex phase diagram of qualitatively different trajectories depending on the lattice packing density and swimming actuation strength. These results are then rationalised using hydrodynamic theory. In particular, we show that the far-field nature of the swimmer (pusher vs. puller) governs the behaviour even at high volume fractions. ERC Grant PhyMeBa (682754, EL); JSPS Grant-in-Aid for Scientific Research (A) (17H00853, TI). Elementary particles as micro-universes Recami, E.; Zanchin, V.T.; Vasconcelos, M.T. A panoramic view is presented of a proposed unified, bi-scale theory of gravitational and strong interactions (which is mathematically analogous to the last version of N. Rosen's bi-metric theory, and yields physical results similar to strong gravity's). This theory is purely geometrical in nature, adopting the methods of General Relativity for the description of hadron structure and strong interactions. In particular, hadrons are associated with strong black-holes from the external point of view, and with micro-universes from the internal point of view. Among the results presented here are the derivation of confinement and asymptotic freedom from the hadron constituents; of the Yukawa behaviour of the potential at the static limit; of the strong coupling constant; and of mesonic mass spectra. (author) The dynamic micro computed tomography at SSRF Chen, R.; Xu, L.; Du, G.; Deng, B.; Xie, H.; Xiao, T. Synchrotron radiation micro-computed tomography (SR-μCT) is a critical technique for quantitatively characterizing the 3D internal structure of samples; recently, dynamic SR-μCT has been attracting vast attention since it can evaluate the three-dimensional structure evolution of a sample. A dynamic μCT method, based on a monochromatic beam, was developed at the X-ray Imaging and Biomedical Application Beamline at the Shanghai Synchrotron Radiation Facility, by combining a compressed-sensing-based CT reconstruction algorithm with a hardware upgrade. The monochromatic-beam-based method can achieve quantitative information and a lower dose than the white-beam-based method, in which the lower-energy beam is absorbed by the sample rather than contributing to the final imaging signal. The developed method was successfully used to investigate the compression of the air sac during respiration in a bell cricket, providing new knowledge for further research on the insect respiratory system. Micro- and nanoflows modeling and experiments Rudyak, Valery Ya; Maslov, Anatoly A; Minakov, Andrey V; Mironov, Sergey G This book describes physical, mathematical and experimental methods to model flows in micro- and nanofluidic devices. It takes into consideration flows in channels with characteristic sizes ranging from several hundred micrometers down to several nanometers. Methods based on solving kinetic equations, a coupled kinetic-hydrodynamic description, and the molecular dynamics method are used. Based on detailed measurements of pressure distributions along straight and bent microchannels, the hydraulic resistance coefficients are refined. Flows of disperse fluids (including disperse nanofluids) are considered in detail. Results of hydrodynamic modeling of the simplest micromixers are reported. Mixing of fluids in Y-type and T-type micromixers is considered. The authors present a systematic study of jet flows, jet structure and laminar-turbulent transition. The influence of sound on the microjet structure is considered.
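For the compressed-sensing reconstruction mentioned in the SSRF dynamic micro-CT abstract above, a generic sparsity-regularised solver conveys the basic idea. The iterative soft-thresholding (ISTA) loop below runs on a toy linear system; the beamline's actual projection operators, regulariser and parameters are not specified in the abstract, so everything here is an assumed stand-in.

    # ISTA: minimise ||A x - b||^2 + lam * ||x||_1 for a sparse reconstruction.
    import numpy as np

    def ista(A, b, lam=0.1, n_iter=200):
        """Sparse least-squares reconstruction by iterative soft thresholding."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)                    # gradient of the data term
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # shrink
        return x

    # toy problem: 40 measurements of a 100-pixel signal with 3 non-zero entries
    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]
    b = A @ x_true
    print(np.round(ista(A, b)[[5, 37, 80]], 2))         # approximately recovered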
New phenomena associated with turbulization and relaminarization of the mixing layer of microjets are di... MicroRNAs in the Hypothalamus Meister, Björn; Herzer, Silke; Silahtaroglu, Asli MicroRNAs (miRNAs) are short (∼22 nucleotides) non-coding ribonucleic acid (RNA) molecules that negatively regulate the expression of protein-coding genes. Posttranscriptional silencing of target genes by miRNA is initiated by binding to the 3'-untranslated regions of target mRNAs, resulting...... of the hypothalamus, and miRNAs have recently been shown to be important regulators of hypothalamic control functions. The aim of this review is to summarize some of the current knowledge regarding the expression and role of miRNAs in the hypothalamus.......RNA molecules are abundantly expressed in tissue-specific and regional patterns and have been suggested as potential biomarkers, disease modulators and drug targets. The central nervous system is a prominent site of miRNA expression. Within the brain, several miRNAs are expressed and/or enriched in the region... Hamam, Rimi; Hamam, Dana; Alsaleh, Khalid A Effective management of breast cancer depends on early diagnosis and proper monitoring of patients' response to therapy. However, these goals are difficult to achieve because of the lack of sensitive and specific biomarkers for early detection and for disease monitoring. Accumulating evidence in the past several years has highlighted the potential use of peripheral blood circulating nucleic acids such as DNA, mRNA and micro (mi)RNA in breast cancer diagnosis, prognosis and for monitoring response to anticancer therapy. Among these, circulating miRNA is increasingly recognized as a promising... circulating miRNAs as diagnostic, prognostic or predictive biomarkers in breast cancer management. Micro-splashing by drop impacts Thoroddsen, Sigurdur T; Takehara, Kohsei; Etoh, Takeharu Goji We use ultra-high-speed video imaging to observe directly the earliest onset of prompt splashing when a drop impacts onto a smooth solid surface. We capture the start of the ejecta sheet travelling along the solid substrate and show how it breaks up immediately upon emergence from underneath the drop. The resulting micro-droplets are much smaller and faster than previously reported and may have gone unobserved owing to their very small size and rapid ejection velocities, which approach 100 m s⁻¹ for typical impact conditions of large rain drops. We propose a phenomenological mechanism which predicts the velocity and size distribution of the resulting microdroplets. We also observe azimuthal undulations which may help promote the earliest breakup of the ejecta. This instability occurs in the cusp in the free surface where the drop surface meets the radially ejected liquid sheet. © 2012 Cambridge University Press.
MicroRNA in Human Glioma Li, Mengfeng, E-mail: [email protected] [Key Laboratory of Tropical Disease Control (Sun Yat-sen University), Chinese Ministry of Education, Guangzhou 510080 (China); Department of Microbiology, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080 (China)]; Li, Jun [Key Laboratory of Tropical Disease Control (Sun Yat-sen University), Chinese Ministry of Education, Guangzhou 510080 (China); Department of Biochemistry, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080 (China)]; Liu, Lei; Li, Wei [Key Laboratory of Tropical Disease Control (Sun Yat-sen University), Chinese Ministry of Education, Guangzhou 510080 (China); Department of Microbiology, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080 (China)]; Yang, Yi [Key Laboratory of Tropical Disease Control (Sun Yat-sen University), Chinese Ministry of Education, Guangzhou 510080 (China); Department of Pharmacology, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080 (China)]; Yuan, Jie [Key Laboratory of Tropical Disease Control (Sun Yat-sen University), Chinese Ministry of Education, Guangzhou 510080 (China); Key Laboratory of Functional Molecules from Oceanic Microorganisms (Sun Yat-sen University), Department of Education of Guangdong Province, Guangzhou 510080 (China)] Glioma represents a serious health problem worldwide. Despite advances in surgery, radiotherapy, chemotherapy, and targeting therapy, the disease remains one of the most lethal malignancies in humans, and new approaches to improvement of the efficacy of anti-glioma treatments are urgently needed. Thus, new therapeutic targets and tools should be developed based on a better understanding of the molecular pathogenesis of glioma. In this context, microRNAs (miRNAs), a class of small, non-coding RNAs, play a pivotal role in the development of the malignant phenotype of glioma cells, including cell survival, proliferation, differentiation, tumor angiogenesis, and stem cell generation. This review will discuss the biological functions of miRNAs in human glioma and their implications in improving clinical diagnosis, prediction of prognosis, and anti-glioma therapy. MicroRNA Implication in Cancer Iker BADIOLA MicroRNAs (miRNAs) are a new class of posttranscriptional regulators. These small non-coding RNAs regulate the expression of target mRNA transcripts and are linked to several human diseases such as Alzheimer's disease, cancer and heart disease.
But it is cancer that has seen the largest number of studies of miRNAs linked to disease progression. In recent years, deregulated miRNA patterns have been reported in malignant cells, disrupting the control of proliferation, differentiation and apoptosis. The evidence of specific miRNAs deregulated in particular cancer types has turned miRNAs into possible biomarkers and therapeutic targets. The specific miRNA patterns deregulated in particular cancer cell types open new opportunities for diagnosis and therapy. Hierarchical Micro-Nano Coatings by Painting Kirveslahti, Anna; Korhonen, Tuulia; Suvanto, Mika; Pakkanen, Tapani A. In this paper, the wettability properties of coatings with hierarchical surface structures and low surface energy were studied. Hierarchically structured coatings were produced by using hydrophobic fumed silica nanoparticles and polytetrafluoroethylene (PTFE) microparticles as additives in polyester (PES) and polyvinyldifluoride (PVDF). These particles created hierarchical micro-nano structures on the paint surfaces and lowered or supported the already low surface energy of the paint. Two standard application techniques for paint application were employed and the presented coatings are suitable for mass production and use on large surface areas. By regulating the particle concentrations, it was possible to modify the wettability properties gradually. Highly hydrophobic surfaces were achieved, with a highest contact angle of 165°. Dynamic contact angle measurements were carried out for a set of selected samples and low hysteresis was obtained. The produced coatings possessed long-lasting durability in air and in underwater conditions. MEMS tunable grating micro-spectrometer Tormen, Maurizio; Lockhart, R.; Niedermann, P.; Overstolz, T.; Hoogerwerf, A.; Mayor, J.-M.; Pierer, J.; Bosshard, C.; Ischer, R.; Voirin, G.; Stanley, R. P. The interest in MEMS-based micro-spectrometers is increasing due to their potential in terms of flexibility as well as cost, low mass, small volume and power savings. This interest, especially in the near-infrared and mid-infrared, ranges from planetary exploration missions to astronomy, e.g. the search for extrasolar planets, as well as to many other terrestrial fields of application such as industrial quality and surface control, chemical analysis of soil and water, detection of chemical pollutants, exhaust gas analysis, food quality control, and process control in pharmaceuticals, to name a few. A compact MEMS-based spectrometer for near-infrared and mid-infrared operation has been conceived, designed and demonstrated. The design, based on a tunable MEMS blazed grating developed in the past at CSEM [1], achieves state-of-the-art results in terms of spectral resolution, operational wavelength range, light throughput, overall dimensions, and power consumption. Autonomous Chemical Vapour Detection by Micro UAV Kent Rosser The ability to remotely detect and map chemical vapour clouds in open air environments is a topic of significant interest to both defence and civilian communities. In this study, we integrate a prototype miniature colorimetric chemical sensor developed for methyl salicylate (MeS), as a model chemical vapour, into a micro unmanned aerial vehicle (UAV), and perform flights through a raised MeS vapour cloud.
Our results show that the system is capable of detecting MeS vapours at low ppm concentrations in real-time flight and rapidly sending this information to users by on-board telemetry. Further, the results also indicate that the sensor is capable of distinguishing "clean" air from "dirty" air multiple times per flight, allowing us to look towards autonomous cloud mapping and source localization applications. Further development will focus on a broader range of integrated sensors, increased autonomy of detection and improved engineering of the system. Micro-prudentiality and financial stability Cristian Ionescu Given the high importance of issues related to financial instability in modern economies (in their financial, economic and social aspects), it is necessary to analyze the microeconomic components that determine macroeconomic fluctuations and result in visible financial instability. Thus, this paper aims to analyze the following aspects: financial fragility, as a measure of financial instability at the microeconomic level; micro-prudential regulation; and microeconomic reform measures, which address problems related to capital, liquidity, risk management and supervision, and market discipline. All of these are integrated into the international Basel III framework of the Bank for International Settlements regulations. In addition, the manner and timing of the Basel III implementation of the capital- and liquidity-related measures are very important. The paper also aims to analyze the inter-connections and the compromises between capital and liquidity, trying to understand how the two are connected. Collaboration potentials in micro and macro politics of audience creativity Brites, Maria José; Chimirri, Niklas Alexander; Amaral, Inês In our stakeholder consultation following up on trends concerning the micro and macro politics of audience action, we explore the potential impact of audiences' micro-participation and connection to macro-actions. We address this issue taking into consideration intrinsic continuities and discontinuities between academia and the stakeholders' perspectives. Our findings continue to emphasise the • (dis)connections between micro and macro actions • a technological appeal for action • collaboration potentials between academia and other stakeholders. Mechanical Properties of Plug Welds after Micro-Jet Cooling Hadryś D. The new technology of micro-jet welding could be regarded as a new way to improve the mechanical properties of plug welds. The main purpose of this paper was to analyze the mechanical properties of plug welds made by the MIG welding method with micro-jet cooling. The main approach was a comparison of plug welds made by MIG welding with micro-jet cooling and plug welds made by the ordinary MIG welding method. This is interesting for steel because a higher amount of acicular ferrite (AF) in the weld metal deposit (WMD) is obtained with MIG welding with micro-jet cooling than with ordinary MIG welding. This article presents the influence of the cooling medium and the number of micro-jet streams on the mechanical properties of the welded joint.
Mechanical properties were described by the force necessary to destroy the welded joint. Study on Boiling Heat Transfer Phenomenon in Micro-channels Jeong, Namgyun [Inha Technical College, Incheon (Korea, Republic of)] Recently, efficient heat dissipation has become necessary because of the miniaturization of devices, and research on boiling in micro-channels has attracted attention. However, in the case of micro-channels, the friction coefficient and heat transfer characteristics are different from those in macro-channels. This leads to large errors in the micro-scale results when compared to correlations derived from the macro scale. In addition, due to the complexity of the mechanism, the boiling phenomenon in micro-channels cannot be approached only by experimental and theoretical methods. Therefore, numerical methods should be utilized as well, to supplement these methods. However, most numerical studies have been conducted on macro-channels. In this study, we applied the lattice Boltzmann method, proposed as an alternative numerical tool, to simulate the boiling phenomenon in the micro-channel, and predicted the bubble growth process in the channel. Micro-combustion calorimetry employing a Calvet heat flux calorimeter Rojas-Aguilar, Aaron; Valdes-Ordonez, Alejandro Two micro-combustion bombs developed from a high-pressure stainless steel vessel have been adapted to a Setaram C80 Calvet calorimeter. The constant of each micro-bomb was determined by combustions of benzoic acid NIST 39j, giving for the micro-combustion bomb in the measurement sensor k_m = (1.01112 ± 0.00054) and for the micro-combustion bomb in the reference sensor k_r = (1.00646 ± 0.00059), which means an uncertainty of less than 0.06 per cent for the calibration. The experimental methodology for obtaining combustion energies of organic compounds with a precision also better than 0.06 per cent is described by applying this micro-combustion device to the measurement of the enthalpy of combustion of succinic acid, giving Δ_cH°_m(cr, T = 298.15 K) = -(1492.89 ± 0.77) kJ·mol⁻¹. Micro-dosing for early biokinetic studies in humans Stenstroem, K.; Sydoff, M.; Mattsson, S. Micro-dosing is a new concept in drug development that, if implemented in the pharmaceutical industry, would mean that new drugs can be tested earlier in humans than is done today. The human micro-dosing concept, or 'Phase 0', may offer improved candidate selection, reduced failure rates in the drug development line and a reduction in the use of laboratory animals in early drug development, factors which will help to speed up drug development and also reduce its costs. Micro-dosing utilises sub-pharmacological amounts of the substance to open opportunities for early studies in man. Three technologies are used for micro-dosing: accelerator mass spectrometry (AMS), positron emission tomography and liquid chromatography-tandem mass spectrometry. This paper focuses on the principle of AMS and discusses the current status of micro-dosing with AMS. (authors) Micro-PIXE in plant sciences Mesjasz-Przybylowicz, J.; Przybylowicz, W.J. Studies of the role played by elements in fundamental processes in physiology, nutrition, elemental deficiency and toxicity as well as environmental pollution require accurate, quantitative methods with good spatial resolution.
The problem of proper measurement of elemental balances and elemental transfers between various levels of biological organisation (from abiotic to biotic systems; along the food chains; within organs and cells) becomes essential for understanding the mechanisms influencing the selection, interaction, distribution and transport of elements. Highly sensitive techniques for bulk elemental analysis are mostly used in these investigations. These techniques usually offer adequate sensitivity, but without spatial resolution. On the other hand, advanced studies of elemental distribution at a cellular level are mostly conducted using techniques with high spatial resolution, but low sensitivity. Ideally, these studies should be conducted on organs and tissues at sizes down to the cellular and sub-cellular level. This applies to, e.g., future directions in ionomics and metallomics and opens up new, exciting possibilities for studies of the role of trace metals. Micro-PIXE has been applied in plant sciences for more than thirty years and has reached a high level of maturity. It is one of the few microanalytical, multielemental techniques capable of quantitative studies of elemental distribution at the ppm level, with the ability to perform quantitative elemental mapping and easy quantification of data extracted from selected micro-areas. Preparation of biological specimens is undoubtedly the crucial and most difficult part of the analysis, and only cryotechniques are presently recommended for all types of microanalytical studies. Established sample preparation protocols will be presented. Most results are obtained for cryofixed and freeze-dried material, but analysis of samples in the frozen-hydrated state brings important advantages. MicroRNAs and drug addiction Paul J Kenny Drug addiction is considered a disorder of neuroplasticity in brain reward and cognition systems resulting from aberrant activation of gene expression programs in response to prolonged drug consumption. Noncoding RNAs are key regulators of almost all aspects of cellular physiology. MicroRNAs (miRNAs) are small (~21-23 nucleotide) noncoding RNA transcripts that regulate gene expression at the post-transcriptional level. Recently, microRNAs were shown to play key roles in the drug-induced remodeling of brain reward systems that likely drives the emergence of addiction. Here, we review evidence suggesting that one particular miRNA, miR-212, plays a particularly prominent role in vulnerability to cocaine addiction. We review evidence showing that miR-212 expression is increased in the dorsal striatum of rats that show compulsive-like cocaine-taking behaviors. Increases in miR-212 expression appear to protect against cocaine addiction, as virus-mediated striatal miR-212 over-expression decreases cocaine consumption in rats. Conversely, disruption of striatal miR-212 signaling using an antisense oligonucleotide increases cocaine intake. We also review data that identify two mechanisms by which miR-212 may regulate cocaine intake. First, miR-212 has been shown to amplify striatal CREB signaling through a mechanism involving activation of Raf1 kinase. Second, miR-212 was also shown to regulate cocaine intake by repressing striatal expression of methyl CpG binding protein 2 (MeCP2), consequently decreasing protein levels of brain-derived neurotrophic factor (BDNF). The concerted actions of miR-212 on striatal CREB and MeCP2/BDNF activity greatly attenuate the motivational effects of cocaine.
These findings highlight the unique role of miRNAs in simultaneously controlling multiple signaling cascades implicated in addiction. microRNAs and lipid metabolism Aryal, Binod; Singh, Abhishek K.; Rotllan, Noemi; Price, Nathan; Fernández-Hernando, Carlos Purpose of review: Work over the last decade has identified the important role of microRNAs (miRNAs) in regulating lipoprotein metabolism and associated disorders including metabolic syndrome, obesity and atherosclerosis. This review summarizes the most recent findings in the field, highlighting the contribution of miRNAs in controlling low-density lipoprotein (LDL) and high-density lipoprotein (HDL) metabolism. Recent findings: A number of miRNAs have emerged as important regulators of lipid metabolism, including miR-122 and miR-33. Work over the last two years has identified additional functions of miR-33, including the regulation of macrophage activation and mitochondrial metabolism. Moreover, it has recently been shown that miR-33 regulates vascular homeostasis and cardiac adaptation in response to pressure overload. In addition to miR-33 and miR-122, recent GWAS have identified single nucleotide polymorphisms (SNPs) in the proximity of miRNA genes associated with abnormal levels of circulating lipids in humans. Several of these miRNAs, such as miR-148a and miR-128-1, target important proteins that regulate cellular cholesterol metabolism, including the low-density lipoprotein receptor (LDLR) and the ATP-binding cassette A1 (ABCA1). Summary: microRNAs have emerged as critical regulators of cholesterol metabolism and promising therapeutic targets for treating cardiometabolic disorders including atherosclerosis. Here, we discuss the recent findings in the field, highlighting the novel mechanisms by which miR-33 controls lipid metabolism and atherogenesis and the identification of novel miRNAs that regulate LDL metabolism. Finally, we summarize the recent findings that identified miR-33 as an important non-coding RNA that controls cardiovascular homeostasis independent of its role in regulating lipid metabolism. PMID:28333713 The micro-habitat methodology. Application protocols Sabaton, C; Valentin, S; Souchon, Y A strong need has been felt for guidelines to help various entities in applying the micro-habitat methodology, particularly in impact studies on hydroelectric installations. CEMAGREF and Electricite de France have separately developed two protocols with five major steps: reconnaissance of the river; selection of representative units to be studied in greater depth; morpho-dynamic measurements at one or more rates of discharge and hydraulic modeling; coupling of hydraulic and biological models; calculation of habitat-quality scores for fish; analysis of results. The two approaches give very comparable results and are essentially differentiated by the hydraulic model used. CEMAGREF uses a one-dimensional model requiring measurements at only one discharge rate. Electricite de France uses a simplified model based on measurements at several rates of discharge. This approach is possible when discharge can be controlled in the study area during data acquisition, as is generally the case downstream of hydroelectric installations. The micro-habitat methodology is now a fully operational tool with which to study changes in fish habitat quality in relation to varying discharge.
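The habitat-quality scoring step in these protocols is commonly expressed as a weighted usable area: each surveyed cell receives a composite suitability index from depth, velocity and substrate preference curves, and the weighted cell areas are summed. The sketch below uses that standard PHABSIM-style formulation with invented preference curves and cell data; it is not the CEMAGREF or Electricite de France protocol itself.

    # Weighted usable area: sum over cells of area * SI(depth) * SI(velocity) * SI(substrate).
    # Preference curves and cell measurements below are placeholders.
    import numpy as np

    def suitability(value, xs, ys):
        """Interpolate a 0-1 preference curve at the measured value."""
        return float(np.interp(value, xs, ys))

    def weighted_usable_area(cells, curves):
        wua = 0.0
        for area, depth, velocity, substrate in cells:
            si = (suitability(depth, *curves["depth"])
                  * suitability(velocity, *curves["velocity"])
                  * suitability(substrate, *curves["substrate"]))
            wua += area * si
        return wua

    curves = {  # (measured value, preference 0-1) pairs - hypothetical trout-like curves
        "depth":     ([0.0, 0.2, 0.6, 1.5], [0.0, 0.6, 1.0, 0.3]),
        "velocity":  ([0.0, 0.3, 0.8, 2.0], [0.2, 1.0, 0.7, 0.0]),
        "substrate": ([0.0, 2.0, 4.0, 6.0], [0.1, 0.5, 1.0, 0.8]),
    }
    cells = [(12.0, 0.4, 0.5, 3.0), (8.0, 0.9, 1.1, 5.0)]   # (m2, m, m/s, size class)
    print(f"WUA = {weighted_usable_area(cells, curves):.1f} m2")

Repeating the calculation at several discharge rates yields the WUA-versus-discharge curve used to inform instream flow choices.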
The micro-habitat methodology provides an element of assessment pertinent to the choice of instream flow to be maintained downstream of a hydroelectric installation; this information is essential when the flow characteristics (velocity, depth) and the nature of the river bed are the preponderant factors governing habitat suitability for trout or salmon. The ultimate decision must nonetheless take into account any other potentially limiting factors for the biocenoses on the one hand, and the target water use objectives on the other. In many cases, compromises must be found among different uses, different species and different stages in the fish development cycle. (Abstract Truncated)

Les microémulsions (Microemulsions) Bavière M. In the last fifteen years, the petroleum industry has become interested in microemulsions. Their apparent miscibility with water and hydrocarbons, as well as their original properties of stability and viscosity, make them extremely effective oil drainage fluids, even for residual oil. They are also recommended for stimulating injection or production wells. This bibliographic survey attempts first to situate microemulsions within the context of micellar solutions, then to review theories explaining their formation, structure and properties. Lastly, a description is given of an oil recovery process that has been researched in the laboratory and applied in the field during the last decade.

Social micro-credit and gender Dora Argentina Cabezas Elizondo. Social policy has been oriented towards supporting the population in extreme poverty; in official discourse, no distinction is made between men and women. In practice, however, social spending has been shaped by gender stigma: even when social policy is aimed at supporting women, it is not designed to advance gender development. For more than fifteen years, organisations have insisted that the authorities should not limit their support to women's projects that are merely an extension of domestic work, yet such projects continue to be the only ones supported. The purpose of this paper is to analyze the behavior of micro-credit and its role in public and social policy in supporting women micro-entrepreneurs. We examine how this public policy has operated in the state of Colima, where the government's social and economic policy instruments, designed to provide gender-targeted support, will be analyzed.
We consider the period 2000-2003, examining the orientation of these instruments and their impact in terms of self-employment and poverty reduction. Social credit schemes are treated as an instrument of the federal government for addressing unemployment and growing poverty through the implementation of programs that promote self-employment. We also discuss the experience of these financing schemes at the second level of government, the state, based on an analysis of micro-loans granted during the period 2000-2003 in Colima, Mexico.

Autism: The Micro-Movement Perspective Elizabeth B Torres. The current assessment of behaviors in the inventories used to diagnose autism spectrum disorders (ASD) focuses on observation and discrete categorization. Behaviors require movements, yet measurements of physical movements are seldom included. Their inclusion, however, could provide an objective characterization of behavior to help unveil interactions between the peripheral and the central nervous systems. Such interactions are critical for the development and maintenance of spontaneous autonomy, self-regulation and voluntary control. At present, current approaches cannot deal with the heterogeneous, dynamic and stochastic nature of development. Accordingly, they leave no avenues for real-time or longitudinal assessments of change in a coping system continuously adapting and developing compensatory mechanisms. We offer a new unifying statistical framework to reveal re-afferent kinesthetic features of the individual with ASD. The new methodology is based on the non-stationary stochastic patterns of minute fluctuations (micro-movements) inherent to our natural actions. Such patterns of behavioral variability provide re-entrant sensory feedback contributing to the autonomous regulation and coordination of the motor output. From an early age, this feedback supports centrally driven volitional control and fluid, flexible transitions between intentional and spontaneous behaviors. We show that in ASD there is a disruption in the maturation of this form of proprioception. Despite this disturbance, each individual has unique adaptive compensatory capabilities that we can unveil and exploit to evoke faster and more accurate decisions. By measuring kinesthetic re-afference in tandem with stimulus variations, we can detect changes in their micro-movements indicative of a more predictive and reliable kinesthetic percept. Our methods address the heterogeneity of ASD with a personalized approach grounded in the inherent sensory-motor abilities that the individual has

Cogeneration. Energy efficiency - Micro-cogeneration; La Cogeneration. Efficacite Energetique - Micro-cogeneration Boudellal, M. Depletion of natural resources and of non-renewable energy sources, pollution, the greenhouse effect, increasing energy needs: energy efficiency is a major topic implying a better use of the available primary energies. Faced with these challenges, cogeneration - i.e. the joint production of electricity and heat - and, at a local or individual scale, micro-cogeneration can appear as interesting alternatives. This book presents in a detailed manner: the present-day and future energy stakes; the different types of micro-cogeneration units (internal combustion engines, Stirling engine, fuel cell...) and the available models or models at the design stage; the different usable fuels (natural gas, wood, biogas...); the optimization rules of a facility; the costs and amortizations; and some examples of facilities. (J.S.)
Statistical Disclosure Control for Micro-Data Using the R Package sdcMicro Matthias Templ. The R package sdcMicro serves as an easy-to-handle, object-oriented S4 class implementation of SDC methods to evaluate and anonymize confidential micro-data sets. It includes all popular disclosure risk and perturbation methods. The package performs automated recalculation of frequency counts, individual and global risk measures, information loss and data utility statistics after each anonymization step. All methods are highly optimized in terms of computational cost to be able to work with large data sets. Reporting facilities that summarize the anonymization process can also be easily used by practitioners. We describe the package and demonstrate its functionality with a complex household survey test data set that has been distributed by the International Household Survey Network.

The micro and meso-porous materials. Characterization; Les materiaux micro et mesoporeux. Caracterisation Thibault-Starzyk, F. The micro- and meso-porous materials called zeolites are very important in the modern chemical industry and in petrochemistry. This book deals in particular with the study and the characterization of zeolites. Its aim is to give generalist chemists the tools to approach these particular materials experimentally. The main methods of study and characterization are gathered in eight chapters, and the authors stress the specificities due to the porous system: structural analysis by diffraction methods; infrared spectroscopy; NMR; micro-calorimetry; adsorption thermodynamics; temperature-programmed methods; modeling; and reactivity: kinetics and chemical engineering. This book appeals to students, engineers or researchers without previous knowledge of these materials, but having a bachelor's or master's degree in general chemistry. (O.M.)

Confocal micro-PIV measurement of droplet formation in a T-shaped micro-junction Oishi, M; Kinoshita, H; Fujii, T; Oshima, M. This paper aims to investigate the mechanism of microdroplet formation using a 'multicolor confocal micro particle image velocimetry (PIV)' technique. The present system can measure the dynamical behavior of multiphase flow separately and simultaneously. It also makes it possible to identify the interactions between two immiscible fluids. We have applied this system to measure water droplet formation at a micro T-shaped junction. We have also succeeded in dispersing fluorescent tracer particles into both phases. The interaction between the internal flow of the to-be-dispersed water phase and that of the continuous oil phase is measured as a liquid-liquid multiphase flow. As a result of the PIV measurement and interface scanning, the relationship between the flow structure of each phase and the interface shape is clarified. It indicates that the gap between the tip of the to-be-dispersed phase and the capillary wall, and the interface area, play an important role in the flow structure and the shear stress on the interface.

Micro-tubular flame-assisted fuel cells for micro-combined heat and power systems Milcarek, Ryan J.; Wang, Kang; Falkenstein-Smith, Ryan L.; Ahn, Jeongmin. Currently the role of fuel cells in future power generation is being examined, tested and discussed. However, implementing systems is more difficult because of sealing challenges, slow start-up and complex thermal management and fuel processing.
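As a rough illustration of the frequency-count bookkeeping behind the disclosure-risk measures that the sdcMicro entry above refers to, the following Python sketch flags records whose quasi-identifier combination is too rare. It is only a conceptual analogue (sdcMicro itself is an R package with far richer risk and perturbation methods), and the toy records are invented:

    # Conceptual sketch of frequency-count disclosure risk; NOT the sdcMicro API.
    from collections import Counter

    records = [
        # (region, gender, age_group) act as quasi-identifiers in a micro-data set
        ("north", "f", "30-39"),
        ("north", "f", "30-39"),
        ("south", "m", "50-59"),
        ("south", "f", "20-29"),
    ]

    def sample_frequencies(rows):
        """Count how many records share each quasi-identifier combination."""
        return Counter(rows)

    def at_risk(rows, k=2):
        """Flag records whose combination occurs fewer than k times (k-anonymity check)."""
        freq = sample_frequencies(rows)
        return [row for row in rows if freq[row] < k]

    print(sample_frequencies(records))   # unique combinations carry high re-identification risk
    print(at_risk(records, k=2))         # records violating 2-anonymity before any perturbation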
A novel furnace system with a flame-assisted fuel cell is proposed that combines the thermal management and fuel processing systems by utilizing fuel-rich combustion. In addition, the flame-assisted fuel cell furnace is a micro-combined heat and power system, which can produce electricity for homes or businesses, providing resilience during power disruption while still providing heat. A micro-tubular solid oxide fuel cell achieves a significant performance of 430 mW cm⁻² operating in a model fuel-rich exhaust stream.

The Impact of Mobile Payments on the Success and Growth of Micro ... The pace of transformation in the micro business sector has sped up with more micro ... However, there are only a handful of studies on the application of digital ... services by micro businesses to enhance their success and growth.

Auction Mechanism of Micro-Grid Project Transfer. Micro-grid project transfer is the primary issue of micro-grid development. The efficiency and quality of micro-grid project transfer directly affect the quality of micro-grid project construction and development, which is very important for the sustainable development of the micro-grid. This paper constructs a multi-attribute auction model of micro-grid project transfer which reflects the characteristics of the micro-grid system and the interests of stakeholders, calculates the optimal bidding strategy, and analyzes the influence of relevant factors on the auction equilibrium through a multi-stage dynamic game with complete information, together with a numerical simulation analysis. Results indicate that the optimal strategy of the auction mechanism is positively related to power quality, energy storage quality, and carbon emissions. Unlike the previous lowest-price-wins mechanism, the auction mechanism formed in this paper emphasizes that the energy supplier offering the best overall combination of power quality, energy storage quality, carbon emissions, and price wins the auction, with both the project owners and the energy suppliers maximizing their benefits under this mechanism. The auction mechanism is effective because it is in line with the principles of individual rationality and incentive compatibility. In addition, the number of energy suppliers participating in the auction and the cost of the previous auction are positively related to the auction equilibrium, and both adjust the equilibrium results of the auction. At the same time, the utilization rate of renewable energy and the comprehensive utilization of energy also have a positive impact on the auction equilibrium. Finally, this paper puts forward a series of policy suggestions about micro-grid project auctions. The research in this paper is of great significance for improving the auction quality of micro-grid projects and promoting the sustainable development of micro-grids.

Fabrication and Study of Micro Monolithic Tungsten Ball Tips for Micro/Nano-CMM Probes Ruijun Li. Micro ball tips with high precision, small diameter, and high-stiffness stems are required to measure microstructures with high aspect ratios. Existing ball tips cannot meet these demands because of their shortcomings. This study used an arc-discharge melting method to fabricate a micro monolithic tungsten ball tip on a tungsten stylus. The principles of arc discharge and the surface-tension phenomenon are introduced. The experimental setup was designed and established.
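A minimal Python sketch of the general multi-attribute (scoring) auction idea behind the micro-grid project-transfer entry above: the winning supplier offers the best weighted combination of quality attributes and price rather than simply the lowest price. The attribute names, weights and bids are hypothetical and are not taken from the paper's model:

    # Hedged sketch of a multi-attribute scoring auction; weights and bids are invented.
    WEIGHTS = {"power_quality": 0.3, "storage_quality": 0.25,
               "carbon_reduction": 0.2, "price": 0.25}

    def score(bid):
        """Weighted score: higher quality raises the score, higher (normalized) price lowers it."""
        return (WEIGHTS["power_quality"]    * bid["power_quality"]
              + WEIGHTS["storage_quality"]  * bid["storage_quality"]
              + WEIGHTS["carbon_reduction"] * bid["carbon_reduction"]
              - WEIGHTS["price"]            * bid["price_norm"])

    bids = {
        "supplier_A": {"power_quality": 0.9, "storage_quality": 0.7,
                       "carbon_reduction": 0.8, "price_norm": 0.6},
        "supplier_B": {"power_quality": 0.6, "storage_quality": 0.9,
                       "carbon_reduction": 0.5, "price_norm": 0.4},
    }

    winner = max(bids, key=lambda name: score(bids[name]))
    print(winner, round(score(bids[winner]), 3))   # highest overall score wins, not lowest price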
For the ball-tip fabrication, appropriate process parameters such as impulse voltage, electro-discharge time, and discharge gap were determined. Experimental results showed that a ball tip of approximately 60 µm in diameter, with less than 0.6 µm roundness error and 0.6 µm center offset, could be realized on a 100 µm-diameter tungsten wire. The fabricated micro ball tip was installed on a homemade probe and touched by high-precision gauge blocks in different directions. A repeatability of 41 nm (k = 2) was obtained. Several interesting phenomena in the ball-forming process were also discussed. The proposed method could be used to fabricate a monolithic probe ball tip, which is necessary for measuring microstructures.

Micro pan-tilter and focusing mechanism; Micro shikakuyo shisen henko kiko. The micro pan-tilter and focusing mechanism can adjust focus while freely changing the visual axis, using an ultra-small CCD micro camera 9.2 mm in diameter and 27 mm in length, and contains a camera control unit (CCU) within this size. Many functions of a camera with a tripod head are concentrated into a size one tenth that of conventional cameras. The mechanism has been developed for a micro robot to inspect the interior of small pipes in devices such as heat exchangers in a power plant. Future applications are expected in medical endoscopes and portable information devices. This mechanism observes a forward distant view and a side-wall short-distance view with a maximum resolution of 20 µm through the coordinated operation of three high-torque static power drive motors (with a minimum outer diameter of 2.5 mm) fabricated using micromachine technology. Auto-focusing is also possible. The hybrid-IC built-in CCU has been realized using three-dimensional high-density mounting technology. Part of this research and development was performed under the industrial science and technology research and development program established by the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. (translated by NEDO)

Preparation, characterization and nonlinear absorption studies of cuprous oxide nanoclusters, micro-cubes and micro-particles Sekhar, H.; Narayana Rao, D. Cuprous oxide nanoclusters, micro-cubes and micro-particles were successfully synthesized by reducing a copper(II) salt with ascorbic acid in the presence of sodium hydroxide via a co-precipitation method. X-ray diffraction and FTIR studies revealed the formation of a pure single-phase cubic structure. Raman and EPR spectral studies show the presence of CuO in the as-synthesized Cu2O powders. Transmission electron microscopy and field emission scanning electron microscopy data revealed that the morphology evolves from nanoclusters to micro-cubes and micro-particles with increasing NaOH concentration. Linear optical measurements show that the absorption peak maximum shifts towards the red as the morphology changes from nanoclusters to micro-cubes and micro-particles. The nonlinear optical properties were studied using the open-aperture Z-scan technique with 532 nm, 6 ns laser pulses. The samples exhibited both saturable and reverse saturable absorption. Due to confinement effects (enhanced band gap), we observed an enhanced nonlinear absorption coefficient (β) for the nanoclusters compared to the micro-cubes and micro-particles.

Micro structured reactors for synthesis/decomposition of hazardous chemicals. Challenging prospects for micro structured reaction architectures (4) Rebrov, E.V.; Croon, de M.H.J.M.; Schouten, J.C. A review.
This paper completes a series of four publications dealing with the different aspects of the applications of micro reactor technology. This article focuses on the application of micro structured reactors in processes for the synthesis/decomposition of hazardous chemicals, such as unsym.

Micro-CTvlab: A web based virtual gallery of biological specimens using X-ray microtomography (micro-CT). Keklikoglou, Kleoniki; Faulwetter, Sarah; Chatzinikolaou, Eva; Michalakis, Nikitas; Filiopoulou, Irene; Minadakis, Nikos; Panteri, Emmanouela; Perantinos, George; Gougousis, Alexandros; Arvanitidis, Christos. During recent years, X-ray microtomography (micro-CT) has seen increasing use in biological research areas such as functional morphology, taxonomy, evolutionary biology and developmental research. Micro-CT is a technology which uses X-rays to create sub-micron resolution images of external and internal features of specimens. These images can then be rendered in a three-dimensional space and used for qualitative and quantitative 3D analyses. However, the online exploration and dissemination of micro-CT datasets are rarely made available to the public due to their large size and a lack of dedicated online platforms for the interactive manipulation of 3D data. Here, the development of a virtual micro-CT laboratory (Micro-CTvlab) is described, which can be used by everyone who is interested in digitisation methods and biological collections and aims at making the micro-CT data exploration of natural history specimens freely available over the internet. The Micro-CTvlab offers the user virtual image galleries of various taxa which can be displayed and downloaded through a web application. With a few clicks, accurate, detailed and three-dimensional models of species can be studied and virtually dissected without destroying the actual specimen. The data and functions of the Micro-CTvlab can be accessed either on a normal computer or through a dedicated version for mobile devices.

Process Chain for the Manufacture of Polymeric Tubular Micro-Components and the "POLYTUBES Micro-Factory" Concept Qin, Yi; Perzon, Erik; Chronakis, Ioannis S. The paper presents a process chain for the shaping of polymeric tubular micro-components for volume production, as well as a concept for the integration of the developed processes and modular machines onto a platform to form a "POLYTUBES Micro-Factory", resulting from the Europ...

Coherent Synchrotron-Based Micro-Imaging Employed for Studies of Micro-Gap Formation in Dental Implants Rack, T.; Stiller, M.; Nelson, K.; Zabler, S.; Rack, A.; Riesemeier, H.; Cecilia, A. Biocompatible materials such as titanium are regularly applied in oral surgery. Titanium-based implants for the replacement of missing teeth demand high mechanical precision in order to minimize micro-bacterial leakage, especially when two-piece concepts are used. Synchrotron-based hard x-ray radiography, unlike conventional laboratory radiography, allows high spatial resolution in combination with high contrast even when micro-sized features in such highly attenuating objects are visualized. Therefore, micro-gap formation at interfaces in two-piece dental implants can be studied with the sample under different mechanical loads. We show the existence of micro-gaps in implants with conical connections and study the mechanical behavior of the mating zone of conical implants during loading.
The micro-gap is a potential source of implant failure, i.e., bacterial leakage, which can be a stimulus for an inflammatory process.

The evaluation of the micro-tracks and micro-dimples on the tribological characteristics of thrust ball bearings. Amanov, Auezhan; Pyoun, Young-Shik; Cho, In-Shik; Lee, Chang-Soon; Park, In-Gyu. One of the primary remedies for tribological problems is surface modification. The reduction of the friction between the ball and the raceway of bearings is a very important goal in the development of bearing technology. Low friction has a positive effect in terms of extended fatigue life, avoidance of temperature rise, and prevention of premature bearing failure. Therefore, this research sought to investigate the effects of micro-tracks and micro-dimples on the tribological characteristics at the contact point between the ball and the raceway of thrust ball bearings (TBBs). The ultrasonic nanocrystal surface modification (UNSM) technology was applied at different intervals (feed rates) to the TBB raceway surface to create micro-tracks and micro-dimples. The friction coefficient after UNSM at 50 µm intervals showed marked sensitivity and a significant reduction of 30%. In this study, the results showed that more micro-dimples yield a lower friction coefficient.

Porous media modeling and micro-structurally motivated material moduli determination via the micro-dilatation theory Jeong, J.; Ramézani, H.; Sardini, P.; Kondo, D.; Ponson, L.; Siitari-Kauppi, M. In the present contribution, porous material modeling and the determination of micro-structural material parameters are scrutinized via the micro-dilatation theory. The main goal is to take advantage of the micro-dilatation theory, which belongs to the family of generalized continuum media. In the first stage, the thermodynamic laws are entirely revised to reach the energy balance relation using three variables, deformation, porosity change and its gradient, underlying the porous media as described in the micro-dilatation theory, also called void elasticity. Two experiments on cement mortar specimens are performed in order to highlight the material parameters related to the pore structure. The shrinkage due to CO2 carbonation, the porosity and its gradient are calculated. The extracted values are verified via the 14C-PMMA radiographic imaging method. The modeling of the swelling phenomenon of Delayed Ettringite Formation (DEF) is then studied; this is performed by applying the crystallization pressure within the micro-dilatation theory.

High precision micro-scale Hall effect characterization method using in-line micro four-point probes Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong. Accurate characterization of ultra shallow junctions (USJ) is important in order to understand the principles of junction formation and to develop the appropriate implant and annealing technologies. We investigate the capabilities of a new micro-scale Hall effect measurement method where the Hall effect is measured with collinear micro four-point probes (M4PP). We derive the sensitivity to electrode position errors and describe a position error suppression method to enable rapid, reliable Hall effect measurements with just two measurement points. We show, with both Monte Carlo simulations and experimental measurements, that the repeatability of a micro-scale Hall effect measurement is better than 1%.
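For orientation, the quantities such an M4PP Hall measurement ultimately delivers can be related through the standard textbook Hall expressions. The Python sketch below uses illustrative numbers and does not reproduce the paper's position-error-suppression method:

    # Standard Hall relations only; all input values are illustrative, not measured data.
    Q = 1.602e-19  # elementary charge [C]

    def hall_parameters(sheet_resistance_ohm, hall_voltage_v, current_a, b_field_t):
        """Return (sheet carrier density [cm^-2], Hall mobility [cm^2/Vs])."""
        n_sheet = current_a * b_field_t / (Q * hall_voltage_v)   # carriers per m^2
        mobility = 1.0 / (Q * n_sheet * sheet_resistance_ohm)    # m^2/Vs
        return n_sheet * 1e-4, mobility * 1e4                    # convert to cm-based units

    n_s, mu = hall_parameters(sheet_resistance_ohm=500.0,
                              hall_voltage_v=3.1e-6,
                              current_a=1.0e-6,
                              b_field_t=0.5)
    print(f"n_s = {n_s:.2e} cm^-2, mobility = {mu:.0f} cm^2/Vs")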
We demonstrate the ability to spatially resolve the Hall effect on the micro-scale by characterizing a USJ with a single laser stripe anneal. The micro sheet resistance variations resulting from...

Micro Cooling, Heating, and Power (Micro-CHP) and Bio-Fuel Center, Mississippi State University Louay Chamra. Initially, most micro-CHP systems will likely be designed as constant-power output or base-load systems. This implies that at some point the power requirement will not be met, or that the requirement will be exceeded. Realistically, both cases will occur within a 24-hour period. For example, in the United States, the base electrical load for the average home is approximately 2 kW while the peak electrical demand is slightly over 4 kW. If a 3 kWe micro-CHP system were installed in this situation, part of the time more energy would be provided than could be used, and part of the time more energy would be required than could be provided. Jalalzadeh-Azar [6] investigated this situation and presented a comparison of electrical- and thermal-load-following CHP systems. In his investigation he included a parametric analysis addressing the influence of the subsystem efficiencies on the total primary energy consumption, as well as an economic analysis of these systems. He found that an increase in the efficiencies of the on-site power generation and electrical equipment reduced the total monthly import of electricity. A methodology for calculating performance characteristics of different micro-CHP system components will be introduced in this article. Thermodynamic cycles are used to model each individual prime mover. The prime movers modeled in this article are a spark-ignition internal combustion engine (Otto cycle) and a diesel engine (Diesel cycle). Calculations for heat exchanger, absorption chiller, and boiler modeling are also presented. The individual component models are then linked together to calculate total system performance values. Performance characteristics that will be observed for each system include maximum fuel flow rate, total monthly fuel consumption, and system energy (electrical, thermal, and total) efficiencies. Also considered is whether or not both the required electrical and thermal loads can sufficiently be accounted for within the system

Replication performance of Si-N-DLC-coated Si micro-molds in micro-hot-embossing Saha, B; Tor, S B; Liu, E; Khun, N W; Hardt, D E; Chun, J H. Micro-hot-embossing is an emerging technology with great potential to form micro- and nano-scale patterns in polymers with high throughput and low cost. Despite its rapid progress, there are still challenges when this technology is employed, as demolding stress is usually very high due to the large friction and adhesive forces induced during the process. Surface forces are dominant parameters in micro- and nano-fabrication technologies because of the high surface-to-volume ratio of the products. This work attempted to improve the surface properties of Si micro-molds by means of silicon- and nitrogen-doped diamond-like carbon (Si-N-DLC) coatings deposited on the molds by dc magnetron co-sputtering. The bonding structure, surface roughness, surface energy, adhesive strength and tribological behavior of the coated samples were characterized with micro Raman spectroscopy, atomic force microscopy (AFM), contact angle measurement, microscratch testing and ball-on-disk sliding tribological testing, respectively. It was observed that the doping condition had a great effect on the performance of the coatings.
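The prime-mover step of the micro-CHP methodology described above starts from ideal air-standard cycle models; the following Python sketch computes ideal Otto and Diesel cycle efficiencies and a rough fuel/heat balance for a 3 kWe unit. The compression ratios, cutoff ratio and gamma are illustrative assumptions, not values from the report:

    # Ideal air-standard cycle efficiencies; parameter values are illustrative only.
    GAMMA = 1.4  # ratio of specific heats for air

    def otto_efficiency(compression_ratio):
        """Ideal air-standard Otto cycle: eta = 1 - r^(1 - gamma)."""
        return 1.0 - compression_ratio ** (1.0 - GAMMA)

    def diesel_efficiency(compression_ratio, cutoff_ratio):
        """Ideal air-standard Diesel cycle efficiency."""
        r, rc = compression_ratio, cutoff_ratio
        return 1.0 - (1.0 / r ** (GAMMA - 1.0)) * ((rc ** GAMMA - 1.0) / (GAMMA * (rc - 1.0)))

    eta_otto = otto_efficiency(9.0)
    eta_diesel = diesel_efficiency(18.0, 2.0)
    print(f"Otto (r=9):    eta = {eta_otto:.3f}")
    print(f"Diesel (r=18): eta = {eta_diesel:.3f}")

    # A 3 kWe unit at these efficiencies would need roughly this much fuel power,
    # with the balance available for heat recovery in the CHP heat exchangers.
    for eta in (eta_otto, eta_diesel):
        fuel_kw = 3.0 / eta
        print(f"fuel input ~ {fuel_kw:.1f} kW, recoverable heat bound ~ {fuel_kw - 3.0:.1f} kW")

Real engine models add combustion, heat-loss and part-load corrections, so actual efficiencies are lower; the ideal cycles only bound the electrical/thermal split that the linked component models then refine.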
The Si-N-DLC coating deposited with 5 × 10⁻⁶ m³ min⁻¹ N₂ had the lowest surface roughness and surface energy, about 1.2 nm and 38.2 × 10⁻³ N m⁻¹, respectively, while the coatings deposited with 20 × 10⁻⁶ and 25 × 10⁻⁶ m³ min⁻¹ N₂ showed the lowest friction coefficients. The uncoated and Si-N-DLC-coated Si micro-molds were tested in a micro-hot-embossing process for a comparative study of their replication performance and lifetime. The experimental results showed that the performance of the Si micro-molds was improved by the Si-N-DLC coatings, and well-defined micro-features with a height of about 100 µm were successfully fabricated in cyclic olefin copolymer (COC) sheets using the Si-N-DLC-coated micro-molds.

Micro-computed tomography assessment of human alveolar bone: bone density and three-dimensional micro-architecture. Kim, Yoon Jeong; Henkin, Jeffrey. Micro-computed tomography (micro-CT) is a valuable means to evaluate and secure information related to bone density and quality in human necropsy samples and small live animals. The aim of this study was to assess the bone density of the alveolar jaw bones in human cadavers, using micro-CT. The correlation between bone density and the three-dimensional micro-architecture of trabecular bone was evaluated. Thirty-four human cadaver jaw bone specimens were harvested. Each specimen was scanned with micro-CT at a resolution of 10.5 μm. The bone volume fraction (BV/TV) and the bone mineral density (BMD) value within a volume of interest were measured. The three-dimensional micro-architecture of trabecular bone was assessed. All the parameters in the maxilla and the mandible were compared. The variables for bone density and three-dimensional micro-architecture were analyzed for nonparametric correlation using Spearman's rho at a significance level of p < .05. The bone density and micro-architecture parameters were consistently higher in the mandible, up to 3.3 times greater than those in the maxilla. The strongest correlation was observed between BV/TV and BMD, with Spearman's rho = 0.99 (p = .01). Both BV/TV and BMD were highly correlated with all micro-architecture parameters, with Spearman's rho above 0.74 (p = .01). The two aspects of bone density obtained with micro-CT, BV/TV and BMD, are highly correlated with three-dimensional micro-architecture parameters, which represent the quality of trabecular bone. This noninvasive method may adequately enhance evaluation of the alveolar bone. © 2013 Wiley Periodicals, Inc.

Plasma assisted nitriding for micro-texturing onto martensitic stainless steels* Katoh Takahisa; Aizawa Tatsuhiko; Yamaguchi Tetsuya. Micro-texturing has grown into one of the most promising procedures for forming micro-lines, micro-dots and micro-grooves on mold-die materials and for duplicating these micro-patterns onto metallic or polymer sheets via stamping or injection molding. This application requires large-area, fine micro-texturing of martensitic stainless steel mold-die materials. A new method other than laser-machining, micro-milling or micro-EDM is awaited for further advancement of t...

Supernova Remnant Observations with Micro-X Figueroa, Enectali. Micro-X is a sounding rocket payload that combines an X-ray microcalorimeter with an imaging mirror to offer breakthrough science from high spectral resolution observations of extended X-ray sources.
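The correlation analysis in the alveolar-bone study above can be reproduced in outline with a few lines of Python; the bone-parameter values below are invented placeholders, and scipy.stats.spearmanr is a standard SciPy function rather than code from the paper:

    # Nonparametric (Spearman) correlation between bone-density and micro-architecture
    # measures; the data points are made up for illustration. Requires SciPy.
    from scipy.stats import spearmanr

    bv_tv = [0.18, 0.25, 0.31, 0.42, 0.22, 0.37]   # bone volume fraction
    bmd   = [0.21, 0.27, 0.35, 0.45, 0.24, 0.40]   # bone mineral density (g/cm^3)
    tb_th = [0.08, 0.10, 0.12, 0.16, 0.09, 0.14]   # trabecular thickness (mm)

    rho, p = spearmanr(bv_tv, bmd)
    print(f"BV/TV vs BMD:   rho = {rho:.2f}, p = {p:.3f}")

    rho, p = spearmanr(bv_tv, tb_th)
    print(f"BV/TV vs Tb.Th: rho = {rho:.2f}, p = {p:.3f}")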
The Micro-X payload has been in design and development for the last five years and is now completely built and undergoing integration; its first flight will be in November 2012 as part of our current NASA award. This four-year follow-on proposal seeks funding for: (1) analysis of the first flight data, (2) the second flight and its data analysis, (3) development of payload upgrades and launch of the third flight, and (4) third flight data analysis. The scientific payload consists of a Transition Edge Sensor (TES) microcalorimeter array at the focus of a flight-proven conical imaging mirror. Micro-X capitalizes on three decades of NASA investment in the development of microcalorimeters and X-ray imaging optics. Micro-X offers a unique combination of bandpass, collecting area, and spectral and angular resolution. The spectral resolution goal across the 0.2-3.0 keV band is 2-4 eV full width at half maximum (FWHM). The measured angular resolution of the mirror is 2.4 arcminutes half-power diameter (HPD). The effective area of the mirror, 300 square centimeters at 1 keV, is sufficient to provide observations of unprecedented quality of several astrophysical X-ray sources, even in a brief sounding rocket exposure of 300 sec. Our scientific program for this proposal will focus on supernova remnants (SNRs), whose spatial extent has made high-energy-resolution observations with grating instruments extremely challenging. X-ray observations of SNRs with microcalorimeters will enable the study of the detailed atomic physics of the plasma; the determination of temperature, turbulence, and elemental abundances; and, in conjunction with historical data, full three-dimensional mapping of the kinematics of the remnant. These capabilities will open new avenues towards understanding the

Micro-ERDA, micro-RBS and micro-PIXE techniques in the investigation of fish otoliths Huszank, R.; Simon, A.; Szilagyi, E.; Keresztessy, K.; Kovacs, I. Elemental distribution in the otolith of the freshwater fish burbot (Lota lota L.) collected in Hungary was measured with Elastic Recoil Detection Analysis (ERDA), Rutherford Backscattering Spectrometry (RBS) and, as a complementary technique, Particle-Induced X-ray Emission (PIXE) with a focussed ion beam of 1.5 × 1.5 μm² spot size. The organic- and inorganic-rich regions of the otolith are distinguished, and they are presented as hydrogen and calcium maps at depth regions of 0-70, 70-140 and 140-210 nm. The textured surface of the sample and its porosity were characterized from their effect on the RBS spectra. The oxygen and carbon PIXE elemental maps can also be used to identify the organic- and inorganic-rich regions of the otolith. The calcium map was found to be more homogeneous because the otolith structure is averaged over a larger depth. The trace elements Fe, Zn and Sr were detected only in very low concentrations by micro-PIXE.
Micro-Pressure Sensors for Future Mars Missions Catling, David C. The joint research interchange effort was directed at the following principal areas: (1) further development of NASA-Ames' Mars Micro-meteorology mission concept as a viable NASA space mission, especially with regard to the science and instrument specifications; (2) interaction with the flight team from NASA's New Millennium 'Deep-Space 2' (DS-2) mission with regard to the selection and design of micro-pressure sensors for Mars; and (3) further development of micro-pressure sensors suitable for Mars. The research work undertaken in the course of the Joint Research Interchange should be placed in the context of an ongoing planetary exploration objective to characterize the climate system on Mars. In particular, a network of small probes globally distributed on the surface of the planet has often been cited as the only way to address this particular science goal. A team from NASA Ames has proposed such a mission, called the Micrometeorology mission, or 'Micro-met' for short. Surface pressure data are all that are required, in principle, to calculate the Martian atmospheric circulation, provided that simultaneous orbital measurements of the atmosphere are also obtained. Consequently, in the proposed Micro-met mission a large number of landers would measure barometric pressure at various locations around Mars, each equipped with a micro-pressure sensor. Much of the time on the JRI was therefore spent working with the engineers and scientists concerned with Micro-met to develop this particular mission concept into a more realistic proposition.

Hybrid micromachining using a nanosecond pulsed laser and micro EDM Kim, Sanha; Chung, Do Kwan; Shin, Hong Shik; Chu, Chong Nam; Kim, Bo Hyun. Micro electrical discharge machining (micro EDM) is a well-known precision machining process that achieves micro structures of excellent quality in any conductive material. However, the slow machining speed and high tool wear are the main drawbacks of this process. Though the use of deionized water instead of kerosene as the dielectric fluid can reduce the tool wear and increase the machining speed, the material removal rate (MRR) is still low. In contrast, laser ablation using a nanosecond pulsed laser is a fast, wear-free machining process but achieves micro features of rather low quality. Therefore, the integration of these two processes can overcome their respective disadvantages. This paper reports a hybrid process combining a nanosecond pulsed laser and micro EDM for micromachining.
A novel hybrid micromachining system that combines the two discrete machining processes is introduced. The feasibility and characteristics of the hybrid machining process are then investigated in comparison to conventional EDM and laser ablation. It is verified experimentally that the machining time can be effectively reduced in both EDM drilling and milling by rapid laser pre-machining prior to micro EDM. Finally, some examples of complicated 3D micro structures fabricated by the hybrid process are shown.

Autonomous, agile micro-satellites and supporting technologies Breitfeller, E; Dittman, M D; Gaughan, R J; Jones, M S; Kordas, J F; Ledebuhr, A G; Ng, L C; Whitehead, J C; Wilson, B. This paper updates the ongoing effort at Lawrence Livermore National Laboratory to develop autonomous, agile micro-satellites (MicroSats). The objective of this development effort is to develop MicroSats weighing only a few tens of kilograms that are able to autonomously perform precision maneuvers and can be used telerobotically in a variety of mission modes. The required capabilities include satellite rendezvous, inspection, proximity operations, docking, and servicing. The MicroSat carries an integrated proximity-operations sensor suite incorporating advanced avionics. A new self-pressurizing propulsion system utilizing a miniaturized pump and non-toxic mono-propellant hydrogen peroxide was successfully tested. This system can provide a nominal 25 kg MicroSat with 200-300 m/s delta-v, including a warm-gas attitude control system. The avionics is based on the latest PowerPC processor using a CompactPCI bus architecture, which is modular, high-performance and processor-independent. This leverages commercial-off-the-shelf (COTS) technologies and minimizes the effects of future changes in processors. The MicroSat software development environment uses the VxWorks real-time operating system (RTOS), which provides a rapid development environment for the integration of new software modules, allowing early integration and test. We summarize results of recent integrated ground flight testing of our latest non-toxic pumped-propulsion MicroSat testbed vehicle operated on our unique dynamic air-rail

Optical assembly of bio-hybrid micro-robots. Barroso, Álvaro; Landwerth, Shirin; Woerdemann, Mike; Alpmann, Christina; Buscher, Tim; Becker, Maike; Studer, Armido; Denz, Cornelia. The combination of synthetic micro structures with bacterial flagella motors represents a current trend in the construction of self-propelled micro-robots. The development of methods for the fabrication of these bacteria-based robots is a first crucial step towards the realization of functional, miniature, autonomously moving robots. We present a novel scheme based on optical trapping to fabricate living micro-robots. By using holographic optical tweezers that allow three-dimensional manipulation in real time, we are able to arrange the building blocks that constitute the micro-robot in a defined way. We demonstrate exemplarily that our method enables the controlled assembly of living micro-robots consisting of a rod-shaped prokaryotic bacterium and a single elongated zeolite L crystal, which are used as models of the biological and abiotic components, respectively. We present different proof-of-principle approaches for the site-selective attachment of the bacteria on the particle surface.
The propulsion of the optically assembled micro-robot demonstrates the potential of the proposed method as a powerful strategy for the fabrication of bio-hybrid micro-robots.

MicroRNAs in sensorineural diseases of the ear Kathy Ushakov. Non-coding microRNAs have a fundamental role in gene regulation and expression in almost every multicellular organism. Only discovered in the last decade, microRNAs are already known to play a leading role in many aspects of disease. In the vertebrate inner ear, microRNAs are essential for controlling the development and survival of hair cells. Moreover, dysregulation of microRNAs has been implicated in sensorineural hearing impairment, as well as in other ear diseases such as cholesteatomas, vestibular schwannomas and otitis media. Due to the inaccessibility of the ear in humans, animal models, in particular mice and zebrafish, have provided the optimal tools to study microRNA expression and function. A major focus of current research has been to discover the targets of the microRNAs expressed in the inner ear, in order to determine the regulatory pathways of the auditory and vestibular systems. The potential for microRNA manipulation in the development of therapeutic tools for hearing impairment is as yet unexplored, paving the way for future work in the field.

Investigation into Generation of Micro Features by Localised Electrochemical Deposition Debnath, Subhrajit; Laskar, Hanimur Rahaman; Bhattacharyya, B. With the fast advancement of technology, localised electrochemical deposition (LECD) is becoming very advantageous for generating high-aspect-ratio micro features to meet the steep demand of modern precision industries. Among its other advantages, this technology is highly uncomplicated and economical for fabricating metal micro-parts within micron ranges. In the present study, copper micro-columns have been fabricated utilizing the LECD process. Different process parameters such as voltage, frequency, duty ratio and electrolyte concentration, which affect the deposition performance, have been identified, and their effects on deposition measures such as deposition rate, height and diameter of the micro-columns have been experimentally investigated. Taguchi's methodology has been used to study the effects as well as to obtain the optimum values of the process parameters so that localised deposition with the best performance can be achieved. Moreover, the generated micro-columns were carefully observed under optical and scanning electron microscopes, from which the surface quality of the deposited micro-columns has been studied qualitatively. Also, an array of copper micro-columns has been fabricated on a stainless steel (SS-304) substrate for further exploration of the LECD process capability.

Spectral optimization for micro-CT Hupfer, Martin; Nowak, Tristan; Brauweiler, Robert; Eisa, Fabian; Kalender, Willi A. Purpose: To optimize micro-CT protocols with respect to x-ray spectra and thereby reduce radiation dose at unimpaired image quality. Methods: Simulations were performed to assess image contrast, noise, and radiation dose for different imaging tasks. The figure of merit used to determine the optimal spectrum was the dose-weighted contrast-to-noise ratio (CNRD). Both optimal photon energy and tube voltage were considered. Three different types of filtration were investigated for polychromatic x-ray spectra: 0.5 mm Al, 3.0 mm Al, and 0.2 mm Cu.
Phantoms consisted of water cylinders of 20, 32, and 50 mm in diameter with a central insert of 9 mm which was filled with different contrast materials: an iodine-based contrast medium (CM) to mimic contrast-enhanced (CE) imaging, hydroxyapatite to mimic bone structures, and water with reduced density to mimic soft tissue contrast. Validation measurements were conducted on a commercially available micro-CT scanner using phantoms consisting of water-equivalent plastics. Measurements on a mouse cadaver were performed to assess potential artifacts like beam hardening and to further validate the simulation results. Results: The optimal photon energy for CE imaging was found at 34 keV. For bone imaging, optimal energies were 17, 20, and 23 keV for the 20, 32, and 50 mm phantoms, respectively. For density differences, optimal energies varied between 18 and 50 keV for the 20 and 50 mm phantoms, respectively. For the 32 mm phantom and density differences, CNRD was found to be constant within 2.5% for the energy range of 21–60 keV. For polychromatic spectra and CMs, optimal settings were 50 kV with 0.2 mm Cu filtration, allowing for a dose reduction of 58% compared to the optimal setting for 0.5 mm Al filtration. For bone imaging, optimal tube voltages were below 35 kV. For soft tissue imaging, optimal tube settings strongly depended on phantom size. For 20 mm, low voltages were preferred. For 32 mm, CNRD was found to be almost independent of tube voltage. For 50 mm
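The figure of merit in the spectral-optimization study above, the dose-weighted contrast-to-noise ratio, is simply CNR divided by the square root of dose, which lets protocols at different dose levels be compared fairly. The Python sketch below applies it to three invented protocol settings; the numbers are placeholders, not the study's results:

    # CNRD = CNR / sqrt(dose); protocol names and values are invented for illustration.
    import math

    def cnrd(contrast_hu, noise_hu, dose_mgy):
        """Dose-weighted contrast-to-noise ratio."""
        return (contrast_hu / noise_hu) / math.sqrt(dose_mgy)

    protocols = {
        "50 kV + 0.2 mm Cu": {"contrast_hu": 310.0, "noise_hu": 24.0, "dose_mgy": 35.0},
        "50 kV + 0.5 mm Al": {"contrast_hu": 330.0, "noise_hu": 22.0, "dose_mgy": 80.0},
        "35 kV + 0.5 mm Al": {"contrast_hu": 360.0, "noise_hu": 30.0, "dose_mgy": 60.0},
    }

    for name, p in protocols.items():
        print(f"{name}: CNRD = {cnrd(**p):.2f}")

    best = max(protocols, key=lambda n: cnrd(**protocols[n]))
    print("Preferred spectrum in this toy comparison:", best)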
Combustion and direct energy conversion inside a micro-combustor Lei, Yafeng; Chen, Wei; Lei, Jiang.
Highlights:
• The flammability range of the micro-combustor was broadened with heat recirculation.
• The quenching diameter decreased with heat recirculation compared to the case without recirculation.
• The surface-area-to-volume ratio was the most important parameter affecting the energy conversion efficiency.
• The maximum conversion efficiency (3.15%) was achieved with a 1 mm inner diameter.
Abstract: Electrical energy can be generated by employing a micro-thermophotovoltaic (TPV) cell which absorbs thermal radiation from combustion taking place in a micro-combustor. The stability of combustion in a micro-combustor is essential for operating a micro-power system using hydrogen and hydrocarbon fuels as the energy source. To understand the mechanism of sustaining combustion within the quenching distance of the fuel, this study proposed an annular micro combustion tube with recirculation of exhaust heat. To explore the feasibility of combustion in the micro annular tube, the parameters influencing combustion, namely the quenching diameter and flammability, were studied through numerical simulation. The results indicated that combustion could be realized in a micro-combustor using heat recirculation. The following results were obtained from the simulations: the quenching diameter was reduced from 1.3 mm to 0.9 mm with heat recirculation at an equivalence ratio of 1, and the lean flammability limit was 2.5%–5% lower than that without heat recirculation for quenching diameters between 2 mm and 5 mm. The overall energy conversion efficiency varied with the inner diameter; a maximum efficiency of 3.15% was achieved at an inner diameter of 1 mm. The studies indicated that heat recirculation is an effective strategy to maintain combustion and to improve combustion limits in micro-scale systems.

Regulation of Corticosteroidogenic Genes by MicroRNAs Stacy Robertson. The loss of normal regulation of corticosteroid secretion is important in the development of cardiovascular disease. We previously showed that microRNAs regulate the terminal stages of corticosteroid biosynthesis. Here, we assess microRNA regulation across the whole corticosteroid pathway. Knockdown of microRNAs using Dicer1 siRNA in H295R adrenocortical cells increased levels of CYP11A1, CYP21A1, and CYP17A1 mRNA and the secretion of cortisol, corticosterone, 11-deoxycorticosterone, 18-hydroxycorticosterone, and aldosterone. Bioinformatic analysis of genes involved in corticosteroid biosynthesis or metabolism identified many putative microRNA-binding sites, and some were selected for further study. Manipulation of individual microRNA levels demonstrated a direct effect of miR-125a-5p and miR-125b-5p on CYP11B2 and of miR-320a-3p levels on CYP11A1 and CYP17A1 mRNA. Finally, comparison of microRNA expression profiles from human aldosterone-producing adenoma and normal adrenal tissue showed the levels of various microRNAs, including miR-125a-5p, to be significantly different. This study demonstrates that corticosteroidogenesis is regulated at multiple points by several microRNAs and that certain of these microRNAs are differentially expressed in tumorous adrenal tissue, which may contribute to dysregulation of corticosteroid secretion.
These findings provide new insights into the regulation of corticosteroid production and have implications for understanding the pathology of disease states where abnormal hormone secretion is a feature.

Semiconductor micro cavities: half light, half matter Baumberg, Jeremy J. Quantum wells sandwiched tightly between two mirrors can be used to make a new type of laser that can amplify light more than any other known material. What do you get if you cross light with matter? It is a question that fascinates today's researchers in quantum optoelectronics, who want to see how far the physical states of the world can be intertwined. Although we have a good understanding of the quantum ingredients of optics and solids - photons and atoms - it turns out that assembling these building blocks in deliberately unfamiliar ways can lead to new and often quite unexpected behaviour. Consider 'quantum wells', which form the basis of modern semiconductor lasers. First developed in the 1980s, they lie at the heart of optical-communication and optical-storage technologies such as DVD players, and they now have a global market of over 10bn British pounds. Quantum wells consist of a thin sheet of crystalline semiconductor sandwiched between two sheets of another semiconductor. The outer layers squash the wavefunctions of electrons within the central sheet, increasing the electrons' energy and their interaction with light. Engineers can control the colour of the light emitted by the laser simply by adjusting the energy levels within the central sheet, which acts as a potential well. But this bug-sized playground for electrons has not just had technological ramifications. It has also spawned an enormous variety of new physics, including the quantum Hall effect, which can be used as a fundamental standard for measuring the ratio between the charge on the electron and the Planck constant. Over the last ten years researchers have also become increasingly keen to incorporate quantum wells into what are known as 'semiconductor micro cavities'. Physicists have found that these painstakingly layered materials can be used to create new quantum states that resemble superfluids and can be used in interferometric quantum devices. In the March issue of Physics

Scanning electron microscopy and micro-analyses Brisset, F.; Repoux, L.; Ruste, J.; Grillon, F.; Robaut, F. Scanning electron microscopy (SEM) and the related micro-analyses are used in extremely varied domains, from academic environments to industrial ones. The overall theoretical bases, the main technical characteristics, and additional information about practical usage and maintenance are developed in this book. High-vacuum and controlled-vacuum electron microscopes are thoroughly presented, as well as the latest generation of EDS (energy dispersive spectrometer) and WDS (wavelength dispersive spectrometer) micro-analysers. Besides these main topics, other analysis or observation techniques are covered, such as EBSD (electron backscatter diffraction), 3-D imaging, FIB (focused ion beams), Monte-Carlo simulations, in-situ tests, etc. This book, in French, is the only one that treats this subject in such an exhaustive way. It is the fully updated version of a previous edition from 1979 and gathers the lectures given in 2006 at the summer school of Saint Martin d'Heres (France).
Content: 1 - electron-matter interactions; 2 - characteristic X-radiation, Bremsstrahlung; 3 - electron guns in SEM; 4 - elements of electronic optics; 5 - vacuum techniques; 6 - detectors used in SEM; 7 - image formation and optimization in SEM; 7a - SEM practical instructions for use; 8 - controlled pressure microscopy; 8a - applications; 9 - energy selection X-spectrometers (energy dispersive spectrometers - EDS); 9a - EDS analysis; 9b - X-EDS mapping; 10 - technological aspects of WDS; 11 - processing of EDS and WDS spectra; 12 - X-microanalysis quantifying methods; 12a - quantitative WDS microanalysis of very light elements; 13 - statistics: precision and detection limits in microanalysis; 14 - analysis of stratified samples; 15 - crystallography applied to EBSD; 16 - EBSD: history, principle and applications; 16a - EBSD analysis; 17 - Monte Carlo simulation; 18 - insulating samples in SEM and X-ray microanalysis; 18a - insulating

MicroRNAs in Cardiometabolic Diseases Anna Meiliana. BACKGROUND: MicroRNAs (miRNAs) are ~22-nucleotide noncoding RNAs with critical functions in multiple physiological and pathological processes. An explosion of reports on the discovery and characterization of different miRNA species and their involvement in almost every aspect of cardiac biology and disease has established an exciting new dimension in gene regulation networks for cardiac development and pathogenesis. CONTENT: Alterations in the metabolic control of lipid and glucose homeostasis predispose an individual to develop cardiometabolic diseases, such as type 2 diabetes mellitus and atherosclerosis. Work over the last few years has suggested that miRNAs play an important role in regulating these physiological processes. Besides a cell-specific transcription factor profile, cell-specific miRNA-regulated gene expression is integral to cell fate and activation decisions. Thus, the cell types involved in atherosclerosis, vascular disease, and its myocardial sequelae may be differentially regulated by distinct miRNAs, thereby controlling highly complex processes, for example, smooth muscle cell phenotype and the inflammatory responses of endothelial cells or macrophages. The recent advancements in using miRNAs as circulating biomarkers or therapeutic modalities will hopefully provide a strong basis for future research to further expand our insights into miRNA function in cardiovascular biology. SUMMARY: MiRNAs are small, noncoding RNAs that function as post-transcriptional regulators of gene expression. They are potent modulators of diverse biological processes and pathologies. Recent findings demonstrated the importance of miRNAs in the vasculature and in the orchestration of lipid metabolism and glucose homeostasis. MiRNA networks represent an additional layer of regulation for gene expression that absorbs perturbations and ensures the robustness of biological systems. A detailed understanding of the molecular and cellular mechanisms of mi

The application of micro-lesson in optics teaching Yuan, Suzhen; Mao, Xuefeng; Lu, Yongle; Wang, Yan; Luo, Yuan. In order to improve students' capacity for self-study, this paper discusses the application of micro-lessons as a supplementary approach in optics teaching. Both geometric optics and wave optics require many demonstrations, and micro-lessons meet this requirement well. Nowadays, college education focuses on quality education, so the new training schemes of most universities have shortened class hours.
However, the development of students and the needs of society also require students to have a solid foundation. An effective way to resolve this contradiction is to improve the efficiency of classroom teaching and to provide a repeatable form of learning: the micro-lesson.

The Optimization Dispatching of Micro Grid Considering Load Control
Zhang, Pengfei; Xie, Jiqiang; Yang, Xiu; He, Hongli
This paper proposes an optimization model for the economic operation of a micro-grid system. It coordinates renewable energy and storage operation with diesel generator output so as to achieve economic operation of the micro-grid. The economic operation model of the micro-grid network is transformed into a mixed integer programming problem, which is solved with mature commercial software. The new model is shown to be economical, and the load control strategy can reduce the number of charge and discharge cycles of the energy storage devices and thereby extend their service life to a certain extent.

A micro-CL system and its applications
Wei, Zenghui; Yuan, Lulu; Liu, Baodong; Wei, Cunfeng; Sun, Cuili; Yin, Pengfei; Wei, Long
The computed laminography (CL) method is preferable to computed tomography for the non-destructive testing of plate-like objects. A micro-CL system is developed for three-dimensional imaging of plate-like objects. The details of the micro-CL system are described, including the system architecture, scanning modes, and reconstruction algorithm. Experimental results on plate-like fossils, an insulated gate bipolar transistor (IGBT) module, ball grid array packaging, and a printed circuit board are also presented to demonstrate micro-CL's ability for 3D imaging of flat specimens and its applicability in various fields.

Review on Micro- and Nanolithography Techniques and Their Applications
Werayut Srituravanich
This article reviews major micro- and nanolithography techniques and their applications, from commercial micro devices to emerging applications in nanoscale science and engineering. Micro- and nanolithography has been the key technology in the manufacturing of integrated circuits and microchips in the semiconductor industry. Such technology is also sparking a transformation of nanotechnology. The lithography techniques discussed include photolithography, electron beam lithography, focused ion beam lithography, soft lithography, nanoimprint lithography and scanning probe lithography. Furthermore, their applications are reviewed and summarized into four major areas: electronics and microsystems, medical and biotech, optics and photonics, and environment and energy harvesting.

Embedded real-time operating system micro kernel design
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
Embedded systems usually require real-time behaviour. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here describes the position, definition, function and principle of the micro kernel. The kernel runs on an ATMEL AT89C51 microcontroller platform. Simulation results show that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.
Illinois Journal of Mathematics
Illinois J. Math. Volume 59, Number 1 (2015), 1-19.
A new interpolation approach to spaces of Triebel–Lizorkin type
Peer Christian Kunstmann
We introduce in this paper new interpolation methods for closed subspaces of Banach function spaces. For $q\in[1,\infty]$, the $l^{q}$-interpolation method allows one to interpolate linear operators that have bounded $l^{q}$-valued extensions. For $q=2$ and if the Banach function spaces are $r$-concave for some $r<\infty$, the method coincides with the Rademacher interpolation method that has been used to characterize boundedness of the $H^{\infty}$-functional calculus. As a special case, we obtain Triebel–Lizorkin spaces $F^{2\theta}_{p,q}(\mathbb{R}^{d})$ by $l^{q}$-interpolation between $L^{p}(\mathbb{R}^{d})$ and $W^{2}_{p}(\mathbb{R}^{d})$ where $p\in(1,\infty)$. A similar result holds for the recently introduced generalized Triebel–Lizorkin spaces associated with $R_{q}$-sectorial operators in Banach function spaces. So, roughly speaking, for the scale of Triebel–Lizorkin spaces our method thus plays the role the real interpolation method plays in the theory of Besov spaces.
Article info: Illinois J. Math., Volume 59, Number 1 (2015), 1-19. Received: 6 October 2014. Revised: 13 February 2015. First available in Project Euclid: 11 February 2016. Permanent link to this document: https://projecteuclid.org/euclid.ijm/1455203156. doi:10.1215/ijm/1455203156. Mathematical Reviews number (MathSciNet): MR3459625. Subjects: Primary: 46B70: Interpolation between normed linear spaces [See also 46M35]; 47A60: Functional calculus; 42B25: Maximal functions, Littlewood-Paley theory.
Citation: Kunstmann, Peer Christian. A new interpolation approach to spaces of Triebel–Lizorkin type. Illinois J. Math. 59 (2015), no. 1, 1--19. doi:10.1215/ijm/1455203156. https://projecteuclid.org/euclid.ijm/1455203156
Volume 10 (2014) Article 6 pp. 133-166 The Need for Structure in Quantum Speedups by Scott Aaronson and Andris Ambainis Received: October 22, 2012 Revised: July 10, 2014 Keywords: decision trees, adversary method, collision problem, Fourier analysis, influences, quantum computing, query complexity Categories: complexity theory, quantum computing, query complexity, decision tree, collision, Fourier analysis, influence ACM Classification: F.1.2, F.1.3 AMS Classification: 81P68, 68Q12, 68Q17 Is there a general theorem that tells us when we can hope for exponential speedups from quantum algorithms, and when we cannot? In this paper, we make two advances toward such a theorem, in the black-box model where most quantum algorithms operate. First, we show that for any problem that is invariant under permuting inputs and outputs and that has sufficiently many outputs (like the collision and element distinctness problems), the quantum query complexity is at least the $7^{\text{th}}$ root of the classical randomized query complexity. (An earlier version of this paper gave the $9^{\text{th}}$ root.) This resolves a conjecture of Watrous from 2002. Second, inspired by work of O'Donnell et al. (2005) and Dinur et al. (2006), we conjecture that every bounded low-degree polynomial has a "highly influential" variable. (A multivariate polynomial $p$ is said to be bounded if $0\le p(x)\le 1$ for all $x$ in the Boolean cube.) Assuming this conjecture, we show that every $T$-query quantum algorithm can be simulated on most inputs by a $T^{O(1)}$-query classical algorithm, and that one essentially cannot hope to prove $\mathsf{P}\neq\mathsf{BQP}$ relative to a random oracle. A preliminary version of this paper appeared in the Proc. 2nd "Innovations in Computer Science" Conference (ICS 2011). See Sec. 1.3 for a comparison with the present paper. © 2014 Scott Aaronson and Andris Ambainis
Devolution and grant-in-aid design for the provision of impure public goods
Laura Levaggi (ORCID: orcid.org/0000-0001-9692-3114) 1 & Rosella Levaggi (ORCID: orcid.org/0000-0002-6018-1283) 2

Traditional fiscal federalism theory postulates that devolution for the provision of local public goods increases welfare. However, most of the services offered at local level are local impure public goods whose characteristics may prevent devolution from being efficient. Our paper shows that devolution is the optimal choice only for local impure public goods. For an environment characterised by coordination and asymmetry of information problems, we propose the optimal grants-in-aid formula that Central Government should use to reduce welfare losses and we compare it with what is suggested by the mainstream literature. Finally, we show under which conditions devolution should be preferred to a centralised solution. From a policy point of view, our paper may explain the heterogeneity in the choices made by countries in terms of devolution in the provision of merit and impure public goods.

The process of decentralisation in decision making has received increasing attention in the past few years. In transition economies the break-up of centralised decision-making has required new systems of governance; in Eastern European and Latin American countries (Weisner 2003) more autonomy is demanded by lower tiers, even though its effects on economic growth and regional income disparities remain a controversial issue (Barrios and Strobl 2009; Sacchi and Salotti 2014; Sorens 2014); at EU level the discussion on the functions that should be centralised is still open (Thieben 2003; Tanzi 2009; Vaubel 2009). The nature of services produced and financed by the public sector is changing: nowadays most services provided are local impure public goods with spillovers.Footnote 1 Examples are health care and education, local transport, sporting facilities, and waste disposal. These services generate utility directly to users (as private goods do), but also indirectly to non-users (as local public goods do). Their beneficial effect is usually not confined to users in a specific jurisdiction, and this is why spillovers arise. According to OECD statistics, in 2014 public expenditure for health and education accounted for about 12.2 % of total GDP; in the US the share is even higher (OECD 2016). For these services, there is no consensus in the literature on which is the most appropriate government tier to deliver the good.Footnote 2 The institutional setting is quite heterogeneous among countries. Dziobek et al. (2011) examine the fiscal decentralisation index for several countries and observe a large variation that does not simply depend on economic or geographical factors; for education Turati et al. (2011) show a quite heterogeneous picture, which is even greater for health and social care (Costa-Font and Greer 2013). In Alcidi et al. (2014) several measures of fiscal decentralisation are studied for the European Union, showing that different decentralisation strategies have been adopted, even within the same country. For Europe as a whole, fiscal decentralisation (defined as the ratio of local expenditure to total expenditure) is around 20 % for health, 40 % for education and 80 % for environmental protection.
Some countries are below all these levels (Austria, Belgium, Germany, Ireland, Luxembourg, Macedonia, Slovenia), a second group is above the average (Lithuania, Poland), while a third group of countries is well above the limit for some services and below it for others.Footnote 3 The reason for this observed heterogeneity is that the determination of the optimal quantity to be produced is more complex than for public goods. For the latter, the quantity supplied and financed also represents the quantity consumed by all the individuals in the community. For impure public goods a pseudo demand exists, and Government should use indirect instruments (such as subsidies to the price) to match demand with supply and to produce the optimal quantity. However, the information necessary to achieve a First Best (FB) result is usually not accessible and only second best solutions can be attained. The traditional literature on fiscal federalism (Oates 1972; Tresch 2002) argues that the allocation of functions between Central and local Governments should follow efficiency principles. Production should be assigned to the tier which is better informed on local preferences, while Central Government (CG) may use grants for equity and efficiency reasons. Second generation modelsFootnote 4 suggest that the success of fiscal federalism depends on the information the agents possess about specific parameters (Akai and Mikami 2006; Levaggi 2002; Wildasin 2004; Snoddon and Wen 2003) and on the level of coordination in the actions of the different agents (Besley and Coate 2003; Köthenbürger 2008; Petretto 2000; Ogawa and Wildasin 2009). Both issues have been widely studied by the literature, which shows the existence of a trade-off between autonomy and control. In this article we compare centralisation (defined as the provision of goods at local level by CG) with devolution (defined as the provision of goods at local level by a local government) for the provision of an impure local public good with spillovers in an environment characterised by asymmetry of information. We show that the traditional "devolution is always welfare improving" result is valid only in the absence of spillovers. When they do exist, the gain in utility derived from devolution must be sufficiently high to compensate for the welfare loss deriving from asymmetry of information and lack of coordination. In this setting, we derive the optimal grants-in-aid formula and compare it with what is suggested by the literature for local public goods. From a policy point of view, our paper may explain why some countries have preferred centralisation to devolution for the provision of services such as education and health care. Calsamiglia et al. (2006) argue that it may depend on altruism; in this article we show that efficiency may be the driving factor. For services whose comparative advantage in being locally produced is limited, while their consumption produces spillovers, centralisation may be the second best choice. The organisation of the paper is as follows: in "The model" the main features of the model and the FB are presented; in "Centralisation" the centralised solution is analysed, while in "Devolution" devolution is presented; they are then compared in "Centralisation versus devolution"; finally in "Conclusions" the conclusions are drawn.
The model

A country, whose population is normalised to one, is divided into two local authorities \(j\in \{ 1,2\}\), equal in everything but their preferences for the impure public good y.Footnote 5 Each individual has an exogenous money income M in the range \([\underline{M},\overline{M}]\), whose distribution in each region has density function \(\frac{1}{2} f(M)\). Then, total income is:
$$Y=\int _{\underline{M}}^{\overline{M}}M f(M)\,\text {d}M,$$
and total income in local authority j equals \(Y_j=\frac{1}{2} Y\). Income is used to buy private commodities and one or zero units of an impure local public good y, whose user charge is equal to \(p_{u}^{j}\). y is an impure public good, which means that it has a double effect on the utility function of each individual: as a private commodity, and through the whole quantity that is produced.
Let us first consider the private characteristic. A good of quality \(\theta _{l}\) produces a utility equal to \(\alpha \theta _{l}\), where \(\alpha\) is a taste parameter in the range \((0,\beta )\), equally distributed among the population. Therefore, if the difference \(\alpha \theta _{l}-p_{u}^{j}\) is positive the consumer buys y; otherwise he does not buy the good and private utility is zero. To simplify the algebra, we assume that \(\underline{M}>p^j_u\) for all j, i.e. everybody can afford y.
The nature of an impure public good means that the commodity y accrues utility to users and non-users; in our model this characteristic is represented by the term \(z_{j}(\theta _{l}(Q_{j}+k\,Q_{-j}))\), where
$$Q_{j}=\int _{\frac{p_{u}^{j}}{\theta }}^{\beta }\frac{1}{2\beta }\,\text {d}\alpha =\frac{1}{2}\left( 1-\frac{p_{u}^{j}}{\theta \beta }\right)$$
is the total quantity demanded in each community and \(Q_{-j}\) is the quantity produced outside jurisdiction j. Let us examine each element: \(z_{j}\) represents the relative importance that each community attaches to the public good characteristic of y; the second important element is the parameter \(k\in [0,1]\), which captures spillovers and represents the utility that consumers in jurisdiction j attribute to the production of good y outside their region. For \(k=1\), y is a national impure public good: y produces the same level of utility irrespective of its geographical location. For \(k=0\) consumers only care about the quantity produced locally (impure local public good) and for \(0<k<1\) we have a local impure public good with spillovers, i.e. consumers derive a higher level of utility from the quantity produced in their jurisdiction, but also care about the level produced elsewhere (see also Wildasin 2001, 2004).
The preferences for the impure public good are linear and homogeneous within each local authority, but specific to each of them.Footnote 6 y can be supplied at central level by CG or by an autonomous lower government tier (LG) in each jurisdiction. In line with fiscal federalism theory, y produces a level of utility \(\theta _{C}=1\) if it is supplied by CG and \(\theta _{L}=\theta >1\) when it is produced by LG. This classical hypothesis in the theory of fiscal federalism (Oates 1972; Tresch 2002) reflects the assumption that at local level the preferences of the community can be better matched by local supply.Footnote 7 Finally, we assume that, while the cost of producing the good is p, users pay only a part of such cost, the user charge \(p_{u}^{j}\). The difference is financed using income taxes at rate \(t_{j}\) in each jurisdiction.
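A minimal sketch of the demand side (Python; this is not part of the paper and the parameter values are purely illustrative) implements the expression for \(Q_{j}\) above: a consumer buys y exactly when \(\alpha \ge p_u^j/\theta\), tastes are uniform on \((0,\beta )\), and half of the (unit) population lives in each jurisdiction.

def demand_share(p_u, theta, beta):
    """Quantity demanded in one jurisdiction: Q_j = (1/2) * (1 - p_u / (theta * beta)).

    A consumer with taste alpha (uniform on (0, beta)) buys one unit of y iff
    alpha * theta >= p_u; half of the unit population lives in each jurisdiction.
    """
    return 0.5 * max(0.0, 1.0 - p_u / (theta * beta))

if __name__ == "__main__":
    beta, theta = 1.0, 1.2  # illustrative values, not taken from the paper
    for p_u in (0.0, 0.3, 0.6, 0.9, 1.2):
        print(f"user charge {p_u:.1f} -> Q_j = {demand_share(p_u, theta, beta):.3f}")

Lowering the user charge from p towards zero moves \(Q_j\) from \(\frac{1}{2}\left(1-\frac{p}{\theta \beta }\right)\) towards its maximum of \(\frac{1}{2}\); this is the margin on which the subsidies studied below operate.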
As in Besley and Coate (2003), we assume that utility is additive in its components and that taxation is linear. The use of a linear utility function allows us to concentrate the analysis on efficiency and to rule out distributional issues (Levaggi and Menoncin 2014). The utility function for a representative individual living in community j is therefore written as:
$$\begin{aligned}&U^{j}\left( M;\alpha ;p_u^j;z_j\right) =M\left( 1-t_{j}\right) +\max \left( \alpha \theta _{l}-p^j_{u};0\right) +z_{j}\left( \theta _{l}\left( Q_{j}+k\,Q_{-j}\right) \right) ,\nonumber \\&\quad \qquad l =C\,\text{ or }\,L,\quad j=1,2. \end{aligned}$$
In this paper we take into consideration the problem of CG, which has to decide whether to delegate the supply of y to LGs and, if so, how to implement devolution. The objective of CG is to maximise the welfare of the population, defined as the aggregation of individual preferences. Asymmetry of information prevents CG from observing \(z_1\) and \(z_2\); CG therefore uses some expected value z for both, so that in general the goal is to maximise the following function:
$$\int _{\underline{M}}^{\overline{M}}\left( \int _{0}^{\beta } \left( U^{1}\left( M;\alpha ;p_u^1,z\right) +U^{2}\left( M;\alpha ;p_u^2;z\right) \right) \frac{1}{\beta } \text {d}\alpha \right) f(M)\,\text {d}M$$
by reducing the user charges \(p^j_{u}\), \(j=1,2\). The traditional theory of fiscal federalism (Oates 1972) postulates that when preferences are not homogeneous and the goods produced at local level have a higher level of utility, CG should always devolve any decision to lower tiers. However, coordination problems and information asymmetry may prevent the attainment of this objective when the good to be produced is, as in the present case, an impure public good with spillovers.

First Best (FB)

To start with, let us define the solution for an ideal world with perfect information, where a benevolent social planner can supply the good of the highest quality, is able to observe the local preference parameters \(z_{1}\), \(z_2\) and can therefore supply the optimal amount of the local public good in each region. This represents the ideal, FB allocation that will be used as a benchmark to evaluate the relative benefits of implementing either a centralised solution or devolution. Let us examine (2) to understand the problem faced by the regulator. The quantity demanded depends on the private utility that each consumer receives from the good, but the benefit produced by that commodity also depends on the utility that the community as a whole derives from its provision.Footnote 8 This causes the usual market failure that the literature on impure public goods has long studied and calls for an intervention of the public decision maker (Musgrave and Musgrave 1989). By subsidising y, the regulator reduces the user charge \(p_u^j\) and increases demand, so that in the optimal equilibrium the marginal benefit (public and private) produced by the good is equal to its marginal cost. If the regulator finances a fraction \((1-\rho _{j})\) of p in each region through a (national) linear income tax at rate t, the user charge becomes \(p_u^j=\rho _j\,p\) and from (1) the budget constraint is equal to:
$$tY =\sum _{j=1}^{2}(1-\rho _{j})p Q_j = \frac{1}{2} \sum _{j=1}^{2}(1-\rho _{j})p \left( 1-\frac{\rho _j\,p}{\theta \beta }\right) .$$
The regulator has to find the optimal values of \(\rho _1\) and \(\rho _2\) for which total welfare in (3) is maximised.
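To make the planner's problem concrete, the following sketch (Python; written for this note rather than taken from the paper, with purely illustrative parameter values) evaluates total welfare net of the constant income term Y as a function of the two user charges — tax cost of the subsidy, private consumer surplus and public-good benefit, as in the objective of Appendix 1 — and cross-checks a coarse grid search against the closed-form FB user charge \(p-\theta (z_j+k\,z_{-j})\) derived there.

import itertools

def welfare(x1, x2, theta, z1, z2, p, beta, k):
    """Total welfare net of the constant income term Y, for user charges x1, x2,
    good quality theta and true preference parameters z1, z2 (objective of Appendix 1)."""
    W = 0.0
    x, z = (x1, x2), (z1, z2)
    for j in (0, 1):
        Q_own = 0.5 * (1.0 - x[j] / (theta * beta))        # quantity demanded locally
        Q_other = 0.5 * (1.0 - x[1 - j] / (theta * beta))  # quantity in the other region
        W -= (p - x[j]) * Q_own                            # tax cost of the price subsidy
        W += theta * (beta - x[j] / theta) ** 2 / (4 * beta)  # private consumer surplus
        W += z[j] * theta * (Q_own + k * Q_other)          # public-good (spillover) benefit
    return W

# Illustrative parameters only (they are not taken from the paper).
p, beta, theta, k = 0.8, 1.0, 1.2, 0.5
z1, z2 = 0.3, 0.2

# Closed-form FB user charge: p - theta*(z_j + k*z_{-j})  (Table 1 / Appendix 1).
x1_fb = p - theta * (z1 + k * z2)
x2_fb = p - theta * (z2 + k * z1)

# Coarse grid search over the two user charges as a numerical cross-check.
grid = [i / 1000 for i in range(0, 801, 5)]
best = max(itertools.product(grid, grid),
           key=lambda xs: welfare(*xs, theta, z1, z2, p, beta, k))

print("closed form:", round(x1_fb, 3), round(x2_fb, 3))
print("grid search:", best[0], best[1])

Because the objective is concave in each user charge, the grid search should land on (or next to) the closed-form solution; this is only a sanity check that the first-order condition in Appendix 1 has been transcribed correctly.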
The optimal solution in terms of price subsidy \(p(1-\rho _{j}^{FB})\), quantities produced in each region \(Q_j^{FB}\), total quantity \(Q^{FB}\) and welfare \(W^{FB}\) is presented in Table 1 and derived in Appendix 1.
Table 1 Results for the FB case
The first line shows the price subsidy the planner should provide. As expected, it depends on the utility generated by the public good characteristic, gross of the spillover it creates. The optimal total quantity reflects the three components that characterise the impure public good: \(1-\frac{p}{\theta \beta }\) is the demand for its private good characteristic, while \(\frac{(1+k)(z_{1}+z_{2})}{2\beta }\) is the willingness to pay for its public good aspect, gross of the spillover effects. \(W^{FB}\) represents the ideal level of welfare, given preferences and resources, but this outcome cannot be obtained. CG provision implies that only the good with the lowest quality (in terms of utility), i.e. \(\theta =1\), is supplied and that at central level \(z_1\) and \(z_2\) cannot be observed. On the other hand, LGs produce the good with the highest utility, but they do not take spillovers into consideration. CG may influence the decisions of lower tiers, but does not have enough information to define an optimal policy. In this environment it is necessary to find the second best solution that allows the minimum welfare loss with respect to the FB solution. In what follows we will analyse and compare two alternatives:
Centralisation. CG produces the less productive variety of y. The quantity is uniform across regions and it is set according to estimated preferences for the good;
Devolution. CG delegates the production of y to lower government tiers (LG). A matching grant (King 1984) is supplied by CG to reduce the negative effects of spillovers.

Centralisation

Let us examine the case where CG produces the good with lower productivity (\(\theta _C=1\)). Since \(z_{j}\) cannot be observed, we assume that an expected estimate z is used to determine the optimal provision. CG has to set the optimal subsidy that maximises welfare in this context. The optimal subsidy and quantities can be found by substituting \(\theta\) with \(\theta _C = 1\) and \(z_1=z_2= z\) in the formulas in the first two rows of Table 1; they are presented in Table 2. The welfare \(W^C\) is obtained by substituting the optimal user charges \(p\rho _1^C\) and \(p\rho _2^C\) in (3).
Table 2 Results for the centralised solution
Let us compare the results in Tables 1 and 2 (a numerical illustration follows below). Even when asymmetry of information is minimal, e.g. when \(z_1=z_2=z\), the subsidy is lower than in FB: for the quality provided at central level the willingness to pay is lower, and so is the subsidy. This in turn means that the quantity of the good produced is not optimal. As a consequence, welfare is not maximised and \(W^{FB}-W^{C}\) measures the welfare loss.

Devolution

In the previous section we showed that centralised provision does not allow the FB allocation to be reached; here we consider the alternative solution of devolving production to lower tiers that know local preferences and can provide a good that best fits user needs.
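Before turning to devolution, here is the numerical illustration of the centralised case announced above (Python; same illustrative parameters as in the previous sketch, not from the paper). CG chooses the uniform subsidy that is optimal for quality \(\theta _C=1\) and its estimate z; the realised welfare of that policy is then evaluated at the true preferences \(z_1\) and \(z_2\) — an interpretive assumption, but one consistent with the way the welfare losses are discussed later in the paper.

def welfare(x1, x2, theta, z1, z2, p, beta, k):
    """Welfare net of Y for user charges x1, x2, good quality theta, true tastes z1, z2."""
    W, x, z = 0.0, (x1, x2), (z1, z2)
    for j in (0, 1):
        Q_own = 0.5 * (1.0 - x[j] / (theta * beta))
        Q_other = 0.5 * (1.0 - x[1 - j] / (theta * beta))
        W += (
            -(p - x[j]) * Q_own
            + theta * (beta - x[j] / theta) ** 2 / (4 * beta)
            + z[j] * theta * (Q_own + k * Q_other)
        )
    return W

# Illustrative parameters (not from the paper); z is CG's estimate of the unobserved z_j.
p, beta, theta, k = 0.8, 1.0, 1.2, 0.5
z1, z2, z = 0.3, 0.2, 0.25

# FB benchmark: quality theta, true preferences, subsidy theta*(z_j + k*z_{-j}) (Table 1).
W_fb = welfare(p - theta * (z1 + k * z2), p - theta * (z2 + k * z1),
               theta, z1, z2, p, beta, k)

# Centralisation: quality theta_C = 1 and a uniform user charge implied by Table 2
# (Table 1 formulas with theta_C = 1 and z_1 = z_2 = z), i.e. p - z*(1 + k) in both regions.
x_c = p - z * (1.0 + k)
W_c = welfare(x_c, x_c, 1.0, z1, z2, p, beta, k)

print(f"welfare loss of centralisation: {W_fb - W_c:.4f}")

The gap \(W^{FB}-W^{C}\) printed here mixes the two sources of loss the paper distinguishes: the lower quality of the centrally supplied good and the use of the estimate z in place of \(z_1\) and \(z_2\).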
In this section we consider and compare two possible alternatives: "pure devolution", where LGs are solely responsible for the provision of the impure public good (this case will be indicated by the letter F), and devolution where LGs provide the good, but CG influences their decisions using a matching grant.Footnote 9

Pure devolution

If the goods produced at local level were local public goods, devolution would always allow the welfare level of FB to be reached (Oates 1972). However, the presence of spillovers means that devolution, too, is a second best option. In this case each Local Government (LG) maximises its utility function, but it does not take into account the utility accruing to users in other local authorities. As for centralisation, each LG has to find the optimal user charge \(p_u^j=p\nu _{j}\) that maximises the aggregate utility of jurisdiction j. The subsidy will be financed using a (local) linear income tax at rate \(t_j\); the budget constraint in this case is
$$\frac{Y}{2} t_j = \frac{1}{2} \left( 1-\nu _j\right) p \left( 1-\frac{\nu _j p}{\beta \theta } \right) .$$
LG has to find the value of \(\nu _j\) that maximises the following objective function:
$$\frac{1}{2} \int _{\underline{M}}^{\overline{M}}\left( \int _{0}^{\beta } U^{j}\left( M;\alpha ;p_u^j;z_j\right) \frac{1}{\beta } \text {d}\alpha \right) f(M)\,\text {d}M.$$
Table 3 shows the results derived in Appendix 2 in terms of the optimal subsidy \(p(1-\nu _{j}^{F})\), quantities \(Q_j^{F}\), total quantity \(Q^F\) and total welfare \(W^{F}\).
Table 3 Results for devolution with no matching grant
Comparing Table 1 with Table 3, it is straightforward to see that the two results coincide for \(k=0\), i.e. when there are no spillovers. In all the other cases, the subsidy is too low, total quantity falls short of the optimal level, and the welfare level attained is lower than in FB. This is the first result of our model: the findings of the traditional literature on fiscal federalism are valid also for the provision of impure public goods, provided that there are no spillovers among regions. In all the other cases, "pure" devolution produces a welfare loss which is equal to \(\Delta W_{F}=W^{FB}-W^F=\frac{k^{2}\theta }{4\beta }(z_{1}^{2}+z_{2}^{2})\) (a numerical check is sketched below).

Devolution with a matching grant

CG may try to influence the choice of the local subsidy using a matching grant.Footnote 10 This solution has some drawbacks: the asymmetry of information that prevents Central Government from providing the optimal quantity of y may also affect the optimal grant-setting decision, for two main reasons:
coordination problems, fiscal illusion and spillovers: due to the specific characteristics of y, any change in \(Q_{j}\) affects the level of utility in authority \(-j\) and its decision on \(Q_{-j}\). The matching grant introduces a further interdependence in decisions, because the level of local expenditure has an impact on the national tax rate, hence on the welfare of each jurisdiction. If local decision makers misperceive the effects of their actions, CG may be unable to attain FB, even when it can observe the reaction functions of LGs;
asymmetry of information: CG cannot observe local preference parameters and the reaction function of each local government.
The environment is characterised by the following assumptions: y is subsidised by LG, which does not take into account the spillovers created by its production; CG influences the behaviour of LGs using a matching grant.
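Before turning to the grant game, the sketch below gives the numerical check of \(\Delta W_{F}\) announced above (Python; illustrative parameters, not from the paper): welfare is evaluated at the decentralised user charges \(p-\theta z_j\) from Appendix 2 and the gap from FB is compared with \(\frac{k^{2}\theta }{4\beta }(z_{1}^{2}+z_{2}^{2})\).

def welfare(x1, x2, theta, z1, z2, p, beta, k):
    """Welfare net of Y for user charges x1, x2, good quality theta, true tastes z1, z2."""
    W, x, z = 0.0, (x1, x2), (z1, z2)
    for j in (0, 1):
        Q_own = 0.5 * (1.0 - x[j] / (theta * beta))
        Q_other = 0.5 * (1.0 - x[1 - j] / (theta * beta))
        W += (
            -(p - x[j]) * Q_own
            + theta * (beta - x[j] / theta) ** 2 / (4 * beta)
            + z[j] * theta * (Q_own + k * Q_other)
        )
    return W

p, beta, theta, k = 0.8, 1.0, 1.2, 0.5   # illustrative values only
z1, z2 = 0.3, 0.2

# First Best: subsidy theta*(z_j + k*z_{-j}) in each region (Table 1).
W_fb = welfare(p - theta * (z1 + k * z2), p - theta * (z2 + k * z1),
               theta, z1, z2, p, beta, k)

# Pure devolution: each LG ignores spillovers and subsidises only theta*z_j (Appendix 2).
W_f = welfare(p - theta * z1, p - theta * z2, theta, z1, z2, p, beta, k)

loss_numeric = W_fb - W_f
loss_formula = k**2 * theta * (z1**2 + z2**2) / (4 * beta)
print(f"numeric loss : {loss_numeric:.6f}")
print(f"formula DW_F : {loss_formula:.6f}")

Up to floating-point error the two numbers coincide, consistent with the claim that the loss from pure devolution is driven entirely by the spillover term k.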
The timing of the game is as follows: (a) CG sets the grant to maximise total welfare using its beliefs on LGs' behaviour and users' preferences; (b) LGs set their reaction function and their local tax rate. Although CG cannot observe some relevant parameters, LGs are followers, i.e. we rule out the possibility that they may act strategically in setting their reaction function.Footnote 11 The problem can be solved through backward induction: in the first stage LG decisions and reactions to a grant setting are considered; in the second stage CG finds the optimal grant, given its information set. The analytical model is presented in Appendix 3. In what follows we discuss the intuition behind the findings.

LG reaction function

LG receives a matching grant at rate \((1-\rho _{j})p\) from CG. If it thinks that the good should be further subsidised, it will introduce a supplementary subsidy at rate \(\eta _{j}\), to be financed by a proportional tax on local income at rate \(\tau _{j}\). The user charge for the service will be equal to \(p(\rho _{j}-\eta _{j})\). The LG's decision has a twofold impact on total welfare: on the revenue side it will change the national tax rate; on the expenditure side it will alter the quantity of the impure public good. This is one of the novel elements of our model: given the nature of y, each LG has to foresee the behaviour of the other local authority and should take into account the impact of increasing its expenditure. These effects may not be correctly perceived by LGs. In our model we consider three alternative behaviours for the LGs.
(FC) Each LG thinks that the other local authority will replicate the same strategy (full coordination—FC), i.e. it will subsidise local production by the same amount. On the revenue side the overall change in the national tax rate is taken into account and on the expenditure side for each j the quantity \(Q_{-j}\) is updated accordingly.
(PC) Each LG thinks that the other local government does not further subsidise the good (partial coordination—PC). In this case the quantity \(Q_{-j}\) will not change, while a one-sided correction of the national tax rate is considered.
(FR) Each LG thinks that the effects of its expenditure on the tax rate are marginal, so that its decision influences neither the rate, nor Q (free rider, FR).
The latter hypothesis may not be reasonable in a model with two local authorities, but in a more general context where the number of jurisdictions is fairly large this behaviour may be quite plausible. It is interesting to note that this is the hypothesis that has been used by the traditional literature in defining grants in the presence of spillovers. The detailed derivation of the formulas for the local subsidy \(p\,\eta _j\) is presented in Appendix 3 and reported in Table 4. For each LG behaviour two values are reported, because a second source of asymmetry of information has to be taken into account. The actual subsidy set at local level also depends on the local preferences \(z_{j}\), which cannot be observed; as in section "Centralisation" we assume that an estimate z, equal for both local authorities, is used by CG to guess the subsidy level that will be set by LGs.
Table 4 Local subsidy in case of devolution with a matching grant
From Table 4 we note that the subsidy set by an LG behaving as a FR is higher than that in the PC case, as one might expect. A "free rider" behaviour implies that the LG does not take into account the increase in the national tax rate that its action is causing, i.e.
the LG underestimates the tax price for good y and will be prepared to subsidise it at a higher rate. A more general conclusion cannot be drawn: the relative effects of the spillovers and of the national matching grant will determine the result. The use of a matching grant may allow CG to improve on pure devolution only if the grant is set correctly, but CG can observe only a subset of the parameters. For this reason, it will have to devise a strategy to minimise the negative effects due to the lack of information.

Grant setting

In the second stage CG has to set the matching grant, based on its beliefs about the behaviour of the LGs and its estimate of the unobservable preferences. We assume that CG expects LGs to have the same behaviour (i.e. both are either FC, PC or FR), thus the matching grant will be equal for the two regions (i.e. \(\rho =\rho _1=\rho _2\)). The estimated reactions of the LGs (the subsidy in the last column in Table 4) are then used to determine the welfare function \(\tilde{W}_{\text {B}}(\rho )\) for B=FC, PC, FR using (3). CG has to decide which of the LG reactions is more plausible and assigns a probability \(\pi ^{\text {B}}\) to each of the three possible reactions; the matching grant \(\rho\) is then found by maximising the expected welfare \(E(W) = \sum \nolimits _{\text {B}} \pi _{\text {B}} \tilde{W}_{\text {B}}(\rho )\). The analytic derivation of the optimal grant in the general case is presented in Appendix 3. The solution depends on the probabilities \(\pi ^{\text {B}}\); in Table 5 the optimal subsidies for the following relevant cases are shown: CG believes that LGs will behave as either FC, PC or FR (i.e. \(\pi ^{\text {B}}=1;\pi ^{-\text {B}}=0\) for each possible value of B) and the case where CG assigns equal probability to each of the behaviours, i.e. \(\pi ^{\text {B}}=\frac{1}{3}\) for all B (the abbreviation "Equi" is used for this case). Note that if LGs act as FC, CG cannot influence the expected welfare and no matching grant will be used. For PC the grant is twice the size of that for FR, as one might expect. Finally, if all the reaction functions are taken into consideration with equal weight (Equi), the optimal grant is slightly higher than for FR, but quite close. The traditional literature and most actual grant formulae use the FR assumption to model the behaviour of LGs. FR behaviour has a boosting effect on expenditure; by assuming the worst scenario in terms of effects on expenditure, CG tries to reduce the negative impact of LG choices on its expenditure.
Table 5 Matching grant

Ex-post welfare analysis

Once the grant has been set by CG, the LGs will further subsidise the good following the scheme presented in the second column in Table 4. In general, for each case of the grant setting, three different reactions of the LGs are possible. The state contingent solution is presented in Table 8 in Appendix 3. In what follows we examine the results from a more qualitative point of view; the discussion will be supported by a graphical visualisation of the main findings. Let us start by examining the case where CG assumes that the LGs' reaction is of the "fully coordinated" type (FC). It turns out that the difference of the user charge with respect to the FB case does not depend on the CG grant, which will therefore be set to zero. If the action of the LGs is either PC or FR, the outcome of the "pure" devolution case is replicated.
The difference in quantities with respect to the FB case is:
$$\Delta Q^F_{j} = \frac{k \,z_{-j}}{2 \beta } , \quad \Delta Q^F = \frac{k}{\beta } \frac{z_1+z_2}{2}$$
and the welfare difference is:
$$\Delta W_F = \frac{k^2 \theta }{4\beta } \left( z_1^2+z_2^2\right).$$
Note also that, if the reaction of LGs is FC, the outcome does not depend upon the action of CG: the total quantity is equal to the one in FB, but if the preferences in the two regions differ, it is not correctly distributed among them. In fact the difference in each region is \(\frac{k}{2\beta } (z_j-z_{-j})\) and this causes a welfare loss with respect to the FB case equal to \(\frac{k^2 \theta }{2\beta } (z_1-z_2)^2\). If instead the LGs' reaction is either PC or FR, CG can reduce the difference in the subsidy and total quantity using a matching grant, by an amount that depends on z. If CG's beliefs are fulfilled, the result is the same in both cases (PC and FR) and, if \(z=\frac{z_{1}+z_{2}}{2}\), the total quantity produced is optimal, while the welfare loss is half the one obtained when LGs react as FC. Again, the total quantity is optimal, but its distribution across local authorities is different from FB and the welfare level is lower.
Fig. 1 Comparison of the mean welfare loss produced by the matching grant for various levels of z and under the different assumptions on the behaviour of LGs
When the reaction of the local authorities is uncertain, the analysis has to be carried out by comparing the mean value of the welfare functions. In all cases the term \(\frac{k^{2}\theta }{\beta }\) can be factored out, and the comparison only depends on z, i.e. on the quality of the information that CG has on the preferences of the two regions. The analytical comparison can be made by standard algebraic calculations; Fig. 1 illustrates the results by showing the relative position of the average welfare losses (with respect to FB) under the different assumptions of CG about the reaction function of LGs. With the exception of the FC case, welfare losses are convex in z. The assumption that LGs react as PC minimises the welfare loss only if CG considerably underestimates the average \(\bar{z}=\frac{z_{1}+z_{2}}{2}\) of local preferences \(\left(z < \frac{3}{4}\bar{z}\right)\). Let us call \(\Delta W_{\text {FR}}\) and \(\Delta W_{\text {Equi}}\) the ex-post average welfare differences (the last column in Table 8) under the two assumptions "FR" and "Equi" of CG on the behaviour of LGs. If z ranges in \(\left[ \frac{3}{4}\bar{z},\frac{12}{11}\bar{z}\right]\) then \(\Delta W_{\text {Equi}}\) is the lowest and has a minimum for \(z=\bar{z}\). The welfare loss that can be expected using FR performs better for higher values of z and has its minimum value for \(z=\frac{6}{5}\bar{z}\). \(\Delta W_{\text {Equi}}\) increases at a higher rate than \(\Delta W_{\text {FR}}\) as the distance of z from \(\bar{z}\) increases, because the grant under FR is lower than with the other assumptions. To the left of \(\bar{z}\) the grant may be too low to make local authorities react optimally and local preferences are underestimated. To the right of \(\bar{z}\) the two effects may offset each other. The two welfare losses are equal for \(z=\frac{12}{11}\bar{z}\); this implies that for values of z lower than or very close to the mean, using a grant that minimises the expected welfare loss is preferred to assuming that LGs are free riders.
In setting the matching grant, most traditional literature on fiscal federalism implicitly or explicitlyFootnote 12 assumes that CG may observe local preferences on average and sets the grant as if local authorities were free riders. If CG can observe the mean of true preferences (i.e. \(z=\bar{z}\)), it should use the grant that minimises the welfare loss, but the mistake made using FR is small. The comparison with the "pure devolution" case is less clear-cut: the welfare loss \(\Delta W_F\) does not depend on z and in a graphical comparison similar to the one in Fig. 1 it is represented as a horizontal line. Its relative position in the picture depends on the ratio of the preferences in the two regions. If the ratio is not too high, \(\Delta W_F > \Delta W_{FC}\) and obviously any solution with a matching grant is preferable. As the ratio gets larger, "pure devolution" could anyhow be a viable option only in extreme cases, where either z greatly underestimates \(\bar{z}\), or the magnitude of the two parameters \(z_1\) and \(z_2\) is extremely different.Footnote 13 Thus, ruling out unrepresentative cases, we can conclude that a matching grant generally improves welfare, i.e. in the presence of spillovers it is optimal for CG to induce LGs to change their expenditure patterns using a matching grant.

Centralisation versus devolution

Centralisation correctly takes into account spillovers, but it never allows FB to be reached because goods produced under centralisation have the lowest quality level \((\theta _c=1)\). On the other hand, devolution has drawbacks in terms of coordination, because LGs are unable to take into account the utility that users outside their jurisdiction attach to their production. In the previous section we have shown that "pure devolution" is not optimal: CG intervention with a matching grant improves welfare. In this section we compare centralisation with this model of devolution. The welfare loss in the centralised decision depends on the quality gap (\(\theta\)) and on the difference between the true preferences for the public good (\(z_{1}\) and \(z_{2}\)) and the estimate used by CG (z). The one in devolution depends on the information CG has on local preferences and on the LGs' reaction to the grant.
Table 6 Welfare loss comparison
Table 6 summarises the welfare losses under the various assumptions. The welfare loss for centralisation derives from under-provision and from the lower utility each unit of the impure public good produces; for devolution the loss derives from the provision of the wrong quantity of the impure public good. The welfare loss for devolution is zero if \(k=0\): if the good produces spillovers \((k > 0)\) there might be scope for centralised provision. The two parameters \(\theta\) and k are thus fundamental in the comparison between \(\Delta W_{\text {C}}\), the welfare loss for centralisation, and \(\Delta W_{\text {D}}\), the welfare loss for devolution (the latter depends on CG's information set and we denote it generically by the subscript D). As for the dependence on \(\theta\), the function \(\Delta W_{\text {C}}\) is increasing and convex, while \(\Delta W_{\text {D}}\) is linear in this variable; both are quadratic polynomials in k. For \(\Delta W_{\text {C}}\), an increase in k reduces the loss deriving from choosing a uniform level of provision in the two local authorities, but as \(\theta\) increases, the loss caused by producing a good of relatively lower quality increases. The prevailing effect depends on the values of k and \(\theta\).
If we assume that CG can observe average local preferences, i.e. \(z=\frac{z_{1}+z_{2}}{2}\), the lowest value for \(\Delta W_{\text {D}}\) is equal to \(\Delta W_{\text {Equi}}\). In Appendix 4 it is shown that the welfare loss increases more rapidly for centralisation than for (the best case of) devolution. Therefore devolution is the best option for any \(\theta\) whenever \(\Delta W_{\text {C}}\ge \Delta W_{\text {Equi}}\) for \(\theta =1\). In all other cases there exists a (unique, but depending on k) value \(\theta ^{*}>1\) such that if \(\theta <\theta ^{*}\) centralisation performs better than (the best case of) devolution. The sign of \(\Delta W_{\text {C}}-\Delta W_{\text {Equi}}\) depends on k and the following can be proved (see Appendix 4):
$$\text{if } k \le k^{*} \text{ then } \Delta W_{\text {C}} > \Delta W_{\text {Equi}} \quad \forall\, \theta > 1;$$
$$\text{if } k > k^{*} \text{ then } \begin{cases} \Delta W_{\text {C}} < \Delta W_{\text {Equi}} & 1 \le \theta < \theta ^{*},\\ \Delta W_{\text {C}} = \Delta W_{\text {Equi}} & \theta = \theta ^{*},\\ \Delta W_{\text {C}} > \Delta W_{\text {Equi}} & \theta > \theta ^{*}. \end{cases}$$
The value of \(k^{*}\) is shown in (16); the one for \(\theta ^{*}\) can be found explicitly, but the expression is quite cumbersome. The above analysis has the following economic interpretation: for \(\theta =1\), the welfare loss in centralisation is due only to the use of z instead of \(z_{1}\) and \(z_{2}\), and this becomes less and less important as k increases. At the same time the loss in devolution increases if spillovers become important. For sufficiently high values of k centralisation should therefore be preferred. On the other hand, for a fixed value of k, an increase in \(\theta\) increases both welfare losses, but has a comparatively greater effect on \(\Delta W_{\text {C}}\), and this gradually (as the spillover increases) offsets the gain produced by the mitigation of the mistake caused by the uniform distribution of the good between the two regions. The dependence of the difference in welfare levels on both k and \(\theta\) is depicted in Figs. 2 and 3, while a numerical example is presented in Table 7. In Fig. 2 one can observe that as \(\theta\) increases (i.e. passing from graph (a) to graph (d)) the scope for centralisation gradually reduces. In (d) the two welfare losses are depicted for \(\theta >\theta _{m}\), where \(\theta _{m}\) is the value of \(\theta ^{*}\) for \(k=1\): in this case devolution always performs better than centralisation.
Fig. 2 Dependence on k of the welfare losses in centralisation and devolution for \(z=\bar{z}\) for different values of \(\theta\). For \(\theta =\theta _{m}\) the welfare losses at \(k=1\) are equal. a \(\theta =1\), b \(1 < \theta < \theta _{m}\), c \(\theta = \theta _{m}\), d \(\theta > \theta _{m}\)
Fig. 3 Dependence on \(\theta\) of the welfare losses in centralisation and devolution for \(z=\bar{z}\) for \(k<k^{*}\) and \(k>k^{*}\)
From the above results it also follows that the abscissa of the intersection between \(\Delta W_{\text {C}}\) and \(\Delta W_{\text {FR}}\) as functions of \(\theta\) is smaller than \(\theta ^{*}\). Thus, assuming that LGs are free riders leaves more scope for devolution than would be optimal.
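The trade-off can also be explored numerically. The sketch below (Python; illustrative parameters, not from the paper) scans \(\theta\) and compares the centralisation loss with the pure-devolution loss \(\Delta W_F\). Since the paper's own comparison uses the smaller, grant-corrected loss \(\Delta W_{\text {Equi}}\), whose expression is not reproduced in this extract, the crossing point found here is only an upper bound on the corresponding \(\theta ^{*}\): with \(\Delta W_{\text {Equi}}\) devolution would start to dominate even earlier.

def welfare(x1, x2, theta, z1, z2, p, beta, k):
    """Welfare net of Y for user charges x1, x2, good quality theta, true tastes z1, z2."""
    W, x, z = 0.0, (x1, x2), (z1, z2)
    for j in (0, 1):
        Q_own = 0.5 * (1.0 - x[j] / (theta * beta))
        Q_other = 0.5 * (1.0 - x[1 - j] / (theta * beta))
        W += (
            -(p - x[j]) * Q_own
            + theta * (beta - x[j] / theta) ** 2 / (4 * beta)
            + z[j] * theta * (Q_own + k * Q_other)
        )
    return W

p, beta, k = 0.8, 1.0, 0.6          # illustrative values only
z1, z2 = 0.3, 0.2                   # true (unobserved) preferences
z = (z1 + z2) / 2                   # CG observes the average, as assumed in the text

def loss_centralisation(theta):
    """FB welfare at quality theta minus welfare of the uniform centralised policy (quality 1)."""
    w_fb = welfare(p - theta * (z1 + k * z2), p - theta * (z2 + k * z1),
                   theta, z1, z2, p, beta, k)
    x_c = p - z * (1.0 + k)  # centralised user charge (Table 1 formulas with theta_C = 1, z_1 = z_2 = z)
    return w_fb - welfare(x_c, x_c, 1.0, z1, z2, p, beta, k)

def loss_pure_devolution(theta):
    """Closed-form loss of pure devolution: Delta W_F = k^2 * theta * (z1^2 + z2^2) / (4 * beta)."""
    return k**2 * theta * (z1**2 + z2**2) / (4 * beta)

for i in range(0, 11):
    theta = 1.0 + 0.01 * i
    dc, dd = loss_centralisation(theta), loss_pure_devolution(theta)
    tag = "centralisation" if dc < dd else "devolution"
    print(f"theta = {theta:.2f}  loss_C = {dc:.4f}  loss_F = {dd:.4f}  -> {tag}")

With these numbers centralisation survives only for \(\theta\) very close to 1 combined with a sizeable k, which is the qualitative message of the section; the exact threshold obviously depends on the parameter values chosen.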
If CG is able to predict the behaviour, but cannot observe preferences, the minimum productivity differential \(\theta ^{*}\) for which devolution should be preferred to centralisation may be evaluated using the same approach presented above. In this case, given that in devolution the reduction in welfare is lower than in the general case, \(\theta ^{*}\) will be lower than in the model just presented, but there will still be an area in which devolution is not the best option. These results are confirmed by the numerical simulations presented in Table 7. The first two simulations show the effect of the spillover k on the optimal choice for a good whose public good component is relatively high. Preferences are rather homogeneous across regions and the quality produced in devolution is substantially higher than in centralisation. In this case devolution with a matching grant (Equi) is the second best choice, but the loss produced by FR is rather low. On the contrary, if we consider a case where the quality differential is not important and preferences are heterogeneous, the public good aspect is less significant and centralisation is the best choice. In both cases, k increases the distance between the welfare losses.

Conclusions

This paper studies the conditions under which devolution is a second best choice for the provision of goods and services that are impure public goods with spillovers. Devolution calls for coordination in the actions of lower tiers to prevent a failure of the process, even when CG observes the relevant parameters and could use a matching grant to internalise spillovers. In the more general case where CG cannot observe the LGs' reaction functions and local preferences, devolution may cause large welfare losses. Their size depends on the spillover effect k, on the level of productivity \(\theta\) of the goods produced at local level and on how well CG can predict the parameters it cannot observe. Our main conclusion is that devolution should be used only if the productivity (in terms of utility) of the goods produced at local level is sufficiently high to counterbalance the welfare loss produced by coordination problems, asymmetry of information and spillover effects. In all the other cases a centralised solution might be more effective. The traditional results of the literature on fiscal federalism are replicated by our model: for a local public good \((k=0)\), devolution attains the FB solution. In this respect our model can be considered a generalisation of the framework proposed by the traditional literature. The second interesting result of our model is that CG correction through a matching grant is always welfare improving, provided the grant that minimises the expected loss is chosen. From a policy point of view the results of our model show that in the presence of asymmetry of information CG has to balance autonomy with control, and it may prefer control to autonomy also for welfare maximisation reasons. When the quality of the goods does not depend on the tier that produces it (\(\theta\) close to 1), but spillovers are important (\(k>k^{*}\)), a centralised solution might be optimal in a second best environment. This is the choice that has been made by some countries like the UK, where the provision of services like health and education is still very centralised, and in general it may explain why the level of decentralisation in decision making is lower in Europe than in the US.
The work presented in this paper could be extended in several directions: first of all, redistribution policies could be considered by introducing a specific income distribution at national and local level. This point is very important because the interaction between equalisation grants and fiscal federalism may produce perverse effects and it may be one of the causes of failures in the fiscal federalism structure, such as soft budget constraint policies (Breuillé and Vigneault 2010; Levaggi and Menoncin 2013). Secondly, political considerations could be introduced by assuming that political parties compete for votes on different objectives and in a different setting at national and local level.

An impure public good is a commodity (or a service) that has the double characteristic of being an appropriable good (as private goods are) and a public good. Let us for example consider education. Education improves the utility of students in terms of future earnings/better working conditions (this is the private good element), but it also improves the utility of the community in terms of skilled workers/professionals available to society (this is the public good element). Another important class of impure public goods are the so-called option goods, where the utility for society is derived from the availability of the good in case of need. A good example may be local transport. It produces utility for service users, but it also improves the utility of non-users (hence of the whole community) in terms of being an available option in case of need (see Musgrave and Musgrave 1989).
For a review of the debate see Oates (2005).
The Netherlands, for example, have decentralised about 93 % of expenditure on environmental protection and virtually nothing as concerns health. On the contrary, Denmark has almost fully decentralised health care, but only 60 % of expenditure on environmental protection.
See Oates (2005) for a review.
We share this assumption with Wildasin (2001). In a more general setting, a reasonable assumption would be to assume that the two local authorities have a different average income. For the purpose of this analysis such an assumption is irrelevant. Petretto (2000) and Levaggi and Menoncin (2014) present a model with these characteristics as regards the distribution of income.
The use of more general utility functions is presented in Levaggi (2010).
For example, let us consider local transport: the same number of bus services produces a different level of utility according to their hourly distribution within the day.
To understand this point, consider for example the case \(\alpha =0\): these individuals do not buy y, but as long as \(z_j\) is non-negative, they attribute a value to the production of that good.
In the fiscal federalism literature matching grants are transfers from higher to lower government tiers designed to complement sub-national contributions. They have a form similar to a price subsidy.
A matching grant is a form of subsidy to LGs proportional to their expenditure. In other words, the use of a matching grant means that a fraction of the total cost for the provision of y is borne by CG.
Wildasin (2001) and Köthenbürger (2008) show how local governments could play strategically in a game where the good to be produced is a local public good with spillovers.
As in Tresch (2002).
In fact \(\Delta W_F\ge \Delta W_{FC}\) whenever \(0.268\approx 2-\sqrt{3} \le \frac{z_1}{z_2} \le 2+\sqrt{3}\approx 3.732\), while \(\Delta W_F\) is lower than the minimum welfare loss obtained with some matching grant only if \(\frac{z_1}{z_2} > 29+2 \sqrt{210} \approx 57.983\) or \(\frac{z_1}{z_2} < 29-2 \sqrt{210}\approx 0.017\). For example, FC in the first column means that CG thinks that LG will react according to "full coordination" and CG will set the grant accordingly. Akai N, Mikami K (2006) Fiscal decentralization and centralization under majority rule: a normative analysis. Econ Syst 30(1):41–55 Alcidi C, Giovannini A, Infelise F, Ferrer JN (2014) Division of powers between the European Union, member states, candidate and some potential candidate countries, and local and regional authorities: fiscal decentralisation or federalism. European Union, Bruxelles. doi:10.2863/11797 Barrios S, Strobl E (2009) The dynamics of regional inequalities. Reg Sci Urban Econ 39(5):575–591 Besley T, Coate S (2003) Centralized versus decentralized provision of local public goods: a political economy approach. J Public Econ 87(12):2611–2637 Breuillé M-L, Vigneault M (2010) Overlapping soft budget constraints. J Urban Econ 67(3):259–269 Calsamiglia X, Garcia-Milà T, McGuire TJ (2006) Why do differences in the degree of fiscal decentralization endure? CESifo Working Paper Series 1877, CESifo Group, Univ Center for Economic Studies, Munich Costa-Font J, Greer S (eds) (2013) Federalism and decentralisation in European health and social care. Palgrave, London Dziobek CH, Gutierrez Mangas CA, Kufa P (2011) Measuring fiscal decentralization—exploring the IMFs databases. Tech. Rep. Working Paper No. 11/126, IMF, Washington King D (1984) Fiscal tiers: the economics of multi-level government. Allen & Unwin, London Köthenbürger M (2008) Revisiting the "decentralization theorem"—on the role of externalities. J Urban Econ 64(1):116–122 Levaggi R (2002) Decentralised budgeting procedures for public expenditure. Public Financ Rev 30(2):273–295 Levaggi R (2010) From local to global public goods: how should we write the utility function. Econ Model 27(5):1040–1042 Levaggi R, Menoncin F (2013) Soft budget constraints in health care: evidence from Italy. Eur J Health Econ 14(5):725–737 Levaggi R, Menoncin F (2014) Health care expenditure decisions in the presence of devolution and equalisation grants. Int J Health Care Financ Econ 14(4):355–368 Musgrave RA, Musgrave PB (1989) Public finance in theory and practice. Finance series, McGraw-Hill, Singapore Oates WE (1972) Fiscal federalism. Harcourt Brace Jovanovich, New York Oates WE (2005) Toward a second-generation theory of fiscal federalism. Int Tax Public Financ 12:349–373 OECD (2016) General government spending (indicator). doi:10.1787/a31cbf4d-en. Accessed 1 Mar 2016 Ogawa H, Wildasin DE (2009) Think locally, act locally: spillovers, spillbacks and efficient decentralized policy making. Am Econ Rev 99(4):1206–1217 Petretto A (2000) On the cost-benefit of the regionalisation of the national health service. Econ Gov 1:213–232 Sacchi A, Salotti S (2014) How regional inequality affects fiscal decentralisation: accounting for the autonomy of subcentral governments. Environ Plan C Gov Policy 32(1):1197–1220 Snoddon T, Wen J-F (2003) Grants structure in an intergovernmental fiscal game. Econ Gov 4:115–126 Sorens J (2014) Does fiscal federalism promote regional inequality? An empirical analysis of the OECD, 1980–2005. 
Reg Stud 48(2):239–253 Tanzi V (2009) The future of fiscal federalism and the need for global government: a reply to Roland Vaubel. Eur J Polit Econ 25(1):137–139 Thieben U (2003) Fiscal decentralisation and economic growth in high-income OECD countries. Fisc Stud 24(3):237–274 Tresch RD (2002) Public finance: a normative view, 2nd edn. Academic Press, San Diego Turati G, Montolio D, Piacenza M (2011) Fiscal decentralisation, private school funding, and students achievements. a tale from two roman catholic countries. Tech. Rep. Working Papers 2011/44, Institut d'Economia de Barcelona (IEB), Barcelona Vaubel R (2009) The future of fiscal federalism and the need for global government: a response to Vito Tanzi. Eur J Polit Econ 25(1):133–136 Weisner E (2003) Federalism in Latin American: from entitlements to markets. Inter-American Development Bank, Johns Hopkins University Press, Washington Wildasin D (2001) Externalities and bailouts: hard and soft budget constraints in intergovernmental fiscal relations. Public economics, EconWPA. http://www.EconPapers.repec.org/RePEc:wpa:wuwppe:0112002 Wildasin DE (2004) The institutions of federalism: towards an analytical framework. Natl Tax J LVII 2:247–272 The authors collaborated in the mathematical modelling of the problem and in drafting the manuscript. LL took charge of the mathematical aspects of the analysis. RL contributed the economic interpretation of the results. Both authors read and approved the final manuscript. Faculty of Science and Technology, Free University of Bolzano-Bozen, Piazza Università 1, 39100, Bolzano-Bozen, Italy Laura Levaggi Department of Economics and Management, University of Brescia, Via S. Faustino, 74b, 25122, Brescia, Italy Rosella Levaggi Correspondence to Rosella Levaggi. Appendix 1: Outcome of the FB ideal case The FB solution is obtained solving the problem: $$\begin{aligned}&\max _{\rho _{j}} \left[ Y - \frac{1}{2} \sum _{j=1}^{2}\left( 1-\rho _{j}\right) p \left( 1-\frac{\rho _j\,p}{\theta \beta }\right) +\sum _{j=1}^{2}\int _{\frac{\rho _{j}p}{\theta }}^{\beta } \left( \alpha \theta -\rho _{j}p\right) \frac{1}{2\beta }\text {d}\alpha \right. \nonumber \\&\left. \qquad +\,\frac{1}{2}\sum _{j=1}^{2}z_{j}\theta \left( 1-\frac{\rho _j\,p}{\theta \beta } + k \left( 1-\frac{\rho _{-j}\,p}{\theta \beta } \right) \right) \right] \end{aligned}$$ whose FOC is $$\frac{1}{2}p\frac{-p+z_{j}\theta +\rho _{j}p+\theta k\,z_{-j}}{\beta \theta }=0, \quad j=1,2.$$ The optimal price subsidy is \(p(1-\rho _{j}^{FB})= \theta (z_j+k z_{-j})\); the optimal quantity \(Q_j^{FB}\) and optimal welfare \(W^{FB}\) are then derived by substituting \(\rho _{1}^{FB}\) and \(\rho _{1}^{FB}\) in (1) and in the objective function in (5). Appendix 2: Solution of the pure devolution case The maximisation problem in this case is: $$\begin{aligned}&\max _{\nu _{j}} \left[ \frac{Y}{2} - \frac{1}{2} \left( 1-\nu _{j}\right) p \left( 1-\frac{\nu _{j} p}{\theta \beta }\right) +\int _{\frac{\nu _{j}p}{\theta }}^{\beta } \left( \alpha \theta -\nu _{j}p\right) \frac{1}{2\beta }\text {d}\alpha \nonumber \right. \\&\left. \quad \qquad +\,\frac{1}{2} z_{j}\theta \left( 1-\frac{\nu _j\,p}{\theta \beta } + 2 k Q_{-j} \right) \right] \end{aligned}$$ where \(Q_{-j}\) is constant with respect to \(\nu _j\). The FOC is $$\frac{1}{2}p\frac{-p+z_{j}\theta +\nu _{j}p}{\beta \theta }=0,$$ thus the optimal subsidy is \(p(1-\nu _{j}^{F})=\theta z_j\) and the quantity \(Q_j^{F}=\frac{1}{2} \left( 1-\frac{p-\theta z_j}{\theta \beta }\right)\). 
The welfare \(W^{F}\) is found by substituting the optimal values above in each objective function in (6) and summing the results. Appendix 3: Analysis of devolution with matching grant The contribution of the matching grant at rate \((1-\rho _j)p\) from CG and the supplementary local subsidy at rate \(\eta _j\) lower the user charge to the level \(p_u^j=p( \rho _j -\eta _j )\). CG finances its subsidy using a proportional tax at rate \(\bar{t}\), which is derived from the budget constraint (4). LG has to finance its subsidy using a proportional tax on local income at rate \(\tau _{j}\) and the local budget constraint is: $$\tau _{j}\frac{Y}{2} = \frac{1}{2} \eta _{j} p \left( 1 - \frac{p\left( \rho _{j}-\eta _{j}\right) }{\theta }\right) .$$ As discussed in "LG reaction function" section, LG decision has a twofold impact on total welfare: it changes the national tax rate and alters the quantity of available impure public good. CG has to form a belief on the behaviour of the LGs to estimate their reaction function and we focus our attention on three alternative options: (FC) every LG thinks that the other local authority sets the same subsidy \(p \eta _j\), presumes that the national tax rate grows to: $$t^{\text {FC}}=\overline{t}+\left( 1-\rho _{j}\right) p^{2}\frac{\eta _{j}}{\beta \theta Y}$$ and the quantity \(Q_{-j}\) is \(\frac{1}{2}\left( 1-\frac{p( \rho _{j}-\eta _{j})}{\theta \beta }\right)\). (PC) each LG thinks that the other local government does not further subsidise the good, thus assumes that the quantity \(Q_{-j}\) does not depend on \(\eta _j\) and the national tax rate is: $$t^{\text {PC}}=\overline{t}+\frac{1}{2}\left( 1-\rho _{j}\right) p^{2}\frac{\eta _{j}}{\beta \theta Y}.$$ (FR) each LG thinks that the effects of its expenditure on the tax rate are marginal and the decision does not influence either the national tax rate, nor \(Q{-j}\), thus $$t^{\text {FR}}=\overline{t}.$$ In order to reduce useless replications, we introduce some notation to write the optimisation problem for a generic LG behaviour in a compact form. For a superscript B, ranging in the set of the three different LG behaviours FC, PC and FR, we set \(s^{\text {B}}=0\) for B=PC or FR and \(s^{\text {FC}}=1\). Also, we denote by \(\zeta _j\) the preference parameter for the public good utility of y (so that we can choose \(\zeta _j=z_j\) when examining the exact reaction function of the LG and \(\zeta _j=z\) when considering the reaction function estimated from CG). $$\begin{aligned}&\max _{\eta _{j}} \left[ \frac{Y}{2}\left( 1-t^{\text {B}}-\tau _{j}\right) +\int _{\frac{p(\rho _{j}-\eta _{j})}{\theta }}^{\beta } \left( \alpha \theta -\left( \rho _{j}-\eta _{j}\right) p\right) \frac{1}{2\beta }\text {d}\alpha \nonumber \right. \\&\left. \quad \qquad +\,\zeta _j \theta \left( k\int _{\frac{p\,\rho _{j}}{\theta }}^{\beta }\frac{1}{2\beta }\text {d}\alpha +s^{\text {B}}k\int _{\frac{p(\rho _{j}-\eta _{j})}{\theta }}^{\frac{p\rho _{j}}{\theta }} \frac{1}{2\beta }\text {d}\alpha +\int _{\frac{p(\rho _{j}-\eta _{j})}{\theta }}^{\beta }\frac{1}{2\beta }\text {d}\alpha \right) \right] \nonumber \\&\quad t^{\text {B}} :(8); (9); (10)\quad \text {B}=\text {FC};\,\text {PC};\,\text {FR},\qquad \tau _{j}:\; \text {from }(7). 
\end{aligned}$$ The problem can be rewritten as: $$\begin{aligned}&\max _{\eta _{j }} \left[ \frac{Y}{2}\left( 1-\bar{t}\right) - \frac{m}{2} \left( 1-\rho _{j}\right) p^{2}\frac{\eta _{j}}{\beta \theta } -\frac{1}{2} \eta _{j}\,p \left( 1 - \frac{p(\rho _{j}-\eta _{j})}{\theta \beta }\right) \right. \\&\quad \qquad +\int _{\frac{p\left( \rho _{j}-\eta _j\right) }{\theta }}^{\beta }\left( \alpha \theta -\left( \rho _{j}-\eta _{j}\right) p\right) \frac{1}{2\beta }\text {d}\alpha \\&\left. \quad \qquad +\,\zeta _j \theta \left( k\int _{\frac{p\,\rho _{j}}{\theta }}^{\beta }\frac{1}{2\beta }\text {d}\alpha +sk\int _{\frac{p(\rho _{j}-\eta _{j})}{\theta }}^{\frac{p\,\rho _{j}}{\theta }}\frac{1}{2\beta }\text {d}\alpha +\int _{\frac{p(\rho _{i}-\eta _{i})}{\theta }}^{\beta }\frac{1}{2\beta }\text {d}\alpha \right) \right] \end{aligned}$$ with: \(m=2,\;s=1\) in FC, \(m=1,\;s=0\) in PC and \(m=0,\;s=0\) in FR. The FOC is: $$\frac{1}{2}\zeta _j p\frac{ks+1}{\beta }-\frac{1}{2\beta }p^{2}\frac{m-m\rho _{i}+\eta _{j}}{\theta }=0,$$ so that the optimal value in non singular cases is given by $$\eta _{i}^{\text {B}}=\eta _{i}^{\text {B}}(\rho _j;\zeta _j) =\frac{\zeta _j \theta (1+ks)}{p}-\frac{m}{2}(1-\rho _{j}).$$ Substituting the values for m and s for the three cases we obtain: $$\eta _{j}^{\text {FC}} (\rho _j;\zeta _j)=\frac{\theta \zeta _j (1+k)}{p}-(1-\rho _{j});$$ $$\eta _{j}^{\text {PC}} (\rho _j;\zeta _j)=\frac{\zeta _j \theta }{p}-\frac{1-\rho _{j}}{2};$$ $$\eta _{j}^{\text {FR}} (\rho _j;\zeta _j)=\frac{\zeta _j \theta }{p}.$$ Central Government grant We assume that CG expects LGs to have the same behaviour and assigns a probability \(\pi ^{\text {B}}\) to each of the three possible reactions (FC, PC and FR). CG will choose a grant \(\rho\) equal for the two regions so that the expected welfare is maximised. The problem is thus written as: $$\begin{aligned}&\max _{\rho } \sum _{\text {B=FC,PC,FR}} \pi ^{\text {B}} \left( Y\left( 1-t^{\text {B}}-\tau ^{\text {B}}\right) +\int _{\frac{\left( \rho -\eta ^{\text {B}}\right) p}{\theta }}^{\beta } \left( \alpha \theta -\left( \rho -\eta ^{\text {B}}\right) p\right) \frac{1}{\beta } \text {d}\alpha \right. \\&\left. \quad \qquad +\,z(1+k)\theta \int _{\frac{\left( \rho -\eta ^{\text {B}}\right) p}{\theta }}^{\beta } \frac{1}{\beta } \text {d}\alpha \right) \nonumber \\&t^{\text {B}} =\frac{p}{\beta \theta }\frac{(1-\rho )\left( \beta \theta -p\left( \rho -\eta ^{\text {B}}\right) \right) }{Y};\quad \tau ^{\text {B}} =\eta ^{\text {B}}p\frac{\beta \theta -p\left( \rho -\eta ^{\text {B}}\right) }{\beta \theta Y};\nonumber \\&\eta ^{\text {FC}} =\frac{\theta z(1+k)}{p}-(1-\rho );\quad \eta ^{\text {PC}}=\frac{z\theta }{p}-\frac{1-\rho }{2};\quad \eta ^{\text {FR}}=\frac{z\theta }{p}.\nonumber \end{aligned}$$ The derivative with respect to \(\rho\) is: $$\frac{p^{2}(1-\rho )}{4}\frac{4\pi ^{\text {FR}}+\pi ^{\text {PC}}}{\beta \theta } -\frac{zkp}{4}\frac{4\pi ^{\text {FR}}+2\pi ^{\text {PC}}}{\beta }.$$ If \(\pi ^{\text {FR}}=\pi ^{\text {PC}}=0\) (and thus \(\pi ^{\text {FC}}=1\)) the welfare does not depend on \(\rho\), i.e. CG cannot influence the welfare using a matching grant, thus \(1-\rho\) will be set to zero. 
In all the other cases the optimal grant is equal to: $$p\left( 1-\rho ^{*}\right) =kz\theta \left( 1+\frac{\pi ^{\text {PC}}}{\pi ^{\text {PC}}+4\pi ^{\text {FR}}}\right) .$$ The setting of the grant depends on the probabilities \(\pi ^{\text {B}}\); in the analysis the four following cases will be taken into consideration: CG believes that LGs will behave as either FC, PC or FR (i.e. \(\pi ^{\text {B}}=1;\pi ^{-\text {B}}=0\) for each possible value of B) and the case where CG assigns equal probability to each of the behaviours, i.e. \(\pi ^{\text {B}}=\frac{1}{3}\) for all B (the abbreviation "Equi" is used for this case). Once the grant has been set by CG, the LGs will further subsidise the good following the scheme presented in (12). In general terms, if we define \(\gamma _{j}=p(1-\rho _{j}+\eta _{j})\), since $$Q_j = \int _{\frac{p-\gamma _{i}}{\theta }}^{\beta }\frac{1}{2\beta }\text {d}\alpha = \frac{1}{2} \left( 1-\frac{p}{\beta \theta }+\frac{\gamma _j}{\beta \theta }\right) ,$$ $$Q=Q_1+Q_2=1-\frac{p}{\beta \theta }+\frac{1}{\beta \theta }\frac{\gamma _{1}+\gamma _{2}}{2}$$ $$\begin{aligned} W&=Y-p+\frac{\beta \theta }{2}+\frac{p^{2}}{2\beta \theta }+\frac{1}{2\beta }(1+k)(z_{1}+z_{2})(\beta \theta -p) -\frac{1}{4\beta \theta }\left( \gamma _{1}^{2}+\gamma _{2}^{2}\right) \\&\quad +\,\frac{1}{2\beta }\left[ \gamma _{1}\left( z_{1}+kz_{2}\right) +\gamma _{2}\left( z_{1}+kz_{2}\right) \right] \end{aligned}$$ Since in this notation we have \(\gamma _{j}^{\text {FB}}=\theta (z_{j}+kz_{-j})\), for any two couples \((\tilde{\gamma }_{1},\tilde{\gamma }_{2})\), \((\bar{\gamma }_{1},\bar{\gamma }_{2})\) the welfare difference can be written as $$\tilde{W}-\bar{W}=\frac{1}{2\beta \theta }\sum _{j=1}^{2}\left( \bar{\gamma _{j}}-\tilde{\gamma _{j}}\right) \left( \frac{\bar{\gamma _{j}}+\tilde{\gamma _{j}}}{2}-\gamma _{j}^{\text {FB}}\right)$$ and the total quantity and welfare differences wrt the FB case amount to: $$\begin{aligned} Q^{\text {FB}}-Q&=\frac{1}{\beta \theta }\left( \frac{\gamma _{1}^{\text {FB}} +\gamma _{2}^{\text {FB}}}{2}-\frac{\gamma _{1}+\gamma _{2}}{2}\right) \\ W^{\text {FB}}-W&=\frac{1}{4\beta \theta }\sum _{j=1}^{2}\left( \gamma _{j}-\gamma _{j}^{FB}\right) ^{2}. \end{aligned}$$ For each case of the grant setting, three different reactions of the LGs are possible. Ex post, the average welfare loss wrt FB is: $$\Delta W=\frac{1}{12\beta \theta }\sum _{\text {B},j}\left( \gamma _{j}^{\text {B}}-\gamma _{j}^{\text {FB}}\right) ^{2},$$ where \(\gamma _{j}^{\text {B}}=p(1-\rho ^{*}+\eta _{j}^{B}(\rho ^{*}))\) is the unit subsidy. The overall state contingent solution is presented in detail in Table 8. The first column reports CG beliefs about LG's reaction function;Footnote 14 then for each case in the second column are listed the three possible LG behaviours. Reading further from left to right, for each (ex-post) case (CG belief, LG behaviour) the differences with respect to the FB case of the total subsidy \(\gamma _j\) for region j, the total produced quantity Q and the welfare are shown. Finally, the last column presents, for each possible entry of the first column, the mean ex-post welfare loss. Lines in bolditalics refer to the case where CG beliefs are fulfilled and the only mistake that CG makes is due to its imperfect observation of local preferences. 
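As a quick check on the optimal grant \(p(1-\rho ^{*})\) derived above, setting the derivative of the expected welfare with respect to \(\rho\) equal to zero and solving for \(p(1-\rho)\) gives $$\frac{p^{2}(1-\rho )}{4}\frac{4\pi ^{\text {FR}}+\pi ^{\text {PC}}}{\beta \theta }=\frac{zkp}{4}\frac{4\pi ^{\text {FR}}+2\pi ^{\text {PC}}}{\beta }\;\Longrightarrow \;p\left( 1-\rho ^{*}\right) =kz\theta \,\frac{4\pi ^{\text {FR}}+2\pi ^{\text {PC}}}{4\pi ^{\text {FR}}+\pi ^{\text {PC}}}=kz\theta \left( 1+\frac{\pi ^{\text {PC}}}{\pi ^{\text {PC}}+4\pi ^{\text {FR}}}\right) ,$$ which coincides with the expression stated at the beginning of this block.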
Table 7 Numerical simulation Table 8 Coordination problem under information asymmetry Appendix 4: Centralisation versus devolution From the expression of the welfare \(W_C\) (Table 2) we have $$\begin{aligned} \Delta W_{\text {C}}&= {} W^{\text {FB}}-W^{\text {C}}\\&= {} \left( \theta -1\right) \left( \frac{\beta +\left( z_{1}+z_{2}\right) \left( 1+k\right) }{2}-\frac{p^{2}}{2\beta \theta }\right) \\&\quad +\,\theta \frac{\left( z_{1}+kz_{2}\right) ^{2}+\left( z_{2}+kz_{1}\right) ^{2}}{4\beta }+\frac{z\left( z-z_{1}-z_{2}\right) \left( 1+k\right) ^{2}}{2\beta } \end{aligned}$$ $$\frac{\partial \Delta W_{\text {C}}}{\partial \theta }=\frac{\beta }{2}+\frac{z_{1}+z_{2}}{2}(1+k)+\frac{(z_{1}+kz_{2})^{2}+(z_{2}+kz_{1})^{2}}{4\beta }-\frac{p^{2}}{2\beta \theta ^{2}}.$$ and since \(\beta \theta >p\) the function \(\Delta W_{\text {C}}\) is strictly increasing and convex in \(\theta\). From Table 8 we have $$\frac{\partial \Delta W_{\text {Equi}}}{\partial \theta }=\frac{k^{2}}{\beta }\left( \frac{1}{3}\left( z_{1}^{2}-z_{1}z_{2}+z_{2}^{2}\right) +\frac{3}{10}z\left( z-(z_{1}+z_{2})\right) \right)$$ $$\frac{\partial \Delta W_{\text {FR}}}{\partial \theta }=\frac{k^{2}}{\beta }\left( \frac{1}{3}\left( z_{1}^{2}-z_{1}z_{2}+z_{2}^{2}\right) +\frac{1}{4}z\left( \frac{5}{6}z-(z_{1}+z_{2})\right) \right) ,$$ so that \(\frac{\partial \Delta W_{\text {Equi}}}{\partial \theta }<\frac{\partial \Delta W_{\text {FR}}}{\partial \theta }\) whenever \(z<\frac{6}{11}(z_{1}+z_{2})=\frac{12}{11}\frac{z_{1}+z_{2}}{2}\). If CG is able to observe the mean and it can set \(z=\frac{z_{1}+z_{2}}{2}\), since \(k\in [0,1]\), by standard algebraic manipulations we have $$\begin{aligned} \frac{\partial \Delta W_{\text {Equi}}}{\partial \theta }&=\frac{k^{2}}{4\beta }\left( z_{1}^{2}+z_{2}^{2}\right) +\frac{k^{2}}{120\beta }\left( z_{1}^{2}+z_{2}^{2}\right) -\frac{29k^{2}}{60\beta }z_{1}z_{2}\\ \frac{\partial \Delta W_{\text {FR}}}{\partial \theta }&=\frac{\partial \Delta W_{\text {Equi}}}{\partial \theta }+\frac{k^{2}}{480\beta }(z_{1}+z_{2})^{2}\\&=\frac{k^{2}}{4\beta }\left( z_{1}^{2}+z_{2}^{2}\right) +\frac{k^{2}}{96\beta }\left( z_{1}^{2}+z_{2}^{2}\right) -\frac{23k^{2}}{48\beta }z_{1}z_{2}\\&<\frac{1}{4\beta }\left[ (z_{1}+kz_{2})^{2}+(z_{2}+kz_{1})^{2}\right] <\frac{\partial \Delta W_{\text {C}}}{\partial \theta }.\\ \end{aligned}$$ The situation remains the same if \(z=\nu \frac{z_{1}+z_{2}}{2}\) for \(\nu \le \frac{12}{11}\). For higher values of \(\nu\), as obvious also from Fig. 1, the relative positions of the two devolution cases have to be interchanged. Apart from this, unless \(\nu\) is very large, the considerations done above for \(\nu =1\) are still valid in the general case. From an analytical point of view, it is obvious that devolution is the best option for all values of \(\theta\) whenever \(\Delta W_{\text {C}}\ge \Delta W_{\text {Equi}}\) for \(\theta =1\). In all other cases there exists a (unique but depending on k) value \(\theta ^{*}>1\) for which if \(\theta <\theta ^{*}\) centralisation performs better than (the best case of) devolution. The sign of \(\Delta W_{\text {C}}-\Delta W_{\text {Equi}}\) for \(\theta =1\) depends in turn on the value of k. 
Since for \(k=0\) we have \(\Delta W_{\text {C}}>0=\Delta W_{\text {Equi}}\), while for \(k=1\) it holds \(\Delta W_{\text {C}}=0<\Delta W_{\text {Equi}}\), there exists a (unique) value \(k^{*}\) such that \(\Delta W_{\text {C}}>\Delta W_{\text {Equi}}\) whenever \(k\in [0,k^{*})\) and \(\Delta W_{\text {C}}<\Delta W_{\text {Equi}}\) if \(k\in (k^{*},1]\). Thus: The value of \(k^{*}\) can be found analytically and has the following expression: $$\frac{\sqrt{15}\cdot |z_{1}-z_{2}|}{4}\cdot \frac{\sqrt{31(z_{1}-z_{2})^{2}+4z_{1}z_{2}} -\sqrt{15}\cdot |z_{1}-z_{2}|}{4(z_{1}-z_{2})^{2}+z_{1}z_{2}}.$$ Levaggi, L., Levaggi, R. Devolution and grant-in-aid design for the provision of impure public goods. SpringerPlus 5, 282 (2016). https://doi.org/10.1186/s40064-016-1919-9 Impure public goods Spillovers
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지) Asian Australasian Association of Animal Production Societies (아세아태평양축산학회) Effects of Fermented Soy Protein on Growth Performance and Blood Protein Contents in Nursery Pigs Min, B.J. (Department of Animal Resource and Science, Dankook University) ; Cho, J.H. (Department of Animal Resource and Science, Dankook University) ; Chen, Y.J. (Department of Animal Resource and Science, Dankook University) ; Kim, H.J. (Department of Animal Resource and Science, Dankook University) ; Yoo, J.S. (Department of Animal Resource and Science, Dankook University) ; Lee, C.Y. (Reigional Animal Industry Center, Jinju National University) ; Park, B.C. (Gyeongnam Province Advanced Swine Research Institute) ; Lee, J.H. (Korea National Arboretum) ; Kim, I.H. (Department of Animal Resource and Science, Dankook University) https://doi.org/10.5713/ajas.2009.80240 Fifty-four cross-bred ((Landrace${\times}$Yorkshire)${\times}$Duroc) pigs (13.47${\pm}$0.03 kg average initial BW) were evaluated in a 42 d growth assay to determine the effects of the fermented soy product (FSP). The dietary treatments were: FSP 0 (corn-soybean basal diet), FSP 2.5 (FSP 0 amended with 2.5% FSP), and FSP 5 (FSP 0 amended with 5% FSP). The body weight at the end of the experiment increased linearly (p = 0.05) as the FSP levels in the diets increased. In addition, the ADG and G/F ratio also increased (linear effect, p = 0.06) as the levels of FSP increased. However, there was no effect of FSP on ADFI or DM digestibility (p>0.05). Furthermore, the N digestibility increased as the FSP levels increased (linear effect, p = 0.003), although the total protein concentration in the blood was not affected by FSP (p>0.05). Additionally, the albumin concentration was higher in pigs fed diets that contained 2.5% FSP than in pigs in the control group or the FSP 5 group (quadratic effect, p = 0.07). The creatinine concentrations were also evaluated at d 42 and found to be greater in pigs that received the FSP 2.5 diet (quadratic effect, p = 0.09). Moreover, the creatinine concentration increased linearly in response to FSP treatment (p = 0.09). Finally, although the BUN concentration on the final day of the experiment was greater in pigs that received the FSP 2.5 diet (quadratic effect, p = 0.10), there were no incremental differences in BUN concentrations among groups (p>0.05). Taken together, the results of this study indicate that feeding FSP to pigs during the late nursery phase improves growth performance and N digestibility. Fermented Soy Protein;Growth Performance;Blood Protein Contents;Nursery Pigs Supported by : Dankook University
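The linear and quadratic treatment effects reported in the abstract above (e.g. "linear effect, p = 0.06" for ADG) are the standard orthogonal polynomial contrasts across the three equally spaced FSP inclusion levels (0, 2.5 and 5 %). The sketch below illustrates how such contrasts can be computed; the group means, pooled standard error, and degrees of freedom are hypothetical placeholders, not values from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical group-mean ADG (g/day) for the three diets: FSP 0, FSP 2.5, FSP 5
means = np.array([480.0, 497.0, 512.0])   # placeholder means, not study data
sem_pooled = 9.0                          # placeholder pooled SE of a group mean
df_error = 15                             # placeholder error degrees of freedom

# Orthogonal polynomial coefficients for 3 equally spaced treatment levels
linear = np.array([-1.0, 0.0, 1.0])
quadratic = np.array([1.0, -2.0, 1.0])

def contrast_test(coefs, means, sem, df):
    estimate = np.dot(coefs, means)               # contrast estimate
    se = sem * np.sqrt(np.sum(coefs ** 2))        # SE of the contrast
    t_stat = estimate / se
    p_value = 2.0 * stats.t.sf(abs(t_stat), df)   # two-sided p-value
    return estimate, t_stat, p_value

for name, coefs in [("linear", linear), ("quadratic", quadratic)]:
    est, t_stat, p = contrast_test(coefs, means, sem_pooled, df_error)
    print(f"{name:9s} contrast: estimate={est:7.1f}  t={t_stat:5.2f}  p={p:.3f}")
```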
Galaxy And Mass Assembly (GAMA): The Bright Void Galaxy Population in the Optical and Mid-IR Penny, SJ, Brown, MJI, Pimbblet, KA, Cluver, ME, Croton, DJ, Owers, MS, Lange, R, Alpaslan, M, Baldry, IK, Bland-Hawthorn, J, Brough, S, Driver, SP, Holwerda, BW, Hopkins, AM, Jarrett, TH, Jones, DH, Kelvin, LS, Lara-Lopez, MA, Liske, J, Lopez-Sanchez, AR, Loveday, J, Meyer, M, Norberg, P, Robotham, ASG and Rodrigues, M (2015) Galaxy And Mass Assembly (GAMA): The Bright Void Galaxy Population in the Optical and Mid-IR. Monthly Notices of the Royal Astronomical Society, 453 (4). pp. 3519-3539. ISSN 0035-8711 Publisher URL: http://dx.doi.org/10.1093/mnras/stv1926 We examine the properties of galaxies in the Galaxies and Mass Assembly (GAMA) survey located in voids with radii $>10~h^{-1}$ Mpc. Utilising the GAMA equatorial survey, 592 void galaxies are identified out to z~0.1 brighter than $M_{r} = -18.4$, our magnitude completeness limit. Using the $W_{\rm{H\alpha}}$ vs. [NII]/H$\alpha$ (WHAN) line strength diagnostic diagram, we classify their spectra as star forming, AGN, or dominated by old stellar populations. For objects more massive than $5\times10^{9}$ M$_{\odot}$, we identify a sample of 26 void galaxies with old stellar populations classed as passive and retired galaxies in the WHAN diagnostic diagram, else they lack any emission lines in their spectra. When matched to WISE mid-IR photometry, these passive and retired galaxies exhibit a range of mid-IR colour, with a number of void galaxies exhibiting [4.6]-[12] colours inconsistent with completely quenched stellar populations, with a similar spread in colour seen for a randomly drawn non-void comparison sample. We hypothesise that a number of these galaxies host obscured star formation, else they are star forming outside of their central regions targeted for single fibre spectroscopy. When matched to a randomly drawn sample of non-void galaxies, the void and non-void galaxies exhibit similar properties in terms of optical and mid-IR colour, morphology, and star formation activity, suggesting comparable mass assembly and quenching histories. A trend in mid-IR [4.6]-[12] colour is seen, such that both void and non-void galaxies with quenched/passive colours <1.5 typically have masses higher than $10^{10}$ M$_{\odot}$, where internally driven processes play an increasingly important role in galaxy evolution. This is a pre-copyedited, author-produced PDF of an article accepted for publication in Monthly Notices of the Royal Astronomical Society following peer review. The version of record MNRAS (November 11, 2015) 453 (4): 3519-3539 is available online at: http://dx.doi.org/10.1093/mnras/stv1926
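The WHAN classification referred to in the abstract above separates spectra using the Hα equivalent width and the [NII]/Hα line ratio. The sketch below illustrates this type of classification using the boundary values commonly quoted from Cid Fernandes et al. (2011); the exact cuts adopted in the paper are an assumption here, and the example measurements are invented for illustration only.

```python
def whan_class(w_halpha, log_n2_ha):
    """Classify a spectrum on the WHAN diagram.

    w_halpha  : Halpha equivalent width in Angstroms (emission taken as positive)
    log_n2_ha : log10([NII]6584 / Halpha) flux ratio
    Thresholds follow the commonly quoted Cid Fernandes et al. (2011) scheme;
    the cuts used in any particular survey paper may differ slightly.
    """
    if w_halpha < 0.5:
        return "passive"        # essentially no emission lines
    if w_halpha < 3.0:
        return "retired"        # weak emission, consistent with old stellar populations
    if log_n2_ha < -0.4:
        return "star forming"
    return "strong AGN" if w_halpha > 6.0 else "weak AGN"

# Hypothetical example measurements (not values from the paper)
examples = [(35.0, -0.6), (4.5, -0.1), (1.2, 0.1), (0.2, 0.0)]
for w, ratio in examples:
    print(f"W_Halpha={w:5.1f} A, log[NII]/Ha={ratio:+.1f} -> {whan_class(w, ratio)}")
```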
APS Home Session Chairs Using the Scheduler BAPS PDFs 2008 APS March Meeting Monday–Friday, March 10–14, 2008; New Orleans, Louisiana Session W31: Focus Session: New Materials and Properties of Complex Oxides Sponsoring Units: DMP GMAG Chair: Amos Sharoni, University of California, San Diego Room: Morial Convention Center 223 W31.00001: Magnetic and Structural Properties of Sr-Doped Ba$_{2-x}$Sr$_{x}$CoO$_{4}$ Hao Sha, Jiandi Zhang, Q. Huang, V.O. Garlea, B.C. Sales, D. Mandrus, R. Jin We have studied the structural and magnetic properties of a newly synthesized compound Ba$_{2-x}$Sr$_{x}$CoO$_{4}$ with different doping ($x)$ levels. Monoclinic Ba$_{2}$CoO$_{4}$ is an antiferromagnetic (AFM) insulator with N\'eel temperature $T_{N}$ = 25 K and a two-dimensional character with spins aligned in the \textit{ac} plane. The isovalent Sr doping causes changes in both crystal structure and magnetic properties. With increasing $x$, $T_{N}$ initially increases then decreases after reaching the maximum at $x$=0.5. Correspondingly, its crystal structure changes from monoclinic ($x <$ 0.5) to orthorhombic ($x\ge $0.5) at room temperature. The correlation between crystal structure and physical properties will be discussed. [Preview Abstract] W31.00002: Electronic structure changes in novel $J_{eff}$=1/2 system: Ruddlesden-Popper series Sr$_{n+1}$Ir$_{n}$O$_{3n+1}$ (n=1, 2, and $\infty )$ S.J. Moon, J.S. Lee, W.S. Choi, T.W. Noh, H. Jin, J. Yu, Y.S. Lee, V. Durairaj, G. Cao, A. Sumi, H. Funakubo We investigated the electronic structures of Ruddlesden-Popper series Sr$_{n+1}$Ir$_{n}$O$_{3n+1}$ (n=1, 2, and $\infty )$ compounds with optical spectroscopy and first-principles calculation. Among Sr$_{n+1}$Ir$_{n}$O$_{3n+1}$, while SrIrO$_{3}$ is a metal, Sr$_{2}$IrO$_{4}$ and Sr$_{3}$Ir$_{2}$O$_{7}$ are insulators. In optical conductivity spectra \textit{$\sigma $}(\textit{$\omega $}), we found unique bandwidth-driven changes of the electronic structures which were quite different from those of 3$d $or 4$d \quad S$=1/2 systems. From the comparison between \textit{$\sigma $}(\textit{$\omega $}) and the results of first-principles calculation, we found that the intriguing changes of the electronic structures can be realized by the cooperative interaction between the SO coupling and the electron correlation. These results clearly demonstrate that Sr$_{n+1}$Ir$_{n}$O$_{3n+1}$ should be considered as a $J_{eff}$=1/2 single band system. [Preview Abstract] W31.00003: Electrical Resistance of Quasi-1D Li$_{0.9}$Mo$_{6}$O$_{17}$ at Very High Magnetic Field Carlos A.M. dos Santos, J. Moreno, B.D. White, J.J. Neumeier, L. Balicas Recently, photoemission experiments, band structure calculations, tunneling, and the description of the electrical resistivity by two power-law terms suggest that Li$_{0.9} $Mo$_{6}$O$_{17}$ is an excellent example of a metallic Luttinger-liquid (LL) [a,b]. The crossover from metallic to insulating-like behavior near $T_M$ = 28 K was addressed by thermal expansion experiments which suggest that a dimensional crossover sets the stage for superconductivity [b]. To obtain more information about the crossover at $T_M$, magnetoresistance measurements were performed under very high magnetic field (0 $< H <$ 23 tesla). The results show that the minimum at $T_M$ increases with increasing $H$. The power-law temperature dependence of the electrical resistance at $T_M (H)$ is also evaluated. [a] C. A. M. dos Santos, M. S. da Luz, Yi-Kuo Yu, J. J. Neumeier, J. Moreno, and B. D. White. Submitted to Phys. 
Rev. Let. (2007). [b] C. A. M. dos Santos, B. D. White, Yi-Kuo Yu, J. J. Neumeier, and J. A. Souza, Phys. Rev. Let. {\bf98}, 266405 (2007). [Preview Abstract] W31.00004: Unconventional magneto-transport in novel layered cobalt oxides Invited Speaker: Ichiro Terasaki Among strongly correlated transition-metal oxides, cobalt oxides are known to have unique features arising from the spin-state degree of freedom tightly coupled with Co valence. The Co$^{4+}$ ion in the low spin-state is responsible for anomalous metallic states such as large thermopower in Na$_{x}$CoO$_{2}$ and unconventional superconductivity in hydrated Na$_{x}$CoO$_{2}$. The Co$^{2+}$ ion favors the high-spin state, which makes magnetic insulators. The Co$^{3+}$ ion is most interesting in the sense that the low-, intermediate- and high-spin states are nearly degenerate, where a spin-state crossover/transition occurs with temperature or pressure. Recently we have discovered two complex layered cobalt oxides, which exhibit unprecedented transport originated from interplay between charge, orbital and spin-states. The first one is SrCo$_{6}$O$_{11}$, in which the Co-O Kagome lattice and two-types of Co-O pillars are stacked along the c axis [1]. The conduction electrons in the Kagome lattice interact with Ising spins in the pillars, and shows two-step plateau in the magnetoresistance along the c axis. The second one is Sr$_{3}$YCo$_{4}$O$_{10.5}$, which exhibits a ferromagnetic insulating state below 340 K. Various substitutions of Sr, Y and Co sites dramatically suppress this ferromagnetic state, and concomitantly modify the magneto- and thermoelectric transport. We will discuss the structure-property relationship based on structure analyses. The main part of this work was done in collaboration with S. Ishiwata, W. Kobayashi, and M. Takano. \newline [1] S. Ishiwata et al., Chem. Mater. 17, 2789 (2005)~; Phys. Rev. Lett. 98, 217201 (2007) \newline [2] W. Kobayashi et al. Phys. Rev. B 72, 104408 (2005)~; S. Ishiwata et al. Phys. Rev. B75, 220406(R) (2002) [Preview Abstract] W31.00005: Search for Half-Metallic Antiferromagnetism in Double Perovskites V. Pardo, W. E. Pickett The wide class of double perovskite oxides was proposed earlier (PRB 57, 10613 [1998]) as promising for producing a half-metallic antiferromagnet [HMAFM] (more correctly, a spin-compensated half metal). Here we present examples of the affects of structural distortions on the electronic and magnetic properties in selected members. For La$_2$CrNiO$_6$ the idealized cubic perovskite structure had led to a spin-antiparallel state with net moment of 0.6 $\mu_B$, but a ferromagnetic half-metallic state (4 $\mu_B$) was 150 meV per metal atom lower in energy (within local density approximation). Starting with experimental information on LaCrO$_3$ and LaNiO$_3$ and their alloys, we have relaxed the volume and the (seven) internal coordinates within the orthorhombic Pnma space group. The charge states can be characterized by Cr$^{4+}$ and Ni$^{2+}$. The ferromagnetic state is lower by 50 meV within the generalized gradient approximation. Using LDA+U (U=3 eV on each transition metal ion) opens a gap of 0.6 eV (FM insulator) and is favored by 120 meV over the antialigned state. Although no HMAFM state is obtained, these results show that structural relaxation must be taken into account, and that in some cases (as here) it may make the antialigned state more favorable. 
[Preview Abstract] W31.00006: Electrical transport and thermodynamic properties of SrNbO$_{3.41}$ Ariana de Campos, Ann Deml, B.D. White, C.A.M. dos Santos, M.S. da Luz, J.J. Neumeier In 1991, Lichtenberg et al.$^1$ reported the electric conductivity of SrNbO$_{3.41}$ revealing quasi-1D behavior. This system offers many possibilities to vary the compositional, structural, chemical, and physical properties.$^1$ Depending upon the temperature range and crystallographic direction, it exhibits metallic behavior or a metal-semiconductor transition. In this work, the properties of SrNbO$_{3.41}$ single crystals are revisited. The single crystals were grown by the floating zone method and characterized by x-ray diffraction. Electrical resistivity as a function of temperature was measured with four-probe and Montgomery methods. We will also report results of heat capacity and thermal expansion measurements. $^1$F. Lichtenberg et al., Z. Phys. B {\bf 84}, 369 (1991); F. Lichtenberg et. al., Prog. Solid State Chem. {\bf 29}, 1-70 (2001). [Preview Abstract] W31.00007: Extreme electron-phonon coupling in magnetic rubidium sesquioxide Robert de Groot, Jisk Attema, S. Riyadi, Greame Blake, Gilles de Wijs, Thomas Palstra Rb$_2$O$_3$ is a black, opaque oxide. Early work suggests that the stability range of the sesquioxide phase in the rubidium-oxygen phase diagram is rather broad. Rb$_2$O$_3$ remains cubic down to the lowest temperature measured (5~K). The oxygens form dumbbells with interatomic distances in between those of peroxide and superoxide anions, and strong athermal motion persists down to low temperatures. [1] Electronic-structure calculations show that the dynamics at low temperature is caused by 6 phonon modes of zero frequency, which induce a very strong electron-phonon interaction. The softness of half of these modes is suppressed by the application of pressure. Calculated using the average oxygen positions, rubidium sesquioxide is a half-metallic ferromagnet. [2] \newline [1] CR CHIM (11-13): 591-594 NOV 1999\newline [2] JACS 127 (46): 16325-16328 NOV 23 2005 [Preview Abstract] W31.00008: Gigantic optical magneto-electric effect in CuB$_{2}$O$_{4}$ Mitsuru Saito, Kouji Taniguchi, Takahisa Arima It has been recognized since 1960s that magneto-electric (ME) materials may also show an optical magneto-electric (OME) effect showing up as a change in optical absorption with reversal of the propagating direction of light. The OME effect is an interesting object of scientific research and provides possibilities for applications. However, the changes in absorption coefficient ever discovered were very small (less than 0.2 {\%}). We present a gigantic OME effect in a noncentrosymmetric weak ferromagnet CuB$_{2}$O$_{4}$, in which the absorption coefficient changes by a factor of three with reversal of a very weak magnetic field of 300 Oe. This magnitude of OME effect enables us to observe it by a CCD camera with linearly polarized near-infrared and visible light. Spectroscopic study and comparison of OME effect with magnetization indicate an important role of canted antiferromagnetic spin ordering and local symmetry of a square Cu$^{2+}$ site. The gigantic OME effect can be applicable to optical devices like magnetic switching of color in the future. 
[Preview Abstract] W31.00009: Resonant inelastic X-ray scattering study of quasi-zero-dimensional copper metaborate Jason Hancock, Guillaume Chabot-Couture, Martin Greven, Guerman Petrakovskii, Kenji Ishii, Jun'ichiro Mizuki CuB$_2$O$_4$ consists of many CuO$_4$ plaquettes separated by B ions. We report a study of the electronic excitation spectra of this system in order to explore the relationship between excitation symmetry and the resonant inelastic X-ray scattering (RIXS) technique. We find a small number of well separated features in the experimentally accessible range of 0.5-15 eV energy transfer, and weak dispersion is suggestive of the quasi-zero-dimensional nature of this system. Systematic trends in the data are suggestive of a composite nature to one of the observed features. Using a cluster model, we describe these unexpected trends and clarify how the choice of experimental geometry selectively influences the sensitivity to particular excitation symmetries in the RIXS experimental technique. [Preview Abstract] W31.00010: Powder neutron diffraction study of quasi-one-dimensional Li$_{0.9}$Mo$_{6}$O$_{17}$ Mario S. da Luz, C.A.M. dos Santos, B.D. White, J.J. Neumeier, Q. Huang, J.B. Leao, J.W. Lynn The crystallographic structure of quasi-one-dimensional Li$_{0.9}$Mo$_6$O$_{17}$ was investigated by Rietveld refinement of powder neutron diffraction data at temperatures in the range 5 K $< T <$ 295 K. Structural parameters, atomic positions, occupation numbers, and isotropic thermal parameter $B_{iso}$ will be reported. The occupancy was refined revealing a Li occupancy greater than 0.9. Bond valences sums will also be reported for various Li and Mo sites. At room temperature, the crystal was found to exhibit monoclinic symmetry with space group P21/m and lattice parameters $a$ =12.7506(1) \AA, $b$ = 5.5242(1) \AA, $c$ = 9.4913(2) \AA \, and $\beta$ = 90.593(1)$^o$. Good agreement between the temperature dependence of lattice parameters and high resolution thermal expansion results$^*$ was obtained. $^*$C. A. M. dos Santos, B. D. White, Yi-Kuo Yu, J. J. Neumeier, and J.A. Souza, Phys. Rev. Lett. {\bf98}, 266405 (2007). [Preview Abstract] W31.00011: Unusual Physical Properties of Ca$_{3}$Co$_{4}$O$_{9}$ Rongying Jin, Larry Allard, Doug Blom, Sriparna Bhattacharya, Veerle Keppens, Brian Sales, David Mandrus We have investigated the structural and physical properties of Ca$_{3}$Co$_{4}$O$_{9}$ single crystals including electrical and thermal conductivity, thermopower, specific heat, magnetic susceptibility, and electron diffraction. The study reveals many interesting features that are unique to Ca$_{3}$Co$_{4}$O$_{9}$. In addition to high thermopower and low thermal conductivity, the low-temperature specific heat yields large electronic specific heat coefficient, suggesting strong electron-electron correlation. However, the electronic specific heat coefficient is dramatically decreased under magnetic field, implying the modification of electronic density of states by magnetic field. The magnetic susceptibility data indicate that there are several magnetic transitions above room temperature. The possible correlation between charge, spin, and lattice will be explored. [Preview Abstract] W31.00012: X-ray absorption and x-ray magnetic dichroism study on Ca$_3$CoRhO$_6$ and Ca$_3$FeRhO$_6$ Tobias Burnus, Zhiwei Hu, Julio C. Cezar, Seiji Niitaka, Hua Wu, Hidenori Takagi, Chun Fu Chang, Nicholas B. Brookes, Ling-Yun Jang, Keng S. Liang, L. 
Hao Tjeng The valence-state of the transition-metal ions in the chain-like compounds Ca$_3$CoRhO$_6$ and Ca$_3$FeRhO$_6$ is currently an issue under debate. Using numerical simulations and x-ray absorption spectroscopy at the Rh-$L_{2,3}$, the Co-$L_{2,3}$, and the Fe-$L_{2,3}$ edges we reveal a Co$^{2+}$/Rh$^{4+}$ configuration in Ca$_3$CoRhO$_6$ and Fe$^{3+}$/Rh$^{3+}$ in Ca$_3$FeRhO$_6$. X-ray magnetic circular dichroism at the Co-$L_{2,3}$ edge shows that the Co$^{2+}$ ions carry a giant orbital moment of about $1.7\mu_B$. We attribute this to a $d_1^0d_1^2$ ground state for the high-spin Co $3d^7$ configuration in trigonal prismatic coordination. The intrachain-ferromagnetic coupling of two neighboring Co ions is mediated by a low-spin Rh$^{4+}$ ion ($S = 1/2$) in between. The results agree with our recent ab-initio study [Hua Wu {\it et al.}, Phys. Rev. B {\bf 75}, 245118 (2007)]. [Preview Abstract] W31.00013: Anisotropy in magnetic properties of single crystal LiFePO$_{4}$ Gan Liang, Keeseong Park, John Markert, Jiying Li, David Vaknin We report the experimental and theoretical results on the anisotropies in the magnetic properties and x-ray absorption spectra of single crystal LiFePO$_{4}$. A mean-field theory is developed to explain the observed strong anisotropies in Lande g-factor, paramagnetic Curie temperature, and effective moment for LiFePO$_{4}$ single crystals. The values of the in-plane nearest- and next-nearest-neighbor spin-exchange ($J_{1}$ and $J_{2})$, inter-plane spin-exchange ($J_{\bot })$, and single-ion anisotropy ($D)$, obtained recently from neutron scattering measurements, are used for calculating the Curie temperatures with the formulas derived from the mean-field Hamiltonian. It is found that the calculated Curie temperatures match well with that obtained by fitting the magnetic susceptibility curves to the modified Curie-Weiss law. [Preview Abstract]
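The last abstract above mentions Curie temperatures "calculated with the formulas derived from the mean-field Hamiltonian". As a generic illustration only (not the authors' specific derivation), for a Heisenberg Hamiltonian written as \(H=-\sum _{\langle ij\rangle }J_{ij}\,\mathbf {S}_{i}\cdot \mathbf {S}_{j}\) the standard mean-field estimate of the paramagnetic Curie-Weiss temperature is $$k_{B}\,\theta _{CW}=\frac{S(S+1)}{3}\sum _{i}z_{i}J_{i},$$ where \(z_{i}\) is the number of neighbours coupled by \(J_{i}\). The sign convention for the exchange constants and the neighbour counts are assumptions here; for antiferromagnetic ordering the corresponding Néel temperature involves the appropriate alternating combination of the \(J_{i}\) rather than the simple sum, and must match the conventions used in the original work.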
Visual preference for social stimuli in individuals with autism or neurodevelopmental disorders: an eye-tracking study Hayley Crawford1,2, Joanna Moss2,3, Chris Oliver2, Natasha Elliott4, Giles M. Anderson4,5 & Joseph P. McCleery4,6 Recent research has identified differences in relative attention to competing social versus non-social video stimuli in individuals with autism spectrum disorder (ASD). Whether attentional allocation is influenced by the potential threat of stimuli has yet to be investigated. This is manipulated in the current study by the extent to which the stimuli are moving towards or moving past the viewer. Furthermore, little is known about whether such differences exist across other neurodevelopmental disorders. This study aims to determine if adolescents with ASD demonstrate differences in attentional allocation to competing pairs of social and non-social video stimuli, where the actor or object either moves towards or moves past the viewer, in comparison to individuals without ASD, and to determine if individuals with three genetic syndromes associated with differing social phenotypes demonstrate differences in attentional allocation to the same stimuli. In study 1, adolescents with ASD and control participants were presented with social and non-social video stimuli in two formats (moving towards or moving past the viewer) whilst their eye movements were recorded. This paradigm was then employed with groups of individuals with fragile X, Cornelia de Lange, and Rubinstein-Taybi syndromes who were matched with one another on chronological age, global adaptive behaviour, and verbal adaptive behaviour (study 2). Adolescents with ASD demonstrated reduced looking-time to social versus non-social videos only when stimuli were moving towards them. Individuals in the three genetic syndrome groups showed similar looking-time but differences in fixation latency for social stimuli moving towards them. Across both studies, we observed within- and between-group differences in attention to social stimuli that were moving towards versus moving past the viewer. Taken together, these results provide strong evidence to suggest differential visual attention to competing social versus non-social video stimuli in populations with clinically relevant, genetically mediated differences in socio-behavioural phenotypes. Eye-tracking technology has been used to differentiate between people with and without autism spectrum disorder (ASD), with relative consistency, using measures of social attention. Furthermore, the extant literature indicates differences in social attention between groups of individuals displaying divergent profiles of social behaviour. For example, reduced attention to social information has been reported in ASD, which is associated with social withdrawal, whereas increased attention to social information has been reported in Williams syndrome, which is associated with hyper-sociability [1–4]. A plethora of research has indicated that people with ASD do not allocate as much attention to social information as typically developing (TD) individuals. For example, studies have reported that people with ASD spend less time than TD individuals viewing people and faces in static pictures of social interactions [4, 5], and further research suggests that reduced face gaze in ASD reflects a lack of interest in social information as it extends to human actors, cartoon images or movies, and clips of naturalistic social scenes [1, 6]. 
Attention to social stimuli has also been linked to social behaviour [6–8] with reduced social attention being associated with more severe autism symptomatology and consequently more impaired social communicative ability. However, these studies compare looking-time to social and non-social information within a single coherent scene. Therefore, participants in these studies are not required to choose between looking at social information or non-social information as these are both contained within the same stimulus. When a direct comparison of preference for looking at social versus non-social scenes is used, the findings also reveal that toddlers with ASD do not allocate as much attention to social stimuli as TD toddlers. Pierce and colleagues [9] measured total time spent looking at social video clips (videos of children dancing) compared with videos containing dynamic geometric shapes. Results indicated that toddlers with ASD spent significantly more time fixating on the geometric stimuli than did TD toddlers or toddlers with a developmental delay [9]. Klin and colleagues [10] similarly observed that TD toddlers and toddlers with developmental delays exhibited a visual preference for displays of human biological motion versus inverted displays resembling non-biological motion, whereas toddlers with ASD did not exhibit this preference. Although some studies contradict this by reporting typical overall looking-times to social versus non-social information in individuals with ASD, more nuanced analyses continue to reveal atypicalities in attention allocation to social information. For example, in a study where a static social scene was presented alongside a static non-social scene, overall looking times did not differ between adolescents with ASD and TD adolescents. However, a preference for social scenes at the first fixation was absent for those with ASD but present for TD individuals [11], indicating reduced attentional prioritisation of social information in ASD. The nature of social and non-social information may also influence looking patterns in those with ASD. Sasson and Touchstone [12] recently reported no differences between pre-schoolers with ASD and TD pre-schoolers on overall attention allocation to social versus non-social stimuli except for when the non-social stimuli represented common circumscribed interests of children with autism. When non-social stimuli were related to circumscribed interests, participants with ASD allocated less attention to social stimuli than TD controls. In addition, the ecological validity of social stimuli has been reported to influence attentional abnormalities in ASD. Specifically, videos of social interaction produced more sensitive group differences than videos of individual stimuli or static stimuli [13]. Although visual attention to social versus non-social information has been explored extensively in individuals with ASD, to date, studies using preferential looking paradigms to examine looking patterns to directly competing, dynamic, social, and non-social stimuli, such as those reported by Pierce and colleagues and Klin and colleagues [9, 10], have only used stimuli that are facing the participant. It is important to look at the factors that may influence typical and atypical social attention in individuals with ASD and other neurodevelopmental disorders. 
One possibility is that social information may be more threatening to individuals with ASD, which has been associated with heightened social anxiety (see [14] for a review) and social impairment. The current study aims to explore this further by presenting social and non-social stimuli, where the actor or object is either moving towards or moving past participants, to individuals with different neurodevelopmental disorders that are each differentially associated with social anxiety and social impairment. It has been proposed that biological motion that is facing towards the viewer is potentially more threatening than stimuli that are oriented away from the viewer. This 'facing-the-viewer' bias has also been associated with a heightened state of physiological arousal [15]. Therefore, the 'moving towards' stimuli presented in the current study are proposed to be more threatening than the 'moving past' stimuli. Further, individuals with anxiety without a neurodevelopmental disorder have been reported to show faster orienting to threatening stimuli, but not pleasant stimuli, when compared to non-anxious individuals (see [16] for a review). Although the current study does not directly measure anxiety in individuals with ASD, it is possible to postulate the extent to which social anxiety or social indifference governs atypical social attention in ASD. For example, social anxiety may subserve a pattern of results whereby participants demonstrate reduced looking to social stimuli only when it is moving towards them whereas reduced looking to both sets of social stimuli would more likely be governed by social indifference. Although the effect that this particular stimulus feature has on attentional allocation in ASD has not yet been investigated in depth, some study results have suggested that differences may emerge when this subtle factor is manipulated. For example, Chawarska and colleagues showed that when dyadic communication cues were introduced in a video of an actress making a sandwich, toddlers with ASD spent less time looking at scenes, and the actor in scenes, involving direct communication when the toys were moving in the background, than TD toddlers and toddlers with a developmental disability but no ASD. This suggests that the toddlers with ASD do not show a general deficit in attending to people but, rather, that reduced attention becomes apparent only in the presence of direct communication bids [17]. Similarly, increased activation in a number of brain regions has been reported when TD participants have observed a male walking towards them with direct gaze compared with averted gaze, which has not been replicated in ASD [18]. The manipulation of 'directed towards' used in these studies is perhaps the most similar manipulation that has been used in the existing literature to date, to the current 'moving towards' versus 'moving past' manipulation. These studies are discussed above as they provide a starting point for guiding hypotheses. However, the direction of stimuli was not assessed in these studies so we cannot assume that this was driving these results. Furthermore, 'moving towards' versus 'moving past' was used in the present study in order to potentially increase the contrast between degrees of threat. 
Although social attention abnormalities have also been demonstrated in ASD using stimuli that are not facing participants [6], incorporating this subtle experimental manipulation into a relatively simple preferential looking paradigm with directly competing social and non-social stimuli allows further delineation of social information processing in this group in a way that can break down the features of the stimuli that may result in abnormal attention patterns. Delineating this potential relationship is more achievable using a preferential looking paradigm as opposed to when facing and non-facing social and non-social stimuli are incorporated within a singular scene or video, which is largely used in the existing literature. In the current studies, we use a preferential looking paradigm to explore social attention to dynamic social and non-social stimuli that are either moving towards or moving past the viewer in adolescents with ASD versus individuals with special education needs (SEN) without ASD (study 1). As a relationship between social behaviour and social information processing has previously been documented [1–4, 6–8], we aim to further examine the value and validity of this paradigm to index variability in social-behavioural phenotypes. To do this, we employ this same paradigm to examine social attention in individuals with three different genetic syndromes associated with unique social profiles: fragile X (FXS), Cornelia de Lange (CdLS), and Rubinstein-Taybi syndromes (RTS; study 2). Previous literature, such as that reported above, has focussed on atypicalities in social information processing to help explain some of the social interaction difficulties observed in children and adults with ASD. Limited research has been conducted to further understand social information processing skills in children and adults with neurodevelopmental disorders, other than ASD that are also associated with social interaction difficulties. The three genetic syndromes studied here are associated with varied profiles of social behaviour, some aspects of which are comparable across the syndromes whilst other aspects are subtly different. One aim of this study is to use implicit measures, which reduce performance demand, to compare and contrast social information processing in FXS, CdLS, and RTS. Understanding social cognition may have important implications for further understanding the socio-behavioural impairments associated with these syndrome groups. However, as FXS, CdLS, and RTS are associated with intellectual disability, using measures that are typically used in mainstream social cognition literature may influence results and indicate impairments in social cognition that are more likely a result of task demands. Across a number of studies using eye-tracking methodology, Riby and colleagues have consistently reported a link between visual processing of social information using implicit measures and socio-behavioural characteristics. Specifically, individuals with Williams syndrome, which is associated with hyper-sociability, have been shown to spend more time looking at faces, and the eye region of faces, than TD participants [1, 2, 4]. On the other hand, the same series of studies has highlighted that individuals with ASD, which is associated with social withdrawal, spend less time looking at faces and eyes than TD participants. 
In addition, individuals with Williams syndrome have shown stronger emotional expression processing skills than individuals with ASD [19] and a greater ability to interpret cues from eye gaze [20]. These studies point to lower levels of interest in social information in individuals exhibiting social withdrawal and heightened interest in social information in individuals exhibiting hyper-sociability. FXS is the most common cause of inherited intellectual disability [21], affecting approximately 1 in 4000 males and 1 in 8000 females [22]. FXS has been associated with social anxiety, shyness, and eye gaze aversion [23, 24]. However, it has been suggested that these avoidant behaviours occur primarily during initial interactions, giving way to increasing social approach behaviours over time [25, 26]. As an X-linked disorder, females display fewer cognitive and social impairments [27]. CdLS is a genetic disorder affecting approximately 1 in 40,000 live births [28] and is associated with intellectual disability, selective mutism, social anxiety, and shyness [29–32]. RTS is also a genetic syndrome associated with intellectual disability affecting approximately one in 100,000–125,000 live births [33]. In contrast to FXS and CdLS, research generally suggests that individuals with RTS are sociable, with higher levels of social interest and social contact compared with a matched contrast group [34]. Due to the comparative rarity of FXS, CdLS, and RTS to ASD, the literature regarding social information processing is more limited in these genetic syndrome groups. One study that has investigated social information processing in FXS measured looking patterns to photographic scenes that incorporated social stimuli but manipulated the location of the social stimuli within the scene, thereby allowing comparison of attention allocation to social information when non-social information is also available [35]. No differences in the amount of time spent looking at social information were reported between those with FXS and controls matched on chronological and mental age. However, participants with FXS were faster than TD participants to look away, indicating active social avoidance. These results suggest that more nuanced analyses including speed of gaze aversion may highlight subtle differences. Out of the 14 participants with FXS in the study conducted by Williams et al. [35], 12 were female. Due to documented gender differences in FXS, it cannot be determined whether the same results would extend to males with FXS. Other studies utilising eye-tracking technology to investigate social information processing in FXS have focussed on looking patterns to faces. As faces are social in nature, visual preference for social or non-social information cannot be gleaned from these studies. However, they do provide evidence that social processing may be impaired in this syndrome group. These studies have been conducted primarily to investigate looking patterns to the eye region of facial stimuli and report reduced time spent looking at the eyes in participants with FXS compared to TD participants [36–38] and compared to individuals with ASD [39]. It was hypothesised that individuals with ASD would exhibit reduced looking to social versus non-social stimuli that is moving towards the viewer, when compared with a group of individuals with SEN who were matched for chronological age (CA) and verbal abilities but did not have ASD. 
This hypothesis was based directly upon previous research in which stimuli were people facing towards the camera [9, 10]. This study also allows for examination of whether or not the relative reduction in looking at social versus non-social stimuli is present for stimuli that are moving past the viewer. Previous literature indicates differences in social processing that map onto social behaviours [1, 4]. Due to subtle differences in the documented socio-behavioural characteristics of the genetic syndrome groups of focus in this study, between-group differences in visual attention for social videos were hypothesised. Specifically, due to reports of sociability and social interest in RTS, it was predicted that participants with RTS would direct more visual attention towards social versus non-social stimuli, whereas individuals with FXS and CdLS would not exhibit this visual preference, in line with the reported social anxiety and shyness observed in these groups. These hypotheses were based upon the documented socio-behavioural phenotypes of the syndrome groups. However, the absence of previous literature on social versus non-social preference in the syndrome groups precludes us from making strong specific predictions (for visual scanning of social stimuli only, see [36–40] for FXS and [41] for CdLS and RTS). Due to the documented socio-behavioural profiles of both FXS and CdLS indicating social anxiety, a parental-report measure of this behaviour was included in the present study in an effort to investigate the potential relationship between social anxiety and visual attention towards social stimuli. Whilst it would have been interesting to investigate this potential relationship in the ASD group, limited access to parents when testing adolescents with ASD in a school setting rendered this unfeasible. Furthermore, because FXS, CdLS, and RTS are associated with intellectual disability, it was not possible to directly compare these three groups with the ASD or SEN participants in the current study. Specifically, due to the wide range of chronological ages and ability levels in our participants with FXS, CdLS, and RTS, the Vineland Adaptive Behavior Scale (VABS)-II [42] was used in place of an intellectual quotient (IQ) measure. However, it was possible to obtain verbal IQ data on participants with ASD and SEN. The group comparisons are therefore split into two studies. The first study reports data from participants with ASD and SEN who are matched on chronological age, gender, and verbal IQ. The second study reports data from participants with FXS, CdLS, and RTS who are matched on chronological age and adaptive behaviour. Sixteen adolescents with ASD and 16 adolescents with SEN but no diagnosis of a neurodevelopmental disorder were included in study 1. All participants were recruited from a secondary school local to the research base and had normal or corrected to normal vision. An educational psychologist had previously diagnosed all 16 participants in the ASD group and ruled out a diagnosis of ASD in all 16 adolescents in the SEN group. The Autism Diagnostic Observation Schedule (ADOS) [43] was administered by a research-trained examiner to confirm the presence or absence of a diagnosis in participants in the ASD and SEN groups, respectively. The verbal similarities and word definitions portions of the school-age British Ability Scales—second edition [44] were administered to all participants in order to provide standardised information on verbal abilities. Participant characteristics are presented in Table 1.
Table 1 Participant characteristics and comparison statistic for adolescents with autism spectrum disorder (ASD) and special educational needs (SEN) Ethics, consent, and permissions Parents of participants were sent information about the study and given the opportunity to opt their child out of participation. Additionally, all participants provided fully informed written consent prior to participation. This consent process was in accordance with an ethical protocol that was approved by the Science, Technology, Engineering, and Mathematics Ethical Review Committee at the University of Birmingham. An EyeLink 1000 Tower Mount system was used to measure participant's dwell time and eye movements. It has a temporal resolution of 2 ms (500 Hz), spatial accuracy of 0.5°–1° visual angle, and a spatial resolution of 0.01°. Stimuli During each trial, participants were presented with two videos side by side for 8000 ms. The videos were either social, where an actor was the focus of the video, or non-social, where an object was the focus of the video. Both videos were either 'moving towards' or 'moving past'. In the 'moving towards' videos, the person or object moved towards, or conducted an action (e.g., blowing bubbles) towards, the viewer. In the 'moving past' videos, the person or object moved past or conducted an action past the camera in a perpendicular fashion. Figure 1 shows an example of stimuli at three time points (between 0 and 8000 ms) in each of the conditions: the social 'moving towards' condition (Fig. 1a), the social 'moving past' condition (Fig. 1b), the non-social 'moving towards' condition (Fig. 1c), and the non-social 'moving past' condition (Fig. 1d). There were 28 trials in total, half of which contained one social video moving towards the viewer and one non-social video moving towards the viewer, whilst the other half contained one social video moving past the viewer and one non-social video moving past the viewer. Trials were counterbalanced so that the social and non-social videos were presented an equal number of times on the left and right side of the screen. Examples of social videos include a person skipping, a person blowing bubbles, and a person walking whilst talking on the phone. Examples of non-social videos include a train, an aeroplane, and a ball bouncing down steps. Actors in all of the videos wore plain black clothing and displayed a straight-ahead gaze and neutral facial expression. Each video subtended an average of 9.15 × 13.79° of visual angle and was displayed on a white background. The videos were positioned side by side, separated by a gap of 1.25° of visual angle. An example of the dynamic stimuli presented during the social 'moving towards' (a), social 'moving past (b), non-social 'moving towards (c), and non-social 'moving past (d) conditions. Written informed consent for publication of their image was obtained from the actor in (a) and (b) All participants were tested at their school and were seated approximately 50 cm from the computer screen displaying the stimuli. A five-point calibration was performed prior to the experiment in which participants looked at an animated blue dolphin that changed position around the screen. Following calibration, participants were presented with 28 trials. In between each trial, the animated dolphin, which served as a central fixation point, was displayed for 1000 ms, except for prior to every fifth trial when a single-point calibration drift-correction was made. 
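For readers unfamiliar with degrees of visual angle, the stimulus sizes reported above can be related to physical on-screen dimensions by simple trigonometry given the viewing distance. The following is a minimal sketch in Python; it is purely illustrative and is not taken from the study's materials.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Angle (in degrees) subtended by an object of a given size at a given viewing distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_cm_for_angle(angle_deg: float, distance_cm: float) -> float:
    """Inverse: on-screen size (cm) needed to subtend a given visual angle."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# At the ~50 cm viewing distance of study 1, a video subtending 9.15 x 13.79 degrees
# corresponds to roughly 8.0 cm x 12.1 cm on the screen.
print(size_cm_for_angle(9.15, 50), size_cm_for_angle(13.79, 50))
```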
Participants were told to look wherever they wished on the computer screen whilst the videos were presented but to look at the dolphin in between trials. The current study uses measures of overall dwell time to social versus non-social stimuli and time taken to orient to social versus non-social stimuli, as well as incorporating the manipulation of the direction of the stimuli. Fixations were assessed as occurring when eye movement did not exceed a velocity threshold of 30°/s, an acceleration threshold of 8000°/s², or a motion threshold of 0.1°, and the pupil was not missing for three or more samples in a sequence. To determine whether participant groups differed in the amount of time spent looking at social relative to non-social videos, the mean proportion of dwell time to stimuli moving towards (Equation 1) and stimuli moving past (Equation 2) the viewer was calculated for each participant. This indicates what proportion, out of the total time spent looking at both videos on the screen, was spent viewing the social stimuli. Equation 1. Dwell time formula for stimuli moving towards the viewer. $$ \frac{\text{Mean \% of dwell time on social 'moving towards' videos}}{\text{Mean \% of dwell time on social 'moving towards' videos} + \text{mean \% of dwell time on non-social 'moving towards' videos}} $$ Equation 2. Dwell time formula for stimuli moving past the viewer. $$ \frac{\text{Mean \% of dwell time on social 'moving past' videos}}{\text{Mean \% of dwell time on social 'moving past' videos} + \text{mean \% of dwell time on non-social 'moving past' videos}} $$ To determine whether participant groups differed in their speed to fixate to social relative to non-social videos, the mean ratio of the latency of first fixations to social versus non-social videos was calculated for each participant for stimuli moving towards (Equation 3) and stimuli moving past (Equation 4) the viewer. A ratio above 1 reflects quicker fixation to social stimuli, so, for example, a ratio of 3 indicates that participants fixated to social stimuli three times faster than non-social stimuli. Equation 3. First fixation latencies formula for stimuli moving towards the viewer. $$ \frac{\text{Mean time taken to fixate on non-social 'moving towards' videos}}{\text{Mean time taken to fixate on social 'moving towards' videos}} $$ Equation 4. First fixation latencies formula for stimuli moving past the viewer.
$$ \frac{\text{Mean time taken to fixate on non-social 'moving past' videos}}{\text{Mean time taken to fixate on social 'moving past' videos}} $$ These ratios were subjected to a logarithmic (Lg10) transformation in order to meet criteria for normal distribution. Data from one participant with ASD, one participant with SEN, one participant with RTS, and two participants with FXS were excluded from parametric analyses, as they could not be transformed to normality due to ratios below zero. Due to the potential bias this creates in the data, we also confirmed the findings using non-parametric tests, performed with the original ratios that were not normally distributed. Dwell time On average, participants with ASD and SEN spent 92 and 86 % of trial time looking at the videos, respectively, indicating good levels of task engagement. Figure 2 depicts the proportion of dwell time on social versus non-social stimuli in the 'moving towards' and 'moving past' conditions. A 2 × 2 mixed ANOVA was conducted where direction (moving towards/moving past) was the within subjects factor and participant group (ASD, SEN) was the between subjects factor. A main effect of direction was revealed (F (1, 30) = 17.029, p < .001). Neither a main effect of participant group (F (1, 30) = 2.657, p = .114) nor a significant interaction was observed (F (1, 30) = 2.112, p = .156). As previous studies have used stimuli facing the viewer and have consistently observed effects of participants with ASD looking less to social stimuli than control participants [9, 10], we employ two hypothesis-driven a priori independent t tests in order to determine whether or not these effects replicate in the current data. These revealed that adolescents with SEN demonstrated a significantly higher proportion of social versus non-social looking at stimuli moving towards the viewer compared to adolescents with ASD (t (30) = 2.183, p = .037). This difference was not observed for stimuli moving past the viewer (t (30) = .346, p = .732). The mean (±1 SE) proportion of social dwell time on 'moving towards' and 'moving past' videos for adolescents with autism spectrum disorder (ASD) and adolescents with special educational needs (SEN) In order to ensure that the observed effects were not driven by circumscribed interests, such as interest in vehicles in the ASD group, image-wise analyses were conducted whereby a dwell time of 2× standard deviations above or below the group mean dwell time was considered an outlier. These analyses revealed that none of the images used in the present study yielded dwell times that were deemed outliers for the ASD group. First fixation latencies In the following analyses, a larger ratio indicates quicker fixation to social versus non-social stimuli. Figure 3 depicts the first fixation latencies for social versus non-social stimuli in the 'moving towards' and 'moving past' conditions. A 2 (moving towards/moving past) × 2 (ASD/SEN) mixed ANOVA was conducted. No main effects or a significant interaction were observed (moving towards/moving past: F (1, 28) = 3.029, p = .093; ASD/SEN: F (1, 28) = .149, p = .702; interaction: F (1, 28) = 1.091, p = .305).
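The outcome measures defined in Equations 1–4, the log transformation, and the a priori group comparisons can be computed directly from per-participant summaries. Below is a minimal, non-authoritative sketch in Python (NumPy/SciPy); the array names and example values are hypothetical stand-ins for the per-participant means described above, not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def social_dwell_proportion(social_pct, nonsocial_pct):
    """Equations 1-2: proportion of looking time spent on the social video,
    out of the total time spent on either video (per participant, per condition)."""
    return social_pct / (social_pct + nonsocial_pct)

def first_fixation_ratio(latency_nonsocial_ms, latency_social_ms):
    """Equations 3-4: a ratio > 1 means the social video was fixated more quickly."""
    return latency_nonsocial_ms / latency_social_ms

# Hypothetical per-participant mean dwell-time percentages ('moving towards' condition)
asd_social, asd_nonsocial = np.array([40., 55., 48.]), np.array([52., 40., 45.])
sen_social, sen_nonsocial = np.array([60., 58., 65.]), np.array([35., 38., 30.])

asd_prop = social_dwell_proportion(asd_social, asd_nonsocial)
sen_prop = social_dwell_proportion(sen_social, sen_nonsocial)

# A priori independent-samples t test on dwell-time proportions (SEN vs. ASD)
t, p = stats.ttest_ind(sen_prop, asd_prop)

# First-fixation ratios are log10-transformed before parametric analysis;
# ratios <= 0 cannot be transformed and would be excluded, as in the study.
ratios = first_fixation_ratio(np.array([900., 1200.]), np.array([600., 400.]))
log_ratios = np.log10(ratios)
print(t, p, log_ratios)
```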
The mean (±1 SE) ratio of first fixation latencies on social to non-social stimuli for adolescents with autism spectrum disorder (ASD) and adolescents with special educational needs (SEN) Except where mentioned, the methods used in study 2 were identical to those used in study 1. Fifteen individuals with FXS, 14 individuals with CdLS, and 19 individuals with RTS were included in study 2. Participant characteristics are presented in Table 2. Due to documented gender differences, all participants with FXS were male. Participants were either recruited through the participant database of the Cerebra Centre for Neurodevelopmental Disorders, the Cornelia de Lange Foundation UK and Ireland, or the Rubinstein-Taybi Syndrome UK Support Group. All participants had previously received a diagnosis by a paediatrician or clinical geneticist and had normal or corrected to normal vision. Participant groups were matched for CA, global adaptive behaviour, and verbal adaptive behaviour as measured by the VABS [42]. Table 2 Participant characteristics and comparison statistic for children and adults with fragile X (FXS), Cornelia de Lange (CdLS), and Rubinstein-Taybi (RTS) syndromes Participants aged 16 years and over, and parents of participants aged under 16 years, provided fully informed written consent to participate in the study. If necessary, participants aged 16 years and over were provided with a symbol sheet to explain the experimental procedure using pictures and short sentences. Participants were also given a 'stop' card, which they could hold up if they wanted a break or to stop the experiment, although this was not used by any participant. This was in accordance with an ethical protocol that was approved by the Science, Technology, Engineering, and Mathematics Ethical Review Committee at the University of Birmingham. Participants were tested individually in a quiet, dimly lit room either at the University of Birmingham (FXS = 15; CdLS = 2) or at a syndrome support group family meeting (CdLS = 12; RTS = 19). All participants were seated approximately 60 cm from the computer screen displaying the stimuli. Parents/primary caregivers completed the VABS [42], the Social Communication Questionnaire (SCQ) [45], and the parent version of the Spence Children's Anxiety Scale (SCAS-P) [46]. A researcher who was trained in ADOS administration at research-reliable level administered the ADOS [43] to all participants with FXS. On average, participants with FXS, CdLS, and RTS spent 83, 89, and 92 % of trial time looking at the videos, respectively, indicating good levels of task engagement. Figure 4 depicts the proportion of social versus non-social dwell time in the 'moving towards' and 'moving past' conditions. A 2 (moving towards/moving past) × 3 (FXS/CdLS/RTS) mixed ANOVA revealed a main effect of direction; participants evidenced a higher proportion of social versus non-social looking for stimuli moving towards versus moving past the viewer (F (1, 45) = 45.886, p < .001, η 2 = .505). There was no significant main effect of group or a significant interaction. As all participants with FXS were male, the three groups were not matched on gender. Therefore, these analyses were re-conducted with the CdLS and RTS groups only, who were matched on gender, to ensure that this did not affect results. These analyses revealed that the main effect of direction, and lack of significant findings for participant group and the interaction, remained the same when only the CdLS and RTS groups were compared. 
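The 2 (direction) × 3 (syndrome group) mixed-design ANOVA used here, and the corresponding 2 × 2 design in study 1, can be reproduced with standard statistical packages. The sketch below assumes the pingouin package and a long-format table with hypothetical column names and toy values; it illustrates the analysis design rather than the authors' own code, and also shows the kind of non-parametric confirmation tests (Kruskal-Wallis, Mann-Whitney U) reported below.

```python
import pandas as pd
import pingouin as pg
from scipy import stats

# Long-format data: one row per participant per direction condition.
# 'prop_social' is the proportion of social dwell time (Equations 1-2).
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":       ["FXS", "FXS", "FXS", "FXS", "CdLS", "CdLS",
                    "CdLS", "CdLS", "RTS", "RTS", "RTS", "RTS"],
    "direction":   ["towards", "past"] * 6,
    "prop_social": [0.62, 0.50, 0.58, 0.47, 0.60, 0.52,
                    0.55, 0.49, 0.66, 0.51, 0.63, 0.48],
})

# Mixed ANOVA: 'direction' within subjects, 'group' between subjects.
aov = pg.mixed_anova(data=df, dv="prop_social", within="direction",
                     subject="participant", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])

# Non-parametric confirmation of a between-group difference:
# Kruskal-Wallis across the three groups, then a pairwise Mann-Whitney U test.
towards = df[df.direction == "towards"]
groups = [g["prop_social"].values for _, g in towards.groupby("group")]
print(stats.kruskal(*groups))
print(stats.mannwhitneyu(groups[0], groups[1], alternative="two-sided"))
```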
The mean (±1 SE) proportion of social dwell time on 'moving towards' and 'moving past' videos for participants with fragile X (FXS), Cornelia de Lange (CdLS), and Rubinstein-Taybi (RTS) syndromes Figure 5 depicts the first fixation latencies to social versus non-social stimuli in the 'moving towards' versus 'moving past' conditions. A 2 (moving towards/moving past) × 3 (FXS/CdLS/RTS) mixed ANOVA revealed a main effect of participant group (F (2, 42) = 3.566, p = .037, η 2 = .145) and a significant interaction (F (2, 42) = 4.821, p = .013, η 2 = .187), indicating differential impact of the direction of stimuli (moving towards/moving past) on the time taken to fixate on social relative to non-social stimuli across the participant groups. Bonferroni corrected post hoc tests indicated slower fixation to social relative to non-social stimuli that were moving towards the viewer in the CdLS group compared to the FXS (p < .001) and RTS (p = .005) groups. This between-group difference for 'moving towards' stimuli was confirmed with a Kruskal-Wallis test (χ 2 (2) = 21.070, p < .001) and follow-up Mann-Whitney U tests (FXS versus CdLS: U = 12.00, p < .001; FXS versus RTS: U = 92.00, p = .080; CdLS versus RTS: U = 34.00, p < .001). These analyses include a direct comparison between participants with CdLS and RTS and, therefore, show that the results are unlikely to be driven by gender differences as gender is matched across these two groups. Paired samples t tests further revealed that participants with CdLS were slower to fixate to social relative to non-social stimuli when stimuli were moving towards versus moving past the viewer (t(13) = −4.415, p = .001; confirmed with Wilcoxon Signed Ranks Test, Z = −3.107, p = .002), whereas participants with RTS demonstrated the opposite pattern (t(17) = 2.247, p = .038; confirmed with Wilcoxon Signed Ranks Test, Z = −2.213, p = .027), and participants with FXS showed no difference between 'moving towards' and 'moving past' ratios (p > .051; confirmed with Wilcoxon Signed Ranks Test, Z = −.682, p = .496). The mean (±1 SE) ratio of first fixation latencies on social to non-social stimuli for participants with fragile X (FXS), Cornelia de Lange (CdLS), and Rubinstein-Taybi (RTS) syndromes Association between participant characteristics and social preference Spearman correlations revealed a significant positive association between the proportion of social dwell time on stimuli moving towards the viewer and SCAS-P total anxiety score (rs (12) = .667, p = .009) and social phobia subscale score (rs (12) = .548, p = .043) for the FXS group only. There were no significant relationships between autism symptomatology, as measured by the SCQ, and dwell time on stimuli moving towards or moving past the viewer for any participant group (p > .05). No correlations were revealed for SCAS-P scores and first fixation latencies for any participant group (p > .05). However, a moderate positive relationship was revealed between autism symptomatology and first fixation latencies to stimuli moving towards the viewer for the RTS group only (rs (16) = .664, p = .003). In the current studies, we used eye-tracking measures in conjunction with competing social and non-social videos, under 'moving towards' and 'moving past' conditions.
The aim of this work was to determine whether or not indices of visual preferences or visual salience distinguish among individuals with and without ASD and whether this measure is sensitive to differences between other groups of individuals with differing socio-behavioural phenotypes as a result of rare genetically mediated syndromes (FXS, CdLS, and RTS). Consistent with existing literature, adolescents with SEN evidenced a higher proportion of dwell time for dynamic social versus non-social stimuli that were moving towards them, compared with adolescents with ASD. Extending the previous literature, the present study reported that this difference was not apparent for stimuli that were not facing participants. In study 2, analyses indicated that participants with CdLS took longer to fixate to social videos moving towards the viewer than did participants with FXS or RTS, whereas participants with FXS and RTS did not differ on this index. Together, these results suggest that eye-tracking measures of attentional maintenance and prioritisation to dynamic, social and non-social stimuli that are moving towards viewers can differentiate between groups of typical versus atypical social development (study 1), as well as between groups with subtly different socio-behavioural profiles (study 2). Interestingly, although the groups across the two studies were not directly compared, a visual comparison of Figs. 2 and 4 shows that participants with ASD evidenced a lower proportion of dwell time for 'moving towards' social versus non-social stimuli than did participants with FXS, CdLS, and RTS, whose looking times more similarly reflected participants in the SEN group. In the 'moving past' condition, it appears that participants with FXS, CdLS, and RTS showed a slightly higher proportion of dwell time for social versus non-social stimuli than did participants with ASD and SEN. Furthermore, a visual comparison of Figs. 3 and 5 indicates that participants with FXS fixated more quickly, and participants with CdLS more slowly, on social 'moving towards' stimuli than any of the other participant groups. Participants with FXS also fixated more quickly on social 'moving past' stimuli than any other participant group. It was not possible to statistically compare the patterns of results across the groups in these two separate studies because they were not matched on a number of important participant characteristics. However, the patterns observed suggest that future studies should aim to recruit and test matching participant groups in order to explore this further. The results from study 1 support and extend previous research that has observed that people with ASD do not allocate as much attention to social information as do TD individuals when social stimuli are presented alongside geometric images and point light displays of non-biological motion [9, 10]. This highlights the consistency of findings across ages in this population, from children through to adolescents, and suggests that reduced attention to social information is also apparent when social stimuli are presented alongside more naturalistic non-social stimuli consisting of objects commonly seen in everyday life. This study further highlights the important role of stimulus manipulation and, in particular, the use of stimuli that are facing the participant, when distinguishing differences in attentional allocation between groups with and without social impairment.
These data pose interesting questions for the role of anxiety in attentional allocation to social stimuli in adolescents with ASD. It was postulated that social anxiety, rather than social indifference, would more likely govern visual attention to social stimuli if reduced looking was found only in the 'moving towards' condition. As this was the case, it is important to further explore the relationship between anxiety and social attention in ASD. Whilst overall looking time to social versus non-social stimuli did not differ between genetic syndrome groups, participants with CdLS took longer to fixate to social stimuli moving towards the viewer than the other groups. This suggests that attentional prioritisation of socially salient stimuli is reduced in individuals with CdLS compared to those with FXS and RTS. This pattern of results in CdLS may be related to social anxiety, or a reduced ability to interact, both of which have previously been highlighted in this group [47, 48]. However, these characteristics have also been associated with FXS, indicating potentially differential relationships between social behaviour and social attention for those with CdLS versus FXS, despite perceptions in the research community of similar behavioural profiles based on limited data. A positive association between social dwell time to stimuli moving towards the viewer and anxiety was revealed in the FXS group only. This may reflect hypervigilance and heightened attention towards threatening stimuli. In the present study, it may be the case that approaching social stimuli are perceived as more threatening than non-approaching social stimuli. However, social anxiety has also been reported in individuals with CdLS [47], yet the two groups differed on the measure of attentional priority for social information used in this study. This seems to provide some degree of support for the possibility proposed above that attentional priority for social information may have a differential association with social behaviour across individuals with FXS and CdLS. Finally, the results for individuals with RTS are generally consistent with some of the existing, albeit limited, reports of the social phenotype of this group. For example, individuals with RTS have been reported to display social interest, intact social skills relative to their intellectual ability, and the ability to initiate and maintain social contacts [34, 49, 50]. Specifically, this group exhibited increased maintenance of attention towards socially salient (moving towards) stimuli compared to less salient (moving past) social stimuli and increased attentional prioritisation to social stimuli compared to those with CdLS. Notably, our previous studies of social attention in CdLS and RTS revealed no differences in eye- and mouth-looking [41], highlighting the importance of specific manipulations to social attention paradigms. The current study focuses on drawing comparisons in looking patterns to social versus non-social stimuli, whilst investigating the relative effect that stimulus direction has on visual preference. To our knowledge, this is the first study to systematically manipulate the extent to which stimuli move towards versus past the viewer.
An interesting avenue for future research would be to study the effects of stimulus direction independently by presenting pairs of social and non-social stimuli separately, where the actor or object in one video moves towards the viewer and the actor or object in the other video moves past the viewer. Study 1 documents reliable eye-tracking data from 16 participants with ASD and 16 control participants. Although this sample size is acceptable, obtaining a larger sample size may have resulted in stronger effects. Study 2 produced and examined reliable eye-tracking data from one of the largest samples of males with FXS and participants with CdLS and RTS, three rare genetic syndromes. This is the first study to compare participants with these three genetic syndromes, each associated with subtly different social profiles, on a measure of social attention. In summary, the results of the two studies presented here suggest that eye-tracking measures of social versus non-social video stimulus preferences and prioritisation index differences between groups of individuals with differing social profiles. Specifically, those with versus without ASD exhibited differences on a relatively coarse measure of overall dwell time to social versus non-social videos, whilst the nuanced measure of time taken to initially orient to social and non-social stimuli highlighted differences between groups exhibiting more subtle differences in their social presentation. Critically, the differences observed between the groups are each consistent with previously documented differences in their respective behaviourally measured social phenotypes. The current findings, therefore, provide further support for the potential of relatively simple eye-tracking measures of visual attention for social versus non-social stimuli to index differences across populations according to their socio-behavioural profiles. Ethical approval for study 1 and study 2 was granted by the Science, Technology, Engineering, and Mathematics Ethical Review Committee at the University of Birmingham. For study 1, participants provided written consent to participate in the study. Parents of participants were given the opportunity to opt their child out of participation in the study. For Study 2, participants aged 16 years and over provided written consent to participate in the study. Parents of participants aged below 16 years provided written consent for their child to participate in the study.
Abbreviations: ADOS: Autism Diagnostic Observation Schedule; CA: chronological age; CdLS: Cornelia de Lange syndrome; FXS: fragile X syndrome; IQ: intellectual quotient; RTS: Rubinstein-Taybi syndrome; SCAS-P: Spence Children's Anxiety Scale—Parent Version; SCQ: Social Communication Questionnaire; SEN: special educational needs; TD: typically developing; VABS: Vineland Adaptive Behavior Scale
Riby DM, Hancock PJB. Do faces capture the attention of individuals with Williams syndrome or autism? Evidence from tracking eye movements. J Autism Dev Disord. 2009;39(3):421–31. Riby DM, Doherty-Sneddon G, Bruce V. The eyes or the mouth? Feature salience and unfamiliar face processing in Williams syndrome and autism. Q J Exp Psychol. 2009;62(1):189–203. Riby DM, Hancock PJB. Looking at movies and cartoons: eye-tracking evidence from Williams syndrome and autism. J Intellect Disabil Res. 2009;53(2):169–81. Riby DM, Hancock PJB. Viewing it differently: social scene perception in Williams syndrome and autism. Neuropsychologia. 2008;46(11):2855–60. Kirchner JC, Hatri A, Heekeren HR, Dziobeck I. Autistic symptomatology, face processing abilities, and eye fixation patterns.
J Autism Dev Disord. 2011;41(2):158–67. Klin A, Jones W, Schultz R, Volkmar F, Cohen D. Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Arch Gen Psychiatry. 2002;59(9):809–16. Speer LL, Cook AE, McMahon WM, Clark E. Face processing in children with autism effects of stimulus contents and type. Autism. 2007;11(3):265–77. Kliemann D, Dziobeck I, Hatri A, Steimke R, Heekeren HR. Atypical reflexive gaze patterns on emotional faces in autism spectrum disorders. J Neurosci. 2010;30(37):12281–7. Pierce K, Conant D, Hazin R, Stoner R, Desmond J. Preference for geometric patterns early in life as a risk factor for autism. Arch Gen Psychiatry. 2011;68(1):101–9. Klin A, Lin DJ, Gorrindo P, Ramsay G, Jones W. Two-year-olds with autism orient to nonsocial contingencies rather than biological motion. Nature. 2009;459(7244):257–61. Fletcher-Watson S, Leekam SR, Benson V, Frank MC, Findlay JM. Eye-movements reveal attention to social information in autism spectrum disorder. Neuropsychologia. 2009;47(1):248–57. Sasson NJ, Touchstone EW. Visual attention to competing social and object images by preschool children with autism spectrum disorder. J Autism Dev Disord. 2013;44(3):584–92. Chevallier C, Parish-Morris J, McVey A, Rump KM, Sasson NJ, Herrington JD, et al. Measuring social attention and motivation in autism spectrum disorder using eye-tracking: stimulus type matters. Autism Res. 2015. doi:10.1002/aur.1479. White SW, Oswald D, Ollendick T, Scahill L. Anxiety in children and adolescents with autism spectrum disorders. Clin Psychol Rev. 2009;29(3):216–29. Heenan A, Troje NF. Both physical exercise and progressive muscle relaxation reduce the facing-the-viewer bias in biological perception. PLoS ONE. 2014;9(7):e99902. Armstrong T, Olatunji BO. Eye tracking of attention in the affective disorders: a meta-analytic review and synthesis. Clin Psychol Rev. 2012;32(8):704–23. Chawarska K, Macari S, Shic F. Context modulates attention to social scenes in toddlers with autism. J Child Psychol Psychiatry. 2012;53(8):903–13. Pitskel NB, Bolling DZ, Hudac CM, Lantz SD, Minshew NJ, Vander Wyc BC, et al. Brain mechanisms for processing direct and averted gaze in individuals with autism. J Autism Dev Disord. 2011;41(12):1686–93. Riby DM, Doherty-Sneddon G, Bruce V. Exploring face perception in disorders of development: evidence from Williams syndrome and autism. J Neuropsychol. 2008;2(1):47–64. Riby DM, Hancock PJB, Jones N, Hanley M. Spontaneous and cued gaze following in autism and Williams syndrome. J Neurodev Disord. 2013;5:13. Crawford DC, Acuna JM, Sherman SL. FMR1 and the fragile X syndrome: human genome epidemiology review. Genet Med. 2001;3(5):359–71. Turner G, Webb T, Wake S, Robinson H. Prevalence of fragile X syndrome. Am J Med Genet. 1996;64(1):196–7. Cordeiro L, Ballinger E, Hagerman R, Hessl D. Clinical assessment of DSM-IV anxiety disorders in fragile X syndrome: prevalence and characterization. J Neurodev Disord. 2011;3(1):57–67. Merenstein SA, Sobesky WE, Taylor AK, Riddle JE, Tran HX, Hagerman R. Molecular-clinical correlations in males with an expanded FMR1 mutation. Am J Med Genet. 1996;64(2):388–94. Roberts JE, Weisenfeld LAH, Hatton DD, Heath M, Kaufmann WE. Social approach and autistic behavior in children with fragile X syndrome. J Autism Dev Disord. 2007;37(9):1748–60. Kau ASM, Reider EE, Payne L, Meyer WA, Freund LS. Early behavior signs of psychiatric phenotypes in fragile X syndrome. Am J Ment Retard. 
2000;105(4):286–99. Clifford S, Dissanayake C, Bui QM, Huggins R, Taylor AK, Loesch DZ. Autism spectrum phenotype in males and females with fragile X full mutation and premutation. J Autism Dev Disord. 2007;37(4):738–47. Beck B. Epidemiology of Cornelia de Lange's syndrome. Acta Paediatr. 1976;65(4):631–8. Goodban MT. Survey of speech and language skills with prognostic indicators in 116 patients with Cornelia de Lange syndrome. Am J Med Genet. 1993;47(7):1059–63. Collis L, Oliver C, Moss J. Low mood and social anxiety in Cornelia de Lange syndrome. J Intellect Disabil Res. 2006;50:792. Moss J, Oliver C, Berg K, Kaur G, Jephcott L, Cornish K. Prevalence of autism spectrum phenomenology in Cornelia de Lange and Cri du Chat syndromes. Am J Ment Retard. 2008;113(4):278–91. Richards C, Moss J, O'Farrell L, Kaur G, Oliver C. Social anxiety in Cornelia de Lange syndrome. J Autism Dev Disord. 2009;39(8):1155–62. Hennekam RC, Van Den Boogaard M-J, Sibbles BJ, Van Spijker HG. Rubinstein-Taybi syndrome in the Netherlands. Am J Med Genet. 1990;37(56):17–29. Galéra C, Taupiac E, Fraisse S, Naudion S, Toussaint E, Rooryck-Thambo C, et al. Socio-behavioral characteristics of children with Rubinstein-Taybi syndrome. J Autism Dev Disord. 2009;39(9):1252–60. Williams TA, Porter MA, Langdon R. Viewing social scenes: a visual scan-path study comparing fragile X syndrome and Williams syndrome. J Autism Dev Disord. 2013;43(8):1880–94. Dalton KM, Holsen L, Abbeduto L, Davidson RJ. Brain function and gaze fixation during facial-emotion processing in fragile X and autism. Autism Res. 2008;1(4):231–9. Holsen LM, Dalton KM, Johnstone T, Davidson RJ. Prefrontal social cognition network dysfunction underlying face encoding and social anxiety in fragile X syndrome. Neuroimage. 2008;43(3):592–604. Farzin F, Rivera SM, Hessl D. Brief report: visual processing of faces in individuals with fragile X syndrome: an eye tracking study. J Autism Dev Disord. 2009;39(6):946–52. Crawford H, Moss J, Anderson GM, Oliver C, McCleery JP. Implicit discrimination of basic facial expressions of positive/negative emotion in fragile X syndrome and autism spectrum disorder. Am J Intellect Dev Disabil. 2015;120(40):328–45. Shaw TA, Porter MA. Emotion recognition and visual-scan paths in fragile X syndrome. J Autism Dev Disord. 2013;43(5):1119–39. Crawford H, Moss J, McCleery JP, Anderson GM, Oliver C. Face scanning and spontaneous emotion preference in Cornelia de Lange syndrome and Rubinstein-Taybi syndrome. J Neurodev Disord. 2015;7:22. Sparrow SS, Cicchetti DV, Balla DA. Vineland-II adaptive behavior scales: survey forms manual. Circle Pines, MN: AGS Publishing; 2005. Lord C, Rutter M, DiLavore P, Risi S. Autism diagnostic observation schedule: manual. Los Angeles, CA: Western Psychological Services; 2002. Elliot C, Smith P, McCulloch K. British ability scales—second edition. Administration and scoring manual. Windsor, UK: NFER; 1996. Rutter M, Bailey A, Lord C. The social communication questionnaire. Los Angeles, CA: Western Psychological Services; 2003. Spence SH. A measure of anxiety symptoms among children. Behav Res Ther. 1998;36(5):545–66. Moss J, Oliver C, Nelson L, Richards C, Hall S. Delineating the profile of autism spectrum disorder characteristics in Cornelia de Lange and fragile X syndromes. Am J Intellect Dev Disabil. 2013;118(1):55–73. Moss J, Howlin P, Magiati I, Oliver C. Characteristics of autism spectrum disorder in Cornelia de Lange syndrome. J Child Psychol Psychiatry. 2012;53(8):883–91. 
Hennekam RC, Baselier AC, Beyaert E, Bos A. Psychological and speech studies in Rubinstein-Taybi syndrome. Am J Ment Retard. 1992;96(6):645–60. Hennekam RC. Rubinstein–Taybi syndrome. Eur J Hum Genet. 2006;14(9):981–5.
The authors would like to thank all participants and their families. The authors would also like to thank the secondary special educational needs school that helped with recruitment for Study 1 and allowed data to be collected on their premises. The authors are indebted to the Cornelia de Lange Foundation UK & Ireland and the Rubinstein-Taybi Syndrome Support Group for their assistance with recruitment of children and adults with Cornelia de Lange syndrome and Rubinstein-Taybi syndrome, respectively, and for arranging a room to be used for data collection during family meetings. The research reported here was supported by a grant from the Economic and Social Research Council (Grant Number: ES/I901825/1) awarded to HC and by Cerebra.
Author affiliations: Centre for Research in Psychology, Behaviour and Achievement, Coventry University, James Starley Building (JSG12), Priory Street, CV1 5FB, Coventry, UK — Hayley Crawford; Cerebra Centre for Neurodevelopmental Disorders, School of Psychology, University of Birmingham, Birmingham, UK — Joanna Moss & Chris Oliver; Institute of Cognitive Neuroscience, University College London, London, UK — Joanna Moss; School of Psychology, University of Birmingham, Birmingham, UK — Natasha Elliott, Giles M. Anderson & Joseph P. McCleery; School of Psychology, Oxford Brookes University, Oxford, UK — Giles M. Anderson; Center for Autism Research, Children's Hospital of Philadelphia, Philadelphia, PA, USA — Joseph P. McCleery.
Correspondence to Hayley Crawford. All authors contributed to the conception and design of the study, the planning of analyses, and the writing of the manuscript. HC, NE, and GMA were involved in programming the eye-tracking task. HC, JPM, NE, and CO were involved in recruitment, data collection, and interpretation of the data. HC conducted the analysis and wrote the first draft of the manuscript. All authors have read and approved the final version of the manuscript.
Crawford, H., Moss, J., Oliver, C. et al. Visual preference for social stimuli in individuals with autism or neurodevelopmental disorders: an eye-tracking study. Molecular Autism 7, 24 (2016). https://doi.org/10.1186/s13229-016-0084-x
Keywords: Social attention
Truly privacy-preserving federated analytics for precision medicine with multiparty homomorphic encryption
David Froelicher, Juan R. Troncoso-Pastoriza, Jean Louis Raisaro, Michel A. Cuendet, Joao Sa Sousa, Hyunghoon Cho, Bonnie Berger, Jacques Fellay & Jean-Pierre Hubaux
Computational biology and bioinformatics. An Author Correction to this article was published on 11 November 2021.
Using real-world evidence in biomedical research, an indispensable complement to clinical trials, requires access to large quantities of patient data that are typically held separately by multiple healthcare institutions. We propose FAMHE, a novel federated analytics system that, based on multiparty homomorphic encryption (MHE), enables privacy-preserving analyses of distributed datasets by yielding highly accurate results without revealing any intermediate data. We demonstrate the applicability of FAMHE to essential biomedical analysis tasks, including Kaplan-Meier survival analysis in oncology and genome-wide association studies in medical genetics. Using our system, we accurately and efficiently reproduce two published centralized studies in a federated setting, enabling biomedical insights that are not possible from individual institutions alone. Our work represents a necessary key step towards overcoming the privacy hurdle in enabling multi-centric scientific collaborations. A key requirement for fully realizing the potential of precision medicine is to make large amounts of medical data interoperable and widely accessible to researchers. Today, however, medical data are scattered across many institutions, which renders centralized access and aggregation of such data challenging, if not impossible. The challenges are not due to the technical hurdles of transporting high volumes of heterogeneous data across organizations but to the legal and regulatory barriers that make the transfer of patient-level data outside a healthcare provider complex and time-consuming.
Moreover, stringent data protection and privacy regulations (e.g., General Data-Protection Regulation (GDPR)1) strongly restrict the transfer of personal data, including even pseudonymized data, across jurisdictions. Federated analytics (FA) is emerging as a new paradigm that seeks to address the data governance and privacy issues related to medical-data sharing2,3,4. FA enables different healthcare providers to collaboratively perform statistical analyses and to develop machine-learning models, without exchanging the underlying datasets. Only aggregated results or model updates are transferred. In this way, each healthcare provider can define its own data governance and maintain control over the access to its patient-level data. FA offers opportunities for exploiting large and diverse volumes of data distributed across multiple institutions. These opportunities can facilitate the development and validation of artificial intelligence algorithms that yield more accurate, unbiased, and generalizable clinical recommendations, as well as accelerate novel discoveries. Such advances are particularly important in the context of rare diseases or medical conditions, where the number of affected patients in a single institution is often not sufficient to identify meaningful statistical patterns with enough statistical power. The adoption of FA in the medical sector, despite its potential, has been slower than expected. This is in large part due to the unresolved privacy issues of FA, related to the sharing of model updates or partial data aggregates in cleartext. Indeed, despite patient-level data not being transferred between the institutions engaging in FA, it has been shown that the model updates (or partial aggregates) themselves can, under certain circumstances, leak sensitive personal information about the underlying individuals, thus leading to re-identification, membership inference, and feature reconstruction5,6. Our work focuses on overcoming this key limitation of existing FA approaches. We note that limited data interoperability across different healthcare providers is another potential challenge in deploying FA; this, in practice, can be surmounted by harmonizing the data across institutions before performing the analysis. Several open-source software platforms have recently been developed to provide users streamlined access to FA algorithms3,7,8. For example, DataSHIELD7 is a distributed data analysis and a machine-learning (ML) platform based on the open-source software R. However, none of these platforms address the aforementioned problem of indirect privacy leakages that stem from their use of "vanilla" federated learning. Hence, it remains unclear whether these existing solutions are able to substantially simplify regulatory compliance, compared to more conventional workflows that centralize the data9,10,11, if the partial aggregates and model updates could still be considered as personal identifying data5,6,12,13,14. More sophisticated solutions for FA, which aim to provide end-to-end privacy protection, including for the shared intermediate data, have been proposed15,16,17,18,19,20,21,22,23,24,25. These solutions use techniques such as differential privacy (diffP)26, secure multiparty computation (SMC), and homomorphic encryption (HE). However, these techniques often achieve stronger privacy protection at the expense of accuracy or computational efficiency, thus limiting their applicability. 
Existing diffP techniques for FA, which prevent privacy leakage from the intermediate data by adding noise to it before sharing, often require prohibitive amounts of noise, which leads to inaccurate models. Furthermore, there is a lack of consensus around how to set the privacy parameters for diffP in order to provide acceptable mitigation of inference risks in practice27. SMC and HE are cryptographic frameworks for securely performing computation over private datasets (pooled from multiple parties in the context of FA, in an encrypted form) without any intermediate leakage, but both come with notable drawbacks. SMC incurs a high network-communication overhead and has difficulty scaling to a large number of data providers (DPs). HE imposes high storage and computational overheads and introduces a single point of failure in the standard centralized setup, where a single party receives all encrypted datasets to securely perform the joint computation. Distributed solutions based on HE21,22,23,28 have also been proposed to decentralize both the computational burden and the trust, but existing solutions address only simple calculations (e.g., counts and basic sample statistics) and are not suited for complex tasks. Here, we present FAMHE, an approach, based on multiparty homomorphic encryption (MHE)29, to privacy-preserving FA, and we demonstrate its ability to enable efficient federated execution of two fundamental workflows in biomedical research: Kaplan–Meier survival analysis and genome-wide association studies (GWAS). MHE is a recently proposed multiparty computation framework based on HE; it combines the power of HE to perform computation on encrypted data without communication between the parties, with the benefits of interactive protocols, which can simplify certain expensive HE operations. Building upon the MHE framework, we introduce an approach to FA, where each participating institution performs local computation and encrypts the intermediate results by using MHE; the results are then combined (e.g., aggregated) and distributed back to each institution for further computation. This process is repeated until the desired analysis is completed. Contrary to diffP-based approaches that rely on obfuscation techniques to mitigate the leakage in intermediate results, by sharing only encrypted intermediate results, FAMHE provides end-to-end privacy protection, without sacrificing accuracy. By sharing only encrypted information, our approach guarantees that, whenever needed, a minimum level of obfuscation can be applied only to the final result in order to protect it from inference attacks, instead of being applied to all intermediate results. Furthermore, FAMHE improves over both SMC and HE approaches by minimizing communication, by scaling to large numbers of DPs, and by circumventing expensive noninteractive operations (e.g., bootstrapping in HE). Our work also introduces a range of optimization techniques for FAMHE, including optimization of the local vs. collective computation balance, ciphertext packing strategies, and polynomial approximation of complex operations; these techniques are instrumental in our efficient design of FAMHE solutions for survival analysis and GWAS. We demonstrate the performance of FAMHE by replicating two published multicentric studies that originally relied on data centralization. 
These include a study of metastatic cancer patients and their tumor mutational burden (TMB)30, and a host genetic study of human immunodeficiency virus type 1 (HIV-1)-infected patients31. By distributing each dataset across multiple DPs and by performing federated analyses using our approach, we successfully recapitulated the results of both original studies. Our solutions are efficient in terms of both execution time and communication, e.g., completing a GWAS over 20K patients and four million variants in <5 h. In contrast to most prior work on biomedical FA, which relied on artificial datasets15,17,23,32, our results closely reflect the potential of our approach in real application settings. Furthermore, our approach has the potential to simplify the requirements for contractual agreements and the obligations of data controllers that often hinder multicentric medical studies, because data processed by using MHE can be considered anonymous data under the GDPR12. Our work shows that FAMHE is a practical framework for privacy-preserving FA for biomedical workflows and it has the power to enable a range of analyses beyond those demonstrated in this work. Overview of FAMHE In FAMHE, we rely on MHE to perform privacy-preserving FA by pooling the advantages of both interactive protocols and HE and by minimizing their disadvantages. In particular, by relying on MHE and on the distributed protocols for FA proposed by Froelicher et al.24, our approach enables several sites to compute on their local patient-level data and then encrypt (Local Computation & Encryption in Fig. 1) and homomorphically combine their local results under MHE (Collective Aggregation (CA) in Fig. 1). These local and global steps can be repeated (Iterate in Fig. 1), depending on the analytic task. At each new iteration, participating sites use the encrypted combination of the results of the previous iteration to compute on their local data without the need for decryption, e.g., gradient-descent steps in the training of a regression model. The collectively encrypted and aggregated final result is eventually switched (Collective Key Switching in Fig. 1) from encryption under the collective public key to encryption under the querier's public key (the blue lock in Fig. 1) such that only the querier can decrypt. The use of MHE ensures that the secret key of the underlying HE scheme never exists in full. Instead, the control over the decryption process is distributed across all participating sites, each one holding a fragment of the decryption key. This means that all participating sites have to agree to enable the decryption of any piece of data and that no single entity alone can decrypt the data. As described in System and Threat Model in the "Methods" section, FAMHE is secure in a passive adversarial model in which all but one DPs can be dishonest and collude among themselves. Fig. 1: System Model and FAMHE workflow. All entities are interconnected (dashed lines) and communication links at each step are shown by thick arrows. All entities (data providers (DPs) and querier) are honest but curious and do not trust each other. In 1. the querier sends the query (in clear) to all the DPs who (2.) locally compute on their cleartext data and encrypt their results with the collective public key. In 3. the DPs' encrypted local results are aggregated. For iterative tasks, this process is repeated (Iterate). In 4. the final result is then collectively switched by the DPs from the collective public key to the public key of the querier. 
In 5. the querier decrypts the final result. FAMHE builds upon optimization techniques for enabling the efficient execution of complex iterative workflows: (1) by relying on edge computing and optimizing the use of computations on the DPs' cleartext data; (2) by relying on the packing ability of the MHE scheme to encrypt a vector of values in a single ciphertext such that any computation on a ciphertext is performed simultaneously on all the vector values, i.e., Single Instruction, Multiple Data (SIMD); (3) by further building on this packing property to optimize the sequence of operations by formatting a computation output correctly for the next operation; (4) by approximating complex computations such as matrix inversion (i.e., division) by polynomial functions (additions and multiplications) to efficiently compute them under HE; and (5) by replacing expensive cryptographic operations by lightweight interactive protocols. Note that FAMHE avoids the use of centralized complex cryptographic operations that would require a more conservative parameterization and would result in higher computational and communication overheads (e.g., due to the use of larger ciphertexts). Therefore, FAMHE efficiently minimizes the computation and communication costs for a high security level. We provide more details of our techniques in the "Methods" section. We implemented FAMHE based on Lattigo33, an open-source Go library for multiparty lattice-based homomorphic encryption. We chose the security parameters to always ensure a 128-bit security level. We refer to the "Methods" section for a detailed configuration of FAMHE used in our experiments. To demonstrate the performance of FAMHE, we developed efficient FA solutions based on FAMHE and our optimization techniques for two essential biomedical tasks: Kaplan–Meier survival analysis and GWAS. We present the results of these solutions on real datasets from two peer-reviewed studies that were originally conducted by centralizing the data from multiple institutions. Multicentric Kaplan–Meier survival analysis using FAMHE Kaplan–Meier survival analysis is a widely used method to assess patients' response (i.e., survival) over time to a specific treatment. For example, in a recent study, Samstein et al.30 demonstrated that the TMB is a predictor of clinical responses to immune checkpoint inhibitor (ICI) treatments in patients with metastatic cancers. To obtain this conclusion, they computed Kaplan–Meier overall survival (OS) curves of 1662 advanced-cancer patients treated with ICI, stratified by TMB values. OS was measured from the date of first ICI treatment to the time of death or the last follow-up. In Fig. 2, we show the survival curves obtained from the original centralized study (Centralized, Nonsecure) and those obtained through our privacy-preserving federated workflow of FAMHE executed among three DPs. Note that for FAMHE, to illustrate the workflow of federated collaboration, we distributed the dataset across the DPs, each hosted on a different machine. FAMHE's analysis is then performed with each DP having access only to the locally held patient-level data, thus closely reflecting a real collaboration setting that involves independent healthcare centers. As a result, our federated solutions circumvent the privacy risks associated with data centralization in the original study. We observed that FAMHE produces survival curves identical to those of the original nonsecure approach.
By using either approach, we are able to derive the key conclusion that the benefits of ICI increase with TMB. Fig. 2: Secure and distributed reproduction of a survival-curve study. a Survival curves generated in a centralized nonsecure manner and with FAMHE on the data used by Samstein et al.30. TMB stands for tumor mutational burden. With FAMHE, the original data are split among three data providers, and the querier obtains exact results. The table in a displays the number of patients at risk at a specific time. The exact same numbers are obtained with the centralized, nonsecure solution and with FAMHE. b FAMHE execution time for the computation of one (or multiple) survival curve(s) with a maximum of 8192 time points. For both the aggregation and key switching (from the collective public key to the querier's key), most of the execution time is spent in communication (up to 98%), as the operations on the encrypted data are lightweight and parallelized on multiple levels, i.e., among the data providers and among the encrypted values. In Fig. 2b, we show that FAMHE produces exact results while maintaining computational efficiency, as the computation of the survival curves shown in Fig. 2a is executed in < 12 s, even when the data are scattered among 96 DPs. We also observe that the execution time is almost independent of the DPs' dataset size, as the same experiment performed on a 10× larger dataset (replicated 10×) takes almost exactly the same amount of time. We show that FAMHE's execution time remains below 12 s for up to 8192 time points. We note that, in this particular study, the number of time points (instants at which an event can occur) is smaller than 200, due to the rounding off of survival times to months. In summary, the FAMHE-based Kaplan–Meier estimator produces precise results and scales efficiently with the number of time points, each DP's dataset size, and the number of DPs. We remark that the hazard ratio, which is often computed in survival-curve studies, can be directly estimated by the querier, based on the final result34. It is also possible to compute the hazard ratios directly by following the general workflow of FAMHE described in Fig. 1. This requires the training of proportional-hazard regression models, which are closely related to the generalized linear models35 that our GWAS solution also utilizes. Multicentric GWAS using FAMHE GWAS are a fundamental analysis tool in medical genetics that identifies genetic variants that are statistically associated with given traits, such as disease status. GWAS have led to numerous discoveries about human health and biology, and ongoing efforts to collect larger and more diverse cohorts continue to improve the power of GWAS and their relevance to diverse human populations. As we progress toward precision medicine and genetic sequencing becomes more broadly incorporated into routine patient care, large-scale GWAS that span multiple medical institutions will become increasingly valuable. Here, we demonstrate the potential of FAMHE to enable multicentric GWAS that fully protect the privacy of patients' data throughout the analysis. We evaluated our approach on a GWAS dataset from McLaren et al.31; they studied the host genetic determinants of HIV-1 viral load in an infected population of European individuals.
It is known that the viral load observed in an asymptomatic patient after primary infection positively correlates with the rate of disease progression; this is the basis for the study of how host genetics modulates this phenotype. We obtained the available data for a subset of the cohort including 1857 individuals from the Swiss HIV Cohort Study, with 4,057,178 genotyped variants. The dataset also included 12 covariates that represent ancestry components, which we also used in our experiments to correct for confounding effects. To test our federated-analysis approach, we distributed, in a manner analogous to the survival analysis experiments, the GWAS dataset across varying numbers of DPs. Following the approach of McLaren et al.31, we performed GWAS using linear regression of the HIV-1 viral load on each of the more than four million variants, always including the covariates. To enable this large-scale analysis in a secure and federated manner, we developed two complementary approaches based on our system: FAMHE-GWAS and FAMHE-FastGWAS. FAMHE-GWAS performs exact linear regression and incurs no loss of accuracy, whereas FAMHE-FastGWAS achieves faster runtime through iterative optimization at a small expense of accuracy. We believe that both modes are practical and that the choice between them would depend on the study setting. Importantly, both solutions do not reveal intermediate results at any point during the computation, and any data exchanged between the DPs to facilitate the computation are always kept hidden by collective encryption. We also emphasize that the DPs in both solutions utilize their local cleartext data and securely aggregate encrypted intermediate results, following the workflow presented in Fig. 1. Both our solutions use a range of optimized computational routines that we developed in this work to carry out the sophisticated operations required in GWAS by using MHE. In FAMHE-GWAS, we exploit the fact that the same set of covariates are included in all regression models by computing once the inverse covariance matrix of the covariates, then for each variant computing an efficient update to the inverse matrix to reflect the contribution of each given variant. Our solution employs efficient MHE routines for each of these steps, including matrix inversion. In FAMHE-FastGWAS, we first subtract the covariate contributions from the phenotype by training once a linear model including only the covariates. We then train in parallel univariate models for all four million variants. We perform this step efficiently by using the stochastic gradient-descent algorithm implemented with MHE. Taken together, these techniques illustrate the computational flexibility of FAMHE and its potential to enable a wide range of analyses. Further details of our solutions are provided in the "Methods" section. We compare FAMHE-GWAS and FAMHE-FastGWAS against (i) Original, the centralized nonsecure approach adopted by the original study, albeit on the Swiss HIV Cohort Study dataset, (ii) Meta-analysis36, a solution in which each DP locally and independently performs GWAS to obtain summary statistics that are then shared and combined (through the weighted Z test) across DPs to produce a single statistic for each variant that represents its overall association with the target phenotype, and (iii) Independent, a solution in which a DP uses only its part of the dataset to perform GWAS. 
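For context on the Meta-analysis baseline just described, the sketch below shows one common form of the weighted Z-test (Stouffer-style combination with sample-size weights). This is a minimal illustration, not the exact weighting implemented by PLINK; all function and variable names are hypothetical.

```python
# Minimal sketch of a weighted Z-test meta-analysis for one variant, assuming
# each DP reports a two-sided P value, an effect direction, and its sample size.
# Sample-size weights w_i = sqrt(N_i) are one common convention.
from math import sqrt
from scipy.stats import norm

def weighted_z_meta(p_values, effect_signs, sample_sizes):
    """Combine per-DP GWAS summary statistics for a single variant."""
    z_scores = [
        sign * norm.isf(p / 2)          # two-sided P value -> signed Z score
        for p, sign in zip(p_values, effect_signs)
    ]
    weights = [sqrt(n) for n in sample_sizes]
    z_comb = sum(w * z for w, z in zip(weights, z_scores)) / sqrt(sum(w * w for w in weights))
    return 2 * norm.sf(abs(z_comb))     # combined two-sided P value

# Example: three DPs reporting the same direction of effect for one variant.
print(weighted_z_meta([1e-4, 3e-3, 2e-2], [+1, +1, +1], [600, 650, 607]))
```

As noted in the text, the same combination can be executed securely by encrypting each DP's local summary statistics and aggregating them under the FAMHE workflow.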
For all baseline approaches, we used the PLINK36 software to perform the analysis (see "Methods" section for the detailed procedure). Note that Meta-analysis can also be securely executed by first encrypting each DP's local summary statistics and then following the FA workflow presented in Fig. 1. The Manhattan plots visualizing the GWAS results obtained by each method are shown in Fig. 3a. Both our FAMHE-based methods produced highly accurate outputs that are nearly indistinguishable from the Original results. Consequently, our methods successfully implicated the same genomic regions with genome-wide significance found by Original, represented by the strongest associated single-nucleotide polymorphisms (SNPs) rs7637813 on chromosome 3 (nominal P = 7.2 × 10−8) and rs112243036 on chromosome 6 (P = 7.0 × 10−21). Notably, both these SNPs are in close vicinity to the two strongest signals reported by the original study31: rs1015164 at a distance of 9 kbp and rs59440261 at a distance of 42 kbp, respectively. The former is found in the major histocompatibility complex region, and the latter is near the CCR5 gene; both have established connections to HIV-1 disease progression31. Although these two previously reported SNPs were not available in our data subset to be analyzed, we reasonably posit that our findings capture the same association signals as in the original study, related through linkage disequilibrium. Regardless, we emphasize that our federated-analysis results closely replicated the centralized analysis of the same dataset we used in our analysis. Fig. 3: Comparison of the GWAS results obtained with different approaches with 12 DPs (when applicable). a Original is considered as the ground truth and is obtained on a centralized cleartext dataset by relying on the PLINK36 software. Panels (c) and (e) are also obtained with PLINK (see "Methods" section and Supplementary Fig. 4). Panels (b) and (d) are the results obtained with FAMHE-GWAS and FAMHE-FastGWAS, respectively. In the original study and in our secure approach, genome-wide signals of association (P < 5 × 10−7, dotted line) were observed on chromosomes 6 and 3. The P values shown are nominal values without multiple testing correction and are obtained using standard two-sided t tests for testing whether the linear regression coefficient associated with a variant is nonzero. In contrast, the Meta-analysis approach, although successfully applied in many studies, severely underperformed in our experiments by reporting numerous associations that are likely spurious. We believe this observation highlights the limitation of meta-analyses when the sample sizes of individual datasets are limited. Similarly, the Independent approach obtained noisy results, which was further compounded by the issue of limited statistical power (for results obtained by every DP, see Supplementary Fig. 4). We complement these comparisons with Table 1, which quantifies the error in the reported negative logarithm of the P value (−log10(P)), as well as in the regression weights (w), for all of the considered approaches compared to Original. We observed that FAMHE-FastGWAS yields an average absolute error always smaller than 10−2, which ensures accurate identification of association signals. FAMHE-GWAS further reduces the error by roughly a factor of three to obtain even more accurate results. By comparison, the Meta-analysis and Independent approaches result in considerably larger errors.
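The accuracy metrics reported in Table 1 amount to simple averaged absolute differences against the Original results. The following minimal sketch, with hypothetical per-variant arrays, shows the computation we assume is meant.

```python
# Sketch of the accuracy metrics reported in Table 1: average absolute error of
# -log10(P) and of the regression weights w, relative to the Original baseline.
import numpy as np

def avg_abs_errors(p_method, p_original, w_method, w_original):
    log_err = np.mean(np.abs(-np.log10(p_method) - (-np.log10(p_original))))
    w_err = np.mean(np.abs(w_method - w_original))
    return log_err, w_err

# Hypothetical per-variant values (one entry per tested variant).
p_fast, p_orig = np.array([2.1e-8, 4.0e-3]), np.array([2.0e-8, 4.1e-3])
w_fast, w_orig = np.array([0.120, 0.031]), np.array([0.121, 0.030])
print(avg_abs_errors(p_fast, p_orig, w_fast, w_orig))
```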
Table 1 Absolute averaged error on the logarithm of the P values (−log10(P)) and on the model weights (w) between Original, Independent, and federated approaches. FAMHE scales efficiently in all dimensions: number of DPs, samples, and variants (Fig. 4). As shown in Fig. 4a, FAMHE's runtime decreases when the workload is distributed among more DPs, and it is below 1 h for a GWAS jointly performed by 12 DPs on more than 4 million variants with FAMHE-FastGWAS. Figure 4a also shows that in a wide-area network, where the bandwidth is halved (from 1 Gbps to 500 Mbps) and the delay doubled (from 20 to 40 ms), FAMHE's execution time increases by a maximum of 26% over all experiments. FAMHE's execution time grows linearly with the number of patients (or samples) and variants (Fig. 4b, c). In all experiments, the communication accounts for between 4 and 55% of FAMHE's total execution time. As described in the "Methods" section, FAMHE computes the P values of multiple (between 512 and 8192) variants in parallel, due to the SIMD property of the cryptographic scheme; the computation is further parallelized among the DPs and by multithreading at each DP. FAMHE is therefore highly parallelizable, i.e., doubling the number of available threads would almost halve the execution time. Finally, FAMHE-GWAS, which performs exact linear regression, further reduces the error (by a factor of 3× compared to FAMHE-FastGWAS), but its execution times are generally higher than those of FAMHE-FastGWAS. Fig. 4: FAMHE scaling. a FAMHE's scaling with the number of data providers, b with the size of the dataset, and c with the number of variants considered in the GWAS. Panel (d) is the legend box for (a–c). In (a), we also observe the effect of a reduced available bandwidth (from 1 Gbps to 500 Mbps) and increased communication delay (from 20 to 40 ms) on FAMHE's execution time. The original dataset containing 1857 samples and four million variants is evenly split among the data providers. By default, the number of DPs is fixed to 6. These results demonstrate the ability of FAMHE to enable the execution of FA workflows on data held by large numbers of DPs who keep their data locally while allowing full privacy with no loss of accuracy. To our knowledge, no other existing approaches achieve all of these properties: the FA approaches that share intermediate analysis results in cleartext among the DPs offer limited privacy protection or, when used together with diffP techniques to mitigate leakage, sacrifice accuracy. Meta-analysis approaches yield imprecise results compared to joint analysis, especially in settings where each DP has access to small cohorts, as we have shown. According to our estimates, centralized HE-based solutions have execution times that are 1–3 orders of magnitude greater than FAMHE due to the overhead of centralized computation, as well as compute-intensive cryptographic operations required by centralized HE (e.g., bootstrapping). Finally, SMC approaches, although an alternative for a small network of 2–4 DPs, have difficulty supporting a large number of DPs, due to their high communication overhead. Note that the communication of SMC scales with the combined size of all datasets, whereas FAMHE shares only aggregate-level data, thus vastly reducing the communication burden. We provide a more detailed discussion of existing solutions and estimates of their computational costs in Supplementary Note 5. Here, we have demonstrated that efficient privacy-preserving federated-analysis workflows for complex biomedical tasks are attainable.
Our efficient solutions for survival analysis and GWAS, based on our paradigm FAMHE, accurately reproduced published peer-reviewed studies while keeping the dataset distributed across multiple sites and ensuring that the shared intermediate data do not leak any private information. Alternative approaches based on meta-analysis or independent analysis of each dataset led to noisy results in our experiments, illustrating the benefits of our federated solutions. The fact that FAMHE led to practical federated algorithms for both the statistical calculations required by Kaplan–Meier curves and the large-scale regression tasks of GWAS reflects the ability of FAMHE to enable a wide range of other analyses in biomedical research, such as cohort exploration and the training and evaluation of disease risk prediction models. Conceptually, FAMHE represents a novel approach to FA; it has not been previously explored for complex biomedical tasks. FAMHE combines the strengths of both conventional federated-learning approaches and cryptographic frameworks for secure computation. Like federated learning, FAMHE scales to large numbers of DPs and enables noninteractive local computation over each institution's dataset (available locally in cleartext), an approach that minimizes the computational and communication burdens that cryptographic solutions17,18,21,22,23,37 typically suffer from. However, FAMHE draws from the cryptographic framework of MHE to enable secure aggregation and local computation of intermediate results in an encrypted form. This approach departs from the existing federated-learning solutions2,3,7,15,16,20 that largely rely on data obfuscation to mitigate leakage in the intermediate data shared among the institutions. Our approach thus provides more rigorous privacy protection. In other words, in FAMHE, accuracy is traded off only against performance, as in nonsecure federated approaches; unlike in obfuscation-based solutions, privacy protection is never weakened to gain accuracy. We summarize our comparison of FAMHE with existing works in Supplementary Table 1, Supplementary Notes 2 and 5, and we refer to the "Methods" section for more details. The fact that FAMHE shares only encrypted data among the DPs has important implications for its suitability for regulatory compliance and its potential to catalyze future efforts for multicentric biomedical studies. In recent work, it has been established by privacy law experts that data processed using MHE can be considered "anonymous" data under the GDPR12. Anonymous data, which refers to data that require unreasonable efforts to re-identify the source individuals, lies outside the jurisdiction of GDPR. Therefore, our approach has the potential to significantly simplify the requirements for contractual agreements and the obligations of data controllers with respect to regulations, such as GDPR, that often hinder multicentric medical studies. In contrast, existing FA solutions, where the intermediate results are openly shared, present more complicated paths toward compliance, as intermediate results could still be considered personal data6,13,14. In cases where the potential leakage of privacy in the final output of the federated analysis is a concern, diffP techniques can be easily incorporated into FAMHE by adding a small perturbation to the final results before they are revealed.
In contrast to the conventional federated-learning approach, which requires each DP to perturb its local results before aggregating them with other parties, FAMHE enables the DPs to keep the local results encrypted and to reveal only the final aggregated results. Therefore, FAMHE can use a smaller amount of added noise and achieve the same level of privacy38. Notably, the choice of diffP parameters suitable for analyses with a high-dimensional output, such as GWAS, can be challenging and needs to be further explored. There are several directions in which our work could be extended to facilitate the adoption of FAMHE. Although we reproduced published studies by distributing a pooled dataset across a group of DPs, jointly analyzing, by using FAMHE, multiple datasets that could not be combined otherwise would be a challenging yet important milestone for this endeavor. Our work demonstrates FAMHE's applicability on a reliable baseline and constitutes an important and necessary step towards building trust in our technology and fostering its adoption, thus enabling its use for the discovery of new scientific insights. Furthermore, we will extend the capabilities of FAMHE by developing additional protocols for a broader range of standard analysis tools and ML algorithms in biomedical research (e.g., proportional-hazard regression models). A key step in this direction is to make our implementation of FAMHE easily configurable by practitioners for their own applications. Specifically, connecting FAMHE to existing user-friendly platforms such as MedCo39 to make it widely available would help empower the increasing efforts to launch multicentric medical studies and accelerate scientific discoveries. Here, we describe FAMHE's system and threat model, before detailing the execution of the privacy-preserving pipelines for survival curves and GWAS. Finally, we detail our experimental settings and explain how diffP can be ensured on the final result in FAMHE. System and Threat Model FAMHE supports a network of mutually distrustful medical institutions that act as DPs and hold subjects' records. An authorized querier (see Fig. 1) can run queries without threatening data confidentiality or the subjects' privacy. The DPs and the querier are assumed to follow the protocol and to provide correct inputs. All but one of the DPs can be dishonest, i.e., they can try to infer information about other DPs by using the protocol's outputs. We assume that the DPs are available during the complete execution of a computation. However, to account for unresponsive DPs, FAMHE can use a threshold-encryption scheme, where the DPs secret-share40 their secret keys, thus enabling a subset of the DPs to perform the cryptographic interactive protocols. FAMHE can be extended to withstand malicious behaviors. A malicious DP can try to disrupt the federated collaboration process, e.g., by performing wrong computations or inputting wrong results. This can be partially mitigated by requiring the DPs to publish transcripts of their computations and to produce zero-knowledge proofs of range41, thus constraining the DPs' possible inputs. Also, the querier can try to infer information about a DP's local data from the final result. FAMHE can mitigate this inference attack by limiting the number of requests that a querier can perform and by adding noise to the final result (see "Discussion") to achieve diffP guarantees.
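To illustrate the output perturbation mentioned above (noise added only once, to the final decrypted result), here is a minimal sketch of a Laplace mechanism applied after decryption. The sensitivity and ϵ values are placeholders; the calibration Δf/ϵ is discussed in the differentially private mechanism paragraph of the Methods.

```python
# Minimal sketch of (epsilon, 0)-diffP output perturbation applied to the final,
# already aggregated result; sensitivity and epsilon below are placeholders.
import numpy as np

rng = np.random.default_rng()

def perturb_final_result(result, sensitivity, epsilon):
    """Add Laplace noise with scale sensitivity/epsilon to each released value."""
    scale = sensitivity / epsilon
    return result + rng.laplace(loc=0.0, scale=scale, size=np.shape(result))

aggregated_counts = np.array([412.0, 398.0, 377.0])  # e.g., aggregated at-risk counts n_j
print(perturb_final_result(aggregated_counts, sensitivity=1.0, epsilon=1.0))
```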
Learning how to select the privacy parameters and to design a generic solution to apply these techniques to the wide range of applications enabled by FAMHE is part of future work. FAMHE's optimization techniques Here, we describe the main optimization techniques introduced in FAMHE. We then explain how these optimizations are used in FAMHE to compute survival curves and GWAS. In order to parallelize and efficiently perform computationally intensive tasks, we rely on the SIMD property of the underlying cryptographic scheme and on edge computing, i.e., the computations are pushed to the DPs. In MHE, a ciphertext encrypts a vector of N values, and any operation (i.e., addition, multiplication, and rotation) performed on the ciphertext is executed on all the values simultaneously, i.e., SIMD. After a certain number of operations, the ciphertext needs to be refreshed, i.e., bootstrapped. A rotation is, in terms of computation complexity, one order of magnitude more expensive than an addition/multiplication, and a bootstrapping in a centralized setting is multiple orders of magnitude (2–4) more expensive than any other operation. As the security parameters determine how many operations can be performed before a ciphertext needs to be bootstrapped, conservative parameters that incur large ciphertexts but enable more operations without bootstrapping are usually required in centralized settings. This results in higher communication and computation costs. With MHE, a ciphertext can be refreshed by a lightweight interactive protocol that, besides its efficiency, also alleviates the constraints on the cryptographic parameters and enables FAMHE to ensure a high level of security and still use smaller ciphertexts. For example, we show in Fig. 2b how FAMHE's execution time to compute a survival curve increases when doubling the size of a ciphertext (from 4096 to 8192 slots). As discussed in the privacy-preserving pipeline for GWAS, in the case of GWAS, FAMHE efficiently performs multiple subsequent large-dimension matrix operations (Supplementary Fig. 2) by optimizing the data packing (Supplementary Fig. 3) to perform several multiplications in parallel and to minimize the number of transformations required on the ciphertexts. FAMHE builds on the DPs' ability to compute on their cleartext local data and combine them with encrypted data, thus reducing the overall computation complexity. GWAS also requires non-polynomial functions, e.g., the inverse of a matrix, to be evaluated on ciphertexts, which is not directly possible in HE. In FAMHE, these non-polynomial functions are efficiently approximated by relying on Chebyshev polynomials. We chose to rely on Chebyshev polynomials instead of least-squares polynomial approximations in order to minimize the maximum approximation error and hence to avoid divergence of the approximation on specific inputs. This technique has been shown to accurately approximate non-polynomial functions in the training of generalized models24 and neural networks42, which further shows the generality and applicability of our proposed framework. FAMHE combines the aforementioned features to efficiently perform FA with encrypted data. In GWAS, for example, we rely on the Gauss–Jordan (GJ) method43 to compute the inverse of the covariance matrix. We chose this algorithm as it can be efficiently executed by relying on the aforementioned features: row operations can be efficiently parallelized with SIMD and divisions are replaced by polynomial approximations.
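To give a feel for the kind of polynomial approximation described above, the following sketch approximates 1/x by interpolation at Chebyshev nodes on a bounded interval, so that evaluating the "division" only requires additions and multiplications. The interval and degree are illustrative choices, not the parameters used in FAMHE, and the sketch uses NumPy's Chebyshev interpolation helper rather than any HE library.

```python
# Sketch: approximate f(x) = 1/x on a bounded interval by interpolation at
# Chebyshev nodes, so that only additions and multiplications are needed to
# evaluate it (the operations available on ciphertexts).
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

lo, hi, degree = 0.1, 2.0, 31
inv = Chebyshev.interpolate(lambda x: 1.0 / x, deg=degree, domain=[lo, hi])

xs = np.linspace(lo, hi, 2000)
print("max |approx - 1/x| on the interval:", np.max(np.abs(inv(xs) - 1.0 / xs)))
```

Interpolation at Chebyshev nodes keeps the worst-case error small over the whole interval, which is the property argued for above when comparing against least-squares fits.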
Privacy-preserving pipeline for survival curves Survival curves are generally estimated with the Kaplan–Meier estimator44 $$\hat{S}(t)=\prod_{j,\,t_j\le t}\left(1-\frac{d_j}{n_j}\right),$$ where \(t_j\) is a time when at least one event has occurred, \(d_j\) is the number of events at time \(t_j\), and \(n_j\) is the number of individuals known to have survived (or at risk) just before the time point \(t_j\). We show in Fig. 2a the exact replica of the survival curve presented by Samstein et al.30 produced by our distributed and privacy-preserving computation. In a survival curve, each step down is the occurrence of an event. The ticks indicate the presence of censored patients, i.e., patients who withdrew from the study. The number of censored patients at time \(t_j\) is indicated by \(c_j\). As shown in Supplementary Fig. 1, to compute this curve, each DP \(i\) locally computes, encodes, and encrypts a vector of the form \(n_0^{(i)},c_0^{(i)},d_0^{(i)},\ldots,n_T^{(i)},c_T^{(i)},d_T^{(i)}\) containing the values \(n_j^{(i)},c_j^{(i)},d_j^{(i)}\) corresponding to each time point \(t_j\) for \(t_j=0,\ldots,T\). All the DPs' vectors are then collectively aggregated. The encryption of the final result is then collectively switched from the collective public key to the querier's public key, so that the querier can decrypt the result with its secret key and generate the curve following Eq. (1). Privacy-preserving pipeline for GWAS We briefly describe the genome-wide association-study workflow before explaining how we perform it in a federated and privacy-preserving manner. We conclude by detailing how we obtained our baseline GWAS results in "Results" with the PLINK software. We consider a dataset of p samples, i.e., patients. Each patient is described by f features or covariates (with indexes 1 to f). We list all recurrent symbols and acronyms in Supplementary Table 3. Hence, we have a covariates matrix \(\mathbf{X}\in\mathbb{R}^{p\times f}\). Each patient also has a phenotype or label, i.e., \(\mathbf{y}\in\mathbb{R}^{p\times 1}\), and v variant values, i.e., one for each variant considered in the association test. The v variant values for all p patients form another matrix \(\mathbf{V}\in\mathbb{R}^{p\times v}\). To perform the GWAS, for each variant i, the matrix \(\mathbf{X}^{\prime}=[\mathbf{1},\mathbf{X},\mathbf{V}[:,i]]\in\mathbb{R}^{p\times (f+2)}\), i.e., the matrix X augmented by a column of 1s (intercept) and the column of one variant i, is constructed. The vector \(\mathbf{w}\in\mathbb{R}^{f+2}\) is then obtained by \(\mathbf{w}=(\mathbf{X}^{\prime T}\mathbf{X}^{\prime})^{-1}\mathbf{X}^{\prime T}\mathbf{y}\).
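As a cleartext reference for the two querier-side results described in this section, the following minimal sketch (toy data, hypothetical helper names) computes the Kaplan–Meier curve from the aggregated counts and the per-variant weight vector w just defined; in FAMHE, the aggregation itself happens under collective encryption, and the P value is then derived from w as shown next.

```python
# Cleartext reference sketch, on toy data, of the two computations described in
# this section; in FAMHE only encrypted aggregates are exchanged and only the
# final result is decrypted by the querier.
import numpy as np

def km_curve(n, d):
    """Kaplan-Meier estimate from aggregated at-risk (n_j) and event (d_j) counts.

    Censoring counts c_j affect n_j but do not enter the product directly."""
    return np.cumprod(1.0 - np.asarray(d) / np.asarray(n))

print(km_curve(n=[100, 92, 85, 70], d=[3, 2, 4, 1]))

def variant_weights(X, y, v_i):
    """Closed-form w = (X'^T X')^{-1} X'^T y for one variant column v_i."""
    Xp = np.column_stack([np.ones(len(y)), X, v_i])   # X' = [1, X, V[:, i]]
    XtX_inv = np.linalg.inv(Xp.T @ Xp)
    return XtX_inv @ (Xp.T @ y), XtX_inv              # w and (X'^T X')^{-1}

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))       # toy covariates (e.g., ancestry components)
v = rng.integers(0, 3, size=50)    # toy genotype dosages for one variant
y = X @ [0.2, -0.1, 0.3] + 0.5 * v + rng.normal(size=50)
w, XtX_inv = variant_weights(X, y, v)
print(w[-1])  # weight of the variant; its P value is derived as shown next
```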
The P value for variant i is then obtained with $$P=2\cdot \mathrm{pnorm}\left(-\left|\frac{\mathbf{w}[f+2]}{\sqrt{\mathrm{MSE}(\mathbf{y},\mathbf{y}^{\prime})\cdot (\mathbf{X}^{\prime T}\mathbf{X}^{\prime})^{-1}[f+2;f+2]}}\right|\right),$$ where pnorm is the cumulative distribution function of the standard normal distribution, \(\mathbf{w}[f+2]\) is the weight corresponding to the variant, \(\mathrm{MSE}(\mathbf{y},\mathbf{y}^{\prime})\) is the mean-squared error obtained from the prediction \(\mathbf{y}^{\prime}\) computed with w, and \((\mathbf{X}^{\prime T}\mathbf{X}^{\prime})^{-1}[f+2;f+2]\) corresponds to the standard error of the variant weight. Although this computation has to be performed for each variant i, we remark that X is common to all variants. In order to compute \((\mathbf{X}^{T}\mathbf{X})^{-1}\) only once before adjusting it for each variant and thus obtain \((\mathbf{X}^{\prime T}\mathbf{X}^{\prime})^{-1}\), we rely on the Sherman–Morrison formula45 and the method presented in the report on cryptographic and privacy-preserving primitives (p. 52) of the WITDOM European project46. We describe this approach, i.e., FAMHE-GWAS, in Supplementary Fig. 2. Each DP \(i\) has a subset of \(p_i\) patients. For efficiency, the DPs are organized in a tree structure and one DP is chosen as the root of the tree, \(\mathrm{DP}_R\). We remark that, as any exchanged information is collectively encrypted, this does not have any security implications. In a CA, each DP encrypts (E()) its local result with the collective key, aggregates its children DPs' encrypted results with its encrypted local results, and sends the sum to its parent DP such that \(\mathrm{DP}_R\) obtains the encrypted result aggregated among all DPs. We recall here that with the homomorphic-encryption scheme used, vectors of values can be encrypted in one ciphertext and that any operation performed on a ciphertext is simultaneously performed on all vector elements, i.e., SIMD. We rely on this property to parallelize the operations at multiple levels: among the DPs, among the threads in each DP, and among the values in the ciphertexts. We rely on the GJ method43 to compute the inverse of the encrypted covariance matrix. We chose this algorithm as it requires only row operations, which can be efficiently performed with SIMD. The only operation that is not directly applicable in HE is the division, which we approximate with a Chebyshev polynomial. Note that we avoid any other division in the protocol by pushing them to the last step, which is executed by the querier Q after decryption. In Supplementary Fig. 2, we keep 1/c until decryption. In Supplementary Fig. 2, we describe how we further reduce the computation overhead by obtaining the covariates' weights \(\mathbf{w}^{\prime}\) with a lightweight federated gradient descent (FGD) and by incorporating the obtained covariates' contributions into the phenotype y, which becomes y″. To compute the P value, we then compute only one element of the inverse covariance matrix, \((\mathbf{X}^{\prime T}\mathbf{X}^{\prime})^{-1}[f+2;f+2]\), instead of the entire inverse. To perform the FGD, we follow the method described by Froelicher et al.24, without disclosing any intermediate values.
We describe in Supplementary Fig. 3 how the (main) values used in both protocols are packed to optimize the communication and the number of required operations (multiplications, rotations). We perform permutations, duplications, and rotations on cleartext data that are held by the DPs (indicated in orange in Supplementary Fig. 3); and we avoid, as much as possible, the operations on encrypted vectors. Note that rotations on ciphertexts are almost one order of magnitude slower than multiplications or additions and should be avoided when possible. As ciphertexts have to be aggregated among DPs, a tradeoff has to be found between computation cost (e.g., rotations) and data packing, as a smaller packing density would require the exchange of more ciphertexts. In both protocols, all operations for v variants are executed in parallel, due to the ciphertext packing (SIMD). For a 128-bit security level, the computations are performed simultaneously for 512 variants with FAMHE-GWAS and for 8192 with FAMHE-FastGWAS. These operations are further parallelized due to multithreading and to the distribution of the workload among the DPs. We highlight (in bold) the main steps and aggregated values in the protocol and note that DPs' local data are in cleartext, whereas all exchanged data are collectively encrypted (E()). Baseline computations with PLINK As explained in the "Results" section, we relied on the PLINK software to obtain our baseline results for (i) the Original approach, in which GWAS is computed on the entire centralized dataset, (ii) the Independent approach, in which each DP performs the GWAS on its own subset of the data, and (iii) the Meta-analysis, in which the DPs perform the GWAS on their local data before combining their results. For (i) and (ii), we relied on PLINK 2.0 and its linear regression-based association test (--glm option). For (iii), we relied on PLINK 1.9 and used the weighted-Z test approach to perform the meta-analysis. Experimental settings We implemented our solutions by building on top of Lattigo33, an open-source Go library for lattice-based cryptography, and Onet47, an open-source Go library for building decentralized systems. The communication between DPs is done through TCP, with secure channels (by using TLS). We evaluate our prototype on an emulated realistic network, with a bandwidth of 1 Gbps and a delay of 20 ms between every two nodes. We deploy our solution on 12 Linux machines with Intel Xeon E5-2680 v3 CPUs running at 2.5 GHz with 24 threads on 12 cores and 256 GB of RAM, on which we evenly distribute the DPs. We choose security parameters to always achieve a security level of 128 bits. Differentially private mechanism DiffP is a privacy-preserving approach, introduced by Dwork26, for reporting results on statistical datasets. This approach guarantees that a given randomized statistic, \(\mathcal{M}(\mathrm{DS})=R\), computed on a dataset DS, behaves similarly when computed on a neighbor dataset \(\mathrm{DS}^{\prime}\) that differs from DS in exactly one element. More formally, (ϵ, δ)-diffP48 is defined by \(\Pr\left[\mathcal{M}(\mathrm{DS})=R\right]\le \exp(\epsilon)\cdot \Pr\left[\mathcal{M}(\mathrm{DS}^{\prime})=R\right]+\delta\), where ϵ and δ are privacy parameters: the closer to 0 they are, the higher the privacy level is. (ϵ, δ)-diffP is often achieved by adding noise to the output of a function f(DS).
This noise can be drawn from the Laplace distribution with mean 0 and scale \(\frac{\Delta f}{\epsilon}\), where \(\Delta f\), the sensitivity of the original real-valued function f, is defined by \(\Delta f=\max_{\mathrm{DS},\mathrm{DS}^{\prime}}||f(\mathrm{DS})-f(\mathrm{DS}^{\prime})||_{1}\). Other mechanisms, e.g., relying on a Gaussian distribution, were also proposed26,49. As explained before, FAMHE can enable the participants to agree on a privacy level by choosing whether to yield exact or obfuscated (i.e., differentially private) results to the querier. We also note that our solution would then enable the obfuscation of only the final result, i.e., the noise can be added before the final decryption, and all the previous steps can be executed with exact values as no intermediate value is decrypted. This is a notable improvement with respect to existing federated-learning solutions, based on diffP38, in which the noise has to be added by each DP at each iteration of the training. In the solution by Kim et al.38, each DP perturbs its locally computed gradient such that the aggregated perturbation, obtained when the DPs aggregate (combine) their locally updated model, is ϵ-differentially private. This is achieved by having each DP generate and add a partial noise such that, when aggregated, the total noise follows the Laplace distribution. The noise magnitude is determined by the sensitivity of the computed function, and this sensitivity is similar for each DP output and for the aggregated final result. This means that, as the intermediate values remain encrypted in FAMHE, noise with the same magnitude can be added only once, on the final result, thus ensuring the same level of privacy with a lower distortion of the result. Further information on research design is available in the Nature Research Reporting Summary linked to this article. We replicated two existing medical studies, Samstein et al.30 and McLaren et al.31. The original data used by Samstein et al. and in our work are available at http://www.cbioportal.org/study/summary?id=tmb_mskcc_2018. The data used by McLaren et al. and in our work are protected and under the responsibility of the authors of the original study. Interested researchers should contact these authors if they wish to access the dataset. Our solution partially relies on open-source software and public libraries (i.e., the cryptographic library Lattigo33 and the decentralized systems library Onet47). Our code is currently not publicly available as its license does not allow for open-source redistribution. Pseudocode of the used algorithms and protocols is provided for completeness in the "Methods" section. Upon request sent to the corresponding author(s), we can provide binaries that, in combination with open-source resources, can be used for the sole purpose of verifying and reproducing the experiments in the manuscript. European Commission. The EU General Data Protection Regulation. https://eugdpr.org/ (2021). Sheller, M. J. et al. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Sci. Rep. 10, 1–12 (2020). Nasirigerdeh, R. et al. sPLINK: a federated, privacy-preserving tool as a robust alternative to meta-analysis in genome-wide association studies. Preprint at bioRxiv https://doi.org/10.1101/2020.06.05.136382 (2020). Warnat-Herresthal, S. et al. Swarm learning as a privacy-preserving machine learning approach for disease classification.
Nature 594, 265–270 (2021). Zhu, L. & Han, S. Deep leakage from gradients. In Federated Learning, 17–31 (Springer, 2020). Melis, L., Song, C., De Cristofaro, E. & Shmatikov, V. Exploiting unintended feature leakage in collaborative learning. In IEEE Symposium on Security and Privacy (SP), 691–706 (2019). Gaye, A. et al. DataSHIELD: taking the analysis to the data, not the data to the analysis. Int. J. Epidemiol. 43, 1929–1944 (2014). Moncada-Torres, A., Martin, F., Sieswerda, M., van Soest, J. & Geleijnse, G. VANTAGE6: an open source priVAcy preserviNg federaTed leArninG infrastructurE for Secure Insight eXchange. In AMIA Annual Symposium Proceedings, 870–877 (2020). NIH. All of Us Research Program. https://allofus.nih.gov/ (2021). Genomics England. 100,000 Genomes Project. https://www.genomicsengland.co.uk/ (2021). UK Biobank. Enabling scientific discoveries that improve human health. https://www.ukbiobank.ac.uk/ (2021). Scheibner, J. et al. Revolutionizing medical data sharing using advanced privacy enhancing technologies: technical, legal and ethical synthesis. J. Med. Internet Res. https://doi.org/10.2196/25120 (2021). Wang, Z. et al. Beyond inferring class representatives: user-level privacy leakage from federated learning. In The 38th Annual IEEE International Conference on Computer Communications (2019). Nasr, M., Shokri, R. & Houmansadr, A. Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. In IEEE Symposium on Security and Privacy (SP) (2019). Bonomi, L., Jiang, X. & Ohno-Machado, L. Protecting patient privacy in survival analyses. J. Am. Med. Inform. Assoc. 27, 366–375 (2020). Li, W. et al. Privacy-preserving federated brain tumour segmentation. In MLMI, Vol. 11861 (eds Suk, H.-I. et al.) (2019). Jagadeesh, K. A., Wu, D. J., Birgmeier, J. A., Boneh, D. & Bejerano, G. Deriving genomic diagnoses without revealing patient genomes. Science 357, 692–695 (2017). Cho, H., Wu, D. J. & Berger, B. Secure genome-wide association analysis using multiparty computation. Nat. Biotechnol. 36, 547–551 (2018). Hie, B., Cho, H. & Berger, B. Realizing private and practical pharmacological collaboration. Science 362, 347–350 (2018). Simmons, S., Sahinalp, C. & Berger, B. Enabling privacy-preserving GWASs in heterogeneous human populations. Cell Syst. 3, 54–61 (2016). Froelicher, D. et al. Unlynx: a decentralized system for privacy-conscious data sharing. Proceedings on Privacy Enhancing Technologies Symposium, 232–250 (2017). Raisaro, J. L. et al. MedCo: enabling secure and privacy-preserving exploration of distributed clinical and genomic data. In IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 16 (IEEE, 2018). Froelicher, D., Troncoso-Pastoriza, J. R., Sousa, J. S. & Hubaux, J. Drynx: decentralized, secure, verifiable system for statistical queries and machine learning on distributed datasets. IEEE TIFS https://doi.org/10.1109/TIFS.2020.2976612 (2020). Froelicher, D. et al. Scalable privacy-preserving distributed learning. In Proceedings on Privacy Enhancing Technologies Symposium, 323–347 (2021). Blatt, M., Gusev, A., Polyakov, Y. & Goldwasser, S. Secure large-scale genome-wide association studies using homomorphic encryption. Proc. Natl Acad. Sci. USA 117, 11608–11613 (2020). Dwork, C. et al. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9, 211–407 (2014). Jayaraman, B. & Evans, D.
Evaluating differentially private machine learning in practice. In USENIX Security (2019). Raisaro, J. et al. SCOR: a secure international informatics infrastructure to investigate COVID-19. J. Am. Med. Info. Assoc. 27, 1721–1726 (2020). Mouchet, C., Troncoso-pastoriza, J. R., Bossuat, J.-P. & Hubaux, J.-P. Multiparty Homomorphic Encryption from Ring-Learning-with-Errors. Proceedings on Privacy Enhancing Technologies Symposium, (2021). Samstein, R. M. et al. Tumor mutational load predicts survival after immunotherapy across multiple cancer types. Nat. Genet. 51, 202–206 (2019). McLaren, P. J. et al. Polymorphisms of large effect explain the majority of the host genetic contribution to variation of HIV-1 virus load. Proc. Natl Acad. Sci. USA 112, 14658–14663 (2015). Human Genome Privacy. iDash Competition. http://www.humangenomeprivacy.org/2020/ (2021). Laboratory for Data Security, EPFL. Lattigo: A Library for Lattice-based Homomorphic Encryption in Go. https://github.com/ldsec/lattigo (2021). Tierney, J. F., Stewart, L. A., Ghersi, D., Burdett, S. & Sydes, M. R. Practical methods for incorporating summary time-to-event data into meta-analysis. Trials 8, 16 (2007). Laird, N. & Olivier, D. Covariance analysis of censored survival data using log-linear analysis techniques. J. Am. Stat. Assoc. 76, 231–240 (1981). PLINK Software. Whole genome association analysis toolset. https://www.cog-genomics.org/plink/ (2020). Lu, Y., Zhou, T., Tian, Y., Zhu, S. & Li, J. Web-Based privacy-preserving multicenter medical data analysis tools via threshold homomorphic encryption: design and development study. J. Med. Internet Res. 22, e22555 (2020). Kim, M., Lee, J., Ohno-Machado, L. & Jiang, X. Secure and differentially private logistic regression for horizontally distributed data. IEEE Trans. Inf. Forensics Secur. 15, 695–710 (2020). Medco software. Collective protection of medical data. https://medco.epfl.ch/ (2021). Shamir, A. How to share a secret. Commun. ACM https://doi.org/10.1145/359168.359176, 612–613 (1979). Libert, B., Ling, S., Nguyen, K. & Wang, H. Lattice-based zero-knowledge arguments for integer relations. In CRYPTO (2018). Sav, S. et al. POSEIDON: Privacy-Preserving Federated Neural Network Learning. In Conference: Network and Distributed System Security Symposium (2021). Atkinson, K. E. An Introduction to Numerical Analysis (Wiley, 2008). Goel, M. K., Khanna, P., & Kishore, J. Understanding survival analysis: Kaplan-Meier estimate. Int. J. Ayurveda Res. 1, 274–278 (2010). Sherman, J. & Morrison, W. J. Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Ann. Math. Stat. 21, 124–127 (1950). WITDOM Project. WITDOM: empoWering prIvacy and securiTy in non-trusteD envirOnMents. https://cordis.europa.eu/project/id/644371/results (2021). DeDiS Laboratory, EPFL. Cothority network library. https://github.com/dedis/onet (2021). Dwork, C., McSherry, F., Nissim, K. & Smith, A. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, 265–284 (Springer, 2006). Ghosh, A., Roughgarden, T. & Sundararajan, M. Universally utility-maximizing privacy mechanisms. SIAM J. Comput. 41, 1673–1693 (2012). We would like to thank Apostolos Pyrgelis for providing valuable feedback and the DeDiS lab, led by Bryan Ford, for providing software support. This work was partially supported by grant #2017-201 of the Strategic Focal Area "Personalized Health and Related Technologies (PHRT)" of the ETH Domain. 
This work was also partially supported by NIH R01 HG010959 (to B.B.) and NIH DP5 OD029574 (to H.C.). Laboratory for Data Security, EPFL, Lausanne, Switzerland David Froelicher, Juan R. Troncoso-Pastoriza, Joao Sa Sousa & Jean-Pierre Hubaux Precision Medicine Unit, Lausanne University Hospital, Lausanne, Switzerland Jean Louis Raisaro & Jacques Fellay Data Science Group, Lausanne University Hospital, Lausanne, Switzerland Jean Louis Raisaro Precision Oncology Center, Lausanne University Hospital, Lausanne, Switzerland Michel A. Cuendet Broad Institute of MIT and Harvard, Cambridge, MA, USA Hyunghoon Cho & Bonnie Berger Computer Science and AI Laboratory, MIT, Cambridge, MA, USA Bonnie Berger Department of Mathematics, MIT, Cambridge, MA, USA School of Life Sciences, EPFL, Lausanne, Switzerland Jacques Fellay J.R.T.-P., J.L.R., and J.-P.H. conceived the study. D.F. and J.R.T.-P. developed the methods. D.F. and J.S.S. implemented the software and performed experiments, the results of which were validated by M.A.C. and J.F. H.C. and B.B. provided biological interpretation of the results and helped revise the manuscript. All authors contributed to the methodology and wrote the manuscript. Correspondence to Jean-Pierre Hubaux. J.R.T.-P. and J.-P.H. are co-founders of the start-up Tune Insight (https://tuneinsight.com). All authors declare no other competing interests. Peer review information Nature Communications thanks Jun Sakuma and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Froelicher, D., Troncoso-Pastoriza, J.R., Raisaro, J.L. et al. Truly privacy-preserving federated analytics for precision medicine with multiparty homomorphic encryption. Nat Commun 12, 5910 (2021). https://doi.org/10.1038/s41467-021-25972-y
How to deliver materials necessary for terraforming? A lifeless moon of a gas giant (which orbits a K-type star [4200K] at about 0.7 AU) is chosen to become home to a space colony. Colonists plan to transform it from a barren rock into a garden of Eden. Since this moon lacks water, nitrogen, and $CO_2$, they need to be mined in the asteroid belt and other gas giant moons and then delivered to the terraforming site. What is the most effective way to deliver these materials? The colonists are looking for a delivery method that would provide results (such as atmosphere and free-flowing water) within years or decades. If it is absolutely impossible they can go into suspended animation and wake up in shifts to monitor the progress. It would also be nice to avoid: changes in orbit or rotation of the moon; significant damage to the moon's surface; creation of a debris cloud around the moon (the colonists are strongly opposed to space littering); loss of already delivered materials (as seen in case of comet or asteroid bombardment). Technological level The colonists have access to the following technologies: fully automated and robotised asteroid mining; space travel at 1/10 of the speed of light; terraforming technologies (however, only one project has been completed successfully by the time of their departure); genetic engineering; suspended animation. Technologies that are envisioned by scientists of today but cannot be built because of technical difficulties (materials, money, political will) are fine. However, something like teleportation is not possible unless it can be explained by existing science. science-based space-colonization terraforming OlgaOlga $\begingroup$ By the looks of your conditions you're opposed to direct ballistic delivery $\endgroup$ – Separatrix Dec 26 '17 at 20:03 $\begingroup$ @Separatrix, if ballistic delivery is indeed the only feasible way to do it, I will live with it. But other methods will be preferrable. $\endgroup$ – Olga Dec 26 '17 at 20:06 $\begingroup$ One important read when considering terraforming projects is the wiki on approaches to terraforming Venus. It's gives good insight into various problems with such an undertaking. en.m.wikipedia.org/wiki/Terraforming_of_Venus $\endgroup$ – Stephan Dec 26 '17 at 20:19 $\begingroup$ since most of the material you want to deliver directly to the atmosphere bombardment really has an advantage, it vaporizes the material at the same time it delivers it. of course drops do not need to be random you can use the bombardment to sculpt the surface to suit your needs. $\endgroup$ – John Dec 26 '17 at 21:15 $\begingroup$ I was thinking with ridiculously convoluted gravity assists but the time frame doesn't permit that. How is can travel at 1/10 of the speed of light not the answer? Also, your "terraforming technologies" had better include knowing how to start a dead planet's dynamo. In the given time frame, that'd be more of a handwave then the engine. $\endgroup$ – Mazura Dec 26 '17 at 23:41 Provenance of terraforming materials. You mention getting things from the asteroid belt. There may be an easier way. Since you can move cargo fast (0.1c) you can afford to get the cargo from farther away. Asteroids are generally rocky with possibly some ices on top. Moons of Saturn are generally icy with some rocks in the middle. There are a lot of moons of Saturn (and the moons of Uranus and Neptune are probably good targets as well). Collectively, they have far more ammonia and water than you could ever use to terraform a planet. 
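A rough, hedged check of that transport argument: at 0.1c, even outer-system sources are only hours away in straight-line terms, so haul distance is not the bottleneck. The distances below are rough averages and the sketch ignores acceleration, deceleration, and orbital mechanics.

```python
# Back-of-the-envelope: straight-line one-way travel time at 0.1c from a few
# outer-system sources to a moon orbiting its star at 0.7 AU.
AU_M = 1.496e11      # meters per astronomical unit
C = 2.998e8          # speed of light, m/s
CRUISE = 0.1 * C

sources_au = {"asteroid belt": 2.7, "Jupiter": 5.2, "Saturn": 9.6, "Neptune": 30.1}
for name, a in sources_au.items():
    seconds = (a - 0.7) * AU_M / CRUISE
    print(f"{name:13s} ~{seconds / 3600:6.1f} h one way at 0.1c")
```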
So why not simply drag a few small to medium-sized moons of a gas giant into orbit around your planet and prepare to send them down? Something not mentioned is the need to refine the materials. If you want the proper elements to be added to your moon in a matter of decades, then you have to be careful about what you add. Like any good culinary creation, you must measure your ingredients carefully. Your ingredients are bits of and/or whole moons. So how do you measure them? You have to melt them. You can utilize fractional distillation to melt away the various compounds. If you slowly cook (heat) a comet, all the carbon monoxide will melt first (68 K), then methane (~91 K), ammonia (195 K), carbon dioxide (217 K) and finally water (273 K). All those temperatures are pretty far away from each other, so simply melt the ice ball slowly, and then separate the solid bits from the liquid at each step. Now you have a set of liquid or slushy balls in space. If you were smart, you would do this far from the sun, so the carbon monoxide and methane will refreeze for you before transport. You now have a bunch of ice balls of reasonably pure compounds ready to go smash into your planet! In the comments you say you want a planet with about 0.75 of Earth's radius and mass, and 0.7 of Earth's gravity. That doesn't work exactly, but going with some numbers that fit the bill more or less, let us assume your moon has radius 0.9 Earth's, density 0.8 Earth's, to get surface gravity 0.72 of Earth's. Mass ends up being 0.58 of Earth's. Since surface area is proportional to radius squared, we will need about 80% of the Earth's atmosphere, oceans, and biological matter. An atmosphere will need 20% oxygen and 80% inert gas; nitrogen is the most common inert gas and should do nicely. The requirements for our moon will be $3.3\times10^{18}$ kg of nitrogen and $2.1\times10^{17}$ kg of oxygen. The ocean will need $1.1\times10^{21}$ kg of water (though this could vary widely, depending on how wet you want the planet). Lastly, the biosphere will need at least $1\times10^{12}$ kg of carbon. To provide these ingredients, we can add three compounds primarily. Ammonia can be used to generate atmospheric nitrogen; carbon dioxide can be transformed into atmospheric oxygen; and water is just water. At the ratio of two ammonia per one diatomic nitrogen and one carbon dioxide per diatomic oxygen, our shopping list is roughly: $1\times10^{21}$ kg of water, $4\times10^{18}$ kg of ammonia, and $2\times10^{17}$ kg of carbon dioxide. The great thing about these ingredients is that they are three of the most common compounds in the outer solar system. They also provide plenty of surplus material for making a biosphere: carbon dioxide has extra carbon and ammonia has extra hydrogen. No need to add methane, there are plenty of fossil fuels to go around! How to not make a mess The next challenge is to not make too big of a mess when you deliver your materials. Here are the various factors you outlined. How not to significantly damage the moon's surface Without an atmosphere, your moon will likely have a surface covered in fine regolith similar to what covers Luna and Mars. If this surface is hit by impacts from space, the dust will end up mostly settled back into the surface. So from this perspective, there isn't too much damage done by hitting the planet with space snowballs; the holes will be filled by dust (relatively) soon after impact.
Lunar regolith has a density about 2/3 that of lunar surface rocks (and Earth rocks), so the holes will be filled with a material that will be reasonably solid.

Newton's depth approximation for impacts is $$D\approx L\frac{\rho_i}{\rho_p}$$ where $L$ is the length (or diameter, if spherical) of the projectile and $\rho_i$ and $\rho_p$ are the densities of the impactor and planet, respectively. Note that this approximation does not include the velocity of the impactor at all. Let us assume that the planet has a crust density similar to Earth's (2500 kg/m$^3$), while the delivered volatiles, such as CO$_2$, water, and ammonia, each have densities less than 1000 kg/m$^3$. Assuming we want to limit impact depth to 200 m so we don't make craters too large, we can throw objects up to 500 m in diameter at the surface without making too much of a mess.

How not to make a debris cloud

Putting stuff back into space will both anger your space-junk-OCD Chief Engineer and represent a loss of materials. We don't want to do that. How can we avoid it? First, we have to figure out the escape velocity of our planet. From this post, we see that radius and density are both proportional to surface gravity. As calculated above, we have radius 0.9 Earth's and density 0.8 Earth's, to get surface gravity 0.72 of Earth's. Mass ends up being 0.58 of Earth's. Escape velocity is calculated here as $\sqrt{2gr}$, where $g$ is gravity and $r$ is radius. Given the factors above, the escape velocity of your moon is 0.8 of Earth's, or about 9000 m/s. To ensure nothing goes into space, we will make the average ejecta velocity from our impact craters no more than 4000 m/s.

In this post, I perform calculations on the height of an ejecta plume. This model calculates ejection velocity as a function of distance from the impact site. We want the ejecta velocity at the edge of the impactor to be less than 4000 m/s. If you work out the equation, you find that the maximum ejecta speed is proportional only to the impact velocity, and not to the mass or radius of the impactor (although the density of the impactor is very important). Ultimately, the relationship is $$4000 \text{ m/s} = 0.1313\,v_i.$$ Thus, for 4000 m/s ejecta, the maximum impact speed must be about 30 km/s.

How not to eject volatile gases into space

Of the gases you are interested in, the two lightest and therefore most likely to escape are water (molar mass 18) and ammonia (molar mass 17). Therefore, we must figure out how to keep those gases on the planet upon impact. First, let's look at the ejecta plume from the last problem. Using basic kinematics, a particle (of ammonia) ejected at 4000 m/s will reach a height of about 1100 km. (Don't worry! I know that this is well into space, but without orbital velocity, it is coming back down!) The time it takes to get all the way up there is about 400 seconds, and the escape velocity at this height is about 8200 m/s. Using the calculations in the answers to this post, we can figure out how hot an ammonia particle must be at this height to escape the moon's gravity. A particle must reach about 40000 K to escape under these conditions. Ouch! Now, individual particles are able to escape because the molecular distribution of kinetic energy has some variance to it.
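Pulling the two delivery limits together, here is a minimal numeric sketch that uses only the approximations quoted above (Newton's depth estimate and the 0.1313 ejecta-speed coefficient); the density values are the assumed figures from this answer:

```python
# Size and speed limits for the delivered ice chunks, per the approximations above.
rho_impactor = 1000.0   # kg/m^3, icy volatiles (upper bound)
rho_crust    = 2500.0   # kg/m^3, assumed Earth-like crust
max_depth    = 200.0    # m, chosen crater-depth limit

# Newton's depth approximation: D ~ L * rho_i / rho_p  =>  L <= D * rho_p / rho_i
max_diameter = max_depth * rho_crust / rho_impactor
print(f"max chunk diameter: {max_diameter:.0f} m")          # 500 m

# Ejecta-speed relation quoted above: v_ejecta ~ 0.1313 * v_impact
v_ejecta_limit = 4000.0   # m/s, kept well under the ~9000 m/s escape velocity
v_impact_max = v_ejecta_limit / 0.1313
print(f"max impact speed:   {v_impact_max/1000:.0f} km/s")  # ~30 km/s
```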
Coming back to those escaping particles: given that the escape velocity at the top of the ejecta blast is still about the same as the last linked post's calculated escape velocity necessary to hold gases over geological time (8500 m/s at Earth's distance from the Sun), I think we can assume that very little of our gaseous ejecta will reach space.

How not to change the orbit and rotation of the moon

I had some more in-depth calculations here, but they are not really needed. As long as you have the technology to accelerate things to 0.1c, I assume you have sufficient space horsepower to aim your delivered payloads as you like. If that is the case, then you simply hit the moon from all directions, so the net force of the impacts is zero.

Summary

Find a suitable mid-size moon. Melt it. Separate the various compounds into chunks of no more than 250 m radius. Throw them at your planet at impact speeds of less than 30 km/s. Very little will escape into space. Profit!

kingledion

The Chief Engineer is very pleased with your attention to his desire to keep space nice and clean. He is also very impressed with your suggestions. He wonders if strategic melting of frozen materials can be accomplished with mirrors? – Olga Dec 27 '17 at 1:18

@Olga Mirrors would likely work too. I suggested solar-panel-powered lasers since they give you fine control over wavelength and heat delivered. You don't want to melt your moon all willy-nilly; you probably need to take years to heat it evenly and drive off the volatiles in just the right way so you can recover them. – kingledion Dec 27 '17 at 1:27

I would think that mining of the asteroid belt, either manned or automated, could be done to break the chunks up into pieces smaller than an SUV that could be launched at the moon. This would avoid any littering of rockets or other man-made materials. It would leave craters, but pieces this small wouldn't be as devastating as a full comet or asteroid. With it impacting the surface, it could help with dispersal somewhat, as well as creating friction heat to help bring up the temperature of a barren planetoid. Larger pieces could be used to make divots large enough to be a lake or reservoir, without the heavy machinery current methods require. Smaller pieces will avoid large blow-back out of the intended atmosphere. Heavily pounding the rocky surface will actually help pulverize it into more easily planted soil. There will likely need to be significant changes to the moon's surface for humans to live there, so why not do it with the pot-shots of delivering material before we move in? Running water will change the surface, as will plants and the new weather patterns. Also, adding mass in the form of air, water, etc. will change the orbit of the moon, so that is unavoidable to a certain extent. We have changed the orbit of the Earth by creating lakes with dams and other water reservoirs. An advantage of orbital bombardment is that it helps judge the level of available atmosphere. As the atmosphere forms, more and more friction will act on the debris. Once it gets near Earth density, most of the sub-SUV-sized debris will never even hit the surface. This friction has the advantage of further dispersing the O2, N2, H2O, and other materials/minerals you are likely to need on the surface and in the atmosphere. Using robotic miners would be faster than manned mining, but there could be a mixture of both, since the robots are likely to need maintenance.
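For a sense of scale on sub-SUV-sized pieces, here is a quick sketch comparing that approach with the ~500 m chunks from the first answer against the $1.1\times10^{21}$ kg water requirement; the ~2-tonne chunk mass is my own assumption, not a figure from either answer:

```python
# How many pieces does ~1.1e21 kg of water ice come to, at two delivery sizes?
import math

m_water = 1.1e21          # kg, ocean requirement from the first answer
rho_ice = 917.0           # kg/m^3

# Assumption: an "SUV-sized" chunk is roughly 2 tonnes of ice.
n_suv = m_water / 2e3
print(f"SUV-sized chunks: ~{n_suv:.1e}")       # ~5.5e17 pieces

# 500 m diameter ice sphere, the upper size limit derived above.
m_big = rho_ice * (4 / 3) * math.pi * 250.0**3
n_big = m_water / m_big
print(f"500 m chunks:     ~{n_big:.1e}")       # ~2e13 pieces
```

Either way the delivery campaign is enormous; the practical difference is mostly how much of each piece ablates in the growing atmosphere versus reaching the ground.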
There's always the need for people to feed their families, so there's likely the "adventure seeker" who's willing to spend their time earning hazard pay for asteroid mining. After all, robots are expensive (they keep breaking) and humans are comparatively cheap (since there aren't enough jobs on Earth). There's no need to render the materials to a refined state, just into small enough chunks. There could be a need to prevent certain volatile materials or substances from getting to the moon, but with the vast volume you are looking to fill, small pockets of even chlorine gas aren't likely to matter. And if you ship it with some sodium, it might even help, as in making salt.

There's a high likelihood of needing some sort of genetic modification of the micro- and macro-biological elements of the first stage of plants. The plants would need to be adapted to that exact environment. Not all plants can deal with the rocky, low-CO2, low-O2, low-temperature, low-gravity, low-moisture environment you are talking about. These would likely need to be high-yield plants and microbes that would output high levels of O2, N2, and lots of other things to be able to create an atmosphere in even 100 years. This flora would also need to be able to break into the rocky surface to get the minerals it needs. It may also need to be highly susceptible to a specific chemical or spray that would kill the fast-spreading biome, so that more Earth-like, tame plants could be brought in without fear of being killed off by the original, planet-evolving life forms.

Even 1/10th of the speed of light is really fast. This would allow us to go from the Earth to the asteroid belt in hours or days, rather than the current months, so a manned expedition is well within range at this speed. We would just need to make sure that we don't send any material into the moon at that speed. You could, however, have a large transport that collects from the miners, then shoots over to the moon, slows down to open its doors to offload/bombard the planet with the small fragments, then returns to the collection point. At 1/10th c, this could potentially be done at many points in the asteroid belt with nearly constant delivery to the moon.

The Martian Way, by Isaac Asimov, did something slightly similar. It is about a Mars colony that has a shortage of water and asks the Earth to supply it. An Earth politician advocates against giving them more, citing a shortage of supply, so the Mars colony finds its own solution. They send out a group in a large rocket to find a large, mostly-ice asteroid to bring back. They end up embedding the rocket in the asteroid and using it as a source of fuel to get home. They end up with more than enough water for themselves, but have to expend a sizable portion of it to get it there and land it, rather than just crash it. https://en.wikipedia.org/wiki/The_Martian_Way

computercarguy

Large scale projects like this need to consider the economics of moving all that material around the Solar System. You will need to apply energy to move it from whatever orbits it is currently in and then, since you are opposed to ballistic impact, more energy to match the orbital speed of the target and deliver it at minimal speed. Depending on where the materials are in relation to the target, you have several choices. If you are farther from the local sun than the material source, you can use high-performance solar sails to tow the materials into the appropriate orbits.
The sail can accelerate to the target planet, then "tack" by turning the thrust vector against the direction of travel to match the orbital speed.

[Figures: solar sail accelerating toward the target; solar sail decelerating at the target]

While the usual image of solar sails is of vast, slow-moving devices, high-performance sails with accelerations of 1 mm/s^2 can move across the Solar System at impressive speeds; a one-way trip from Earth to Pluto would only take about 3 years (although that is a flypast). The real key is to set up a "pipeline" and send materials in a steady stream. While it may take 3 years for the first "package" to arrive, once the pipeline is filled, there is a steady stream of materials on the way. K. Eric Drexler pioneered the idea of thin-film solar sails as far back as the 1970s.

Using systems of mirrors at the target to reflect sunlight onto fast-moving solar sails and help slow them down solves two issues: not only do you have finer control of incoming sails, but when you are not controlling sails you can also direct the solar energy to the surface, to assist in liquefying solids or turning liquids into gases. (An extreme case would be to focus solar energy onto the surface of Mars and boil oxygen out of the iron oxide there. This is obviously energy intensive and inefficient, but with sufficient energy you can do almost anything.)

Looking the other way, you could set up continent-sized mirrors, or arrays of mirrors, to accelerate solar sails from the far reaches of the Solar System and send cut-up pieces of comets back to the inner Solar System for your terraforming project. Given the weaker sunlight and vast distances, you might be looking at a decade before the first deliveries from the "pipeline" arrive, but once again, once the pipeline is filled, you have a steady stream of deliveries.

Without knowing important details like the actual distances between the supply sources and targets, orbital velocities and so on, this answer is hand-waved, but the ever-useful Atomic Rockets site has a lot of relevant information and equations to work with, so you can calculate delivery times, velocity changes, etc.

Thucydides

I am still calculating distances, orbits, and velocities. So, unfortunately, I am unable to provide more detailed information at this time. However, your answer is incredibly helpful. – Olga Dec 27 '17 at 1:26
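As a sanity check on the "3 years to Pluto" figure, here is a minimal sketch assuming a constant 1 mm/s^2 acceleration along a straight line — a crude flypast estimate that ignores the falling light pressure far from the Sun and any deceleration phase:

```python
# Constant-acceleration flypast: d = a*t^2/2  =>  t = sqrt(2*d/a)
import math

AU = 1.496e11        # metres
a = 1e-3             # m/s^2, "high performance" sail acceleration
d = 39 * AU          # rough Earth-Pluto distance

t = math.sqrt(2 * d / a)
print(f"trip time:        ~{t / 3.15e7:.1f} years")   # ~3.4 years
print(f"speed at flypast: ~{a * t / 1000:.0f} km/s")  # ~110 km/s
```

Once the pipeline is running, the per-package trip time stops mattering much; throughput is set by how often packages are launched, not by how long each one is in transit.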
Magnetic Flux Density Formula Solenoid Consider dl be the small current carrying element at point c at a distance r from point p. Faraday's law of induction, integral and differential form. The equation above will give us this flux density. the change in the slope of the graph. Θ = Angle between the magnetic field and normal to the surface. Measure total flux in industrial and measurement system settings. Solenoids and Toroids. 4 m, radius = 3 cm, and current = 10 A. o o o o o o o o x x x x x x x x. Describe and sketch flux patterns due to a long straight Wire. Since the Magnet-ic Force produced by the Solenoid is a function of the Flux Density, the Magnetic Force will have a propor-tionate hysteresis loop as the Flux Density, B. Previous (Denotation and connotation). The magnetic field intensity on the axis of a long solenoid of length / and internal diameter d, where l >>d, having N, turns and carrying a current i, is: H=1 2. o Fick's first law - The equation relating the flux of atoms by diffusion to the diffusion coefficient and the concentration gradient. Compare your results with theoretical value which is µ 0 =1. 25 cm 2, if a magnetic field crosses perpendicular to it, and has B= 2 T. When produced in a full B/H curve, it is apparent that the direction in which H is applied affects the graph. Tesla/m2 Sources of Magnetism Solenoid: Produces lines of flux as shown (in blue). l=magnetic path length cm along with u = B/H (u=125) B can be solved. 23 Jan 2019 242 – Solenoid Valve Attractive Force Analysis Taking into Account Movable Part Motion. For only the portion of the surface with radius contributes to the flux. 25663706212. although the frequency of oscillations in magnetic flux. The maximum flux density [B. If a loop of wire with an area A is in a magnetic field B, the magnetic flux is given by:. Flux density in an air coil. The area around a magnet within which magnetic force is exerted, is called a magnetic field. The magnetic field inside the solenoid is given to be 0. where is the induced electric field and is the magnetic flux inside the solenoid. The magnetic energy density at some point in space is given by Equation 28-20: 0 2 m 2μ B u =. 16 people chose this as the best definition of magnetic-flux-density: The amount of magnetic fl See the dictionary meaning, pronunciation, and sentence examples. 5), ( ) 2 0 B 2 0 2 0 2 0 B 2 1 V U u B V 2 1 n B n V 2 1 U µ µ µ µ = = = = 32. BL4 Cos θ4=0. i have developed a solenoid coil which is working good and obtained results are excellent but i need to calculate applied potential with respect to flux and flux density, magnetic force etc. Observing Induced EMF And Changing Magnetic Flux Using A Solenoid:-Procedure:-Make a solenoid 8 cm in length and measure the number of turns given to it. Start the motor by providing power which in turn rotates the disk. Outside the solenoid, the magnetic field is zero. Magnetic flux. OLAB is an italian company that produce solenoid valves, pumps and fitting made of brass, stainless steel and plastic for the fluid control. (a) The loop is perpendicular to the field; hence, η= 0, and Φ= BA. The magnetic field within a solenoid depends upon the current and density of turns. Figure 6-8 shows the magnetic flux density between the coils. Solenoid Force Equation. An important point is that at any location, the magnetic flux density B is always proportional to field intensity H. % Field distribution is calculated on XZ plane @ origin (Y=0). 
In this guide we explain water density, provide a chart you can use to find the density of water at different temperatures, and explain three different ways to calculate density. com, a free online dictionary with pronunciation, synonyms and translation. AbstractThe magnetically impelled arc butt (MIAB) welding process uses a rotating arc as its heat source and is known as an efficient method for pipe butt welding. To calculate reluctance:. Each spin can be thought of as a small dipole or solenoid channeling magnetic flux into and out of a tetrahedron. The field can be visualized in terms of magnetic flux lines (similar to the lines of force of an electric field). Stacking multiple loops also concentrates the field even more; this arrangement is known as a solenoid. A solenoid ( / ˈ s oʊ l ə n ɔɪ d / , [1] from the French solénoïde , a modern coinage based on Greek σωληνοειδής sōlēnoeidḗs 'pipe-shaped' [2] ) is a coil wound into a tightly packed helix. From equation 1, we know that the magnitude of the magnetic field at the center of a solenoid is given by B sol = μ o nI. The magnetic flux density, denoted by the symbol B, is a vector quantity. Magnetic Speed Sensors. Inductance Latent Heat Length Linear Charge Density Linear Current Density Luminance Magnetic Flux Magnetic Flux Density Magnetomotive Force Mass Mass Flux Density Moment of Inertia Numbers Permeability Power Pressure Radiation - Absorbed Dose Radiation - Absorb. Each atom has electrons, particles that carry electric charges. The probability density function (PDF) of a random variable, X, allows you to calculate the probability of an event, as follows: For continuous distributions, the probability that X has values in an interval (a, b) is precisely the area under its PDF in the interval (a, b). watt per square meter. The unit of the magnetic flux density is the tesla (T). An amorphous soft magnetic alloy of the formula (Fe1-αTMα)100-w-x-y-zPwBxLySiz TipCqMnrCus, wherein TM is Co or Ni; L is Al, Cr, Zr, Mo or Nb; 0≦α≦0. Jump to: Rock and Mineral density | Rock and mineral specific gravity You can download the questions (Acrobat (PDF) 25kB Jul24 09) if you would like To determine the density you need the volume and the mass since. Note that the curve in the figure is a hysteresis curve (alternating flux), not a magnetization curve. If the magnetic flux density at the centre of the solenoid is 4. Determine the magnetic flux through the loop when its plane makes the following with the B field : 0°, 30 °, 60 ° and 90 ° 4. The magnetic field intensity on the axis of a long solenoid of length / and internal diameter d, where l >>d, having N, turns and carrying a current i, is: H=1 2. current density. Equations and are the unique solutions (given the arbitrary choice of gauge) to the field equations -: they specify the magnetic vector and electric scalar potentials generated by a set of stationary charges, of charge density , and a set of steady currents, of current density. One of Maxwell's equations, Ampère's law, relates the curl of the magnetic field to the current density and is particularly useful for current distributions with high degrees of symmetry. The magnetic induction intensity at any point of finite solenoid was deduced based on Biot-Savart law and the elliptic integral is accomplished. If the current in the solenoid is I = amperes. Integrated Circuits. A solenoid is a coil wound into a tightly packed helix. The saturation point is the flux density at which the material can not contain any more magnetic flux. 
For a solenoid of radius r = m with N = turns, the turn density is n=N/(2πr)= turns/m. Density dependent polarization effect Shielding of electrical field far from particle path; effectively By measuring the particle momentum (deflection in a magnetic field) and the energy loss one gets the mass A charged particle approaching a boundary. Hysteresis curves show the relationship between a magnetizing force (H) and the resultant magnetic induction (B). Other articles where Magnetic flux is discussed: electromagnetism: Faraday's law of induction: …found that (1) a changing magnetic field in a circuit induces an electromotive force in the circuit; and (2) the magnitude of the electromotive force equals the rate at which the flux of the magnetic field through the circuit changes. The equation for hysteresis loss is given as: P b = η * B max n * f * V. As density of any quantity is related with volume, hence by increasing the cross section (i. When the temperature changes from either greater or less than 4 degrees, the. The density of these field lines shows the strength of the corresponding field. One line of. This is usually much greater than the field created by the current alone. (3) Consider a circular coil C with a radius b (b > a) as shown in Fig. It is fairly accurate any time the distance to either end of the solenoid is large compared with the radius. To find the volume, use the. (a) The flux at t= 0. Critical Products & Services. 01 T (calculated by described method, quasi-saturation), and 3 T (oversaturation). 0-mm diameter, and 12. Magnetic Sensors and Reed Switches. Flux density. f = frequency of magnetic reversals per second (Hz) V = volume of. 156 x 25 mm. The magnetic field inside the solenoid is given to be 0. It is made up of 80% nickel, 15% iron, and a balance of other metals depending upon the particular formula. It is apparent that the initial permeability is dependent on the stress state of the core. It is the flux per unit area perpendicular to the field. The probability density function (PDF) of a random variable, X, allows you to calculate the probability of an event, as follows: For continuous distributions, the probability that X has values in an interval (a, b) is precisely the area under its PDF in the interval (a, b). and the relative permeability of the core is k = , then the magnetic field at the center of the solenoid is. 1 - Flux, flux density and. This calculator is useful in matching a core size to the required power. us20060144035a1 us11/027,969 us2796905a us2006144035a1 us 20060144035 a1 us20060144035 a1 us 20060144035a1 us 2796905 a us2796905 a us 2796905a us 2006144035 a1 us2006144035 a1 us 2006144035a1. (b) The loop is parallel to the field; therefore, η= 90° and Φ= 0. The induction B is linked to the field by the permeability of the medium through which the field penetrates. To graph magnetic field variation with position along the line crossing the field. 23 Jan 2019 242 – Solenoid Valve Attractive Force Analysis Taking into Account Movable Part Motion. Outside the solenoid, the magnetic field is zero. What is the magnetic induction at the centre when it carries a current of 5 A. Other articles where Magnetic flux is discussed: electromagnetism: Faraday's law of induction: …found that (1) a changing magnetic field in a circuit induces an electromotive force in the circuit; and (2) the magnitude of the electromotive force equals the rate at which the flux of the magnetic field through the circuit changes. 
The relative measurement of magnetic properties includes magnetic field strength, magnetic flux and magnetic moment. l=magnetic path length cm along with u = B/H (u=125) B can be solved. For generation of extremely strong magnetic fields the electromagnet in form of the Bitter coil is used. Therefore we consider the magnetic flux density in a long coil, which is constant, so that equation (1) simplifies to the following relation. The standard deviation formula is: s. Flux Density (B) is the number of webers or of lines of induction per unit area, the area being taken at right angles to the direction of the flux. A circular solenoid contains 100 turns of wire and carries a current of 2 amps. A hysteresis loop shows the relationship between the induced magnetic flux density (B) and the magnetizing force (H). Thus, the energy stored in a solenoid or the magnetic energy density times volume is equivalent to With the substitution of (Figure) , this becomes Although derived for a special case, this equation gives the energy stored in the magnetic field of any inductor. It's amazing how much is based around the magnetic flux of the planet and depends on it staying stable in order to work. The average density of an object equals its total mass divided by its total volume. Predict the direction of the force on a charge moving in a magnetic field. ¾ Lower ion energies, however, result in the lower etch rates and reduced anisotropy!. The magnetic flux formula is given by, Where, B = Magnetic field, A = Surface area and. The magnetic field outside the solenoid has the same shape as the field around a bar magnet. The symbol for magnetic flux is phi ( Φ). Vocational Training. The 2nd half of the solenoid (ℓ 2 = ½ ℓ) is filled with air. As density of any quantity is related with volume, hence by increasing the cross section (i. The unit of the magnetic flux density is the tesla (T). Other articles where Magnetic flux is discussed: electromagnetism: Faraday's law of induction: …found that (1) a changing magnetic field in a circuit induces an electromotive force in the circuit; and (2) the magnitude of the electromotive force equals the rate at which the flux of the magnetic field through the circuit changes. 4 Magnetic Flux Density 4 1. When B < Brp, the OUT pin go into "off " state. Given: Relative permeability of air and coil 1 Current density in coil 1e6 Amp/m2 The B-H curve for the core and plunger H (A/m) 460 640 720 890 1280 1900 3400 6000. Relative permeability. C) does not depend on the area involved. Ø There are a variety of boundary conditions available for the electrons: - Wall which includes the effects of : × Secondary electron emission × Thermionic emission × Electron reflection - Flux which allows you to specify an arbitrary influx for the electron density and electron energy density. As ab and cd of the coil abcd cut through the lines of magnetic flux, electric current is produced in accordance with equation 2) above. Diagram by Geek3. you really know how to express this subject but have failed to get the meaning of magnetic flux density. The magnetic field lines inside the toroid are concentric circles. Solenoid Force Equation. be useful for extracting eective moments from graphs of χmT against T. Then, we can find linear density or planar density. State Lenz's law. 2 Inductance of Solenoid 45 2. the critical state model, allows one to predict the magnetic behavior of a high field superconductor in terms of a simple empirical parameter. 
Toll Free: 1-800-421-6692 • Fax: 1-310-390-4357 • Open Mon-Fri, 9-5 Pacific Standard Time (-8 GMT). lating the static magnetic flux density from three coils in the x-y plane as shown in Fig. Magnetic field intensity H flux density B THE JARGON. If the solenoid has 40 turns per cm length, calculate the current flowing through the solenoid. Abstract: An electromagnetic solenoid actuator comprises a. The first equation is really Faraday's Law of Induction. A long solenoid has a magnetic flux density of 0. B 1 = `mu_0N_1/lI_1` -----> (1) The magnetic flux linked with each turn of S 2 is B 1 A. Run the "solenoid. A closed circuit condition exists when the external flux path of a permanent magnet is confined with high permeability material. What is the new magnetic flux through the solenoid due to a current I in the loop? 2. 4 T for the solenoid, depending on aspect. • Recall Electric Flux: E = EAcos • Magnetic Flux = BAcos Three ways to change the Magnetic FluX: 1. It is fairly accurate any time the distance to either end of the solenoid is large compared with the radius. Electrical & electronic units of electric current, voltage, power, resistance, capacitance, inductance, electric charge, electric field, magnetic flux, frequency. 156 x 25 mm. It is often referred to as the B-H loop. In text, it is usually depicted as a cluster of vectors attached to a geometrically abstract surface. When the density is important; The Biot-Savart equation; Biot-Savart solves a current loop; The field in a solenoid [↑ Top of page] When the density is important. Thus, the magnetic flux through a circular path whose radius r is greater than R, the solenoid radius, is The induced field is tangent to this path, and because of the cylindrical symmetry of the system, its magnitude is constant on the path. Serway Chapter 31 Problem 20P. Magnetic flux / magnetischer Fluss weber: Wb = V s = m 2 kg/s 2 A Magnetic flux density, magnetic induction / Magnetische Induktion tesla: T = Wb/m 2 = kg/s 2 A Inductance / Induktivität henry: H = Wb/A = m 2 kg/s 2 A 2 Luminous flux / Lichtstrom lumen: lm = cd sr Illuminance / Beleuchtungsstärke lux: lx = lm/m 2 = cd sr/m 2 Activity. We can use the equation. So, most of the time, it really doesn't matter whether too much whether we're measuring B, the magnetic flux density, in microteslas, or H, the magnetic field strength, in amps per metre. Predict the direction of the force on a charge moving in a magnetic field. The symbol for magnetic flux is phi ( Φ). The magnetic flux density can be obtained by multiplying the B with cross-sectional area A, we get, Φ = B xA = μ₀ N x i x A / l…. At a point at the end of the solenoid, 18 Magnetic Flux. Mutual-, self induction. The number of turns per unit length is \[n = \frac{300 \, turns}{0. The magnet is held stationary to the solenoid. If the solenoid carries a current of 0. Mass Magnetic Susceptibility. 798⋅10-3 T and 96. The normal to the coil is at an angle of 60° to the magnetic field, as shown in the diagram. Energy density Î Ï. The difference in maximum magnetic flux density can also be due to the different shape of the generated hysteresis B-H curve by Ansys Electronics. also read similar questions: Current flowing through a long solenoid is varied. To graph magnetic field variation with position along the line crossing the field. (2) The magnetic flux density B outside the solenoid coil (r > a) is given by B O. 6 Suggest and explain two ways of varying the magnitude of the flux density in the solenoid. 
3 Magnetic Flux and Flux Density Magnetic flux is the amount of magnetic filed produced by a magnetic source. If the inner core of a long solenoid coil with N number of turns per metre length is hollow, "air cored", then the magnetic induction within its core will be given as:. F = (BmAm)/(BgAg) F Magnetomotive force, (magnetic potential difference), is the line integral of the field strength, H, between any two points, p 1 and p 2. 0-cm length. Energy Density of a Magnetic Field in a Solenoid. magnetic field strength. Inside the solenoid the lines of flux are close together, parallel and equally spaced. Density can be calculated on the basis of bunker specifications. If the current in the solenoid is I = amperes. A closely wound rectangular coil abcd of n turns rotates about an axis OO which is perpendicular to a uniform magnetic field of flux density B. Magnetic flux. It is wound with 1000 turns and carries a current of 2A. Magnetic Flux Wildhelm Eduard Weber, German physicist, 1804-1891 Nikola Tesla, Croatian Engineer, 1856-1943 The unit of flux is the weber. To find the volume, use the. The high stresses due to Lorentz forces in the coil is one principal constraint. Magnetic field - a state of space described mathematically, with a direction and a magnitude, where electric currents and magnetic materials influence each other. One point to note, though, is that flux density is limited by saturation to below about 1. 2566x10-6 H/m. Typical magnetic field strengths for various parts of the Sun. surface density. Allegro is a market leader for rotational speed and direction detection Hall-effect sensor ICs. Of primary concern, however, is the magnetomotive force needed to establish a certain flux density, B in a unit length of the magnetic circuit. Flux density. Define magnetic flux and the weber. As the flux oscillates across the pole faces, so also does the neutral commulating zone oscillate. The magnetic field within a solenoid is very nearly uniform; The direction of the magnetic field can be obtained by the Corkscrew rule; The magnetic flux density at the ends of a solenoid is half that at the centre. This alternative description offers some actionable insight, as we shall point out at the end of this section. The detailed structural study of the crystal often enables to do chemical tasks, for instance defining or specifying of a chemical formula, bond type. The formula for the field inside the solenoid is B = m 0 I N / L This formula can be accepted on faith; or it can be derived using Ampere's law as follows. An amorphous soft magnetic alloy of the formula (Fe1-αTMα)100-w-x-y-zPwBxLySiz TipCqMnrCus, wherein TM is Co or Ni; L is Al, Cr, Zr, Mo or Nb; 0≦α≦0. The following equation can be used to calculate a magnetic flux. H = the field strength measured in Amperes per metre (Am-1). The permeability is most often denoted by the greek symbol mu (). Given: Number of turns = N = 500 x 2 = 1000, Length of solenoid = l = (π/2) m, Magnetic induction, Current through solenoid = i = 5 A. In the case illustrated, only the core within the coil is iron. Examples of how to use "flux density" in a sentence from the Cambridge Dictionary Labs. Leakage flux is the flux, which follows a leaking path, as shown in Fig. Magnetic fields are an intrinsic property of some materials, most notably permanent magnets. 5, depending on material. 
THE COIL FLUX: The magnetic flux produced by the coil, or solenoid, is composed of three parts: the flux in the air gap (the space between the coil and workpiece), the flux in the workpiece, and the flux in the coil (see Figure 2). Magnetic Fields and Lines. Density, Specific Gravity. Recall the total flux linkage is just the product of the magnetic flux and. The magnetic flux density at a given point (δB →), produced by a current element is given by the Biot- Savart equation. Write down the magnetic field inside the ideal solenoid. Fiecarui punct din campul magnetic ii corespunde o marime vectoriala numita inductie magnetica. 14 show a detailed surface plot of flux density and flux lines within the pole region at different armature positions. Magnetic Flux Density – Lines of flux per unit area, usually measured in Gauss (C. The magnetic field inside the solenoid is given to be 0. The GP400 II cartridge has a spherical diamond. It's amazing how much is based around the magnetic flux of the planet and depends on it staying stable in order to work. Find the magnetic flux Φ B through one single turn of the solenoid in #4. see the formula and the numbers. Therefore, the magnetic field inside and near the middle of the solenoid is given by Equation \ref{12. The Betatron is described as an example of Faraday's Law. formula unit, χSmI is measured in These numerical relationships can. Sorry ,this product is not sell anymore. The International System (SI) unit of field "magnetic flux density" is the tesla (T). The inclusion of a ferromagnetic core, such as iron, increases the magnitude of the magnetic flux density in the solenoid. Electric Flux. see the formula and the numbers. The calculator can use any two of the values Density is defined as mass per unit volume. For the purpose of calculations, the iron is supposed to carry whole of the flux throughout its entire length. AC magnetic flux density amplitude B m up to 1 T can be generated by an AC electromagnet with an air gap in the low frequency range (or up to 10 kHz for generating lower B m value). A superposition principle was utilized to calculate the magnetic field distribution of a solenoid of finite length with infinitely thin walls wound with a finite helical angle. surface density. Magnetic Flux Density Formula Solenoid. If the flux changes: indicate whether it is increasing or decreasing (and in which direction). Water has its maximum density of 1g/cm3 at 4 degrees Celsius. Density dependent polarization effect Shielding of electrical field far from particle path; effectively By measuring the particle momentum (deflection in a magnetic field) and the energy loss one gets the mass A charged particle approaching a boundary. Effects of core combination on AL value Stresses in the core affect not only the mechanical but also the magnetic properties. The second step is to optimize the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇ Bz and the third step is to remove a random noise component from the recovered ∇ Bz by solving an elliptic partial differential equation in a region of interest. 25 mT, f = 10 kHz, T = 100 °C). \] The magnetic field produced inside the solenoid is. f is induced. 
Equations and are the unique solutions (given the arbitrary choice of gauge) to the field equations -: they specify the magnetic vector and electric scalar potentials generated by a set of stationary charges, of charge density , and a set of steady currents, of current density. 4 m, radius = 3 cm, and current = 10 A. (a) The loop is perpendicular to the field; hence, η= 0, and Φ= BA. Each output is latched until B is lower than release point (Brp), and then DO、DOB transfer each state. The saturation point is the flux density at which the material can not contain any more magnetic flux. In addition, you can see the charge signs of the two capacitor plates and arrows for the (conventional) current direction. magnetic flux through the area enclosed by a circuit varies with time because of time-varying currents in nearby circuits The current in coil 1 with N1 turns sets up a magnetic field some of which pass through coil 2 Coil 2 has N2 turns M12 = mutual inductance of coil 2 with respect to coil 1 If I1 varies with time Similarly for I2 55. Flux density The flux density is the flux divided by the cross sectional area of the magnetic conductor that the flux is transported in. the magnetic flux density of a magnetic field using a current balance. Problem 23-15 The area of a 120-turn coil / loop oriented its plane perpendicular to a 0. Toll Free: 1-800-421-6692 • Fax: 1-310-390-4357 • Open Mon-Fri, 9-5 Pacific Standard Time (-8 GMT). The detailed structural study of the crystal often enables to do chemical tasks, for instance defining or specifying of a chemical formula, bond type. Cobalt (Co-49%) – Iron (Fe-49%)–Vanadium (V-2%) together makes a soft magnetic alloy with the highest flux density of any strip core alloy, making it ideal for use in tape cores and magnetic cores in many innovative technologies. density times gravitational acceleration times height. Integrated Circuits. The magnetic field within a solenoid depends upon the current and density of turns. A similar application of the magnetic field distribution created by Helmholtz coils is found in a magnetic bottle that can temporarily trap charged particles. The two ends of a thin flux tube do not qualify as monopoles, because the return flux through the cross-section of the tube balances exactly the nonzero flux traversing the rest of any closed surface enclosing one pole (but. 0 T is generated in the Alcator fusion experiment at MIT. Therefore, the above equation (1) and (2) becomes. When a current passes through it, it creates a nearly uniform magnetic field inside. The magnetic field line representation and magnetic flux density of stage-3 damping system is shown in Fig. A hysteresis loop shows the relationship between the induced magnetic flux density (B) and the magnetizing force (H). Derivative of magnetic energy with respect to movement gives the force. The magnetic field intensity on the axis of a long solenoid of length / and internal diameter d, where l >>d, having N, turns and carrying a current i, is: H=1 2. Figure 6-8 shows the magnetic flux density between the coils. In actual magnetic materials, the flux does not drop to zero when the mmf returns to zero, but there is a remanent flux. The second step is to optimize the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇ Bz and the third step is to remove a random noise component from the recovered ∇ Bz by solving an elliptic partial differential equation in a region of interest. 
Delta B would be from the +/-100mA or 200mApp. Magnetism is caused by the motion of electric charges. magnetic flux density when the material reaches its limit with respect to the number of flux lines per unit area it can efficiently conduct. (𝑡)=𝜇 𝜇 (𝑡) Where B is the magnetic flux density(∅/ ), 𝜇 is the permeability of the material, 𝜇 is the permeability of air and H is the magnetic field Intensity. Generalization of loop theorem. 55 s, which is about −3 Wb. A 20cm long solenoid has a 1. File > Open Examples > Maxweell 2D > Solenoid. Magnetic field produced by a solenoid is constant inside the solenoid and parallel to the axis of it. the critical state model, allows one to predict the magnetic behavior of a high field superconductor in terms of a simple empirical parameter. Then, we can find linear density or planar density. The output state is held until a magnetic flux density reversal falls below Brp, causing Output to be turned off Magnetic flux density applied on the branded side of the package which turns the output driver ON (VOUT = VDSon). Using the vector potential A obtained from Equation (2), the magnetic flux density B, the magnetic field intensity H, the magnetic field gradient dBI dr, and the magnetic potential energy W were computed. Density dependent polarization effect Shielding of electrical field far from particle path; effectively By measuring the particle momentum (deflection in a magnetic field) and the energy loss one gets the mass A charged particle approaching a boundary. Due to formula complexity, secondary substitution is necessary. google solenoid+lock+arduino. Note the linearity of this equation. It is found that reactions with CO, O2, and O transform H3(+) into other molecular ions. When the density is important; The Biot-Savart equation; Biot-Savart solves a current loop; The field in a solenoid [↑ Top of page] When the density is important. The geometric shapes of the magnetic flux lines produced by moving charge carriers (electric current) are similar to the shapes of the flux lines in an electrostatic field. HDPE - High Density Polyethylene. " Example: Problem 7. , "Explicit Solution For a Two-Phase Fractional Stefan Problem With a Heat Flux Condition At the Fixed Face", Comput. 00 x 108 m/s. What does this tell you? For most of the length of the solenoid the flux density is constant. 4 Magnetic Flux Density 4 1. In reality there will always be some loss due to leakage and position, so the magnetic coupling between the two coils can never reach or exceed 100%, but can become very close to this value in some special inductive coils. The detailed structural study of the crystal often enables to do chemical tasks, for instance defining or specifying of a chemical formula, bond type. (a) The loop is perpendicular to the field; hence, η= 0, and Φ= BA. Campul magnetic este un camp vectorial care este descris cu ajutorul liniilor de camp. 200mA is put into the B=uH equation. Theory : Error or uncertainties could be caused by limitation of the measuring instruments, nature of. B can be represented in field lines that always close on themselves, which explains why magnetic field across a closed surface vanishes, div B=0. The commonly used formula to determine the density of an object is ρ = m/V, ρ (rho) represents density, m represents mass, and V represents volume. Flux Density. This question is okay but at the end it says; "It is apparent B =0 for r<(b-a) and r>(b+a) since the net total current enclosed by a contour constructed in these two regions is zero. 
The Betatron is described as an example of Faraday's Law. 23 Jan 2019 242 – Solenoid Valve Attractive Force Analysis Taking into Account Movable Part Motion. Textbook solution for Physics for Scientists and Engineers with Modern Physics… 10th Edition Raymond A. The output state is held until a magnetic flux density reversal falls below Brp, causing Output to be turned off Magnetic flux density applied on the branded side of the package which turns the output driver ON (VOUT = VDSon). L2 and L4 are perpendicular to the magnetic field i. The overall flux density profile also matches that of the previous work. And the magnetic flux in the air gap øg is given as the accumula-tion of flux density Bg over the area of the air gap Ag. 1 Magnetic fields in a Simple solenoid 9 2. by taking the derivatives of the magnetic flux. In physics, the term refers specifically to a long, thin loop of wire, often wrapped around a metallic core, which produces a uniform magnetic field in a volume of space (where some experiment might be carried out) when an electric current is passed through it. Voice coil diameter voice coil height AIR gap height linear coil travel ( p-p ) maximum coil travel ( p-p) magnetic gap flux density magnet weight total weight. What is the magnetic flux? Answer: From the formula of the magnetic flux, Φ = B A cos(θ) = 2 T * 2. Memberships. 00 x 108 m/s. Here, we will discuss the steps to investigate the effect on responsiveness caused by residual magnetization generated in the magnetic materials, and look at case examples. (1)], caused either by motion relative to the source or by changes in the source current, describing the effect of charge acceleration. These imaginary lines indicate the direction of the field in a given region. The saturation point is the flux density at which the material can not contain any more magnetic flux. Shaft Voltage Monitoring. Magnetic field, also known as magnetic flux density or magnetic induction is symbolized as B and is measured in Tesla (T). 00 x 108 m/s. 99966910952475 : gauss (International) 1 = 1 : line/square centimeter. Faraday's Law of Induction Faraday's law of induction is a basic law of electromagnetism that predicts how a magnetic field will interact with an electric circuit to produce an electromotive force (EMF). Kobelev Abstract. State Faraday's law of electromagnetic induction. First, we should find the lattice parameter(a) in terms of atomic radius(R). We asses the magnetic field inside the toroid using the formula for the magnetic field in a solenoid because a toroid is in fact a solenoid whose ends are bent together to form a hollow ring. S unit of magnetic flux density; it is also commonly used, especially when dealing with weak magnetic flux densities because one Tesla is equal to 10000 G. A circular solenoid contains 100 turns of wire and carries a current of 2 amps. The sheet current density satisfies the continuity equation ∇ ⋅ J = 0. Resolving Φ in the above equation using equation (3) and (4) we get, (8) The unit of the flux density is Weber/ meter2, known as tesla (T). 3 Magnetic Flux and Flux Density Magnetic flux is the amount of magnetic filed produced by a magnetic source. The heat diffusion equation of the thin superconducting film is as follows:. (a) The flux at t= 0. Track your radio and television airplay and take your promotion to the next level. In this modern era of automation, we need to measure quantities more so than ever. 
This is a derivation of the magnetic flux density around a solenoid that is long enough so that fringe effects can be ignored. Briefly describe an experiment to determine magnetic flux density. Here, the hysteretic effect is disregarded. A region in which a force acts on magnetic materials or magnetically susceptible materials. The basic formula (advance_steps = delta_extrusion_speed * K) is the same as in the famous JKN pressure control, but with one important difference: JKN calculates the sum of all required advance extruder steps inside the planner loop and distributes them equally over every acceleration and. The equation above will give us this flux density. At a point at the end of the solenoid, 18 Magnetic Flux. Magnet weight 1412 g. Gaussmeter An instrument that measures the instantaneous value of magnetic induction, B. It is most economical to operate magnetic materials at as high a flux density as possible, usually near the knee of the curve. Notice the similarity to the potential energy stored by the electric field. The inductance of a solenoid is determined by its length, cross-sectional. For a solenoid this relation is:. P = power, I or J = Latin: influare, international ampere, or intensity and R = resistance. DC magnetic bias of P, RM, PM and E cores (B ≤0. 2 Inductance of Solenoid 45 2. Flux Density 2. Soft magnetic materials are also used for electromagnetic pole-pieces, to enhance the fields produced by the magnet. FLUX BYPASS FOR SOLENOID ACTUATOR. The GP400 II cartridge has a spherical diamond. 7 only inside the solenoid. Maximum Flux Density Calculator. Figure 1 We know that the energy stored in a magnetic field of no magnetic saturation is given by:. We have step-by-step solutions for your textbooks written by Bartleby experts!. Parameterize the solenoid $$\langle{rcos(\theta),rsin(\theta),k\theta}\rangle$$ and then take the cross product of the derivative with the vector to get the vertical component, and then take a further dot product with $(0,0,1)$ to get the vertical component. in/question/1933365. There are two peaks of flux density at fixed y and z coordinates but with varying x distance. The formula for the field inside the solenoid is B = m 0 I N / L This formula can be accepted on faith; or it can be derived using Ampere's law as follows. Chapter 23: Magnetic Flux and Faraday's Law of Induction James S. The high stresses due to Lorentz forces in the coil is one principal constraint. Magnetic field strength. Consequently a very favourable signal to noise ratio can be obtained with these cartridges. In addition, you can see the charge signs of the two capacitor plates and arrows for the (conventional) current direction. the change in the slope of the graph. Specific Gravity. Mains frequency transformer steels can be operated at least to 1. The permeability is most often denoted by the greek symbol mu (). Hall (Magnetic) Sensors. MAGNETIC CIRCUIT The magnetic circuit is designed for low distortion and high efficiency. Let's define: the vector of the magnetic induction, u unit vector the normal vector to the frame surface, n unit vector. The magnetic field lines inside the toroid are concentric circles. Objective : To estimate the accuracy of experimental result. And the magnetic flux in the air gap øg is given as the accumula-tion of flux density Bg over the area of the air gap Ag. An object made from a comparatively dense material (such as iron). An interesting anecdote that is related to the concept of. The self inductance if a solenoid is L= μ 0 n 2 Al. 
A solenoid is a long coil of wire wrapped in many turns. 25 mT, f = 10 kHz, T = 100 °C). Basic introduction to electromagnetic field. Magnetic flux density. 5 m long and 6 m free bore diameter, and of an iron flux-return yoke, which includes the central barrel, two end-caps and the ferromagnetic parts of the hadronic forward calorimeter. A solenoid similar to that shown in the diagram has 100 turns connected in a circuit over a length of 0. where μ₀=4π × 10−7 H/m is the magnetic constant, N is the number of turns, I is the current, and L is the solenoid length. 140 \, m} = 2. (Note that this definition of magnetic flux has the same form as the definition of the flux of the electric field used, for example, in Gauss's law. Appreciate the use of Ampere's law to find magnetic flux density inside a solenoid. The energy density in a magnetic field is derived. If you enter the measurement X and the remanence Br of the magnet material there, the software calculates the flux density at point (x). MAGNETIC FLUX. The density is to correspond to the temperature at the measuring point. When produced in a full B/H curve, it is apparent that the direction in which H is applied affects the graph. Derive and use the relation = B. The magnetic field is homogeneous inside the toroid and zero outside the toroid. Why High Density Plasmas? ¾ Lower ion bombardment energies improve selectivity and reduce ion-bombardment-induced physical damage of the wafer surface. 25 s and t = 0. Define magnetic flux density and the tesla. For this is. Download Magnetic Flux Density Unit Converter our powerful software utility that helps you make easy conversion between more than 2,100 various units of measure in more than 70 categories. see the formula and the numbers. Energy Density of a Magnetic Field in a Solenoid. Magnetic Field Formula Solenoid, A solenoid is a coil wound into a tightly packed helix. us20060144035a1 us11/027,969 us2796905a us2006144035a1 us 20060144035 a1 us20060144035 a1 us 20060144035a1 us 2796905 a us2796905 a us 2796905a us 2006144035 a1 us2006144035 a1 us 2006144035a1. When T2 is turned on, the valve rotates counterclockwise. A Solenoid 2. It is the flux per unit area perpendicular to the field. (c) Calculate the slope of 5 Wb 10 Wb. Of primary concern, however, is the magnetomotive force needed to establish a certain flux density, B in a unit length of the magnetic circuit. We do not sell or ship outside the USA. In electric circuits, this motivating force is voltage (a. Magnetic field strength (A/m). electric flux density, electric displacement. Wolfgang Pauli introduced the Bohr magneton as a fundamental unit of magnetic moment during an effort to find a quantum basis for magnetism, as Davide Castelvecchi recounts. A region in which a force acts on magnetic materials or magnetically susceptible materials. Find the magnetic flux Φ B through one single turn of the solenoid in #4. the critical state model, allows one to predict the magnetic behavior of a high field superconductor in terms of a simple empirical parameter. In the illustration above lines of magnetic flux would follow the lines created by the iron filings. Magnetic fields are created when electric current flows: the greater the current, the stronger the magnetic field. Consider an infinitely long conductor AB through which current I flows. Inducing currents by changing currents in a nearby wire. One point to note, though, is that flux density is limited by saturation to below about 1. 
These Hall-effect switches are monolithic integrated circuits with tighter magnetic specifications, designed to operate continuously over extended temperatures to +150°C, and are more stable with both temperature and supply voltage changes. You need a subscription to start this learning unit. What is the magnetic induction at the centre when it carries a current of 5 A. If a loop of wire with an area A is in a magnetic field B, the magnetic flux is given by:. A Solenoid 2. Compare your results with theoretical value which is µ 0 =1. B) is the magnetic field multiplied by the area. 55 s, which is about −3 Wb. Moving magnets versus moving wire loops. Total flux through a tightly wound solenoid. What does this tell you? For most of the length of the solenoid the flux density is constant. Generalization of loop theorem. Appendix A. Magnetic Rotor Flux. Flux density. This output can be connected to a voltmeter, oscilloscope, recorder or external analog-to-digital converter. com, a free online dictionary with pronunciation, synonyms and translation. The magnetic field is homogeneous inside the toroid and zero outside the toroid. Write down the magnetic field inside the ideal solenoid. Here, the hysteretic effect is disregarded. 6=(6,1)(2,3) Prime greater than1; divisible by 1 and itself. The magnetic field (another name is magnetic flux density) B of a long solenoid in air without a ferromagnetic core is calculated using the following formula. The magnetic flux density, B, is the total magnetic effect that results. Since the Magnet-ic Force produced by the Solenoid is a function of the Flux Density, the Magnetic Force will have a propor-tionate hysteresis loop as the Flux Density, B. The reluctance of a magnetic circuit is proportional to its length and inversely proportional to its cross-sectional area and a magnetic property of the given material called its permeability. ) Determine the reluctance of each air gap. The relative measurement of magnetic properties includes magnetic field strength, magnetic flux and magnetic moment. The properties of this perfectly stable material permit an optimal flux density of the magnet of 8500 gauss, a remarkably high value resulting in a high sensitivity. Here, the hysteretic effect is disregarded. which agrees with the zero magnetic eld outside the solenoid, B = ∇ A = BR2 2 ∇∇ ϕ = 0: (20) Equations for the Vector Potential A static magnetic eld of steady currents obeys equations ∇ B = 0; (21) ∇ B = 0J: (22) In terms of the vector potential A(x;y;z), the zero-divergence equation (21) is automatic: any B = ∇ A has zero. Point d represents a point of relatively small slope, while a is at a point of. Magnetic Flux Density Formula Solenoid. When B < Brp, the OUT pin go into "off " state. The term 'density' is a characteristic property of every object, and it forms a basic topic of study in science. Therefore, the energy that was carried by the Poynting vector has been stored as magnetic energy inside the solenoid. Learn more about Magnetic Field In A Solenoid Equation and solved example. Because of the regular geometries of the iron magnetic resistance (R1, R2 and R4), the magnetic voltage and the magnetic flux of an iron resistance can. Textbook solution for Physics for Scientists and Engineers with Modern Physics… 10th Edition Raymond A. The direction of the magnetic field is also indicated by these lines. Temp Temperature. and the relative permeability of the core is k = , then the magnetic field at the center of the solenoid is. 
Therefore we consider the magnetic flux density in a long coil, which is constant, so that equation (1) simplifies to the following relation. We find the magnetic field produced by a solenoid with the formula B = μ₀Ni/l, where i is the current, N is the number of loops and l is the length of the solenoid; for a solenoid of length L with N turns, the turn density is n = N/L. In the worked example, the number of turns per unit length is n = 300 turns / 0.140 m ≈ 2.14 × 10³ turns/m, and the equation above then gives the flux density. Where the surface S is a planar surface with area A and the magnetic field is constant with magnitude B, a simplified version of the flux formula can be used: Φ = BA. Magnetic flux is denoted by ΦB, where B is the magnetic field, and its unit is the weber (Wb). The SI unit of magnetic flux is the weber (Wb; in derived units, volt-seconds), and the CGS unit is the maxwell. Magnetic flux is defined in terms of the forces exerted by the magnetic field on electric charge. The negative sign in the induction equations is a result of Lenz's law, named after Heinrich Lenz, and the induced EMF is obtained by taking the derivative of the magnetic flux.

For the purpose of calculations, the iron is supposed to carry the whole of the flux throughout its entire length. Reluctance is the ratio of MMF to flux in the magnetic conductor, and so is the analogue of electrical resistance. The maximum flux density for a core/coil geometry should be calculated to verify that it is below the specified value for a given core, so that the core does not saturate. Self and mutual inductance are introduced. Key points in the design of a solenoid actuator are flux-density analysis, determination of plunger shape and mass, optimal bobbin design, magnetic analysis, determination of the duty ratio, and calculation of the number of coil turns with regard to temperature rise. The meter can be fully configured and flux density readings acquired from a remote computer or PLC using the RS-232 communications port. Grid Magnetic Angle is defined as the difference between Magnetic North and Grid North (East Angle is positive) and is calculated as 'Magnetic Declination' minus 'Grid Convergence'. Exercise: suggest and explain two ways of varying the magnitude of the flux density in the solenoid.
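Continuing the worked numbers above (300 turns over 0.140 m), this sketch computes the turn density, the field inside the coil and the flux through one turn; the current and bore radius are made-up values for illustration only.

import math

MU_0 = 4 * math.pi * 1e-7      # H/m

N, L, I = 300, 0.140, 2.0      # turns, length (m), current (A); the current is assumed
n = N / L                      # turn density, about 2.14e3 turns per metre
B = MU_0 * n * I               # flux density inside the long solenoid, tesla
A = math.pi * 0.01 ** 2        # cross-section for an assumed 1 cm bore radius, m^2
flux_per_turn = B * A          # Phi = B * A, in weber
print(n, B, flux_per_turn)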
We know that D = ε₀E in free space; similarly, the magnetic flux density B is related to the magnetic field intensity H as B = μ₀H, where μ₀ is a constant known as the permeability of free space. The permittivity of a material likewise relates the electric flux density to the electric field. In the referenced Figure 1, the flux density vector points in the positive z direction inside the solenoid and in the negative z direction outside the solenoid. The magnitude of the vector field is then the line density. Electricity and magnetism were once thought to be separate forces. The unit of magnetic flux is the weber, abbreviated Wb. The electric current loop is the most common source of B, and the magnetic field of the earth is about one-half gauss in strength. A gaussmeter is an instrument that measures the instantaneous value of magnetic induction, B; the magnetic flux density of a magnetic field can also be measured using a current balance.

Review questions: What is magnetic flux density? What is its symbol and unit? What is the formula for the magnetic flux density for a conductor at right angles to a magnetic field? Rearrange it to make force the subject.

The total magnetic flux flowing through a solenoid is found by integrating across its cross-section, Φ = ∫∫_S B · dS = μniS, where S is the cross-sectional area of the solenoid. The magnetic field intensity on the axis of a long solenoid of length l and internal diameter d, where l ≫ d, having N turns and carrying a current i, is H = Ni/l. If the relative permeability of the core is k, then the magnetic field at the center of the solenoid is B = kμ₀nI. Equation 3 is to be taken for the typical materials in this magnetic circuit: iron, magnet, and air. The Betatron is described as an example of Faraday's law of induction, which has both an integral and a differential form. This high Bm value is needed for magnetometer calibration. An amorphous soft magnetic alloy of the formula (Fe1−αTMα)100−w−x−y−zPwBxLySizTipCqMnrCus, wherein TM is Co or Ni and L is Al, Cr, Zr, Mo or Nb, is described in the patent literature.

(Figure 3: FEMM plot of the magnitude of the axial flux density produced by the active shield solenoid at 10 mA, at the cell equator and at cut-off level.)
Faraday demonstrated the phenomenon of electromagnetic induction (a variable magnetic flux passing through a circuit generates an electromotive force and a current) and established the law that describes this phenomenon. Magnetic flux density is the flux per unit area perpendicular to the field. Leakage flux is the flux which follows a leakage path, as shown in the accompanying figure. The magnetic energy density at some point in space is given by Equation 28-20, u_m = B²/(2μ₀). Remanence is the residual flux level in a core at H = 0.
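The energy-density formula just quoted is easy to sanity-check numerically; the following few lines are only an illustration of u = B²/(2μ₀), with an arbitrarily chosen 1 T field.

import math

MU_0 = 4 * math.pi * 1e-7   # H/m

def magnetic_energy_density(b_tesla):
    # u = B^2 / (2 * mu_0), in joules per cubic metre
    return b_tesla ** 2 / (2 * MU_0)

print(magnetic_energy_density(1.0))   # roughly 4e5 J/m^3 for a 1 T field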
CommonCrawl
Budget-cut: introduction to a budget based cutting-plane algorithm for capacity expansion models Bismark Singh ORCID: orcid.org/0000-0002-6943-657X1,2, Oliver Rehberg2, Theresa Groß3, Maximilian Hoffmann3, Leander Kotzur3 & Detlef Stolten3,4 Optimization Letters (2021)Cite this article We present an algorithm to solve capacity extension problems that frequently occur in energy system optimization models. Such models describe a system where certain components can be installed to reduce future costs and achieve carbon reduction goals; however, the choice of these components requires the solution of a computationally expensive combinatorial problem. In our proposed algorithm, we solve a sequence of linear programs that serve to tighten a budget—the maximum amount we are willing to spend towards reducing overall costs. Our proposal finds application in the general setting where optional investment decisions provide an enhanced portfolio over the original setting that maintains feasibility. We present computational results on two model classes, and demonstrate computational savings up to 96% on certain instances. Governments worldwide are pushing towards an increasing use of renewable energy technologies. In line with emission targets set by the European Commission—as part of the 2050 Low Carbon Economy roadmap to reduce emissions to 80% below the levels in the year 1990 [6]—the German federal government plans to increase the percentage of energy derived from renewable sources to 80% by the year 2050 [3]. This rampant increase in the share of renewables requires important investment decisions that guide future energy policies, e.g., the expansion of existing energy capacity infrastructure to accommodate the needs and demands of future energy production and supply. Expanding existing capacity to include photovoltaics, hybrid generation systems, and storage devices are a few examples of such investment decisions that can potentially lead to lower greenhouse gas emissions [18]. Optimization models have a rich history for both operations of installed energy systems [1, 33], as well as extending existing infrastructure; see, e.g., so-called capacity expansion models [2, 25], models for integrating renewable sources with existing fossil fuels [29], and planning for expanding transmission networks [24]. Typically, mixed-integer programs (MIPs) are developed and employed to inform these decisions as well as optimize operations. Two examples of such energy system models are MARKAL [22] and TIMES [23]; see, also [20] and references within. The Framework for Integrated Energy System Assessment (FINE) is an open source python package that provides a framework for the modeling, optimization and analysis of energy systems [7, 31]. The goal of such optimization models is to minimize the total annual costs for deploying and operating energy supply infrastructure, subject to the technical and operational constraints of a multi-commodity flow energy-system problem. FINE provides the functionalities to set up energy system models with multiple regions, commodities and time steps. Examples of commodities include electricity, hydrogen, and methane gas. Time steps can also be aggregated to reduce the complexity of the model [12]. In addition to existing infrastructural costs, costs are also incurred by building new components and increasing their capacities. 
The work in this article arises from a collaboration between mathematicians and energy-and-climate researchers at the Friedrich-Alexander-Universität Erlangen-Nürnberg and the Forschungszentrum Jülich, with the goal to improve the performance of the FINE package as well as other energy system models. Against the backdrop of decreasing prices of renewable energy sources [13], rising CO2 emission costs [5, 6], a transforming energy demand and new options for energy storage and conversion [32], the consideration of novel technologies offers opportunities for further cost reduction of existing energy systems. We consider the problem of determining which components of an energy capacity infrastructure to install such that total annual costs are further reduced. Installing a new component incurs a fixed cost; however, overall costs are potentially reduced by allowing new components access to larger ranges of energy supplies thereby leading to a more efficient utilization of the entire system. Neumann and Brown consider a similar problem to expand transmission while minimizing total annual costs [27]. Such problems also find application in several other contexts within energy systems that seek to minimize total annual costs; see, e.g., [30] for a model that extends natural gas transmission networks. For an overview of transmission expansion planning problems, see, Mahdavi et al. [26]. Such investment optimization problems are frequently formulated as MIPs that employ binary variables to inform "yes" or "no" decisions for utilizing new technologies; see, e.g., [9, 20]. The choice of such discrete decision variables governing whether a new technology "is-built" leads to a significant increase in the computational effort to solve the MIP. Models covering multiple spatial regions impose further computational challenges as additional binary variables are required for each region. For an introduction to the challenges and model simplifications, see, e.g., [8, 20]. However, as we mention in Sect. 1.1, integrating novel technologies into existing energy systems leads to potentially reduced overall costs and is also advantageous in keeping abreast with the dynamically changing energy sector. To this end, we provide a method that offers scope for reduced runtimes, both for systems with several existing technologies as well as new technologies to choose from. This work is related to previous works on a so-called "district model" that includes six multi-family houses and households in each building [15], and also to a model with 16 transmission zones within Great Britain [28]. In previous improvements to the FINE package, Kannengießer et al. use a time-series aggregation method and present a temporally aggregated simplification of the model [15]. The authors use a multi-regional district model and fix the binary design decisions for all components. The corresponding optimization model is thus reduced to a linear program (LP); this model is solved easily, however the solutions are suboptimal. In contrast the approach we present is an iterative heuristic that is guaranteed to converge to the optimal solution. 
Indices/Sets

\(c \in {\mathcal {C}}\) : Set of components [\(c_1,\ldots ,c_{|{\mathcal {C}}|}\)]
\(l \in {\mathcal {L}}\) : Set of locations [\(l_1, \ldots , l_{|{\mathcal {L}}|}\)]

Parameters

\(\hbox {CapMin}_{c,l}\) : Minimum capacity for component c at location l [kW]
\(\hbox {CapMin}_{c,(l_1,l_2)}\) : Minimum capacity for component c on edge \((l_1,l_2)\) [kW]
\(\hbox {CapMax}_{c,l}\) : Maximum capacity for component c at location l [kW]
\(\hbox {CapMax}_{c,(l_1,l_2)}\) : Maximum capacity for component c on edge \((l_1,l_2)\) [kW]
\(\hbox {TAC}^{cap}_{c,l}\) : Total annual cost for installing one unit of capacity for component c at location l [€/kW]
\(\hbox {TAC}^{bin}_{c,l}\) : Total annual cost for building component c at location l; independent of the size of the installed capacity [€]
\(\hbox {TAC}^{op}_{c,l}\) : Total annual cost for one unit of operation of component c at location l [€/kWh]
\(\hbox {TAC}^{cap}_{c,(l_1,l_2)}\) : Total annual cost for installing one unit of capacity for component c on edge \((l_1,l_2)\) [€/kW]
\(\hbox {TAC}^{bin}_{c,(l_1,l_2)}\) : Total annual cost for building component c on edge \((l_1,l_2)\); independent of the size of the installed capacity [€]
\(\hbox {TAC}^{op}_{c,(l_1,l_2)}\) : Total annual cost for one unit of operation for component c on edge \((l_1,l_2)\) [€/kWh]

Decision Variables

\(bin_{c,l}\) : (IsBuilt-variable) 1 if component c is built at location l; else 0
\(bin_{c,(l_1,l_2)}\) : (IsBuilt-variable) 1 if component c is built on edge \((l_1,l_2)\); else 0
\(cap_{c,l}\) : Installed unit capacity of component c at location l [kW]
\(cap_{c,(l_1,l_2)}\) : Installed unit capacity of component c on edge \((l_1,l_2)\) [kW]
\(op_{c,l}\) : Used unit capacity of component c at location l [kWh]
\(op_{c,(l_1,l_2)}\) : Used unit capacity of component c on edge \((l_1,l_2)\) [kWh]

In the above notation, bin denotes a binary decision variable, while cap and op denote continuous decision variables for the capacity and operation, respectively. The parameters CapMin and CapMax denote bounds on the cap variable, while TAC denotes total annual cost. We provide details in Sect. 2.2.

Optimization model

The models built with the FINE package allow the inclusion of two types of components: (i) "optional" components that are modeled with additional investment costs independent of the installed unit size, and (ii) already existing components that are modeled with an installation cost contribution that is linearly dependent on the installed unit size. We have five choices for these optional and existing components that are relevant to the discussion in this work: Source, Sink, Storage, Conversion, and Transmission. The complete FINE package includes other components as well, and the user specifications decide the components that form part of the optimization model. We then have a graph where the nodes include combinations of the first four component types, while each Transmission component represents an edge connecting two nodes. Let the index \(c \in {\mathcal {C}}\) denote the components, index \(l \in {\mathcal {L}}\) denote the locations of a node, and tuple \((l_1,l_2)\) with \(l_1, l_2 \in {\mathcal {L}}\), \(l_1 \ne l_2\) denote the start and end locations for an edge, respectively.
Further, let \({\mathcal {C}}^N \subset {\mathcal {C}}\) denote the set of four components that form nodes, and \({\mathcal {C}}^E\) denote the set of components that form edges; i.e., \({\mathcal {C}}^N = \{ {\mathcal {C}}^{\text {Source}}, {\mathcal {C}}^{\text {Sink}}, {\mathcal {C}}^{\text {Storage}}, {\mathcal {C}}^{\text {Conversion}} \}\) and \({\mathcal {C}}^E = \{ {\mathcal {C}}^{\text {Transmission}} \}\). Here, the source components include nodes that provide commodities to the graph, while the sink components withdraw commodities from the graph. The storage components are nodes that connect the different time steps with each other by incorporating a so-called state of charge (SoC) variable; these components operate as a sink (which increases the SoC) or a source (which decreases the SoC) for the "state variable" (the SoC) at a given time step. The conversion components are nodes that connect multiple "commodity-subgraphs" to each other by converting one commodity to another; e.g., an electrolyzer consumes electricity and converts it into hydrogen. Finally, let \({\mathcal {L}}^c\) denote the location(s) of a component c. Although the decision variables and parameters within the FINE package include other indices (such as time and commodity) as well, we suppress these indices as they are not relevant to the discussion in this article; see the complete model description in the appendix, and further details in [31].

FINE includes binary variables to inform whether new optional components are built, and calls these binary variables "IsBuilt-variables" [31]. A value of 1 denotes that a component is built at a given location. If a component already exists, it does not have an IsBuilt-variable; we assume that there is at least one such component, else there is no configuration to start with. Building an optional component c at location l incurs a fixed total annual cost (TAC) of \(\text {TAC}^{bin}_{c,l}\). Further, the decision to build is governed by its corresponding capacity variable, \(cap_{c,l}\). For the sake of notation, we assume existing components also have associated binary variables that are always 1 with \(\text {TAC}^{bin}_{c,l} = 0\). The value of a capacity variable corresponds to the scale of the installed component, e.g., for a photovoltaic Source component it corresponds to the area of solar panels installed [31]. If any capacity is installed, the corresponding IsBuilt-variable takes the value 1. All optional components have minimum capacity thresholds that are informed by data; we take this threshold as 0 if data is unspecified. In other words, \(cap_{c,l}\) is a semi-continuous variable that is either 0 or lower bounded by the minimum capacity. Analogously, maximum capacity thresholds are available as well, and we take these as \(+\infty \) if unspecified. Equations (1) and (2) summarize this discussion for the edges and nodes, respectively.

$$\forall c \in {\mathcal {C}}^E;\ l_1, l_2 \in {\mathcal {L}}^{c},\ l_1 \ne l_2:$$
$$\text {capMax}_{c,(l_1,l_2)} \cdot bin_{c,(l_1,l_2)} \ge cap_{c,(l_1,l_2)} \qquad \text{(1a)}$$
$$\text {capMin}_{c,(l_1,l_2)} \cdot bin_{c,(l_1,l_2)} \le cap_{c,(l_1,l_2)} \qquad \text{(1b)}$$
$$cap_{c,(l_1,l_2)} \ge 0 \qquad \text{(1c)}$$
$$bin_{c,(l_1,l_2)} \in \{0,1\} \qquad \text{(1d)}$$

$$\forall c \in {\mathcal {C}}^N;\ l \in {\mathcal {L}}^c:$$
$$\text {capMax}_{c,l} \cdot bin_{c,l} \ge cap_{c,l} \qquad \text{(2a)}$$
$$\text {capMin}_{c,l} \cdot bin_{c,l} \le cap_{c,l} \qquad \text{(2b)}$$
$$cap_{c,l} \ge 0 \qquad \text{(2c)}$$
$$bin_{c,l} \in \{0,1\} \qquad \text{(2d)}$$

If \(bin_{c,(l_1,l_2)}=1\), then Eqs. (1) enforce \( \text {capMin}_{c,(l_1,l_2)} \le cap_{c,(l_1,l_2)} \le \text {capMax}_{c,(l_1,l_2)}\); else, \(cap_{c,(l_1,l_2)}=0\). Equations (2) are analogous to equations (1). Further, we have operational variables, which include time indices, corresponding to the components; we denote these by \(op_{c,l}\), and they capture how much of the installed capacity is actually used. Below is the optimization model we consider in this article.

$$z^* = \min\ \text {TAC}^{cap} \cdot cap + \text {TAC}^{bin} \cdot bin + \text {TAC}^{op} \cdot op \qquad \text{(3a)}$$
$$\text {s.t.}\quad \text{(1), (2)} \qquad \text{(3b)}$$
$$\text {non-temporal bounding constraints} \qquad \text{(3c)}$$
$$\text {temporal bounding constraints} \qquad \text{(3d)}$$
$$\text {component-linking constraints} \qquad \text{(3e)}$$
$$\text {transmission constraints} \qquad \text{(3f)}$$
$$\text {storage constraints} \qquad \text{(3g)}$$

The objective function in Eq. (3a) abbreviates the following quantity:

$$\sum _{c \in {\mathcal {C}}^N} \sum _{l \in {\mathcal {L}}^c} \left( \text {TAC}^{cap}_{c,l} \cdot cap_{c,l} + \text {TAC}^{bin}_{c,l} \cdot bin_{c,l} + \text {TAC}^{op}_{c,l} \cdot op_{c,l} \right) + \sum _{c \in {\mathcal {C}}^E} \sum _{\begin{array}{c} (l_1,l_2) \in {\mathcal {L}}^c \times {\mathcal {L}}^c,\\ l_1 \ne l_2 \end{array}} \left( \text {TAC}^{cap}_{c,(l_1,l_2)} \cdot cap_{c,(l_1,l_2)} + \text {TAC}^{bin}_{c,(l_1,l_2)} \cdot bin_{c,(l_1,l_2)} + \text {TAC}^{op}_{c,(l_1,l_2)} \cdot op_{c,(l_1,l_2)} \right) \qquad \text{(4)}$$

Here, cap, bin, and op denote three vectors corresponding to the capacity, IsBuilt, and operational variables, respectively. The coefficients \(\hbox {TAC}^{cap}\), \(\hbox {TAC}^{bin}\), and \(\hbox {TAC}^{op}\) denote vectors of appropriate size for the TAC corresponding to installing capacity, building new components, and operating these components, respectively. That is, the objective function includes cost contributions that scale linearly with the size of the installed capacities of the components (\(\hbox {TAC}^{cap}\)), cost contributions that are independent of the size of the installed capacity but occur if a component is built (\(\hbox {TAC}^{bin}\)), as well as cost contributions that are connected to the operation of the built components (\(\hbox {TAC}^{op}\)); see the appendix for details. The objective function thus seeks to minimize the sum of the TACs of the entire system, with both optional and existing components. The bounding constraints in Eqs. (3c) and (3d) enforce further limits on the capacity and operation variables without and with time indices, respectively. The component-linking constraints of Eq. (3e) include commodity balances, annual commodity inflow and outflow limits as well as shared potentials; they also serve to define limits via the \(\text {capMax}\) parameter.
The bidirectional and symmetric nature of the components along edges, as well as constraints that model the optimal power flow using the standard linearized DC formulation, is expressed via the transmission constraints of Eq. (3f). The storage constraints of Eq. (3g) express the charging and discharging status via the state of charge for the different components. For a detailed description of these constraints, see [31]; we provide a complete model description in the appendix. Importantly, constraints (3c)–(3g) do not contain any binary variables.

A budget-cut algorithm

MIP solvers typically rely on branch-and-bound strategies to identify the optimal solution. The presence of additional binary variables results in a larger search tree that can create potential computational challenges. In this section, we propose an algorithm that prunes branches that lead to suboptimal solutions. To this end, we derive a sequence of cuts by solving a sequence of LPs. We use these cuts to determine a "budget" that provides a valid inequality for the original problem, which is thereby reduced in size. We formalize this concept in the Budget-Cut Algorithm, present it in Fig. 1, and explain it in more detail below. We first note that when constraints (1d) and (2d) are reduced to their continuous relaxation, model (3) is an LP. To this end, we solve two models where all the binary IsBuilt-variables are determined a priori. Consider the following optimization models:

$${\bar{z}} = \min\ \text {TAC}^{cap} \cdot cap + \text {TAC}^{op} \cdot op \qquad \text{(5a)}$$
$$\text {s.t.}\quad \text{(1a)–(1c), (2a)–(2c)} \qquad \text{(5b)}$$
$$\text{(3c)–(3g)} \qquad \text{(5c)}$$
$$bin = 0, \qquad \text{(5d)}$$

$$\underline{z} = \min\ \text {TAC}^{cap} \cdot cap + \text {TAC}^{op} \cdot op \qquad \text{(6a)}$$
$$\text {s.t.}\quad \text{(1a), (1c), (2a), (2c)} \qquad \text{(6b)}$$
$$\text{(3c)–(3g)} \qquad \text{(6c)}$$
$$bin = 1. \qquad \text{(6d)}$$

We denote models (5) and (6) as Existing and Extended, with optimal objective function values of \({\bar{z}}\) and \(\underline{z}\), respectively. We assume Existing is feasible; in Sect. 5 we discuss the implications of this assumption. The Existing model determines a baseline where no optional components are built, and the only components used for the solution are those that do not have costs independent of the size of the installed capacity; i.e., all the IsBuilt-variables are set to 0. Thus, the second term, \(\text {TAC}^{bin} \cdot bin\), in the objective function of model (3) is 0 for all the IsBuilt-variables. We note that the Existing model still includes the already existing components, and these are the only components we optimize over. The Extended model determines the other extreme where all the optional components are built; i.e., all the IsBuilt-variables are set to 1. The idea of the Extended model is subtle. Although all the IsBuilt-variables are set to 1, we ignore the cost to build them by not including the second term, \(\text {TAC}^{bin} \cdot bin\), of the objective function of model (3). By ignoring the cost to build the components, the Extended model focuses on finding the most cost-effective components to use and install capacities on. Further, we do not include the lower thresholds on capacity for optional components, i.e., Eqs. (1b) and (2b). Since optional components are built to reduce total costs, the Extended model represents the best we can hope to achieve.
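As a rough illustration of how the two bounding LPs could be set up in Pyomo (the modeling layer used for the experiments later in the paper), consider the sketch below. It assumes a Pyomo model m exposing an indexed binary Var m.bin, an Objective m.obj, Expression components m.capacity_cost and m.operation_cost for the \(\text {TAC}^{cap} \cdot cap\) and \(\text {TAC}^{op} \cdot op\) terms, and a Constraint m.min_capacity for (1b)/(2b); these names are illustrative and do not correspond to FINE's internal implementation.

from pyomo.environ import Objective, SolverFactory, value

def solve_existing(m):
    # Existing (model (5)): fix every IsBuilt variable to 0 and solve the remaining LP.
    for v in m.bin.values():
        v.fix(0)
    SolverFactory('gurobi').solve(m)
    return value(m.obj)              # upper bound \bar{z}

def solve_extended(m):
    # Extended (model (6)): fix every IsBuilt variable to 1, ignore the fixed building
    # costs, and drop the minimum-capacity constraints (1b)/(2b).
    for v in m.bin.values():
        v.fix(1)
    m.min_capacity.deactivate()
    m.obj.deactivate()
    m.obj_extended = Objective(expr=m.capacity_cost + m.operation_cost)
    SolverFactory('gurobi').solve(m)
    return value(m.obj_extended)     # lower bound \underline{z}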
Intuitively, the Existing model chooses an optimal solution from the existing capacity infrastructure, while the Extended model chooses the optimal solution from the existing and the optional capacity infrastructure. The following lemma relates the Existing and Extended models with the "true" model (3).

Lemma 1 \(\underline{z} \le {z^*} \le {\bar{z}}\).

Proof The feasible region represented by constraints (5b)–(5d) is a subset of the feasible region represented by constraints (3b)–(3g). Further, the objective functions of models (3) and (5) are identical, since \(\text {TAC}^{bin} \cdot bin\) is 0 for model (5). With a minimization objective, \({z^*} \le {\bar{z}}\) follows. Next, we note that constraints (6b) and (6d) restrict the capacity variables between 0 and capMax, while constraints (3b) enforce that the capacity variables are either 0 or bounded between capMin and capMax. Since the other constraints of models (3) and (6) are identical, the feasible region of model (3) is smaller than that of model (6). Further, the objective function in Eq. (3a) is at least that in Eq. (6a). Thus, \(\underline{z} \le {z^*}\) follows. \(\square \)

Let the triplets \([cap^*, bin^*, op^*]\), \([ \overline{cap}, \overline{bin}, \overline{op}]\) and \([\underline{cap}, \underline{bin}, \underline{op}]\) denote the optimal solutions for models (3), (5) and (6), respectively. Here, \(\overline{bin}=0\) and \(\underline{bin}=1\). Further, we define \( b = {\bar{z}} - \underline{z} \ge 0\). Next, we show that b determines a "budget", the maximum cost we are willing to spend on building IsBuilt components. In other words, spending more than b exceeds the potential savings offered by the addition of optional components. We use the following corollary to Lemma 1 in the proof.

Corollary 1 \( \text {TAC}^{cap} \cdot cap^* + \text {TAC}^{op} \cdot op^* \ge \text {TAC}^{cap} \cdot \underline{cap} + \text {TAC}^{op} \cdot \underline{op}\).

Proof From the proof of Lemma 1, it follows that the solution \((cap^*,op^*,1)\) is feasible for model (6); since \((\underline{cap}, \underline{op})\) is optimal for model (6), the inequality follows. \(\square \)

Lemma 2 \(\text {TAC}^{bin} \cdot bin \le b\) is a valid inequality for model (3).

Proof From Lemma 1 and the definition of b we have \(\text {TAC}^{cap} \cdot cap^* + \text {TAC}^{op} \cdot op^* + \text {TAC}^{bin} \cdot bin^* - (\text {TAC}^{cap} \cdot \underline{cap} + \text {TAC}^{op} \cdot \underline{op}) \le b.\) Then, from Corollary 1, we have \(\text {TAC}^{bin} \cdot bin^* \le b\) for any optimal solution of model (3). \(\square \)

For the sake of completeness, we rewrite the valid inequality in its full form:

$$\sum _{c \in {\mathcal {C}}^N} \sum _{l \in {\mathcal {L}} ^c} \text {TAC}^{bin}_{c,l} \cdot bin_{c,l} + \sum _{c \in {\mathcal {C}}^E} \sum _{\begin{array}{c} (l_1,l_2) \in {\mathcal {L}}^c \times {\mathcal {L}}^c, \\ l_1 \ne l_2 \end{array}} \text {TAC}^{bin}_{c,(l_1,l_2)} \cdot bin_{c,(l_1,l_2)} \le b, \qquad \text{(7)}$$

as well as model (3) with the valid inequality:

$$\min\ \text{(3a)} \qquad \text{(8a)}$$
$$\text {s.t.}\quad \text{(3b)–(3g)}, \qquad \text{(8b)}$$
$$\text{(7)}. \qquad \text{(8c)}$$

Equation (7) provides a valid inequality for model (3) that takes the form of a typical 0–1 knapsack constraint. For ease of exposition within this section, we represent Eq. (7) as \(\sum _{i \in I} a_i \cdot bin_i \le b\). Although the weights of the knapsack items, \(a_i\), are known, the benefit derived by the addition or removal of a single element requires the solution of a new instance of model (3).
We could determine this benefit by solving an instance with a subset of IsBuilt-variables fixed to 1, and then compute the possible savings minus the construction costs. However, this requires a solution to \(2^{|I|}\) problem instances. Instead, we use b to determine elements that are too expensive to construct, remove them apriori via Lemma 2, and re-solve model (3) with the valid inequality (i.e., model (8)). Small values of b provide tighter models. In this section, we seek to further reduce the size of the reduced model (8) by reducing b. We begin by distinguishing three cases. Case 1: \(b < \min _i {a_i} \) It follows from Lemma 2 that \(bin_i =0, \forall i \in I\); then, from Lemma 1 \({\overline{z}}= z^*\). Case 2: \(\min _i{a_i} \le b \le \max _i{a_i}\) We proceed by first fixing all binaries with \(a_i >b\) to 0. Then, we recompute the budget; i.e., we solve Extended with these binaries set to 0. This guarantees the updated budget is no more than the previous budget, and we repeat this process. Case 3: \(\max _i{a_i} < b\) In this case, we cannot reduce the budget further. Further, if \( \sum _{i \in I}a_i < b\) is true, Eq. (7) is redundant. We reflect these three cases in the Budget-Cut Algorithm. The algorithm takes as input an instance of model (3), and a time limit, TIME. A key prerequisite of the algorithm is that model (5) is feasible, else the algorithm's step 3 fails. In Sect. 5, we provide a discussion on handling infeasible instances. The other assumption of the algorithm is that not all the components are optional. In the absence of this assumption, the initial configuration has no cost and thus there is nothing to do. The Budget-Cut Algorithm solves two LPs in Steps 3 and 4, at most |I| LPs in the loop around step 15, and finally a MIP in step 22. The algorithm outputs the optimal solution and objective function value for model (3), or the corresponding optimality gap and the best found feasible solution in the time limit. The Budget-Cut Algorithm provides at least three advantages compared to a naive solution method. First, from Lemma 2 we can directly proceed to Step 23; this happens if the algorithm recognizes that the budget does not allow building any optional elements. Second, if the algorithm does proceed to the while loop, we are guaranteed at least one element has a binary variable that is fixed to 0. This ensures a finite termination of the algorithm in at most |I| iterations. Third, the solutions of the LPs serve as warmstarts for model (8). Figure 1 presents a visualization of the budget update. In the next section, we compare the computational performance of the algorithm with a naive solution method. Visualization of the budget calculation and update during the Budget-Cut Algorithm. Here, Existing and Extended are the restricted and relaxed versions of the original problem. Here N is the number of optional components, while M is an integer less than N. See, Sect. 3 for details Computational results To examine the computational performance of the Budget-Cut Algorithm, we conduct a number of computational experiments on different instances of model (3). The FINE package uses a time series aggregation method to reduce the size of the optimization model; i.e., it aggregates the complete considered time horizon of, e.g., 365 days into so-called "typical days". For details on the time series aggregation methods used within FINE, see [12]. 
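To recap the preceding case analysis in code form, the following sketch condenses the budget-update loop into a few lines of Python. It is only an illustration, not the authors' implementation: solve_existing and solve_extended_without stand for the corresponding LP solves, and a maps each optional component i to its fixed cost \(\text {TAC}^{bin}_i\).

def budget_cut(a, solve_existing, solve_extended_without):
    # Tighten the budget b = z_bar - z_underline by repeatedly pruning components
    # whose fixed building cost alone exceeds the current budget (Case 2).
    z_bar = solve_existing()                     # upper bound: nothing optional is built
    pruned = set()
    while True:
        z_under = solve_extended_without(pruned) # Extended LP with pruned components forced off
        b = z_bar - z_under                      # current budget (Lemma 1 gives b >= 0)
        too_expensive = {i for i, cost in a.items() if cost > b} - pruned
        if not too_expensive:                    # Cases 1 and 3: the budget cannot shrink further
            return b, pruned
        pruned |= too_expensive                  # Case 2: fix these IsBuilt variables to 0

The reduced MIP of model (8) would then be solved with the returned budget b in inequality (7) and the pruned IsBuilt-variables fixed to zero, mirroring the final MIP solve of the Budget-Cut Algorithm.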
We define a model instance with a time horizon of one year using typical days — 7, 14, 28, 56, and 112 — and weather years from 1995 to 2000. The weather year parameter does not affect the size of the optimization model, but determines which input data set of commodity power demands and supplies is used. However, models with a larger number of typical days result in larger optimization models. Structure of the Self-sufficient building scenario of Sect. 4.1 from Figure 1 of [19] All tests in this article are carried out with Pyomo 5.7.1 [11] using Gurobi 9.0.2 [10] on two machines: (i) an Intel Core i7 2.8 GHz processor with 16 GB of memory, and (ii) a node on the Jülich Research on Exascale Cluster Architectures (JURECA) supercomputer, with a cluster's batch partition 2x Intel Xeon E5-2680 v3 (Haswell) and a 2.5GHz processor and 128 GB of memory [14]. We refer to these two machines as Machine I and Machine II, respectively. We solve smaller models of up to 56 typical days on Machine I with the Gurobi threads parameter set to 3, and larger models with 112 typical days on Machine II with the threads parameter set to 32. We use \(TIME=900\) seconds and \(TIME=15,000\) seconds as time limit for our computational experiments on Machine I and Machine II, respectively. We use the self-sufficient building scenario (SelfScen) of Kotzur et al. [19] for our experiments; see, also [16]. The SelfScen optimizes the cost of a single household building to construct and operate on its own, thereby being self-sufficient from an energy perspective. The available technologies include rooftop photovoltaic systems and batteries for short-term electricity storage, as well as reversible fuel cells and liquid organic hydrogen storage systems for long-term energy storage, to meet demand for power. There is also a demand for heat, this is fulfilled by a combination of electric boilers, heat pumps and heat storage. SelfScen includes the following commodities - heat, electricity, hydrogen, liquid organic hydrogen carrier (LOHC), and high temperature heat. Scenario instances include heat and electricity demand for the household, and the maximum power rate that can be generated by the photovoltaic units, for the weather years 1995-2000. To this end, the SelfScen chooses the technologies to install (i.e., bin), the capacities for each component (i.e., cap), and their operation (i.e., op). Optional modeled components are the heat pump, the reversible fuel cells, and conversion technologies that are required for using the hydrogen storage components. See Fig. 2 for an illustration of the SelfScen, and [17] for the dataset associated with the SelfScen. In Online Appendix A.3 we provide additional computational experiments for another scenario that includes multiple buildings. Computational experiments In Table 1, we compare the computational results for the naive solution method and the Budget-Cut Algorithm. Within our time limit, the Budget-Cut Algorithm succeeds in finding the optimal solution in all the instances. The naive solution method, however, fails to find even a feasible solution for all instances with 56 typical days. For the instance with 112 typical days and the year 1995—that we solve on Machine II—the naive solution terminates with a large MIP gap of 45%. The value in the third column of Table 1 is the best known value of the objective function in model (3); the value reported by the algorithm is indeed the optimal in all instances. 
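For reference, the solver setup quoted earlier in this section (Gurobi called through Pyomo with a thread count and a wall-clock limit) might look roughly as follows; 'Threads' and 'TimeLimit' are standard Gurobi parameter names, and the model argument stands for any Pyomo instance of model (3) or (8).

from pyomo.environ import SolverFactory

def solve_with_limits(model, threads=3, time_limit_s=900):
    # Machine I used 3 threads and TIME = 900 s; Machine II used 32 threads and 15,000 s.
    opt = SolverFactory('gurobi')
    opt.options['Threads'] = threads
    opt.options['TimeLimit'] = time_limit_s
    return opt.solve(model, tee=False)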
Next, we note that the naive solution method performs better for smaller instances of up to 7 typical days. However, for the larger instances on Machine I the improvements are significant; for instances with 14, 28, and 56 typical days the average improvement is 61.2%, 88.0%, and 82.2%, respectively. For the largest instances with 112 typical days that we solve on Machine II, the runtimes are an order of magnitude lower except for the year 2000; the average improvement here is 81.4%.

Table 1 Summary of computational experiments for the SelfScen scenario. All instances are solved in zero iterations for Algorithm 1

Table 2 Summary of computational experiments for the ModSelfScen scenario, analogous to that of Table 1

All instances in Table 1 are solved without any iterations of the Budget-Cut Algorithm; i.e., the while loop in the Budget-Cut Algorithm is not entered. Then, the valid inequality in Eq. (7) is trivially true with \({bin}_i=1, \forall i \in I\). In other words, no component is trivially excluded for having too high costs. However, even then the naive solution method is significantly slower, as the algorithm benefits from warmstarts derived from the Extended model as well as additional cuts derived by the optimization solver from the addition of the trivially true valid inequality. To this end, we modify the SelfScen to ensure \(\sum _{i \in I} a_i \cdot bin_i > b\); thus, the algorithm enters the while loop. For details on how we modify the SelfScen, see Online Appendix A.2. We denote this modified scenario as ModSelfScen. Table 2 presents our computational results for the ModSelfScen, analogous to Table 1. The optimal objective function values for the ModSelfScen are larger than those of the SelfScen due to the increased objective function coefficients of the ModSelfScen. All instances in Table 2 are solved to optimality within the time limit; thus we remove the two columns corresponding to "Gap" in Table 1. All instances require two iterations. Trends for the ModSelfScen mirror those of the SelfScen; however, the percentage savings are smaller for the former. The average improvements for instances with 14, 28, 56, and 112 typical days are 25.6%, 53.2%, 61.2% and 68.4%, respectively. In Online Appendix A.3 we provide an additional set of computational results on another class of scenarios. The results again follow the same trends as those we report above.

At least two limitations of our proposed algorithm offer work for future research. First, as we mention in Sect. 3, the algorithm fails when the Existing model is infeasible. However, checking the usability of the Budget-Cut Algorithm for an optimization model requires the solution of a single LP; computationally this is a small effort. Future work could focus on determining feasible start solutions for the Budget-Cut Algorithm, as well as solving the Existing and Extended problems in parallel. Next, given a feasible solution to initiate the algorithm, we can determine a valid upper bound for model (3). That being said, determining feasible solutions for a MIP is, in general, hard [4]. However, for certain classes of problems, such as the one we present, feasibility is often maintained when no investment decisions are made [21]. A second limitation of the Budget-Cut Algorithm occurs when the budget is too large; i.e., \(b > \sum _{i \in I} a_i\). In this case, the algorithm skips directly to the final solution of the MIP without a valid inequality. The only benefit of our proposal in this case is the use of a feasible solution from Extended as a warmstart.
However, even then, the computational benefits we demonstrate in Sect. 4 with the SelfScen instances are significant. Finally, we mention that several practical problems include a known budget—the maximum possible expenditure for building new components. Then, the parameter b of Algorithm Budget-Cut Algorithm serves as an input. To summarize, we present a simple-to-implement algorithm for reducing runtimes of a capacity extension problem. This problem is computationally demanding when exponentially many choices for installing new components exist. Intuitively, the general class of problems we study addresses the following concern: given a portfolio of potential investments with varying purchase and operation costs, choose the ones that minimize long-term horizon expenditures subject to a given budget. Such situations also find application in the general setting when investment decisions are optional and only provide an enhanced portfolio, thereby the original problem serves as a base-case. We relate this problem to a knapsack problem, and propose an algorithm that determines a valid inequality to cut off suboptimal branches of the branch-and-bound search tree. Our algorithm rests on determining upper and lower bounds by solving two extremes of problems — the first where no optional components are built, and the second where all optional components are built, respectively. By iteratively pruning off suboptimal solutions, we increase the lower bounds obtained by the second of these extremes. Documentation and data for the FINE code is available under the MIT License at https://github.com/FZJ-IEK3-VSA/FINE. Data for the SelfScen is available under the MIT License at https://data.mendeley.com/datasets/zhwkrc6k93/1 [16]. Banos, R., Manzano-Agugliaro, F., Montoya, F., Gil, C., Alcayde, A., Gómez, J.: Optimization methods applied to renewable and sustainable energy: A review. Renew. Sustain. Energy Rev. 15(4), 1753–1766 (2011). https://doi.org/10.1016/j.rser.2010.12.008 Billinton, R., Karki, R.: Capacity expansion of small isolated power systems using PV and wind energy. IEEE Trans. Power Syst. 16(4), 892–897 (2001). https://doi.org/10.1109/59.962442 Bundesregierung.de: Das Energiekonzept 2050 (2010). https://www.bundesregierung.de/resource/blob/997532/778196/c6acc2c59597103d1ff9a437acf27bd/infografik-energie-textversion-data.pdf?download=1. Accessed 07 Dec 2020 Chinneck, J.W.: Feasibility and infeasibility in optimization: algorithms and computational methods, vol. 118. Springer Science & Business Media, Berlin (2007) MATH Google Scholar Ember: Daily EU ETS carbon market price (Euros). https://ember-climate.org/data/carbon-price-viewer. Accessed 26 Jan 2021 European Commission: The roadmap for transforming the EU into a competitive, low-carbon economy by 2050. Tech. rep., European Commission (2011). https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2011:0112:FIN:EN:PDF. Accessed 19 Dec 2020 Forschungszentrum, J.: Welcome to FINE's documentation! https://vsa-fine.readthedocs.io/en/master/index.html. Accessed 21 Nov 2020 Gabrielli, P., Gazzani, M., Martelli, E., Mazzotti, M.: Optimal design of multi-energy systems with seasonal storage. Appl. Energy 219, 408–424 (2018). https://doi.org/10.1016/j.apenergy.2017.07.142 Goderbauer, S., Bahl, B., Voll, P., Lübbecke, M.E., Bardow, A., Koster, A.M.: An adaptive discretization MINLP algorithm for optimal synthesis of decentralized energy supply systems. Comput. Chem. Eng. 95, 38–48 (2016). 
https://doi.org/10.1016/j.compchemeng.2016.09.008 Gurobi Optimization LLC: Gurobi optimizer reference manual (2020). http://www.gurobi.com. Accessed 03 Dec 2020 Hart, W.E., Watson, J.P., Woodruff, D.L.: Pyomo: Modeling and solving mathematical programs in python. Math. Program. Comput. 3(3), 219–260 (2011). https://doi.org/10.1007/s12532-011-0026-8 Hoffmann, M., Kotzur, L., Stolten, D., Robinius, M.: A review on time series aggregation methods for energy system models. Energies 13, 641 (2020). https://doi.org/10.3390/en13030641 International Renewable Energy Agency (IRENA): Renewable power generation costs in 2019 (2020). https://www.irena.org/-/media/Files/IRENA/Agency/Publication/2020/Jun/IRENA_Power_Generation_Costs_2019.pdf. Accessed 29 Jan 2021 Jülich Supercomputing Centre: JURECA: Modular supercomputer at Jülich Supercomputing Centre. J. Large-scale Res. Facil. JLSRF (2018). https://doi.org/10.17815/jlsrf-4-121-1 Kannengießer, T., Hoffmann, M., Kotzur, L., Stenzel, P., Schuetz, F., Peters, K., Nykamp, S., Stolten, D., Robinius, M.: Reducing computational load for mixed integer linear programming: an example for a district and an island energy system. Energies (2019). https://doi.org/10.3390/en12142825 Knosala, K., Kotzur, L., Röben, F.T., Stenzel, P., Blum, L., Robinius, M., Stolten, D.: Hybrid hydrogen home storage for decentralized energy autonomy. Int. J. Hydrogen Energy 46(42), 21748–21763 (2021). https://doi.org/10.1016/j.ijhydene.2021.04.036 Knosala, K., Kotzur, L., Röben, F.T., Stenzel, P., Blum, L., Robinius, M., Stolten, D.: Hybrid hydrogen home storage for decentralized energy autonomy. Mendeley Data available at https://data.mendeley.com/datasets/zhwkrc6k93/1 Kotzur, L.: Future grid load of the residential building sector. Ph.D. thesis, RWTH Aachen (2018) Kotzur, L., Markewitz, P., Robinius, M., Stolten, D.: Kostenoptimale Versorgungssysteme für ein vollautarkes Einfamilienhaus. In: 10. Internationale Energiewirtschaftstagung, vol. 10, pp. 1–14 (2017) Kotzur, L., Nolting, L., Hoffmann, M., Groß, T., Smolenko, A., Priesmann, J., Büsing, H., Beer, R., Kullmann, F., Singh, B., Praktiknjo, A., Stolten, D., Robinius, M.: A modeler's guide to handle complexity in energy system optimization (2020). arXiv:2009.07216. Accessed 03 Feb 2021 Lim, S.R., Suh, S., Kim, J.H., Park, H.S.: Urban water infrastructure optimization to reduce environmental impacts and costs. J. Environ. Manag. 91(3), 630–637 (2010). https://doi.org/10.1016/j.jenvman.2009.09.026 Loulou, R., Goldstein, G., Noble, K., et al.: Documentation for the MARKAL family of models. Energy Technology Systems Analysis Programme pp. 65–73 (2004) Loulou, R., Remme, U., Kanudia, A., Lehtila, A., Goldstein, G.: Documentation for the TIMES model part II. Energy Technology Systems Analysis Programme (2005) Lumbreras, S., Ramos, A., Banez-Chicharro, F.: Optimal transmission network expansion planning in real-sized power systems with high renewable penetration. Electric Power Syst. Res. 149, 76–88 (2017). https://doi.org/10.1016/j.epsr.2017.04.020 Luss, H.: Operations research and capacity expansion problems: a survey. Oper. Res. 30(5), 907–947 (1982). https://doi.org/10.1287/opre.30.5.907 MathSciNet Article MATH Google Scholar Mahdavi, M., Sabillon, C., Ajalli, M., Romero, R.: Transmission expansion planning: literature review and classification. IEEE Syst. J. 3, 3129–3140 (2019). https://doi.org/10.1109/JSYST.2018.2871793 Neumann, F., Brown, T.: Heuristics for transmission expansion planning in low-carbon energy system models. 
In: 2019 16th International Conference on the European Energy Market (EEM), pp. 1–8 (2019). https://doi.org/10.1109/EEM.2019.8916411 Samsatli, S., Staffell, I., Samsatli, N.J.: Optimal design and operation of integrated wind-hydrogen-electricity networks for decarbonising the domestic transport sector in Great Britain. Int. J. Hydrogen Energy 41(1), 447–475 (2016). https://doi.org/10.1016/j.ijhydene.2015.10.032 Singh, B., Morton, D.P., Santoso, S.: An adaptive model with joint chance constraints for a hybrid wind-conventional generator system. CMS 15(3–4), 563–582 (2018). https://doi.org/10.1007/s10287-018-0309-x Üster, H., Dilaveroğlu, Ş: Optimization for design and operation of natural gas transmission networks. Appl. Energy 133, 56–69 (2014). https://doi.org/10.1016/j.apenergy.2014.06.042 Welder, L., Ryberg, D., Kotzur, L., Grube, T., Robinius, M., Stolten, D.: Spatio-temporal optimization of a future energy system for power-to-hydrogen applications in germany. Energy 158, 1130–1149 (2018). https://doi.org/10.1016/j.energy.2018.05.059 Yao, L., Yang, B., Cui, H., Zhuang, J., Ye, J., Xue, J.: Challenges and progresses of energy storage technology and its application in power systems. Journal of Modern Power Systems and Clean Energy (2016). https://doi.org/10.1007/s40565-016-0248-x Zhang, Y., Sahinidis, N.V.: Global optimization of mathematical programs with complementarity constraints and application to clean energy deployment. Optim. Lett. 10(2), 325–340 (2016). https://doi.org/10.1007/s11590-015-0880-9 We are grateful to Dane Lacey for the modification of the district scenario that we present in Online Appendix A.3. The authors acknowledge the financial support by the Federal Ministry for Economic Affairs and Energy of Germany in the project METIS (project number 03ET4064). Open Access funding enabled and organized by Projekt DEAL. Department of Data Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91058, Germany Bismark Singh Department of Mathematics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91058, Germany Bismark Singh & Oliver Rehberg Forschungszentrum Jülich GmbH, Institute of Techno-economic Systems Analysis (IEK-3), Wilhelm-Johnen-Str., Jülich, 52428, Germany Theresa Groß, Maximilian Hoffmann, Leander Kotzur & Detlef Stolten Chair for Fuel Cells, c/o Institute of Techno-economic Systems Analysis (IEK-3) , Forschungszentrum Jülich GmbH, RWTH Aachen University, Wilhelm-Johnen-Str., Jülich, 52428, Germany Detlef Stolten Oliver Rehberg Theresa Groß Maximilian Hoffmann Leander Kotzur Correspondence to Bismark Singh. Below is the link to the electronic supplementary material. Supplementary material 1 (pdf 388 KB) Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Singh, B., Rehberg, O., Groß, T. et al. Budget-cut: introduction to a budget based cutting-plane algorithm for capacity expansion models. Optim Lett (2021). https://doi.org/10.1007/s11590-021-01826-w
CommonCrawl
One possibility is that when an individual takes a drug like noopept, they experience greater alertness and mental clarity. So, while the objective ability to see may not actually improve, the ability to process visual stimuli increases, resulting in the perception of improved vision. This allows individuals to process visual cues more quickly, take in scenes more easily, and allows for the increased perception of smaller details.

According to clinical psychiatrist and Harvard Medical School Professor Emily Deans, "there's probably nothing dangerous about the occasional course of nootropics...beyond that, it's possible to build up a tolerance if you use them often enough." Her recommendation is to seek pharmaceutical-grade products, which she says are more accurate regarding dosage and less likely to be contaminated.

At small effects like d=0.07, a nontrivial chance of negative effects, and an unknown level of placebo effects (this was non-blinded, which could account for any residual effects), this strongly implies that LLLT is not doing anything for me worth bothering with. I was pretty skeptical of LLLT in the first place, and if 167 days can't turn up anything noticeable, I don't think I'll be continuing with LLLT usage and will be giving away my LED set. (Should any experimental studies of LLLT for cognitive enhancement in healthy people surface with large quantitative effects - as opposed to a handful of qualitative case studies about brain-damaged people - and I decide to give LLLT another try, I can always just buy another set of LEDs: it's only ~$15, after all.)

In 3, you're considering adding a new supplement, not stopping a supplement you already use. The "I don't try Adderall" case has value $0, the "Adderall fails" case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects), and the "Adderall succeeds" case is worth $X-40-4099, where $X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side effect costs. If you estimate Adderall will work with p=.5, then you should try out Adderall if you estimate that \(0.5 \times (X - 4179) > 0\), which implies \(X > 4179\). (Adderall working or not isn't binary, and so you might be more comfortable breaking down the various "how effective is Adderall" cases when eliciting X, by coming up with different levels it could work at, their values, and then using a weighted sum to get X. This can also give you a better target with your experiment: this needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I've designed it so it has a reasonable chance of showing that.)

Many laboratory tasks have been developed to study working memory, each of which taxes to varying degrees aspects such as the overall capacity of working memory, its persistence over time, and its resistance to interference either from task-irrelevant stimuli or among the items to be retained in working memory (i.e., cross-talk). Tasks also vary in the types of information to be retained in working memory, for example, verbal or spatial information. The question of which of these task differences correspond to differences between distinct working memory systems and which correspond to different ways of using a single underlying system is a matter of debate (e.g., D'Esposito, Postle, & Rypma, 2000; Owen, 2000).
For the present purpose, we ignore this question and simply ask, Do MPH and d-AMP affect performance in the wide array of tasks that have been taken to operationalize working memory? If the literature does not yield a unanimous answer to this question, then what factors might be critical in determining whether stimulant effects are manifest? That left me with 329 days of data. The results are that (correcting for the magnesium citrate self-experiment I was running during the time period which did not turn out too great) days on which I happened to use my LED device for LLLT were much better than regular days. Below is a graph showing the entire MP dataseries with LOESS-smoothed lines showing LLLT vs non-LLLT days: A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes." Eugeroics (armodafinil and modafinil) – are classified as "wakefulness promoting" agents; modafinil increased alertness, particularly in sleep deprived individuals, and was noted to facilitate reasoning and problem solving in non-ADHD youth.[23] In a systematic review of small, preliminary studies where the effects of modafinil were examined, when simple psychometric assessments were considered, modafinil intake appeared to enhance executive function.[27] Modafinil does not produce improvements in mood or motivation in sleep deprived or non-sleep deprived individuals.[28]
arXiv:1401.5266 [cs.LO] - Computer Science > Logic in Computer Science
[Submitted on 21 Jan 2014 (v1), last revised 12 May 2014 (this version, v2)]

Title: Subclasses of Presburger Arithmetic and the Weak EXP Hierarchy
Authors: Christoph Haase

Abstract: It is shown that for any fixed $i>0$, the $\Sigma_{i+1}$-fragment of Presburger arithmetic, i.e., its restriction to $i+1$ quantifier alternations beginning with an existential quantifier, is complete for $\mathsf{\Sigma}^{\mathsf{EXP}}_{i}$, the $i$-th level of the weak EXP hierarchy, an analogue to the polynomial-time hierarchy residing between $\mathsf{NEXP}$ and $\mathsf{EXPSPACE}$. This result completes the computational complexity landscape for Presburger arithmetic, a line of research which dates back to the seminal work by Fischer & Rabin in 1974. Moreover, we apply some of the techniques developed in the proof of the lower bound in order to establish bounds on sets of naturals definable in the $\Sigma_1$-fragment of Presburger arithmetic: given a $\Sigma_1$-formula $\Phi(x)$, it is shown that the set of non-negative solutions is an ultimately periodic set whose period is at most doubly-exponential and that this bound is tight.

Comments: 10 pages, 2 figures
Subjects: Logic in Computer Science (cs.LO)
MSC classes: 03B70
ACM classes: F.4.1
Cite as: arXiv:1401.5266 [cs.LO] (or arXiv:1401.5266v2 [cs.LO] for this version)
Submission history: [v1] Tue, 21 Jan 2014 11:13:27 UTC (78 KB); [v2] Mon, 12 May 2014 14:26:18 UTC (72 KB)
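To illustrate the notion of an "ultimately periodic" set mentioned in the abstract, here is a small self-contained sketch (Python). The example formula is an invented one, not taken from the paper: it checks membership in the Presburger-definable set {x in N : x = 3, or x is even and x >= 6} and then verifies ultimate periodicity by brute force over a finite range.

# Toy illustration of an "ultimately periodic" set of naturals:
# beyond some threshold t, membership repeats with period p.
def in_set(x):
    # Invented Sigma_1-definable example: x = 3, or x is even and >= 6.
    return x == 3 or (x >= 6 and x % 2 == 0)

def is_ultimately_periodic(member, threshold, period, limit=10_000):
    return all(member(n) == member(n + period) for n in range(threshold, limit))

print([n for n in range(20) if in_set(n)])                     # [3, 6, 8, 10, 12, 14, 16, 18]
print(is_ultimately_periodic(in_set, threshold=6, period=2))   # True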
Is there an energy cost associated with flipping the spin of an electron? A common example used to illustrate the limitations of restricted Hartree-Fock (RHF) theory is the H$_2$ dissociation energy ($D_e$) curves. RHF enforces electrons to be paired into spin orbitals, $\chi$, or two spatial orbitals $\phi$ with the same set of spatial coordinates $\mathbf{r}$ but a different spin function (i.e. $\alpha$ or $\beta$ for spin-up $\uparrow$ and spin-down $\downarrow$, respectively). Unrestricted Hartree-Fock theory gives each spatial orbital its own set of coordinates, allowing for $\phi_1(\alpha$) to have a different energy than $\phi_1(\beta)$, giving rise to a proper description of the H$_2$ dissociation energy (see the graph below). I didn't include units in this graph but the energy is in kcal mol$^{-1}$. The red curve gives the appropriate (and approximate) $D_e$. The RHF formalism (blue curve) gives a $D_e$ that is three-times greater than the 'right answer'. A corresponding electron configuration picture is shown below. Here we clearly see RHF forcing the electrons to be paired at infinite distances, giving rise to an H$^-$H$^+$ description of the system which is clearly wrong. (Note that this is a 'state' that included into the wave function by RHF). UHF allows the electrons to reside in their own spatial orbitals giving rise to two doublets at infinite separation. Now here is where things get interesting. In terms of the actual computations implemented to generate the curves in Figure 1, the electrons adopt opposite spins. However, let us imagine a theoretical scenario where two lone hydrogen atoms are floating around in space at infinite distance. The corresponding electrons (which we will assume is not interacting with each other) can adopt any spin they want so both electrons can have a spin-up configuration. Now imagine these two hydrogen atoms approaching each other. For a covalent bond to form, the electrons will be paired into a spin-orbital which means that one electron must flip its spin according to the Pauli-Exclusion principle. (We should note at this point that orbitals are merely mathematical functions whereas spin is an actual physical property). This now leads us to... THE QUESTION: Is there an energy cost associated with flipping the spin of an electron such as in the scenario I've described? The obvious follow up question (if the answer is yes to the first piece), would the H$_2$ dissociation energy curves be different for two H atoms each with spin up electrons and two H atoms with electrons of opposite spin (and how different would they be)? physical-chemistry computational-chemistry electronic-configuration quantum-chemistry LordStryker LordStrykerLordStryker $\begingroup$ Yes. (Cf: the 21 cm line in atomic hydrogen) $\endgroup$ – Uncle Al $\begingroup$ Your RHF and UHF description is not correct. In RHF at infinite distance there are still two linearcombinations of the orbitals (the same as in bonding distance) which are close to degenerate (the orbital with the electrons is lower in energy, since HF cannot describe empty orbitals correct). If one electron is at one H, then the other has a 50:50 probability to be at either H. In UHF the spin configuration stays formally the same (singlet). Since the restriction is lifted, it is no eigenfunction of the spinoperator anymore (spincontamination). $\endgroup$ – Martin - マーチン ♦ $\begingroup$ @UncleAl could you elaborate more, please. 
From my point of view, if two H with same spin electrons are put in close contact, then one electron would occupy the antibonding (excited) orbital. Relaxation then would release energy (negative singlet triplet gap). But I might be wrong - I am curious though. Or do electrons have to be of opposite spin to form a bond? If so, then spinflip would have to occour before - but how would the electron know? $\endgroup$ $\begingroup$ Restricted HF makes the requirement that two electrons of opposite spin have to occupy one spatial orbital. This is causing the orbitals being spread over the whole molecule. Two hydrogen atoms at infinite distance are still a molecule in the HF formalism, but this approximation is not applicable any more. (In theory the amplitude of the wavefunction of the orbitals only vanishes at infinity, causing always some overlap.) $\endgroup$ $\begingroup$ This is true, I have to admit I was not thinking of exchange correlation for the Hydrogen molecule. HF Exchange correlation is also often referred to as the exact exchange. As for the wording 'pairing' - this might just be a philosophical question. Anyways, this whole rambelage does actually not help finding an answer. Sorry for that. $\endgroup$ I will try to describe what happens when two hydrogen atoms approach each other from infinity. At infinite separation the hydrogen atoms don't feel their mutual presence and each atom has one electron localized in its atomic 1s orbital. In the absence of magnetic fields it will not matter whether the spins of the electrons are parallel or antiparallel and you will basically have an ensemble of systems with an equal amount of triplet (parallel spins, $S=1$) and singlet (antiparallel spins, $S=0$) states, since $E_{\uparrow \uparrow} = E_{\uparrow \downarrow}$. But when hydrogen atoms are brought closer together the situation changes: For one thing each atom has a magnetic moment associated with its electronic spin and there will be a dipolar interaction between those magnetic moments. However this dipolar interaction will be very weak (in the order of $10^{-4} \, \mathrm{eV}$ at close interatomic distance) and won't have much influence. Another factor that comes into the game is spin-orbit coupling. Yet again, since $\ce{H}$ is such a light element spin-orbit coupling will be very, very weak and so it can be ignored too. But the predominant factor to be taken into acount is the exchange energy (its effect is in the order of $0.1$ to $1 \, \mathrm{eV}$). Here you can find a very nice treatment of the exchange energy for the hydrogen molecule which comes to the conclusion that For the hydrogen molecule the exchange energy is negative [...] so the state when the resultant spin is $S=0$ has the lower energy than the state when $S=1$. This source also provides the potential energy curves for the hydrogen singlet and triplet states. These curves amount to the following electron configuration diagrams: ${}$ So, the situation for the approaching hydrogen atoms is like this: If both atoms start out with electrons of opposite spin (i.e. the system is in a singlet state) you have the common case that the hydrogen atoms will bond together thus lowering the total energy of the system. But if both atoms start out with parallel spins (i.e. the system is in a triplet state) the total energy will rise when they get closer to one another and the atoms will not bond together but keep their distance. 
This can be understood from the electron configuration diagram of the triplet hydrogen molecule: The antibonding $\sigma^{*}$ MO is more destabilized than the bonding $\sigma$ orbital is stabilized by the interaction (this is generally true for 2-orbital interactions) and the energy splitting becomes larger the closer the hydrogen atoms get. But because of the Pauli principle the two electrons of like spin in the system can't both occupy the $\sigma$ MO but one is forced to occupy the $\sigma^{*}$ MO and thus the system's total energy rises when the hydrogen atoms approach each other. And because there is virtually no spin-orbit coupling for hydrogen the total angular momentum (represented by $L$) and the total spin (represented by $S$) of the electrons are essentially conserved quantities and there is no mechanism to facilitate intersystem crossing, i.e. the process of changing from the triplet to the singlet state has such a low probability that it virtually doesn't happen. The situation changes of course if you add an external bias to the system, e.g. irradiation with high-energetic light. Then a triplet-singlet transition might be possible but as I read it that was not your question. So, the way MO-diagrams are often pictured with electrons of like spin pairing up in a bonding MO does not reflect the "real" situation of two atoms approaching each other from an infinite distance. But it is not meant that way although it is often pictured as such. So, in summary I established that if you have two hydrogen atoms in free space without spin-orbit coupling and relativistic effects they are stuck with the spin state they initially start out with and can't change it during their time evolution, i.e. they have to stay either on the triplet or the singlet Born-Oppenheimer surface and no switching between those is allowed. But I want to delve a little deeper into the reasons for it. If the hydrogen atoms wanted to change from triplet to singlet state, the atoms would have to transfer $1 \hbar$ of spin angular momentum somewhere else in order to comply with the law of conservation of momentum. In the absence of any particles to collide with the only possible mechanism by which this can be achieved is to emit a photon whose energy is equal to the energy difference between the triplet and the singlet state, i.e. $\Delta E = \hbar \omega = E_{\uparrow \uparrow} - E_{\uparrow \downarrow}$, and which would "carry away" the $1 \hbar$ of spin. But in order to emit a photon there must be a coupling between the electrons and the electromagnetic field. The magnitude of this coupling or in other words the probability for a transition from an initial to a final state under the emission of a photon is described by the transition dipole moment \begin{equation} \Theta_{\mathrm{i} \to \mathrm{f}} = \big\langle \Psi_{\mathrm{i}} \big| \sum_{n} q_{n} \mathbf{\hat{r}}_{n} \, \big| \Psi_{\mathrm{f}} \big\rangle \end{equation} where $q_{n}$ and $\mathbf{\hat{r}}_{n}$ are the charge and the position operator of the $n^{\mathrm{th}}$ particle in the system respectively, and the subscripts $\mathrm{i}$ and $\mathrm{f}$ signify the initial and final states of the transition. Under the conditions of the Born-Oppenheimer approximation the contribution of the nuclei to $\Theta_{\mathrm{i} \to \mathrm{f}}$ will be zero for the situation at hand, so that we only need to be concerned with the electrons in the system. 
If there is a sizable spin-orbit coupling then the spin degrees of freedom and the spatial degrees of freedom are coupled via the total angular momentum operator $\mathbf{\hat{J}} = \mathbf{\hat{L}} + \mathbf{\hat{S}}$ because \begin{align} \mathbf{\hat{J}}{}^{2} = \mathbf{\hat{L}}{}^{2} + \mathbf{\hat{S}}{}^{2} + \mathbf{\hat{L}} \cdot \mathbf{\hat{S}} \end{align} and $\mathbf{\hat{L}} = \mathbf{\hat{r}} \times \mathbf{\hat{p}}$ where $\mathbf{\hat{p}}$ is the momentum operator. However, if there is no spin-orbit coupling then $\mathbf{\hat{L}} \cdot \mathbf{\hat{S}} = 0$ and $\mathbf{\hat{J}}{}^{2} = \mathbf{\hat{L}}{}^{2} + \mathbf{\hat{S}}{}^{2}$, i.e. the spin degrees of freedom and the spatial degrees of freedom are decoupled. That means you can make a product ansatz for the wave function where you seperate the spin-dependent part $| \psi(S) \rangle$ and the spatial part $| \psi(\mathbf{r}) \rangle$ such that $| \Psi \rangle = | \psi(\mathbf{r}) \rangle | \psi(S) \rangle$. So, for the transition dipole moment you get \begin{equation} \Theta_{\mathrm{i} \to \mathrm{f}} = \big\langle \psi_{\mathrm{i}} (\mathbf{r}) \big| \sum_{n} q_{n} \mathbf{\hat{r}}_{n} \, \big| \psi_{\mathrm{f}}(\mathbf{r}) \big\rangle \underbrace{\big\langle \psi_{\mathrm{i}} (S) \big| \psi_{\mathrm{f}}(S) \big\rangle}_{= \, 0} = 0 \end{equation} where $\langle \psi_{\mathrm{i}} (S) | \psi_{\mathrm{f}}(S) \rangle = 0$ because the initial state would be the triplet state ($S =1$) and the final state would be the singlet state ($S=0$) and spin wave functions for different $S$ are orthogonal. So, no spin-orbit coupling means that the probability for a transition from a triplet to a singlet state under the emission of a photon is zero. In real atoms and molecules there is of course some spin-orbit coupling, so $\Theta_{\mathrm{i} \to \mathrm{f}}$ will be larger than zero. But the spin-orbit coupling in hydrogen is tiny and so $\Theta_{\mathrm{i} \to \mathrm{f}}$ will be tiny too and thus the probability of a triplet-singlet transition will be tiny as well and can be neglected for all practical purposes. Tyberius PhilippPhilipp $\begingroup$ Wow, I really have to think about this, thank you. Am I correct in assuming, that as long as there are other particles in the system, that can "carry away" the spin, transition from triplet to singlet is no problem? $\endgroup$ $\begingroup$ Yup, collisions can provide a mechanism for spin-flips too. The efficiency of the process depends on the particle, of course. $\endgroup$ – Philipp $\begingroup$ This by far is the most comprehensive and insightful response I have ever received to this question. It is also the most satisfying. Though it leads to more questions (ones I cannot verbalize at the moment), the realization of knowledge generally does so and I applaud your efforts to provide such a rigorous answer. $\endgroup$ – LordStryker $\begingroup$ @LordStryker Thanks for the kind words. Glad to hear it was helpful. It is a very interesting question and so I needed to provide a comprehensive and insightful answer to match it :) If the new questions ever reach a verbalizable state, just let me know. Helping is good for my karma :) $\endgroup$ $\begingroup$ Nice answer. Just a remark: "In the absence of magnetic fields it will not matter whether the spins of the electrons are parallel or antiparallel and you will basically have an ensemble of systems with an equal amount of triplet ..." 
You actually do not have an ensemble (since you only got 2 Atoms), but rather in the general case an undetermined superposition of both states unless you interact with a magnetic field (=measure). The probability of what you get then depends on how the state was prepared. $\endgroup$ – Raphael J.F. Berger
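As a small numerical companion to the orthogonality argument in the answer above, the following sketch (Python/numpy) writes the two-electron spin states in the {|↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩} basis and confirms that the singlet is orthogonal to all three triplet components. The construction is standard textbook material rather than anything specific to this thread.

# Two-electron spin states in the basis (|uu>, |ud>, |du>, |dd>).
import numpy as np

s = 1 / np.sqrt(2)
singlet = np.array([0.0,  s, -s, 0.0])        # S = 0
triplet = {
    "T+1": np.array([1.0, 0.0, 0.0, 0.0]),    # S = 1, m_s = +1
    "T0":  np.array([0.0,   s,   s, 0.0]),    # S = 1, m_s = 0
    "T-1": np.array([0.0, 0.0, 0.0, 1.0]),    # S = 1, m_s = -1
}

for name, t in triplet.items():
    print(name, "<T|S> =", np.dot(t, singlet))   # all overlaps are 0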
Toffoli gate as FANOUT I was searching for examples of quantum circuits to exercise with Q# programming and I stumbled on this circuit: From: Examples of Quantum Circuit Diagrams - Michal Charemza During my introductory courses in quantum computation, we were taught that the cloning of a state is forbidden by the laws of QM, while in this case the first contol qubit is copied on the third, target, qubit. I quickly tried to simulate the circuit on Quirk, something like this, that sort of confirms the cloning of the state in output on the first qubit. Measuring the qubit before the Toffoli gate shows that is in fact no real cloning, but instead a change on the first control qubit, and an equal output on the first and third qubit. By making simple math, it can be shown that the "cloning" happens only if the third qubit is in initial state 0, and that only if on the first qubit is not performed a "spinning operation" (as indicated on Quirk) on Y or X. I tried writing a program in Q# that only confirmed which is aforesaid. I struggle in understanding how the first qubit is changed by this operation, and how something similar to a cloning is possible. quantum-gate quirk cloning Sanchayan Dutta D-BrcD-Brc $\begingroup$ It's an excellent question, and thank you for taking effort to format it so nicely. $\endgroup$ $\begingroup$ Related: quantumcomputing.stackexchange.com/questions/9113/… $\endgroup$ – Martin Vesely To simplify the question consider CNOT gate instead of Toffoli gate; CNOT is also fanout because \begin{align} |0\rangle|0\rangle \rightarrow |0\rangle|0\rangle\\ |1\rangle|0\rangle \rightarrow |1\rangle|1\rangle \end{align} and it looks like cloning for any basis state $x\in\{0,1\}$ \begin{align} |x\rangle|0\rangle \rightarrow |x\rangle|x\rangle \end{align} but if you take a superposition $|\psi\rangle=\alpha|0\rangle + \beta|1\rangle$ then \begin{align} (\alpha|0\rangle+\beta|1\rangle)|0\rangle \rightarrow \alpha|0\rangle|0\rangle+ \beta|1\rangle|1\rangle \end{align} so generally \begin{align} |\psi\rangle|0\rangle\not\rightarrow|\psi\rangle|\psi\rangle \end{align} and fanout is not cloning. As for the question of how the first qubit is changed - it is now entangled with the second qubit. kludgkludg $\begingroup$ in other words, because the no-cloning theorem says that there cannot be any unitary able to clone nonorthogonal states, while orthogonal states can be cloned without problems $\endgroup$ – glS ♦ The no cloning theorem says that there is no circuit which creates independent copies of all quantum states. Mathematically, no cloning states that: $$\forall C: \exists a,b: C \cdot \Big( (a|0\rangle + b|1\rangle)\otimes|0\rangle \Big) \neq (a|0\rangle + b|1\rangle) \otimes (a|0\rangle + b|1\rangle)$$ Fanout circuits don't violate this theorem. They don't make indepedent copies. They make entangled copies. Mathematically, they do: $$\text{FANOUT} \cdot \Big( (a|0\rangle + b|1\rangle) \otimes |0\rangle \Big) = a|00\rangle + b|11\rangle$$ So everything is fine because $a|00\rangle + b|11\rangle$ is not the same thing as $(a|0\rangle + b|1\rangle) \otimes (a|0\rangle + b|1\rangle)$. Craig GidneyCraig Gidney The answer is that the no-cloning theorem states that you cannot clone an arbitrary unknown state. This circuit does not violate the no-cloning theorem, because let's look at what it does when the input is $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. The output at the third register still has to be a $|0\rangle$ or a $|1\rangle$. 
Therefore it's impossible for this circuit to clone an arbitrary state $|\psi\rangle$, and one example of a state that it cannot clone is: $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. $\begingroup$ @NeildeBeaudrap: The original question has $|x\rangle$, so I'm saying that it only works when $|x\rangle$ is 0 or 1 but not when it's in a superposition. You changed it to $|\psi\rangle$, is it necessary to have a different symbol? $\endgroup$
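The distinction drawn in the answers above - fanout produces an entangled copy, not an independent clone - can be checked directly with a few lines of numpy. This is a generic sketch, not tied to Quirk or Q#: it applies the CNOT matrix to (a|0⟩ + b|1⟩) ⊗ |0⟩ and compares the result with the would-be product state.

import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # control = first qubit, target = second

a, b = 0.6, 0.8                  # arbitrary normalised amplitudes (a^2 + b^2 = 1)
psi = np.array([a, b])           # a|0> + b|1>
zero = np.array([1.0, 0.0])

fanout = CNOT @ np.kron(psi, zero)   # -> a|00> + b|11> (entangled "copy")
clone = np.kron(psi, psi)            # -> would-be independent copy

print(fanout)                        # [0.6, 0. , 0. , 0.8]
print(clone)                         # [0.36, 0.48, 0.48, 0.64]
print(np.allclose(fanout, clone))    # False unless a or b is 0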
\documentclass{article}
\title{Performance of a New Two-parameter Estimator for the Multinomial Logit Model}
\author{Muhammad Amir Saeed}
\date{September 2021}
\begin{document}
\maketitle

\textbf{Abstract} When the predictors in a multinomial logit model (MLM) are nearly linearly related, maximum likelihood estimation can produce erroneous, non-significant results; ridge regression has therefore been considered as a remedy. The Two-parameter Estimator (TPE), however, produced better results when evaluated extensively under the Mean Squared Error (MSE) criterion in Monte Carlo simulations. The study reports that, in the case of low to moderate multicollinearity among the predictors, the estimated MSE of the $\operatorname{NTPE}\left(\hat{k}_{opt}, \hat{d}_{opt}\right)$ is smaller than that of the MLE, and that as the number of predictors increases the respective MSE decreases. In the case of high multicollinearity, the $\operatorname{TPE}\left(\hat{k}_{\min}, \hat{d}_{\min}\right)$ showed consistency, with a smaller MSE than the other estimators.

\textbf{Keywords} Multinomial logit model, multicollinearity, Mean Square Error, Maximum Likelihood Estimator, New Two-parameter Estimator.

\section{Introduction}
The multiple linear regression model is given as follows:
$$ y=X \beta+\varepsilon, $$
where $\beta$ is a $p \times 1$ vector of unknown regression coefficients, $y$ is an $n \times 1$ vector of the dependent variable, $X$ is an $n \times p$ matrix of independent variables and $\varepsilon$ is an $n \times 1$ vector of random errors, normally distributed with zero mean vector and covariance matrix $\sigma^{2} I_{n}$, where $I_{n}$ is the identity matrix of order $n$. The regression coefficients $\beta$ are usually estimated by ordinary least squares (OLS) as $\hat{\beta}=\left(X^{\prime} X\right)^{-1} X^{\prime} y$, with covariance matrix $\operatorname{Cov}(\hat{\beta})=\sigma^{2}\left(X^{\prime} X\right)^{-1}$. Both $\hat{\beta}$ and $\operatorname{Cov}(\hat{\beta})$ depend on the properties of the $X^{\prime} X$ matrix. When $X^{\prime} X$ is ill-conditioned, the OLS estimator becomes sensitive to small perturbations in the data: valid statistical inference is difficult because the regression parameters may become statistically insignificant or take the wrong signs. An ill-conditioned $X^{\prime} X$ matrix indicates that correlation exists among the explanatory variables; this problem is known as multicollinearity. Multicollinearity can be addressed by various methods available in the literature, e.g. re-parameterizing the model, collecting additional data, the principal component method and the ridge regression method. Ridge regression, proposed by Hoerl and Kennard (1970), is one of the most widely used methods in the presence of multicollinearity. They suggested adding a small value, known as the ridge parameter $k$, to the diagonal elements of the $X^{\prime} X$ matrix, giving the estimator
$$ \hat{\beta}_{R}=\left(X^{\prime} X+k I_{p}\right)^{-1} X^{\prime} y, \quad k \geq 0, $$
which is recognized as the ridge regression estimator and has smaller MSE than the OLS estimator. The constant $k \geq 0$ is known as the biasing (or shrinkage) parameter, chosen from the observed data. A further biased estimator, the Liu estimator, was proposed by Liu (1993); its shrinkage parameter is denoted by $d$, where $0<d<1$.
The Liu estimator $\hat{\beta}_{L}$ combines the advantages of both the Stein estimator $\hat{\beta}_{S}$ and the ridge estimator $\hat{\beta}_{R}$, and was proposed by building on the idea of the ridge estimator. Simulation studies show that $\hat{\beta}_{L}$ is effective, and since $\hat{\beta}_{L}$ is a linear function of $d$ it is comparatively easy to choose the value of $d$. The Liu (1993) estimator can be written as
$$ \hat{\beta}_{L}=\left(X^{\prime} X+I\right)^{-1}\left(X^{\prime} X+d I\right) \hat{\beta}_{OLS}. $$
The Liu estimator and the ridge estimator are the most commonly applied methods and give effective results in the presence of multicollinearity. This work was extended by Schaeffer et al. (1984) to the logit model, and Mansson and Shukur (2011) proposed new methods for estimating the ridge parameter $k$ for the logit regression model. These methods also have some disadvantages, because the estimated parameters are non-linear functions of the ridge parameter $k$, which lies between zero and infinity. Building on the work of Liu (1993), many researchers have proposed new methods for estimating the shrinkage parameter $d$. Later, Yang and Chang (2010) proposed an estimator that performed better than the other available estimators: the New Two-parameter Estimator for the linear regression model. They combined the three estimators (OLS, ridge and Liu) into the New Two-parameter Estimator, which can be written as
$$ \hat{\beta}_{NTPE}=\left(X^{\prime} X+I_{p}\right)^{-1}\left(X^{\prime} X+d I_{p}\right)\left(X^{\prime} X+k I_{p}\right)^{-1} X^{\prime} y. $$

\subsection{Special Cases of the New Two-parameter Estimator (NTPE)}
From the definition of $\hat{\beta}(k, d)$ we can see that the NTPE is a general estimator which includes the OLS, ridge and Liu estimators as special cases:
\begin{itemize}
\item $\hat{\beta}(0,1)=\hat{\beta}=\left(X^{\prime} X\right)^{-1} X^{\prime} y$, the OLS estimator;
\item $\hat{\beta}(k, 1)=\hat{\beta}(k)=\left(X^{\prime} X+k I\right)^{-1} X^{\prime} y$, the ridge regression estimator;
\item $\hat{\beta}(0, d)=\hat{\beta}(d)=\left(X^{\prime} X+I\right)^{-1}\left(X^{\prime} X+d I\right) \hat{\beta}$, the Liu estimator.
\end{itemize}
The objective of this article is to propose the New Two-parameter Estimator and compare its performance with the MLE under the multinomial logit model. The organization of the article is as follows: the statistical methodology is presented in Sec. 2, a Monte Carlo simulation study is conducted in Sec. 3, and some concluding remarks are provided in Sec. 4.

\section{Methodology}
\subsection{The multinomial logit model}
The multinomial logit (MNL) model, developed by Luce (1959) [6], is one of the most popular statistical methods when the dependent variable consists of $m$ different categories with $m>2$. The MNL specifies
$$ \pi_{j}=\frac{\exp \left(x_{l} B_{j}\right)}{\sum_{j=1}^{m} \exp \left(x_{l} B_{j}\right)}, \quad j=1, \ldots, m, $$
where $B_{j}$ is a $(p+1) \times 1$ vector of coefficients and $x_{l}$ is the $l$th row of $X$, which is an $n \times (p+1)$ data matrix with $p$ predictors.
The most common estimation method, MLE, is used for the estimation of $B_{j}$, where the following log-likelihood is maximized:
$$ l=\sum_{l=1}^{N} \sum_{j=1}^{m} y_{l j} \log \left(\pi_{l j}\right). $$
Setting the first derivative of this log-likelihood equal to zero, the MLE can be found by solving the following equation:
$$ \frac{\partial l}{\partial B_{j}}=\sum_{l=1}^{N}\left(y_{l j}-\pi_{l j}\right) x_{l}=0. $$
This equation is nonlinear in $\beta$, so the iterative weighted least squares (IWLS) technique is used to solve it:
$$ \beta_{j}^{(ML)}=\left(X^{\prime} W_{j} X\right)^{-1} X^{\prime} W_{j} z_{j}, $$
where $W_{j}=\pi_{l j}\left(1-\pi_{l j}\right)$ and $z_{j}$ is a vector whose $l$th element equals $z_{l}=\log \left(\pi_{l j}\right)+\frac{\left(y_{l j}-\pi_{l j}\right)}{\pi_{l j}\left(1-\pi_{l j}\right)}$. The asymptotic covariance matrix of the ML estimator equals the inverse of the matrix of second derivatives (the inverse of the Hessian matrix):
$$ \operatorname{Cov}\left(\beta_{j}^{(ML)}\right)=\left[-E\left(\frac{\partial^{2} l}{\partial \beta_{j} \partial \beta_{k}^{\prime}}\right)\right]^{-1}=\left(X^{\prime} W_{j} X\right)^{-1}, $$
and the asymptotic MSE equals
$$ E\left(L_{j}^{2(ML)}\right)=E\left(\beta_{j}^{(ML)}-\beta\right)^{\prime}\left(\beta_{j}^{(ML)}-\beta\right)=\operatorname{tr}\left[\left(X^{\prime} W_{j} X\right)^{-1}\right]=\sum_{i=1}^{p} \frac{1}{\lambda_{j i}}, $$
where $\lambda_{j i}$ is the $i$th eigenvalue of the $X^{\prime} W_{j} X$ matrix. In the presence of multicollinearity the weighted matrix of cross-products, $X^{\prime} W_{j} X$, is ill-conditioned, which leads to instability and high variance of the ML estimator. In that situation it is very hard to interpret the estimated parameters, since the vector of estimated coefficients is on average too long.

\subsection{The Multinomial Ridge Regression Estimator}
The Multinomial Logit Ridge Regression (MLNRR) estimator was first proposed by Mansson et al. (2018) to address the inflated variance of the maximum likelihood estimator. They obtained $\beta_{ML}$ using the iterative weighted least squares (IWLS) technique, which minimizes the weighted sum of squared errors (WSSE), so that $\beta_{ML}$ can be seen as the optimal estimator in a WSSE sense. Following Schaeffer (1986), the proposed estimator was
$$ \hat{\beta}_{RR}=\left(X^{\prime} W_{j} X+k I\right)^{-1} X^{\prime} W_{j} X \beta_{ML}, $$
where $W_{j}=\pi_{j}\left(1-\pi_{j}\right)$ and $z$ is a vector whose $l$th element equals $z_{l}=\log \left(\pi_{j}\right)+\frac{\left(y_{j}-\pi_{j}\right)}{\pi_{j}\left(1-\pi_{j}\right)}$. This estimator limits the increase of the WSSE. The asymptotic mean square error of this estimator, given by Mansson and Shukur (2011), is as follows.
$$ \operatorname{EMSE}\left(\hat{\beta}_{RR}\right)=E\left(\hat{\beta}_{RR}-\beta\right)^{\prime}\left(\hat{\beta}_{RR}-\beta\right)=\sum_{i=1}^{p} \frac{\lambda_{i j}}{\left(\lambda_{i j}+k\right)^{2}}+k^{2} \sum_{i=1}^{p} \frac{\alpha_{i j}^{2}}{\left(\lambda_{i j}+k\right)^{2}}=\gamma_{1}(k)+\gamma_{2}(k), $$
where $\alpha_{i j}^{2}$ is the square of the $i$th element of $\gamma_{j} \beta_{RR}$ and $\gamma_{j}$ is the matrix of eigenvectors such that $X^{\prime} W X=\gamma_{j}^{\prime} \Lambda_{j} \gamma_{j}$, where $\Lambda_{j}$ equals $\operatorname{diag}\left(\lambda_{i j}\right)$. Hoerl and Kennard (1970) [1] also showed that there exists a $k$ such that $E\left(L_{RR}^{2}\right)<E\left(L_{ML}^{2}\right)$. To see this, we use the fact that $\gamma_{1}(k)$ and $\gamma_{2}(k)$ are monotonically decreasing and increasing functions of $k$, respectively, and take the first derivative of the expression given above:
$$ \frac{\partial \operatorname{EMSE}\left(\hat{\beta}_{RR}\right)}{\partial k}=\frac{\partial \gamma_{1}(k)}{\partial k}+\frac{\partial \gamma_{2}(k)}{\partial k}=-2 \sum_{i=1}^{p} \frac{\lambda_{j i}}{\left(\lambda_{j i}+k\right)^{3}}+2 k \sum_{i=1}^{p} \frac{\lambda_{j i} \alpha_{j i}^{2}}{\left(\lambda_{j i}+k\right)^{3}}. $$
From this expression the first term is always negative and the second is non-negative. Since $\frac{\partial E\left(L_{ML}^{2}\right)}{\partial k}=0$, it is sufficient to find $k>0$ for which $\frac{\partial E\left(L_{RR}^{2}\right)}{\partial k}<0$, which guarantees $E\left(L_{RR}^{2}\right)<E\left(L_{ML}^{2}\right)$. The value of the ridge parameter $k$ must then satisfy
$$ k<\frac{1}{\alpha_{j i}^{2}}. $$
Moreover, by equating the derivative above to zero, the optimal value of the ridge parameter $k$ is
$$ k=\frac{1}{\alpha_{j i}^{2}}, $$
which is the optimal value of the ridge parameter $k$.

\subsection{The Proposed NTPE for the multinomial logit model}
As a remedy to the problem of inflated variance of the ML estimator, we propose a New Two-Parameter Estimator for the multinomial logit model. Since $\hat{\beta}_{ML}$ is found using the weighted least squares (WLS) algorithm it approximately minimizes the weighted sum of squared errors (WSSE), so $\hat{\beta}_{ML}$ can be seen as the optimal estimator in a WSSE sense. Accordingly, we define the following estimator:
$$ \hat{\beta}_{NTP}=\left(X^{\prime} W X+I_{p}\right)^{-1}\left(X^{\prime} W y+(d-k) \hat{\beta}_{MLR}\right) $$
$$ \hat{\beta}_{NTP}=\left(X^{\prime} W X+I_{p}\right)^{-1}\left(X^{\prime} W X+d I_{p}\right) \hat{\beta}_{MLR} $$
$$ \hat{\beta}_{NTP}=\left(X^{\prime} W X+I_{p}\right)^{-1}\left(X^{\prime} W X+d I_{p}\right)\left(X^{\prime} W X+k I_{p}\right)^{-1} X^{\prime} W y, $$
where $W=\pi_{j}\left(1-\pi_{j}\right)$ and $z$ is a vector whose $l$th element equals $z_{l}=\log \left(\pi_{j}\right)+\frac{\left(y_{j}-\pi_{j}\right)}{\pi_{j}\left(1-\pi_{j}\right)}$. This is the proposed New Two-parameter Estimator.

\subsubsection{Selection of the optimized biasing parameters}
The two parameters can be chosen by finding the optimal values of $k$ and $d$.
An operational estimator for $k$ is obtained as follows: when $d=1$, $\hat{\beta}(k, 1)=\hat{\beta}(k)$, and the optimal value $\hat{k}$ can be written as
$$ \hat{k}_{opt}=\frac{\hat{\sigma}^{2}\left(\lambda_{i}+d\right)-(1-d) \lambda_{i} \alpha_{i}^{2}}{\left(\lambda_{i}+1\right) \alpha_{i}^{2}}, $$
which corresponds to the estimated value of $k$ given by Hoerl and Kennard (1970). If $k=0$, $\hat{\beta}(k, d)$ becomes the Liu estimator, and the optimal value of $\hat{d}$ can be written as
$$ \hat{d}_{opt}=\frac{\sum_{i=1}^{p}\left(\hat{\alpha}_{i}^{2}-\hat{\sigma}^{2}\right) /\left(\lambda_{i}+1\right)^{2}}{\sum_{i=1}^{p}\left(\hat{\sigma}^{2}+\lambda_{i} \hat{\alpha}_{i}^{2}\right) /\left(\lambda_{i}+1\right)^{2} \lambda_{i}}. $$

\subsubsection{Selection of the biasing parameter $k$ from the literature}
The biasing parameter $k$ of the NTPE is selected following the article ``Performance of some ridge regression estimators for the multinomial logit model''. In that article the authors proposed 16 estimators of $k$ and concluded that $k_{13}$ performed best among all of them:
$$ \hat{k}_{13}=\prod_{i=1}^{p}\left(\frac{1}{q_{j i}}\right)^{\frac{1}{p}}, \quad \text{where } q_{j i}=\frac{\lambda_{j \max}}{n-p+\lambda_{j \max} \hat{\alpha}_{j i}^{2}}. $$
The other biasing parameter, $d$, of the NTPE is selected following the article ``On Liu estimators for the logit regression model''. In that article the authors proposed 5 biasing parameters for the Liu estimator in the logit regression model and concluded that, among these 5 estimators, $D5$ performed best:
$$ D5=\max \left[0, \min \left(\frac{\hat{\alpha}_{j}^{2}-\hat{\varphi}}{\frac{1}{\hat{\lambda}_{j}}+\hat{\alpha}_{j}^{2}}\right)\right]. $$

\subsubsection{Selection of the biasing parameters by self-selection}
We have studied the literature and observed that the criteria for selecting the biasing parameters $k$ and $d$ differ, although both biasing parameters have the same range from 0 to 1. After extensive calculation we found that the biasing parameter $k$ should be near 0, whereas the other biasing parameter $d$ should be near 1. In this study we have therefore used the values $k=0.02$ and $d=0.80$, which gave better results.

\subsubsection{Judging the performance of the estimators}
To examine whether the NTPE is better than the MLE we compute the MSE using the following equation:
$$ MSE=\frac{\sum_{i=1}^{R} \sum_{j=2}^{m}\left(\hat{\beta}_{j}-\beta\right)^{\prime}\left(\hat{\beta}_{j}-\beta\right)}{R}. $$

\subsection{Performance of the proposed estimator under differently distributed error terms (normal, $t$ and exponential)}
In most studies the error term is normally distributed with mean 0 and constant variance. In this study, however, we use error terms with different distributions in order to see the trend and changes in the empirical results:
\begin{itemize}
\item standard normal distribution, $\epsilon \sim N(0,1)$;
\item $t$-distribution, $\epsilon \sim t(n, n-1)$;
\item exponential distribution, $\epsilon \sim \exp(n, \lambda)$.
\end{itemize}

\subsection{Performance Criterion for the Estimator}
The estimated MSE and the standard deviation of the Liu parameters are considered as performance criteria to inspect the performance of the proposed and existing Liu parameters in the presence of multicollinearity.
The estimated MSE is defined as
$$ MSE=\frac{\sum_{i=1}^{R}\left(\hat{\beta}_{i}-\beta\right)^{\prime}\left(\hat{\beta}_{i}-\beta\right)}{R}, $$
where $R$ is the total number of replications, set to 3000, and $\hat{\beta}_{i}$ is the estimate of $\beta$ in the $i$th replication obtained from the Liu estimator or the ML method. The standard deviation of $d$ is also calculated to check which estimation methods of the Liu parameter give a stable MSE.

\section{The Monte Carlo Simulation}
\subsection{The design of the experiment}
The response variable of the multinomial logit model is generated using pseudo-random numbers drawn from the multinomial regression model, where
$$ \pi_{j}=\frac{\exp \left(x_{l} \beta_{j}\right)}{\sum_{j=1}^{m} \exp \left(x_{l} \beta_{j}\right)}, \quad j=1, \ldots, m. $$
The values of the parameters in the above equation are chosen so that $\beta^{\prime} \beta=1$, and the first category is taken as the base category. The first factor we vary in the experimental design is the relationship among the explanatory variables: six levels of correlation $\rho$, corresponding to 0.75, 0.80, 0.85, 0.90, 0.95 and 0.99, are considered. These levels of correlation are used to generate data with different degrees of correlation using the formula
$$ x_{i j}=\left(1-\theta^{2}\right)^{1 / 2} Z_{i j}+\theta Z_{i, p+1}, \quad i=1, \ldots, n, \quad j=1, \ldots, p, $$
where the $Z_{i j}$ are standard normally distributed pseudo-random numbers. Two further factors, the sample size and the number of independent variables, affect the mean square error and the performance of the estimators; this idea is taken from previous studies, of which Muniz and Kibria (2009) and Mansson and Shukur (2011) are notable. To obtain valid results from the multinomial regression, Mansson and Shukur (2011) increase the sample size with the number of independent variables. If the sample size is adjusted and a small correction is made in the degrees of freedom, then the results will be more meaningful (Muniz, Kibria and Shukur, 2012).

\begin{table}[h]
\begin{tabular}{lcccccc}
\hline
\multicolumn{7}{l}{Table 3.1: Combinations of the sample sizes} \\ \hline
Number of explanatory variables & \multicolumn{6}{c}{Sample sizes} \\ \hline
 & 50 & 75 & 100 & 150 & 200 & 250 \\ \hline
4 & * & * & * & * & & \\ \hline
6 & & * & * & * & * & \\ \hline
8 & & & * & * & * & * \\ \hline
\end{tabular}
\end{table}

\subsection{A real application: the rental price data set}
Since the theoretical and Monte Carlo simulation evidence are not enough to judge the performance of the proposed estimators, we also use an empirical application on a rental price data set. This data set was taken from Späeth [25] and consists of 67 observations, where the response variable is the rental price per acre of the given variety of grass ($y$) and there are three explanatory variables: the average cost of rent per acre of arable land in dollars ($x_1$), the number of milk cows per square mile ($x_2$) and the difference between pasturage and arable land ($x_3$). The main objective of this data set is to investigate the rent structure with respect to a particular variety of grass. Before statistical modelling, it is appropriate to test the probability distribution of the response variable. On the basis of the Cramér-von Mises test, we find that the rental price data set fits the multinomial distribution well, with test statistic (p-value) given as 0.0969 (0.1239).
Therefore, we apply the MNL model, instead of the LRM, to investigate the rent structure with respect to a particular variety of grass. We also observed a high condition index (CI = 1387.65), which means that the data set is highly multicollinear. The estimate of $\phi$ is $\hat{\phi}=0.07700553$. In this application, the reciprocal link function is used. The estimated coefficients, standard errors and estimated MSE of the ML estimator and of the NTPE with the best parameters are presented. The NTPE with optimized parameters gains efficiency over the ML estimator in terms of the estimated MSE. The results show that the average cost of rent per acre of arable land and the number of milk cows per square mile have a negative impact, while the difference between pasturage and arable land has a positive impact, on the rental price per acre of the given variety of grass. We observed that higher values of rent per acre of arable land and of milk cows per square mile indicate a low rent structure with respect to a particular variety of grass, while the positive impact of the difference between pasturage and arable land indicates a high rent structure. One can easily see that the standard errors of the variables $x_1$, $x_2$ and $x_3$ increase when applying the common ML estimator. When applying the NTPE with the optimized parameters, it performs superior to the ML estimator, as was shown in the theoretical and simulation study.
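For readers who wish to experiment numerically, a minimal illustration of the linear-model form of the NTPE from Section 1 is sketched below (Python, inside a verbatim block). The simulated design matrix, the chosen values $k=0.02$ and $d=0.80$, and the comparison against OLS are illustrative assumptions only; this is not the code used to produce the results reported in this article.

\begin{verbatim}
# Minimal sketch of the (linear-model) New Two-parameter Estimator
#   beta_NTPE = (X'X + I)^{-1} (X'X + d I) (X'X + k I)^{-1} X'y
# on simulated collinear data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 100, 4, 0.95
Z = rng.standard_normal((n, p + 1))
X = np.sqrt(1 - rho**2) * Z[:, :p] + rho * Z[:, [p]]   # correlated predictors
beta = np.ones(p) / np.sqrt(p)                         # so that beta'beta = 1
y = X @ beta + rng.standard_normal(n)

def ntpe(X, y, k, d):
    G = X.T @ X
    I = np.eye(X.shape[1])
    return np.linalg.solve(G + I, (G + d * I) @ np.linalg.solve(G + k * I, X.T @ y))

b_ols = np.linalg.solve(X.T @ X, X.T @ y)
b_ntpe = ntpe(X, y, k=0.02, d=0.80)
print("OLS  squared error:", np.sum((b_ols - beta) ** 2))
print("NTPE squared error:", np.sum((b_ntpe - beta) ** 2))
\end{verbatim}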
Electron spin resonance resolves intermediate triplet states in delayed fluorescence

Bluebell H. Drummond, Naoya Aizawa, Yadong Zhang, William K. Myers, Yao Xiong, Matthew W. Cooper, Stephen Barlow, Qinying Gu, Leah R. Weiss, Alexander J. Gillett, Dan Credgington, Yong-Jin Pu, Seth R. Marder & Emrys W. Evans

Subjects: Electronic properties and materials; Organic LEDs

Molecular organic fluorophores are currently used in organic light-emitting diodes, though non-emissive triplet excitons generated in devices incorporating conventional fluorophores limit the efficiency. This limit can be overcome in materials that have intramolecular charge-transfer excitonic states and associated small singlet-triplet energy separations; triplets can then be converted to emissive singlet excitons resulting in efficient delayed fluorescence. However, the mechanistic details of the spin interconversion have not yet been fully resolved. We report transient electron spin resonance studies that allow direct probing of the spin conversion in a series of delayed fluorescence fluorophores with varying energy gaps between local excitation and charge-transfer triplet states. The observation of distinct triplet signals, unusual in transient electron spin resonance, suggests that multiple triplet states mediate the photophysics for efficient light emission in delayed fluorescence emitters. We reveal that as the energy separation between local excitation and charge-transfer triplet states decreases, spin interconversion changes from a direct, singlet-triplet mechanism to an indirect mechanism involving intermediate states.

Thermally activated delayed fluorescence (TADF) organic molecules convert electricity to light more efficiently than traditional organic fluorophores, making them attractive materials for organic light-emitting diodes (OLEDs)1,2,3,4,5,6.
Spin statistics for OLEDs dictate that 75% of electrically-generated excitons populate triplet (spin = 1) states (T1), which, in the case of traditional fluorophores, limits electroluminescence (EL) efficiency since radiative de-excitation of these states to the singlet (spin = 0) ground state (S0) is spin-forbidden7,8. Current commercial OLED devices employ organometallic chromophores based on iridium and platinum as red and green emitters. In such materials, enhanced spin–orbit coupling from the heavy element promotes direct emission from triplet excitons (phosphorescence), allowing up to 100% internal quantum efficiency (IQE) for EL9,10,11. Blue emitters based on the same approach have remained elusive, particularly because the long lifetimes of high-energy triplet excitons lead to OLED degradation12,13. Consequently, blue OLEDs typically employ organic fluorophores, with an enhancement of EL IQE achieved using triplet-triplet annihilation (TTA, T1 + T1 → S1 + S0), which can harvest up to half of the triplet excitons for light emission via singlets14,15,16. TTA-based OLEDs can in principle achieve up to 62.5% EL IQE. Within this context, TADF emitters present an alternative approach to achieving 100% EL IQE in devices, thereby offering a step-change in performance for blue OLED technologies17,18. In TADF devices, indirect emission from dark triplet states via bright singlet excitons (T1 → S1 → S0) is activated by ambient thermal energy. There is general agreement that TADF emitters should be designed to have small singlet-triplet exchange energies that promote rapid forward and reverse intersystem crossing (ISC) between singlet and triplet excitons; such small exchange energies can be engineered by spatial separation of the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO)1,2,3,4,5,6. However, there is no agreement on the spin interactions and spin-flip mechanisms responsible for the singlet-triplet interconversion in fluorophores exhibiting TADF with intramolecular charge-transfer (CT) states19,20,21,22. Typical TADF emitters have low-lying singlet and triplet excitonic states with mixed local excitation (LE) and intramolecular CT character21,23. Emission from the predominantly CT singlet excitonic state (1CT) via prompt or delayed fluorescence is desirable. Therefore, the triplet excitons populating either CT or local excitation states (3CT and 3LE) must be efficiently converted to 1CT. ISC between the singlet and triplet manifolds of fluorophores can be driven by a spin-orbit coupling (SOC) mediated spin conversion, whereby an orbital transition between two states generates a torque capable of flipping an electron's spin. Direct SOC (dependent only on the electronic character of the coupled states) will in general be suppressed between a singlet and triplet state if they possess similar orbital character (El Sayed's Rule)24,25. Therefore, in order to achieve singlet-triplet ISC between two electronic states of the same orbital character (e.g. 1CT to 3CT) additional processes must occur, which are dependent on both the electronic character and nuclear coordinates of the coupled states. However, the presence of a significant change in orbital character upon spin-flip (e.g. 1CT to 3LE) is not in itself sufficient to guarantee rapid ISC and, furthermore, it has been shown that direct SOC alone cannot explain high rates of ISC observed26,27,28. 
Vibration-mediated SOC has been suggested to explain this discrepancy, and also to facilitate singlet-triplet conversion between states of similar orbital character18,29. Vibrational contributions from an additional, or 'intermediate', triplet state give rise to the 'spin-vibronic mechanism' for the efficient conversion of dark triplet excitons to emissive singlet excitons (reverse ISC) in TADF emitters30,31,32. Revealing the character and formation dynamics of intermediate triplet states is challenging, although critical to understanding the TADF mechanism33,34. Here we use transient electron spin resonance (trESR) spectroscopy to study the role of spin-orbit coupling and vibrational perturbations in ISC for TADF. While optical spectroscopy has proved a powerful tool to examine the photophysics and spin-conversion rates in TADF30,31,35,36, trESR provides a complementary window into the underlying spin physics. TrESR is sensitive to paramagnetic states and can therefore probe the formation dynamics and coupling mechanisms of triplet excitons37, as highlighted by broad applications in studying triplet excitons in organic photovoltaics38,39,40, singlet fission41,42,43,44, photosensitizers37,45,46,47, and molecular electronics48,49. Because trESR is not sensitive to singlet excitons (due to their diamagnetism) it can only explicitly probe forward ISC. However, assuming microscopic reversibility, the triplet exciton formation dynamics revealed are applicable to TADF-relevant reverse ISC. We have previously shown using trESR that spin-orbit coupling interactions mediate ISC for two benchmark TADF molecules: 1,2,3,5-tetrakis(carbazol-9-yl)-4,6-dicyanobenzene (4CzIPN) and 1,2-bis(carbazol-9-yl)-4,5-dicyanobenzene (2CzPN)20. The trESR data suggested a dynamic picture of SOC-mediated ISC involving vibrational modes, although the excited-state ordering and coupling requirements for efficient ISC were not established. To reveal these requirements, here we have conducted trESR studies on a series of ten molecular organic fluorophores exhibiting delayed fluorescence with varying 3LE–3CT energy gaps; this allows us to probe the spin-vibronic mechanism directly and resolve the nature of intermediate triplet states. TrESR gives conclusive evidence that 3LE can be populated from the singlet manifold by the vibrational SOC ISC mechanism. However, when the 3LE–3CT gap is sufficiently small, trESR reveals that additional spin conversion pathways emerge, and 3CT is populated via the spin-vibronic ISC mechanism because of enhanced 3LE–3CT coupling. TrESR can crucially resolve these different ISC mechanisms, which are influenced by difficult to predict molecular properties beyond the excited-state character and energetics measured with optical spectroscopy. Our spin-sensitive measurements reveal direct evidence that distinct triplet excitonic states (T1 + T2) mediate the spin conversion responsible for delayed fluorescence. These findings uncover the fundamental mechanism of the spin-vibronic ISC pathway relevant in TADF emitters, providing detail on the nature of the process by probing distinct yet vibronically-accessible 3LE and 3CT states. Design and detection of multiple triplet excitonic states Three donor-acceptor emitters sharing a common 4-(3,6-di-tert-butyl-9H-carbazol-9-yl)diphenyl sulfone (DTCz-DPS) core structure were synthesised. 
We employed inductively electron-withdrawing fluorine substituents on a phenyl group linking carbazolyl and sulfone moieties to lower the LUMO energy of the DTCz-DPS core, resulting in lowered CT state energies as in our previous study50. The three emitters, DTCz-DPS-1, -2, and -3 (Fig. 1b–d) share a fixed 3LE energy level, associated with the carbazole donor, but possess increasingly red-shifted 1CT and 3CT, from DTCz-DPS-1 to -3, consistent with time-dependent density-functional theory (TD-DFT) calculations (Supplementary Table 1), described in 'Methods'. The calculated Gibbs free energy difference between the 3LE and 3CT minima (the 3LE–3CT gap), in their optimised geometries, is reduced from DTCz-DPS-1 to -3 (Table 1). Importantly, our calculations also found that the direct SOC matrix elements between 1CT and 3LE are relatively constant for the three emitters (Table 1). Therefore, our structural modifications of excited-state energies allow higher-order SOC effects to be studied due to changing of the 3LE–3CT energy gap and energetic landscape, while direct SOC matrix elements between 1CT and 3LE are fixed. Fig. 1: Molecular structures and electron spin resonance spectra. a Schematic diagram depicting the zero-field splitting (ZFS) of the lowest-lying triplet state (T1) into its three sublevels (Tx,y,z). The magnitude of the ZFS parameters, D and E, describes the splitting of the triplet sublevels where x, y, and z are the ZFS axes along which spin density is distributed. The energetic spacing is not to scale. Molecular structures of (b) DTCz-DPS-1, (c) DTCz-DPS-2, and (d) DTCz-DPS-3. The in-plane molecular axes a and c and the 3LE ZFS axes z and x are shown. e Schematic diagram depicting the Zeeman energy separation of the triplet sublevels (low-field: Tx,y,z; high-field: T−1,0,+1) when a magnetic field (B) is applied parallel to the z axis and the absorptive (A) and emissive (E) transitions that occur between sublevels when microwave radiation is resonant with energetic separation. Spin-polarised trESR signals collected at 30 K in toluene for (f) DTCz-DPS-1, (g) DTCz-DPS-2, and (h) DTCz-DPS-3. Solid lines show the trESR signal recorded 2 μs after 355 nm laser excitation and integrated over 1 μs. Dashed and dotted lines are the simulated local excitation triplet state (3LE) and charge-transfer triplet state (3CT) polarisation patterns, respectively. Dash-dot grey lines are the weighted sums of the 3LE and 3CT simulations. The spin-polarised patterns are characterised by absorptive (A) and emissive (E) features. i Spin-vibronic coupling of 1CT, 3LE and 3CT potential energy surfaces. Table 1 Experimental and TD-DFT calculated excited-state properties. For the trESR measurements, pulsed laser excitation (355 nm) and cryogenic temperatures (30 K) were used to generate long-lived, molecular triplet excitons in dilute (500 μM), deoxygenated and flash-frozen toluene solutions. A variable applied magnetic field (200–500 mT) induced Zeeman energy separation of the triplet sublevels from the zero-field splitting (ZFS) eigenstates (Tx,y,z, Fig. 1a, e). A continuous wave microwave source (~9.7 GHz) induced triplet sublevel transitions (absorptive, A, and emissive, E) when resonant with the sublevel energetic spacing; the intensity of these transitions is dependent on the triplet sublevel population (Px,y,z), which is determined by the nature of the triplet formation (ISC) process. 
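The zero-field splitting picture used in this analysis can be made concrete with a short numerical sketch. The code below (Python/numpy) builds the standard S = 1 ZFS Hamiltonian H = D(Sz^2 - S(S+1)/3) + E(Sx^2 - Sy^2) and recovers the three sublevel energies, working directly in millitesla units (proportional to energy for a fixed g-value); the D and E values are those fitted later in the text for DTCz-DPS-1 (108 and 9 mT), and the code itself is a generic textbook construction rather than part of the published analysis.

# Zero-field splitting Hamiltonian for a spin-1 (triplet) state:
#   H = D (Sz^2 - S(S+1)/3) + E (Sx^2 - Sy^2)
# In these units the eigenvalues are -2D/3 and D/3 -/+ E, i.e. the Tz, Tx, Ty sublevels
# in the common convention.
import numpy as np

Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

D, E = 108.0, 9.0   # mT, the |D|, |E| values fitted for DTCz-DPS-1 in this work
H = D * (Sz @ Sz - (2 / 3) * np.eye(3)) + E * (Sx @ Sx - Sy @ Sy)

print(np.sort(np.linalg.eigvalsh(H)))   # [-72., 27., 45.] = -2D/3, D/3 - E, D/3 + E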
The magnetic field positions of the positive (absorptive) and negative (emissive) peaks in the trESR signal (also known as 'polarisation pattern turning points') are determined by the ZFS parameters, D and E (Fig. 1a). The spin–spin dipolar interaction contribution to ZFS can yield a D-dependence on spin-density delocalisation across the molecule that allows for LE- and CT-type triplets to be distinguished; a smaller D-value typically correlates with greater delocalisation, indicative of a CT state in the molecules investigated here19,51. D, E, and Px,y,z are obtained from inspection, simulation and least-squares fitting (using EasySpin52) of the spin-polarised trESR signal. Following photoexcitation, the all-organic emitters typically have a low triplet yield and hence data acquisition times of 4–12 h were required for each trESR spectrum. Each emitter exhibited spin-polarised trESR signals with lifetimes of several microseconds, throughout which the spectral shapes were conserved. The trESR signal of DTCz-DPS-1 (Fig. 1f) has a six-turning-point EEEAAA polarisation pattern with a preferential population of Tx and Tz (Px,y,z = 0.38, 0.00, 0.62). The ZFS parameters (|D|, |E| = 108, 9 mT), the position of the half-field transition (161 mT, Supplementary Fig. 4), and EEEAAA polarisation pattern of DTCz-DPS-1 are all typical of triplet states in organic chromophores51. These parameters are consistent with the 3LE associated with the carbazole monomer53, leading us to attribute the trESR signal to a triplet exciton localised on the carbazole donor. The trESR signals of DTCz-DPS-2 and -3 (Fig. 1g, h) show spin signatures of two overlapping triplet signals. It is evident from the line shape of the DTCz-DPS-2 trESR spectrum that the wider triplet signal resembles that seen for DTCz-DPS-1. Furthermore, the prominent half-field transition peak in the DTCz-DPS-2 trESR spectrum (162 mT, Supplementary Fig. 4) matches well with that in DTCz-DPS-1. After subtracting the 3LE contribution from the trESR signal of DTCz-DPS-2, the resulting triplet is narrower (|D|, |E| = 23, 3 mT) and has a spin-polarisation pattern that indicates relative overpopulation of Tx and Ty (Px,y,z = 0.4, 0.6, 0.0). A reduced magnitude of D of this scale typically indicates an increased spatial separation of interacting spins; therefore, we attribute the narrow signal to the delocalised 3CT19,51. The trESR signal of DTCz-DPS-3 (Fig. 1h) comprises a more intense central feature compared to DTCz-DPS-2. The total width of the DTCz-DPS-3 signal is unchanged and the same 161 mT half-field transition peak is recorded as in DTCz-DPS-1 (Supplementary Fig. 4); therefore, the same 3LE is assigned to the broad triplet signal. By subtracting the 3LE contribution from the total DTCz-DPS-3 trESR spectrum, a narrower triplet signal ascribed to 3CT is obtained with Px,y,z = 0.6, 0.4, 0.0, |D|, |E| = 56, 19 mT and a half-field transition peak at 169 mT; the breadth of the DTCz-DPS-3 3CT signal compared to the DTCz-DPS-2 3CT signal is a point to which we will later return. These results show that with decreasing 3LE–3CT gap (from DTCz-DPS-1 to -3), the recorded trESR spectra progress from a relatively pure 3LE signal to one resulting from contributions of both 3LE and 3CT polarisation patterns. The triplet ZFS simulation parameters of all three emitters are shown in Table 2. 
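The decomposition described above, taking a weighted sum of simulated 3LE and 3CT patterns or subtracting the 3LE contribution, is at its core a linear least-squares problem. A minimal sketch, ours rather than the authors' routine, using hypothetical placeholder arrays in place of the actual simulated EasySpin spectra:

```python
# Illustrative two-component decomposition of a trESR spectrum: given simulated 3LE and
# 3CT basis spectra on the same field axis, find the non-negative weights whose sum best
# reproduces the measured signal. The arrays below are hypothetical placeholders.
import numpy as np
from scipy.optimize import nnls

field = np.linspace(200, 500, 601)                 # mT, matching the measurement window

def toy_doublet(b0, width):
    """Toy absorptive/emissive doublet standing in for a simulated powder pattern."""
    return np.exp(-0.5 * ((field - b0 + width) / 8) ** 2) \
         - np.exp(-0.5 * ((field - b0 - width) / 8) ** 2)

sim_3LE = toy_doublet(346, 108)                    # broad component (large |D|)
sim_3CT = toy_doublet(346, 23)                     # narrow component (small |D|)
measured = 0.7 * sim_3LE + 0.3 * sim_3CT \
         + 0.02 * np.random.default_rng(0).normal(size=field.size)

basis = np.column_stack([sim_3LE, sim_3CT])
weights, residual = nnls(basis, measured)          # non-negative least squares
w_3LE, w_3CT = weights
print(f"3LE weight = {w_3LE:.2f}, 3CT weight = {w_3CT:.2f}, residual = {residual:.3f}")
subtracted = measured - w_3LE * sim_3LE            # the '3LE-subtracted' spectrum
```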
Table 2 Zero-field splitting parameters (D and E) and spin sublevel populations (Px,y,z) of 3LE and 3CT obtained by simulation of the trESR signal detected at 30 K in deoxygenated toluene or by calculation of TD-DFT optimised excited-state geometries. Similarly, we observe the emergence of unstructured 3CT emission, in addition to structured 3LE emission, with decreasing 3LE–3CT gap in the frozen (77 K) toluene phosphorescence spectra (solid lines in Fig. 2d–f)50,54. We note that the phosphorescence was recorded at 77 K, which is a higher temperature than used in the trESR experiment (30 K). Crucially, both these temperatures are well below the freezing point of toluene (178 K). Therefore, the energetic relaxation of the 1CT states (a common phenomenon undergone by intramolecular CT states in solution) observed when comparing the photoluminescence (PL) in frozen (77 K) and solution (295 K) toluene (Fig. 2a–f) is minimised at both 30 K and 77 K. The progression in trESR and phosphorescence shows that 3LE and 3CT are both populated when the 3LE–3CT gap is sufficiently small55. Time-resolved PL measurements confirmed that all three of the emitters exhibit intramolecular CT character in prompt and delayed fluorescence at room temperature (295 K) in deoxygenated toluene (Fig. 2, Table 1, and Supplementary Fig. 10). The contribution of delayed fluorescence to the total emission was determined from the integrated time-resolved PL and is listed in Table 1. The electroluminescence performance of DTCz-DPS-1 and DTCz-DPS-3 were also evaluated in preliminary solution-processed OLEDs and are presented in the Supplementary Information (Supplementary Fig. 11 and Supplementary Table 7). Fig. 2: Photophysical spectra. Normalised absorbance (dotted line) and photoluminescence (filled line) spectra in deoxygenated toluene at 295 K (a DTCz-DPS-1, b DTCz-DPS-2, c DTCz-DPS-3). Normalised steady-state photoluminescence (dashed line) and gated (0.5 ms) phosphorescence (solid line) spectra in deoxygenated toluene at 77 K (d DTCz-DPS-1, e DTCz-DPS-2, f DTCz-DPS-3). Normalised integrated photoluminescence kinetics in deoxygenated toluene at 295 K (g DTCz-DPS-1, h DTCz-DPS-2, i DTCz-DPS-3). Density-functional theory calculations Optimised geometries and molecular orbitals were established for the low-lying excitonic states of DTCz-DPS-1, -2, and -3 (Fig. 3a–c) using the range-separated LC-BLYP functional56. The calculations predicted both 3LE and 3CT in close proximity to the 1CT, supporting our experimental observation of both triplets being populated in trESR and PL measurements in frozen toluene. The relative energy and donor-acceptor dihedral angle was calculated in the 3LE, 3CT configurations, and also at the minimal-energy conical intersection where two electronic triplet states are equal in energy and electronically coupled (Fig. 3). The geometrical change between the 3LE and 3CT configuration is predominantly characterised by rotation around the donor-acceptor linker: see dihedral angle between shaded planes in Fig. 3d–f. Fig. 3: Molecular orbitals of relevant triplet excitonic states. The local excitation triplet state (3LE) and charge-transfer triplet state (3CT) configurations and highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) of (a) DTCz-DPS-1, (b) DTCz-DPS-2, and (c) DTCz-DPS-3. 
The molecular geometry, relative Gibbs free energy, and donor–acceptor dihedral angle of (d) DTCz-DPS-1, (e) DTCz-DPS-2, and (f) DTCz-DPS-3 in the 3LE, 3CT configurations and at the minimum-energy conical intersection where two electronic triplet states are equal in energy and electronically coupled. The ZFS tensors (D and E parameters) were calculated for each 3LE and 3CT (Table 2) using the DFT-based method reported by Neese et al.57,58 that considers contributions to ZFS from both spin–spin interactions and spin–orbit interactions (Supplementary Table 2). DFT predicted D < 0 in all cases, therefore, D was assumed to be negative in all trESR spectral simulations. The sign of E was chosen such that axes labelling conformed with molecular axes labelling convention, as opposed to ESR convention. The total D and E values calculated for the 3LE configurations are in good quantitative agreement with the values obtained from trESR measurements. The calculated contribution to total D from spin–spin interactions decreases from 3LE to 3CT, whereas the contribution from spin–orbit interactions does not decrease. The resulting 3CT total D and E values are overestimated in comparison to those obtained from trESR measurements, although the experimental observation that the ZFS D-value of DTCz-DPS-3 3CT is larger than DTCz-DPS-2 3CT is correctly predicted. The larger calculated D-value of DTCz-DPS-3 compared to DTCz-DPS-2 arises from a greater spin–spin interaction contribution to ZFS (Supplementary Table 2). Variation in spin–spin interaction can be rationalised by examining the 3CT molecular orbitals in Fig. 3: the electron-withdrawing effect of the additional fluorine atoms in DTCz-DPS-3 localises the LUMO more strongly on the fluorinated phenyl ring than in DTCz-DPS-2 where the LUMO is more delocalised over both phenyl rings. Closer average HOMO-LUMO proximity leads to reduced spin-spin distance and enhanced interaction, which typically manifests as larger D-value. Further transient electron spin resonance studies: molecular modifications and concentration series We designed a number of additional experiments to test our interpretation that the trESR signals comprise polarisation patterns arising from 3LE and 3CT excitonic molecular states. First, we explored the effect of molecular substitutions on excited-state energies and subsequently the photophysics and ISC mechanism. We modified the donor groups of DTCz-DPS-2 and DTCz-DPS-3 by removing the tertiary butyl groups (to give Cz-DPS-1 and -2, respectively), or by replacing the tertiary butyl groups with additional carbazole units (to give 3Cz-DPS-1 and -2, respectively). The 3LE–3CT gap was decreased by fluorination on going from Cz-DPS-1 to −2 and from 3Cz-DPS-1 to -2; we observed with decreasing gap: (a) emergence of the 3CT trESR signals (Fig. 4) and 3CT phosphorescence (Supplementary Fig. 7, 8) and (b) increase in contribution to total PL from delayed fluorescence. These observations are consistent with our findings for DTCz-DPS-1, -2, and -3. Photophysics and trESR simulation parameters can be found in the Supplementary Information (Supplementary Fig. 7, 8, and Supplementary Table 4, 5). Trends in the calculated excited-state energies and direct SOC matrix elements between 1CT and 3LE for Cz-DPS-1 and -2 and 3Cz-DPS-1 and -2 (Supplementary Table 6) are also consistent with the trends identified for DTCz-DPS-1, -2, and -3. 
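Returning briefly to the comparison of calculated and measured ZFS parameters: quantum-chemistry codes usually report D and E in energy units (cm-1 or MHz), whereas the trESR values above are quoted in mT, so a small conversion helper is useful. This is our sketch; the example input is an illustrative back-converted number, not a value from the Supplementary Tables, and g = 2.0023 is assumed.

```python
# Unit-conversion helper for comparing calculated and measured ZFS values (our sketch).
# Assumes g = 2.0023 when mapping an energy-unit ZFS onto a field axis.
h   = 6.62607015e-34      # Planck constant, J s
muB = 9.2740100783e-24    # Bohr magneton, J/T
c   = 2.99792458e10       # speed of light, cm/s
g   = 2.0023

def cm1_to_mT(value_cm1):
    """Convert a ZFS parameter from wavenumbers to an equivalent field in millitesla."""
    energy_J = value_cm1 * h * c
    return energy_J / (g * muB) * 1e3

def MHz_to_mT(value_MHz):
    """Convert a ZFS parameter from MHz to an equivalent field in millitesla."""
    return value_MHz * 1e6 * h / (g * muB) * 1e3

print(cm1_to_mT(0.101))   # ~108 mT, illustrative of a carbazole-like 3LE D value
print(MHz_to_mT(3030.0))  # ~108 mT
```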
In non- or partially-fluorinated versions of the emitters (Cz-DPS-0, DTCz-DPS-0, 3Cz-DPS-0) the CT states were blue-shifted, widening both the 3LE–1CT and 3LE–3CT gaps such that no substantial delayed component of the time-resolved PL was detected at room temperature (Supplementary Fig. 9). The trESR spectra of the fluorescent emitters (Cz-DPS-0, DTCz-DPS-0, and 3Cz-DPS-0) exhibit only EEEAAA polarisation patterns, with signal widths and half-field peaks consistent with the 3LE associated with monomer carbazole, as before (Supplementary Fig. 9). The effects of changing the molecular environment by switching the solvent from toluene to 2-methyltetrahydrofuran were compared for DTCz-DPS-1 and 3Cz-DPS-2 and are discussed in the Supplementary Information (Supplementary Fig. 6). Finally, to rule out intermolecular interactions we reduced the concentration of DTCz-DPS-1, DTCz-DPS-3, and 3Cz-DPS-2 in toluene from 500 μM to 20 μM. No significant differences in trESR signal were observed, supporting our interpretation of the mechanisms responsible for triplet state formation as all being intramolecular processes, rather than concentration-dependent aggregate states or intermolecular energy transfer (Supplementary Fig. 5). Fig. 4: Electron spin resonance of further modified emitters. Molecular structures and spin-polarised trESR signal collected at 30 K in deoxygenated toluene of (a) Cz-DPS-1, (b) Cz-DPS-2, (c) 3Cz-DPS-1, and (d) 3Cz-DPS-2. Solid lines show the trESR signal, recorded 2 μs after 355 nm laser excitation and integrated over 1 μs. Dashed and dotted lines are the simulated local excitation triplet state (3LE) and charge-transfer triplet state (3CT) polarisation patterns, respectively. Dash-dot grey lines are the weighted sums of the 3LE and 3CT simulations. TADF facilitated by spin-orbit coupling, as described in the 'Introduction', can be interpreted within the following framework of ISC mechanisms26,59: Direct spin–orbit coupling between 1CT and 3LE; Vibrational spin–orbit coupling between 1CT and 3LE; Spin-vibronic coupling between 1CT and 3CT via intermediate 3LE. Vibrational modes can enhance ISC in (II) because spin–orbit coupling matrix elements for 1CT–3LE transitions depend on the nuclear degree of freedom, \({Q}_{\alpha }\). The vibronic coupling in (III) drives rapid internal conversion between 3CT and 3LE, equilibrating their population; 3LE is then coupled to 1CT via either (I) or (II)26,30. 3LE is considered an intermediate in the sequence of state formation: 1CT–3LE–3CT (Fig. 1i). Any initial population of 1LE is assumed to rapidly convert to 1CT, as will be discussed later. Here, the 3CT–3LE vibronic coupling in (III) is assigned to modes associated with the donor–acceptor linker20,60,61. These three ISC processes are expressed together by the sum of their first non-zero terms: $${\hat{H}}_{{SO}}=\left\langle {\varPsi }_{^1{CT}}|{\hat{H}}_{{SO}}|{\varPsi }_{^3{LE}}\right\rangle$$ $$+\mathop{\sum}\limits_{\alpha }\frac{\partial \left\langle {\varPsi }_{^1{CT}}|{\hat{H}}_{{SO}}|{\varPsi }_{^3{LE}}\right\rangle }{\partial {Q}_{\alpha }}{Q}_{\alpha }$$ $$+\frac{\left\langle {\varPsi }_{^1{CT}}|{\hat{H}}_{{SO}}|{\varPsi }_{^3{LE}}\right\rangle \left\langle {\varPsi }_{^3{LE}}|{\hat{H}}_{{vib}}|{\varPsi }_{^3{CT}}\right\rangle }{{E}_{^1{CT}}-{E}_{^3{LE}}}$$ where \(\varPsi\) is the molecular wavefunction of states coupled either by the spin–orbit (\({\hat{H}}_{{SO}}\)) or vibronic (\({\hat{H}}_{{vib}}\)) Hamiltonian26. 
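To get a feel for when the third (spin-vibronic) term matters relative to direct SOC, the expression above can be evaluated with order-of-magnitude inputs. The numbers in the sketch below are hypothetical, chosen only to illustrate the scaling; none are values reported in this work.

```python
# Back-of-the-envelope comparison of the direct term (I) and the spin-vibronic term (III)
# in the expansion above (our sketch). All inputs are hypothetical order-of-magnitude
# values, not results from this paper.
CM1_TO_MEV = 0.1239842            # 1 cm-1 expressed in meV

V_soc_cm1 = 0.5                   # <1CT|H_SO|3LE>, hypothetical, sub-cm-1 scale
V_vib_meV = 30.0                  # <3LE|H_vib|3CT>, hypothetical
dE_1CT_3LE_meV = 200.0            # E(1CT) - E(3LE), the denominator in term (III), hypothetical

direct_meV = V_soc_cm1 * CM1_TO_MEV
spin_vibronic_meV = direct_meV * (V_vib_meV / dE_1CT_3LE_meV)

print(f"direct SOC term (I)     ~ {direct_meV:.3f} meV")
print(f"spin-vibronic term (III) ~ {spin_vibronic_meV:.3f} meV")
print(f"ratio (III)/(I)          ~ {spin_vibronic_meV / direct_meV:.2f}")
# The ratio is set by the vibronic coupling relative to the 1CT-3LE denominator; since
# <3LE|H_vib|3CT> itself grows as the 3LE-3CT gap closes, term (III) becomes competitive
# with the direct term in the small-gap regime discussed in the text.
```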
The vibronic coupling between two states generally increases as the energy gap between the states decreases and hence mechanism (III) only significantly contributes to the full spin–orbit interaction if the 3LE–3CT gap (ΔG) is sufficiently small (<0.5 eV). Furthermore, orbital and vibrational mode-dependent mechanisms are determined by the geometry of a molecule and differently affect ISC mechanisms (I), (II), and (III)22,25,26. TrESR polarisation patterns are fingerprints for the relative spin population of the triplet ZFS sublevels (x, y, z), which are also related to the molecular geometry62—Fig. 1 shows how the 3LE ZFS axes are related to the molecular axes (a, b, c) in DTCz-DPS-1, -2, and -3. These relationships mean that triplet formation by each ISC mechanism will yield a particular trESR polarisation pattern that can be distinguished by the ISC mechanism's orbital and/or vibrational mode dependence on molecular geometry. TADF emitters generally comprise all-organic electron donor and acceptor units. Individually the donor and acceptor units, on which LE states are localised, typically possess at least a C2 symmetry axis; symmetry does not persist in the delocalised CT states. Under these symmetry conditions the triplet sublevel populations describing the trESR polarisation pattern for each ISC mechanism in a TADF emitter is as follows: Overpopulation along the C2 symmetry axis (x here); Overpopulation of the in-plane axes (x and z here); Redistribution of the population from the in-plane (x and z here) to the out-of-plane (y here) axes. The difference between ISC mechanisms (II) and (III) is the CT-like transition from 3LE to 3CT upon internal conversion. In a donor–acceptor emitter, the electron density in the 3LE configuration is localised on either the acceptor or, in this case, the donor unit and hence the triplet ZFS (x, y, z) and molecular (a, b, c) axes will be well-aligned. In contrast, the electron density in a 3CT configuration will be delocalised across both the donor and acceptor units (Fig. 3). The donor-acceptor twist, integral to TADF design, results in the rotation of the 3CT ZFS axes relative to the 3LE ZFS axes63. Therefore, the spin-density redistribution upon charge-transfer internal conversion from 3LE to 3CT rotates the triplet ZFS axes such that overpopulation of the out-of-plane axes (relative to the 3LE ZFS axes) can arise. This additional out-of-plane population causes the difference between the population of Tx,y,z in ISC mechanism (II) and (III)64,65. The donor–acceptor twist is a fundamental TADF molecular design rule because the spatial separation of HOMO and LUMO is required to reduce singlet-triplet offset and enable reverse ISC. Therefore, we can expect the relationship between molecular structure, ISC mechanism, and trESR pattern to be applicable to many classes of TADF emitters, as well as to donor–acceptor CT-type molecules in general. Applying the framework outlined above to the triplet sublevel populations from experimental spectra (Table 2), we can infer the ISC mechanisms responsible for the population of 3LE and 3CT in each emitter. The 3LE states of DTCz-DPS-1, -2, and -3 (and Cz-DPS-1 and -2, 3Cz-DPS-1 and -2) are populated via vibrational SOC with the photoexcited 1CT – mechanism (II) – apparent from the overpopulation of Tx and Tz45,51,62,66,67. 
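Before turning to the 3CT patterns, the frame-rotation argument above can be made concrete. Because the zero-field triplet states Tx, Ty, Tz transform like the Cartesian axes, rotating the ZFS frame maps sublevel populations through squared direction cosines (neglecting coherences). The sketch below is our illustration, not the authors' analysis; the rotation axis and angles are arbitrary choices, and the starting populations are those reported for the DTCz-DPS-1 3LE.

```python
# Illustration of the ZFS frame-rotation argument: populations defined in one zero-field
# frame map into a rotated frame via the squared elements of the rotation matrix.
import numpy as np

def rotate_populations(p, R):
    """Map sublevel populations into a rotated ZFS frame: p'_i = sum_j R_ij^2 * p_j."""
    return (R ** 2) @ np.asarray(p, dtype=float)

def Rx(theta_deg):
    """Rotation about the ZFS x axis by theta degrees."""
    t = np.radians(theta_deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

p_3LE = np.array([0.38, 0.00, 0.62])      # (Px, Py, Pz) reported for the DTCz-DPS-1 3LE
for angle in (0, 30, 60, 90):
    print(angle, np.round(rotate_populations(p_3LE, Rx(angle)), 2))
# At 90 degrees the population initially along z appears along y, mimicking how a rotated
# 3CT ZFS frame can show out-of-plane (Ty) overpopulation relative to the 3LE axes.
```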
On the other hand, the triplet sublevel corresponding to the out-of-plane molecular axis (Ty) is relatively overpopulated in the 3CT polarisation patterns of DTCz-DPS-2 and -3 (and Cz-DPS-2 and 3Cz-DPS-2). The distinct triplet trESR spectra, therefore, reveal that spin-vibronic coupling between 1CT and 3CT via intermediate 3LE – mechanism (III) – is responsible for 3CT population20,59,60. The ability to resolve these ISC mechanisms from one another is a crucial advantage of using trESR; it is not possible to predict how excited states interact based purely on optical measurements of their separation energies or excitation character because additional factors arising from molecular structure affect ISC. Our computational results qualitatively support the experimental triplet sublevel population assignments of the trESR spectra, and the involvement of vibrational and vibronic enhancement of SOC. First, the calculated \(\left\langle {\varPsi }_{^1{CT}}|{\hat{H}}_{{SO}}|{\varPsi }_{^3{LE}}\right\rangle\) at the optimised 3LE geometries (described in the 'Methods') confirms that if direct SOC between 3LE and 1CT – mechanism (I) – were the dominant mechanism, we would indeed observe overpopulation predominantly along the 3LE x axis (Supplementary Table 3, 6). The DFT calculations also predict the relationship between the 3LE ZFS and in-plane molecular axes (overpopulation in Tx and Tz). A rotation of the in-plane x–z frame upon charge-transfer internal conversion would redistribute population from Tz to Ty, consistent with the experimental observations (Table 2). The most significant mode identified in the calculations as associated with the 3LE–3CT conversion is the rotation around the single bond between the DTCz donor and DPS acceptor in DTCz-DPS-1, -2, and -3 (Fig. 3d–f); this torsional mode likely drives vibronic coupling20. The D and E values predicted by DFT for 3LE in each emitter agree well quantitatively with those obtained from the measured trESR (Table 2). However, only the qualitative trend in relative magnitude is well-predicted in DFT calculations of 3CT; the calculated D and E values are overestimated compared to those obtained from the measured trESR (Table 2). As discussed, we would expect spin–orbit interactions related to 3CT configurations to be weaker compared to their 3LE counterparts; no such reduction in the contribution to D from spin–orbit interactions is predicted in the DFT calculations (Supplementary Table 2). We suggest that the mismatch between predicted and measured 3CT D and E values is a result of this overestimation of spin–orbit interactions between 3CT and other nearby electronic states. This discrepancy highlights the current limitations of commonly used computational methods and the necessity for experimental trESR to reveal the complex nature of multiple triplet states, which underpin the spin physics of TADF-type organic emitters. To reiterate: our findings show that it is indirect, not direct, coupling of 3CT to the singlet manifold that facilitates ISC. This assertion is supported by the following pieces of evidence, which rule out direct ISC between 3CT and either 1CT or 1LE. Direct ISC from 1CT to 3CT would need to be mediated by hyperfine coupling because the lack of orbital change suppresses SOC. 
Hyperfine-mediated ISC facilitates the transfer from the singlet state to only the energetically closest high-field triplet sublevel and consequently has a trESR polarisation pattern that is distinct from the SOC-mediated ISC patterns described above, and it is not observed in our measurements19,39. If 1LE to 3CT ISC were prevalent, we would not observe a dependence of the 3CT trESR signal on the 3LE–3CT gap, and would instead observe the 3CT spectra in all measurements, which we do not. Further, we would not expect a substantial population of 1LE for two reasons. First, 1LE to 1CT internal conversion is rapid (ps) compared with ISC (ns)22,45,68,69,70,71. Second, in the experiment, the 1CT absorption band was selectively excited (and is spectrally distinct from the 1LE absorption) in order to mimic operational OLED conditions where the lowest energy excitonic states are electrically generated (Fig. 2a–c solid lines and Supplementary Table 1). To summarise, the progression from a purely 3LE polarisation pattern to a combination of both 3LE and 3CT polarisation patterns with decreasing 3LE–3CT gap is evidence that a small 3LE–3CT gap regime exists in which strong vibronic coupling persists, even at low temperatures (kT < 3 meV), such that 3LE and 3CT are simultaneously populated. It is worth noting that, as TADF development is predominantly focused on the critical task of achieving efficient blue emission, the above outlined spin-vibronic mechanism is described in the context of the energetic regime attainable in blue emitters and shown in Fig. 1i: 1CT > 3CT > 3LE6,22. However, the same principles can be applied to a donor–acceptor emitter exhibiting a different excited-state ordering. We note that the unusual observation of multiple, distinct triplet signals in trESR spectra has also been reported for twisted donor–acceptor triplet photosensitizers, although the mechanism through which both triplets were populated was not rationalised;45,46,47 we postulate that the spin-vibronic mechanism may be active in such triplet photosensitizers. Finally, we return to discuss the photophysical results in light of the spin-vibronic mechanism determined from trESR. The increase of 3CT spectral contribution with decreasing 3LE–3CT gap observed here (and in other systems55,69,72,73) reinforces the direct evidence for the spin-vibronic mechanism from trESR. Furthermore, the emergence of the additional 3CT trESR spectra, reflecting the strength of vibronic coupling, correlates with increased contribution to total emission from delayed fluorescence. However, as the 3LE–3CT gap decreases so does the 3LE–1CT gap and therefore their combined effect on delayed fluorescence cannot be decoupled. We note that the dynamics of excited state couplings are also affected by time- and temperature-dependent molecular reorganisation, as is evident when comparing the 295 K fluorescence with the 77 K fluorescence (dashed lines in Fig. 2a–c versus dashed lines in Fig. 2d–f). The spectral redshift undergone by charge-transfer states from 77 K to 295 K (due to post-photoexcitation molecular reorganisation) increases from DTCz-DPS-1 to -3. The differing reorganisation complicates the temporal evolution of energetic spacing between 1CT, 3LE, and 3CT, and hence coupling dynamics at room temperature are not directly comparable to frozen solution measurements. 
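A quick arithmetic check supports the low-temperature argument above: at 30 K, kT is about 2.6 meV, so thermal activation across even a modest 3LE–3CT gap is negligible, and simultaneous observation of both triplets cannot be a simple Boltzmann effect. The gap values in this sketch are illustrative, not values from the paper.

```python
# Quick check of the low-temperature argument (our arithmetic; gap values illustrative).
import math

K_B_MEV_PER_K = 8.617333262e-2    # Boltzmann constant in meV/K
T = 30.0                          # trESR measurement temperature, K
kT = K_B_MEV_PER_K * T
print(f"kT at {T:.0f} K = {kT:.2f} meV")

for gap_meV in (10.0, 50.0, 100.0):
    # Boltzmann factor for thermally populating a state lying gap_meV above the lowest triplet
    print(f"gap {gap_meV:5.1f} meV -> Boltzmann factor {math.exp(-gap_meV / kT):.2e}")
```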
TrESR has been used previously to probe triplet excitons in twisted donor-acceptor emitters that, due to their large T1–T2 energy gap (>0.6 eV), are non-interacting and exhibit delayed fluorescence through either the hot-exciton74 or hyperfine pathways75. Our experimental results for the present series of donor–acceptor molecules, supported by computational calculations, show that coupling between the singlet and triplet manifold is facilitated by the spin-vibronic mechanism when 3LE is in energetic proximity (<100 meV) to 3CT. This coupling drives ISC, and therefore TADF-relevant reverse ISC (assuming microscopic reversibility). We have demonstrated that trESR can reveal mechanistic details of ISC, providing a critical, complementary approach to optical spectroscopy methods33,76,77 and have established a trESR framework to study the spin-vibronic mechanism in TADF emitters. We have demonstrated that the mechanism for ISC within an emitter can be tuned through chemical modifications; engineering and measuring the spin properties are critical for the informed development of efficient TADF emitters. We have characterised the nature of triplet excitations in the chemically modified set of delayed fluorescence emitters, resolving two distinct triplet excitonic states (T1 and T2) involved in spin-vibronic coupling. This distinction supports the basis for achieving spin-vibronic TADF by engineering distinct yet vibronically-accessible 3LE and 3CT states. As discussed, the mechanism of 3CT population by vibronic coupling with 3LE may find further application in heavy-atom-free triplet photosensitizer design45,46,47. Tuning spin–orbit coupling in molecular systems and their subsequent trESR signatures, as demonstrated here, is not only of paramount importance to blue OLED development but may also find resonance more broadly in the development of organic materials for light-controlled quantum information and spintronics78,79,80,81,82. Synthesis of emitters The emitters were generally synthesised by nucleophilic substitution reactions between the appropriate carbazole derivatives and partially fluorinated derivatives of diphenylsulfone in dimethylsulfoxide in the presence of potassium carbonate, as described in detail in the Supplementary Information. The fluorinated diphenylsulfones were obtained by reaction of lithio(fluoroarenes), obtained through bromine-lithium exchange or deprotonation, with 1,2-diphenyldisulfane, followed by oxidation with 3-chlorobenzoperoxoic acid. Transient electron spin resonance spectroscopy Dilute solutions were prepared at 500 μM in toluene in a nitrogen environment. The solutions were contained within 3.8 mm O.D. clear-fused quartz tubes and degassed by 4 freeze-pump-thaw cycles before being flame-sealed. Measurement of transient continuous wave ESR (trESR) was performed in the Centre for Advanced ESR (CAESR) in the Department of Chemistry of the University of Oxford, using a Bruker BioSpin EleXSys I E680 at X-band (9.7 GHz, 0.2 mW) with an ER 4118X-MD5W resonator. The temperature was controlled with an Oxford Instruments CF935O cryostat under liquid helium flow and an ITC-503S controller. An Opotek Opolete HE355 laser was used for optical excitation of the samples and it was synchronised to the spectrometer by a Stanford Research DG645 delay generator. 
A Stanford Research SR560 low-noise preamplifier (3–300 kHz) was used on the Schottky diode-detected CW-mode signal in place of the video amplifier, following verification against the quadrature mix-down DC-AFC transient-mode signal at the 20 MHz bandwidth setting. Spectra were recorded for between 4 and 12 h. Data were processed with MATLAB (The MathWorks, Natick, NJ) and ESR simulations made use of the EasySpin routines53. The linear-response TD-DFT calculations were performed at the LC-BLYP/6-31G(d) level within the Tamm–Dancoff approximation83, as implemented in the Gaussian 16 programme. The geometry optimisation of the minimum-energy conical intersection between the 3LE and the 3CT was performed with the GRRM17 programme84, which refers to the energy and gradient calculated by the Gaussian 16 programme. \(\left\langle {\varPsi }_{^1{CT}}|{\hat{H}}_{{SO}}|{\varPsi }_{^3{LE}}\right\rangle\) was calculated using TD-DFT at the LC-BLYP/6-31G(d) level with the scalar relativistic zeroth-order regular approximation (ZORA) Hamiltonian85,86, as implemented in the ORCA 4.2.1 programme. The ZFS parameters of the 3LE and the 3CT were simulated using DFT at the PBE0/6-31G(d) level, with the spin–spin term calculated from the spin densities derived from the unrestricted natural orbitals57 and the spin–orbit term calculated by the coupled-perturbed SOC approach58, as implemented in the ORCA 4.2.1 programme. Steady-state absorption and emission spectra Dilute solutions were prepared at 100 μM in toluene. UV–vis absorption spectra were measured using a Cary 5000 UV–vis–NIR spectrophotometer. PL spectra were recorded on a Horiba FL3-2i Fluorometer equipped with a liquid nitrogen attachment, in solution at ambient temperature, or as steady-state or gated spectra at 77 K. Time-resolved photoluminescence Dilute solutions were prepared at 100 μM in deoxygenated toluene in a nitrogen environment and subsequently measured in a nitrogen environment. Time-resolved PL spectra were recorded using an electrically-gated intensified charge-coupled device (ICCD) camera (Andor iStar DH740 CCI-010) connected to a calibrated grating spectrograph (Andor SR303i). Pulsed 325 nm photoexcitation was provided at a repetition rate of 1 kHz. Temporal evolution of the PL was obtained by stepping the ICCD gate delay with respect to the excitation pulse. The minimum gate width of the ICCD was 5 ns. Recorded data were subsequently corrected to account for camera sensitivity. The data underlying this article are available at: https://doi.org/10.6084/m9.figshare.14428766. Code availability Codes used to analyse data in this manuscript are available from the corresponding author upon reasonable request. Uoyama, H., Goushi, K., Shizu, K., Nomura, H. & Adachi, C. Highly efficient organic light-emitting diodes from delayed fluorescence. Nature 492, 234–238 (2012). Goushi, K., Yoshida, K., Sato, K. & Adachi, C. Organic light-emitting diodes employing efficient reverse intersystem crossing for triplet-to-singlet state conversion. Nat. Photonics 6, 253–258 (2012). Congrave, D. G. et al. A simple molecular design strategy for delayed fluorescence toward 1000 nm. J. Am. Chem. Soc. 141, 18390–18394 (2019). Freeman, D. M. E. et al. Synthesis and exciton dynamics of donor-orthogonal acceptor conjugated polymers: reducing the singlet-triplet energy gap. J. Am. Chem. Soc. 139, 11073–11080 (2017). Park, I. S., Matsuo, K., Aizawa, N. & Yasuda, T. 
High-performance dibenzoheteraborin-based thermally activated delayed fluorescence emitters: molecular architectonics for concurrently achieving narrowband emission and efficient triplet-singlet spin conversion. Adv. Funct. Mater. 28, 1802031 (2018). Kim, J. U. et al. Nanosecond-time-scale delayed fluorescence molecule for deep-blue OLEDs with small efficiency rolloff. Nat. Commun. 11, 1765 (2020). Article ADS CAS PubMed PubMed Central Google Scholar Huang, R. et al. Balancing charge-transfer strength and triplet states for deep-blue thermally activated delayed fluorescence with an unconventional electron rich dibenzothiophene acceptor. J. Mater. Chem. C. 7, 13224–13234 (2019). Masui, K., Nakanotani, H. & Adachi, C. Analysis of exciton annihilation in high-efficiency sky-blue organic light-emitting diodes with thermally activated delayed fluorescence. Org. Electron. 14, 2721–2726 (2013). Li, T.-Y. et al. Rational design of phosphorescent iridium(III) complexes for emission color tunability and their applications in OLEDs. Coord. Chem. Rev. 374, 55–92 (2018). Song, W. & Lee, J. Y. Degradation mechanism and lifetime improvement strategy for blue phosphorescent organic light-emitting diodes. Adv. Opt. Mater. 5, 1600901 (2017). Giebink, N. C. et al. Intrinsic luminance loss in phosphorescent small-molecule organic light emitting devices due to bimolecular annihilation reactions. J. Appl. Phys. 103, 044509 (2008). Seifert, R. et al. Chemical degradation mechanisms of highly efficient blue phosphorescent emitters used for organic light emitting diodes. Org. Electron. 14, 115–123 (2013). Moraes, I. R. De, Scholz, S., Lüssem, B. & Leo, K. Analysis of chemical degradation mechanism within sky blue phosphorescent organic light emitting diodes by laser-desorption/ionization time-of-flight mass spectrometry. Org. Electron. 12, 341–347 (2011). Chou, P. Y. et al. Efficient delayed fluorescence via triplet–triplet annihilation for deep-blue electroluminescence. Chem. Commun. 50, 6869–6871 (2014). Peng, L. et al. Efficient soluble deep blue electroluminescent dianthracenylphenylene emitters with CIE y (y≤0.08) based on triplet-triplet annihilation. Sci. Bull. 64, 774–781 (2019). Ieuji, R., Goushi, K. & Adachi, C. Triplet–triplet upconversion enhanced by spin–orbit coupling in organic light-emitting diodes. Nat. Commun. 10, 5283 (2019). Kondakov, D. Y., Pawlik, T. D., Hatwar, T. K. & Spindler, J. P. Triplet annihilation exceeding spin statistical limit in highly efficient fluorescent organic light-emitting diodes. J. Appl. Phys. 106, 124510 (2009). Samanta, P. K., Kim, D., Coropceanu, V. & Brédas, J.-L. Up-conversion intersystem crossing rates in organic emitters for thermally activated delayed fluorescence: impact of the nature of singlet vs triplet excited states. J. Am. Chem. Soc. 139, 4042–4051 (2017). Ogiwara, T., Wakikawa, Y. & Ikoma, T. Mechanism of intersystem crossing of thermally activated delayed fluorescence molecules. J. Phys. Chem. A 119, 3415–3418 (2015). Evans, E. W. et al. Vibrationally assisted intersystem crossing in benchmark thermally activated delayed fluorescence molecules. J. Phys. Chem. Lett. 9, 4053–4058 (2018). Olivier, Y., Moral, M., Muccioli, L. & Sancho-García, J.-C. Dynamic nature of excited states of donor–acceptor TADF materials for OLEDs: how theory can reveal structure–property relationships. J. Mater. Chem. C. 5, 5718–5729 (2017). Eng, J. & Penfold, T. J. Understanding and designing thermally activated delayed fluorescence emitters: beyond the energy gap approximation. 
Chem. Rec. 1–27 (2020) https://doi.org/10.1002/tcr.202000013. Olivier, Y. et al. Nature of the singlet and triplet excitations mediating thermally activated delayed fluorescence. Phys. Rev. Mater. 1, 1–6 (2017). El‐Sayed, M. A. Spin—orbit coupling and the radiationless processes in nitrogen heterocyclics. J. Chem. Phys. 38, 2834–2838 (1963). Salem, L. & Rowland, C. The electronic properties of diradicals. Angew. Chem. Int. Ed. Engl. 11, 92–111 (1972). Penfold, T. J., Gindensperger, E., Daniel, C., & Marian, C. M. Spin-vibronic mechanism for intersystem crossing. Chem. Rev. 118, 6975–7025 (2018). Marian, C. M. Spin-orbit coupling and intersystem crossing in molecules. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2, 187–203 (2012). Kleinschmidt, M., Tatchen, J., & Marian, C. M. Spin-orbit coupling of DFT/MRCI wavefunctions: method, test calculations, and application to thiophene. J. Comput. Chem. 23, 824–833 (2002). Chen, X. K., Zhang, S. F., Fan, J. X. & Ren, A. M. Nature of highly efficient thermally activated delayed fluorescence in organic light-emitting diode emitters: nonadiabatic effect between excited states. J. Phys. Chem. C. 119, 9728–9733 (2015). Gibson, J., Monkman, A. P. & Penfold, T. J. The importance of vibronic coupling for efficient reverse intersystem crossing in thermally activated delayed fluorescence molecules. ChemPhysChem 17, 2956–2961 (2016). Etherington, M. K., Gibson, J., Higginbotham, H. F., Penfold, T. J. & Monkman, A. P. Revealing the spin–vibronic coupling mechanism of thermally activated delayed fluorescence. Nat. Commun. 7, 13680 (2016). Gibson, J. & Penfold, T. J. Nonadiabatic coupling reduces the activation energy in thermally activated delayed fluorescence. Phys. Chem. Chem. Phys. 19, 8428–8434 (2017). Tsuchiya, Y. et al. Molecular design based on donor-weak donor scaffold for blue thermally-activated delayed fluorescence designed by combinatorial DFT calculations. Front. Chem. 8, 2–11 (2020). Aizawa, N., Harabuchi, Y., Maeda, S. & Pu, Y.-J. Kinetic prediction of reverse intersystem crossing in organic donor–acceptor molecules. Nat. Commun. 1–6 (2020) https://doi.org/10.26434/chemrxiv.12203240. Santos, P. L. et al. Engineering the singlet–triplet energy splitting in a TADF molecule. J. Mater. Chem. C. 4, 3815–3824 (2016). Dias, F. B. et al. The role of local triplet excited states and D-A relative orientation in thermally activated delayed fluorescence: photophysics and devices. Adv. Sci. 3, 1600080 (2016). Hou, Y. et al. Charge separation, charge recombination, long-lived charge transfer state formation and intersystem crossing in organic electron donor/acceptor dyads. J. Mater. Chem. C. 7, 12048–12074 (2019). Biskup, T. Structure–function relationship of organic semiconductors: detailed insights from time-resolved EPR spectroscopy. Front. Chem. 7, 10 (2019). Kraffert, F. et al. Charge separation in PCPDTBT:PCBM blends from an EPR perspective. J. Phys. Chem. C. 118, 28482–28493 (2014). Thomson, S. A. J. et al. Charge separation and triplet exciton formation pathways in small-molecule solar cells as studied by time-resolved EPR spectroscopy. J. Phys. Chem. C. 121, 22707–22719 (2017). Weiss, L. R. et al. Strongly exchange-coupled triplet pairs in an organic semiconductor. Nat. Phys. 13, 176–181 (2017). Tait, C. E., Bedi, A., Gidron, O. & Behrends, J. Photoexcited triplet states of twisted acenes investigated by Electron Paramagnetic Resonance. Phys. Chem. Chem. Phys. 21, 21588–21595 (2019). Bayliss, S. L. et al. 
Spin signatures of exchange-coupled triplet pairs formed by singlet fission. Phys. Rev. B 94, 045204 (2016). Bae, Y. J. et al. Competition between singlet fission and spin‐orbit‐induced intersystem crossing in anthanthrene and anthanthrone derivatives. Chempluschem 84, 1432–1438 (2019). Dong, Y. et al. Spin–orbit charge-transfer intersystem crossing (SOCT-ISC) in bodipy-phenoxazine dyads: effect of chromophore orientation and conformation restriction on the photophysical properties. J. Phys. Chem. C. 123, 22793–22811 (2019). Zhao, Y. et al. Efficient intersystem crossing in the Tröger's base derived from 4‐Amino‐1,8‐naphthalimide and application as a potent photodynamic therapy reagent. Chem. – A Eur. J. 26, 3591–3599 (2020). Wang, Z. et al. Insights into the efficient intersystem crossing of bodipy-anthracene compact dyads with steady-state and time-resolved optical/magnetic spectroscopies and observation of the delayed fluorescence. J. Phys. Chem. C. 123, 265–274 (2019). Richert, S., Limburg, B., Anderson, H. L. & Timmel, C. R. On the influence of the bridge on triplet state delocalization in linear porphyrin oligomers. J. Am. Chem. Soc. 139, 12003–12008 (2017). Olshansky, J. H., Zhang, J., Krzyaniak, M. D., Lorenzo, E. R. & Wasielewski, M. R. Selectively addressable photogenerated spin qubit pairs in DNA hairpins. J. Am. Chem. Soc. 142, 3346–3350 (2020). Li, Y. et al. The role of fluorine-substitution on the π-bridge in constructing effective thermally activated delayed fluorescence molecules. J. Mater. Chem. C. 6, 5536–5541 (2018). Richert, S., Tait, C. E. & Timmel, C. R. Delocalisation of photoexcited triplet states probed by transient EPR and hyperfine spectroscopy. J. Magn. Reson. 280, 103–116 (2017). Stoll, S. & Schweiger, A. EasySpin, a comprehensive software package for spectral simulation and analysis in EPR. J. Magn. Reson. 178, 42–55 (2006). Saiful, I. S. M. et al. Interplanar interactions in the excited triplet states of carbazole dimers by means of time-resolved EPR spectroscopy. Mol. Phys. 104, 1535–1542 (2006). Aloïse, S. et al. The benzophenone S1(n,π*) →T1(n,π *) states intersystem crossing reinvestigated by ultrafast absorption spectroscopy and multivariate curve resolution. J. Phys. Chem. A 112, 224–231 (2008). Dos Santos, P. L., Etherington, M. K. & Monkman, A. P. Chemical and conformational control of the energy gaps involved in the thermally activated delayed fluorescence mechanism. J. Mater. Chem. C. 6, 4842–4853 (2018). Iikura, H., Tsuneda, T., Yanai, T. & Hirao, K. A long-range correction scheme for generalized-gradient-approximation exchange functionals. J. Chem. Phys. 115, 3540–3544 (2001). Sinnecker, S. & Neese, F. Spin−spin contributions to the zero-field splitting tensor in organic triplets, carbenes and biradicals—a density functional and ab initio study. J. Phys. Chem. A 110, 12267–12275 (2006). Neese, F. Calculation of the zero-field splitting tensor on the basis of hybrid density functional and Hartree-Fock theory. J. Chem. Phys. 127, 164112 (2007). Budil, D. E. & Thurnauer, M. C. The chlorophyll triplet state as a probe of structure and function in photosynthesis. BBA - Bioenerg. 1057, 1–41 (1991). Marian, C. M. Mechanism of the triplet-to-singlet upconversion in the assistant dopant ACRXTN. J. Phys. Chem. C. 120, 3715–3721 (2016). Kim, D. H. et al. High-efficiency electroluminescence and amplified spontaneous emission from a thermally activated delayed fluorescent near-infrared emitter. Nat. Photonics 12, 98–104 (2018). Antheunis, D. A., Schmidt, J. 
& van der Waals, J. H. Spin-forbidden radiationless processes in isoelectronic molecules: anthracene, acridine and phenazine. Mol. Phys. 27, 1521–1541 (1974). Baryshnikov, G., Minaev, B. & Ågren, H. Theory and calculation of the phosphorescence phenomenon. Chem. Rev. 117, 6500–6537 (2017). Keijzers, C. P. & Haarer, D. EPR spectroscopy of delocalized and localized charge-transfer excitons in phenanthrene-PMDA single crystals. J. Chem. Phys. 67, 925–932 (1977). Di Valentin, M., Salvadori, E., Barone, V. & Carbonera, D. Unravelling electronic and structural requisites of triplet-triplet energy transfer by advanced electron paramagnetic resonance and density functional theory. Mol. Phys. 111, 2914–2932 (2013). Tait, C. E., Neuhaus, P., Peeks, M. D., Anderson, H. L. & Timmel, C. R. Transient EPR reveals triplet state delocalization in a series of cyclic and linear π-conjugated porphyrin oligomers. J. Am. Chem. Soc. 137, 8284–8293 (2015). Tatchen, J., Gilka, N., & Marian, C. M. Intersystem crossing driven by vibronic spin-orbit coupling: a case study on psoralen. Phys. Chem. Chem. Phys. 9, 5209–5221 (2007). Hou, Y. et al. Spin–orbit charge recombination intersystem crossing in phenothiazine–anthracene compact dyads: effect of molecular conformation on electronic coupling, electronic transitions, and electron spin polarizations of the triplet states. J. Phys. Chem. C. 122, 27850–27865 (2018). Nobuyasu, R. S. et al. The influence of molecular geometry on the efficiency of thermally activated delayed fluorescence. J. Mater. Chem. C. 7, 6672–6684 (2019). Zhang, Q. et al. Efficient blue organic light-emitting diodes employing thermally activated delayed fluorescence. Nat. Photonics 8, 326–332 (2014). Dance, Z. E. X. et al. Intersystem crossing mediated by photoinduced intramolecular charge transfer: julolidine−anthracene molecules with perpendicular π systems. J. Phys. Chem. A 112, 4194–4201 (2008). Etherington, M. K. et al. Regio- and conformational isomerization critical to design of efficient thermally-activated delayed fluorescence emitters. Nat. Commun. 8, 14987 (2017). dos Santos, P. L., Ward, J. S., Bryce, M. R. & Monkman, A. P. Using guest–host interactions to optimize the efficiency of TADF OLEDs. J. Phys. Chem. Lett. 7, 3341–3346 (2016). Sharma, N. et al. Exciton efficiency beyond the spin statistical limit in organic light emitting diodes based on anthracene derivatives. J. Mater. Chem. C. 8, 3773–3783 (2020). Tang, G. et al. Red thermally activated delayed fluorescence and the intersystem crossing mechanisms in compact naphthalimide-phenothiazine electron donor/acceptor dyads. J. Phys. Chem. C. 123, 30171–30186 (2019). Noda, H. et al. Critical role of intermediate electronic states for spin-flip processes in charge-transfer-type organic molecules with multiple donors and acceptors. Nat. Mater. 18, 1084–1090 (2019). Hosokai, T. et al. Evidence and mechanism of efficient thermally activated delayed fluorescence promoted by delocalized excited states. Sci. Adv. 3, e1603282 (2017). Article ADS PubMed PubMed Central CAS Google Scholar Nelson, J. N. et al. CNOT gate operation on a photogenerated molecular electron spin-qubit pair. J. Chem. Phys. 152, 014503 (2020). Rugg, B. K. et al. Photodriven quantum teleportation of an electron spin state in a covalent donor–acceptor–radical system. Nat. Chem. 11, 981–986 (2019). Schott, S. et al. Tuning the effective spin-orbit coupling in molecular semiconductors. Nat. Commun. 8, 15200 (2017). 
Article ADS PubMed PubMed Central Google Scholar Wasielewski, M. R. et al. Exploiting chemistry and molecular systems for quantum information science. Nat. Rev. Chem. (2020) https://doi.org/10.1038/s41570-020-0200-5. Szumska, A. A., Sirringhaus, H. & Nelson, J. Symmetry based molecular design for triplet excitation and optical spin injection. Phys. Chem. Chem. Phys. 21, 19521–19528 (2019). Hirata, S. & Head-Gordon, M. Time-dependent density functional theory within the Tamm–Dancoff approximation. Chem. Phys. Lett. 314, 291–299 (1999). Maeda, S., Ohno, K. & Morokuma, K. Updated branching plane for finding conical intersections without coupling derivative vectors. J. Chem. Theory Comput. 6, 1538–1545 (2010). Van Lenthe, E., Van Leeuwen, R., Baerends, E. J. & Snijders, J. G. Relativistic regular two-component hamiltonians. Int. J. Quantum Chem. 57, 281–293 (1996). Van Wüllen, C. Molecular density functional calculations in the regular relativistic approximation: method, application to coinage metal diatomics, hydrides, fluorides and chlorides, and comparison with first-order relativistic calculations. J. Chem. Phys. 109, 392–399 (1998). B.H.D. thanks Dr Saul Jones and Daniel Sowood for useful discussions regarding interpretation of photophysical and trESR data, and thanks Dr Yi-Ting Lee and Antti Reponen for preliminary TD-DFT calculations and interpretation. B.H.D. and E.W.E. thank Dr Claudia Tait and Prof. Sir. Richard H. Friend for informative discussions of the ESR and optoelectronics context. This work was supported by the Engineering and Physical Sciences Research Council (grant no. EP/M005143/1) and the European Research Council (ERC). B.H.D. acknowledges support from the EPSRC Cambridge NanoDTC (grant no. EP/L015978/1). E.W.E is grateful to the Leverhulme Trust (ECF-2019-054) and Isaac Newton Trust for Fellowship funding. E.W.E is thankful to the Royal Society for a University Research Fellowship (URF/R1/201300). N.A. acknowledges support from the JST PRESTO (grant no. JPMJPR17N1). Y. X. thanks the China Scholarship Council for a Fellowship. The Centre for Advanced Electron Spin Resonance is supported by the EPSRC (EP/L011972/1). Department of Physics, Cavendish Laboratory, J J Thomson Avenue, University of Cambridge, Cambridge, UK Bluebell H. Drummond, Qinying Gu, Leah R. Weiss, Alexander J. Gillett, Dan Credgington & Emrys W. Evans Centre for Advanced Electron Spin Resonance (CAESR), Department of Chemistry, University of Oxford, Inorganic Chemistry Laboratory, Oxford, UK Bluebell H. Drummond & William K. Myers RIKEN Center for Emergent Matter Science (CEMS), Saitama, Japan Naoya Aizawa & Yong-Jin Pu School of Chemistry and Biochemistry and Center for Organic Photonics and Electronics, Georgia Institute of Technology, Atlanta, GA, USA Yadong Zhang, Yao Xiong, Matthew W. Cooper, Stephen Barlow & Seth R. Marder Pritzker School of Molecular Engineering, University of Chicago, Chicago, IL, USA Leah R. Weiss Department of Chemistry, Swansea University, Swansea, UK Emrys W. Evans Bluebell H. Drummond Naoya Aizawa Yadong Zhang William K. Myers Yao Xiong Matthew W. Cooper Stephen Barlow Qinying Gu Alexander J. Gillett Dan Credgington Yong-Jin Pu Seth R. Marder B.H.D., N.A., Y.Z., D.C., S.R.M., E.W.E. designed research; B.H.D. and W.K.M. performed and analysed trESR; B.H.D. and M.C. performed and analysed emission and absorption spectra; N.A. performed the computational calculations, the results of which were analysed by N.A. and B.H.D.; Y.Z., Y.X., M.C., S.B. and S.R.M. 
designed the materials and Y.Z., Y.X. and M.C. synthesised the materials and characterised their optical properties; Q.G. fabricated and measured OLEDs; L.R.W. and A.J.G. contributed to discussion of the work; all authors wrote the paper; S.B., D.C., Y.J.P., S.R.M., E.W.E. supervised the research. Correspondence to Emrys W. Evans. Peer review information Nature Communications thanks Marc Etherington and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Drummond, B.H., Aizawa, N., Zhang, Y. et al. Electron spin resonance resolves intermediate triplet states in delayed fluorescence. Nat Commun 12, 4532 (2021). https://doi.org/10.1038/s41467-021-24612-9 Received: 21 October 2020
CommonCrawl
Convolutional Extreme Learning Machines: A Systematic Review Iago Richard Rodrigues, Sebastião Rogério, Judith Kelner, Djamel Sadok, Patricia Takako Endo Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Convolutional extreme learning machine; Deep learning; Multimedia analysis Online: 28 April 2021 (15:31:14 CEST) Many works have recently identified the need to combine deep learning with extreme learning to strike a performance balance with accuracy especially in the domain of multimedia applications. Considering this new paradigm, namely convolutional extreme learning machine (CELM), we present a systematic review that investigates alternative deep learning architectures that use extreme learning machine (ELM) for a faster training to solve problems based on image analysis. We detail each of the architectures found in the literature, application scenarios, benchmark datasets, main results, advantages, and present the open challenges for CELM. We follow a well structured methodology and establish relevant research questions that guide our findings. We hope that the observation and classification of such works can leverage the CELM research area providing a good starting point to cope with some of the current problems in the image-based computer vision analysis. Extreme Learning Machines as Encoders for Sparse Reconstruction Abdullah Al-Mamun, Chen Lu, Balaji Jayaraman Subject: Engineering, Mechanical Engineering Keywords: sparse reconstruction, extreme learning machines, sensors, SVD, POD, compressive sensing Online: 21 August 2018 (16:18:06 CEST) Reconstruction of fine-scale information from sparse data is often needed in practical fluid dynamics where the sensors are typically sparse and yet, one may need to learn the underlying flow structures or inform predictions through assimilation into data-driven models. Given that sparse reconstruction is inherently an ill-posed problem, the most successful approaches encode the physics into an underlying sparse basis space that spans the manifold to generate well-posedness. To achieve this, one commonly uses generic orthogonal Fourier basis or data specific proper orthogonal decomposition (POD) basis to reconstruct from sparse sensor information at chosen locations. Such a reconstruction problem is well-posed as long as the sensor locations are incoherent and can sample the key physical mechanisms. The resulting inverse problem is easily solved using $l_2$ minimization or if necessary, sparsity promoting $l_1$ minimization. Given the proliferation of machine learning and the need for robust reconstruction frameworks in the face of dynamically evolving flows, we explore in this study the suitability of non-orthogonal basis obtained from Extreme Learning Machine (ELM) auto-encoders for sparse reconstruction. In particular, we assess the interplay between sensor quantity and sensor placement for a given system dimension for accurate reconstruction of canonical fluid flows in comparison to POD-based reconstruction. 
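The l2 reconstruction step described in the abstract above can be illustrated in a few lines. This is our sketch on synthetic data: a truncated SVD basis stands in for POD, and the ELM-autoencoder basis the abstract investigates would simply replace that basis matrix.

```python
# Minimal l2 sparse-reconstruction sketch (ours, synthetic data): learn a low-dimensional
# basis from training snapshots via SVD (a POD stand-in), then recover a full field from a
# handful of point sensors by least squares. An ELM-autoencoder basis would replace Phi.
import numpy as np

rng = np.random.default_rng(1)
n, n_snapshots, k, n_sensors = 400, 200, 8, 20

# Synthetic training snapshots built from k latent modes
modes = rng.normal(size=(n, k))
train = modes @ rng.normal(size=(k, n_snapshots))

Phi, _, _ = np.linalg.svd(train, full_matrices=False)
Phi = Phi[:, :k]                                   # truncated basis, shape (n, k)

x_true = modes @ rng.normal(size=k)                # unseen field to reconstruct
sensors = rng.choice(n, size=n_sensors, replace=False)
y = x_true[sensors]                                # sparse point measurements

a, *_ = np.linalg.lstsq(Phi[sensors, :], y, rcond=None)   # fit the basis coefficients
x_hat = Phi @ a
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.2e}")
```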
Online Sequential Extreme Learning Machine: A New Training Scheme for Restricted Boltzmann Machines BERGHOUT Tarek Subject: Mathematics & Computer Science, Computational Mathematics Keywords: restricted Boltzmann machine; contrastive divergence; extreme learning machine; online sequential extreme learning machine; autoencoders; deep belief network; deep learning Online: 27 May 2020 (08:18:39 CEST) Abstract: The main contribution of this paper is to introduce a new iterative training algorithm for restricted Boltzmann machines. The proposed learning path is inspired by the online sequential extreme learning machine, one of the extreme learning machine variants, which deals with time-accumulated sequences of data of fixed or varying size. Recursive least squares rules are integrated for weight adaptation to avoid learning-rate tuning and local minimum issues. The proposed approach is compared to one of the well-known training algorithms for Boltzmann machines, named "contrastive divergence", in terms of time, accuracy and algorithmic complexity under the same conditions. Results strongly support the proposed rules for data reconstruction. Classification of Alzheimer's Disease with and without Imagery Using Gradient Boosted Machines and ResNet-50 Lawrence V. Fulton, Diane Dolezel, Jordan Harrop, Yan Yan, Christopher P. Fulton Subject: Medicine & Pharmacology, Clinical Neurology Keywords: Alzheimer's Disease; Extreme Gradient Boosting; Deep Residual Learning; convolutional neural networks; machine learning; dementia Alzheimer's is a disease for which there is no cure. Diagnosing Alzheimer's Disease (AD) early facilitates family planning and cost control. The purpose of this study is to predict the presence of AD using socio-demographic, clinical, and Magnetic Resonance Imaging (MRI) data. Early detection of AD enables family planning and may reduce costs by delaying long-term care. Accurate, non-imagery methods also reduce patient costs. The Open Access Series of Imaging Studies (OASIS-1) cross-sectional MRI data were analyzed. A gradient boosted machine (GBM) predicted the presence of AD as a function of gender, age, education, socioeconomic status (SES), and Mini-Mental State Exam (MMSE). A Residual Network with 50 layers (ResNet-50) predicted CDR presence and severity from MRIs (multi-class classification). The GBM achieved a mean 91.3% prediction accuracy (10-fold stratified cross-validation) for dichotomous CDR using socio-demographic and MMSE variables. MMSE was the most important feature. ResNet-50 using image generation techniques based on an 80% training set resulted in 98.99% three-class prediction accuracy on 4,139 images (20% validation set) at Epoch 133 and nearly perfect multi-class prediction accuracy on the training set (99.34%). Machine Learning methods classify AD with high accuracy. GBM models may help provide initial detection based on non-imagery analysis, while ResNet-50 network models might help identify AD patients automatically prior to provider review. SuperHyperGirth Type-Results on extreme SuperHyperGirth theory and (Neutrosophic) SuperHyperGraphs Toward Cancer's extreme Recognition Henry Garrett Subject: Mathematics & Computer Science, Applied Mathematics Keywords: extreme SuperHyperGraph; (extreme) SuperHyperGirth; Cancer's extreme Recognition In this research, the extreme SuperHyperNotion, namely, extreme SuperHyperGirth, is up. 
$E_1$ and $E_3$ are some empty extreme SuperHyperEdges but $E_2$ is a loop extreme SuperHyperEdge and $E_4$ is an extreme SuperHyperEdge. Thus in the terms of extreme SuperHyperNeighbor, there's only one extreme SuperHyperEdge, namely, $E_4.$ The extreme SuperHyperVertex, $V_3$ is extreme isolated means that there's no extreme SuperHyperEdge has it as an extreme endpoint. Thus the extreme SuperHyperVertex, $V_3,$ is excluded in every given extreme SuperHyperGirth. $ \mathcal{C}(NSHG)=\{E_i\}~\text{is an extreme SuperHyperGirth.} \ \ \mathcal{C}(NSHG)=jz^i~\text{is an extreme SuperHyperGirth SuperHyperPolynomial.} \ \ \mathcal{C}(NSHG)=\{V_i\}~\text{is an extreme R-SuperHyperGirth.} \ \ \mathcal{C}(NSHG)=jz^I~{\small\text{is an extreme R-SuperHyperGirth SuperHyperPolynomial.}} $ The following extreme SuperHyperSet of extreme SuperHyperEdges[SuperHyperVertices] is the extreme type-SuperHyperSet of the extreme SuperHyperGirth. The extreme SuperHyperSet of extreme SuperHyperEdges[SuperHyperVertices], is the extreme type-SuperHyperSet of the extreme SuperHyperGirth. The extreme SuperHyperSet of the extreme SuperHyperEdges[SuperHyperVertices], is an extreme SuperHyperGirth $\mathcal{C}(ESHG)$ for an extreme SuperHyperGraph $ESHG:(V,E)$ is an extreme type-SuperHyperSet with the maximum extreme cardinality of an extreme SuperHyperSet $S$ of extreme SuperHyperEdges[SuperHyperVertices] such that there's only one extreme consecutive sequence of the extreme SuperHyperVertices and the extreme SuperHyperEdges form only one extreme SuperHyperCycle. There are not only four extreme SuperHyperVertices inside the intended extreme SuperHyperSet. Thus the non-obvious extreme SuperHyperGirth isn't up. The obvious simple extreme type-SuperHyperSet called the extreme SuperHyperGirth is an extreme SuperHyperSet includes only less than four extreme SuperHyperVertices. But the extreme SuperHyperSet of the extreme SuperHyperEdges[SuperHyperVertices], doesn't have less than four SuperHyperVertices inside the intended extreme SuperHyperSet. Thus the non-obvious simple extreme type-SuperHyperSet of the extreme SuperHyperGirth isn't up. To sum them up, the extreme SuperHyperSet of the extreme SuperHyperEdges[SuperHyperVertices], isn't the non-obvious simple extreme type-SuperHyperSet of the extreme SuperHyperGirth. Since the extreme SuperHyperSet of the extreme SuperHyperEdges[SuperHyperVertices], is an extreme SuperHyperGirth $\mathcal{C}(ESHG)$ for an extreme SuperHyperGraph $ESHG:(V,E)$ is the extreme SuperHyperSet $S$ of extreme SuperHyperVertices[SuperHyperEdges] such that there's only one extreme consecutive extreme sequence of extreme SuperHyperVertices and extreme SuperHyperEdges form only one extreme SuperHyperCycle given by that extreme type-SuperHyperSet called the extreme SuperHyperGirth and it's an extreme SuperHyperGirth . Since it 's the maximum extreme cardinality of an extreme SuperHyperSet $S$ of extreme SuperHyperEdges[SuperHyperVertices] such that there's only one extreme consecutive extreme sequence of extreme SuperHyperVertices and extreme SuperHyperEdges form only one extreme SuperHyperCycle. There are only less than four extreme SuperHyperVertices inside the intended extreme SuperHyperSet, thus the obvious extreme SuperHyperGirth, is up. 
The obvious simple extreme type-SuperHyperSet of the extreme SuperHyperGirth is the extreme SuperHyperSet that includes only fewer than four SuperHyperVertices in a connected extreme SuperHyperGraph $ESHG:(V,E).$ It's interesting to mention that the only simple extreme type-SuperHyperSet called the extreme SuperHyperGirth amid those obvious [non-obvious] simple extreme type-SuperHyperSets called the neutrosophic SuperHyperGirth, is only and only the extreme SuperHyperSet given above. A basic familiarity with extreme SuperHyperGirth theory, SuperHyperGraphs, and extreme SuperHyperGraphs theory is proposed. Extreme Learning Machine for Robustness Enhancement of Gas Detection Based on Tunable Diode Laser Absorption Spectroscopy Wenhai Ji, Li Zhong, Ying Ma, Di Song, Xiaocui Lv, Chuantao Zheng, Guolin Li Subject: Physical Sciences, Atomic & Molecular Physics Keywords: gas analyzers; optical sensors; TDLAS; extreme learning machine Online: 28 December 2018 (05:05:28 CET) In this work, a tailored extreme learning machine (ELM) algorithm to enhance the overall robustness of a gas analyzer based on the tunable diode laser absorption spectroscopy (TDLAS) method is presented. The ELM model is tailored through activation function selection, input weight and bias searching, and a cross-validation method to address the analyzer robustness issues for industrial process analysis field applications. The two particular issues are the inaccurate gas concentration measurement caused by variation in the process gas background components, and the inaccurate spectra shift calculation caused by spectral interference. By using our algorithm, the concentration error is reduced by one order of magnitude over a much larger stream pressure and component range compared with that obtained by classical least square (CLS) fitting methods based on reference curves. Additionally, it is shown that with our algorithm, the wavelength shift accuracy is improved to less than 1 count over a 1000-count spectral length. In order to test the viability of our algorithm, a trace ethylene (C2H4) TDLAS analyzer with coexisting methane was implemented, and its experimental measurements support the analyzer robustness enhancement effect. Landslide Susceptibility Mapping at Two Adjacent Catchments Using Advanced Machine Learning Algorithms Ananta Man Singh Pradhan, Yun-Tae Kim Subject: Earth Sciences, Geology Keywords: Deep Neural Network; Extreme Gradient Boosting; Random Forest; Landslide Susceptibility Landslides impact human activities and socio-economic development, especially in mountainous areas. This study focuses on the comparison of the prediction capability of advanced machine learning techniques for rainfall-induced shallow landslide susceptibility of Deokjeokri catchment and Karisanri catchment in South Korea. The influencing factors for landslides, i.e. topographic, hydrologic, soil, forest, and geologic factors, are prepared from various sources based on availability, and a multicollinearity test is also performed to select relevant causative factors. The landslide inventory maps of both catchments are obtained from historical information, aerial photographs and field surveys. In this study, Deokjeokri catchment is considered as a training area and Karisanri catchment as a testing area. The landslide inventories contain 748 landslide points in training and 219 points in testing areas. Three landslide susceptibility maps using machine learning models, i.e.
Random Forest (RF), Extreme Gradient Boosting (XGBoost) and Deep Neural Network (DNN) are prepared and compared. The outcomes of the analyses are validated using the landslide inventory data. A receiver operating characteristic curve (ROC) method is used to verify the results of the models. The results of this study show that the training accuracy of RF is 0.757 and the testing accuracy is 0.74. Similarly, training accuracy of XGBoost is 0.756 and testing accuracy is 0.703. The prediction of DNN revealed acceptable agreement between susceptibility map and the existing landslides with training and testing accuracy of 0.855 and 0.802, respectively. The results showed that the DNN model achieved lower prediction error and higher accuracy than the other models for shallow landslide modeling in the study area. Extreme Learning Machine-Based Model for Solubility Estimation of Hydrocarbon Gases in Electrolyte Solutions Narjes Nabipour, Amir Mosavi, Alireza Baghban, Shahaboddin Shamshirband, Imre Felde Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: hydrocarbon gases; solubility; extreme learning machines; electrolyte solution; predicting model Online: 2 January 2020 (04:39:59 CET) Calculating the solubility of the hydrocarbon components of natural gases is known as one of the important issues for operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool has been proposed for hydrocarbon gases including methane, ethane, propane and butane in aqueous electrolyte solutions based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank of 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, the visual comparison of estimated and actual hydrocarbon solubility confirmed the ability of the proposed solubility model. Additionally, sensitivity analysis has been employed on the input variables of the model to identify their impacts on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to successfully determine the important thermodynamic properties which are key factors in optimizing and designing different industrial units such as refineries and petrochemical plants. An Extreme Learning Machine Approach to Effective Energy Disaggregation Valerio Mario Salerno, Graziella Rabbeni Subject: Engineering, Energy & Fuel Technology Keywords: Non-intrusive Load Monitoring; Machine Learning; Deep Modeling; Extreme Learning Machine; Data Driven Approach. Power disaggregation aims at determining the appliance-by-appliance electricity consumption leveraging upon a single meter only, which measures the entire power demand. Data-driven procedures based on Factorial Hidden Markov Models (FHMMs) have produced remarkable results on energy disaggregation. Nevertheless, those procedures have various weaknesses: there is a scalability problem as the number of devices to observe rises, and the algorithmic complexity of the inference step is severe. DNN architectures, such as Convolutional Neural Networks, have been demonstrated to be a viable solution to deal with the shortcomings of FHMMs.
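Several of the abstracts above (the TDLAS gas analyzer and the hydrocarbon solubility model) build on the same core ELM recipe: a randomly initialized hidden layer whose output weights are solved analytically. The sketch below is a generic, minimal illustration of that recipe on synthetic data, not a reconstruction of any of these papers' models.

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine: random hidden layer + least-squares output weights."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _features(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Input weights and biases stay random; only beta is learned, analytically.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._features(X)
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._features(X) @ self.beta

# Toy usage: fit a noisy nonlinear target and report R^2 on held-out points.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(500, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
model = ELMRegressor(n_hidden=300).fit(X[:400], y[:400])
pred = model.predict(X[400:])
ss_res = np.sum((y[400:] - pred) ** 2)
ss_tot = np.sum((y[400:] - y[400:].mean()) ** 2)
print("R^2:", 1 - ss_res / ss_tot)
```

Because there is no iterative back-propagation, training cost is essentially one matrix pseudo-inverse, which is why ELMs recur so often in the soft-sensor abstracts in this section.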
Nonetheless, there are two significant limitations: a complicated and time-consuming training system based on back-propagation has to be employed to estimate the neural architecture parameters, and large amounts of training data covering as many operating conditions as possible need to be collected to attain top performance. In this work, we aim to overcome those limitations by leveraging upon the unique and useful characteristics of the extreme learning machine technique, which is based on a collection of randomly chosen hidden units and analytically defined output weights. Experimental evaluation has been conducted using the UK-DALE corpus. We find that the suggested approach achieves similar performance to recently proposed ANN-based methods and outperforms FHMMs. Besides, our solution generalises well to unseen houses. A New Health Assessment Prediction Approach: Multi-Scale Ensemble Extreme Learning Machine Subject: Keywords: remaining useful life; c-mapss; extreme learning machine; prognostic and health management; neural networks This work can be considered as a first step in designing a future competitive data-driven approach for remaining useful life prediction of aircraft engines. The proposed approach is an ensemble of serially connected extreme learning machines. The predictions of the first networks are scaled and fed to the next networks as additive features alongside the original inputs. This feature mapping increases the correlation of the training inputs with their targets by carrying new prior knowledge about the probable behavior of the target function. The proposed approach is evaluated on remaining useful life estimation using a set of "time-varying" data retrieved from the public dataset C-MAPSS (Commercial Modular Aero Propulsion System Simulation) provided by NASA. The prediction performance is compared to a basic extreme learning machine and demonstrates the effectiveness of the proposed methodology. Extreme SuperHyperClique as the Firm Scheme of Confrontation under Cancer's Recognition as the Model in the Setting of (Neutrosophic) SuperHyperGraphs Subject: Mathematics & Computer Science, Applied Mathematics Keywords: (Neutrosophic) SuperHyperGraph; Extreme SuperHyperClique; Cancer's Extreme Recognition In this research, a new setting is introduced for assuming a SuperHyperGraph. Then a ``SuperHyperClique'' $\mathcal{C}(NSHG)$ for a neutrosophic SuperHyperGraph $NSHG:(V,E)$ is the maximum cardinality of a SuperHyperSet $S$ of SuperHyperVertices such that there's a SuperHyperVertex to have a SuperHyperEdge in common. Assume a SuperHyperGraph. Then a ``$\delta-$SuperHyperClique'' is a \underline{maximal} SuperHyperClique of SuperHyperVertices with \underline{maximum} cardinality such that either of the following expressions holds for the (neutrosophic) cardinalities of SuperHyperNeighbors of $s\in S:$ $~|S\cap N(s)| > |S\cap (V\setminus N(s))|+\delta,~|S\cap N(s)| < |S\cap (V\setminus N(s))|+\delta.$ The first Expression, holds if $S$ is a ``$\delta-$SuperHyperOffensive''.
And the second Expression, holds if $S$ is a ``$\delta-$SuperHyperDefensive''; a ``neutrosophic $\delta-$SuperHyperClique'' is a \underline{maximal} neutrosophic SuperHyperClique of SuperHyperVertices with \underline{maximum} neutrosophic cardinality such that either of the following expressions holds for the neutrosophic cardinalities of SuperHyperNeighbors of $s\in S:$ $~|S\cap N(s)|_{neutrosophic} > |S\cap (V\setminus N(s))|_{neutrosophic}+\delta,~ |S\cap N(s)|_{neutrosophic} < |S\cap (V\setminus N(s))|_{neutrosophic}+\delta.$ The first Expression, holds if $S$ is a ``neutrosophic $\delta-$SuperHyperOffensive''. And the second Expression, holds if $S$ is a ``neutrosophic $\delta-$SuperHyperDefensive''. A basic familiarity with Extreme SuperHyperClique theory, SuperHyperGraphs, and Neutrosophic SuperHyperGraphs theory is proposed. Hybrid Machine Learning Model of Extreme Learning Machine Radial Basis Function for Breast Cancer Detection and Diagnosis: A Multilayer Fuzzy Expert System Sanaz Mojrian, Gergo Pinter, Javad Hassannataj Joloudari, Imre Felde, Akos Szabo-Gali, Laszlo Nadai, Amir Mosavi Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: hybrid machine learning; extreme learning machine (ELM); radial basis function (RBF); breast cancer; support vector machine (SVM) Online: 24 February 2020 (04:10:49 CET) Mammography is often used as the most common laboratory method for the detection of breast cancer, yet it is associated with high cost and many side effects. Machine learning prediction as an alternative method has shown promising results. This paper presents a method based on a multilayer fuzzy expert system for the detection of breast cancer using an extreme learning machine (ELM) classification model integrated with a radial basis function (RBF) kernel, called ELM-RBF, considering the Wisconsin dataset. The performance of the proposed model is further compared with a linear-SVM model. The proposed model outperforms the linear-SVM model with RMSE, R2 and MAPE equal to 0.1719, 0.9374 and 0.0539, respectively. Furthermore, both models are studied in terms of the criteria of accuracy, precision, sensitivity, specificity, validation, true positive rate (TPR), and false-negative rate (FNR). For these criteria, the ELM-RBF model presents better performance than the SVM model. Breaking the Continuity and Uniformity of Cancer In The Worst Case of Full Connections With Extreme Failed SuperHyperClique In Cancer's Recognition Applied in (Neutrosophic) SuperHyperGraphs Subject: Mathematics & Computer Science, Applied Mathematics Keywords: (Neutrosophic) SuperHyperGraph, Extreme Failed SuperHyperClique, Cancer's Extreme Recognition In this research, assume a SuperHyperGraph. Then a ``Failed SuperHyperClique'' $\mathcal{C}(NSHG)$ for a neutrosophic SuperHyperGraph $NSHG:(V,E)$ is the maximum cardinality of a SuperHyperSet $S$ of SuperHyperVertices such that there's a SuperHyperVertex to have a SuperHyperEdge in common. Assume a SuperHyperGraph. Then a ``$\delta-$Failed SuperHyperClique'' is a \underline{maximal} Failed SuperHyperClique of SuperHyperVertices with \underline{maximum} cardinality such that either of the following expressions holds for the (neutrosophic) cardinalities of SuperHyperNeighbors of $s\in S:$ $~|S\cap N(s)| > |S\cap (V\setminus N(s))|+\delta,~|S\cap N(s)| < |S\cap (V\setminus N(s))|+\delta.$ The first Expression, holds if $S$ is a ``$\delta-$SuperHyperOffensive''.
And the second Expression, holds if $S$ is a ``$\delta-$SuperHyperDefensive''; a ``neutrosophic $\delta-$Failed SuperHyperClique'' is a \underline{maximal} neutrosophic Failed SuperHyperClique of SuperHyperVertices with \underline{maximum} neutrosophic cardinality such that either of the following expressions holds for the neutrosophic cardinalities of SuperHyperNeighbors of $s\in S:$ $~|S\cap N(s)|_{neutrosophic} > |S\cap (V\setminus N(s))|_{neutrosophic}+\delta,~ |S\cap N(s)|_{neutrosophic} < |S\cap (V\setminus N(s))|_{neutrosophic}+\delta.$ The first Expression, holds if $S$ is a ``neutrosophic $\delta-$SuperHyperOffensive''. And the second Expression, holds if $S$ is a ``neutrosophic $\delta-$SuperHyperDefensive''. A basic familiarity with Extreme Failed SuperHyperClique theory, Extreme SuperHyperGraphs theory, and Neutrosophic SuperHyperGraphs theory is proposed. Stacking Ensemble of Machine Learning Methods for Landslide Susceptibility Mapping in Zhangjiajie City, Hunan Province, China Yuke Huan, Lei Song, Umair Khan, Baoyi Zhang Subject: Earth Sciences, Geoinformatics Keywords: landslide susceptibility; stacking ensemble; machine learning; random forest; gradient boosting decision tree; extreme gradient boosting Online: 5 October 2022 (10:29:51 CEST) The current study aims to apply and compare the performance of six machine learning algorithms, including three basic classifiers: random forest (RF), gradient boosting decision tree (GBDT), and extreme gradient boosting (XGB), as well as their hybrid classifiers, using the logistic regression (LR) method (RF+LR, GBDT+LR, and XGB+LR), in order to map the landslide susceptibility of Zhangjiajie City, Hunan Province, China. First, a landslide inventory map was created with 206 historical landslide points and 412 non-landslide points, which was randomly divided into two datasets for model training (80%) and model testing (20%). Second, 15 landslide conditioning factors (i.e., altitude, slope, aspect, plane curvature, profile curvature, relief, roughness, rainfall, topographic wetness index (TWI), normalized difference vegetation index (NDVI), distance to roads, distance to rivers, land use/land cover (LULC), soil texture, and lithology) were initially selected to establish a landslide factor database. Thereafter, the multicollinearity test and information gain ratio (IGR) technique were applied to rank the importance of the factors. Subsequently, we used a series of metrics (e.g., accuracy, precision, recall, f-measure, area under the ROC (receiver operating characteristic) curve (AUC), kappa index, mean absolute error (MAE), and root mean square error (RMSE)) to evaluate the accuracy and performance of the six models. Based on the AUC values derived from the models, the GBDT+LR model with the highest AUC value (0.8168) was identified as the most efficient model for mapping landslide susceptibility, followed by the XGB+LR, XGB, RF+LR, GBDT, and RF models, which achieved AUC values of 0.8124, 0.8118, 0.8060, 0.7927, and 0.7883, respectively. The results from this study suggest that the stacking ensemble machine learning method is promising for use in landslide susceptibility mapping in the Zhangjiajie area and is capable of targeting the areas prone to landslides. Understanding Weather and Climate Extremes Emmanuel Eresanya, Olufemi Sunday Durowoju, Israel R.
Orimoloye, Mojolaoluwa Daramola, Akindamola Ayobami Akinyemi, Olasunkanmi Olorunsaye Subject: Earth Sciences, Atmospheric Science Keywords: climate; weather; extreme The understanding of weather and climate extremes provides academics, decision makers, international development agencies, nongovernmental organizations and civil society with the necessary information for monitoring and giving early warning to prevent or minimize the risks associated with weather-related hazards. Various studies have been carried out to provide vital information that will further enhance the assessment of vulnerability and its impacts. A lack of proper understanding of weather and climate extremes has been found to be responsible for the huge and devastating losses that could have been averted or minimized over the past decades. Different countries and institutions have put in place a number of ways to increase sensitization and awareness of weather extremes. This became necessary in order to reduce the losses associated with these extremes both on local and regional scales. Using Enhanced Sparrow Search Algorithm-Deep Extreme Learning Machine Model to Forecast End-Point Phosphorus Content of BOF Lingxiang Quan, Ailian Li, Guimei Cui, Shaofeng Xie Subject: Engineering, Industrial & Manufacturing Engineering Keywords: End-point phosphorus content; Deep extreme learning machine; Sparrow search algorithm; Trigonometric substitution; Cauchy mutation An effective technology for predicting the end-point phosphorus content of a basic oxygen furnace (BOF) can provide theoretical instruction to improve the quality of steel via controlling the hardness and toughness. Given the slightly inadequate prediction accuracy of the existing prediction model, a novel hybrid method, ESSA-DELM, was proposed in this study to more accurately predict the end-point phosphorus content by integrating a multi-strategy enhanced sparrow search algorithm (ESSA) with a deep extreme learning machine (DELM). To begin with, the input weights and hidden biases of the DELM were randomly selected, meaning that the DELM inevitably had a set of non-optimal or unnecessary weights and biases. Therefore, the ESSA was used to optimize the DELM in this work. For the ESSA, the trigonometric substitution mechanism and Cauchy mutation were introduced to avoid becoming trapped in local optima and to improve the global exploration capacity of the SSA. Finally, to evaluate the prediction efficiency of ESSA-DELM, the proposed model was tested on process data of the converter from the Baogang steel plant. The efficacy of ESSA-DELM was superior to that of other DELM-based hybrid prediction models and conventional models. The results demonstrated that the hit rate of end-point phosphorus content within ±0.003%, ±0.002%, and ±0.001% was 91.67%, 83.33%, and 63.55%, respectively. The proposed ESSA-DELM model provides better prediction accuracy compared with the other models, which could guide field operations. A Real-Time BOD Estimation Method in Wastewater Treatment Process Based on an Optimized Extreme Learning Machine Ping Yu, Jie Cao, Veeriah Jegatheesan, Xianjun Du Subject: Engineering, Control & Systems Engineering Keywords: Biochemical oxygen demand (BOD); Cuckoo search algorithm (CSA); Extreme learning machine (ELM); Soft sensor; Wastewater treatment process It is difficult to capture the real-time online measurement data for biochemical oxygen demand (BOD) in wastewater treatment processes.
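The ESSA-DELM abstract above and the cuckoo-search ELM abstract that follows share the same pattern: a metaheuristic searches over the randomly initialized input weights and hidden biases, while the output weights remain analytic. The sketch below illustrates that pattern with a plain random search standing in for the sparrow/cuckoo search, on synthetic data; it is not either paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_val_error(W, b, Xtr, ytr, Xval, yval):
    """Fit output weights analytically for a candidate (W, b) and return validation RMSE."""
    Htr = np.tanh(Xtr @ W + b)
    beta = np.linalg.pinv(Htr) @ ytr
    pred = np.tanh(Xval @ W + b) @ beta
    return np.sqrt(np.mean((pred - yval) ** 2))

# Toy data standing in for process measurements (inputs) and a quality target.
X = rng.normal(size=(500, 6))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=500)
Xtr, ytr, Xval, yval = X[:350], y[:350], X[350:], y[350:]

n_hidden = 40
best_err, best = np.inf, None
for _ in range(200):                       # random search as a metaheuristic stand-in
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    err = elm_val_error(W, b, Xtr, ytr, Xval, yval)
    if err < best_err:
        best_err, best = err, (W, b)

print("best validation RMSE:", best_err)
```

A sparrow or cuckoo search would replace the random proposals with a population that moves toward good candidates, but the objective being optimized is exactly this kind of validation error.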
An optimized extreme learning machine (ELM) based on an improved cuckoo search algorithm (ICS) is proposed in this paper for the design of a soft BOD measurement model. In ICS-ELM, the input weight matrices of the extreme learning machine (ELM) and the thresholds of the hidden layer are encoded as the cuckoos' nest locations. The best input weight matrices and thresholds are obtained by using the strong global search ability of the improved cuckoo search (ICS) algorithm. The optimal results can be used to improve the forecasting precision with fewer hidden-layer neurons in the ELM. Simulation results show that the soft sensor model has good real-time performance, high prediction accuracy and stronger generalization performance for BOD measurement of the effluent quality compared to other modeling methods such as back-propagation (BP) networks in most cases. Short and Very Short Term Firm-level Load Forecasting for Warehouses: A Comparison of Machine Learning and Deep Learning Models Andrea Maria N. C. Ribeiro, Pedro Rafael X. do Carmo, Patricia Takako Endo, Pierangelo Rosati, Theo Lynn Subject: Engineering, Energy & Fuel Technology Keywords: Very short term load forecasting; VSTLF; Short term load forecasting; STLF; deep learning; RNN; LSTM; GRU; machine learning; SVR; random forest; extreme gradient boosting; energy consumption; ARIMA; time series prediction. Commercial buildings are a significant consumer of energy worldwide. Logistics facilities, and specifically warehouses, are a common building type yet under-researched in the demand-side energy forecasting literature. Warehouses have an idiosyncratic profile when compared to other commercial and industrial buildings, with a significant reliance on a small number of energy systems. As such, warehouse owners and operators are increasingly entering into energy performance contracts with energy service companies (ESCOs) to minimise environmental impact, reduce costs, and improve competitiveness. ESCOs and warehouse owners and operators require accurate forecasts of their energy consumption so that precautionary and mitigation measures can be taken. This paper explores the performance of three machine learning models (Support Vector Regression (SVR), Random Forest, and Extreme Gradient Boosting (XGBoost)), three deep learning models (Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU)), and a classical time series model, Autoregressive Integrated Moving Average (ARIMA), for predicting daily energy consumption. The dataset comprises 8,040 records generated over an 11-month period from January to November 2020 from a non-refrigerated logistics facility located in Ireland. The grid search method was used to identify the best configurations for each model. The proposed XGBoost models outperform other models for both very short term load forecasting (VSTLF) and short term load forecasting (STLF); the ARIMA model performed the worst. Future Changes in Precipitation Extremes Over East Africa Based on CMIP6 Projections Brian Ayugi, Victor Dike, Hamida Ngoma, Hassen Babaousmail, Victor Ongoma Subject: Earth Sciences, Atmospheric Science Keywords: CMIP6; extreme precipitation; model evaluation; east Africa This paper presents an analysis of precipitation extremes over the East African region. The study employs six extreme precipitation indices defined by the Expert Team on Climate Change Detection and Indices (ETCCDI) to evaluate possible climate change.
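To make the grid-search setup in the warehouse load-forecasting abstract above concrete, the sketch below tunes a gradient-boosting regressor on lagged daily consumption with a time-series-aware cross-validation split. It uses synthetic data and scikit-learn's GradientBoostingRegressor as a stand-in for the XGBoost and deep learning models actually compared in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(4)

# Synthetic daily energy-consumption series with weekly seasonality and noise.
days = 330
load = 100 + 15 * np.sin(2 * np.pi * np.arange(days) / 7) + rng.normal(scale=5, size=days)

# Lag features: the previous 7 days predict the next day.
n_lags = 7
X = np.column_stack([load[i:days - n_lags + i] for i in range(n_lags)])
y = load[n_lags:]

param_grid = {"n_estimators": [100, 300], "max_depth": [2, 3], "learning_rate": [0.05, 0.1]}
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid,
    cv=TimeSeriesSplit(n_splits=4),          # respect temporal ordering in cross-validation
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print("best params:", search.best_params_)
print("CV RMSE:", -search.best_score_)
```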
Observed datasets and CMIP6 simulations and projections are employed to assess the changes during the two main rainfall seasons of March to May (MAM) and October to December (OND). The study evaluated the capability of CMIP6 simulations in reproducing the observed extreme events during the period 1995 – 2014. Our results show that the multi-model ensemble (herein referred to as MME) of CMIP6 models can depict the observed spatial distribution of precipitation extremes for both seasons, albeit with some noticeable exceptions in some indices. Overall, the MME assessment yields considerable confidence in employing CMIP6 for the projection of extreme events over the study area. Analysis of extreme estimations shows an increase (decrease) in CDD (CWD) during 2081 – 2100 relative to the baseline period in both seasons. Moreover, SDII, R95p, R20mm, and PRCPTOT show more significant estimates for OND than for the MAM season. The spatial variation of extreme incidences shows likely intensification over Uganda and most parts of Kenya, while a reduction is observed over the Tanzania region. The increase in projected extremes during the two main rainfall seasons poses a significant threat to the sustainability of societal infrastructure and ecosystem wellbeing. The results from these analyses present an opportunity to understand the emergence of extreme events and the capability of model outputs from CMIP6 in estimating the projected changes. More studies are encouraged to examine the underlying physical features modulating the occurrence of the projected extreme incidences, in order to inform relevant policies. Metabolic Pathway Analysis for Nutrient Removal of the Consortium between C. vulgaris and P. aeruginosa A. Suggey Guerra-Rentería, Mario García-Ramírez, César Gómez-Hermosillo, Abril Goméz-Guzmán, Yolanda González-García, Orfil Gonzalez-Reynoso Subject: Life Sciences, Microbiology Keywords: Extreme Pathways, Nutrient Removal, C. vulgaris, P. aeruginosa Online: 1 March 2019 (07:20:22 CET) Anthropogenic activities have increased the amount of urban wastewater discharged into natural aquatic reservoirs, confining in them a high amount of nutrients and organic contaminants. Several studies have reported that an alternative to reduce those contaminants is using consortia of microalgae and endogenous bacteria. In this research, a genome-scale biochemical reaction network is reconstructed for the co-culture between the microalga Chlorella vulgaris and the bacterium Pseudomonas aeruginosa. Metabolic Pathway Analysis (MPA) is applied to understand the metabolic capabilities of the co-culture and to elucidate the best conditions for removing nutrients such as phosphorus (inorganic phosphorus and phosphate) and nitrogen (nitrates and ammonia). Theoretical yields for phosphorus removal under photoheterotrophic conditions are calculated, determining their values as 0.042 mmol of PO4/g DW of C. vulgaris, 19.53 mmol of inorganic phosphorus/g DW of C. vulgaris and 4.90 mmol of inorganic phosphorus/g DW of P. aeruginosa. Similarly, according to the genome-scale biochemical reaction network, the theoretical yields for nitrogen removal are 10.3 mmol of NH3/g DW of P. aeruginosa and 7.19 mmol of NO3/g DW of C. vulgaris. Thus, this research demonstrates the metabolic capacity of these microorganisms to remove nutrients, and their theoretical yields are calculated.
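The East Africa abstract above relies on ETCCDI-style indices (CDD, CWD, SDII, R95p, R20mm, PRCPTOT). As a rough illustration of how such indices are computed from daily precipitation, the sketch below evaluates them for one synthetic year; the operational definitions (base period for R95p, calendar handling) are simplified here.

```python
import numpy as np

def max_run(mask):
    """Length of the longest run of True values."""
    best = run = 0
    for m in mask:
        run = run + 1 if m else 0
        best = max(best, run)
    return best

def extreme_indices(pr, wet_thresh=1.0):
    """ETCCDI-style indices from a 1-D array of daily precipitation (mm/day)."""
    wet = pr >= wet_thresh
    out = {
        "CDD": max_run(~wet),                      # longest dry spell
        "CWD": max_run(wet),                       # longest wet spell
        "R20mm": int(np.sum(pr >= 20.0)),          # very heavy precipitation days
        "PRCPTOT": float(pr[wet].sum()),           # total wet-day precipitation
        "SDII": float(pr[wet].mean()) if wet.any() else 0.0,   # mean wet-day intensity
    }
    # Simplification: R95p uses the same year's wet-day 95th percentile, not a base period.
    if wet.any():
        p95 = np.percentile(pr[wet], 95)
        out["R95p"] = float(pr[pr > p95].sum())
    return out

# Example with a synthetic year of daily rainfall.
rng = np.random.default_rng(5)
pr = rng.gamma(shape=0.4, scale=8.0, size=365) * (rng.random(365) < 0.35)
print(extreme_indices(pr))
```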
The Rain-induced Urban Waterlogging Risk and its Evaluation: A Case Study in the Central City of Shanghai Lanjun Zou, Zhi Wang, Qinjing Lu, Shenglan Wu, Lei Chen, Zhengkun Qin Subject: Earth Sciences, Atmospheric Science Keywords: urban waterlogging risk; extreme rain; drainage capacity; Shanghai Waterlogging induced by rain in urban areas poses a potential risk to property and safety. This paper focuses on the impact of rain on waterlogging and evaluates the waterlogging risk in the central city of Shanghai. A simplified waterlogging depth model is developed for areas with different drainage capacities and rainfall, simplifying the effects of complex terrain characteristics and the hydrological situation. Based on urban waterlogging depth and its classification collection, a Rain-induced Urban Waterlogging Risk Model (RUWRM) is further established to evaluate waterlogging risk in the central city. The results show that waterlogging depth is closely linked with rainfall and drainage, with a linear relationship between them. More rainfall leads to higher waterlogging risk, especially in the central city with imperfect drainage facilities. The rain-induced urban waterlogging risk model can rapidly give the waterlogging rank caused by rainfall with a clear classification collection. The results of waterlogging risk prediction indicate that the urban waterlogging risk rank can be obtained with confidence well in advance, given more accurate rainfall prediction. This general study is a contribution that allows the public, policy makers and relevant departments of urban operations to identify appropriate management measures or strategies that reduce impacts on traffic and personal safety and lead to lower waterlogging risk. A Systematic Review on Extreme Phenotype Strategies to Search for Rare Variants in Genetic Studies of Complex Disorders Sana Amanat, Teresa Requena, Jose Antonio Lopez-Escamez Subject: Life Sciences, Genetics Keywords: genetic association studies; extreme phenotype; genetic epidemiology; tinnitus Exome sequencing has been commonly used in rare diseases by selecting multiplex families or singletons with an extreme phenotype (EP) to search for rare variants in coding regions. The EP strategy covers both extreme ends of a disease spectrum and it has also been used to investigate the contribution of rare variants to heritability in complex clinical traits. We have conducted a systematic review to find evidence supporting the use of EP strategies to search for rare variants in genetic studies of complex diseases, to highlight the contribution of rare variation to the genetic structure of multiallelic conditions. After performing the quality assessment of the retrieved records, we selected 19 genetic studies considering EP to demonstrate genetic association. All of the studies successfully identified several rare variants; de novo mutations and many novel candidate genes were also identified by selecting an EP. There is enough evidence to support that the EP approach in patients with an early onset of the disease can contribute to the identification of rare variants in candidate genes or pathways involved in complex diseases. EP patients may contribute to a better understanding of the underlying genetic architecture of common heterogeneous disorders such as tinnitus or age-related hearing loss. Is Perceived Exertion a Useful Indicator of Metabolic and Cardiovascular Response to Metabolic Conditioning of Functional-Fitness Session?
A Randomized Controlled Trial Ramires Alsamir Tibana, Nuno Manuel Frade de Sousa, Jonato Prestes, Dahan da Cunha Nascimento, Carlos Ernesto, Joao Henrique Falk Neto, Michael Kennedy, Fabrício Azevedo Voltarelli Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: CrossFit; high-intensity functional training; Extreme conditioning programs The purpose of this study was to assess if self-regulation of intensity based on rating of perceived exertion (RPE) is a reliable method to control the intensity of a metabolic conditioning session of functional fitness. In addition, the relationship between RPE and changes in heart rate and lactate responses was also analyzed. Eight male participants (age 28.1 ± 5.4 years; body mass 77.2 ± 4.4 kg; VO2max: 52.6 ± 4.6 mL·(kg·min)−1) completed three randomized sessions (5 to 7 days apart) under different conditions: (1) all-out (ALL); (2) self-regulation of intensity based on an RPE of 6 (hard) on the Borg CR-10 scale (RPE6); and (3) a control session. Rating of perceived exertion, blood lactate (LAC) and heart rate (HR) responses were measured before, during and immediately after the sessions. The RPE and LAC during the ALL sessions were higher (p ≤ 0.05) than during the RPE6 and control sessions for all the analyzed time points during the sessions. Regarding HR, the 22-min area under the HR curve during the ALL and RPE6 sessions was significantly higher (p ≤ 0.05) than during the control session. The average number of repetitions was lower (p ≤ 0.05) for the RPE6 session (190.5 ± 12.5 repetitions) when compared to the ALL session (214.4 ± 18.6 repetitions). There was a significant correlation between RPE and LAC (p = 0.001; r = 0.76; very large) and between RPE and the number of repetitions during the session (p = 0.026; r = 0.55; large). No correlation was observed between RPE and HR (p = 0.147; r = 0.380). These results indicate that self-regulation of intensity of effort based on RPE may be a useful tool to control exercise intensity during a metabolic conditioning session of functional fitness. Evolution of the Inclusion Population During the Processing of Al-killed Steel Marco Mata Rodriguez, Martin Herrera-Trejo, Arturo Isaías Martínez Enríquez, Rodolfo Sanchez-Martinez, M. J. Castro-Román, Fabian Castro Uresti Subject: Engineering, Automotive Engineering Keywords: Inclusion; size distribution; population distribution function; extreme value theory The increasing demand for higher inclusion cleanliness levels motivates the control over the formation and evolution of inclusions in the steel production process. In this work, the evolution of the chemical composition and size distribution of inclusions throughout a slab production process of Al-killed steel, including ladle furnace (LF) treatment and continuous casting (CC), was followed. The initial solid Al2O3 and Al2O3-MgO inclusions were modified to liquid Al2O3-CaO-MgO inclusions during LF treatment. The evolution of the size distributions during LF treatment was associated with the growth and removal of inclusions, as new inclusions were not created after the deoxidation process, according to a population density function (PDF) analysis. Additionally, the size distributions tended to be similar as the LF treatment progressed regardless of their initial features, whereas they differed during CC. Analysis of the upper tails of the distributions through generalized extreme value theory showed that inclusion distributions shifted from larger to smaller sizes as the process progressed.
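The inclusion-population abstract above (and the streamflow abstract further below) lean on generalized extreme value theory for the upper tail of a size distribution. The sketch below fits a GEV to block maxima of synthetic inclusion diameters with SciPy and reads off a high quantile; note that SciPy's shape parameter uses the opposite sign convention to the usual ξ, and the data here are made up for illustration.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)

# Synthetic inclusion diameters (µm) for several samples; the per-sample maximum
# (block maximum) is the quantity extreme value theory describes.
samples = rng.lognormal(mean=1.0, sigma=0.5, size=(60, 200))
block_maxima = samples.max(axis=1)

c, loc, scale = genextreme.fit(block_maxima)   # SciPy's c = -xi (sign convention)
print("shape c:", round(c, 3), "loc:", round(loc, 2), "scale:", round(scale, 2))

# Size expected to be exceeded by the largest inclusion in 1 of 100 samples.
print("99th-percentile block maximum:", round(genextreme.ppf(0.99, c, loc, scale), 2))
```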
There were great changes in the distributions of large inclusions throughout the LF treatment and between the end of the LF treatment and the start of the CC process. Additionally, distributions of large inclusions differed at the end of the LF treatment, whereas such differences decreased as CC progressed. Climate Change Projections of Dry and Wet Events in Iberia Based on the WASP-Index Cristina Andrade, Joana Contente, João Andrade Santos Subject: Earth Sciences, Atmospheric Science Keywords: WASP-Index; Climate change; Projections; Extreme precipitation; Iberian Peninsula The WASP-Index is computed over Iberia for three monthly timescales in 1961-2020, based on an observational gridded precipitation dataset (E-OBS), and in 2021-2070, based on bias-corrected precipitation generated by a six-member climate model ensemble from EURO-CORDEX, under RCP4.5 and RCP8.5. The WASP performance in identifying extremely dry or wet events, reported by the EM-DAT disaster database, is assessed for 1961–2020. An overall good agreement between the WASP spatial patterns and the EM-DAT records is found. The areal mean values revealed an upward trend in the frequency of occurrence of intermediate-to-severe dry events over Iberia, which will be strengthened in the future, particularly for the 12m-WASP intermediate dry events under RCP8.5. Besides, the number of 3m-WASP intermediate-to-severe wet events is projected to increase, mostly the severest events under RCP4.5, but no evidence was found for an increase in the number of more persistent (12m-WASP) wet events under both RCPs. Despite important spatial heterogeneities, an increase (decrease) of the intensity, duration, and frequency of occurrence of the 12m-WASP intermediate-to-severe dry (wet) events is found under both scenarios, mainly in the southernmost regions of Iberia, which thus become more exposed to prolonged and severe droughts in the future, corroborating the results from previous studies. Regional Temporal and Spatial Trends in Drought and Flood Disasters in China and Assessment of Economic Losses in Recent Years Jieming Chou, Tian Xian, Wenjie Dong, Yuan Xu Subject: Earth Sciences, Atmospheric Science Keywords: damaged area; direct economic loss; disaster; drought; extreme precipitation Understanding the distribution of droughts and floods plays an important role in disaster risk management. The present study aims to explore the trends in the standardized precipitation index and extreme precipitation days in China, as well as to estimate the economic losses they cause. We found that Northeast China, the northern part of North China and the northeast of Northwest China were severely affected by drought disasters (the average damaged area was 6.44 million hectares), and the most severe drought trend was located in West China. However, the north of East China and Central China and the northeast of Southwest China were severely affected by flood disasters (the average damaged area was 3.97 million hectares), and the extreme precipitation trend is increasing in the northeast of Southwest China. In the Yangtze River basin, there were increasing trends in drought and extreme precipitation, especially in the northeast of Southwest China, which were accompanied by severe disaster losses.
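The China drought/flood abstract above is built on the standardized precipitation index. As a rough, simplified illustration (synthetic monthly data, a gamma fit without the zero-precipitation and calendar adjustments used operationally), the sketch below computes a 3-month SPI-like series.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def spi(precip, window=3):
    """Simplified Standardized Precipitation Index: accumulate over `window` months,
    fit a gamma distribution, and map the fitted CDF to standard-normal quantiles."""
    acc = np.convolve(precip, np.ones(window), mode="valid")
    a, loc, scale = stats.gamma.fit(acc, floc=0)           # fit with location fixed at 0
    cdf = stats.gamma.cdf(acc, a, loc=loc, scale=scale)
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)                     # avoid infinite quantiles
    return stats.norm.ppf(cdf)

monthly_precip = rng.gamma(shape=2.0, scale=40.0, size=240)   # 20 years of synthetic data
index = spi(monthly_precip, window=3)
print("months classified as severe drought (SPI <= -1.5):", int(np.sum(index <= -1.5)))
```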
By combining the trends in drought and extreme precipitation days with the distribution of damaged areas, we found that the increasing trend in droughts shifted gradually from north to south, especially in Southwest China, and the increasing trend in extreme precipitation gradually shifted from south to north. Linear or Nonlinear Modeling for ENSO Dynamics? Marco Bianucci, Antonietta Capotondi, Riccardo Mannella, Silvia Merlino Subject: Earth Sciences, Environmental Sciences Keywords: El Nino Southern Oscillation; ENSO; El Nino extreme events Online: 7 November 2018 (16:30:41 CET) Observed ENSO statistics exhibit non-Gaussian behavior, which is indicative of the presence of nonlinear processes. In this paper we use the Recharge Oscillator model (ROM), a widely used Low-Order Model (LOM) of ENSO, as well as methodologies borrowed from the field of statistical mechanics to identify which aspects of the system may give rise to nonlinearities that are consistent with the observed ENSO statistics. In particular, we are interested in understanding whether the nonlinearities reside in the system dynamics or in the fast atmospheric forcing. Our results indicate that one important dynamical nonlinearity often introduced in the ROM cannot justify non-Gaussian system behavior, while the nonlinearity in the atmospheric forcing can instead produce statistics similar to those observed. The implications of the non-Gaussian character of ENSO statistics for the frequency of extreme El Nino events are then examined. Fatalities Caused by Hydrometeorological Disasters in Texas Srikanto H. Paul, Hatim O. Sharif, Abigail M. Crawford Subject: Earth Sciences, Other Keywords: natural hazards; weather disasters; hydrometeorological fatalities; flooding; tornadoes; extreme temperatures Texas ranks first in the number of natural hazard fatalities in the United States (U.S.). Based on data culled from the National Climatic Data Center databases from 1959 to 2016, the number of hydrometeorological fatalities in Texas has increased over the 58-year study period, but the per capita fatalities have significantly decreased. Spatial review found that flooding is the predominant hydrometeorological disaster in a majority of the Texas counties located in "Flash Flood Alley" and accounts for 43% of all hydrometeorological fatalities in the state. Flooding fatalities are highest on "Transportation Routes" followed by heat fatalities in "Permanent Residences". Seasonal and monthly stratification identifies Spring and Summer as the deadliest seasons, with the month of May registering the highest number of total fatalities, dominated by flooding and tornado fatalities. Demographic trends of hydrometeorological disaster fatalities indicated that approximately twice as many male fatalities occurred during the study period as female fatalities, but with decreasing gender disparity over time. Adults are the highest fatality risk group overall, children are most at risk of dying in flooding, and the elderly are at greatest risk of heat-related death.
Five precipitation products are intercompared and evaluated on their ability to capture four indices of extreme rainfall events during 1998-2019. Satellite products show a variable performance, which in general indicates that the occurrence and amount of rainfall of extreme events can be either underestimated or overestimated by the datasets in a systematic way throughout the country. Also, products that incorporate ground-truth data have the best performance. The Effect of Air Quality and Weather on the Chinese Stock: Evidence from Shenzhen Stock Exchange Zhuhua Jiang, Rangan Gupta, Sowmya Subramaniam, Seong-Min Yoon Subject: Social Sciences, Accounting Keywords: air quality; extreme weather; MA-MSD method; investor sentiment; behavioral finance We investigate the impact of air quality and weather on the equity returns of the Shenzhen Exchange. To capture the air quality and weather effects, we use dummy variables created by employing a moving average and moving standard deviation. The important results are as follows. First, in the whole sample period (2005–2019), we find that high air pollution and extremely high temperature have a significant and negative influence on the equity returns. In the sub-period I (2005–2012), the 11-day model and 31-day model show that high air pollution has significant and negative impacts on the Shenzhen stock returns. Second, the results of the quantile regression show that high air pollution has significant and negative effects during the bullish market phase, and extremely high temperature has significant and negative effects during the bearish market phase. This implies that the air quality and weather effects are asymmetric. Third, the weather effect of abnormal temperature on the stock returns is greater in a severely bearish market, whereas the effect of air pollution on the stock returns is greater in a bullish market. Fourth, the least squares method underestimates the air quality and weather effects compared to the quantile regression method, suggesting that the quantile regression method is more suitable in analyzing these effects in a very volatile emerging market such as the Shenzhen stock market. The Effect of Air Quality and Weather on the Chinese Stock Market: Evidence from Shenzhen Stock Exchange Subject: Behavioral Sciences, Social Psychology Keywords: Air quality; Extreme weather; MA-MSD method; Investor sentiment; Behavioural finance We investigate the impact of air quality and weather on the stock market returns of the Shenzhen Exchange. To capture the air quality and weather effects, we apply dummy variables generated by applying a moving average and moving standard deviation. Our study provides several interesting results. First, in the whole sample period (2005–2019), we find that high air pollution and extremely high temperature have significant and negative effects on the Shenzhen stock returns. In the sub-period I (2005–2012), the 11-day model and 31-day model show that high air pollution has significant and negative effects on the Shenzhen stock returns. Second, the results of the quantile regression show that high air pollution has significant and negative effects during the bullish market phase, and extremely high temperature has significant and negative effects during the bearish market phase. This implies that the air quality and weather effects are asymmetric. Third, the more the Shenzhen stock returns drop, the greater the effect of the abnormal temperature is.
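The Shenzhen abstracts above construct high-pollution and extreme-temperature dummies from a moving average and moving standard deviation and then run quantile regressions across the return distribution. The sketch below reproduces that workflow on synthetic data with statsmodels; the effect sizes and the 31-day window are illustrative assumptions, not the papers' estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000

# Synthetic daily air-quality, temperature and return series.
aqi = pd.Series(rng.normal(100, 30, n))
temp = pd.Series(20 + 10 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 3, n))

# Dummies: value above its moving average plus one moving standard deviation.
high_pollution = (aqi > aqi.rolling(31).mean() + aqi.rolling(31).std()).astype(float)
high_temp = (temp > temp.rolling(31).mean() + temp.rolling(31).std()).astype(float)
ret = rng.normal(0.0005, 0.01, n) - 0.004 * high_pollution - 0.003 * high_temp

data = pd.DataFrame({"ret": ret, "high_pollution": high_pollution, "high_temp": high_temp}).iloc[31:]
X = sm.add_constant(data[["high_pollution", "high_temp"]])

# Estimate the dummy effects in the lower, middle and upper parts of the return distribution.
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(data["ret"], X).fit(q=q)
    print(f"q={q}:", res.params[["high_pollution", "high_temp"]].round(4).to_dict())
```

Comparing the coefficients across quantiles is what reveals the asymmetry the abstracts describe; a single least-squares fit would average it away.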
Conversely, the more the Shenzhen stock returns increase, the greater the effect of the abnormal air quality is. Fourth, the least squares method underestimates the air quality and weather effects compared to the quantile regression method, suggesting that the quantile regression method is more suitable in analysing these effects in a very volatile emerging market such as the Shenzhen stock market. The Impact of Climate Change on Reservoir Inflows Using Multi Climate-Model under RCPs Including Extreme Events—A Case of Mangla Dam, Pakistan Muhammad Babur, Mukand Singh Babel, Sangam Shrestha, Akiyuki Kawasaki, Nitin Kumar Tripathi Subject: Engineering, Civil Engineering Keywords: climate change; GCMs; RCPs; downscaling; temperature; precipitation; extreme events; SWAT; discharge Assessment of the impacts of extreme events and climate change on reservoir inflow is important for water- and power-stressed countries. Projected climate is subject to uncertainties related to climate change scenarios and Global Circulation Models (GCMs). Extreme climatic events will increase with the rise in temperature, as mentioned in the AR5 of the IPCC. This paper discusses the consequences of climate change, including extreme events, on discharge. Historical climatic and gauging data were collected from different stations within a watershed. The observed flow data was used for calibration and validation of the SWAT model. Downscaling was performed on future GCM temperature and precipitation data, and plausible extreme events were generated. Corrected climatic data was applied to project the influence of climate change. Results showed a large uncertainty in discharge using different GCMs and different emissions scenarios. The annual tendency of the GCMs is divided: six GCMs projected a rise in annual flow, while one GCM projected a decrease in flow. The change in average seasonal flow is larger than the annual variations. Changes in winter and spring discharge are mostly positive, even with the decrease in precipitation. The changes in flows are generally negative for summer and autumn due to early snowmelt from an increase in temperature. The changes in average seasonal flows under RCPs 4.5 and 8.5 are projected to vary from -29.1 to 130.7% and from -49.4 to 171%, respectively. In the medium-range (RCP 4.5) impact scenario, the uncertainty range of average runoff is relatively low, while in the high-range (RCP 8.5) impact scenario this range is significantly larger. RCP 8.5 covered a wide range of uncertainties, while RCP 4.5 covered a narrow range of possibilities. These outcomes suggest that it is important to consider the influence of climate change on water resources to frame appropriate guidelines for planning and management. Global Mangrove Deforestation and Its Interacting Social-Ecological Drivers: A Systematic Review and Synthesis Avit K. Bhowmik, Rajchandar Padmanaban, Pedro Cabral, Maria M. Romeiras Subject: Earth Sciences, Environmental Sciences Keywords: Mangroves; Drivers; Anthropogenic activities; Climate change; Extreme events; Wetlands; Interaction; Aquaculture; Agriculture Globally, mangrove forests are declining substantially, and a globally synthesized database of the drivers of deforestation and their interactions is lacking. Here we synthesized the key social-ecological drivers of global mangrove deforestation by reviewing about two hundred published scientific studies over the last four decades (from 1980 to 2021).
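The Mangla Dam abstract above mentions downscaling and bias correction of GCM temperature and precipitation before driving the hydrological model; since the abstract does not specify the correction method, the sketch below shows one common, generic choice (empirical quantile mapping) on synthetic data, purely as an illustration of what "corrected climatic data" can mean.

```python
import numpy as np

def quantile_map(obs_hist, mod_hist, mod_fut, n_quantiles=100):
    """Empirical quantile mapping: make the historical model distribution match the
    observed one, then apply the same mapping to the future model run."""
    q = np.linspace(0.01, 0.99, n_quantiles)
    mod_q = np.quantile(mod_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # Interpolate each future value from the model quantiles onto the observed quantiles.
    return np.interp(mod_fut, mod_q, obs_q)

rng = np.random.default_rng(9)
obs_hist = rng.gamma(2.0, 5.0, 5000)          # observed daily precipitation (mm)
mod_hist = rng.gamma(2.0, 6.5, 5000)          # GCM historical run with a wet bias
mod_fut = rng.gamma(2.2, 6.5, 5000)           # GCM future run

corrected = quantile_map(obs_hist, mod_hist, mod_fut)
print("raw future mean:", round(mod_fut.mean(), 2), "corrected mean:", round(corrected.mean(), 2))
```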
Our focus was on both natural and anthropogenic drivers with gradual and abrupt impacts and their geographic ranges of effects, and how these drivers interact. We also summarized the patterns of global mangrove coverage decline between 1990 and 2020 and identified the threatened mangrove species and their geographic ranges. Our consolidated studies reported an 8,600 km2 decline in the global mangrove coverage between 1990 and 2020, with the highest decline occurring in South and Southeast Asia (3870 km2). We could identify 11 threatened mangrove species, two of which are critically endangered (Sonneratia griffithii and Bruguiera hainesii). Our reviewed studies pointed to aquaculture and agriculture as the predominant drivers of global mangrove deforestation, though the spatial distribution of their impacts varied. Gradual climate variations, i.e. sea-level rise, long-term precipitation and temperature changes and the coastline erosion they drive, constitute the second major group of drivers. Our findings underline a strong interaction across natural and anthropogenic drivers, with the strongest interaction between the driver groups of aquaculture and agriculture and of industrialization and pollution. Our results suggest prioritizing globally coordinated empirical studies linking drivers and mangrove changes and the global development of policies for mangrove conservation. Extreme Capsule is a Bottleneck for Ventral Pathway Ehsan Shekari, Elahe Shahriari, Mohammad Taghi Joghataei Subject: Life Sciences, Other Keywords: Extreme capsule; uncinate fasciculus; IFOF; ventral pathway of language; bottleneck; DTI Online: 7 September 2020 (10:26:24 CEST) In reviews of the neuroscience literature, the extreme capsule is considered a white matter tract. Nevertheless, it is not clear whether the extreme capsule itself is an association fiber pathway or a bottleneck for the passage of other association fibers. Through a systematic search investigating the anatomical position, dissection, connectivity and cognitive role of the extreme capsule, it can be argued that the extreme capsule is probably a bottleneck for the passage of the uncinate fasciculus (UF) and the inferior fronto-occipital fasciculus (IFOF), and that its varied roles in language processing are due to the different tracts passing through it. Trends In Cumulative Incidence and Case Fatality Of COVID-19 in The United States: Extreme Epidemiologic Response Laurens Holmes, Jr, Glen Philipcien, Keeti Deepika, Chinaka Chinacherem, Prachi Chavan, Janille Williams, Benjamin Ogundele, Kirk Dabney, Maura Poleon, Lavisha Pelaez, Tatiana Picolli, Valescia John, Leslie Stalnaker, Michael Enwere Subject: Medicine & Pharmacology, Other Keywords: COVID-19; SARS-CoV2; extreme epidemiology response; population at risk; case fatality Objectives: COVID-19, a respiratory disease caused by SARS-CoV2 and transmitted from person to person through viral droplets, remains a global pandemic. There is a need to understand the transmission modes, populations at risk, and how to mitigate the spread and case fatality in the United States (US) and globally. The current study aimed to assess the global COVID-19 transmission and case fatality, examine similar parameters by country, and determine evidence-based practice in an extreme epidemiology response for epidemic curve flattening and case fatality reduction. Methods: A cross-sectional ecologic design was used to assess the preexisting data on confirmed COVID-19 cases and mortality in March 2020 from the CDC, WHO, Worldometer, and STATISTA.
A rapid assessment between March 23rd and 31st, 2020, was utilized for the extreme epidemiology response. The case fatality, termed fatality proportion, was examined using mortality in relation to confirmed cases involving the world, United States of America (USA), United Kingdom (UK), Italy, France, Spain, China, Germany, India and South Korea. Results: COVID-19 is a global pandemic, with the US as the epicenter for transmission, representing 20.9% of all confirmed cases worldwide, while Italy is the epicenter for case fatality, accounting for 30.6% of mortality as of 03/31/2020. The fatality proportion (FP) was 11.4% in Italy, 8.8% in Spain, 6.8% in France and 6.4% in the UK. Despite the increased number of confirmed cases, the lowest FP was observed in Germany (0.96%) and South Korea (1.66%). There is an increasing linear trend in transmission in the US (R2 = 0.97), as well as a positive daily percentage change ranging from 1.27% to 20.5%. Conclusions: The USA remains the epicenter for COVID-19 transmission, while Italy is the epicenter for case fatality. The observed relatively low case fatality in Germany and South Korea is due to an "extreme epidemiology" response through the application of Wuhan, China's early data on COVID-19 transmission control measures and optimized patient care. These data suggest relaxing the clinical guidelines for COVID-19 testing in the United States, applying contact tracing and testing, case isolation and, most importantly, enhancing resources for case management and social and physical distancing globally, hence flattening the epidemic curve and reducing case fatality. Glomerular Filtration Rate in Former Extreme Low Birth Weight Infants over the Full Pediatric Age Range: A Pooled Analysis Elise Goetschalkx, Djalila Mekahli, Elena Levtchenko, Karel Allegaert Subject: Medicine & Pharmacology, Pediatrics Keywords: glomerular filtration rate; Brenner hypothesis; extreme low birth weight infants; renal outcome Different cohort studies documented a lower glomerular filtration rate (GFR) in former extremely low birth weight (ELBW, <1000 g) neonates throughout childhood when compared to term controls. The current aim is to pool these studies to describe the GFR pattern over the pediatric age range. To do so, we conducted a systematic review on studies reporting on GFR measurements in former ELBW cases, while GFR data of healthy age-matched controls included in these studies were co-collected. Based on 248 hits, 6 case-control and 3 cohort studies were identified, with 444 GFR measurements in 380 former ELBW cases (median age 5.3-20.7 years). The majority were small (17-78 cases) single-center studies, with heterogeneity in the GFR measurement tools (inulin, cystatin C or creatinine-based estimated GFR formulae). Despite this, the median GFR (mL/min/1.73 m2) within case-control studies was consistently lower (-13%, range -8 to -25%) in cases, so that a relevant minority (15-30%) has an eGFR < 90 mL/min/1.73 m2. Consequently, this pooled analysis describes a consistent pattern of reduced eGFR in former ELBW cases throughout childhood. Research should focus on perinatal risk factors for impaired GFR and long-term outcome, but is hampered by single-center cohorts, study size, and heterogeneity of GFR assessment tools.
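The COVID-19 abstract above works with two simple quantities: the fatality proportion (cumulative deaths relative to cumulative confirmed cases) and the daily percentage change in confirmed cases. The sketch below computes both with pandas on made-up illustrative counts, not the actual CDC/WHO/Worldometer figures.

```python
import pandas as pd

# Illustrative (not actual) cumulative counts for two countries over four days.
df = pd.DataFrame({
    "country": ["A"] * 4 + ["B"] * 4,
    "confirmed": [100, 150, 220, 300, 80, 95, 140, 200],
    "deaths": [2, 4, 7, 12, 1, 1, 2, 3],
})

# Fatality proportion: cumulative deaths relative to cumulative confirmed cases.
df["fatality_pct"] = 100 * df["deaths"] / df["confirmed"]

# Daily percentage change in confirmed cases within each country.
df["daily_change_pct"] = df.groupby("country")["confirmed"].pct_change() * 100

print(df.round(2))
```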
Coastal Flood Assessment Due to Sea Level Rise and Extreme Storm Events - Case Study of the Atlantic Coast of Portugal Mainland Carlos Antunes, Carolina Rocha, Cristina Catita Subject: Earth Sciences, Geophysics Keywords: sea level rise; coastal flood hazard; storm surge; extreme tidal level; GIS Online: 6 May 2019 (10:57:09 CEST) Portugal Mainland has hundreds of thousands of people living in the Atlantic coastal zone, with numerous high-economic-value activities and a high number of infrastructures that must be protected from natural coastal hazards, namely extreme storms and sea level rise (SLR). In the context of climate change adaptation strategies, a reliable and accurate assessment of the physical vulnerability to SLR is crucial. This study is a contribution to the implementation of flooding standards imposed by the European Directive 2007/60/EC, which requires each member state to assess the risk associated with SLR and floods caused by extreme events. Therefore, the coastal hazard on the continental Atlantic coast of Portugal Mainland was evaluated for 2025, 2050 and 2100 along the whole coastal extent, with different sea level scenarios for different extreme-event return periods and for SLR. A coastal flooding probabilistic map was produced based on the developed methodology using Geographic Information Systems (GIS) technology. The Extreme Flood Hazard Index (EFHI) was determined on a probabilistic flood basis through five probability intervals of 20% amplitude. For a given SLR scenario, the EFHI is expressed, on the probabilistic flooding maps for an extreme tidal maximum level, by five hazard classes ranging from 1 (Very Low) to 5 (Extreme). How to Explain and Predict the Shape Parameter of the Generalized Extreme Value Distribution of Streamflow Extremes Using a Big Dataset Hristos Tyralis, Georgia Papacharalampous, Sarintip Tantanee Subject: Earth Sciences, Other Keywords: CAMELS; flood frequency; hydrological signatures; extreme value theory; random forests; spatial modelling The finding of important explanatory variables for the location parameter and the scale parameter of the generalized extreme value (GEV) distribution, when the latter is used for the modelling of annual streamflow maxima, is known to have reduced the uncertainties in inferences, as estimated through regional flood frequency analysis frameworks. However, important explanatory variables have not been found for the GEV shape parameter, despite its critical significance, which stems from the fact that it determines the behaviour of the upper tail of the distribution. Here we examine the nature of the shape parameter by revealing its relationships with basin attributes. We use a dataset that comprises information about daily streamflow and forcing, climatic indices, topographic, land cover, soil and geological characteristics of 591 basins with minimal human influence in the contiguous United States. We propose a framework that uses random forests and linear models to find (a) important predictor variables of the shape parameter and (b) an interpretable model with high predictive performance. The study process comprises assessing the predictive performance of the models, selecting a parsimonious prediction model and interpreting the results in an ad hoc manner. The findings suggest that the shape parameter mostly depends on climatic indices, while the selected prediction model results in more than 20% higher accuracy in terms of RMSE compared to a naïve approach.
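To make the random-forest step of the GEV shape-parameter abstract above concrete, the sketch below regresses a synthetic shape parameter on made-up basin attributes, compares the RMSE against a naïve regional-mean benchmark and prints feature importances. The attribute names and the dependence structure are assumptions for illustration, not the CAMELS data or the authors' fitted model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(10)
n_basins = 500

# Synthetic basin attributes standing in for climatic indices, topography and land cover.
X = np.column_stack([
    rng.uniform(0.2, 3.0, n_basins),    # aridity index
    rng.uniform(0.0, 0.6, n_basins),    # precipitation seasonality
    rng.uniform(0, 2500, n_basins),     # mean elevation (m)
    rng.uniform(0.1, 0.9, n_basins),    # forest fraction
])
# Assume the shape parameter depends mainly on the climatic indices, plus noise.
shape = 0.05 + 0.08 * X[:, 0] - 0.15 * X[:, 1] + rng.normal(scale=0.03, size=n_basins)

X_tr, X_te, y_tr, y_te = train_test_split(X, shape, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

naive = np.full_like(y_te, y_tr.mean())          # naïve benchmark: regional mean shape
rf_rmse = np.sqrt(mean_squared_error(y_te, rf.predict(X_te)))
naive_rmse = np.sqrt(mean_squared_error(y_te, naive))
print("RF RMSE:   ", round(rf_rmse, 4))
print("naive RMSE:", round(naive_rmse, 4))
print("feature importances:", rf.feature_importances_.round(2))
```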
The implications are important, since incorporating the regression model into regional flood frequency analysis frameworks can considerably reduce the predictive uncertainties. Water Quality Dynamics of Urban Water Bodies During Flooding in Can Tho City, Vietnam Hong Quan Nguyen, Mohanasundar Radhakrishnan, Thi Thao Nguyen Huynh, Maria Luisa Baino-Salingay, Long Phi Ho, Peter Van der Steen, Assela Pathirana Subject: Earth Sciences, Environmental Sciences Keywords: Can Tho city; extreme event; urban flooding; water quality monitoring; water pollution Online: 31 March 2017 (09:50:47 CEST) Water pollution associated with flooding is one of the major problems in cities in the global South. However, studies of water quality dynamics during flood events are not often reported in the literature, probably due to difficult conditions for sampling during flood events. Water quality parameters in open water (canals, rivers, and lakes), floodwater on roads and water in sewers have been monitored during the extreme fluvial flood event on 7 October 2013 in Can Tho city, Vietnam. This is the pioneering study of urban flood water pollution in real time in Vietnam. The results showed that water quality is very dynamic during flooding, especially at the beginning of the event. In addition, it was observed that the pathogen and contaminant levels in the floodwater are almost as high as in sewers. The findings show that the population exposed to flood water runs a health risk that is nearly equal to that of being in contact with sewer water. Therefore, the people of Can Tho not only face physical risk due to flooding but are also exposed to health risks. Application of ALO-ELM in Load Forecasting Based on Big Data Ming He, Yi Li, Wan Zou, Xiangxi Duan Subject: Engineering, Electrical & Electronic Engineering Keywords: load forecasting; extreme learning machine (ELM); ant lion optimization (ALO); parameter optimization; model Online: 21 October 2021 (09:34:56 CEST) The load of a power system changes with economic development, and short-term load forecasting plays a very important role in the dispatching and management of the power system. In this paper, the Ant Lion Optimizer (ALO) is introduced to improve the input weights and hidden-layer matrix of the extreme learning machine (ELM). After the parameters of the ELM are optimized by ALO, the input nodes, hidden-layer nodes and output nodes are determined, and a load forecasting model based on the combined ALO-ELM algorithm is established. The proposed method is illustrated using the historical load data of a city in China. The results show that the average absolute error of the short-term load demand predicted by the ALO-ELM model is 1.41, while that predicted by ELM is 4.34; the proposed ALO-ELM algorithm is therefore superior to ELM and meets the requirements of engineering accuracy, which proves the effectiveness of the proposed method. Associating Stochastic Modelling of Flow Sequences With Climatic Trends Sandhya Patidar, Eleanor Tanner, Bankaru-Swamy Soundharajan, Bhaskar Sen Gupta Subject: Engineering, Automotive Engineering Keywords: Stochastic modelling; Climate change; Streamflow; El Nino/Southern Oscillation (ENSO); Extreme events modelling Water is essential to all life-forms, including various ecological, geological, hydrological, and climatic processes/activities. With a changing climate, associated El Nino/Southern Oscillation (ENSO) events appear to stimulate highly uncertain patterns of precipitation (P) and evapotranspiration (EV) processes across the globe.
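The extreme learning machine at the core of the ALO-ELM model above is a single-hidden-layer network whose input weights are random (these are what the ant lion optimizer would tune) and whose output weights are obtained in a single least-squares step. Below is a minimal, hypothetical sketch on synthetic load data, with the ALO step omitted.

import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Minimal extreme learning machine for regression: random hidden layer,
    output weights solved by least squares (no iterative training)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (the part ALO would tune)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: predict the next hourly load from the two previous hours (synthetic data).
rng = np.random.default_rng(1)
t = np.arange(500)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)
X = np.column_stack([load[:-2], load[1:-1]])
y = load[2:]
W, b, beta = elm_train(X[:400], y[:400])
pred = elm_predict(X[400:], W, b, beta)
print("mean absolute error:", np.mean(np.abs(pred - y[400:])).round(2))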
Changes in P and EV patterns are highly sensitive to temperature variation and thus also affecting natural streamflow processes. This paper presents a novel suite of stochastic modelling approaches for associating streamflow sequences with climatic trends. The present work is built upon a stochastic modelling framework HMM_GP that integrates a Hidden Markov Model with a Generalised Pareto distribution for simulating synthetic flow sequences. The GP distribution within HMM_GP model is aimed to improve the model's efficiency in effectively simulating extreme events. This paper further investigated the potentials of Generalised Extreme Value Distribution (EVD) coupled with an HMM model within a regression-based scheme for associating impacts of precipitation and evapotranspiration processes on streamflow. The statistical characteristic of the pioneering modelling schematic has been thoroughly assessed for their suitability to generate/predict synthetic river flows sequences for a set of future climatic projections. The new modelling schematic can be adapted for a range of applications in the area of hydrology, agriculture and climate change. A Novel Prediction Scheme for Risk Factors of Second Colorectal Cancer in Patients with Colorectal Cancer Wen-Chien Ting, Horng-Rong Chang, Chi-Chang Chang, Chi-Jie Lu Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: risk factors; second primary cancer (SPC); colorectal cancer; classification techniques; extreme gradient boosting In Taiwan, colorectal cancer is ranked second and third in terms of mortality and cancer incidence, respectively. In addition, medical expenditures related to colorectal cancer are considered to be the third highest. While advances in treatment strategies have provided cancer patients with longer survival, potentially harmful second primary cancers can occur. Therefore, second primary colorectal cancer analysis is an important issue with regard to clinical management. In this study, a novel predictive scheme was developed for predicting the risk factors associated with second colorectal cancer in patients with colorectal cancer by integrating five data mining classification techniques, including support vector machine, random forest, multivariate adaptive regression splines, extreme learning machine, and extreme gradient boosting. In total, 4,287 patients in the datasets provided by three hospital tumor registries were used. Our empirical results revealed that this proposed predictive scheme provided promising classification results and the identification of important risk factors for predicting second colorectal cancer based on accuracy, sensitivity, specificity, and area under the curve metrics. Collectively, our clinical findings suggested that the most important risk factors were the combined stage, age at diagnosis, BMI, surgical margins of the primary site, tumor size, sex, regional lymph nodes positive, grade/differentiation, primary site, and drinking behavior. Accordingly, these risk factors should be monitored for the early detection of second primary tumors in order to improve treatment and intervention strategies. Characterization of Extreme Precipitation Events in the Pyrenees. From the Local to the Synoptic Scale. Marc Lemus-Canovas, Joan-A. 
Lopez-Bustins, Javier Martín-Vide, Amar Halifa-Marin, Damián Insua-Costa, Joan Martinez-Artigas, Laura Trapero, Roberto Serrano-Notivoli, José María Cuadrat Subject: Earth Sciences, Atmospheric Science Keywords: extreme precipitation; Mediterranean region; Pyrenees; return period; teleconnection indices; weather type.; Backward trajectory; IVT Mountain systems within the Mediterranean region, e.g. the Pyrenees, are very sensitive to climate change. In the present study, we quantified the magnitude of extreme precipitation events and the number of days with torrential precipitation (daily precipitation ≥ 100 mm) in all the rain gauges available in the Pyrenees for the 1981-2015 period, analyzing the contribution of the synoptic scale in this type of events. The easternmost (under the Mediterranean influence) and north-westernmost (under Atlantic influence) areas of the Pyrenees registered the highest number of torrential events. The heaviest events are expected in the eastern part, i.e. 400 mm day-1 for a return period of 200 years. Northerly advections over the Iberian Peninsula, which present a low zonal index, i.e. im-plying a stronger meridional component, give rise to torrential events over the western Pyrenees; and easterly advections favour extreme precipitation over the eastern Pyrenees. The air mass travels a long way, from the east coast of North America, bringing heavy rainfall to the western Pyrenees. In the case of the torrential events over the eastern Pyrenees, the air mass causing the events in these areas is very short and originates in the Mediterranean Basin. The NAO index has no influence upon the occurrence of torrential events in the Pyrenees, but these events are closely related to certain Mediterranean teleconnections such as the WeMO Predictive Modeling the Free Hydraulic Jumps Pressure through Advanced Statistical Methods Seyed Nasrollah Mousavi, Renato Steinke Júnior, Eder Daniel Teixeira, Daniele Bocchiola, Narjes Nabipour, Amir Mosavi, Shahaboddin Shamshirband Subject: Mathematics & Computer Science, Applied Mathematics Keywords: mathematical modeling; characteristic points; extreme pressure; hydraulic jump; pressure fluctuations; standard deviation; stilling basin Pressure fluctuations beneath hydraulic jumps downstream of Ogee spillways potentially damage stilling basin beds. This paper deals with the extreme pressures underneath free hydraulic jumps along a smooth stilling basin. The experiments were conducted in a laboratory flume. From the probability distribution of measured instantaneous pressures, the pressures with different non-exceedance probabilities (P*a%) could be determined. It was verified that the maximum pressure fluctuations, as well as the negative pressures, are located at the positions closest to the spillway toe. The minimum pressure fluctuations are located at the downstream of hydraulic jumps. It was possible to assess the cumulative curves of P*a% related to the characteristic points along the basin, and different Froude numbers. To benchmark, the results, the dimensionless forms of mean pressures, standard deviations, and pressures with different non-exceedance probabilities were assessed. It was found that an existing methodology can be used to interpret the present data, and pressure distribution in similar conditions, by using a new third-order polynomial relationship for the standard deviation (σ*X) with the determination coefficient (R2) equal to 0.717. 
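The non-exceedance pressures P*a% described in the hydraulic-jump study above are, in essence, empirical quantiles of the instantaneous pressure record at a given position along the basin. An illustrative sketch with synthetic pressure samples follows; the values are made up and are not taken from the experiments.

import numpy as np

rng = np.random.default_rng(2)
# Hypothetical instantaneous pressure samples (kPa) at one basin position
# beneath a hydraulic jump, sampled at high frequency.
pressure = rng.normal(loc=12.0, scale=3.5, size=20_000)

mean_p = pressure.mean()
std_p = pressure.std()

# Pressures with given non-exceedance probabilities P*_alpha%.
probs = [0.1, 1, 5, 50, 95, 99, 99.9]
p_alpha = np.percentile(pressure, probs)

for a, p in zip(probs, p_alpha):
    print(f"P*_{a}% = {p:6.2f} kPa")
print(f"mean = {mean_p:.2f} kPa, standard deviation = {std_p:.2f} kPa")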
It was verified that the new optimized adjustment gives more accurate results for the estimation of the maximum extreme pressures than the minimum extreme pressures. RESPIRE: A Spectral Kurtosis-based Method to Extract Respiration Rate from Wearable PPG Signals Hari Dubey Subject: Engineering, Biomedical & Chemical Engineering Keywords: wearable; photoplethysmography; spectral kurtosis; extreme learning machine (ELM) regression; respiration rate; cardiovascular diseases (CVD) In this paper, we present the design of a wearable photoplethysmography (PPG) system, R-band, for acquiring the PPG signals. PPG signals are influenced by the respiration or breathing process and hence can be used for estimation of the respiration rate. R-Band detects the PPG signal, which is routed to a Bluetooth low energy device such as a nearby placed smartphone via a microprocessor. Further, we developed an algorithm based on Extreme Learning Machine (ELM) regression for the estimation of respiration rate. We proposed spectral kurtosis features that are fused with the state-of-the-art respiratory-induced amplitude, intensity and frequency variations-based features for the estimation of respiration rate (in units of breaths per minute). In contrast to the neural network (NN), ELM does not require tuning of the hidden-layer parameters and thus drastically reduces the computational cost as compared to an NN trained by the standard backpropagation algorithm. We evaluated the proposed algorithm on Capnobase data available in the public domain. Correlation between Intense Solar Energetic Particle Fluxes and Atmospheric Weather Extremes Georgios Anagnostopoulos, Sofia Anna Menesidou, Vasilios G. Vassiliadis, Alexandros Rigas Subject: Physical Sciences, Astronomy & Astrophysics Keywords: extreme weather events; heat waves; sun-earth relationships; sun and weather; space weather and extreme atmospheric events; global atmospheric anomalies; SEP events and weather; SEP and NAO; gulf stream and heat waves In the past two decades the world experienced an exceptional number of unprecedented extreme weather events, some causing major human suffering and economic damage, such as the March 2012 heat event, which was called "Meteorological March Madness." From the beginning of the space era, a correlation of solar flares with pressure changes in the atmosphere within 2–3 days or even less has been reported. In this study we wanted to test the possible relation of highly warm weather events in North-East America with Solar Energetic Particle (SEP) events. For this reason we compared ground temperatures TM in Madison, Wisconsin, with energetic particle fluxes P measured by the EPAM instrument onboard the ACE spacecraft. In particular, we elaborated case events and the results of a statistical study of the SEP events related to the largest (Dst ≤ −150 nT) Coronal Mass Ejection (CME)-induced geomagnetic storms between the years 1997–2015. The most striking result of our statistical analysis is a very significant positive correlation between the highest temperature increase ΔTM and the time duration of the temperature increase (r = 0.8, p < 0.001) at "winter times" (r = 0.5, p < 0.01 for the whole sample of 26 examined SEP events). The time response of TM to P was found to be in general short (a few days), but in the case of March 2015, during a gradual P8 increase, a cross-correlation test indicated the highest c.c. within 1 day (p < 0.05).
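The spectral-kurtosis features used by RESPIRE above can be approximated by taking a short-time Fourier transform of the PPG signal and computing the kurtosis of each frequency bin's magnitude over time. The sketch below runs on a synthetic PPG-like signal and assumes scipy; the exact feature definition used in the paper may differ.

import numpy as np
from scipy.signal import stft
from scipy.stats import kurtosis

fs = 25.0  # assumed PPG sampling rate in Hz
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(3)
# Synthetic PPG-like signal: ~1.2 Hz cardiac component amplitude-modulated by ~0.25 Hz respiration, plus noise.
ppg = (1 + 0.3 * np.sin(2 * np.pi * 0.25 * t)) * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.normal(size=t.size)

# Short-time Fourier transform, then kurtosis of each frequency bin's magnitude across time frames.
f, frames, Z = stft(ppg, fs=fs, nperseg=256)
spectral_kurtosis = kurtosis(np.abs(Z), axis=1, fisher=True)

# A simple feature vector: spectral kurtosis restricted to the respiratory band (0.1-0.5 Hz).
band = (f >= 0.1) & (f <= 0.5)
features = spectral_kurtosis[band]
print("respiratory-band spectral-kurtosis features:", np.round(features, 2))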
The March 2012 "meteorological anomaly" was elaborated in the case of South-East Europe, where, beside a period of strong winds and rainfall (6-13.3.2012), intense precipitation in North-East Greece (Alexandroupoli) were found to be correlated with distinct high energy flux enhancements. A rough theoretical interpretation is discussed for the space—atmospheric extreme weather relationship we found. However, much work should be done to achieve early warning of space weather dependent extreme meteorological events. Such future advances in understanding the relationships between space weather and extreme atmospheric events would improve atmospheric models and help people's safety, health and life. Computational Analysis of Specific Indicators to Manage Crop Yield and Profits Under Extreme Heat and Climate Change Conditions Maya Sharma Subject: Engineering, General Engineering Keywords: Agriculture; Extreme Heat; Climate Change; Grandview; Edge Computing; 5G; IoT; Drone Imagery; LiDAR; Decision framework Online: 6 December 2021 (15:19:50 CET) The US pacific northwest recorded its highest temperature in late June 2021. The three-day stretch of scorching heat had a devastating effect on not only the residents of the state, but also on the crops thus impacting the food supply-chain. It is forecasted that streaks of 100-degree temperatures will become common. Farmers will have to adapt to the changing landscape to preserve their crop yield and profitability. A research collaborative consisting of researchers and academicians in Eastern Washington led by a pioneering startup has setup a 16.9-acre Honeycrisp Apple Smart Orchard in Grandview, WA as a laboratory to study the environmental and plant growth factors in real-time using modern computational tools and techniques like IoT (Internet of Things), Edge and Cloud Computing, and Drone and LiDAR (Light Detection and Ranging) imaging. The computational analysis is used to develop guidelines for precision agriculture for orchard blocks to address plant growth issues scientifically and in a timely fashion. The analysis also helps in creating risk-mitigation strategies for severe weather events while helping prepare farmers to maximize crop yield and profitability per acre. I was fortunate to gain access to the terabytes of farm data related to the weather, soil, water, tree, and canopy health, to analyze and formulate recommendations for the farmers that can be adopted nationwide for different crops and weather conditions. This paper discusses the different streams of farm data that were analyzed (ex. soil moisture, soil water potential, and sap flow) and the development of the framework to use data to convert insights into actionable steps. For example, the use of sensors can inform a farmer that their level of soil water potential is below threshold in a specific patch of the orchard, prompting them to turn on irrigation for the patch instead of the whole orchard. I estimate that using an IoT-sensor-based decision framework discussed in this paper, growers can save up to 55% of their water costs for the season. Using these insights, farmers can better manage their irrigation resources and labor, thus maximizing their crop yield and profits. Resistance and Resilience of Pelagic and Littoral Fishes to Drought in the San Francisco Estuary Brian Mahardja, Vanessa Tobias, Shruti Khanna, Lara Mitchell, Peggy Lehman, Ted Sommer, Larry Brown, Steve Culberson, J. 
Louise Conrad Subject: Biology, Ecology Keywords: drought; climate variability; resilience; resistance; estuary; fish; extreme events; Delta Smelt; Chinook Salmon; Largemouth Bass Many estuarine ecosystems and the fish communities that inhabit them have undergone substantial changes in the past several decades, largely due to multiple interacting stressors that are often of anthropogenic origin. Few are more impactful than droughts, which are predicted to increase in both frequency and severity with climate change. In this study, we examined over five decades of fish monitoring data from the San Francisco Estuary, California, U.S.A., to evaluate the resistance and resilience of fish communities to disturbance from prolonged drought events. High resistance was defined by the lack of decline in species occurrence from a wet to a subsequent drought period, while high resilience was defined by the increase in species occurrence from a drought to a subsequent wet period. We found some unifying themes connecting the multiple drought events over the fifty-year period. Pelagic fishes consistently declined during droughts (low resistance), but exhibit considerable resilience and often rebound in the subsequent wet years. However, full recovery does not occur in all wet years following droughts, leading to permanently lower baseline numbers for some pelagic fishes over time. In contrast, littoral fishes seem to be more resistant to drought and may even increase in occurrence during dry years. Based on the consistent detrimental effects of drought on pelagic fishes within the San Francisco Estuary and the inability of these fish populations to recover in some years, we conclude that freshwater flow remains a crucial but not sufficient management tool for the conservation of estuarine biodiversity. Temperature Extreme May Exaggerate the Mortality Risk of COVID-19 in the Low- and Middle-income Countries: A Global Analysis Mizanur Rahman, Mahmuda Islam, Mehedi Hasan Shimanto, Jannatul Ferdous, Abdullah Al-Nur Shanto Rahman, Pabitra Singha Sagor, Tahasina Chowdhury Subject: Life Sciences, Biophysics Keywords: temperature extreme; warm climate; low- and middle-income economies; COVID-19; mortality; mixed effect modelling We performed a global analysis with data from 149 countries to test whether temperature can explain the spatial variability of the spread rate and mortality of COVID-19 at the global scale. We performed partial correlation analysis and linear mixed effect modelling to evaluate the association of the spread rate and mortality of COVID-19 with maximum, minimum and average temperatures, temperature extreme (the difference between maximum and minimum temperature) and other environmental and socioeconomic parameters. After controlling for the effect of the duration after the first positive case, partial correlation analysis revealed that temperature was not related to the spatial variability of the spread rate of COVID-19. Mortality was negatively related to temperature in the countries with high-income economies. In contrast, temperature extreme was significantly and positively correlated with mortality in the low- and middle-income countries. Taking the country heterogeneity into account, mixed effect modelling revealed that inclusion of temperature as a fixed effect in the model significantly improved the model's skill in predicting mortality in the low- and middle-income countries.
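The mixed-effect modelling described above, with country-level heterogeneity absorbed by a random effect, can be sketched with statsmodels as below. The data are entirely synthetic and the grouping variable ("region") is only a stand-in for whatever grouping structure the study actually used.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 150
# Hypothetical country-level data: mortality per 100 confirmed cases, mean temperature,
# temperature extreme (max - min), and a made-up grouping label.
df = pd.DataFrame({
    "mortality": rng.gamma(shape=2.0, scale=1.5, size=n),
    "temp_mean": rng.uniform(0, 30, n),
    "temp_extreme": rng.uniform(5, 25, n),
    "region": rng.choice(["A", "B", "C", "D", "E"], size=n),
})

# Linear mixed-effects model: fixed effects for the temperature covariates,
# plus a random intercept per group to absorb group heterogeneity.
model = smf.mixedlm("mortality ~ temp_mean + temp_extreme", df, groups=df["region"])
result = model.fit()
print(result.summary())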
Our analysis suggests that a warm climate may reduce the mortality rate in high-income economies, but in low- and middle-income countries temperature extremes may increase the mortality risk. Reconstruction of Earth Extreme Topography from UAV Structure from Motion Photogrammetry Francisco Agüera-Vega, Fernando Carvajal-Ramírez, Patricio Jesús Martínez-Carricondo, Julián Sánchez-Hermosilla López Subject: Engineering, Civil Engineering Keywords: Unmanned Aerial Vehicle (UAV); UAV-photogrammetry; Structure From Motion (SfM); cut slope; extreme topography; landslide Online: 3 April 2017 (18:34:22 CEST) UAV photogrammetry development during the last decade has made it possible to capture information at very high spatial and temporal resolution from terrain that is very difficult or impossible for humans to access. This paper deals with the application of these techniques to study and produce information on very extreme topography, which is useful for planning works on this terrain or monitoring it over time to study its evolution. The methodology starts with the execution of UAV flights over the studied cut slope, one with the camera vertically oriented and the other at 45º with respect to that orientation. Ground control points (GCPs) and check points (CPs) were measured for georeferencing and accuracy measurement purposes. The orthophoto was obtained by projecting onto a plane fitted to the studied surface. Moreover, since a digital surface model (DSM) is not able to faithfully represent such extreme morphology, the information needed to plan works or to monitor the slope was derived from the point cloud generated during the photogrammetric process. A computer program was developed to generate contour lines and cross sections from the point cloud, which was able to represent all terrain geometric characteristics, such as several Z coordinates for a given planimetric (X, Y) point. Results yield a root mean square error (RMSE) in the X, Y and Z directions of 0.053 m, 0.070 m and 0.061 m, respectively. Furthermore, a comparison between the contour lines and cross sections generated from the point cloud with the developed program, on the one hand, and those generated from the DSM, on the other, showed that the former are capable of representing terrain geometric characteristics that the latter cannot. The methodology proposed in this work has been shown to be an adequate alternative for generating manageable information, such as orthophotos, contour lines and cross sections, useful for the elaboration, for example, of projects for repair or maintenance works on cut slopes with extreme topography. Coastal Flooding & Sea-Level Rise: Calculation of Flood Risk Using Tide-Gauge Measurements Subject: Earth Sciences, Oceanography Keywords: Sea Level Rise; coastal flooding; JPM; Gumbel; exceedance; extreme value statistics; flood return period; sea-defences Online: 1 June 2022 (05:58:45 CEST) Local estimates of coastal flood risk are required for coastal planning and development, including the location and design of sea-defences and coastal buildings, such as harbours and associated infrastructure. This paper discusses the use of three parameters associated with estimating such risks: the flood return period, the instantaneous flood probability and the flood design risk, and it describes the mathematical background for their derivation. The discussion is extended to include the effects of sea level rise and how it can be incorporated into the calculations.
Flood height can vary quite rapidly with distance along the coast, being affected by coastal topology, which may magnify or diminish the tidal and surge effects. Similarly, land heave influences the local effects of sea level rise and can be influenced by water extraction, tectonic movements and melting ice. Tide gauge measurements provide a local historical record from which the various parameters can be retrieved. This paper discusses the algorithms used to derive these measures from tide-gauge records. The figures have been derived for four tide gauges located on the UK east coast. Integrating Monte Carlo and the Hydrodynamic Model for Predicting Extreme Water Levels in River Systems Wen-Cheng Liu, Hong-Ming Liu Subject: Engineering, Civil Engineering Keywords: extreme water level; hydrodynamic model; Monte Carlo; joint probability; model calibration and verification; Danshuei River system Estimates of extreme water level return periods in river systems are crucial for hydraulic engineering design and planning. Recorded historical water level data of Taiwan's rivers are not long enough for traditional frequency analyses when predicting extreme water levels for different return periods. In this study, the integration of a one-dimensional flash flood routing hydrodynamic model with Monte Carlo simulation was developed to predict extreme water levels in the Danshuei River system of northern Taiwan. The numerical model was calibrated and verified with observed water levels using four typhoon events. The results indicated a reasonable agreement between the model simulation and observation data. Seven parameters, including the astronomical tide and surge height at the mouth of the Danshuei River and the river discharge at five gauge stations, were adopted to calculate the joint probability and generate stochastic scenarios via the Monte Carlo simulation. The validated hydrodynamic model driven by the stochastic scenarios was then used to simulate extreme water levels for further frequency analysis. The design water level was estimated using different probability distributions in the frequency analysis at five stations. The design high-water levels for a 200-year return period at Guandu Bridge, Taipei Bridge, Hsin-Hai Bridge, Da-Zhi Bridge, and Chung-Cheng Bridge were 2.90 m, 5.13 m, 6.38 m, 6.05 m, and 9.94 m, respectively. The estimated design water levels plus the freeboard are proposed and recommended for further engineering design and planning. Delicate Comparison of the Central and Non-central Lyapunov Ratios with Applications to the Berry–Esseen Inequality for Compound Poisson Distributions Vladimir Makarenko, Irina Shevtsova Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Lyapunov fraction; extreme problem; moment inequality; central limit theorem; Berry–Esseen inequality; compound Poisson distribution; normal approximation For each $t\in(-1,1)$, exact values of the least upper bounds $$ H(t)=\sup_{\mathbb{E} X=t,\ \mathbb{E} X^2=1} \frac{\mathbb{E}|X|^3}{\mathbb{E}|X-t|^3},\quad \sup_{\mathbb{E} X=t,\ \mathbb{E} X^2=1} \frac{L_1(X)}{L_1(X-t)} $$ are obtained, where $L_1(X)=\mathbb{E}|X|^3/(\mathbb{E} X^2)^{3/2}$ is the non-central Lyapunov ratio. It is demonstrated that these values are attained only at two-point distributions.
As a corollary, S. Shorgin's conjecture is proved, which states that the exact value is $$ \sup\frac{L_1(X)}{L_1(X-\mathbb{E} X)}= \frac{\sqrt{17 + 7\sqrt7}}{4} = 1.4899\ldots, $$ where the supremum is taken over all non-degenerate distributions of the random variable $X$ with a finite third moment. Also, in terms of the central Lyapunov ratio $L_1(X-\mathbb{E} X)$, an analog of the Berry–Esseen inequality is proved for Poisson random sums of independent identically distributed random variables with the constant $$ 0.3031\cdot H\!\left(\frac{\mathbb{E} X}{\sqrt{\mathbb{E} X^2}}\right) \left(1-\frac{(\mathbb{E} X)^2}{\mathbb{E} X^2}\right)^{3/2} \le 0.3031\cdot \frac{\sqrt{17 + 7\sqrt7}}{4} < 0.4517, $$ where $\mathcal{L}(X)$ is the common distribution of the summands. The Imprint of Recent Meteorological Events on Boulder Deposits along the Mediterranean Rocky Coasts Marco Delle Rose, Paolo Martano Subject: Earth Sciences, Geophysics Keywords: Coastal storm; Wind wave; Storm surge; Extreme coastal water level; Boulder dynamics; Geomorphological proxy; Interdisciplinary climate research In this review, the potential of an emerging field of interdisciplinary climate research, that is, Coastal Boulder Deposits (CBDs) as natural archives for intense storms, is explored with particular reference to the Mediterranean region. First, the identification of the pertinent scientific articles was performed by using the Web of Science (WoS) engine. Then, the selected studies were analysed to feature CBDs produced and/or activated during the last half century. Next, the meteorological events responsible for the cases reported in the literature were analysed in some detail using the web archives of the Globo-Bolam-Moloch model cascade. The study of the synoptic and local characteristics of the storms involved in the documented cases of boulder production/activation proved useful for assessing the suitability of the selected sites as geomorphological storm proxies. It is argued that a close and fruitful collaboration involving several scientific disciplines is required to develop this climate research field. The Compound Inverse Rayleigh as an Extreme Wind Speed Distribution and its Bayes Estimation Elio Chiodo, Maurizio Fantauzzi, Giovanni Mazzanti Subject: Engineering, Electrical & Electronic Engineering Keywords: renewable energy; bayes estimation; beta distribution; lognormal distribution; compound inverse Rayleigh distribution; extreme values; safety; wind power The paper deals with the Compound Inverse Rayleigh distribution, shown to constitute a proper model for the characterization of the probability distribution of extreme values of wind-speed, a topic which is gaining growing interest in the field of renewable generation assessment, both in view of wind power production evaluation and of wind-tower mechanical reliability and safety. The first part of the paper illustrates such a model, starting from its origin as a generalization of the Inverse Rayleigh model - already proven to be a valid model for extreme wind-speeds - by means of a continuous mixture generated by a Gamma distribution on the scale parameter, which gives rise to its name. Moreover, its validity for interpreting different field data is illustrated, also by means of numerous numerical examples based upon real wind speed measurements. Then, a novel Bayes approach for the estimation of such an extreme wind-speed model is proposed. The method relies upon the assessment of prior information in a practical way that should be easily available to system engineers.
In practice, the method allows one to express prior beliefs both in terms of parameters, as is customary, and/or in terms of probabilities. The results of a large set of numerical simulations - using typical values of wind-speed parameters - are reported to illustrate the efficiency and the accuracy of the proposed method. The validity of the approach is also verified in terms of its robustness with respect to significant differences compared to the assumed prior information. Earthquake Safety Assessment of Buildings Through Rapid Visual Screening Ehsan Harirchian, Tom Lahmer, Sreekanth Buddhiraju, Kifaytullah Mohammad, Amir Mosavi Subject: Engineering, Civil Engineering Keywords: Buildings; earthquake safety assessment; extreme events; urban sustainability; seismic assessment; rapid visual screening; reinforced concrete buildings Online: 6 February 2020 (10:50:33 CET) Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute greatly to sustainable urban development through identification of and insight into optimum materials and structures. While the vulnerability of structures mainly depends on the structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, which is a qualitative procedure for estimating structural scores for buildings, suitable for medium- to high-seismicity cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 11 May 2003 earthquake. The results demonstrate that RVS methods are a vital tool for preliminary damage estimation. Furthermore, the comparative analysis showed that FEMA P-154 creates an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and more practical estimations, respectively. TPE-RBF-SVM Model for Soybean Categories Recognition in Selected Hyperspectral Bands Based on Extreme Gradient Boosting Feature Importance Values Qinghe Zhao, Zifang Zhang, Yuchen Huang, Junlong Fang Subject: Engineering, General Engineering Keywords: Hyperspectral Technology; Non-destructive Testing; Soybean; Machine Learning; Support Vector Machine; Extreme Gradient Boosting; Tree-structured Parzen Estimator Soybeans with insignificant differences in appearance can have large differences in their internal physical and chemical components; therefore, follow-up storage, transportation and processing require targeted differential treatment. A fast and effective machine learning method based on hyperspectral data of soybeans is designed in this paper as a non-destructive method for recognizing categories. A hyperspectral-image dataset with 2,299 soybean seeds in 4 categories is collected; ten features are selected by the extreme gradient boosting algorithm from 203 hyperspectral bands in the range 400 to 1000 nm; and a Gaussian radial-basis-kernel support vector machine, optimized by the Tree-structured Parzen Estimator algorithm, is built as the TPE-RBF-SVM model for pattern recognition of soybean categories.
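A rough sketch of the two-stage pipeline just described: rank bands by gradient-boosting feature importance, keep the top ten, and classify with an RBF-kernel SVM. In the sketch below, sklearn's GradientBoostingClassifier stands in for XGBoost, the data are random placeholders, and the TPE hyperparameter search is omitted.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
# Hypothetical hyperspectral data: 600 seeds x 203 bands, 4 soybean categories.
X = rng.normal(size=(600, 203))
y = rng.integers(0, 4, size=600)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 1: rank bands by gradient-boosting feature importance
# (a stand-in for XGBoost feature importance values).
gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
top10 = np.argsort(gb.feature_importances_)[::-1][:10]

# Step 2: RBF-kernel SVM on the 10 selected bands; the paper tunes C and gamma
# with a TPE optimizer, which is left out here for brevity.
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train[:, top10], y_train)
print("selected bands:", top10)
print("test accuracy:", accuracy_score(y_test, svm.predict(X_test[:, top10])))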
The metrics of TPE-RBF-SVM are significantly improved compared with other machine learning algorithms. The accuracy is 0.9165 on the independent test dataset, which is 9.786% higher than the vanilla RBF-SVM model and 10.02% higher than the extreme gradient boosting model. An Ecological Dynamics Approach to Understanding Human-Environment Interactions in the Adventure Sport Context – Implications for Research and Practice Tuomas Immonen, Eric Brymer, Keith Davids, Timo Jaakkola Subject: Behavioral Sciences, Other Keywords: adventure sport; extreme sport; ecological dynamics; transdisciplinary; form of life; skill; skill development; decision-making; freeriding; avalanche education The last few decades have witnessed a surge of interest in adventure sports, and an emerging research focus on these activities. However, recent conceptual analyses and scientific reviews have highlighted a major, fundamental question that remains unresolved: what constitutes an adventure sport (and are they 'sports' at all)? Despite several proposals for definitions, the field still seems to lack a shared conceptualization. This deficit may be a serious limitation for research and practice, restricting the development of a more nuanced theoretical explanation of participation and practical implications within and across adventure sports. In this article we address another crucial question: how can adventure sports be better understood for research and practice? We briefly summarize previous definitions to address evident confusion and lack of conceptual clarity in the discourse. Alternatively, we propose how an ecological perspective on human behaviors, as interactions with the environment, may provide an appropriate conceptualization to guide and enhance future research and practice, using examples from activities such as freeride skiing / snowboarding, white-water kayaking, climbing, mountaineering and the fields of sport science, psychology and avalanche research and education. We draw on ecological dynamics as a transdisciplinary approach to discuss how this holistic framework presents a more detailed, nuanced, and precise understanding of adventure sports. Integration of Sentinel-3 and MODIS Vegetation Indices with ERA-5 Agro-meteorological Indicators for Operational Crop Yield Forecasting Jędrzej S. Bojanowski, Sylwia Sikora, Jan P. Musiał, Edyta Woźniak, Katarzyna Dąbrowska-Zielińska, Przemysław Slesiński, Tomasz Milewski, Artur Łączyński Subject: Earth Sciences, Geoinformatics Keywords: satellite data; machine learning; data calibration; thermal time; growing degree days; Extreme Gradient Boosting; crop yield; crop monitoring Timely crop yield forecasts at the national level are essential to support food policies, to assess agricultural production and to subsidize regions affected by food shortages. This study presents an operational crop yield forecasting system for Poland that employs freely available satellite and agro-meteorological products provided by the Copernicus programme. The crop yield predictors consist of: (1) vegetation condition indicators provided daily by Sentinel-3 OLCI (optical) and SLSTR (thermal) imagery, (2) a backward extension of Sentinel-3 data (before 2018) derived from cross-calibrated MODIS data, and (3) air temperature, total precipitation, surface radiation, and soil moisture derived from the ERA-5 climate reanalysis generated by the European Centre for Medium-Range Weather Forecasts.
The crop yield forecasting algorithm is based on thermal time (growing degree days derived from ERA-5 data) to better follow the crop development stage. Recursive feature elimination is used to derive an optimal set of predictors for each administrative unit, which are ultimately employed by the Extreme Gradient Boosting regressor to forecast yields using official yield statistics as a reference. According to intensive leave-one-year-out cross-validation for the 2000–2019 period, the relative RMSE values for NUTS-2 units are 8% for winter wheat, and 13% for winter rapeseed and maize. For the LAU units, the corresponding values are 14% for winter wheat, 19% for winter rapeseed, and 27% for maize. The system is designed to be easily applicable in other regions and to be easily adaptable to cloud computing environments (such as DIAS or Amazon AWS), where data sets from the Copernicus programme are directly accessible. Establishment of Rainfall Intensity-Duration-Frequency Equations and Curves Used to Design an Appropriate and Sustainable Hydraulic Structure for Controlling Flood in Nyabugogo Catchment-Rwanda Nizeyimana Jean Claude, Shanshan Lin, Ndayisenga Fabrice, Gratien Twagirayezu, Junaid Khan, Phyoe Marnn, Ahmed Ali Ahmed Al-Masnay Yousef, Usman Kaku Dawuda, Musaed Abdullah Al-Shaibah Bazel, Ahmed Al-aizari Hussein, Olivier Irumva Subject: Engineering, Automotive Engineering Keywords: Adequate drainage structures; Rainfall IDF Curve relationship; predicted peak rate of runoff (Qlogy); Gumbel's Extreme Value Distribution Method Due to the increase in the emission of greenhouse gases, the hydrologic cycle is being altered on a daily basis. This has affected the relations between the intensity, duration, and frequency of rainfall events. Intensity Duration Frequency (IDF) curves describe the relationship between rainfall intensity, rainfall duration and return period. IDF curves are one of the most frequently applied tools in water resource engineering, in areas such as the operation, planning and design of water resource projects, or for numerous engineering projects aimed at controlling floods. In particular, IDF curves for precipitation address problems of improper drainage systems or conditions and the extreme character of precipitation, which are the main causes of floods in the Nyabugogo catchment. This study aims to establish empirical rainfall IDF equations, curves and hydrological discharge (predicted peak rate of runoff (Qlogy)) equations for eight districts that will be used for designing appropriate and sustainable hydraulic structures for controlling floods, to reduce the potential loss of human and aquatic life, the degradation of water, air and soil quality, and the property damage and economic losses caused by floods in the Nyabugogo catchment. However, goodness-of-fit tests revealed that Gumbel's extreme-value distribution method appears to provide the most appropriate fit, compared with the Pearson type III distribution, for validating the Intensity-Duration-Frequency curves and equations through the use of daily annual rainfall data for each meteorological station. The findings of the study show that the intensity of rainfall increases with a decrease in rainfall duration. Additionally, a rainfall event of any given duration will have a higher intensity if its return period is longer, while the predicted peak rate of runoff (Qlogy) also increases with an increase in rainfall intensity.
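The IDF relationship described above can be sketched by fitting Gumbel's extreme-value distribution to annual maximum rainfall depths for each duration and reading off the depth (hence intensity) for a chosen return period. The sketch below uses synthetic data and assumed parameter values only.

import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(6)
# Hypothetical annual maximum rainfall depths (mm) for a few storm durations at one station.
durations_h = [1, 3, 6, 24]
annual_max = {d: rng.gumbel(loc=20 * d ** 0.4, scale=6 + d, size=40) for d in durations_h}

T = 50  # return period in years
for d in durations_h:
    loc, scale = gumbel_r.fit(annual_max[d])
    depth = gumbel_r.isf(1.0 / T, loc, scale)   # depth exceeded on average once in T years
    intensity = depth / d                        # convert depth to intensity (mm/h)
    print(f"{d:2d} h duration: {T}-year depth = {depth:6.1f} mm, intensity = {intensity:5.2f} mm/h")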
The Effectiveness of Intervening on Social Isolation to Reduce Mortality during Heat Waves in Aged Population: A Retrospective Ecological Study Stefano Orlando, Claudia Mosconi, Carolina De Santo, Leonardo Emberti Gialloreti, Maria Chiara Inzerilli, Olga Madaro, Sandro Macinelli, Fausto Ciccacci, Maria Cristina Marazzi, Leonardo Palombi, Giuseppe Liotta Subject: Medicine & Pharmacology, General Medical Research Keywords: extreme weather; heat waves; environment and public health; aged; older adults; social behaviour; interpersonal relation; social isolation; mortality; loneliness Background: Heat waves are correlated with increased mortality in the aged population. Social isolation is known as a vulnerability factor. This study aims at evaluating the correlation between an intervention to reduce social isolation and the increase in mortality in the population over 80 during heat waves. Methods: The study adopts a retrospective ecological design. We compared the excess mortality rate (EMR) in the over-80 population during heat waves in urban areas of Rome (Italy) where a program to reduce social isolation was implemented with that in areas where it was not implemented. We measured mortality in the summer periods from 2015 to 2019 compared with 2014 (a year without heat waves). Winter mortality, cadastral income and the proportion of people over 90 were included in the multivariate Poisson regression. Results: The EMR in the intervention and control areas was 2.70% and 3.81%, respectively, with a rate ratio of 0.70 (c.i. 0.54 - 0.92, p-value 0.01). The Incidence Rate Ratio (IRR) of the interventions with respect to the controls is 0.76 (c.i. 0.59 - 0.98). After adjusting for other variables, the IRR was 0.44 (c.i. 0.32 - 0.60). Conclusions: Reducing social isolation could limit the impact of heat waves on the mortality of the elderly population. MSCliGAN: A Structure-Informed Generative Adversarial Model for Multi-Site Statistical Downscaling of Extreme Precipitation Using a Multi-Model Ensemble Chiranjib Chaudhuri, Colin Robertson Subject: Earth Sciences, Atmospheric Science Keywords: Multi-site statistical downscaling; Generative Adversarial Network; Combination of Errors; Convolutional Neural Network; Structural Similarity Index; Wasserstein GAN; extreme precipitation Although the statistical methods of downscaling climate data have progressed significantly, the development of high-resolution precipitation products continues to be a challenge. This is especially true when interest centres on downscaling values over several study sites. In this paper, we report a new downscaling method, termed the Multi-Site Climate Generative Adversarial Network (MSCliGAN), which can simulate annual maximum precipitation at the regional scale during the 1950-2010 period in different cities in Canada by using different AOGCMs from the Coupled Model Intercomparison Project 6 (CMIP6) as input. Auxiliary information provided to the downscaling model included topography and land-cover. The downscaling framework uses a convolutional encoder-decoder U-Net to create the generative network and a convolutional encoder network to create the critic network. An adversarial training strategy is used to train the model. The critic/discriminator uses the Wasserstein distance as a loss measure, while the generator is optimized using a sum of a content loss based on the Nash-Sutcliffe model efficiency (NS), a structural loss based on the structural similarity index (SSIM), and an adversarial (Wasserstein distance) loss.
Downscaling results show that downscaling AOGCMs by incorporating topography and land-use/land-cover can produce spatially coherent fields close to observations over multiple sites. We believe the model has sufficient downscaling potential in data-sparse regions where climate change information is often urgently needed. Seasonal Rainfall Variability over Southern Ghana Mohammed Braimah, Vincent Antwi Asante, Maureen Ahiataku, Samuel Owusu Ansah, Frederick Otu-Larbi, Bashiru Yahaya, John Bright Ayabila Subject: Earth Sciences, Atmospheric Science Keywords: rainfall trend; Mann Kendall's test; Sen's slope estimator; climate statistics; seasonal rainfall; standardized anomaly index; extreme precipitation indicators; rainfall variability; southern Ghana Rainfall variability has resulted in extreme events like devastating floods and droughts, which are the main cause of human vulnerability to precipitation in West Africa. Attempts have been made by previous studies to understand rainfall variability over Ghana, but these have mostly focused on the major rainy season of April-July, leaving a gap in our understanding of the variability in the September-November season, which is a very important aspect of the Ghanaian climate system. The current study seeks to close this knowledge gap by employing statistical tools to quantify variabilities in rainfall amounts, rain days, and extreme precipitation indices in the minor rainfall season over Ghana. We find extremely high variability in rainfall, with a coefficient of variation (CV) between 25.3% and 70.8%, and moderate to high variability in rain days (CV = 14.0%-48.8%). Rainfall amount was found to be highest over the middle sector (262.7 mm – 400.2 mm) but lowest over the east coast (125.2 mm – 181.8 mm). Analysis of the second rainfall season using the Mann-Kendall test shows a non-significant trend in rainfall amount and extreme indices (R10, R20, R95p, and R99p) for many places in southern Ghana. Rainfall anomaly indices show that the middle sector recorded above-normal precipitation, whereas the opposite holds for areas in the transition zone. The results of this work provide a good understanding of rainfall in the minor rainfall season and may be used for planning purposes. Compatibility of Drought Magnitude Based Method With SPA for Assessing Reservoir Volumes: Analysis Using Canadian River Flows Tribeni C. Sharma, Umed Panu Subject: Engineering, Automotive Engineering Keywords: Deficit volume; drought intensity; drought magnitude; extreme number theorem; Markov chain; moving average smoothing; standardized hydrological index; sequent peak algorithm; reservoir volume The traditional sequent peak algorithm (SPA) was used to assess the reservoir volume (VR) for comparison with the deficit volume, DT (the subscript T representing the return period), obtained from the drought magnitude (DM) based method with the draft level set at the mean annual flow on 15 rivers across Canada. At an annual scale, the SPA-based estimates were found to be larger, by an average of nearly 70%, compared to the DM-based estimates. To bring the DM-based estimates into parity with the SPA-based values, the analysis was carried out through counting and analytical procedures involving only the annual SHI (standardized hydrological index, i.e. standardized values of annual flows) sequences. It was found that MA2 or MA3 (moving average of 2 or 3 consecutive values) of SHI sequences were required to match the counted values of DT to VR.
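The Mann-Kendall test and Sen's slope used in the Ghana rainfall analysis above have a compact closed form; below is a minimal implementation (without tie correction) run on a synthetic seasonal-rainfall series, for illustration only.

import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test and Sen's slope (no tie correction) for a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = x.size
    i, j = np.triu_indices(n, k=1)
    diffs = x[j] - x[i]
    s = np.sign(diffs).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))           # two-sided p-value
    sen_slope = np.median(diffs / (j - i))   # median of all pairwise slopes
    return z, p, sen_slope

rng = np.random.default_rng(7)
seasonal_rainfall = 250 + rng.normal(0, 60, 35) + 0.5 * np.arange(35)  # weak upward drift
z, p, slope = mann_kendall(seasonal_rainfall)
print(f"Z = {z:.2f}, p = {p:.3f}, Sen's slope = {slope:.2f} mm/year")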
Further, the inclusion of the mean as well as the variance of the drought intensity in the analytical procedure, with the aforesaid smoothing, made DT comparable to VR. The distinctive point of the DM-based method is that no assumption is necessary, such as the reservoir being full at the beginning of the analysis, as is the case with SPA. Generalizations of the R-Matrix Method to the Treatment of the Interaction of Short Pulse Electromagnetic Radiation with Atoms Barry Schneider, Kathryn R Hamilton, Klaus Bartschat Subject: Physical Sciences, Atomic & Molecular Physics Keywords: B-spline R-matrix; R-matrix with time dependence; intense short-pulse extreme ultraviolet radiation; time-dependent Schrödinger equation; Arnoldi-Lanczos propagation Since its initial development in the 1970s by Phil Burke and his collaborators, the R-matrix theory and associated computer codes have become the de facto approach for the calculation of accurate data for general electron-atom/ion/molecule collision and photoionization processes. The use of a non-orthonormal set of orbitals based on B-splines, now called the B-spline R-matrix (BSR) approach, was pioneered by Zatsarinny. It has considerably extended the flexibility of the approach and particularly improved the treatment of complex many-electron atomic and ionic targets, for which accurate data are needed in many modelling applications for processes involving low-temperature plasmas. Both the original R-matrix approach and the BSR method have been extended to the interaction of short, intense electromagnetic (EM) radiation with atoms and molecules. Here we provide an overview of the theoretical tools that were required to facilitate the extension of the theory to the time domain. As an example of a practical application, we show results for two-photon ionization of argon by intense short-pulse extreme ultraviolet radiation. Energetic Materials Performance Enhancement Through Predictive Programming the Spatial Structure and Physics-Chemical Properties of the Functionalized Carbon-Based Nano-Sized Additive Alexander Lukin Subject: Materials Science, Biomaterials Keywords: energetic materials; solid propulsion systems; extreme thrust control; reaction zones; functionalized carbon-based nanostructured metamaterials; nano-sized additives; carbon atomic wires; sp1-hybridized bonds; ion-assisted pulsed-plasma deposition; self-organizing of the nanostructures; universal phenomena of nano-cymatics; electrostatic field; synergistic effect A new generation of nano-technologies is expanding solid propulsion capabilities and increasing their relevance for versatile and manoeuvrable micro-satellites with safe high-performance propulsion. We propose an innovative concept connected with the application of a new synergistic effect of energetic-material performance enhancement and reaction-zone programming for the next-generation small-satellite multimode solid propulsion system. The main idea of the suggested concept is to manipulate the self-organized wave-pattern excitation phenomenon, the properties of the energetic-material reaction zones and the localization of the energy-release areas. This synergistic effect can be achieved through the application of functionalized carbon-based nanostructured metamaterials as nano-additives, along with simultaneous manipulation of their properties through an electrostatic field.
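The sequent peak algorithm (SPA) referred to in the reservoir-volume study above can be written in a few lines: accumulate the deficit between a constant draft and the inflow, and take the largest accumulated deficit as the required storage. Below is a simplified single-pass sketch on hypothetical annual flows (the classical formulation iterates over a doubled record).

import numpy as np

def sequent_peak_storage(inflow, draft):
    """Sequent peak algorithm sketch: required reservoir storage for a constant draft,
    assuming the reservoir starts full. K[t] = max(0, K[t-1] + draft - inflow[t])."""
    k = 0.0
    k_max = 0.0
    for q in inflow:
        k = max(0.0, k + draft - q)   # accumulated deficit (storage needed so far)
        k_max = max(k_max, k)
    return k_max

rng = np.random.default_rng(8)
annual_flow = rng.lognormal(mean=3.0, sigma=0.4, size=50)   # hypothetical annual flows
draft = annual_flow.mean()                                  # draft level set at the mean annual flow
print("required reservoir volume VR:", round(sequent_peak_storage(annual_flow, draft), 2))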
The mentioned effect will be controlled through predictive programming of both the spatial structure and the physico-chemical properties of the functionalized carbon-based nano-additives, and through electromagnetic control of the self-organized wave-pattern excitation and of the micro- and nano-scale oscillatory networks in the energetic-material reaction zones. The suggested new concept makes it possible to increase the energetic-material regression rate and the thrust of the solid propulsion system with minimal additional energy consumption.
CommonCrawl
The Center for Geometry and Physics is loosely organized into multiple research groups, each of which comprises a senior scholar who leads the group and several researchers whose areas of expertise and interest overlap synergistically. A brief description of each group's areas of focus, research goals, and members can be seen below. Symplectic topology, Hamiltonian dynamics and mirror symmetry Team Leader: Yong-Geun Oh The current status of symplectic topology resembles that of classical topology in the middle of the twentieth century. Over time, a systematic algebraic language was developed to describe problems in classical topology. Similarly, a language for symplectic topology is emerging, but has yet to be fully developed. The development of this language is much more challenging both algebraically and analytically than in the case of classical topology. The relevant homological algebra of $A_\infty$ structures is harder to implement in the geometric situation due to the analytical complications present in the study of pseudo-holomorphic curves or "instantons" in physical terms. Homological mirror symmetry concerns a certain duality between categories of symplectic manifolds and complex algebraic varieties. The symplectic side of the story involves an $A_\infty$ category, called the Fukaya category, which is the categorified version of Lagrangian Floer homology theory. In the meantime, recent developments in the area of dynamical systems have revealed that the symplectic aspect of area-preserving dynamics in two dimensions has the potential to further our understanding of these systems in deep and important ways. Research themes and research members Elijah Fender (The interplay of dynamics and symplectic/contact geometry) Volker Genz (Explicit problems in representation theory) Hongtaek Jung (Symplectic structures of Hitchin components and Anosov representations) Sungkyung Kang (Heegaard Floer theory, knot theory) Jongmyeong Kim (Homological mirror symmetry) Seungwon Kim (Topology and geometry) Taesu Kim (Homotopy theoretic aspects of symplectic geometry) Norton Lee (Supersymmetry, Integrable Systems, Quantum Field Theories, Mathematical Physics) Sangjin Lee (Lagrangian foliations, Symplectic mapping class group, Fukaya category) Yong-Geun Oh (symplectic topology, Hamiltonian dynamics and mirror symmetry) Yat-Hin Suen (Complex geometry, Symplectic Geometry, SYZ Mirror Symmetry, Homological Mirror Symmetry, Mathematical Physics) Arithmetic, birational and complex geometry of Fano varieties Team Leader: Jihun Park Fano varieties are algebraic varieties whose anticanonical classes are ample. They are classical and fundamental varieties that play many significant roles in contemporary geometry. Verified or expected geometric and algebraic properties of Fano varieties have attracted attention from many geometers and physicists. In spite of extensive studies on Fano varieties for more than a century, numerous features of Fano varieties are still shrouded in a veil of mist. Contemporary geometry, however, requires a more comprehensive understanding of Fano varieties.
Sai Somanjana Sreedhar Bhamidi (Algebraic K-theory, algebraic cycles, algebraic stacks and derived categories) Shinyoung Kim (Complex geometry) Rahul Kumar (Analytic number theory, special functions, and the theory of partitions) Eunjeng Lee (Toric topology, Newton-Okounkov bodies, representation theory, and algebraic combinatorics) Jihun Park (Arithmetic, birational and complex geometry of Fano varieties) Samarpita Ray (Category Theory, Algebraic Geometry) Sumit Roy (Moduli of vector bundles, Hitchin system, Higgs bundles, Complex algebraic geometry, Differential geometry of bundles) Haowu Wang (Theory of modular forms and its applications) Yuto Yamamoto (Tropical geometry) Team Leaders: Alexander Aleksandrov and Yong-Geun Oh The mutual relevance of, and deep interconnections between, theoretical physics and mathematics are well established. This subject is universally appreciated for its integrative role and for being one of the most fruitful sources of new ideas, theories and methods, and it has numerous powerful applications to problems in mathematics, in particular in geometry and topology. In recent decades, there have been various developments in supersymmetric quantum field theories and string/M-theory. In this context, matrix models, integrable systems, Chern-Simons gauge theory, Landau-Ginzburg theory and mirror symmetry, and topological quantum field theories are the main themes of research pursued in this group. Alexander Aleksandrov (Mathematical physics, random matrix models, integrable systems, enumerative geometry) Saswati Dhara (Theoretical high energy physics, Chern-Simons theory in knot invariants, conformal field theory, topological field theory) Yifan Li (Algebraic geometry, algebraic topology and mathematical physics) Hisayoshi Muraki (Noncommutative geometry, nongeometric backgrounds in supergravity, discretized geometry, matrix model) Abbas Mohamed Sherif (Einstein's general relativity theory, interfacing differential geometry, geometric analysis and general relativity)
CommonCrawl
How to find volume

Multiply the area of the base of the pyramid by its height, and divide by 3 to find the volume. Remember that the formula for the volume is V = (1/3)bh. In our example pyramid, which had a base with area 36 and height 10, the volume is 36 × 10 × 1/3, or 120. The volume of a waffle cone with a circular base of radius 1.5 in and height 5 in can be computed using the equation below: volume = 1/3 × π × 1.5² × 5 = 11.781 in³. Bea also calculates the volume of the sugar cone, finds that the difference is < 15%, and decides to purchase a sugar cone.

To calculate the volume of a box or rectangular tank you need three dimensions: width, length, and height. They are usually easy to measure due to the regularity of the shape. Online calculators exist for the volume of a capsule, cone, conical frustum, cube, cylinder, hemisphere, pyramid, rectangular prism, triangular prism and sphere.

For instance, if you know a cube's surface area, all you need to do to find its volume is to divide the surface area by 6, then take the square root of this value to find the length of the cube's sides. From here, all you need to do is cube the length of the side to find the volume as normal.

To calculate the volume of any space, measure the length, width and height of the room. Multiply the length by the width and then by the height. Measuring the volume of non-rectangular rooms is a bit more complicated, requiring the division of the room into measurable sections.

How do you calculate the surface area to volume ratio of a cylinder? Find the volume of the cylinder using the formula πr²h. Find the surface area of the cylinder using the formula 2πrh + 2πr². Make a ratio out of the two formulas, i.e. πr²h : 2πrh + 2πr². Alternatively, simplify it to rh : 2(h + r).

Typical uses: the volume of a package to be dispatched, to add to shipping paperwork; the gravel volume required to fill a path, car park or driveway; rectangular storage tank capacity; car, truck or van load space capacity; car load volume to move storage; the maximum volume a water tank will hold; how much fuel is required to fill a tank.

Volume calculation: this is the calculated volume of the cylindrical object, which the tool derives by entering the values for length and diameter into the formula described above; you can display the volume in different units by changing the unit selection below the result. You can also solve for volume in the ideal gas law equation given pressure, moles, temperature and the universal gas constant.

The volume of a rectangular box can be calculated if you know its three dimensions: width, length and height. The formula is then: volume of the box = width × length × height.

There are three common formulas used to calculate specific volume (ν): ν = V / m, where V is volume and m is mass; ν = 1/ρ = ρ⁻¹, where ρ is density; and ν = RT / (PM), where R is the ideal gas constant, T is temperature, P is pressure, and M is the molar mass.

Volume of rectangle-based solids: whereas the basic formula for the area of a rectangular shape is length × width, the basic formula for volume is length × width × height.
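Several of the rules collected above reduce to one-liners in code. The following is a minimal Python sketch (not part of the original page) using the worked numbers quoted above — a pyramid with base area 36 and height 10, a cone of radius 1.5 in and height 5 in, and a plain box with illustrative dimensions:

import math

def box_volume(length, width, height):
    # basic rectangle-based solid: V = length * width * height
    return length * width * height

def pyramid_volume(base_area, height):
    # V = (1/3) * base area * height
    return base_area * height / 3

def cone_volume(radius, height):
    # a cone is a pyramid with a circular base: V = (1/3) * pi * r^2 * h
    return math.pi * radius ** 2 * height / 3

print(pyramid_volume(36, 10))           # 120.0
print(round(cone_volume(1.5, 5), 3))    # 11.781 (cubic inches)
print(box_volume(2, 3, 4))              # 24 (any consistent unit, cubed)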
How you refer to the different dimensions does not change the calculation: you may, for example, use 'depth' instead of 'height' In this section, the first of two sections devoted to finding the volume of a solid of revolution, we will look at the method of rings/disks to find the volume of the object we get by rotating a region bounded by two curves (one of which may be the x or y-axis) around a vertical or horizontal axis of rotation Height × width × depth = volume If the height, width and depth are measured in cm, the answer will be cm³. If the height, width and depth are measured in m, the answer will be m³ 9cm × 6cm × 10cm =.. The below given is the online volume of water calculator cylinder to calculate the liquid volume filled in a vertical, horizontal, rectangle, horizontal oval, vertical oval, horizontal capsule and vertical capsule cylinder. Just choose the cylinder type and fill the requested values in the liquid volume calculator to know the total volume and water-filled volume inside the cylinder Volume of an oval tank is calculated by finding the area, A, of the end, which is the shape of a stadium, and multiplying it by the length, l. A = π r 2 + 2ra and it can be proven that r = h/2 and a = w - h where w>h must always be true To calculate the volume of a cylinder, you need to know its height and the area of its base. Because a cylinder is a flat-top figure (a solid with two congruent, parallel bases), the base can be either the top or bottom. If you know a cylinder's height and lateral area, but not its radius, [ Assuming we want to find how much base should be added to an acid with a known concentration. In the question, it should be provided the following data: Concentration of the acid: #M_a#. Volume of the acid: #V_a# Concentration of the base: #M_b# and we will need to determine the volume of the base (#V_b#) needed to titrate the acid Welcome to Finding Volume with Unit Cubes with Mr. J! Need help with how to find volume? You're in the right place!Whether you're just starting out, or need. Volume percent is defined as: v/v % = [ (volume of solute)/ (volume of solution)] x 100% Note that volume percent is relative to the volume of solution, not the volume of solvent. For example, wine is about 12% v/v ethanol When you have the tank dimensions and the appropriate formula to solve for volume, simply enter the dimensions into the formula and solve. For example, let's find the volume of a cylinder tank that is 36″ in diameter and 72″ long. radius = 36″ ÷ 2 radius = 18� �� Learn how to find the volume and the surface area of a prism. A prism is a 3-dimensional object having congruent polygons as its bases and the bases are j.. 4 Ways to Calculate the Volume of a Cube - wikiHo Trapezoidal Prism Volume Calculator. In geometry, a triangular prism is a three-sided prism; it is a polyhedron made of a triangular base, a translated copy, and 3 faces joining corresponding sides. A right triangular prism has rectangular sides, otherwise it is oblique Finding volume given by a triple integral over the sphere, using spherical coordinates. Example. Use spherical coordinates to find the volume of the triple integral, where ???B??? is a sphere with center ???(0,0,0)??? and radius ???4??? The total volume of a cylindrical tank may be found with the standard formula for volume - the area of the base multiplied by height. A circle is the shape of the base, so its area, according to the well-known equation, is equal to π * radius². 
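The 36″-diameter, 72″-long tank mentioned above, together with the base-area observation in the last sentence, gives the cylinder-tank volume directly. A minimal Python sketch (not from the original page; the gallons conversion uses the standard 231 cubic inches per US gallon):

import math

def cylinder_volume(radius, height):
    # base area (pi * r^2) multiplied by the length or height of the tank
    return math.pi * radius ** 2 * height

v_in3 = cylinder_volume(36 / 2, 72)     # radius = 36" / 2 = 18", length = 72"
print(round(v_in3))                     # ~73287 cubic inches
print(round(v_in3 / 231, 1))            # ~317.3 US gallons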
Therefore the formula for a vertical cylinder tanks volume looks like To calculate the volume of a rectangle: Volume = height * width * depth The result of calculating the rectangle's volume is in cubic inches (or centimeters). Using the volume, the liquid capacity of the rectangle is calculated in several common metrics (cups, ounces, pints, litres, etc.) Note that the volume capacity will be slightly larger. Calculate the volume of the rock by measuring its diameter and dividing by 2 to find its radius (r). The volume of a sphere is 4/3πr 3, so if the rock has a radius of 10 inches, its volume is 418.67 cubic inches. Convert to cubic feet by multiplying by 0.00057. The result is 0.239 cubic feet How Do You Calculate the Volume of a Room HPLC Column Volume Calculator Length ( mm ): Requires a number between 1 and 500 Internal Diameter ( mm ): Requires a number between 0.1 and 500 Volume: mL Discover our HPLC offe Calculate the volume of a steel vessel it's very simple, from iProperties, but Is there anyone here that can help me with the following problem: I want to calculate the volume of a steel vessel after temperature material expansion. (1 degree) Find the volume of the solid of revolution generated when the finite region R that lies between \(y = 4 − x^2\) and \(y = x + 2 \) is revolved about the \(x\)-axis. Solution. First, we must determine where the curves \(y = 4 − x^2\) and \(y = x + 2\) intersect. Substituting the expression for \(y\) from the second equation into the first. Equation arranged to solve for volume at state 2. Inputs: pressure at state 1 (P 1) volume at state 1 (V 1) pressure at state 2 (P 2) Conversions: pressure at state 1 (P 1) = 0 = 0. pascal . volume at state 1 (V 1) = 0 = 0. meter^3 . pressure at state 2 (P 2) = 0 = 0. pascal . Solution: volume. Example 1: Find the volume of the solid generated by revolving the region bounded by y = x 2 and the x‐axis on [−2,3] about the x‐axis. Because the x‐axis is a boundary of the region, you can use the disk method (see Figure 1). Figure 1 Diagram for Example 1. The volume ( V) of the solid is . Washer metho Measure Volume. Skip to main content. ×. Support and learning; Support and learning. Learn Tell us about your issue and find the best support option. CONTACT SUPPORT . Post a Question, Get an Answer. Get answers fast from Autodesk support staff and product experts in the forums Circumference to Volume Calculator. A sphere is a perfectly round shaped object and has no edges and vertices. A circle is an object in two-dimensional space and the sphere is a three-dimensional object with all the points are at equal distance from the given point called as a center Volume of a Cylinder Calculato Calculate the volume of the solid obtained by rotating the region bounded by the parabola \(y = {x^2}\) and the square root function \(y = \sqrt x\) around the \(x-\)axis. Solution. Figure 7 The number of moles of oxygen is far less than one mole, so the volume should be fairly small compared to molar volume \(\left( 22.4 \: \text{L/mol} \right)\) since the pressure and temperature are reasonably close to standard. The result has three significant figures because of the values for \(T\) and \(P\) A pack measuring 22″ x 14″ x 9″ is 2,772 cubic inches in volume. Just multiply the three measurements together. 2,772 cubic inches translates to 45.2 liters. 
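The pack example above (22″ × 14″ × 9″) is a handy check of the cubic-inch-to-litre conversion. A minimal sketch, assuming the standard factor 1 in³ = 16.387064 cm³ (not stated in the original text):

def pack_volume_litres(length_in, width_in, height_in):
    cubic_inches = length_in * width_in * height_in
    return cubic_inches * 16.387064 / 1000   # cm^3 per in^3, then cm^3 -> litres

print(22 * 14 * 9)                               # 2772 cubic inches
print(round(pack_volume_litres(22, 14, 9), 1))   # ~45.4 litres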
A 45-liter pack will afford you all the space you will need for international travels with none of the hassles of checked luggage Specific volume is defined as the number of cubic meters occupied by one kilogram of matter.It is the ratio of a material's volume to its mass, which is the same as the reciprocal of its density.In other words, specific volume is inversely proportional to density. Specific volume may be calculated or measured for any state of matter, but it is most often used in calculations involving gases Volume Math Centers - My favorite part of this resource are the volume math centers. The math centers include a little of everything: building prisms and calculating the volume of the prisms, spinning dimensions and then calculating volume, and then task cards that review a variety of volume skills Archimedes was able to calculate the density of the crown using the water displacement method. We too can calculate the density of any irregular object by using the steps given below. 1) Find the volume of the object as described in the previous section. 2) Use a weighing machine to find the mass of the object To find the volume label with Command Prompt requires a simple command called the vol command. The next best method is to look through the volumes listed in Disk Management . Next to each drive is a letter and name; the name is the volume label Explicitly we find that dV = dx dy dz = Jdw 1 dw 2 dw 3 where J is called the Jacobian of the transformation from variables x, y, z, to w 1 , w 2 , w 3 , and is given by. The Jacobian tells you how to express the volume element dxdydz in the new coordinates We do have trapezoidal formula that would take the shape under a curve and find out the area of those area. However to make it more precise and better approximation, Simpson's rule came to rescue. Through Simpson's rule parabolas are used to find parts of curve. The approximate area under the curve are given by the following formula The shell method for finding volume of a solid of revolution uses integration along an axis perpendicular to the axis of revolution instead of parallel, as we've seen with the disk and washer methods. The nice thing about the shell method is that you can integrate around the \(y\)-axis and not have to take the inverse of functions 7.2 Finding Volume Using Cross Sections Warm Up: Find the area of the following figures: 1. A square with sides of length x 2. A square with diagonals of length x 3. A semicircle of radius x 4. A semicircle of diameter x 5. An equilateral triangle with sides of length x 6. An isosceles right triangle with legs of length Length, Width & Height to Volume Calculato you are likely already familiar with finding the area between cars and in fact if you're not I encourage you to review that on Khan Academy for example we could find this yellow area using a definite integral but what we're going to do in this video is do something even more interesting we're gonna find the volume of shapes where the base is defined in some way by the area between two curves. Volume (V) = 7 x 4 x ((3+2)/2) = 28 x 2.5 = 70. Thus, the volume of the prism is 70 cubic centimeters (cc). Example #2. A trapezoidal prism has a length of 5 cm and bottom width of 11 cm. The top width is 6 cm, and slant height is 2 cm. Find the volume of this geometric structure. Solution. The given data consists of: S = 7 cm. L = 5 cm. P = 2. Use this online calculator to calculate Mass, Density and Volume. 
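Picking up the Simpson's-rule idea described above (parabolas fitted through successive pairs of subintervals), here is a short Python sketch; the integrand and interval are placeholders rather than values from the original text:

def simpsons_rule(f, a, b, n=100):
    # n must be even: each fitted parabola spans two subintervals of width h
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Example: area under y = x**2 between 0 and 3 (exact value 9)
print(simpsons_rule(lambda x: x ** 2, 0, 3))    # ~9.0 (Simpson's rule is exact for quadratics)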
The Density of an object is its mass per unit volume, the Mass is a physical quantity expressing the amount of matter in an object and the Volume is the quantity of three-dimensional space enclosed within an object. Entering two of these propertiess in the calculator below calculates the third property of the object Length & Diameter to Volume Calculato Volume of Pentagonal Prism is the amount of the space which the shapes takes up is calculated using volume = (5/2)*(Length * Width * Height). To calculate Volume of Pentagonal Prism, you need Length (l), Width (w) and Height (h). With our tool, you need to enter the respective value for Length, Width and Height and hit the calculate button The volume or capacity of a tank can be found in a few easy steps. Of course, the calculator above is the easiest way to calculate tank volume, but follow along to learn how to calculate it yourself. Step One: Measure the Tank. The first step is to measure the key dimensions of the tank. For round tanks find the diameter and length or height Enlarged volume = n 3 x original volume. Questions & Answers. Question: If you have 2 areas in a ratio, how do we find scale factors? Answer: This works in a similar way to finding the scale factors for length and area. If you have a ratio for the areas of two similar shapes, then the ratio of the lengths would be the square roots of this area. An object's density is represented by a ratio of its mass to volume. The units, used for measurements are, therefore, mass per unit volume. Mass, if we look from a physicist's perspective, can be defined as a measure of the quantity that is inside a body, excluding such factors as the volume of an object or any forces that might be acting on the object To find the area ratios, raise the side length ratio to the second power. This applies because area is a square or two-dimensional property. Similarity and Volume Ratios How are the ratios of the surface area of solids related to their corresponding volumes This said, the volume will depend on the concentration of the acid and of the salt (if it's a solution) that you are provided with. This ratio will give you the relative concentrations required to get a buffer of the pH 4.5 Find the volume of each figure by counting up how many cubic units were used to make each figure. 5th grade. Math. Worksheet. Create Shapes and Find the Volume. Worksheet. Create Shapes and Find the Volume. Cut out and assemble the unfolded shapes, then calculate their volumes! 5th grade. Math. Worksheet Volume of Composite Shapes. Learn to find the volume of composite shapes that are a combination of two or more solid 3D shapes. Begin with counting squares, find the volume of L -blocks, and compound shapes by adding or subtracting volumes of decomposed shapes To find the actual concentrations of [A-] and [HA] in the buffer solution, you need to find the moles of each and divide by the volume of solution, 0.500L. Since the final solution has a volume of 500mL, the volumes of the two solutions must add to 500mL or 0.500L Ideal Gas Law Equations Formulas Calculator Volum For example, in order to calculate the amount of paint it's useful to know the surface area. We all know how to measure the surface area of a simple shape like a cube. But it's get more complicated when you've to calculate the surface area of an organic shape, like the Citrus Squeezer To calculate the volume of this shell, consider Figure \(\PageIndex{3}\). Figure \(\PageIndex{3}\): Calculating the volume of the shell. 
The shell is a cylinder, so its volume is the cross-sectional area multiplied by the height of the cylinder Another way of finding the volume of a rectangular prism involves dividing it into fractional cubes, finding the volume of one, and then multiplying that volume by the number of cubes that fit into our rectangular prism We are asked to find the previous volume, so we click on the V1 button. Yes, we could call the previous volume V2, and designate the present volume and pressure as V1 and P1, but the important thing is to pair up the variables correctly. When the 3 numbers are entered in the 3 boxes, make sure they are input into the correct boxes The volume equation, shown here, is used to calculate volume in cubic feet when height, width and depth are measured in inches. Volume (Cu. Ft.) = Height (in.) X Width (in.) X Depth (in.) / 1728. Calculating Internal Enclosure Volume. To find the internal volume of an enclosure, internal dimensions must be used From the properties of the geometric definition of the cross product and the scalar triple product, we can discover a link between $2 \times 2$ determinants and area, and a link between $3 \times 3$ determinants and volume.. 2 $\times$ 2 determinants and area. The area of the parallelogram spanned by $\vc{a}$ and $\vc{b}$ is the magnitude of $\vc{a} \times \vc{b}$ Finding the Volume of an Object Using Integration: Suppose you wanted to find the volume of an object. For many objects this is a very intuitive process; the volume of a cube is equal to the length multiplied by the width multiplied by the height. For a cylinder the volume is equal to the area of t Step by step procedure to find out volume of earthwork using Simpson's Rule. This article is about using Simpson's rule (also known as Prismoidal Rule) to find out the quantity of earthwork by means of contour maps. The procedure is explained here with the help of an example. In the example here below, the map is divided in to 6 horizontal. Calculate the volume. Use a calculator to find the volume of the rectangle by multiplying the length, width and height. However, when calculating the volume of a rectangle attached to a triangle, find the area of the base and multiply it by the height. The area of the base is found by multiplying the length by the width of the rectangle Volume = length x width x height Volume = 12 x 4 x 3 = 144 The Cube A special case for a box is a cube. This is when all the sides are the same length. You can find the volume of a cube by just knowing the measurement of one side. If a cube has side length a then Volume = a x a x a Volume = a 3 This is where we get the term cubed Hence, the total Riemann sum approximates the volume under the surface by the volume of a bunch of these thin boxes. In the limit as $\Delta x, \Delta y \to 0$, we obtain the total volume under the surface over the region $\dlr$, i.e., $\iint_\dlr f(x,y)\, dA$ The volume of a shape is similar to the area of a shape, in that volume measures the space inside of an object. While area measures the space inside of a 2-dimensional, or flat, shape, volume measure the space inside of a 3-dimensional object. For example, if you want to buy paint for the walls of your bedroom, you will need to calculate the area of each flat, two-dimensional (length/width) wall Find the Volume of Prisms. When a rectangular prism has whole-number edge dimensions, you can find its volume using the formula V = l ⋅ w ⋅ h, where l, w, and h represent the length, width, and height of the prism. 
Investigate whether this formula works even if the dimensions are not whole numbers 1. Finding volume of a solid of revolution using a disc method. The simplest solid of revolution is a right circular cylinder which is formed by revolving a rectangle about an axis adjacent to one side of the rectangle, (the disc). To see how to calculate the volume of a general solid of revolution with a disc cross-section, usin Area, surface area, and volume problems (6th grade) Find the volume of a 3-dimensional figure composed of 2 rectangular prisms An updated version of this instructional video is available Volume of a Box Calculator - Box Volume Calculato For a simple shape, use a formula to find volume. For irregular shapes, the easiest solution is to measure volume displaced by placing the object in a liquid. This example problem shows the steps needed to calculate the density of an object and a liquid when given the mass and volume How to Calculate Polytropic work? Polytropic work calculator uses polytropic_work = ((Final Pressure of System * Volume of gas 2)-(Initial Pressure of System * Volume of gas 1))/(1-Polytropic index) to calculate the Polytropic work, Polytropic work is the energy transferred to or from an object via the application of force along with a displacement for a system whose pressure and volume obey a. Free Cylinder Volume & Radius Calculator - calculate cylinder volume, radius step by step This website uses cookies to ensure you get the best experience. By using this website, you agree to our Cookie Policy g.. Fitting examples and sample programs have been added for the sake of greater comprehension for interested people Formulas for volume: Cone =, where r is the radius and h is the height. Cube =, where s is the length of the side. Cylinder =, where r is the radius and h is the height Specific Volume: Definition, Formulas, Example The volume of a solid is expressed in cubic measurements, such as cubic centimeters or cubic meters. The basic volume formula for volume is: volume = area (sometimes called base) x height However, finding the areas of different solids can require different formulas When a rectangular prism has whole-number edge dimensions, you can find its volume using the formula V = l ⋅ w ⋅ h, where l, w, and h represent the length, width, and height of the prism. Investigate whether this formula works even if the dimensions are not whole numbers ed by these vectors. But we know the volume in a parallelopiped deter When reflecting on the strategies they used to find the volume, students should discover that the shortcut to finding volume is to multiply length and width and height. Tall and Short Containers Use two identical pieces of construction paper to make cylinders—one tall and skinny, the other short and stout Find the volume of the cone extending from x = 0 to x = 6. The length (height) of the cone will extend from 0 to 6 The area from the segments will be from the function Quadrant mathplane.com x (These are the 'radii') dx And, the volume of the solid from rotation (revolution If a volume is known in cubic inches, it can be divided by 1,728 to find the volume in cubic feet. The volume equation, shown here, is used to calculate volume in cubic feet when height, width and depth are measured in inches. Volume (Cu. Ft.) = Height (in.) X Width (in.) X Depth (in.) / 1728 Calculating Internal Enclosure Volume Calculating Volume SkillsYouNee First of all, you don't find volumes for triangle. Why? 
Because it's just a plane shape, a.k.a. a 2D (two-dimensional) shape, like squares and rectangles. You can, however, calculate the area of a triangle, which is (base of the triangle × height) / 2. Clicking on CALCULATE we get the answer of 1,329 in³.

4) 7 liters of a gas are at a temperature of 300 K. If the volume increases to 7.5 liters, what is the new temperature of the gas? The two values that pair up are 7 liters (V₁) and 300 kelvin (T₁).

The number is how you will reference a disk in a command, so if you want to find the GUID for disk 0, you need to select it with the command: select disk <number>. Once the disk has been selected, run this command to find its GUID: uniqueid disk. Volume GUID: the easiest way to find the GUID of a volume on your system is to go through PowerShell.

The volume of a solid \(U\) in Cartesian coordinates \(xyz\) is given by \[V = \iiint\limits_U {dx\,dy\,dz}.\] In cylindrical coordinates, the volume of a solid is defined by the formula \[V = \iiint\limits_U {\rho \,d\rho \,d\varphi \,dz}.\] In spherical coordinates, the volume of a solid is expressed as \[V = \iiint\limits_U {\rho^2 \sin\theta \,d\rho \,d\varphi \,d\theta}.\]

Go to Tools > Mass properties or click on the Mass properties icon. In the Mass properties box, you can find many material properties. The surface area of the squeezer is 40091 mm², as you can see in the orange rectangle. As you can see in the Mass Properties box in SolidWorks, the volume of our model is 169301 mm³.

For example, to calculate the volume of a rectangular prism (named because of its rectangular base), you have to find the area of the rectangle at its base, then multiply that area by the height of the shape. The same process applies for another simple 3-D shape, the cube.

The average daily trading volume represents an average number of stocks or other assets and securities traded in one single day or, more generally, an average number of stocks traded over a particular time frame. To calculate this you will need to know the number of shares traded over that time, for example, 20 days.

The volume of a pentagonal prism is the amount of space the shape takes up and is represented as V = (5/2) · l · w · h, i.e. volume = (5/2) × Length × Width × Height.

Challenge students to find the formula (or shortcut) for calculating volume. Have students find the volume of shapes where only the dimensions are provided, or shapes where they need to measure the dimensions themselves. Technology integration: Video: Volume Song For Kids ★ Measuring Video by NUMBEROCK.

Normally, when doing mathematical exercises, this information will be given to you so you can apply the formula to calculate the volume of a cube. The formula for calculating the volume of a cube is the length of its edge cubed: V = a³; thus, you must multiply the length of the side by itself 3 times.

Water volume calculation based on heat load capacity generally gives the minimum amount of water. The accuracy of this method depends on the accuracy and completeness of the information provided by the customer's technical team. It is strongly recommended that another method also be used to calculate the volume of water in the system.
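Returning to the gas-law exercise quoted above (7 L at 300 K expanding to 7.5 L at constant pressure), pairing V₁ with T₁ and solving Charles's law V₁/T₁ = V₂/T₂ for T₂ gives the answer directly. A minimal Python sketch:

def charles_law_t2(v1, t1_kelvin, v2):
    # V1 / T1 = V2 / T2  =>  T2 = T1 * V2 / V1 (temperatures must be in kelvin)
    return t1_kelvin * v2 / v1

print(round(charles_law_t2(7, 300, 7.5), 1))   # 321.4 K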
CommonCrawl
1History and cryptanalysis 2Security 2.1Overview of security issues 2.2Collision vulnerabilities 2.3Preimage vulnerability 4Algorithm 4.1Pseudocode 5MD5 hashes 6Implementations 9Further reading Message-digest hashing algorithm Ronald Rivest MD2, MD4, MD5, MD6 Cipher detail Digest sizes Block sizes Merkle–Damgård construction 4[1] Best public cryptanalysis A 2013 attack by Xie Tao, Fanbao Liu, and Dengguo Feng breaks MD5 collision resistance in 218 time. This attack runs in less than a second on a regular computer.[2] MD5 is prone to length extension attacks. The MD5 message-digest algorithm is a widely used hash function producing a 128-bit hash value. MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function MD4,[3] and was specified in 1992 as RFC 1321. MD5 can be used as a checksum to verify data integrity against unintentional corruption. Historically it was widely used as a cryptographic hash function; however it has been found to suffer from extensive vulnerabilities. It remains suitable for other non-cryptographic purposes, for example for determining the partition for a particular key in a partitioned database, and may be preferred due to lower computational requirements than more recent Secure Hash Algorithms.[4] History and cryptanalysis[edit] MD5 is one in a series of message digest algorithms designed by Professor Ronald Rivest of MIT (Rivest, 1992). When analytic work indicated that MD5's predecessor MD4 was likely to be insecure, Rivest designed MD5 in 1991 as a secure replacement. (Hans Dobbertin did indeed later find weaknesses in MD4.) In 1993, Den Boer and Bosselaers gave an early, although limited, result of finding a "pseudo-collision" of the MD5 compression function; that is, two different initialization vectors that produce an identical digest. In 1996, Dobbertin announced a collision of the compression function of MD5 (Dobbertin, 1996). While this was not an attack on the full MD5 hash function, it was close enough for cryptographers to recommend switching to a replacement, such as SHA-1 (also compromised) or RIPEMD-160. The size of the hash value (128 bits) is small enough to contemplate a birthday attack. MD5CRK was a distributed project started in March 2004 to demonstrate that MD5 is practically insecure by finding a collision using a birthday attack. MD5CRK ended shortly after 17 August 2004, when collisions for the full MD5 were announced by Xiaoyun Wang, Dengguo Feng, Xuejia Lai, and Hongbo Yu.[5][6] Their analytical attack was reported to take only one hour on an IBM p690 cluster.[7] On 1 March 2005, Arjen Lenstra, Xiaoyun Wang, and Benne de Weger demonstrated construction of two X.509 certificates with different public keys and the same MD5 hash value, a demonstrably practical collision.[8] The construction included private keys for both public keys. A few days later, Vlastimil Klima described an improved algorithm, able to construct MD5 collisions in a few hours on a single notebook computer.[9] On 18 March 2006, Klima published an algorithm that could find a collision within one minute on a single notebook computer, using a method he calls tunneling.[10] Various MD5-related RFC errata have been published. In 2009, the United States Cyber Command used an MD5 hash value of their mission statement as a part of their official emblem.[11] On 24 December 2010, Tao Xie and Dengguo Feng announced the first published single-block (512-bit) MD5 collision.[12] (Previous collision discoveries had relied on multi-block attacks.) 
For "security reasons", Xie and Feng did not disclose the new attack method. They issued a challenge to the cryptographic community, offering a US$10,000 reward to the first finder of a different 64-byte collision before 1 January 2013. Marc Stevens responded to the challenge and published colliding single-block messages as well as the construction algorithm and sources.[13] In 2011 an informational RFC 6151[14] was approved to update the security considerations in MD5[15] and HMAC-MD5.[16] Security[edit] One basic requirement of any cryptographic hash function is that it should be computationally infeasible to find two distinct messages that hash to the same value. MD5 fails this requirement catastrophically; such collisions can be found in seconds on an ordinary home computer. On 31 December 2008, the CMU Software Engineering Institute concluded that MD5 was essentially "cryptographically broken and unsuitable for further use".[17] The weaknesses of MD5 have been exploited in the field, most infamously by the Flame malware in 2012. As of 2019[update], MD5 continues to be widely used, despite its well-documented weaknesses and deprecation by security experts.[18] The security of the MD5 hash function is severely compromised. A collision attack exists that can find collisions within seconds on a computer with a 2.6 GHz Pentium 4 processor (complexity of 224.1).[19] Further, there is also a chosen-prefix collision attack that can produce a collision for two inputs with specified prefixes within seconds, using off-the-shelf computing hardware (complexity 239).[20] The ability to find collisions has been greatly aided by the use of off-the-shelf GPUs. On an NVIDIA GeForce 8400GS graphics processor, 16–18 million hashes per second can be computed. An NVIDIA GeForce 8800 Ultra can calculate more than 200 million hashes per second.[21] These hash and collision attacks have been demonstrated in the public in various situations, including colliding document files[22][23] and digital certificates.[24] As of 2015, MD5 was demonstrated to be still quite widely used, most notably by security research and antivirus companies.[25] As of 2019, one quarter of widely used content management systems were reported to still use MD5 for password hashing.[18] Overview of security issues[edit] In 1996, a flaw was found in the design of MD5. While it was not deemed a fatal weakness at the time, cryptographers began recommending the use of other algorithms, such as SHA-1, which has since been found to be vulnerable as well.[26] In 2004 it was shown that MD5 is not collision-resistant.[27] As such, MD5 is not suitable for applications like SSL certificates or digital signatures that rely on this property for digital security. Researchers additionally discovered more serious flaws in MD5, and described a feasible collision attack -- a method to create a pair of inputs for which MD5 produces identical checksums.[5][28] Further advances were made in breaking MD5 in 2005, 2006, and 2007.[29] In December 2008, a group of researchers used this technique to fake SSL certificate validity.[24][30] As of 2010, the CMU Software Engineering Institute considers MD5 "cryptographically broken and unsuitable for further use",[31] and most U.S. 
government applications now require the SHA-2 family of hash functions.[32] In 2012, the Flame malware exploited the weaknesses in MD5 to fake a Microsoft digital signature.[33] Collision vulnerabilities[edit] Further information: Collision attack In 1996, collisions were found in the compression function of MD5, and Hans Dobbertin wrote in the RSA Laboratories technical newsletter, "The presented attack does not yet threaten practical applications of MD5, but it comes rather close ... in the future MD5 should no longer be implemented ... where a collision-resistant hash function is required."[34] In 2005, researchers were able to create pairs of PostScript documents[35] and X.509 certificates[36] with the same hash. Later that year, MD5's designer Ron Rivest wrote that "md5 and sha1 are both clearly broken (in terms of collision-resistance)".[37] On 30 December 2008, a group of researchers announced at the 25th Chaos Communication Congress how they had used MD5 collisions to create an intermediate certificate authority certificate that appeared to be legitimate when checked by its MD5 hash.[24] The researchers used a PS3 cluster at the EPFL in Lausanne, Switzerland[38] to change a normal SSL certificate issued by RapidSSL into a working CA certificate for that issuer, which could then be used to create other certificates that would appear to be legitimate and issued by RapidSSL. VeriSign, the issuers of RapidSSL certificates, said they stopped issuing new certificates using MD5 as their checksum algorithm for RapidSSL once the vulnerability was announced.[39] Although Verisign declined to revoke existing certificates signed using MD5, their response was considered adequate by the authors of the exploit (Alexander Sotirov, Marc Stevens, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, and Benne de Weger).[24] Bruce Schneier wrote of the attack that "we already knew that MD5 is a broken hash function" and that "no one should be using MD5 anymore".[40] The SSL researchers wrote, "Our desired impact is that Certification Authorities will stop using MD5 in issuing new certificates. We also hope that use of MD5 in other applications will be reconsidered as well."[24] In 2012, according to Microsoft, the authors of the Flame malware used an MD5 collision to forge a Windows code-signing certificate.[33] MD5 uses the Merkle–Damgård construction, so if two prefixes with the same hash can be constructed, a common suffix can be added to both to make the collision more likely to be accepted as valid data by the application using it. Furthermore, current collision-finding techniques allow specifying an arbitrary prefix: an attacker can create two colliding files that both begin with the same content. All the attacker needs to generate two colliding files is a template file with a 128-byte block of data, aligned on a 64-byte boundary, that can be changed freely by the collision-finding algorithm. 
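A concrete pair of colliding 128-byte messages is quoted just below. As an illustration (not part of the original article), any standard MD5 implementation — Python's hashlib is used here — can confirm in a few lines that two distinct inputs hash to the same digest; the complete hex blocks printed below would be pasted in as the messages:

import hashlib

def md5_hex(data: bytes) -> str:
    # return the MD5 digest of `data` as 32 hexadecimal characters
    return hashlib.md5(data).hexdigest()

# To verify a colliding pair, decode each complete 128-byte block from its hex
# string (spaces removed) and compare digests, e.g. with placeholder names:
#   block_a = bytes.fromhex(hex_of_first_block)
#   block_b = bytes.fromhex(hex_of_second_block)
#   assert block_a != block_b
#   assert md5_hex(block_a) == md5_hex(block_b)

# Ordinary use, e.g. the 43-byte example given later in the article:
print(md5_hex(b"The quick brown fox jumps over the lazy dog"))
# -> 9e107d9d372bb6826bd81d3542a419d6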
An example MD5 collision, with the two messages differing in 6 bits, is: d131dd02c5e6eec4 693d9a0698aff95c 2fcab58712467eab 4004583eb8fb7f89 55ad340609f4b302 83e488832571415a 085125e8f7cdc99f d91dbdf280373c5b d8823e3156348f5b ae6dacd436c919c6 dd53e2b487da03fd 02396306d248cda0 e99f33420f577ee8 ce54b67080a80d1e c69821bcb6a88393 96f9652b6ff72a70 55ad340609f4b302 83e4888325f1415a 085125e8f7cdc99f d91dbd7280373c5b d8823e3156348f5b ae6dacd436c919c6 dd53e23487da03fd 02396306d248cda0 e99f33420f577ee8 ce54b67080280d1e c69821bcb6a88393 96f965ab6ff72a70 Both produce the MD5 hash 79054025255fb1a26e4bc422aef54eb4.[41] The difference between the two samples is that the leading bit in each nibble has been flipped. For example, the 20th byte (offset 0x13) in the top sample, 0x87, is 10000111 in binary. The leading bit in the byte (also the leading bit in the first nibble) is flipped to make 00000111, which is 0x07, as shown in the lower sample. Later it was also found to be possible to construct collisions between two files with separately chosen prefixes. This technique was used in the creation of the rogue CA certificate in 2008. A new variant of parallelized collision searching using MPI was proposed by Anton Kuznetsov in 2014, which allowed finding a collision in 11 hours on a computing cluster.[42] Preimage vulnerability[edit] In April 2009, an attack against MD5 was published that breaks MD5's preimage resistance. This attack is only theoretical, with a computational complexity of 2123.4 for full preimage.[43][44] Applications[edit] MD5 digests have been widely used in the software world to provide some assurance that a transferred file has arrived intact. For example, file servers often provide a pre-computed MD5 (known as md5sum) checksum for the files, so that a user can compare the checksum of the downloaded file to it. Most unix-based operating systems include MD5 sum utilities in their distribution packages; Windows users may use the included PowerShell function "Get-FileHash", install a Microsoft utility,[45][46] or use third-party applications. Android ROMs also use this type of checksum. As it is easy to generate MD5 collisions, it is possible for the person who created the file to create a second file with the same checksum, so this technique cannot protect against some forms of malicious tampering. In some cases, the checksum cannot be trusted (for example, if it was obtained over the same channel as the downloaded file), in which case MD5 can only provide error-checking functionality: it will recognize a corrupt or incomplete download, which becomes more likely when downloading larger files. Historically, MD5 has been used to store a one-way hash of a password, often with key stretching.[47][48] NIST does not include MD5 in their list of recommended hashes for password storage.[49] MD5 is also used in the field of electronic discovery, to provide a unique identifier for each document that is exchanged during the legal discovery process. This method can be used to replace the Bates stamp numbering system that has been used for decades during the exchange of paper documents. As above, this usage should be discouraged due to the ease of collision attacks. Algorithm[edit] Figure 1. One MD5 operation. MD5 consists of 64 of these operations, grouped in four rounds of 16 operations. F is a nonlinear function; one function is used in each round. Mi denotes a 32-bit block of the message input, and Ki denotes a 32-bit constant, different for each operation. 
<<<s denotes a left bit rotation by s places; s varies for each operation. ⊞ {\displaystyle \boxplus } denotes addition modulo 232. MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into chunks of 512-bit blocks (sixteen 32-bit words); the message is padded so that its length is divisible by 512. The padding works as follows: first, a single bit, 1, is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining bits are filled up with 64 bits representing the length of the original message, modulo 264. The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denoted A, B, C, and D. These are initialized to certain fixed constants. The main algorithm then uses each 512-bit message block in turn to modify the state. The processing of a message block consists of four similar stages, termed rounds; each round is composed of 16 similar operations based on a non-linear function F, modular addition, and left rotation. Figure 1 illustrates one operation within a round. There are four possible functions; a different one is used in each round: F ( B , C , D ) = ( B ∧ C ) ∨ ( ¬ B ∧ D ) G ( B , C , D ) = ( B ∧ D ) ∨ ( C ∧ ¬ D ) H ( B , C , D ) = B ⊕ C ⊕ D I ( B , C , D ) = C ⊕ ( B ∨ ¬ D ) {\displaystyle {\begin{aligned}F(B,C,D)&=(B\wedge {C})\vee (\neg {B}\wedge {D})\\G(B,C,D)&=(B\wedge {D})\vee (C\wedge \neg {D})\\H(B,C,D)&=B\oplus C\oplus D\\I(B,C,D)&=C\oplus (B\vee \neg {D})\end{aligned}}} ⊕ , ∧ , ∨ , ¬ {\displaystyle \oplus ,\wedge ,\vee ,\neg } denote the XOR, AND, OR and NOT operations respectively. Pseudocode[edit] The MD5 hash is calculated according to this algorithm.[50] All values are in little-endian. // : All variables are unsigned 32 bit and wrap modulo 2^32 when calculating var int s[64], K[64] var int i // s specifies the per-round shift amounts s[ 0..15] := { 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22 } s[16..31] := { 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20 } s[32..47] := { 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23 } // Use binary integer part of the sines of integers (Radians) as constants: for i from 0 to 63 do K[i] := floor(232 × abs (sin(i + 1))) end for // (Or just use the following precomputed table): K[ 0.. 3] := { 0xd76aa478, 0xe8c7b756, 0x242070db, 0xc1bdceee } K[ 4.. 
7] := { 0xf57c0faf, 0x4787c62a, 0xa8304613, 0xfd469501 } K[ 8..11] := { 0x698098d8, 0x8b44f7af, 0xffff5bb1, 0x895cd7be } K[12..15] := { 0x6b901122, 0xfd987193, 0xa679438e, 0x49b40821 } K[16..19] := { 0xf61e2562, 0xc040b340, 0x265e5a51, 0xe9b6c7aa } K[20..23] := { 0xd62f105d, 0x02441453, 0xd8a1e681, 0xe7d3fbc8 } K[24..27] := { 0x21e1cde6, 0xc33707d6, 0xf4d50d87, 0x455a14ed } K[28..31] := { 0xa9e3e905, 0xfcefa3f8, 0x676f02d9, 0x8d2a4c8a } K[32..35] := { 0xfffa3942, 0x8771f681, 0x6d9d6122, 0xfde5380c } K[36..39] := { 0xa4beea44, 0x4bdecfa9, 0xf6bb4b60, 0xbebfbc70 } K[40..43] := { 0x289b7ec6, 0xeaa127fa, 0xd4ef3085, 0x04881d05 } K[44..47] := { 0xd9d4d039, 0xe6db99e5, 0x1fa27cf8, 0xc4ac5665 } K[48..51] := { 0xf4292244, 0x432aff97, 0xab9423a7, 0xfc93a039 } K[52..55] := { 0x655b59c3, 0x8f0ccc92, 0xffeff47d, 0x85845dd1 } K[56..59] := { 0x6fa87e4f, 0xfe2ce6e0, 0xa3014314, 0x4e0811a1 } K[60..63] := { 0xf7537e82, 0xbd3af235, 0x2ad7d2bb, 0xeb86d391 } // Initialize variables: var int a0 := 0x67452301 // A var int b0 := 0xefcdab89 // B var int c0 := 0x98badcfe // C var int d0 := 0x10325476 // D // Pre-processing: adding a single 1 bit append "1" bit to message // Notice: the input bytes are considered as bits strings, // where the first bit is the most significant bit of the byte.[51] // Pre-processing: padding with zeros append "0" bit until message length in bits ≡ 448 (mod 512) // Notice: the two padding steps above are implemented in a simpler way // in implementations that only work with complete bytes: append 0x80 // and pad with 0x00 bytes so that the message length in bytes ≡ 56 (mod 64). append original length in bits mod 264 to message // Process the message in successive 512-bit chunks: for each 512-bit chunk of padded message do break chunk into sixteen 32-bit words M[j], 0 ≤ j ≤ 15 // Initialize hash value for this chunk: var int A := a0 var int B := b0 var int C := c0 var int D := d0 // Main loop: var int F, g F := (B and C) or ((not B) and D) g := i F := (D and B) or ((not D) and C) g := (5×i + 1) mod 16 F := B xor C xor D F := C xor (B or (not D)) g := (7×i) mod 16 // Be wary of the below definitions of a,b,c,d F := F + A + K[i] + M[g] // M[g] must be a 32-bits block A := D D := C C := B B := B + leftrotate(F, s[i]) // Add this chunk's hash to result so far: a0 := a0 + A b0 := b0 + B c0 := c0 + C d0 := d0 + D var char digest[16] := a0 append b0 append c0 append d0 // (Output is in little-endian) Instead of the formulation from the original RFC 1321 shown, the following may be used for improved efficiency (useful if assembly language is being used – otherwise, the compiler will generally optimize the above code. Since each computation is dependent on another in these formulations, this is often slower than the above method where the nand/and can be parallelised): ( 0 ≤ i ≤ 15): F := D xor (B and (C xor D)) (16 ≤ i ≤ 31): F := C xor (D and (B xor C)) MD5 hashes[edit] The 128-bit (16-byte) MD5 hashes (also termed message digests) are typically represented as a sequence of 32 hexadecimal digits. The following demonstrates a 43-byte ASCII input and the corresponding MD5 hash: MD5("The quick brown fox jumps over the lazy dog") = 9e107d9d372bb6826bd81d3542a419d6 Even a small change in the message will (with overwhelming probability) result in a mostly different hash, due to the avalanche effect. 
For example, adding a period to the end of the sentence: MD5("The quick brown fox jumps over the lazy dog.") = e4d909c290d0fb1ca068ffaddf22cbd0 The hash of the zero-length string is: MD5("") = The MD5 algorithm is specified for messages consisting of any number of bits; it is not limited to multiples of eight bits (octets, bytes). Some MD5 implementations such as md5sum might be limited to octets, or they might not support streaming for messages of an initially undetermined length. Implementations[edit] Below is a list of cryptography libraries that support MD5: Botan Bouncy Castle cryptlib Crypto++ Libgcrypt wolfSSL Comparison of cryptographic hash functions Hash function security summary HashClash MD5Crypt md5deep ^ Rivest, R. (April 1992). "Step 4. Process Message in 16-Word Blocks". The MD5 Message-Digest Algorithm. IETF. p. 5. sec. 3.4. doi:10.17487/RFC1321. RFC 1321. Retrieved 10 October 2018. ^ Xie Tao; Fanbao Liu; Dengguo Feng (2013). "Fast Collision Attack on MD5" (PDF). Cryptology ePrint Archive. ^ Ciampa, Mark (2009). CompTIA Security+ 2008 in depth. Australia; United States: Course Technology/Cengage Learning. p. 290. ISBN 978-1-59863-913-1. ^ Kleppmann, Martin (2 April 2017). Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems (1 ed.). O'Reilly Media. p. 203. ISBN 978-1449373320. ^ a b J. Black, M. Cochran, T. Highland: A Study of the MD5 Attacks: Insights and Improvements Archived 1 January 2015 at the Wayback Machine, 3 March 2006. Retrieved 27 July 2008. ^ Hawkes, Philip; Paddon, Michael; Rose, Gregory G. (13 October 2004). "Musings on the Wang et al. MD5 Collision". Cryptology ePrint Archive. Archived from the original on 5 November 2018. Retrieved 10 October 2018. ^ Bishop Fox (26 September 2013). "Fast MD5 and MD4 Collision Generators". BishopFox. Archived from the original on 26 April 2017. Retrieved 10 February 2014. ^ Lenstra, Arjen; Wang, Xiaoyun; Weger, Benne de (1 March 2005). "Colliding X.509 Certificates". Cryptology ePrint Archive. Retrieved 10 October 2018. ^ Klíma, Vlastimil (5 March 2005). "Finding MD5 Collisions – a Toy For a Notebook". Cryptology ePrint Archive. Retrieved 10 October 2018. ^ Vlastimil Klima: Tunnels in Hash Functions: MD5 Collisions Within a Minute, Cryptology ePrint Archive Report 2006/105, 18 March 2006, revised 17 April 2006. Retrieved 27 July 2008. ^ "Code Cracked! Cyber Command Logo Mystery Solved". USCYBERCOM. Wired News. 8 July 2010. Retrieved 29 July 2011. ^ Tao Xie; Dengguo Feng (2010). "Construct MD5 Collisions Using Just A Single Block Of Message" (PDF). Retrieved 28 July 2011. ^ "Marc Stevens – Research – Single-block collision attack on MD5". Marc-stevens.nl. 2012. Retrieved 10 April 2014. ^ Turner, Sean (March 2011). "RFC 6151 – Updated Security Considerations for the MD5 Message-Digest and the HMAC-MD5 Algorithms". Internet Engineering Task Force. doi:10.17487/RFC6151. Retrieved 11 November 2013. ^ Rivest, Ronald L. (April 1992). "RFC 1321 – The MD5 Message-Digest Algorithm". Internet Engineering Task Force. doi:10.17487/RFC1321. Retrieved 5 October 2013. ^ Krawczyk, Hugo; Bellare, Mihir; Canetti, Ran (February 1997). "RFC 2104 – HMAC: Keyed-Hashing for Message Authentication". Internet Engineering Task Force. doi:10.17487/RFC2104. Retrieved 5 October 2013. ^ Chad R, Dougherty (31 December 2008). "Vulnerability Note VU#836068 MD5 vulnerable to collision attacks". Vulnerability notes database. CERT Carnegie Mellon University Software Engineering Institute. 
Is M-theory just a M-yth?

I would like to know the scientific counterarguments to a statement of this type: "M-theory is a chimera, a non-existent holy grail more wished for than actually proven. Its conjecture is a PR stunt to cover up the fact that there is no unique string theory." In other words, what are the tangible arguments for the equivalence of all string-theory types under one mother theory, and for the mere existence of such a theory, if we do not assume that string theory in fact describes one unified theory of everything? The question also aims to clearly discern the difference between the mathematical and the physical "theory of everything" content in the various statements in the discussion. The points I feel should be addressed are:

There are various dualities between string theory types. But how does the mere fact of a formal mathematical duality prove that the theories are physically identical? There is a duality between the electric and magnetic intensity in the Maxwell vacuum equations; would that mean that the electric and magnetic fields are physically identical? What if a theory is self-dual, does that mean that a sector of the theory is redundant? (As it turns out, in electricity/magnetism the answer is "yes", but I do not see how this follows from the discrete duality.)

A number of the dualities, such as T-duality, require a compactified version of a theory to relate the theories. But what about the non-compactified versions, are they also recoverable from the mother theory? Many approaches state that the non-compactified versions are recovered from the compactified ones by taking the typical length $L$ of the compactification and sending $L \to \infty$. But the topological inequivalence is clear. Amongst other things, it is clear that for the non-compact theory, the $1/L$ in the dual theory should be substituted not by a "limit infinity" but by some kind of bare infinity (isolated, non-limit, true... whatever is to your terminological liking).

More specifically, the D0 branes in IIA and their excitations are interpreted as Kaluza-Klein towers from a higher dimension, and the "strong coupling limit of IIA" is basically the definition of M-theory (as far as I have read). But, apart from blind belief, what compels us to think it is necessary that IIA is a compactification of another theory? Rather obviously, if a strongly coupled theory looks like a compactified theory, the corresponding weak-field effective theory is probably going to look like one too. Hence, a starting point would be to find the corresponding theory which reduces to the weak-field limit of IIA under compactification. Such a theory exists: it is 11D supergravity. But a weak-field limit has a myriad of "strong completions", so finding this "weak anticompactification" does not prove anything.
Furthermore, the "going around two corners" approach (i.e. assuming weak-fieldization and compactification commutes) raises doubts, especially in the light of the fact that quantization and stability seem to need some very exotic handling in a 11D membrane theory. Is it possible that the weak-field limit of the conjectured M-theory is not 11D supergravity? compactification t-duality theory-of-everything asked May 30, 2015 in Theoretical Physics by Void (1,645 points) [ no revision ] LOL PR stunt. commented May 30, 2015 by Prathyush (705 points) [ revision history ] edited May 30, 2015 by Prathyush Where did you read this critic? commented May 30, 2015 by conformal_gk (3,625 points) [ no revision ] @conformal_gk The quote is a paraphrase of a few private conversations. I do not have a sharp opinion on the topic but I have admit that after all the hype around string theory and the social pressure this put on the string theorists to present the final "theory of everything", it is natural to ask whether the conjecture forces our expectations on the theory rather than letting the theory speak for itself. commented May 31, 2015 by Void (1,645 points) [ no revision ] 1)" But how does the mere fact of a formal mathematical duality prove that the theories are physically identical?" In the formulation of this question there are many strange redundancies. A theory is a mathematical structure. Two theories are the same if there exists a natural identification between the corresponding mathematical structures. A given theory is supposed to describe a given physics. By definition two identical theories describe the same physics and so are "physically identical". In some sense any mathematical result is formal so "formal mathematical duality" should be simply "mathematical duality" or even just "duality" because, as I recalled it, a theory is a mathematical structure and so a duality between theories is obviously "mathematical". About the Maxwell equations. The electromagnetic duality does not identify the electric and magnetic fields but says that which one is called "electric" or which one is called "magnetic" is arbitrary. Two physicists using both the Maxwell equations but with an opposite convention about what is electric and what is magnetic describe the same physics. The fact that a theory is self-dual does not mean that a sector is redundant. It just means that some labels attached to the theory are in some sense arbitrary. If you think to a theory as a mathematical structure, a self-duality of the theory is simply a non-trivial automorphism. We are not quotienting by this transformation. 2)I am not sure to understand this question. Yes, some dualities only make sense after some compactifications. If it is possible to decompactify both sides of the duality at the same time then we expect the duality to still make sense for the uncompactified theories (the objection about the "topological inequivalence" is in general resolved by some degree of locality of the theories). But in general the point of these dualities is that it is not possible to decompactify both sides at the same time, it is why there are non-trivial, and in this case the question does not make sense. 3)To define M-theory as the "strong coupling limit of IIA string theory" is doing nothing if we have nothing non-trivial to say about this limit. The non-trivial statement is that this theory is 11 dimensional and has 11 dimensional supergravity as its low-energy limit. The question is essentially where does this claim come from. 
The study of D0 branes indeed plays an important role here. The point is that type IIA string theory contains a bound state of $n$ $D0$-branes for every integer $n$, each one of mass proportional to $|n|/g$, where $g$ is the IIA coupling constant. In the limit $g \rightarrow \infty$, this infinite tower of states becomes massless. Do we know a mechanism to produce an infinite number of massless states? It is not known how to do such a thing in a local quantum field theory or in the known string theories. The only known mechanism to do that is a Kaluza-Klein compactification in the decompactification limit. In fact there are other independent arguments in favour of this 11 dimensional picture, for example the fact that low energy type IIA supergravity is the dimensional reduction of 11 dimensional supergravity and that, in this identification, the value of the dilaton is directly related to the radius of the 11 dimensional circle. Furthermore, the IIA/M relation is only one small piece of the full web of dualities describing the strong coupling dynamics of the various string theories. For each of them one can give non-trivial arguments, and what is remarkable is the consistency of the global picture.

4) The fact that "a weak-field theory limit has a myriad of completions" is just not correct in general. Even in standard quantum field theory, to find a UV completion of an IR effective field theory is already a non-trivial problem, which seems to have no solution in Lagrangian terms in spacetime dimension $d>4$ and requires non-abelian gauge fields in $d=4$. But it is true that once these restrictions are understood there can be many possible choices. This is certainly not what we know for theories containing gravity. To find a UV completion of an IR field theory containing gravity is just difficult, and before the proposal of the existence of M-theory the only known examples were string theories. For example, the only known quantum theory whose low energy limit is type IIA supergravity is type IIA string theory, and the fact that it is possible to UV complete type IIA supergravity is a kind of miracle from the field theory point of view: it seems that one needs string theory to do it. This is why many string theorists thought in the 80s that 11 dimensional supergravity was just an ordinary non-renormalizable field theory without a UV completion. It is the arguments given in 3) which suggest the existence of a UV completion of 11 dimensional supergravity, and we call this theory M-theory; it does not seem to be a local quantum field theory or to have a perturbative string theory description. If there were many known quantum gravity theories in 11 dimensions, we could ask which one is the strong coupling limit of type IIA string theory and so should be called M-theory. But that would be an embarrassment of riches, and we are not in this situation. Rather, the fact that we have arguments showing that the strong coupling limit of type IIA theory is 11 dimensional implies the non-trivial fact that there exists an 11 dimensional quantum gravity.

There is no expected problem of commutativity in the double limit of low energy/compactification, simply because the Kaluza-Klein modes are massive (because we are compactifying on a circle of given radius; things are different for more general compactifications on possibly singular spaces) and so do not affect the low energy dynamics. I don't know what "very exotic handling in an 11D membrane theory" is supposed to be needed.
As the membranes are massive, they do not enter the low energy discussion anyway. The fact that at low energies M-theory reduces to 11 dimensional supergravity is imposed by supersymmetry: 11 dimensional supergravity is the only field theory with at most two derivatives in the Lagrangian with $N=1$ $11d$ supersymmetry, just as type IIA supergravity is the only field theory with at most two derivatives in the Lagrangian with $N=(1,1)$ $10d$ supersymmetry.

answered May 30, 2015 by 40227 (5,140 points) [ no revision ] edited May 30, 2015 by 40227

Thanks, this is great. 1) I just realized the el-mag example is self-explanatory: the non-vacuum equations are actually not self-dual; they correspond to the interchange of magnetic monopoles and electric charges. I.e. the duality says that the theory of electromagnetism with charges is physically equivalent to the theory of electromagnetism with monopoles (which is true). 2) Do you perhaps have a nice reference with examples of how the topological inequivalence is handled? As I naively imagine it, one side of a T-duality may be decompactified while the other is ultracompactified, i.e. the extra dimension is completely eliminated. I do not understand how two theories with different dimensionality can be equivalent directly (i.e. not through the limiting process). 3) The question is: do we need a mechanism to generate an infinite number of massless states? They are, after all, generated by the IIA theory itself. Is there an inconsistency in the strong-coupling limit of IIA which needs to be resolved? Or is it just "elegance" or "convenience" which leads us to the conjecture of M-theory? 4) I do not have a reference, but from various conversations I have obtained the (maybe mistaken) idea that extended objects of higher dimension than strings bring with them pathologies upon quantization. "Nobody knows how to quantize membranes" (p. 10 here), hence I assumed the correspondence of the M-theory dynamics to IIA (aka "compactification") as well as the correspondence to a "usual" effective low energy quantum field theory might be very non-trivial. Again, my naive picture is that even in the direction M-theory $\to$ IIA there is an additional "effectivization" deforming the alternative quantization process. Not knowing the nature of this "effectivization" and its limit-commutation is my main concern. But the last paragraph of your post indeed strengthens the case for M-theory.

1) Exactly. 2) In T-duality, a theory 1 on a circle of radius R is equivalent to a theory 2 on a circle of radius 1/R. In that case it is not possible to decompactify both sides at the same time. In some sense the duality should still be true in the limit where R goes to zero, but this does not really make sense because we do not have good control of theory 1 on an extremely small circle. Rather, the point of the T-duality is to give a dual description of this limit which is under control. 3) In some sense this is similar to 2). One could just say that the theory with infinitely many states is just the strong coupling limit of type IIA. But the point is that we would like a "better" description of this limit. There is no real reason why such a dual description should exist, except probably the general philosophy that when a system becomes strongly coupled, we often have some emergent degrees of freedom which are themselves weakly coupled, so that it is better to base the theory on them.
It just happens that, in the case of the strong coupling limit of type IIA, the Kaluza-Klein description is a well-known mechanism which seems to give the expected limit. In fact, at the time of the discoveries of the various string dualities, some particular strong coupling limits appeared like nothing known before (for example, theories of tensionless strings). In such cases one just has to work harder to understand what happens. 4) It is true that there is no known quantum worldsheet description of the membranes, and I agree that this is certainly a gap in our understanding of M-theory. But it is irrelevant for the low energy limit question, because the membranes do not appear in the discussion anyway. If I want to compute the low energy limit of some compactification of a string theory on a smooth compact space, the fact that I have a good worldsheet description of the string does not help; it just does not enter the discussion.

commented May 31, 2015 by 40227 (5,140 points) [ revision history ]
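To make the Kaluza-Klein matching invoked in the accepted answer concrete, here is the standard relation, stated with the usual type IIA conventions (string length $\ell_s$ and coupling $g$; this is a summary in standard notation, not text from the original thread). A bound state of $n$ D0-branes has mass $M_n \simeq |n|/(g\,\ell_s)$, while a Kaluza-Klein tower on a circle of radius $R_{11}$ has masses $M_n = |n|/R_{11}$. Matching the two towers gives
$$ R_{11} \simeq g\,\ell_s , $$
so the radius of the hidden eleventh dimension grows with the coupling, and the strong coupling limit $g \to \infty$ is a decompactification to eleven dimensions, whose low-energy limit is then fixed by supersymmetry to be 11d supergravity.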
Problem 4041 - Pro-Test Voting

Old Bob Test is currently running for Mayor in the Hamlet of Kerning. Kerning is divided up into a number of precincts (numbered 0, 1, 2, ...), and after extensive polling by his crack staff, Bob knows the current percentage of voters in each precinct who plan to vote for him. Needless to say, he would like to increase these percentages in all precincts, but he has limited funds to spend. Based on past results, the effects of spending on any precinct obey the following equation: \[F_p = I_p + \left(\frac{M}{10.1 + M}\right) \Delta \] where \(I_p\) is the current percentage of pro-Test voters, \( \Delta \) is the maximum increase in this percentage possible, \(M\) is the amount of money spent in the precinct, in integer multiples of \$1, and \(F_p\) is the final expected percentage. What Bob needs to know is the best way to spend his money to maximize the number of votes he can get.

Input: The first line of each test case contains two integers m and n, representing the amount of money Bob has to spend (in dollars) and the number of precincts. The maximum value for both of these is 100. After this will be n lines of the form N \(I_p\) \(\Delta\), all positive integers, which contain information on each precinct: N is the population of the precinct and \(I_p\) and \(\Delta\) are as described above. The value of N will be less than 10000. The first of these lines refers to precinct 0, the next to precinct 1, and so on. A line containing 0 0 follows the last test case. NOTE: When calculating the number of pro-Test voters in a precinct, you should first perform a double-precision calculation of \(F_p\) using the formula above, then multiply this percentage by the population N and round to get the final result.

Output: Output for each test case should consist of two lines. The first should contain the case number followed by the maximum number of votes Bob can obtain through optimum spending. The second line should list each precinct and the amount of money which Bob should spend there. The format for each precinct should be precinctnum:money, and each such pair should be separated by 1 blank. In the case where there is more than one way to spend Bob's money that yields the maximum number of votes, give the one that spends the most on precinct 0. If there is more than one with the same amount spent on precinct 0, take the one that spends the most on precinct 1, etc.

Example output:
Case 1: 3095
0:42 1:24 2:34

Source: ACM-ICPC Regionals, North America, 2010 East Central North America Regional Contest, Problem F
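A minimal solution sketch for the problem above (not a verified reference solution): it evaluates the vote formula in double precision and runs an O(n·m²) dynamic program over spending, reconstructing the allocation greedily from precinct 0 onward to honor the tie-breaking rule. The rounding convention (int(x + 0.5)), the made-up example data, and the omission of input parsing are assumptions of this sketch.

```python
def votes(pop, ip, delta, money):
    # Expected pro-Test votes in one precinct after spending `money` dollars,
    # using F_p = I_p + (M / (10.1 + M)) * Delta, then rounding half up.
    fp = ip + (money / (10.1 + money)) * delta
    return int(fp / 100.0 * pop + 0.5)

def best_spending(m, precincts):
    # precincts: list of (population, I_p, Delta); m: total dollars available.
    n = len(precincts)
    # best[i][k] = max votes obtainable from precincts i..n-1 with k dollars left
    best = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        pop, ip, delta = precincts[i]
        for k in range(m + 1):
            best[i][k] = max(votes(pop, ip, delta, s) + best[i + 1][k - s]
                             for s in range(k + 1))
    # Reconstruct, preferring the largest spend on the earliest precinct
    # whenever it still achieves the optimum (the stated tie-breaking rule).
    alloc, k = [], m
    for i in range(n):
        pop, ip, delta = precincts[i]
        for s in range(k, -1, -1):
            if votes(pop, ip, delta, s) + best[i + 1][k - s] == best[i][k]:
                alloc.append(s)
                k -= s
                break
    return best[0][m], alloc

# Hypothetical usage with made-up data (not the judge's hidden input):
total, spend = best_spending(100, [(5000, 40, 30), (3000, 50, 20), (4000, 45, 25)])
print("Case 1:", total)
print(" ".join(f"{i}:{s}" for i, s in enumerate(spend)))
```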
Assessing the impact of antiretroviral therapy on tuberculosis notification rates among people with HIV: a descriptive analysis of 23 countries in sub-Saharan Africa, 2010–2015

Diya Surie1 (ORCID: 0000-0003-3931-1423), Martien W. Borgdorff2, Kevin P. Cain3, Eleanor S. Click1, Kevin M. DeCock4 & Courtney M. Yuen5,6

HIV is a major driver of the tuberculosis epidemic in sub-Saharan Africa. The population-level impact of antiretroviral therapy (ART) scale-up on tuberculosis rates in this region has not been well studied. We conducted a descriptive analysis to examine evidence of a population-level effect of ART on tuberculosis by comparing trends in estimated tuberculosis notification rates, by HIV status, for countries in sub-Saharan Africa.

We estimated annual tuberculosis notification rates, stratified by HIV status, during 2010–2015 using data from WHO, the Joint United Nations Programme on HIV/AIDS, and the United Nations Population Division. Countries were included in this analysis if they had ≥4 years of HIV prevalence estimates and ≥75% of tuberculosis patients with known HIV status. We compared tuberculosis notification rates among people living with HIV (PLHIV) and people without HIV via the Wilcoxon rank sum test.

Among 23 included countries, the median annual average change in tuberculosis notification rates among PLHIV during 2010–2015 was -5.7% (IQR -6.9 to -1.7%), compared to a median change of -2.3% (IQR -4.2 to -0.1%) among people without HIV (p-value = 0.0099). Among 11 countries with higher ART coverage, the median annual average change in TB notification rates among PLHIV was -6.8% (IQR -7.6 to -5.7%) compared to a median change of -2.1% (IQR -6.0 to 0.7%) for PLHIV in 12 countries with lower ART coverage (p = 0.0106).

Tuberculosis notification rates declined more among PLHIV than among people without HIV, and have declined more in countries with higher ART coverage. These results are consistent with a population-level effect of ART on decreasing TB incidence among PLHIV. To further reduce TB incidence among PLHIV, additional scale-up of ART as well as greater use of isoniazid preventive therapy and active case-finding will be necessary.
Given its proven individual-level effect, one would expect that the expansion of ART coverage would lead to a population-level effect of declining TB incidence among PLHIV. Indeed, in Kenya, South Africa, and Malawi, TB notification rates among people with HIV are estimated to have declined substantially more than TB notification rates among people without HIV, concurrent with the expansion of ART coverage [15,16,17,18]. However, the extent to which this is true across the sub-Saharan African region is unclear. To examine whether there is evidence of a population-level effect of ART on TB across sub-Saharan Africa, we estimated TB case notification rates stratified by HIV status for countries in the WHO African region. We then sought to compare trends in case notification rates among PLHIV and people without HIV, assessing both in the context of changing ART coverage.

We estimated TB case notification rates stratified by HIV status using several existing, publicly available data sources. We obtained the number of total notified TB cases in each country, the number with HIV test results, and the number with positive HIV test results, by year, from WHO [19]. We obtained HIV prevalence estimates among adults aged 15–49 years, by year, from the Joint United Nations Programme on HIV/AIDS (UNAIDS) [20], and population estimates among adults aged 15–49 years, by year, from the United Nations Population Division (UNPD) [21]. To describe trends in ART coverage, we obtained ART coverage estimates, by year, from UNAIDS [22]. Coverage is defined as the percentage of all adults and children living with HIV who are currently receiving ART [23]. At the time of publication, ART coverage estimates from UNAIDS were available from 2010 onwards, and population estimates from UNPD were available through 2015; the period of our analysis was therefore from 2010 through 2015.

All 47 countries in the WHO African region were considered for this analysis. We considered data quality for each country and each year to determine inclusion. A year of data was considered to be of adequate quality if an HIV prevalence estimate was available and if ≥75% of notified TB cases had been tested for HIV. Any country with <4 years of data meeting these criteria was excluded from analysis. After estimating HIV-stratified TB notification rates, we excluded countries that showed greater than 50% year-to-year variation in the estimated notification rates, suggesting either data quality issues or too few patients with TB to make rate calculations meaningful.

Estimating TB notification rates stratified by HIV status

To estimate TB notifications among PLHIV, we estimated a numerator based on TB notifications and a denominator based on population-level HIV-prevalence estimates (Fig. 1a). For the numerator, we estimated the annual number of notified TB cases in PLHIV in a country in a given year by multiplying the number of total notified TB cases in the country by the proportion of TB cases with an HIV-positive test result. We therefore assumed that TB cases who did not receive an HIV test had an equal likelihood of being HIV-positive as those who did; we believe this assumption to be reasonable for the countries included in our analysis, given that routine HIV testing for TB patients has been recommended since 2004 [24] and that we only included countries where ≥75% of TB patients had been tested. We used aggregate TB case notifications instead of age-stratified TB case notifications due to inconsistencies in the age-stratified data.
For the denominator, we calculated the number of PLHIV in each country by multiplying UNAIDS HIV prevalence estimates among adults aged 15–49 years by the UNPD population estimate for this age group. We restricted this population denominator to adults aged 15–49 years, as the HIV prevalence estimates are more robust for this age group than for children or older adults. To address the mismatch between the age groups in the numerator and denominator, we multiplied the numerator by the median average proportion of notified TB cases that were aged 15–49 years old among 19 countries in which the sum of all reported TB cases with a known age was ≥90% of all reported TB during 2010–2015 (median: 72%, interquartile range [IQR]: 69–75%).

[Fig. 1a, b: Calculation of estimated annual TB notification rates for (a) people living with HIV and (b) people without HIV]

A similar process was used to estimate TB notification rates among people without HIV (Fig. 1b). We subtracted the calculated TB/HIV cases from total notified TB cases to obtain the numerator, multiplying by 72% to correct for the mismatch in age groups between numerator and denominator. We subtracted the population of adults 15–49 years old with HIV from the total population in this age group to form the denominator.

Trends in TB notification rates

To describe trends in TB notification rates during 2010–2015, we calculated for each country the average annual percent change in TB notification rate among PLHIV and the average annual percent change in TB notification rate among people without HIV. These calculations were based on the corresponding 2010 and 2015 notification rates and assumed a constant annual percent change during this period. That is:

$$ N = M \times (1 + A)^{4} $$

where N is the case notification rate in 2015, M is the case notification rate in 2010, and A is the average annual percent change in the case notification rate. For countries that lacked data for either 2010 or 2015, we performed an analogous calculation using notification rates for the terminal years of the date range. Because of the potential error introduced by the assumptions we made in estimating TB case notification rates stratified by HIV status, we focused our analysis on assessment of trends over time, reasoning that even if the estimates for a country were inaccurate in a systematic way, the relationship between estimates in different years would remain robust.

We used the Wilcoxon rank sum test with exact p-values to compare whether average annual changes in TB notification rates among PLHIV were significantly different from average annual changes in TB notification rates among people without HIV across all countries. To determine whether declines in TB notification rates among PLHIV were greater in countries with higher ART coverage, we compared the countries with greater-than-average ART coverage to those with lower-than-average ART coverage using the Wilcoxon rank sum test. The same comparison was made for declines in TB notification rates among people without HIV. Data were analyzed using SAS version 9.3 (SAS Institute, Cary, NC).

Countries included in analysis

Of 47 countries in the WHO African region, 23 (49%) met inclusion criteria (Figs. 2 and 3), with 156 individual years of data included across the 23 included countries. In 2010, the median HIV prevalence in these countries was 5.6% (IQR 1.9–14.2%) and the median WHO estimated TB incidence was 219 cases per 100,000 population (IQR 133–633 cases per 100,000 population; Table 1). In 2010, the median ART coverage was 26% (IQR 16–33%).
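The rate estimation and trend calculations described in the Methods above can be sketched as follows. This is an illustrative sketch only: the function and variable names, the example figures, and the use of SciPy's Mann-Whitney U implementation (equivalent to the Wilcoxon rank sum test, with an exact method for small samples) are assumptions of this sketch rather than the authors' SAS code, and the exponent of 4 follows the formula as printed.

```python
from scipy.stats import mannwhitneyu  # Mann-Whitney U == Wilcoxon rank-sum test

AGE_CORRECTION = 0.72  # median proportion of notified TB cases aged 15-49 years

def tb_rates_by_hiv_status(tb_notified, frac_hiv_positive, hiv_prev_15_49, pop_15_49):
    """Estimated TB notification rates per 100,000 among PLHIV and people without HIV."""
    tb_plhiv = tb_notified * frac_hiv_positive * AGE_CORRECTION          # numerator, PLHIV
    tb_no_hiv = tb_notified * (1 - frac_hiv_positive) * AGE_CORRECTION   # numerator, HIV-negative
    plhiv = hiv_prev_15_49 * pop_15_49                                   # denominator, PLHIV
    no_hiv = pop_15_49 - plhiv                                           # denominator, HIV-negative
    return 1e5 * tb_plhiv / plhiv, 1e5 * tb_no_hiv / no_hiv

def avg_annual_change(rate_2010, rate_2015, n_intervals=4):
    """Average annual percent change A solving N = M * (1 + A)^n_intervals."""
    return (rate_2015 / rate_2010) ** (1.0 / n_intervals) - 1.0

# Hypothetical per-country annual changes (made-up numbers, not the study data):
changes_plhiv = [-0.057, -0.069, -0.017, -0.076, -0.068]
changes_no_hiv = [-0.023, -0.042, -0.001, -0.021, -0.039]
print(mannwhitneyu(changes_plhiv, changes_no_hiv, alternative="two-sided", method="exact"))
```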
From 2010 to 2015, the median absolute increase in ART coverage, defined as the difference between ART coverage in 2015 and ART coverage in 2010, was 25% (IQR 16–31%). Across all 156 years of included data, the median HIV testing coverage among TB cases was 96% (IQR 90–99%).

[Fig. 2: Flowchart depicting selection of countries for this analysis]

[Fig. 3: Countries included in analysis — sub-Saharan Africa, 2010–2015 (n = 23)]

[Table 1: TB and ART trends: comparative summary statistics — 23 countries, sub-Saharan Africa, 2010–2015]

Trends in TB notifications

Among 19 countries with sufficient data to make HIV-stratified TB case notification estimates for 2010, the median estimated TB case notification rate among PLHIV (rounded to the nearest 10) was 1420 cases per 100,000 population (IQR 910–2410 cases per 100,000 population, Fig. 4a). In 2015, all 23 countries had sufficient data, and the median estimated TB notification rate among PLHIV was 1250 cases per 100,000 population (IQR 780–1580 cases per 100,000 population). By contrast, the median estimated TB case notification rate among people without HIV in 2010 was 130 cases per 100,000 population (IQR 70–220 cases per 100,000 population, Fig. 4b). In 2015, it was 120 cases per 100,000 population (IQR 60–180 cases per 100,000 population).

[Fig. 4a, b: Trends in estimated TB notification rates among (a) PLHIV and (b) people without HIV by region in Africa — 23 countries, sub-Saharan Africa, 2010–2015]

Among all countries in the analysis, the median annual average change in estimated TB notification rates among PLHIV was -5.7% (IQR -6.9 to -1.7%; range -10.9 to +3.3%) compared to a median change of -2.3% (IQR -4.2 to -0.1%; range -7.4 to +9.3) for people without HIV (Table 1). This difference was statistically significant (p = 0.0099). TB notification rates declined more among PLHIV than among people without HIV in 16 (70%) countries; in 1 (4%) country, TB notification rates increased in both populations, but the increase was smaller for PLHIV than for people without HIV. In 6 (26%) countries, TB notification rates among PLHIV either decreased less or increased more than TB notification rates among people without HIV; 5 (83%) of these 6 countries were located in western Africa.

Among 11 countries whose average ART coverage during the analytic period was greater than the median of 36%, the median annual average change in TB notification rates among PLHIV was -6.8% (IQR -7.6 to -5.7%) compared to a median change of -2.1% (IQR -6.0 to 0.7%) for PLHIV in 12 countries whose average ART coverage was less than or equal to the median. This difference was statistically significant (p = 0.0106). By contrast, the median annual average change in TB notification rates among people without HIV in the 11 countries with higher ART coverage was -3.9% (IQR -4.7 to -0.1%) compared to a median change of -1.8% (IQR -3.3 to 0.9) for people without HIV in the 12 countries with lower ART coverage. This difference was not statistically significant (p = 0.1693).

Based on estimated HIV-stratified TB case notification rates in 23 sub-Saharan African countries, TB notification rates among PLHIV have declined more than among people without HIV, concurrently with the expansion of ART. Additionally, TB notification rates among PLHIV declined more in countries with higher ART coverage. While this analysis was descriptive and could not explore causality, these results are consistent with a population-level effect of ART on decreasing TB incidence among PLHIV.
To our knowledge, this analysis is the first attempt to broadly examine the relationship between ART coverage and TB notification rates among PLHIV across sub-Saharan Africa. Reports from a few countries in eastern and southern Africa [15,16,17] had previously shown greater declines in TB notification rates among PLHIV than among people without HIV in the era of ART scale-up. Our results suggest this to be the case in other eastern and southern African countries as well. However, in most of the western African countries in our analysis, TB notification rates among PLHIV decreased less than among people without HIV, or actually increased. One possible contributor to this worrisome trend is the low ART coverage in several western African countries; there may be a threshold of ART coverage that is required before population-level declines in TB are observed. Another possible contributor is the fact that western African countries tend to have concentrated rather than generalized HIV epidemics, which leads to greater uncertainty in their adult HIV prevalence estimates. Furthermore, the marginalization of key populations with HIV in these countries may exacerbate their susceptibility to TB despite the availability of ART. Thus, without more information about the overlap of TB and HIV epidemiology in these settings, estimates based only on population-level data are more difficult to interpret. Nonetheless, improving the accessibility and acceptability of HIV testing and ART to key populations in these countries may be critical to decreasing TB among PLHIV. As countries move toward implementing the "Test and Start" policy [26] to treat all PLHIV with ART regardless of CD4 cell count, monitoring the impact of ART scale-up on TB trends will become increasingly important. The ability to do so depends on being able to stratify TB trends by HIV status. As our study demonstrates, it is possible to make crude estimates based on the types of data that are currently available. However, only half the countries in the region had sufficient data of adequate quality, with insufficient HIV testing coverage among TB patients being the most common reason for excluding countries from our analysis. Prioritizing HIV testing for TB patients is thus important not only for ensuring appropriate clinical management, but also for monitoring trends in TB incidence among PLHIV. Although ART is expected to play a critical role in reducing TB incidence among PLHIV, it is not the only important factor. For example, isoniazid preventive therapy (IPT) has been shown to reduce the development of TB, independent of ART, and is increasingly considered an integral component of routine care for PLHIV [27,28,29]. As scale-up of IPT also occurs across sub-Saharan Africa, monitoring the population-level impact of both ART and IPT on reducing TB incidence will be needed. Furthermore, focusing interventions only on PLHIV will not be sufficient to halt the incidence of TB among PLHIV. People without HIV tend to be a reservoir for TB transmission to PLHIV [1, 6, 30,31,32], so investment in general TB elimination strategies, such as active case-finding to find and treat all cases, is crucial to reducing TB incidence among PLHIV. Finally, to better understand the contribution of different factors to reducing TB incidence among PLHIV, research is needed that goes beyond analyzing population-level indicators.
For instance, long-term cohort studies of PLHIV can help quantify the impact of IPT and ART delivered in programmatic settings, while molecular epidemiology studies can provide insight into transmission of TB from people without HIV to PLHIV. Our study was subject to limitations that affect the conclusions that can be drawn from our results. Inherent in a descriptive analysis is the limitation that causal inferences cannot be made. Therefore, although TB incidence declined more among PLHIV than people without HIV, and although the declines in TB incidence were greater among PLHIV in countries with the highest ART coverage, we cannot claim that ART caused the declines. Other factors such as improved nutrition or housing conditions, as well as improved TB programs or health system improvements that occurred in the process of building stronger programs to deliver ART, may have affected TB incidence. It is also possible that increasing IPT coverage could have contributed to the greater declines in TB notification rates observed among PLHIV compared to people without HIV. However, we were unable to assess the potential impact of IPT, or even describe its scale-up, as notification data for the number of people receiving IPT were completely missing for a third of the countries in this analysis, and missing for two thirds of countries for the early years of the analytic period [19]. Our study was also subject to limitations related to the data that were available for analysis. Because we were limited to publicly available country-level data, adequate data were unavailable for over half of the countries in the WHO African region. As a result, our analysis was relatively complete in its coverage of eastern and southern African countries, but highly incomplete for the countries of western and central Africa. This limitation highlights the need to strengthen data in major public domains if we are to assess the impact of recent policy changes moving forward. Finally, as with all analyses based on TB case notifications, we were unable to account for potential changes in case detection rates over time, or differences in case detection between PLHIV and people without HIV. While we do not know for sure how case detection rates have changed over time, it is likely that improvements in national reporting systems over time have led to increases in the likelihood of people with TB being notified to WHO; therefore, our analysis could underestimate the declines in case notification rates that have occurred. In addition, TB case detection among PLHIV and people without HIV may differ, but the direction of this difference is unknown. For example, given that HIV is a stigmatized disease and the majority of new HIV cases tend to present with advanced immunosuppression [33], delays in TB diagnosis (or missed TB diagnoses altogether) occur more frequently than for people without HIV. By contrast, PLHIV who are in care are routinely screened for TB, while people without HIV are generally not, so case detection among PLHIV may be higher than among people without HIV in some settings. Thus, the quantitative comparison between the case notification rates we estimate for PLHIV and people without HIV must be interpreted with these limitations in mind. This analysis suggests encouraging trends that TB notification rates have declined more among PLHIV than among people without HIV from 2010 to 2015. We believe that the expansion of ART has likely contributed to this decline. 
To further reduce TB incidence among PLHIV, additional scale-up of ART, as well as active case-finding in the general population will be necessary. To monitor the impact of these activities, it will be important to collect data on each intervention as well as routinely assess TB case notification rates stratified by HIV status. And finally, to better understand the factors contributing to changes in TB epidemiology among PLHIV, long-term cohort studies will be important to help interpret trends in programmatic data. IPT: Isoniazid preventive therapy PLHIV: UNAIDS: Joint United Nations Programme on HIV/AIDS UNDP: Selwyn PA, Hartel D, Lewis VA, Schoenbaum EE, Vermund SH, Klein RS, et al. A prospective study of the risk of tuberculosis among intravenous drug users with human immunodeficiency virus infection. N Engl J Med. 1989;320:545–50. https://doi.org/10.1056/NEJM198903023200901. Horsburgh CRJ. Priorities for the treatment of latent tuberculosis infection in the United States. N Engl J Med. 2004;350:2060–7. https://doi.org/10.1056/NEJMsa031667. Centers for Disease Control and Prevention. Guidelines for prevention and treatment of opportunistic infections in HIV-infected adults and adolescents: recommendations from CDC, the National Institutes of Health, and the HIV medicine Association of the Infectious Diseases Society of America. MMWR. 2009;58(RR-04):1–198. Sonnenberg P, Glynn JR, Fielding K, Murray J, Godfrey-Faussett P, Shearer S. How soon after infection with HIV does the risk of tuberculosis start to increase? A retrospective cohort study in south African gold miners. J Infect Dis. 2004;191:150–8. Epub Dec 13 2004. https://doi.org/10.1086/426827. Centers for Disease Control and Prevention. Tuberculosis morbidity — United States, 1992. MMWR. 1993;42:696–7 703–4. Kwan CK, Ernst JD. HIV and tuberculosis: a deadly human syndemic. Clin Microbiol Rev. 2011;24:351–76. https://doi.org/10.1128/cmr.00042-10. Corbett EL, Watt CJ, Walker N, Maher D, Williams BG, Raviglione MC, et al. The growing burden of tuberculosis: global trends and interactions with the HIV epidemic. Arch Intern Med. 2003;163:1009–21. https://doi.org/10.1001/archinte.163.9.1009. Lawn SD, Zumla AI. Tuberculosis. Lancet. 2011;378:57–72. https://doi.org/10.1016/%20S0140-6736(10)62173-3. World Health Organization. Global tuberculosis report 2017. Geneva: World Health Organization; 2017. Available from: http://www.who.int/tb/publications/2017/en/. [cited 2017 Jan 28] Joint United Nations Programme on HIV/AIDS. Together we will end AIDS. Geneva: Joint United Nations Programme on HIV/AIDS; 2012. Available from: http://files.unaids.org/en/media/unaids/contentassets/documents/epidemiology/2012/JC2296_UNAIDS_TogetherReport_2012_en.pdf. [cited 2016 Sep 12] Suthar AB, Lawn SD, del Amo J, Getahun H, Dye C, Sculier D, et al. Antiretroviral therapy for prevention of tuberculosis in adults with HIV: a systematic review and meta-analysis. PLoS Med. 2012;9(7):e1001270. https://doi.org/10.1371/journal.pmed.1001270. Badri M, Wilson D, Wood R. Effect of highly active antiretroviral therapy on incidence of tuberculosis in South Africa: a cohort study. Lancet. 2002;359:2059–64. Lawn SD, Kranzer K, Wood R. Antiretroviral therapy for control of the HIV-associated tuberculosis epidemic in resource-limited settings. Clin Chest Med. 2009;30:685–99, viii. https://doi.org/10.1016/j.ccm.2009.08.010. Lawn SD, Wood R, De Cock KM, Kranzer K, Lewis JJ, Churchyard GJ. 
Antiretrovirals and isoniazid preventive therapy in the prevention of HIV-associated tuberculosis in settings with limited health-care resources. Lancet Infect Dis. 2010;10:489–98. https://doi.org/10.1016/S1473-3099(10)70078-5. Yuen CM, Weyenga HO, Kim AA, Malika T, Muttai H, Katana A, et al. Comparison of trends in tuberculosis incidence among adults living with HIV and adults without HIV — Kenya, 1998–2012. PLoS One. 2014;9(6). https://doi.org/10.1371/journal.pone.0099880. Hermans S, Boulle A, Caldwell J, Pienaar D, Wood R. Temporal trends in TB notification rates during ART scale-up in Cape Town: an ecological analysis. J Int AIDS Soc. 2015;18:20240. https://doi.org/10.7448/IAS.18.1.20240. Middelkoop K, Bekker LG, Myer L, Johnson LF, Kloos M, Morrow C, et al. Antiretroviral therapy and TB notification rates in a high HIV prevalence south African community. J Acquir Immun Defic Syndr. 2011;56:263–9. https://doi.org/10.1097/QAI.0b013e31820413b3. Kanyerere H, Girma B, Mpunga J, Tayler-Smith K, Harries AD, Jahn A. Scale-up of ART in Malawi has reduced case notification rates in HIV-positive and HIV-negative tuberculosis. Public Health Action. 2016;6:247–51. https://doi.org/10.5588/pha.16.0053. World Health Organization. Global tuberculosis report database. Available from: http://www.who.int/tb/country/data/download/en/. Accessed 23 Dec 2017 Joint United Nations Programme on HIV/AIDS. HIV estimates with uncertainty bounds 1990–2016. Available from: http://www.unaids.org/en/resources/documents/2017/HIV_estimates_with_uncertainty_bounds_1990-2016. Accessed 23 Dec 2017. United Nations Population Division. World population prospects, the 2015 revision. Available from: https://esa.un.org/unpd/wpp/Download/Standard/Population/. Accessed 23 Dec 2017. Joint United Nations Programme on HIV/AIDS. Coverage of people receiving ART. Available from: http://aidsinfo.unaids.org/#. Accessed 23 Dec 2017. Joint United Nations Programme on HIV/AIDS. Global AIDS Response Progress Reporting 2016. Geneva: Joint United Nations Programme on HIV/AIDS; 2016. Available from: https://aidsreportingtool.unaids.org/static/docs/GARPR_Guidelines_2016_EN.pdf. Accessed 22 Sep 2016. World Health Organization. Interim Policy on Collaborative TB/HIV Activities. Geneva: World Health Organization; 2004. Available from: http://www.who.int/hiv/pub/tb/tbhiv/en/. Accessed 2 Feb 2017. Papworth E, Geesay N, An L, Thiam-Niangoin M, Ky-Zerbo O, Holland C, et al. Epidemiology of HIV among female sex workers, their clients, men who have sex with men and people who inject drugs in west and Central Africa. J Int AIDS Soc. 2013;16(Suppl 3):18751. https://doi.org/10.7448/IAS.16.4.18751. World Health Organization. Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection. Recommendations for a public health approach – second edition. Geneva: World Health Organization; 2016. Available from: http://www.who.int/hiv/pub/arv/arv-2016/en/. [cited 2016 Sep 12] Ayele HT, Mourik MS, Debray TP, Bonten MJ. Isoniazid prophylactic therapy for the prevention of tuberculosis in HIV infected adults: a systematic review and meta-analysis of randomized trials. PLoS One. 2015;10(11):e0142290. Golub JE, Saraceni V, Cavalcante SC, Pacheco AG, Moulton LH, King BS, et al. The impact of antiretroviral therapy and isoniazid preventive therapy on tuberculosis incidence in HIV-infected patients in Rio de Janeiro, Brazil. AIDS. 2007;21:1441–8. Group TAS, Danel C, Moh R, et al. 
A trial of early Antiretrovirals and isoniazid preventive therapy in Africa. N Engl J Med. 2015;373:808–22. https://doi.org/10.1056/NEJMoa1507198. Corbett EL, Charalambous S, Moloi VM, Fielding K, Grant AD, Dye C, et al. Human immunodeficiency virus and the prevalence of undiagnosed tuberculosis in African gold miners. Am J Respir Crit Care Med. 2004;170:673–9. https://doi.org/10.1164/rccm.200405-590OC. Corbett EL, Charalambous S, Fielding K, Clayton T, Hayes RJ, De Cock KM, et al. Stable incidence rates of tuberculosis (TB) among human immunodeficiency virus (HIV)–negative south African gold miners during a decade of epidemic HIV-associated TB. J Infect Dis. 2003;188:1156–63. https://doi.org/10.1086/378519. Middelkoop K, Mathema B, Myer L, Shashkina E, Whitelaw A, Kaplan G, et al. Transmission of tuberculosis in a south African community with a high prevalence of HIV infection. J Infect Dis. 2015;211:53–61. https://doi.org/10.1093/infdis/jiu403. The IeDEA and ART cohort collaborations. Immunodeficiency at the start of combination antiretroviral therapy in low-, middle- and high-income countries. J Acquir Immune Defic Syndr. 2014;65(1):e8–e16. https://doi.org/10.1097/QAI.0b013e3182a39979 PubMed PMID: PMC3894575. We thank Ray Shiraishi for his help with merging datasets from the data sources used in this analysis. This research has been supported by the President's Emergency Plan for AIDS Relief (PEPFAR) through the Centers for Disease Control and Prevention (CDC). The funders had no role in study design, data collection, analysis, interpretation of data, decision to publish, or preparation of the manuscript. The dataset for TB notifications and HIV test results among TB notifications is available in the World Health Organization repository: http://www.who.int/tb/country/data/download/en/. The dataset for HIV prevalence estimates is available in the Joint United Nations Programme on HIV/AIDS repository: http://www.unaids.org/en/resources/documents/2018/HIV_estimates_with_uncertainty_bounds_1990-present. The dataset for national population estimates is available in the United Nations Population Division repository: https://esa.un.org/unpd/wpp/Download/Standard/Population/. The dataset for ART coverage estimates is available in the Joint United Nations Programme on HIV/AIDS respository: http://aidsinfo.unaids.org/#. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention. Division of Global HIV and TB, Centers for Disease Control and Prevention, Atlanta, GA, USA Diya Surie & Eleanor S. Click Center for Global Health, Office of the Director, Centers for Disease Control and Prevention, Kisumu, Kenya Martien W. Borgdorff Division of Global HIV and TB, Centers for Disease Control and Prevention, Kisumu, Kenya Kevin P. Cain Division of Global HIV and TB, Centers for Disease Control and Prevention, Nairobi, Kenya Kevin M. DeCock Division of Global Health Equity, Brigham and Women's Hospital, Boston, MA, USA Courtney M. Yuen Department of Global Health and Social Medicine, Harvard Medical School, Boston, MA, USA Search for Diya Surie in: Search for Martien W. Borgdorff in: Search for Kevin P. Cain in: Search for Eleanor S. Click in: Search for Kevin M. DeCock in: Search for Courtney M. Yuen in: Conceptualization: CMY, KDC, and KPC. Methodology: CMY, MWB, KPC, and ESC. Data analysis: DS and CMY. Original draft preparation: DS and CMY. Review and edits: All. Supervision: CMY. 
All authors read and approved the final manuscript. Correspondence to Diya Surie. We used publicly available, aggregate country-level data from existing data sources; therefore, this analysis was not considered human subjects research and did not require ethical review. Participatory consent was not applicable. Surie, D., Borgdorff, M.W., Cain, K.P. et al. Assessing the impact of antiretroviral therapy on tuberculosis notification rates among people with HIV: a descriptive analysis of 23 countries in sub-Saharan Africa, 2010–2015. BMC Infect Dis 18, 481 (2018) doi:10.1186/s12879-018-3387-z
Earth, Planets and Space, December 2016, 68:54

A possible restart of an interplate slow slip adjacent to the Tokai seismic gap in Japan

Shinzaburo Ozawa, Mikio Tobita, Hiroshi Yarai

The Tokai region of Japan is known to be a seismic gap area and is expected to be the source region of the anticipated Tokai earthquake with a moment magnitude of over 8. Interplate slow slip occurred from approximately 2001 and subsided in 2005 in the area adjacent to the source region of the expected Tokai earthquake. Eight years later, the Tokai region again revealed signs of a slow slip from early 2013. This is the first evidence based on a dense Global Positioning System network that Tokai long-term slow slips repeatedly occur. Two datasets with different detrending produce similar transient crustal deformation and aseismic slip models, supporting the occurrence of the Tokai slow slip. The center of the current Tokai slow slip is near Lake Hamana, south of the center of the previous Tokai slow slip. The estimated moments, which increase at a roughly constant rate, amount to that of an earthquake with a moment magnitude of 6.6. If the ongoing Tokai slow slip subsides soon, it will suggest that there are at least two different types of slow slip events in the Tokai long-term slow slip area: that is, a large slow slip with a moment magnitude of over 7 with undulating time evolution and a small one with a moment magnitude of around 6.6 with a roughly linear time evolution. Because the Tokai slow slip changes the stress state to one more favorable for the expected Tokai earthquake, intensive monitoring is ongoing.

Keywords: Plate subduction zone, GPS, Interplate earthquake, Transient deformation, Slow slip event, Tokai seismic gap, Tokai slow slip, Low-frequency earthquake

The Tokai region in Japan is located near the Suruga trough, where the Philippine Sea plate subducts in the northwest direction beneath the Amurian plate at an annual rate of 2–3 cm/year (Sella et al. 2002) (Fig. 1). Because of the subduction of the Philippine Sea plate, large interplate earthquakes have repeatedly occurred in the Tokai region at time intervals of around 150 years (e.g., Kumagai 1996). The last Tokai earthquake occurred in 1854 (Richter magnitude M = 8.4). Currently, approximately 150 years have elapsed since the last Tokai earthquake. The Tokai region is deemed a seismic gap because it did not rupture at the time of the 1944 Tonankai earthquake [moment magnitude (M_w) = 8.1] (Ishibashi 1981; Mogi 1981).
Solid blue circles show the locations of the GPS sites used in the inversion of the second detrended dataset. Position time series of sites 996, 097, and 102 are shown in Figs. 2 and 6. The position time series of site 303 is shown in Additional file 1 In this tectonic setting, the dense Global Positioning System (GPS) network (GEONET) in Japan detected transient movements in the Tokai region from early 2001, which disappeared around 2005 (e.g., Ozawa et al. 2002; Miyazaki et al. 2006; Liu et al. 2010). This transient was interpreted as a long-term slow slip on the plate interface near Lake Hamana, central Japan, adjacent to the Tokai seismic gap (e.g., Ozawa et al. 2002; Ohta et al. 2004; Miyazaki et al. 2006; Liu et al. 2010; Tanaka et al. 2015). The previous Tokai slow slip gradually stopped and did not trigger the anticipated Tokai earthquake. After the discovery of the Tokai slow slip by GEONET, it was proposed that the Tokai long-term slow slip occurred during the periods of approximately 1978–1983 and 1987–1991 on the basis of baseline measurements by an electromagnetic distance meter or based on leveling data (e.g., Kimata et al. 2001; Ochi 2014). This hypothesis is consistent with other slow slip events around Japan, such as the Bungo slow slip and the Hyuga-nada slow slip (e.g., Ozawa et al. 2013; Yarai and Ozawa 2013), in that they have occurred quasi-periodically. Since the 2011 M w9 Tohoku earthquake, eastward postseismic deformation has been the dominant source of crustal deformation in the Tokai region (see Fig. 1b). Around 12 years after the start of the previous Tokai slow slip, GEONET has been detecting a similar transient signal in the Tokai region since early 2013, which is mixed with the postseismic deformation due to the 2011 Tohoku earthquake. This transient movement suggests the possibility of the restart of the Tokai slow slip on the interface between the Amurian plate and the Philippine Sea plate. In addition to the crustal deformation detected by GEONET, a strain meter in the Tokai region also shows transient deformation (Miyaoka and Kimura 2016). Because there is a possibility that the ongoing slow slip will lead to the anticipated Tokai earthquake, it is important to monitor the state of the current Tokai slow slip. In this study, we estimate the spatial and temporal evolution of the possible Tokai slow slip by applying square-root information filtering following the time-dependent inversion technique and compare it with the previous event. We also discuss the relationship between the low-frequency earthquakes and the estimated slow slip model and the influence of the latter on the expected Tokai earthquake. GPS data were analyzed to obtain daily positions with Bernese GPS software (version 5.0). We adopted the F3 solution (Nakagawa et al. 2008), which uses the final orbit and earth rotation parameters of the International GNSS Service (IGS) and provides a higher S/N ratio than the previous F2 position time series (Hatanaka et al. 2003). We used the east–west (EW), north–south (NS), and up–down (UD) components at approximately 400 GPS sites in the Tokai, Kanto, and Tohoku regions. Postseismic deformation due to the 2011 Tohoku earthquake has strong potential to contaminate and bias our search for slow slip in the Tokai region. We therefore attempt to remove its influence in two different ways, each generating a different dataset. First, we invert two fault patches (one for the Tokai slow slip and one for the Tohoku afterslip). 
In this analysis, we attributed all the postseismic deformation of the Tohoku earthquake to afterslip on the plate interface in the Tohoku region and did not take viscoelastic relaxation into account. This first approach assumes that the postseismic deformation due to viscoelastic relaxation can be partly modeled by afterslip modeling. However, it is a fact that viscoelastic relaxation contributes to the postseismic deformation due to the Tohoku earthquake (e.g., Sun et al. 2014). Thus, we need another approach to estimate the effects of viscoelastic relaxation and afterslip on the plate interface to support the results of the analysis with two fault patches. For this purpose, second, we attempt to remove the Tohoku postseismic deformation by considering exponential and logarithmic trends in the position time series in the analysis with one fault patch for the Tokai slow slip. Analysis with two fault patches In the analysis with two fault patches, we estimated and removed annual components separately from the EW, NS, and UD raw position time series at each station using a polynomial function corresponding to the trend component and trigonometric functions corresponding to the periodic annual components. Because the antenna was changed in 2012, we estimated different annual components before and after January 1, 2013, for the data from January 1, 2012, to October 25, 2015, using the following functions. $$Y(t) = \mathop \sum \limits_{i = 0}^{n} A_{i} t^{i} + \mathop \sum \limits_{i = 1}^{m} B_{i} \sin \left( {2\pi it} \right) + \mathop \sum \limits_{i = 1}^{m} C_{i} \cos \left( {2\pi it} \right)\quad t \le {\text{January}}\;1,2013$$ $$Y(t) = \mathop \sum \limits_{i = 0}^{n} A_{i} t^{i} + \mathop \sum \limits_{i = 1}^{m} D_{i} \sin \left( {2\pi it} \right) + \mathop \sum \limits_{i = 1}^{m} E_{i} \cos \left( {2\pi it} \right)\quad t > {\text{January}}\;1,2013$$ Here, Y(t) is the position time series, t is time, A i are the coefficients of the polynomial functions, and B i , C i , D i , and E i are the coefficients of the trigonometric functions. The degree of the polynomial functions n and the overtone of the trigonometric functions m were estimated on the basis of Akaike information criterion (AIC) (Akaike 1974). After removing the annual components, we estimated the linear trend for the data between January 1, 2008, and January 1, 2011, during which no transient displacements occurred, and removed it from the data without annual components. We consider that the adopted steady state for this period is satisfactory for emphasizing the results, because the previous 2001–2005 slow slip and the current slow slip were clearly detected as a deviation from this adopted steady state. After this detrending, we smoothed the position time series by averaging over 3 days to reduce errors. Thus, this first detrending does not take into account the postseismic deformation due to the 2011 Tohoku earthquake, which is the main difference from the following second detrended dataset. We call this dataset the first detrended dataset. We applied square-root information filtering (Ozawa et al. 2012) to the first detrended dataset following the inversion technique of McGuire and Segall (2003) for the period between January 1, 2013, and October 25, 2015. To reduce the computational burden, we used position time series with an interval of 3 days. Because we used detrended data, the estimated aseismic interplate slip is the deviation from the steady state for the period between January 1, 2008, and January 1, 2011. 
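As a concrete illustration of the annual-component and trend removal described above, the following is a minimal sketch (not the authors' code) of how a polynomial trend plus annual harmonics, with separate harmonic coefficients before and after the 2012 antenna change, could be fitted by ordinary least squares, with AIC used to select the polynomial degree n and the number of harmonics m. The function names, the time units (years), and the Gaussian-residual form of AIC are illustrative assumptions.

```python
import numpy as np

def design_matrix(t, n, m, t_split):
    """Polynomial trend of degree n plus m annual harmonics, with separate
    harmonic coefficients before and after the antenna change at t_split."""
    cols = [t**i for i in range(n + 1)]                        # trend terms A_i t^i
    pre = (t <= t_split).astype(float)
    post = 1.0 - pre
    for i in range(1, m + 1):
        w = 2.0 * np.pi * i * t                                # t in years
        cols += [pre * np.sin(w), pre * np.cos(w),             # B_i, C_i
                 post * np.sin(w), post * np.cos(w)]           # D_i, E_i
    return np.column_stack(cols)

def fit_with_aic(t, y, n, m, t_split):
    X = design_matrix(t, n, m, t_split)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    k = X.shape[1]
    aic = 2 * k + len(y) * np.log(np.sum(resid**2) / len(y))   # Gaussian AIC, up to an additive constant
    return coef, aic

# Pick (n, m) minimizing AIC, subtract the fitted annual terms, then remove the
# 2008-2011 linear trend estimated separately, as described in the text.
```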
We used 389 GPS sites in the filtering analysis for the first detrended dataset (see Fig. 1a). We weighted the EW, NS, and UD displacements with a ratio of 1:1:1/3 considering the standard deviations estimated from ordinary Kalman filtering. We used two fault patches for the upper boundary of the Pacific plate along the Japan Trench and that of the Philippine Sea plate along the Suruga trough (Fig. 1). Because postseismic deformation occurred after the 2011 Tohoku earthquake as mentioned above, we attributed the cause of all postseismic deformation to afterslip on the Tohoku fault patch. In this case, we do not take the viscoelastic response due to the Tohoku earthquake into account because we consider that the effect of the viscoelastic response to ground deformation occurs on a spatially large scale and is similar to the afterslip effect at this point. That is, the contribution of the viscoelastic response to the surface deformation in the Tokai region may be partly compensated by our afterslip model on the first fault patch, which is the upper surface of the Pacific plate after the Tohoku earthquake. We incorporated the inequality constraint that the slip is within 40° of the direction of the plate motion following the method of Simon and Simon (2006). In this filtering analysis, we incorporated the spatial roughness of the slip distribution (McGuire and Segall 2003). Hyperparameters that scale the spatial and temporal smoothness were estimated by approximately maximizing the log-likelihood of the system (Kitagawa and Gersch 1996; McGuire and Segall 2003). The optimal values of the spatial and temporal smoothness of the Tohoku fault patch are approximately 1.0 and 0.001, while those of the Tokai fault patch are approximately 0.004 and 0.001, respectively. In the above analysis, a fault patch and a slip distribution on a fault patch are represented by the superposition of spline functions (Ozawa et al. 2001). The fault patch for the Tokai region consists of 11 nodes in the T-direction and 15 nodes in the S-direction (see Fig. 1b) (Ozawa et al. 2001). These directions are defined in Fig. 1b. The spacing of knots on the fault patch is approximately 20 km in the Tokai region. The plate boundary model is from Hirose et al. (2008). With regard to the fault patch in the Tohoku region, we used 12 nodes in the T-direction and 10 nodes in the S-direction with spacing of approximately 80 and 40 km in the T-direction and S-direction, respectively (see Fig. 1a). This Tohoku fault patch is used only in the analysis with two fault patches. Although the spacing between the parallel trench nodes is larger than that between the normal trench nodes, the results for the afterslip on this Tohoku fault patch are similar to those of Ozawa et al. (2012), in which the grid spacing is shorter. Thus, we consider that the fault patch adopted for the Tohoku region can satisfactorily describe the afterslip of the Tohoku earthquake. The plate boundary model is from Nakajima and Hasegawa (2006) and Nakajima et al. (2009). The coefficients of the spline functions that represent the slip distribution on the fault patches are estimated in this inversion (Ozawa et al. 2001). Analysis with one fault patch In the second detrending, we assumed that the postseismic deformation after the 2011 Tohoku earthquake follows a time evolution with exponential and logarithmic decay, because the causes of the postseismic deformation due to the 2011 Tohoku earthquake seem to be afterslip and viscoelastic relaxation (e.g., Sun et al. 
2014). Theoretically, exponential decay corresponds to the viscoelastic relaxation in a medium with linear viscoelasticity, and logarithmic decay corresponds to the afterslip on a plate boundary with a rate- and state-dependent friction law (e.g., Hetland and Hager 2006; Marone et al. 1991). Tobita and Akashi (2015) first proposed that the postseismic deformation due to the 2011 Tohoku earthquake can be well reproduced by a function consisting of logarithmic and exponential functions. In this case, we first produced a dataset without annual components or the linear trend from the raw data in the same way as for the first detrended dataset, although we estimated annual components between January 1, 1997, and January 1, 2011. In this estimate of annual components, the 2001–2005 Tokai slow slip does not affect the results. The linear trend is estimated for the same period as for the first detrended dataset. After this process, we estimated the logarithmic and exponential components for the period between March 12, 2011, and March 11, 2013, assuming that the crustal deformation for this period was caused by the afterslip and viscoelastic relaxation due to the Tohoku earthquake and the local Tokai transient starting from early 2013. We regressed the following function to the position time series, as was first conducted by Tobita (2016): $$Y(t) = A\;\log \left( {\frac{t}{t1} + 1} \right) + B\left( {\exp \left( { - \frac{t}{t2}} \right) - 1} \right) + C,$$ where t, t1, and t2 are the time elapsed from the 2011 Tohoku earthquake on March 11, 2011, and the time constants of logarithmic and exponential decay, respectively. The time constants t1 and t2 of the logarithmic and exponential components were estimated from the position time series of four GPS sites in the Tohoku and Kanto regions of Japan (Tobita 2016). After the estimates of the time constants, we estimated A, B, and C in Eq. (3) and deleted the logarithmic and exponential components from all the position time series on the assumption that the time constants are the same among all the GPS position time series. With regard to the remaining annual components due to the changes to the antenna in 2012, we estimated them from the data for the period between 2012 and 2015 by employing a different annual component before and after January 1, 2013. We removed this annual component, as for the case of the first detrended dataset, so that we could compare the first and second detrended datasets. We call this detrended dataset the second detrended dataset from now. We used 129 GPS sites in the Tokai region of the second detrended dataset for the time-dependent inversion analysis (Fig. 1b). This is because it is not necessary to take into account the postseismic deformation due to the 2011 Tohoku earthquake for the second detrended dataset, although we have to take it into account in the first detrended dataset. In the second detrended dataset, we used the same fault patch between the Philippine Sea plate and the Amurian plate beneath the Tokai region as that in the first detrended dataset without the Tohoku fault patch, because we consider that the effects of the viscoelastic relaxation and afterslip due to the 2011 Tohoku earthquake are nonexistent in the second detrended dataset owing to the removal of the exponential and logarithmic trends. 
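To make the second detrending concrete, a least-squares fit of the combined logarithmic-plus-exponential function of Eq. (3) could be set up as sketched below. This is not the authors' implementation: the placeholder time constants and initial guesses are illustrative only, and in the actual analysis t1 and t2 are first estimated from the position time series of a few reference sites and then held fixed for all series.

```python
import numpy as np
from scipy.optimize import curve_fit

def postseismic(t, A, B, C, t1, t2):
    """Eq. (3): logarithmic (afterslip-like) plus exponential (viscoelastic-like) decay.
    t is the time elapsed since the 2011 Tohoku earthquake."""
    return A * np.log(t / t1 + 1.0) + B * (np.exp(-t / t2) - 1.0) + C

# Stage 1 (not shown): estimate the shared time constants t1, t2 from selected sites.
T1, T2 = 5.0, 500.0     # placeholder values (days); not the values used in the paper

def fit_site_component(t_days, y):
    """Fit A, B, C for one position component with t1, t2 held fixed."""
    f = lambda t, A, B, C: postseismic(t, A, B, C, T1, T2)
    (A, B, C), _ = curve_fit(f, t_days, y, p0=(1.0, 1.0, 0.0))
    return A, B, C      # subtract f(t_days, A, B, C) to remove the Tohoku postseismic signal
```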
The spatial and temporal smoothness of the second detrended dataset is set to the same values as those of the Tokai fault patch in the analysis with two fault patches so that we can approximately compare the results of this analysis with those of the analysis with two fault patches. Analysis of the previous Tokai slow slip In addition, we created a third detrended dataset using the same method as for the first detrended dataset but with a different period of estimation for the annual components. That is, we estimated the annual components of the EW, NS, and UD position time series separately for the period up to January 1, 2011, together with a polynomial function from the raw position time series at each station. We estimated the linear trend for the same period as for the first and second datasets and removed it from the position time series without annual components. We used 86 GPS sites in the following inversion. Using this third detrended dataset, we estimated an approximate model of the previous Tokai slow slip for the period between January 1, 2001, and January 1, 2008, by the same method as for the second detrended dataset because there are no other signals, such as those corresponding to postseismic deformation due to the 2011 Tohoku earthquake, for this period. We consider that the postseismic deformation due to the 2004 off Kii peninsula earthquakes (M w > 7) (see Fig. 1a) does not significantly affect the previous Tokai slow slip model. The estimated optimal spatial and temporal parameters are approximately 3.0 and 1.0, respectively. The first detrended crustal deformation data show eastward displacements relative to the Misumi site (Fig. 1b). The time evolution of these eastward displacements can be clearly observed in the first detrended position time series of selected GPS sites (Fig. 2a–c). The eastward displacements are mostly attributed to the postseismic deformation due to the 2011 Tohoku earthquake, although the Philippine Sea plate is subducting beneath the Amurian plate from the Suruga trough in the northwestward direction. However, there are also southward displacements of approximately 1 cm in the north–south position time series from early 2013 in Fig. 2a–c compared with approximately 2 cm eastward movements, which cannot be explained by either afterslip or viscoelastic relaxation due to the 2011 Tohoku earthquake, suggesting aseismic slip on the plate interface in the Tokai region. Thus, we assume that transient crustal deformation started locally from early 2013 at the latest in the Tokai region. We do not treat the southward transient movements before 2013 in the same time series in this paper, because the spatial pattern of this change before 2013 is less clear than that of the transient movements after 2013. (Top) First detrended position time series (see text) at sites a 996, b 097, and c 102, whose locations are shown in Fig. 1b. EW, NS, and UD represent east–west, north–south, and up–down components, with eastward, northward, and upward positive, respectively. The offset in 2011 indicates coseismic deformation from the 2011 Tohoku earthquake. Red lines show values computed by our best-fitting interplate aseismic slip model, which consists of two fault patches for the upper boundary of the Pacific plate along the Japan Trench and that of the Philippine Sea plate along the Suruga trough. 
(Bottom) Position time series at sites d 996, e 097, and f 102 with the computed values of the estimated optimal Tohoku afterslip model removed from the first detrended position time series to make the transient from the Tokai slow slip clear. The red line shows values computed from the Tokai slow slip model Although we estimated the afterslip on the Pacific plate in the Tohoku region together with the slip on the Philippine Sea plate in the Tokai region, we will not discuss it in this paper because our focus is on the Tokai slow slip. The characteristic features of the estimated afterslip on the Pacific plate in the Tohoku region are similar to the results of Ozawa et al. (2012) (Additional file 2). Figure 2d–f shows position time series with the effect of the Tohoku afterslip model removed from the first detrended dataset to make the Tokai transient clear. That is, we calculated the contribution to the ground displacements from the optimal Tohoku afterslip model estimated in this study and subtracted it from the first detrended position time series. We consider that Fig. 2d–f may be directly comparable to the second dataset. All the GPS sites in Fig. 2d–f show southeastward or eastward displacements from early 2013. Figure 3 shows the characteristic features of the spatial patterns of the data derived by removing the component of the 2011 Tohoku afterslip model estimated in this research from the first detrended dataset for the period between January 1, 2013, and October 25, 2015. In this figure, we can observe southeastward crustal deformation west of Lake Hamana and eastward deformation east of Lake Hamana, strongly indicating the occurrence of aseismic interplate slip near Lake Hamana. Furthermore, the vertical crustal deformation shows uplift east of Lake Hamana for the same period (Fig. 3a), which is also expected to result from a slow slip near Lake Hamana. The 1σ error is 1–2 mm for horizontal displacements and 3–6 mm for vertical displacements in Fig. 3a, as estimated by ordinary Kalman filtering. Thus, the observed transients in Fig. 3a exceed the 1σ error. We do not consider that the postseismic deformation due to the 2011 Tohoku earthquake could have created the pattern in Fig. 3a by viscoelastic relaxation or afterslip on the Pacific plate. This is because this pattern occurs in a spatially small scale in the Tokai region compared with the viscoelastic relaxation and afterslip on the Pacific plate, which probably causes spatially larger-scale ground deformation because the source is so far from the ground surface in the Tokai region. Furthermore, afterslip or viscoelastic relaxation could not have created the uplift near Lake Hamana. a Horizontal crustal deformation, which was obtained by subtracting the effect of the afterslip model of the Tohoku earthquake from the first detrended crustal deformation data to make the transient from the Tokai slow slip clear. The period is between January 1, 2013, and October 25, 2015. Arrows show the movements of the GPS sites. The color indicates the spatially smoothed vertical displacement. b Crustal deformation computed from our best-fitting Tokai slow slip model based on the first detrended dataset. Arrows show the computed horizontal displacements, while the color indicates the computed vertical displacement On the basis of the observed first detrended displacements, our filtering analysis shows a slow slip area near the west boundary of the source region of the expected Tokai earthquake (Fig. 4a–c) as expected from Fig. 3a. 
The aseismic interplate southeastward slip rate from January 1, 2013, to January 1, 2014, amounts to 1 cm/year in Fig. 4a and is centered near Lake Hamana. The maximum aseismic interplate slip rate for the period between January 1, 2014, and January 1, 2015, exceeds 2 cm/year in Fig. 4b. Figure 4c shows an aseismic interplate slip rate of approximately 1–2 cm/year near Lake Hamana for the period between January 1, 2015, and October 25, 2015. The aseismic slip in the Nagoya area in Fig. 4c is less reliable because it is located near the western boundary of the fault patch used in the modeling, and the estimated error is relatively large in this area. The time evolution of the slip area indicates that the center of the aseismic slip is near Lake Hamana, and movement of the center of the slip area is not observed. Figure 4d–f shows the 1σ error of the estimated aseismic slip rate with a contour interval of 2 mm/year, indicating that the estimated interplate slip rates for all the periods are larger than the 1σ error. The main area in which the estimated total slip is in the southeastward direction is located on the west boundary of the seismic gap, and the slip magnitude surpasses 4 cm for the period between January 1, 2013, and October 25, 2015, with a moment magnitude exceeding 6.6 (Fig. 5a, b). The estimated moment release rate seems to be roughly constant in Fig. 5b from a long-term viewpoint. The horizontal and vertical displacements computed from our best-fitting Tokai aseismic slip model based on the first detrended dataset reproduce the observed crustal deformation well, as shown in Figs. 2 and 3b. (Top) Estimated aseismic slip rate on the interface between the Amurian plate and the Philippine Sea plate based on the first detrended dataset for the periods between a January 1, 2013–January 1, 2014, b January 1, 2014–January 1, 2015, and c January 1, 2015–October 25, 2015, with a contour interval of 1 cm/year (red lines). The red arrows also indicate the estimated aseismic slip rate. The dashed lines indicate isodepth contours of the plate interface with an interval of 20 km (Hirose et al. 2008). The blue dashed line shows the source region of the anticipated Tokai earthquake. Green dots indicate epicenters of low-frequency earthquakes determined by Japan Meteorological Agency for the same period. Thin solid lines indicate prefectural boundaries in Japan. (Bottom) Estimated 1σ error with a contour interval of 2 mm/year. d–f The periods of a–c a Estimated aseismic slip based on the first detrended dataset for the period between January 1, 2013, and October 25, 2015, shown by red contours with an interval of 2 cm. The red arrows also indicate the estimated aseismic slip. The nomenclature is the same as that in Fig. 4a. b Time evolution of the estimated moment of the current Tokai slow slip based on the first detrended dataset. Dashed lines indicate the 3σ error Figure 6a–c shows the position time series with the annual components and the linear trend removed from the raw data, as described in detail in the subsection on the analysis with one fault patch in the "Methods" section, and the values computed using an optimal function consisting of logarithmic and exponential functions. The second detrended dataset is shown for the same selected GPS sites in Fig. 6d–f. In Fig. 6d–f, we can clearly observe southeastward and eastward residuals from early 2013. Figure 7a shows the spatial pattern of the second detrended dataset for the period between January 1, 2013, and October 25, 2015. 
The observed spatial pattern of the second detrended dataset shows southeastward displacements west of Lake Hamana and eastward displacements east of Lake Hamana. Uplift is observed near Lake Hamana. The 1σ error ranges from 1 to 2 mm for the horizontal components and from 3 to 6 mm for the vertical components. These features are similar to those in Fig. 3a for the first detrended dataset without the Tohoku afterslip effects. The filtering analysis of the second detrended dataset shows an area of aseismic slip centered on Lake Hamana (Fig. 8a–c), which is similar to the slip area estimated from the first detrended dataset in Fig. 4. Furthermore, the slip rate magnitude seems to be very similar to that obtained by analysis using the first detrended dataset, ranging from 1 to 2 cm/year (Fig. 8a–c). We cannot observe any significant movements of the center of the slow slip area for the entire period. Figure 8d–f shows that the estimated aseismic slip rates in Fig. 8a–c are larger than the 1σ error. The total slip magnitude amounts to more than 4 cm for the period between January 1, 2013, and October 25, 2015, with a moment magnitude reaching 6.6 (Fig. 9a, b). This model closely reproduces the observed transient deformation shown in Figs. 6d–f and 7a, b. (Top) Position time series with the linear trend estimated for the period between January 1, 2008, and January 1, 2011, and annual components estimated between January 1, 1997, and January 1, 2011, removed from the raw data (see text) at sites a 996, b 097, and c 102. The nomenclature is the same as that in Fig. 2. Red lines show values computed by our best-fitting function consisting of logarithmic and exponential functions (see text). (Bottom) Second detrended position time series at sites d 996, e 097, and f 102 (see text). The green line shows values computed from the Tokai slow slip model based on the second detrended dataset a Second detrended crustal deformation for the period between January 1, 2013, and October 25, 2015. Arrows show the horizontal movements of the GPS sites. The color indicates the spatially smoothed vertical displacement. b Crustal deformation computed from our best-fitting Tokai slow slip model based on the second detrended dataset. Arrows show the computed horizontal displacements, while the color indicates the computed vertical displacement (Top) Aseismic slip rate estimated from the second detrended dataset on the interface between the Amurian plate and the Philippine Sea plate for the periods between a January 1, 2013–January 1, 2014, b January 1, 2014–January 1, 2015, and c January 1, 2015–October 25, 2015, with a contour interval of 1 cm/year (red lines). The red arrows also indicate the estimated aseismic slip rate. The nomenclature is the same as that in Fig. 4a. (Bottom) Estimated 1σ error with a contour interval of 2 mm/year. d–f The periods of a–c a Aseismic slip based on the second detrended data for the period between January 1, 2013, and October 25, 2015, shown by red contours with an interval of 2 cm. The red arrows also indicate the estimated aseismic slip. The nomenclature is the same as that in Fig. 5a. b Time evolution of the estimated moment of the current Tokai slow slip based on the second detrended dataset. Dashed lines indicate the 3σ error Comparison between the two models We obtained almost the same results using the two different detrended datasets. 
With regard to the estimated model based on the first detrended dataset, we cannot rule out the existence of a systematic error resulting from the postseismic deformation since the 2011 Tohoku earthquake. However, we believe that any such systematic error would be small considering the difference in the spatial scale of the viscoelastic relaxation effect mentioned above. Furthermore, because our model based on the second detrended dataset, which excludes the exponential and logarithmic time evolution corresponding to viscoelastic relaxation and afterslip, respectively, shows similar results for the location and time evolution of aseismic slip to those obtained from the first detrended dataset, we consider that slow slip is now occurring on the west boundary of the Tokai seismic gap area, without signs of decay. This finding is consistent with the strain meter observations in this region by Japan Meteorological Agency, which also indicate interplate slow slip near Lake Hamana (Miyaoka and Kimura 2016). Although the start time of the current slow slip event is unclear, we assumed that it started in early 2013 at the latest on the basis of the approximate start time of the transient in Figs. 2 and 6 and the moment release rate shown in Figs. 5b and 9b. For the second detrended data, we also changed the regression periods for the logarithmic and exponential functions to March 12, 2011–March 12, 2012, and March 12, 2011–March 12, 2014, respectively, and found that the characteristic feature of a slip area appearing that was centered on Lake Hamana was not changed. The reason why the two different detrended datasets gave similar results is that the postseismic deformation due to the 2011 Tohoku earthquake can be well explained by both the kinematic afterslip model and the logarithmic and exponential functions, which are based on the physics of the rate- and state-dependent friction law and viscoelasticity. However, it remains unclear why the kinematic afterslip model and the logarithmic and exponential functions produced similar postseismic deformation. We cannot rule out the possibility that the estimated logarithmic and exponential functions may reflect an actual process of afterslip and viscoelastic relaxation involved in the postseismic deformation due to the Tohoku earthquake. This problem remains to be solved in a future study. Comparison between the previous and current slow slips Interplate aseismic slip occurred in the Tokai region approximately during the period between 2001 and 2005. Figure 10 shows the spatial pattern of the third detrended dataset for the period between January 1, 2001, and January 1, 2008. As shown in Fig. 10, the spatial pattern of this transient deformation is very similar to that of the current Tokai transient (Figs. 3a, 7a). This strongly indicates that the current transient is a true signal caused by aseismic interplate slip in the Tokai region. The center of the previous slow slip area for the period between January 1, 2001, and January 1, 2008, is located between the low-frequency earthquake area and Lake Hamana, while the ongoing slow slip area is located in the southern part of the previous Tokai slow slip area (Ozawa et al. 2002; Ohta et al. 2004; Miyazaki et al. 2006; Liu et al. 2010; Ochi and Kato 2013) (see Fig. 11). Low-frequency earthquakes are accompanied by episodic slip on the plate interface (e.g., Shelly et al. 2006). 
The estimated moment magnitude (M w) of the previous event is over 7 and much larger than the values based on the two different detrended datasets (see Fig. 12). Fig. 10 a Third detrended crustal deformation for the period between January 1, 2001, and January, 1 2008. The color indicates the spatially smoothed vertical displacement. b Crustal deformation computed from the aseismic slip model at the time of the previous Tokai slow slip for the period between January 1, 2001, and January 1. 2008, which is shown in Fig. 11 Comparison of the estimated aseismic interplate slip. The gray contours show the slip magnitude of the previous Tokai slow slip for the period between January 1, 2001, and January 1, 2008, with a contour interval of 10 cm based on the third detrended data (see text). The red contours show the estimated aseismic slip based on the first detrended data (see text) for the period between January 1, 2013, and October 25, 2015. The black contours show the estimated aseismic slip based on the second detrended data for the period between January 1, 2013, and October 25, 2015. Green dots indicate the epicenters of low-frequency earthquakes for the period between January 1, 2001, and January 1, 2008. The other nomenclature is the same as that in Fig. 5a Estimated time evolution of moment release. Line 1 indicates the estimated moment of the previous Tokai slow slip. Lines 2 and 3 indicate the estimated moments based on the first and second detrended datasets, respectively. We set the start time to 2013 for Lines 2 and 3 to match the start time of 2001 for the previous Tokai slow slip Because the previous Tokai slow slip reached a moment magnitude of over 7, we cannot rule out the possibility that the present stage may correspond to the initial stage of the time evolution of the Tokai slow slip. The estimated moment release of the current Tokai slow slip seems to have increased almost linearly since early 2013, as shown in Figs. 5b and 9b and in Fig. 12 in the long term, while the moment release rate of the previous Tokai slow slip changed in the long term. At this point, the current Tokai slow slip seems to be following a different time evolution from that in the previous event (Fig. 12). At the time of the previous Tokai slow slip, the center of the slip area was located near Lake Hamana in the early period, then moved north over time (e.g., Ozawa et al. 2001; Miyazaki et al. 2006) (see Additional file 3). Thus, there is a possibility that the ongoing slow slip will move northward over time with an increase in the slip magnitude. However, if the current aseismic slip stops in the near future, it will indicate the coexistence of relatively small slow slip events in the Tokai long-term slow slip area. Our previous Tokai slow slip model reproduces the observations well as shown in Fig. 10. Relationship with low-frequency earthquakes Non-volcanic low-frequency tremors have been discovered in the Nankai trough subduction zone in Japan (Obara 2002). These low-frequency tremors include low-frequency earthquakes whose hypocenters can be determined by identification of coherent S-wave and P-wave arrivals (Katsumata and Kamaya 2003). Low-frequency earthquakes occur at a depth of approximately 30–40 km on the plate interface in the Tokai region. 
Low-frequency tremors that contain low-frequency earthquakes occur in the Tokai region with a recurrence interval of approximately 6 months accompanied by aseismic slip, which continues for 2–3 days, and release a seismic moment corresponding to M w ~ 6 (Hirose and Obara 2006). This relationship between low-frequency tremors or earthquakes and slow slip events has also been observed in many other areas (e.g., Rogers and Dragert 2003; Shelly et al. 2006). Thus, we can expect low-frequency tremors or earthquakes with higher activity owing to the influence of the current Tokai slow slip. However, such a relationship has not been observed this time, although there was a correlation between the low-frequency earthquake activity in the Tokai region and the previous Tokai slow slip (Ishigaki et al. 2005). We consider that low-frequency earthquakes are not being activated by the current slow slip because the central part of the slow slip area is located away from the low-frequency earthquake area (Figs. 5a, 9a) and its magnitude is still small, suggesting little change in the stress in the low-frequency earthquake area. Although short-term slow slip events of M w6 in the Tokai region trigger low-frequency tremors or earthquakes (Hirose and Obara 2006), the area of induced low-frequency tremors or earthquakes overlaps with the short-term slow slip area, indicating a large change in stress in this case. The short-term slow slip with a maximum moment magnitude of M w6 in the low-frequency earthquake area (Hirose and Obara 2006) does not affect our optimal Tokai slow slip model owing to its small moment magnitude compared with the current Tokai event and the center location of our current Tokai model. Influence on the expected Tokai earthquake There is a possibility that the 2011 Tohoku earthquake and its postseismic deformation have affected the seismic activity of the Japanese archipelago. In studies on the Coulomb failure stress change (ΔCFS) due to the Tohoku earthquake, Toda et al. (2011) showed seismic excitation throughout central Japan after the Tohoku earthquake and Ishibe et al. (2015) reported changes in seismicity in the Kanto region that were correlated with ΔCFS. Furthermore, slow slip events off the Boso peninsula, east Japan, have shown a shorter recurrence interval after the Tohoku earthquake (Ozawa, 2014). Our Tohoku coseismic model (Ozawa et al. 2012) produces a ΔCFS of approximately 3 kPa near Cape Omaezaki in the Tokai seismic gap. Furthermore, the two estimated models of the current Tokai slow slip produce ΔCFS of approximately 3 kPa near Cape Omaezaki (see Fig. 1b). We assumed a rigidity of 30 GPa, Poisson's ratio of 0.25, and a friction coefficient of 0.4 in these calculations (Harris 1998). Considering these points, we cannot rule out the possibility that the ongoing slow slip will trigger the anticipated Tokai earthquake, although the previous event did not cause the expected earthquake. Thus, it is very important to intensively monitor the ongoing Tokai slow slip using the dense GPS network in Japan. It has been proposed that the Tokai slow slip occurs with a recurrence interval of 9–14 years, although the truth of this hypothesis is difficult to establish because of the scarcity of data (e.g., Kimata et al. 2001; Ochi 2014). Thus, our finding, obtained using the dense GPS network in Japan, confirms that the Tokai slow slip has occurred repeatedly on the west boundary of the Tokai seismic gap and changed the stress state in favor of the anticipated Tokai earthquake. 
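Two of the quantitative statements above, the roughly M w 6.6 size of the slow slip and the ~3 kPa Coulomb stress change, can be sanity-checked with back-of-the-envelope formulas, as sketched below. Only the rigidity and the friction coefficient are taken from the text; the slip-patch dimensions and the input stress changes are assumed numbers, since the real values come from the inversion and from an elastic dislocation calculation (with rigidity 30 GPa and Poisson's ratio 0.25) that is not reproduced here.

```python
import numpy as np

# --- Moment magnitude from slip (illustrative patch size, not from the inversion) ---
mu   = 30e9             # rigidity (Pa), value assumed in the paper's stress calculations
slip = 0.04             # total aseismic slip (m), roughly 4 cm
area = 80e3 * 100e3     # slip-patch area (m^2); dimensions chosen only for illustration
M0 = mu * area * slip                      # scalar seismic moment (N*m)
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)    # standard moment-magnitude relation, ~6.6 here

# --- Coulomb failure stress change on a receiver fault ---
def delta_cfs(d_tau, d_sigma_n, mu_eff=0.4):
    """d_tau: shear stress change in the slip direction (Pa);
    d_sigma_n: normal stress change, positive for unclamping (Pa);
    mu_eff: effective friction coefficient (0.4, as in the paper)."""
    return d_tau + mu_eff * d_sigma_n

# Illustrative stress inputs; real values require a dislocation (Okada-type) model.
print(f"Mw ~ {Mw:.1f}, dCFS ~ {delta_cfs(2.0e3, 2.5e3):.0f} Pa")
```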
Although the start time of the current event is not clear, the recurrence interval of the Tokai slow slip is approximately 12 years if we assume it to be early 2013. Our estimated models of the current Tokai slow slip indicate differences from the previous event regarding the following points. First, the magnitude of the current event is approximately 6.6, which is very small compared with the M w of over 7 for the previous event, and increasing at a roughly constant rate. Second, the center of the slip area is located south of that in the previous event. Third, the moment release rate is very small compared with that in the previous event. We cannot clearly state whether the current slow slip will become similar to the previous Tokai slow slip or whether it will be a different type of slow slip from the above points. SO performed all the data analysis, prepared the graphical material, and wrote the manuscript. MT proposed the detrending method using a function consisting of logarithmic and exponential functions and estimated the time constants of the logarithmic and exponential functions. MT and HY supervised SO and helped to improve the manuscript. All authors read and approved the final manuscript. We are grateful to our colleagues for helpful discussions. Prof. Teruyuki Kato of the Earthquake Research Institute, the University of Tokyo, advised us about detrending. Prof. Sagiya of Nagoya University provided us with helpful information. The hypocenter data of low-frequency earthquakes are from Japan Meteorological Agency. The data used in this paper are available by contacting [email protected]. 40623_2016_430_MOESM1_ESM.pdf (2.7 mb) Additional file 1. (A) Third detrended position time series at site 303, whose location is shown in Fig. 1b. The red lines indicate the values computed from our best-fitting Tokai model. (B) Third detrended position time series at site 097, whose location is shown in Fig. 1b. The red lines indicate the computed values. (C) Third detrended position time series at site 102, whose location is shown in Fig. 1b. Our model reproduces the observations well. Additional file 2. (Top) First detrended crustal deformation for the periods between (A) January 1, 2013 and January 1, 2014, (B) January 1, 2014 and January 1, 2015, and (C) January 1, 2015 and October 25, 2015. (Bottom) Estimated aseismic slip with a contour interval of 20 cm. (D–F) The periods of (A–C). All the estimated slip patterns are similar to the results of Ozawa et al. (2012). Additional file 3. Estimated aseismic slip based on the third detrended dataset for the period (A) between January 1, 2001 and January 1, 2002. The red lines indicate the aseismic slip magnitude with a contour interval of 5 cm. The nomenclature is the same as that in Fig. 4a. We can see a relatively large slip area on the west boundary of the expected source region of the Tokai earthquake. (B) January 1, 2002–January 1, 2003. In this period, the slip area west of Lake Hamana disappears and the slip area north of Lake Hamana expands slightly. (C) January 1, 2003–January 1, 2004. The slip area north of Lake Hamana becomes large in this period. (D) January 1, 2004–January 1, 2005. In this period, the slip area expands to the southwest, which is attributed the postseismic deformation due to the 2004 off Kii-peninsula earthquakes with a moment magnitude of around 7.3–7.4 inside the Philippine Sea plate (Fig. 1a). Thus, we consider that this expansion of the slip area to the southwest is not reliable. 
(E) January 1, 2005–January 1, 2006. In this period, the aseismic interplate slip seems to be nearing its end. (F) January 1, 2006–January 1, 2007. (G) January 1, 2007–January 1, 2008. (H–N) Estimated 1σ error for the periods corresponding to (A–G). Akaike H (1974) A new look at the statistical model identifications. IEEE Trans Autom Control 19:716–723CrossRefGoogle Scholar Harris RA (1998) Introduction to special section: stress triggers, stress shadows, and implications for seismic hazard. J Geophys Res 103:24347–24358CrossRefGoogle Scholar Hatanaka Y, Iizuka T, Sawada M, Yamagiwa A, Kikuta Y, Johnson JM, Rocken C (2003) Improvement of the analysis strategy of GEONET. Bull Geogr Surv Inst 49:11–37Google Scholar Hetland EA, Hager BH (2006) The effects of rheological layering on post-seismic deformation. Geophys J Int 166:277–292CrossRefGoogle Scholar Hirose H, Obara K (2006) Short-term slow slip and correlated tremor episodes in the Tokai region, central Japan. Geophys Res Lett 33:L17311. doi: 10.1029/2006GL026579 CrossRefGoogle Scholar Hirose F, Nakajima J, Hasegawa A (2008) Three-dimensional seismic velocity structure and configuration of the Philippine Sea slab in southwestern Japan estimated by double-difference tomography. J Geophys Res 113:B09315. doi: 10.1029/2007JB005274 CrossRefGoogle Scholar Ishibashi K (1981) Specification of a soon-to-occur seismic faulting in the Tokai district central Japan based upon seismotectonics. In: Simpson DW, Richards PG (eds) Earthquake prediction. Maurice Ewing series, vol 4. American Geophysical Union, Washington, pp 297–332Google Scholar Ishibe T, Satake K, Sakai S, Shimazaki K, Tsuruoka H, Yokota Y, Nakagawa S, Hirata N (2015) Correlation between Coulomb stress imparted by the 2011 Tohoku-Oki earthquake and seismicity rate change in Kanto, Japan. Geophys J Int 201:112–134CrossRefGoogle Scholar Ishigaki Y, Katsumata A, Kamaya N, Nakamura K, Ozawa S (2005) The relation between the slow slip of plate boundary in Tokai district and low frequency earthquakes. Q J Seismol 68:81–97Google Scholar Katsumata A, Kamaya N (2003) Low-frequency continuous tremor around the Moho discontinuity away from volcanoes in the southwest Japan. Geophys Res Lett 30:1020. doi: 10.1029/2002GL0159812 CrossRefGoogle Scholar Kimata F, Hirahara K, Fujii N, Hirose H (2001) Repeated occurrence of slow slip events on the subducting plate interface in the Tokai region, central Japan, the focal region of the anticipated Tokai earthquake (M = 8). Paper presented at the AGU Fall Meeting, San Francisco, 10–14 December 2001Google Scholar Kitagawa G, Gersch W (1996) Linear gaussian state space modelling. In: Kitagawa G, Gersch W (eds) Smoothness priors analysis of time series. Lecture notes in statistics, vol 116. Springer, New YorkCrossRefGoogle Scholar Kumagai H (1996) Time sequence and the recurrence models for large earthquakes along the Nankai Trough revisited. Geophys Res Lett 23:1139–1142CrossRefGoogle Scholar Liu Z, Owen S, Dong D, Lundgren P, Webb F, Hetland E, Simons M (2010) Integration of transient strain events with models of plate coupling and areas of great earthquakes in southwest Japan. Geophys J Int 181:1292–1312CrossRefGoogle Scholar Marone CJ, Scholtz CH, Bilham R (1991) On the mechanics of earthquake afterslip. J Geophys Res 96:8441–8452. doi: 10.1029/91JB00275 CrossRefGoogle Scholar McGuire JJ, Segall P (2003) Imaging of aseismic slip transients recorded by dense geodetic networks. 
Geophys J Int 155:778–788CrossRefGoogle Scholar Miyaoka K, Kimura H (2016) Detection of long-term slow slip event by strainmeters using the stacking method. Q J Seismol 79:15–23Google Scholar Miyazaki S, Segall P, McGuire J, Kato T, Hatanaka Y (2006) Spatial and temporal evolution of stress and slip rate during the 2000 Tokai slow earthquake. J Geophys Res 111:B03409. doi: 10.1029/2004JB003426 Google Scholar Mogi K (1981) Seismicity in western Japan and long-term earthquake forecasting. In: Simpson DW, Richards PG (eds) Earthquake prediction. Maurice Ewing series, vol 4. American Geophysical Union, Washington, pp 43–51Google Scholar Nakagawa H, Miyahara B, Iwashita C, Toyofuku T, Kotani K, Ishimoto M, Munekane H, Hatanaka Y (2008) New analysis strategy of GEONET. In: Proceedings of international symposium on GPS/GNSS, Tokyo, 11–14 November 2008Google Scholar Nakajima J, Hasegawa A (2006) Anomalous low-velocity zone and linear alignment of seismicity along it in the subducted Pacific slab beneath Kanto, Japan: reactivation of subducted fracture zone? Geophys Res Lett 33:L16309. doi: 10.1029/2006GL026773 CrossRefGoogle Scholar Nakajima J, Hirose F, Hasegawa A (2009) Seismotectonics beneath the Tokyo metropolitan area, Japan: effect of slab-slab contact and overlap on seismicity. J Geophys Res 114:B08309. doi: 10.1029/2008JB006101 CrossRefGoogle Scholar Obara K (2002) Nonvolcanic deep tremor associated with subduction in southwest Japan. Science 296:1679–1681. doi: 10.1126/science.1070378 CrossRefGoogle Scholar Ochi T (2014) Possible long-term SSEs in the Tokai area, central Japan, after 1981: size, duration, and recurrence interval. Paper presented at the AGU Fall Meeting, San Francisco, 15–19 December 2014Google Scholar Ochi T, Kato T (2013) Depth extent of the long-term slow slip event in the Tokai district, central Japan: a new insight. J Geophys Res 118:4847–4860. doi: 10.1002/jgrb.50355 CrossRefGoogle Scholar Ohta Y, Kimata F, Sagiya T (2004) Reexamination of the interplate coupling in the Tokai region, central Japan, based on the GPS data in 1997–2002. Geophys Res Lett 31:L24604. doi: 10.1029/2004GL021404 CrossRefGoogle Scholar Ozawa S, Murakami M, Tada T (2001) Time-dependent inversion study of the slow thrust event in the Nankai trough subduction zone, southwestern Japan. J Geophys Res 106:787–802CrossRefGoogle Scholar Ozawa S, Murakami M, Kaidzu M, Tada T, Sagiya T, Hatanaka Y, Yarai H, Nishimura T (2002) Detection and monitoring of ongoing aseismic slip in the Tokai region, central Japan. Science 298:1009–1012CrossRefGoogle Scholar Ozawa S, Nishimura T, Munekane H, Suito H, Kobayashi T, Tobita M, Imakiire T (2012) Preceding, coseismic, and postseismic slips of the 2011 Tohoku earthquake, Japan. J Geophys Res 117:B07404. doi: 10.1029/2011JB009120 CrossRefGoogle Scholar Ozawa S, Yarai H, Imakiire T, Tobita M (2013) Spatial and temporal evolution of the long-term slow slip in the Bungo Channel, Japan. Earth Planets Space 65:67–73CrossRefGoogle Scholar Rogers G, Dragert H (2003) Episodic tremor and slip on the Cascadia subduction zone: the chatter of silent slip. Science 300:1942–1943CrossRefGoogle Scholar Sella GF, Dixon TH, Mao A (2002) REVEL: a model for recent plate velocities from space geodesy. J Geophys Res 107:ETG11-1–ETG11-30. doi: 10.1029/2000JB000033 CrossRefGoogle Scholar Shelly DR, Beroza GC, Ide S, Nakamula S (2006) Low-frequency earthquakes in Shikoku, Japan and their relationship to episodic tremor and slip. Nature 442:188–191. 
doi: 10.1038/nature04931 CrossRefGoogle Scholar Simon D, Simon DL (2006) Kalman filtering with inequality constraints for turbofan engine health estimation. IEE Proc Control Theory Appl 153:371–378CrossRefGoogle Scholar Sun T, Wang K, Iinuma T, Hino R, He J, Fujimoto H, Kido M, Osada Y, Miura S, Ohta Y, Hu Y (2014) Prevalence of viscoelastic relaxation after the 2011 Tohoku-oki earthquake. Nature 514:84–87. doi: 10.1038/nature13778 CrossRefGoogle Scholar Tanaka Y, Yabe S, Ide S (2015) An estimate of tidal and non-tidal modulations of plate subduction speed in the transient zone in the Tokai district. Earth Planets Space 67:141. doi: 10.1186/s40623-015-0311-2 CrossRefGoogle Scholar Tobita M (2016) Combined logarithmic and exponential function model for fitting postseismic GNSS time series after 2011 Tohoku-Oki earthquake. Earth Planets Space 68:41. doi: 10.1186/s40623-016-0422-4 CrossRefGoogle Scholar Tobita M, Akashi T (2015) Evaluation of forecast performance of postseismic displacements. Rep Coord Comm Earthq Predict 93:393–396 (in Japanese) Google Scholar Toda S, Stein RS, Lin J (2011) Widespread seismicity excitation throughout central Japan following the 2011 M = 9.0 Tohoku earthquake and its interpretation by Coulomb stress transfer. Geophys Res Lett 38:L00G03. doi: 10.1029/2011GL047834 CrossRefGoogle Scholar Yarai H, Ozawa S (2013) Quasi-periodic slow slip events in the afterslip area of the 1996 Hyuga-nada earthquakes, Japan. J Geophys Res Solid Earth 118:2512–2527. doi: 10.1002/jgrb.50161 CrossRefGoogle Scholar © Ozawa et al. 2016 Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 1.Geospatial Information Authority of JapanTsukubaJapan Ozawa, S., Tobita, M. & Yarai, H. Earth Planet Sp (2016) 68: 54. https://doi.org/10.1186/s40623-016-0430-4
Kullback–Leibler vs Kolmogorov–Smirnov distance

I can see that there are a lot of formal differences between the Kullback–Leibler and Kolmogorov–Smirnov distance measures. However, both are used to measure the distance between distributions. Is there a typical situation where one should be used instead of the other? What is the rationale for doing so?

Tags: distributions, distance-functions, kolmogorov-smirnov-test, kullback-leibler

A related question: stats.stackexchange.com/questions/4/… – GaBorgulya

The KL-divergence is typically used in information-theoretic settings, or even Bayesian settings, to measure the information change between distributions before and after applying some inference, for example. It's not a distance in the typical (metric) sense, because of lack of symmetry and triangle inequality, and so it's used in places where the directionality is meaningful. The KS-distance is typically used in the context of a non-parametric test. In fact, I've rarely seen it used as a generic "distance between distributions", where the $\ell_1$ distance, the Jensen-Shannon distance, and other distances are more common. – Suresh Venkatasubramanian

Another use of KL-divergence worth mentioning is in hypothesis testing. Assume $X_1, X_2, \ldots$ are iid from measures with density either $p_0$ or $p_1$. Let $T_n = n^{-1} \sum_{i=1}^n \log( p_1(X_i) / p_0(X_i) )$. By Neyman–Pearson, an optimal test rejects when $T_n$ is large. Now, under $p_0$, $T_n \to -D(p_0 \,\vert\vert\, p_1)$ in probability, and under $p_1$, $T_n \to D(p_1 \,\vert\vert\, p_0)$. Since $D(\cdot \,\vert\vert\, \cdot)$ is nonnegative, the implication is that using the rule $T_n > 0$ to reject $p_0$ is asymptotically perfect. – cardinal

Indeed, that's an excellent example. And in fact the most general versions of the Chernoff-Hoeffding tail bounds use the KL-divergence. – Suresh Venkatasubramanian

@cardinal Is that an alternative to KS testing distribution equality? – Dave

Another way of stating the same thing as the previous answer, in more layman terms: the KL divergence provides a measure of how large the difference between two distributions is. As mentioned in the previous answer, this measure isn't an appropriate distance metric, since it is not symmetric: the distance between distribution A and B is a different value from the distance between distribution B and A. The Kolmogorov–Smirnov test, on the other hand, is an evaluation metric that looks at the greatest separation between the cumulative distribution of a test distribution and that of a reference distribution. In addition, you can use this metric just like a z-score against the Kolmogorov distribution to perform a hypothesis test as to whether the test distribution is the same distribution as the reference. This metric can be used as a distance function, as it is symmetric: the greatest separation between the CDF of A and the CDF of B is the same as the greatest separation between the CDF of B and the CDF of A. – SriK

KL divergence upper bounds the Kolmogorov distance and the total variation, meaning that if two distributions $\mathcal{D}_1, \mathcal{D}_2$ have a small KL divergence, then it follows that $\mathcal{D}_1, \mathcal{D}_2$ have a small total variation and subsequently a small Kolmogorov distance (in that order).
Also check out this paper for more information: https://math.hmc.edu/su/wp-content/uploads/sites/10/2019/06/ON-CHOOSING-AND-BOUNDING-PROBABILITY.pdf – molocule

Welcome to CV. Please add the full reference of the paper in case your link dies in the future.

The KS test and the KL divergence are both used to quantify the difference between two distributions: the KS test is statistics-based, while the KL divergence is information-theory-based. One major difference between them, and the reason KL is more popular in machine learning, is that the formulation of the KL divergence is differentiable, and for solving optimization problems in machine learning we need a differentiable function. In the context of machine learning, KL_dist(P||Q) is often called the information gain achieved if Q is used instead of P.

Link: https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test – Yogesh Saswade
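To make the contrast concrete, here is a small, self-contained Python sketch (added for illustration; it is not from any of the answers above) that computes both quantities on samples from two Gaussians. The histogram-based KL estimate is crude and only meant to show the asymmetry, while the KS statistic is symmetric by construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 5000)   # samples from P
b = rng.normal(0.5, 1.2, 5000)   # samples from Q

# Kolmogorov-Smirnov: largest gap between the two empirical CDFs (symmetric in P, Q)
ks_stat, p_value = stats.ks_2samp(a, b)

# KL divergence: rough estimate from histogram densities (asymmetric in P, Q)
bins = np.linspace(-6.0, 7.0, 101)
p, _ = np.histogram(a, bins=bins, density=True)
q, _ = np.histogram(b, bins=bins, density=True)
eps = 1e-12                                    # avoid division by zero / log(0)
kl_pq = stats.entropy(p + eps, q + eps)        # D(P || Q)
kl_qp = stats.entropy(q + eps, p + eps)        # D(Q || P), generally a different number

print(f"KS = {ks_stat:.3f}, D(P||Q) = {kl_pq:.3f}, D(Q||P) = {kl_qp:.3f}")
```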
Identifying local structural states in atomic imaging by computer vision Nouamane Laanait ORCID: orcid.org/0000-0001-7100-42501,2, Maxim Ziatdinov1,2, Qian He3 & Albina Borisevich1,3 Advanced Structural and Chemical Imaging volume 2, Article number: 14 (2016) Cite this article The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and nonlocal information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect-engineered multilayer graphene surface. A multitude of imaging probes such as scanning transmission electron microscopy (STEM) have reached the requisite spatial resolution, at least in two dimensions, to directly distinguish the individual structural microstate of a material, namely an atom and its local neighbors [1]. In addition, the prevalence of auxiliary information channels such as electron energy-loss spectra acquired at similar spatial resolutions allows one to append to these structural microstates additional chemical/electronic state information [2–4]. The data that emanate from such modalities reveal a wealth of information regarding the static modulation of material properties by local structural deviations [5], competing structural ground states [6], and even dynamic phase transformations or ensuing structural reordering during in situ atomic resolution imaging of materials growth [7]. These imaging modalities are crucial to fundamental investigations of modern materials, which often display a range of structural configurations and order parameter phases. In many cases, some structural phases are not directly discernible by the diffraction-based methods of X-rays and neutron scattering [8, 9] due to either their small volume fraction and/or their lack of long-range periodicity, and therefore require an imaging approach [10, 11] for identification. To identify and classify local structural states and their correlations as resolved by atomic resolution imaging, the traditional language of crystallography with its restrictive assumptions of symmetry and periodicity leaves much to be desired [12]. 
Nevertheless, many successful approaches that extract structural information from atomically resolved data [13, 14] still adopt many of the underlying assumptions of traditional crystallography, through the use of integral transforms such as Fourier transforms (e.g., in geometric phase analysis [15]) and other techniques from harmonic analysis. Such techniques explicitly transform the local spatial information into a space that presupposes the presence of a coherent superposition of components to classify the structural states present in an image. Recent work has taken a different route to identify local structural states by analyzing the intrinsic intensity signatures in atomically resolved images through multivariate statistics [16]. The feature identification method used is strictly local, however, and does not incorporate the information present in neighboring intensity distributions around an atom or defect site. Here, we explore an alternative method to identify and classify local structural states in atomically resolved images that is rooted in a multi-scale extraction and classification of structural states present in an image. The presented approach, in essence, provides a middle ground between structure identification that relies on "single-point" intensities and those that analyze information obtained from an extended region through integral transforms. The underlying assumptions of the presented approach are contextual information and scale invariance. The former implies that the local intensity distribution in the neighborhood of a particular structural state, e.g., atomic coordination surrounding a defect site, is the key measure by which we perform detection of local structural states. Furthermore, to not assume a priori, the spatial extent of these local states our approach should be scale invariant, whereby we would like to detect not only atoms but also clusters of atoms whose intensity distribution becomes more localized at larger length scales in the image (i.e., obtained through progressive down-sampling). Our methodology borrows heavily from techniques developed in the field of computer vision to perform tasks such as pattern recognition, through the use of a scale-invariant feature detectors and descriptors [17]. Following detection, we classify the structural states by a hierarchical clustering strategy [18, 19] using the scale-invariant descriptor associated with each state. We tested the fundamental assumptions of our approach, namely scale invariance and contextual information, by applying it to simulated scanning transmission electron microscopy images of ideal crystals and atomically sharp interfaces between crystals. To explore the utility of this analysis in practice, we performed an extensive quantitative study of the accuracy in detection of local structural states in the presence of instrumental factors such as noise- and material-dependent factors such as low contrast, finding that this approach is robust under common experimental conditions. Finally, we conclude by demonstrating automated extraction and classification of local structural states in STEM images of strained interfaces of SrTiO3/LaCoO3 and local modulations in the electron density of states near defects on graphite surfaces imaged by scanning tunneling microscopy. 
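As a rough illustration of the classification step mentioned above, hierarchical (agglomerative) clustering of descriptor vectors can be set up with standard tools as sketched below. This is not the authors' implementation: the linkage criterion, the number of clusters, and the random placeholder descriptors are all illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# descriptors: one row per detected feature, e.g., an (N, 128) array of SIFT descriptors
descriptors = np.random.rand(60, 128)             # placeholder data for illustration

Z = linkage(descriptors, method="ward")           # build the cluster hierarchy
labels = fcluster(Z, t=4, criterion="maxclust")   # cut it into, e.g., 4 structural classes
print(np.bincount(labels)[1:])                    # number of features per class
```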
In what follows, we restrict our attention to 2-dimensional atomically resolved images with grayscale values, where the image I is defined as a mapping from a 2-dimensional spatial domain x (i.e., pixels) to a strictly positive real number (i.e., intensity): \(I: \mathbf{x} \to \mathbb{R}^{+}\). Feature detection proceeds by locating keypoint features, denoted by Kp(x), in an image I(x), that are extrema of a detector function F(ζ, x), where ζ is a parameter or set of parameters that specify the feature detector. The detector function is an operator that transforms the image locally, and often involves spatial derivatives of the image. A keypoint can then generally be expressed as
$$Kp = \operatorname{argmax}_{\zeta,\,\mathbf{x}} \left( F \circ I \right)(\mathbf{x}) \quad \text{or} \quad \operatorname{argmin}_{\zeta,\,\mathbf{x}} \left( F \circ I \right)(\mathbf{x}). \qquad (1)$$
Numerous feature detection methods that achieve scale invariance have been developed in the field of computer vision [20, 21]. Here, we restricted our attention to the Laplacian of Gaussian (LoG) operator. The latter is one of the most widely used feature detectors and is defined as
$$F(\sigma, \mathbf{x}) = \nabla_{\mathbf{x}}^{2}\, G(\mathbf{x}, \sigma), \qquad (2)$$
where G(·) is a multivariate Gaussian distribution with variance \(\sigma\) and \(\nabla^{2}\) is the Laplacian operator, evaluated in the spatial domain of the image. The LoG operator is efficient at detecting local intensity curvatures in images. Given that atomically resolved images show pronounced local intensity curvatures, we use the LoG throughout as the detector to extract features. As first pointed out by Lindeberg [22, 23], the Laplacian of Gaussian kernel provides a natural way to extract keypoint features that are stable in both the spatial domain of the image and its scale space. The latter is constructed by consecutive blurring (convolution with a Gaussian filter) and down-sampling of the original image I(x) [23]. With additional approximations in regard to the detector function, the construction of a scale space, and search strategies for the extrema in the spatial and scale domains, Lowe constructed a feature extraction and description framework known as the scale-invariant feature transform (SIFT) [24]. SIFT is widely regarded as one of the most effective detector-based feature extraction techniques, with wide-ranging applications from pattern recognition [25] to image registration [20], and it was used throughout this work as the descriptor for a local structural state.

Scale-invariant detection and description of structural states

We used simulated electron microscopy images of bulk SrTiO3 and a SrTiO3/BaTiO3 interface projected along the [100] direction. The images were generated using an implementation of the standard multislice code, with standard imaging conditions of a Nion UltraSTEM 200 operated at 200 kV and an aberration-free probe [26]. The raw simulated images were convolved with a Gaussian probe with a full width at half maximum of 0.7 Å to account for the finite source size of the electron beam. Apart from a global scaling of the intensity, no other preprocessing of the images was performed. This intensity scaling has no effect on the Laplacian of Gaussian detector, since the detector is only sensitive to the local image contrast gradient (Fig. 1a).
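The Fig. 1 caption below notes that the scikit-image implementation of the LoG detector [41] and the OpenCV implementation of SIFT [42] were used. As a concrete illustration, the following is a minimal sketch of how such a detection/description pipeline can be assembled with those libraries; the parameter values (sigma range, contrast threshold, scale-to-patch-size factor) are illustrative assumptions rather than the paper's settings, and `img` stands for any 2-D grayscale array holding an atomically resolved image.

```python
# Minimal sketch: scale-space LoG keypoint detection + fixed-orientation SIFT
# descriptors. Library calls are real; parameter values are illustrative only.
import numpy as np
import cv2
from skimage.feature import blob_log

def detect_structural_states(img, min_sigma=2, max_sigma=16, num_sigma=15, threshold=0.05):
    # 1) Keypoints Kp: extrema of the Laplacian of Gaussian across scale space.
    #    Each row of `blobs` is (row, col, sigma).
    blobs = blob_log(img, min_sigma=min_sigma, max_sigma=max_sigma,
                     num_sigma=num_sigma, threshold=threshold)

    # 2) Descriptors Ds: SIFT descriptors computed at the detected locations/scales.
    #    OpenCV expects an 8-bit image, so rescale the intensities first.
    img8 = (255 * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(np.uint8)
    keypoints = [
        # For a LoG blob at scale sigma the radius is ~sqrt(2)*sigma, so the
        # keypoint "size" (a diameter) is taken as 2*sqrt(2)*sigma (an assumption).
        # Fixing angle=0 mimics the paper's choice of a preferred orientation,
        # i.e. a descriptor that is translation- but not rotation-invariant.
        cv2.KeyPoint(float(c), float(r), float(2 * np.sqrt(2) * s), 0.0)
        for r, c, s in blobs
    ]
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.compute(img8, keypoints)

    # Each structural state S = (Kp, Ds): a keypoint plus its 128-dimensional descriptor.
    return keypoints, descriptors
```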
Fig. 1 Structural states as scale-invariant features. a Simulated STEM images of bulk SrTiO3 and a SrTiO3/BaTiO3 interface, with the electron beam propagating along the [100] crystallographic direction. Images are convolved with a Gaussian function with a FWHM of 0.7 Å to account for the finite source size of the electron beam. b Features extracted by the Laplacian of Gaussian detector are shown as an overlay of circles on the images in a. The intensity scale was inverted to improve visibility. The size of each circle indicates the scale at which the feature was detected. For simplicity in the ensuing analyses, the contrast threshold of the LoG is tuned so that oxygen columns in the right image in b are not detected (see Additional file 1 for all atomic columns). c Close-up of the left image in b, indicating both the keypoint, Kp, which describes the atom locally, and the descriptor vectors, Ds, which encode the intensity distribution of neighboring columns to provide a nonlocal description of the column. Descriptors for the different atomic columns are shown as 1-dimensional vectors, indicating that columns with the same intensity can have different descriptors due to the different angular configuration of their neighboring atoms. The structural state, in this case an atomic column, is then defined by the pair (Kp, Ds). The implementations of the LoG detector in the Python scikit-image library [41] and of SIFT in OpenCV [42] were used throughout.

The atoms detected in each image by the LoG are indicated by circles. The size of each circle is proportional to the scale (i.e., σ) at which the feature was found to be an extremum of the LoG operator (Fig. 1b). Note that while the oxygen columns in bulk SrTiO3 are not clearly evident in Fig. 1a, due to their low intensity relative to Sr and Ti, they are readily detected by the LoG, albeit at a smaller scale than either the Sr or Ti columns. The detected feature is commonly referred to as a keypoint, Kp, in computer vision. Associated with each keypoint are the coordinates of the feature (x, y) and its scale (Fig. 1c), as well as other properties that we do not make use of in this work. Given a particular Kp, we use the scale-invariant feature transform to compute a descriptor, Ds. The descriptor is centered on Kp(x, y) and encodes the intensity distributions around that feature (Fig. 1c). Both the spatial extent of Ds and the intensities it contains are sampled from the spatial domain of the image, but at the appropriate scale. Consequently, the image patch from which Ds is extracted (16 × 16 pixels centered on Kp(x, y)) varies in size with respect to the spatial domain of the original image. The SIFT descriptor is composed of intensity gradient magnitudes and orientations that are appropriately weighted to decrease their contribution to the descriptor as a function of their distance from Kp(x, y) [24]. Furthermore, the intensity values in Ds are transformed to a local frame of reference, i.e., with respect to Kp(x, y). The latter provides a description of the feature that is rotation invariant and has reduced sensitivity to global changes in imaging conditions such as illumination [17, 27]. The resultant SIFT descriptor is a 128-dimensional unit vector and is shown in Fig. 1c in vector format for the different detected columns in Fig. 1b. In this work, we modified the SIFT descriptor by intentionally breaking its rotational invariance through the choice of a preferred orientation angle of the Ds image patch (0°, defined with respect to the x-axis of the image) (see Fig. 1c).
This modification leads to a minimalistic descriptor that is only translation invariant and does not incorporate other symmetry assumptions. Consequently, Ds provides a distinct description of intensity gradients that is dissimilar for atomic columns such as O1 and O2, despite their having identical local intensities, since their neighboring columns (Sr, Ti) occur in a different angular arrangement. Given Kp and Ds, we then define a structural state,
$$S = \left( Kp, \mathbf{Ds} \right), \qquad (3)$$
as a pair composed of a keypoint, which gives a local description of the image intensity, and Ds, which provides a nonlocal description of the neighboring intensity gradients. This description of a structural state, such as an atomic column, is both scale invariant and context dependent.

Noise and contrast behavior of structural state detection

We assumed that the imaging is free from all geometric distortions due to scanning of the electron probe, and focused on testing the robustness of the above formulation at different noise levels and local contrast values. Each simulated STEM image (Fig. 1a) is altered with noise that is sampled from a Poisson distribution and added in a linear convex fashion to the ideal image, with the noise level given by λ. The accuracy of the atomic column detection as a function of λ is calculated by direct comparison to the ideal case (i.e., λ = 0, accuracy = 1). Furthermore, in the case of bulk SrTiO3, we split the accuracy into two classes depending on the local contrast of the detected atoms. We found that the detection accuracy of Sr and Ti atoms fluctuates around 0.85 (±0.06) for λ ≤ 0.4, and falls off precipitously for λ > 0.4. As expected, local intensity fluctuations affect the detection of Ti atoms first, as shown in Fig. 2a. The detection accuracy of O atoms, on the other hand, becomes unreliable for noise levels exceeding even 0.05, due to their low contrast values (<0.05). Such behavior is well known in experimental Z-contrast STEM images [28], where oxygen columns, while in principle resolvable, are often not detectable due to their weak Rutherford cross-sections relative to heavier atoms and the finite dynamic range of the detector. The detection accuracy of Sr, Ti, and Ba columns in the simulated image of SrTiO3/BaTiO3 behaves, as a function of noise level, in a manner analogous to that of simulated bulk SrTiO3. Robust image de-noising strategies can, of course, be employed in practice to increase the accuracy of atomic column detection by the LoG detector, but this was not done here, as de-noising constitutes a separate problem from the focus of this paper and is well covered in both the electron microscopy and image recognition literature.
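A sketch of this noise study follows, under one plausible reading of the "linear convex" mixing (noisy = (1 − λ)·ideal + λ·noise, with the Poisson sample rescaled to the image's intensity range); the matching tolerance between noisy and ideal keypoints is likewise an assumption, and `detect_structural_states` refers to the earlier detection sketch.

```python
# Sketch of the noise-robustness test. The noise scaling and the tolerance `tol`
# are assumptions, not values taken from the paper.
import numpy as np
from scipy.spatial import cKDTree

def add_poisson_noise(img, noise_level, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.poisson(lam=1.0, size=img.shape).astype(float)
    noise *= img.max() / max(noise.max(), 1e-12)          # rescale noise to the image range
    return (1.0 - noise_level) * img + noise_level * noise

def detection_accuracy(ideal_blobs, noisy_blobs, tol=3.0):
    """Fraction of ideal (lambda = 0) keypoints recovered within `tol` pixels."""
    if len(noisy_blobs) == 0:
        return 0.0
    tree = cKDTree(noisy_blobs[:, :2])                     # (row, col) of noisy detections
    dists, _ = tree.query(ideal_blobs[:, :2])
    return float(np.mean(dists <= tol))
```

For each λ, the accuracy is then the fraction of λ = 0 detections that survive in the noisy image.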
Fig. 2 Atomic column detection in the presence of noise and low contrast. The accuracy of atom detection is analyzed as a function of the noise level, λ. The noise, sampled from a Poisson distribution, is added to the simulated STEM images of bulk SrTiO3 (a) and SrTiO3/BaTiO3 (b). The accuracy is computed by comparing the detected features Kp(x, y) at some \(\lambda \ne 0\) to those in the ideal images (λ = 0). To demonstrate the dependence of atom detection on the contrast of the atomic column, the detection accuracy of O columns and of "Sr + Ti" columns is calculated and shown separately in (a). The images indicate the detected atomic columns at the corresponding noise levels. For ease of comparison with experimental images, the noise level shown here is defined as a fraction of the highest scattered intensity (from an atomic column) present in the simulated image. Due to the projective nature of STEM imaging, Ti columns are in fact not pure Ti columns but mixed Ti and O columns.

The primary reason for the reduced accuracy in detected atomic columns is the delocalization of their response to the LoG kernel in scale space [29]. Note, however, that the LoG detector has a strong response to features near the edges of an image, which, in practice, can lead to an overestimation of the detection accuracy. From the above analysis, we conclude that for noise levels \(\lambda < 0.4\,C_{\max}\), where \(C_{\max}\) is the maximum image contrast of the structural feature of interest, the presented approach can produce a meaningful and robust detection. An additional aspect of the LoG worth mentioning is that the presence of other instrumental factors, such as blurring, only affects the scale at which a feature is detected, but not the accuracy of the LoG detector. Finally, we emphasize that the LoG searches for both maxima and minima in the local imaging contrast as a function of scale, and therefore it can be used to detect missing atoms, or be used in imaging modes such as bright-field imaging, where atomic columns are represented by image minima. In such an instance, its detection robustness will be affected by the presence of noise in a manner similar to the above analysis.

Structural state classification

The definition of a structural state in Eq. 3 allows us to classify the different detected atomic columns to find the main structural classes present in a particular image. Numerous methods exist to perform these classification tasks. Here, we focus on unsupervised machine learning to explore the effectiveness of the presented approach in "learning" the overall structural configuration of a material. To that end, we use hierarchical agglomerative clustering. In agglomerative clustering, each structural state S is initially considered to belong to a distinct class \(\mathcal{C}_i\). Following this initial assignment, different classes \(\mathcal{C}_i\) and \(\mathcal{C}_j\) are merged into a new class \(\mathcal{C}_k\) if their respective members (i.e., structural states) are similar, given some notion of similarity, g. In our case, the similarity (or affinity) measure between two structural states, S_i and S_j, is naturally defined by the (Euclidean) distance between their respective descriptors, Ds_i and Ds_j,
$$g\left( S_i, S_j \right) = \left\lVert \mathbf{Ds}_i - \mathbf{Ds}_j \right\rVert_2, \qquad (4)$$
and is used to merge the different structural classes. Different methods, known as linkage methods, apply the similarity measure to the classes in a specific way. We use the average linkage method, which uses the average similarity between classes:
$$\bar{g}\left( \mathcal{C}, \mathcal{D} \right) = \frac{1}{N_{\mathcal{C}} N_{\mathcal{D}}} \sum_{i \in \mathcal{C}} \sum_{j \in \mathcal{D}} g(i, j), \qquad (5)$$
where \(N_{\mathcal{C}}\) (\(N_{\mathcal{D}}\)) is the number of structural states belonging to class \(\mathcal{C}\) (\(\mathcal{D}\)).
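A short sketch of this classification step, assuming `descriptors` is the (N, 128) array produced by the detection sketch; scikit-learn's agglomerative clustering is used here purely for illustration, and the silhouette scan over the number of clusters is one plausible stand-in for the truncation criterion described in the next paragraph (and detailed in Additional file 1).

```python
# Average-linkage agglomerative clustering of SIFT descriptors with a Euclidean
# affinity, as in Eqs. 4-5. The silhouette scan is an illustrative model-selection choice.
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def classify_structural_states(descriptors, max_classes=10):
    best_labels, best_score = None, -1.0
    for k in range(2, max_classes + 1):
        model = AgglomerativeClustering(n_clusters=k, metric="euclidean",
                                        linkage="average")   # use affinity= on older scikit-learn
        labels = model.fit_predict(descriptors)
        score = silhouette_score(descriptors, labels)
        if score > best_score:
            best_labels, best_score = labels, score
    return best_labels, best_score
```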
With \(\bar{g}\) as the similarity measure, agglomerative clustering results in a classification that groups structural states into relatively compact classes that are well separated [30]. The only remaining parameter that must be specified to perform the hierarchical clustering of structural states is the level at which we truncate the merging procedure. This was determined by a statistical measure that optimizes the similarity between structural states belonging to the same class (see Additional file 1 for additional details and an illustration of this analysis for the classification used here, Additional file 1: Fig. S2). The results of the classification of atomic columns in the SrTiO3 and SrTiO3/BaTiO3 images (shown in Fig. 1b) using agglomerative clustering at various noise levels are shown in Fig. 3, where each structural class is represented by a different color coding. For bulk SrTiO3, we find that the classification clearly distinguishes between the different atomic columns in the unit cell. Note that although the O1 and O2 oxygen columns have identical imaging intensities and are equivalent under the rotational symmetry of SrTiO3 (P2 mm), they are grouped into different clusters, since their descriptors are not rotationally invariant, as discussed above.

Fig. 3 Classification of local structural states. Detected atomic columns at different noise levels in simulated STEM images are classified by hierarchical clustering, with different structural classes represented by circles of different colors. The different atomic columns in the [100] projection of the SrTiO3 unit cell are all classified as distinct structural states by the presented approach (a). In the presence of noise, the distinction between Sr and Ti atomic columns is still maintained (b). Note that Sr atoms at the edge of the image belong to separate classes, since their coordination is different from that of Sr atoms in the "bulk". c Classification of atoms in the image of a SrTiO3/BaTiO3 interface distinguishes the interfacial atoms (Sr, Ti, Ba) from those present in the bulk phases, and provides a complete description of the structural configurations present in the image.

We found that even in the presence of large noise levels (λ = 0.75), columns of different types (Sr, Ti) are still classified separately, giving good evidence of the robustness of Eq. 3 in the presence of noise. In the case of SrTiO3/BaTiO3, a complete classification of the unit-cell configurations is achieved, with Ti columns in bulk STO, at the interface, and in bulk BTO grouped as distinct states. Similar results are also obtained for the Sr and Ba atomic columns. These observations are crucial evidence that the definition of an atomic column given in Eq. 3 does encapsulate the local coordination environment necessary to discriminate between different structural states, and they further reinforce the utility of formulating a structural state as a combination of local and nonlocal image intensities.

Strained interfaces and defects

We illustrate the utility of structural state extraction and classification in experimental images with two case studies from some of the most widely used atomic imaging modalities, namely scanning transmission electron microscopy data of interfaces in heteroepitaxial systems and scanning tunneling microscopy (STM) data of defect states found on the surface of graphite.
In studies of solid/solid interfaces, in particular those that originate through epitaxial growth, characterizing the structural nature of the interface is crucial to tailoring the material's properties. For instance, solid/solid interfaces are often the starting point of extended defects, such as misfit dislocations, that arise to compensate epitaxial strain and lead to elastic fields propagating in both directions from the interface, substantially modifying its crystal structure and potentially its properties. It has also been demonstrated that, even in the case of coherent epitaxy, the different symmetries of the film and substrate can result in a progression of distinct structural states localized in the vicinity of the interface [31]. In all these instances, it is crucial to precisely extract and identify the local structural states present at interfaces. We applied the presented approach to a Z-contrast STEM image of a SrTiO3 (STO)/LaCoO3 (LCO) interface. This image was acquired using a Nion UltraSTEM 100 operated at 100 kV (Fig. 4a).

Fig. 4 a HAADF-STEM image of a LaCoO3/SrTiO3 interface. The color scale is in normalized intensity. b Classified structural states clearly highlight the diffuse nature of the interface, with each boxed region outlining a particular structural configuration: 1 bulk LaCoO3, 2 interfacial LaCoO3, 3 distorted column of Co atoms, 4 interfacial SrTiO3, 5 bulk SrTiO3.

The classification of structural states leads to a succinct representation of the evolution of structural states at the interface, as represented by classes of LaCoO3 unit cells (region 1 in Fig. 4b) that are clearly distinct from their bulk phase (region 2). By capturing these structural deviations in LCO that span multiple unit cells, our approach provides an automated and unsupervised technique for determining the extent of the interfacial structural states (compare Fig. 4b to Fig. 3c). Large structural distortions in a column of Co atoms (region 3) are also singled out by the classification as a distinct structural state; they represent the elastic effects that originate at a defect at this incoherent interface and propagate far into the bulk phase. For STO, two atomic planes were identified as structural classes (region 4) separate from the bulk (region 5). Given this classification of the atomic columns in this system, additional properties (e.g., displacements of Co with respect to the center of the LCO unit cell) can then be readily computed for a structural class and compared to others to fully characterize the nature of the interface in this system.

Next, we extract and classify structural states that arise due to point-like defects on a graphite surface. Point defects such as monovacancies, adsorbed atoms, interstitials, and Stone–Wales defects are known to strongly affect the electronic and magnetic properties of graphene layers [32]. Recently, it was realized that the electronic structure of an atomic vacancy is highly sensitive to the details of the passivation of its dangling σ bonds with foreign chemical species, such as hydrogen and oxygen [33]. Here, we focus on the so-called V111 type of monovacancy–hydrogen complexes [33, 34]. The V111 complex, in which each dangling σ bond is passivated with one hydrogen atom, is characterized by the formation of a localized nonbonding π electronic state at the Fermi level [34], whose decay into the "clean" area of the lattice can be described by an r⁻² law [35].
To date, studies of monovacancy–hydrogen complexes (as well as other types of point defects) in graphene-like materials have been limited to either single-layer structures or the AB (Bernal)-stacked structure. On the other hand, a rotation of graphene layers with respect to each other, particularly in the case of low twist angles (below 10°), may result in an alteration of the system's low-energy electronic structure, such as a reduction of the Fermi velocity and an associated localization of charge carriers [36, 37], which may in turn alter the electronic and magnetic properties of the vacancy. Below, we analyze scanning tunneling microscopy (STM) data on hydrogen-passivated single atomic vacancies of the V111 type in the topmost graphene layer of graphite, which is rotated relative to the underlying layer(s). Figure 5a shows an STM image of the topmost graphene layer of graphite that features a well-defined Moiré pattern and is peppered with monovacancy–hydrogen complexes of the V111 type. The V111 complexes were prepared by sputtering the surface of a graphite sample with low-energy Ar+ ions, followed by exposure to an atomic hydrogen environment and annealing. The choice of experimental parameters was the same as reported in the study of V111 complexes in Ref. [34]. The structural states extracted and classified by our methodology are shown in Fig. 5b. First, note that the "edge" atoms around the vacancy produce a strongly nonequivalent response in terms of the corresponding local intensity of the STM signal (see inset to Fig. 5a). Given that the STM signal is a convolution of topographic and electronic features, this inequivalence may reflect out-of-plane structural distortions at the vacancy site. Our analysis allows the extraction of detailed information on the distribution of the vacancy's nonbonding state for each V111 complex (associated with the magenta, green, and orange circles, e.g., region 1 in Fig. 5b). In particular, we found that the distribution of the STM signal associated with the vacancy's nonbonding state (i) does not follow the threefold symmetry of the underlying atomic lattice, which can be related either to the aforementioned structural distortions or to the rotation of the topmost graphene layer (nonzero twist angle), and (ii) appears to propagate in a manner that is sensitive to the relative position of the vacancy with respect to the Moiré spots on the surface. To confirm the latter, our analysis must be carried out on a larger set of STM images and sample conditions, which is beyond the scope of this article. Nonetheless, the efficient extraction and classification of structural states associated with the monovacancy–hydrogen complexes represents a crucial first step in a more systematic study of modulating the electronic configurations of graphene through point defects.

Fig. 5 Scanning tunneling microscopy of defects on graphite. The image was acquired with a sample bias voltage of 100 mV and a tunneling current setpoint of 0.7 nA. a The defects (box outline) are monovacancy–hydrogen complexes generated through Ar+ ion bombardment, followed by annealing in a hydrogen environment. These defects modulate the local electronic density of states in their immediate vicinity, as shown in the inset. The color bar is in normalized intensity.
b Extraction and classification of atoms singles out select edge atoms surrounding a monovacancy–hydrogen complex (e.g., outlined region 1) as being distinct from the rest of the atoms in the system, with the different structural classes encoded by unique color labels.

A key ingredient in the success of the computer vision-based analysis of local structural states resides in the definition of a structural state that combines both local and nonlocal image intensity distributions, in contrast with previous methods that rely on single-point intensities [14, 16]. For instance, a single-point intensity method would not differentiate between the Ti columns present in bulk BaTiO3, Ti columns in bulk SrTiO3, and those at the interface of STO/BTO, since they all have indistinguishable intensity values, spatial separations, and angles with respect to their neighboring atoms, and differ only in the type of atoms that constitute their coordination (Fig. 3c). The latter is a direct consequence of the definition of a structural state given in Eq. 3, whereby intensity gradients in a neighborhood around Kp are encoded in Ds, with the size of this neighborhood given directly by the scale at which the keypoint was found to be an extremum of the Laplacian of Gaussian detector. Another illustrative example of the advantages of the present approach is the detection of a range of distinct classes in the local configuration of (La, Co) columns at the interface of LCO/STO, which clearly reflect the strained nature of the latter. Given the success of our approach in detecting these subtle variations in the structure of materials, it would be interesting to explore in future work whether one can reconstruct the fundamental ingredients of the lattice and unit cell directly from the more primitive definition of a structural state in Eq. 3, which relies solely on real-space image information and the concept of scale invariance, without relying on a priori knowledge of the average crystallographic symmetry. The classification procedure used here, namely hierarchical clustering, enabled a physically meaningful categorization of structural states in a number of cases, both for simulated and experimental data. This unsupervised learning approach, however, lacks a clear connection to the physics of the problem. In many contexts, one often seeks the identification/classification of local structural states subject to well-defined physical principles such as spatial connectivity, or localization due to the presence of interfaces, defects, etc. Under these conditions, one can supplement hierarchical clustering with connectivity constraints to generate structural classes that obey a set of physical assumptions. In essence, this allows one to test different physical hypotheses regarding the local structure present in the system at hand. We have shown that representing a structural state with computer vision-based descriptors that are efficient at encoding image information leads to an analysis approach that can discriminate between the myriad local states in the presented data, across vastly different imaging modalities. The preponderance of atomically resolved images, both in the literature and in open databases, provides an opportunity to begin data exploration of local structural states that are shared by a variety of materials and their evolution under varying experimental conditions.
The SIFT descriptor, with its scale invariance, could provide one promising method by which one can fingerprint local structural states of interest to perform structural recognition against the above databases. Furthermore, the structural identification we presented could also be used to identify recurring artifacts in atomically resolved imaging, such as dynamic scattering and electron beam channeling [38], by comparing local state descriptors obtained from a library of simulated images, for instance as a function of thickness, to local descriptors extracted from experimental data. Modern imaging modalities such as STEM are hyperspectral in nature, where in addition to atomic-resolution images (by Z-contrast), a full electron energy-loss spectrum can be acquired. In the case of STM, tunneling spectroscopy can be performed to measure the full electronic density of states. As such, incorporating this additional information into the feature detection/description method is an important task that should be explored in future work [39], to construct descriptors that are more physics based, thereby taking full advantage of all the information present in modern imaging modalities. This would benefit, in particular, atomic imaging modalities, such as atom probe tomography, that provide a full three-dimensional view of a material's structure [40]. In summary, we have explored a novel approach by which one can detect, identify, and classify local structural states in spatially resolved atomic images. We showed that the principles of scale invariance and contextual structural state identification, defined on the basis of neighboring intensity distributions, give an efficient and discriminative approach by which one can extract and identify local states without assumptions of symmetry, and we illustrated the application of this method to simulated and experimental images from electron microscopy and scanning tunneling microscopy. Moreover, we showed that the more primitive concept of a structural state is sufficient to extract the salient structural configurations present in atomic imaging of materials. We foresee that our approach may provide a natural and powerful method by which one can express more complex structural correlations, such as those present in frustrated and disordered systems, correlations that may lie obscured by the rigid assumptions of classical crystallography in two dimensions.

Abbreviations: STEM: scanning transmission electron microscopy; STM: scanning tunneling microscopy; SIFT: scale-invariant feature transform; LoG: Laplacian of Gaussian; STO: SrTiO3; LCO: LaCoO3; BTO: BaTiO3

References
1. Pennycook, S.J., Kalinin, S.V.: Microscopy: hasten high resolution. Nature 515, 487–488 (2014)
2. Zhou, W., et al.: Direct determination of the chemical bonding of individual impurities in graphene. Phys. Rev. Lett. 109(20), 206803 (2012)
3. Krivanek, O.L., et al.: Atom-by-atom structural and chemical analysis by annular dark-field electron microscopy. Nature 464(7288), 571–574 (2010)
4. Erni, R., et al.: Atomic-resolution imaging with a sub-50-pm electron probe. Phys. Rev. Lett. 102(9), 096101 (2009)
5. Kim, Y.M., et al.: Probing oxygen vacancy concentration and homogeneity in solid-oxide fuel-cell cathode materials on the subunit-cell level. Nat. Mater. 11(10), 888–894 (2012)
6. Catalan, G., et al.: Flexoelectric rotation of polarization in ferroelectric thin films. Nat. Mater. 10(12), 963–967 (2011)
7. Nagao, K., et al.: Experimental observation of quasicrystal growth. Phys. Rev. Lett. 115(7), 075501 (2015)
8. Als-Nielsen, J., McMorrow, D.: Elements of Modern X-ray Physics, 2nd edn. Wiley, Hoboken (2011)
9. Cross, J.O., et al.: Materials characterization and the evolution of materials. MRS Bull. 40(12), 1019–1033 (2015)
10. Laanait, N., et al.: Full-field X-ray reflection microscopy of epitaxial thin-films. J. Synchrotron Radiat. 21(6), 1252–1261 (2014)
11. Holt, M., et al.: Nanoscale hard X-ray microscopy methods for materials studies. Ann. Rev. Mater. Res. 43(1), 183–211 (2013)
12. Keen, D.A., Goodwin, A.L.: The crystallography of correlated disorder. Nature 521(7552), 303–309 (2015)
13. Borisevich, A.Y., et al.: Suppression of octahedral tilts and associated changes in electronic properties at epitaxial oxide heterostructure interfaces. Phys. Rev. Lett. 105(8), 087204 (2010)
14. Gai, Z., et al.: Chemically induced Jahn–Teller ordering on manganite surfaces. Nat. Commun. 5, 4528 (2014)
15. Hytch, M.J., Snoeck, E., Kilaas, R.: Quantitative measurement of displacement and strain fields from HREM micrographs. Ultramicroscopy 74(3), 131–146 (1998)
16. Belianinov, A., et al.: Identification of phases, symmetries and defects through local crystallography. Nat. Commun. 6, 7801 (2015)
17. Szeliski, R.: Computer Vision—Algorithms and Applications. Springer, London (2011)
18. Bishop, C.: Pattern Recognition and Machine Learning. Springer, Heidelberg (2006)
19. Ward, J.H.: Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 58(301), 236–244 (1963)
20. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 27(10), 1615–1630 (2005)
21. Triggs, B.: Detecting keypoints with stable position, orientation, and scale under illumination changes. In: Eighth European Conference on Computer Vision, Prague (2004)
22. Lindeberg, T.: Scale-space theory: a basic tool for analysing structures at different scales. J. Appl. Stat. 21(2), 224–270 (1994)
23. Burt, P.J., Adelson, E.H.: The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 31(4), 532–540 (1983)
24. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91–110 (2004)
25. Obdržálek, Š., Matas, J.: Object recognition using local affine frames on maximally stable extremal regions. In: Ponce, J. (ed.) Toward Category-Level Object Recognition. Springer, New York (2006)
26. Kirkland, E.J.: Advanced Computing in Electron Microscopy. Plenum Press, New York (1998)
27. McLachlan, G., Peel, D.: Finite Mixture Models. Wiley Series in Probability and Statistics. Wiley, Hoboken (2000)
28. Pennycook, S.J.: Z-contrast transmission electron microscopy—direct atomic imaging of materials. Ann. Rev. Mater. Sci. 22, 171–195 (1992)
29. Rublee, E., et al.: ORB: an efficient alternative to SIFT or SURF. In: 2011 IEEE International Conference on Computer Vision (ICCV) (2011)
30. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer Science+Business Media, New York (2009)
31. He, Q., et al.: Towards 3D mapping of BO6 octahedron rotations at perovskite heterointerfaces, unit cell by unit cell. ACS Nano 9(8), 8412–8419 (2015)
32. Terrones, H., et al.: The role of defects and doping in 2D graphene sheets and 1D nanoribbons. Rep. Prog. Phys. 75(6), 062501 (2012)
33. Fujii, S., et al.: Role of edge geometry and chemistry in the electronic properties of graphene nanostructures. Faraday Discuss. 173, 173–199 (2014)
34. Ziatdinov, M., et al.: Direct imaging of monovacancy–hydrogen complexes in a single graphitic layer. Phys. Rev. B 89(15), 155405 (2014)
35. Ugeda, M.M., et al.: Missing atom as a source of carbon magnetism. Phys. Rev. Lett. 104(9), 096804 (2010)
36. Bistritzer, R., MacDonald, A.H.: Moiré bands in twisted double-layer graphene. Proc. Natl. Acad. Sci. 108(30), 12233–12237 (2011)
37. Trambly de Laissardière, G., Mayou, D., Magaud, L.: Localization of Dirac electrons in rotated graphene bilayers. Nano Lett. 10(3), 804–808 (2010)
38. Loane, R.F., Xu, P., Silcox, J.: Incoherent imaging of zone axis crystals with ADF STEM. Ultramicroscopy 40(2), 121–138 (1992)
39. Brown, M., Hua, G., Winder, S.: Discriminative learning of local image descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 33(1), 43–57 (2011)
40. Amouyal, Y., Schmitz, G.: Atom probe tomography—a cornerstone in materials characterization. MRS Bull. 41, 13 (2016)
41. van der Walt, S., et al.: scikit-image: image processing in Python. PeerJ 2, e453 (2014)
42. Bradski, G., Kaehler, A.: Learning OpenCV: Computer Vision in C++ with the OpenCV Library. O'Reilly Media (2013)

Author contributions: NL conceived and designed the research and performed the analysis. QH and AB performed the simulations and collected the STEM data. MZ collected the STM data. NL and AB wrote the manuscript with contributions from MZ. All authors read and approved the final manuscript.

Acknowledgments: NL thanks Sergei V. Kalinin for insightful discussions and for bringing his attention to this research topic. This work was supported by the Eugene P. Wigner Fellowship (NL) at Oak Ridge National Laboratory (ORNL), a US Department of Energy (DOE) facility managed by UT-Battelle, LLC for the US DOE Office of Science under Contract No. DE-AC05-00OR22725. Data analysis was performed at the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility at ORNL. Electron microscopy imaging and simulations (AB, QH) were supported by the Materials Science and Engineering Division of the US DOE Office of Science. MZ acknowledges support from the Materials Science and Engineering Division of the US DOE Office of Science.

Author affiliations: 1. Institute for Functional Imaging of Materials, Oak Ridge, TN 37831, USA (Nouamane Laanait, Maxim Ziatdinov, Albina Borisevich). 2. Center for Nanophase Materials Sciences, Oak Ridge, TN 37831, USA (Nouamane Laanait, Maxim Ziatdinov). 3. Materials Sciences and Technology Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA (Qian He, Albina Borisevich). Correspondence to Nouamane Laanait.

Additional file 1 (40679_2016_28_MOESM1_ESM.pdf): Figure S1. Detected keypoints from all atomic columns present in the simulated STEM image of a SrTiO3/BaTiO3 interface. In the main text, only Sr, Ti, and Ba columns are included to simplify the analysis of noise dependency and classification. The addition of the detected oxygen columns shown in Fig. S1 does not modify the results in the main text. Figure S2. Silhouette coefficient analysis of the classification, shown here for the simulated STEM image of a bulk SrTiO3 lattice. Top: a plot of the silhouette coefficient for different numbers of clusters. Bottom: the 4 structural classes (Sr, Ti, O1, O2).

Laanait, N., Ziatdinov, M., He, Q., Borisevich, A.: Identifying local structural states in atomic imaging by computer vision. Adv. Struct. Chem. Imag. 2, 14 (2016). doi:10.1186/s40679-016-0028-8

Keywords: unsupervised machine learning
Discrete Mathematics/Set theory

Set Theory starts very simply: it examines whether an object belongs, or does not belong, to a set of objects which has been described in some non-ambiguous way. From this simple beginning, an increasingly complex (and useful!) series of ideas can be developed, which lead to notations and techniques with many varied applications.

Definition: Set
A set can be defined as an unordered collection of entities that are related because they obey a certain rule. This definition may sound very vague. 'Entities' may be anything, literally: numbers, people, shapes, cities, bits of text, ... etc. The key fact about the 'rule' they all obey is that it must be well-defined. In other words, it must describe clearly what the entities obey. If the entities we're talking about are words, for example, a well-defined rule is: 'X is English'. A rule which is not well-defined (and therefore couldn't be used to define a set) might be: 'X is hard to spell', where X is any word.

Elements
An entity that belongs to a given set is called an element of that set. For example, Henry VIII is an element of the set of Kings of England: Henry VIII ∈ {kings of England}.

Set Notation
To list the elements of a set, we enclose them in curly brackets, separated by commas. For example: {-3, -2, -1, 0, 1, 2, 3}. The elements of a set may also be described verbally: {integers between -3 and 3 inclusive}. The set builder notation may be used to describe sets that are too tedious to list explicitly. To denote an arbitrary element of a set, we use a letter such as x: {x | x is an integer and |x| < 4}, or equivalently {x | x ∈ ℤ, |x| < 4}. The symbols ∈ and ∉ denote the inclusion and exclusion of an element, respectively: dog ∈ {quadrupeds}; Washington DC ∉ {European capital cities}. Sets can contain an infinite number of elements, such as the set of prime numbers: {2, 3, 5, 7, 11, ...}. Ellipses are used to denote the infinite continuation of a pattern. Note that the use of ellipses may cause ambiguities; the set above might be taken as the set of integers indivisible by 4, for example. Sets will usually be denoted using upper case letters: A, B, ...
Elements will usually be denoted using lower case letters: x, y, ...

Special Sets

The universal set
The set of all the entities in the current context is called the universal set, or simply the universe. It is denoted by U. The context may be a homework exercise, for example, where the universal set is limited to the particular entities under its consideration. Also, it may be any arbitrary problem where we clearly know where it applies.

The empty set
The set containing no elements at all is called the null set, or empty set. It is denoted by a pair of empty braces: { } or by the symbol ∅. It may seem odd to define a set that contains no elements. Bear in mind, however, that one may be looking for solutions to a problem where it isn't clear at the outset whether or not such solutions even exist. If it turns out that there isn't a solution, then the set of solutions is empty. For example: if U = {words in the English language}, then {words with more than 50 letters} = ∅. If U = {whole numbers}, then {x | x² = 10} = ∅.

Operations on the empty set
Operations performed on the empty set (as a set of things to be operated upon) can also be confusing. (Such operations are nullary operations.) For example, the sum of the elements of the empty set is zero, but the product of the elements of the empty set is one (see empty product). This may seem odd, since there are no elements of the empty set, so how could it matter whether they are added or multiplied (since "they" do not exist)? Ultimately, the results of these operations say more about the operation in question than about the empty set. For instance, notice that zero is the identity element for addition, and one is the identity element for multiplication.

Special numerical sets
Several sets are used so often that they are given special symbols.

Natural numbers
The 'counting' numbers (or whole numbers) starting at 1 are called the natural numbers. This set is sometimes denoted by N. So N = {1, 2, 3, ...}. Note that, when we write this set by hand, we can't write in bold type, so we write an N in blackboard bold font: ℕ.

Integers
All whole numbers, positive, negative and zero form the set of integers. It is sometimes denoted by Z. So Z = {..., -3, -2, -1, 0, 1, 2, 3, ...}. In blackboard bold, it looks like this: ℤ.

Real numbers
If we expand the set of integers to include all decimal numbers, we form the set of real numbers. The set of reals is sometimes denoted by R. A real number may have a finite number of digits after the decimal point (e.g. 3.625), or an infinite number of decimal digits. In the case of an infinite number of digits, these digits may recur (e.g. 8.127127127...), or they may not recur (e.g. 3.141592653...). In blackboard bold: ℝ.

Rational numbers
Those real numbers whose decimal digits are finite in number, or which recur, are called rational numbers. The set of rationals is sometimes denoted by the letter Q.
A rational number can always be written as an exact fraction p/q, where p and q are integers. If q equals 1, the fraction is just the integer p. Note that q may NOT equal zero, as the value is then undefined. For example: 0.5, -17, 2/17, 82.01, 3.282828... are all rational numbers. In blackboard bold: ℚ.

Irrational numbers
If a number can't be represented exactly by a fraction p/q, it is said to be irrational. Examples include: √2, √3, π.

Set Theory Exercise 1
Click the link for Set Theory Exercise 1.

Relationships between Sets
We'll now look at various ways in which sets may be related to one another.

Equality
Two sets A and B are said to be equal if and only if they have exactly the same elements. In this case, we simply write A = B. Note two further facts about equal sets: the order in which elements are listed does not matter; and if an element is listed more than once, any repeat occurrences are ignored. So, for example, the following sets are all equal: {1, 2, 3} = {3, 2, 1} = {1, 1, 2, 3, 2, 2}. (You may wonder why one would ever come to write a set like {1, 1, 2, 3, 2, 2}. You may recall that when we defined the empty set we noted that there may be no solutions to a particular problem - hence the need for an empty set. Well, here we may be trying several different approaches to solving a problem, some of which in fact lead us to the same solution. When we come to consider the distinct solutions, however, any such repetitions would be ignored.)

Subsets
If all the elements of a set A are also elements of a set B, then we say that A is a subset of B, and we write A ⊆ B. In the examples below: if T = {2, 4, 6, 8, 10} and E = {even integers}, then T ⊆ E. If A = {alphanumeric characters} and P = {printable characters}, then A ⊆ P. If Q = {quadrilaterals} and F = {plane figures bounded by four straight lines}, then Q ⊆ F. Notice that A ⊆ B does not imply that B must necessarily contain extra elements that are not in A; the two sets could be equal – as indeed Q and F are above. However, if, in addition, B does contain at least one element that isn't in A, then we say that A is a proper subset of B. In such a case we would write A ⊂ B. In the examples above: E contains ..., -4, -2, 0, 2, 4, 6, 8, 10, 12, 14, ..., so T ⊂ E. P contains $, ;, &, ..., so A ⊂ P. But Q and F are just different ways of saying the same thing, so Q = F. The use of ⊂ and ⊆ is clearly analogous to the use of < and ≤ when comparing two numbers. Notice also that every set is a subset of the universal set, and the empty set is a subset of every set.
(You might be curious about this last statement: how can the empty set be a subset of anything, when it doesn't contain any elements? The point here is that for every set A, the empty set doesn't contain any elements that aren't in A. So ∅ ⊆ A for all sets A.) Finally, note that if A ⊆ B and B ⊆ A, then A and B must contain exactly the same elements, and are therefore equal. In other words: if A ⊆ B and B ⊆ A, then A = B.

Disjoint
Two sets are said to be disjoint if they have no elements in common. For example, if A = {even numbers} and B = {1, 3, 5, 11, 19}, then A and B are disjoint sets.

Venn Diagrams
A Venn diagram can be a useful way of illustrating relationships between sets. In a Venn diagram: the universal set is represented by a rectangle. Points inside the rectangle represent elements that are in the universal set; points outside represent things not in the universal set. You can think of this rectangle, then, as a 'fence' keeping unwanted things out - and concentrating our attention on the things we're talking about. Other sets are represented by loops, usually oval or circular in shape, drawn inside the rectangle. Again, points inside a given loop represent elements in the set it represents; points outside represent things not in the set. Venn diagrams (Fig. 2): on the left, the sets A and B are disjoint, because the loops don't overlap; on the right, A is a subset of B, because the loop representing set A is entirely enclosed by loop B.

Venn diagrams: Worked Examples
Example 1. Fig. 3 represents a Venn diagram showing two sets A and B, in the general case where nothing is known about any relationships between the sets. Note that the rectangle representing the universal set is divided into four regions, labelled i, ii, iii and iv. What can be said about the sets A and B if it turns out that: (a) region ii is empty? (b) region iii is empty?

Solution. (a) If region ii is empty, then A contains no elements that are not in B. So A is a subset of B, and the diagram should be re-drawn like Fig. 2 above. (b) If region iii is empty, then A and B have no elements in common and are therefore disjoint. The diagram should then be re-drawn like Fig. 1 above.

Example 2. (a) Draw a Venn diagram to represent three sets A, B and C, in the general case where nothing is known about possible relationships between the sets. (b) Into how many regions is the rectangle representing U divided now? (c) Discuss the relationships between the sets A, B and C, when various combinations of these regions are empty.

Solution. (a) The diagram in Fig. 4 shows the general case of three sets where nothing is known about any possible relationships between them. (b) The rectangle representing U is now divided into 8 regions, indicated by the Roman numerals i to viii. (c) Various combinations of empty regions are possible. In each case, the Venn diagram can be re-drawn so that empty regions are no longer included. For example: if region ii is empty, the loop representing A should be made smaller, and moved inside B and C to eliminate region ii.
If regions ii, iii and iv are empty, make A and B smaller, and move them so that they are both inside C (thus eliminating all three of these regions), but do so in such a way that they still overlap each other (thus retaining region vi). If regions iii and vi are empty, 'pull apart' loops A and B to eliminate these regions, but keep each loop overlapping loop C. ...and so on. Drawing Venn diagrams for each of the above examples is left as an exercise for the reader.

Example 3. The following sets are defined: U = {1, 2, 3, …, 10}, A = {2, 3, 7, 8, 9}, B = {2, 8}, C = {4, 6, 7, 10}. Using the two-stage technique described below, draw a Venn diagram to represent these sets, marking all the elements in the appropriate regions. The technique is as follows: draw a 'general' 3-set Venn diagram, like the one in Example 2. Go through the elements of the universal set one at a time, once only, entering each one into the appropriate region of the diagram. Re-draw the diagram, if necessary, moving loops inside one another or apart to eliminate any empty regions. Don't begin by entering the elements of set A, then set B, then C – you'll risk missing elements out or including them twice!

Solution. After drawing the three empty loops in a diagram looking like Fig. 4 (but without the Roman numerals!), go through each of the ten elements in U - the numbers 1 to 10 - asking each one three questions, like this. First element: 1. Are you in A? No. Are you in B? No. Are you in C? No. A 'no' to all three questions means that the number 1 is outside all three loops, so write it in the appropriate region (region number i in Fig. 4). Second element: 2. Are you in A? Yes. Are you in B? Yes. Are you in C? No. Yes, yes, no: so the number 2 is inside A and B but outside C. It goes in region iii, then. ...and so on, with elements 3 to 10. The resulting diagram looks like Fig. 5. The final stage is to examine the diagram for empty regions - in this case the regions we called iv, vi and vii in Fig. 4 - and then re-draw the diagram to eliminate these regions. When we've done so, we shall clearly see the relationships between the three sets. So we need to: pull B and C apart, since they don't have any elements in common; and push B inside A, since it doesn't have any elements outside A. The finished result is shown in Fig. 6.

The regions in a Venn Diagram and Truth Tables
Perhaps you've realized that adding an additional set to a Venn diagram doubles the number of regions into which the rectangle representing the universal set is divided. This gives us a very simple pattern, as follows: with one set loop, there will be just two regions: the inside of the loop and its outside. With two set loops, there'll be four regions. With three loops, there'll be eight regions. ...and so on. It's not hard to see why this should be so. Each new loop we add to the diagram divides each existing region into two, thus doubling the number of regions altogether.

In A?  In B?  In C?
Y      Y      Y
Y      Y      N
Y      N      Y
Y      N      N
N      Y      Y
N      Y      N
N      N      Y
N      N      N

But there's another way of looking at this, and it's this. In the solution to Example 3 above, we asked three questions of each element: Are you in A? Are you in B? and Are you in C? Now there are obviously two possible answers to each of these questions: yes and no. When we combine the answers to three questions like this, one after the other, there are then 2³ = 8 possible sets of answers altogether. Each of these eight possible combinations of answers corresponds to a different region on the Venn diagram.
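The eight answer combinations can also be enumerated mechanically; a short Python illustration that generates the table above:

```python
# Enumerate the 2**3 = 8 possible membership combinations for three sets,
# in the same row order as the table above.
from itertools import product

print("In A?  In B?  In C?")
for in_a, in_b, in_c in product("YN", repeat=3):
    print(f"{in_a}      {in_b}      {in_c}")
```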
The complete set of answers resembles very closely a truth table - an important concept in Logic, which deals with statements which may be true or false. The table above shows the eight possible combinations of answers for 3 sets A, B and C. You'll find it helpful to study the patterns of Y's and N's in each column. As you read down column C, the letter changes on every row: Y, N, Y, N, Y, N, Y, N. Reading down column B, the letters change on every other row: Y, Y, N, N, Y, Y, N, N. Reading down column A, the letters change every four rows: Y, Y, Y, Y, N, N, N, N.

Set Theory Exercise 2
Click the link for Set Theory Exercise 2.

Operations on Sets
Just as we can combine two numbers to form a third number, with operations like 'add', 'subtract', 'multiply' and 'divide', so we can combine two sets to form a third set in various ways. We'll begin by looking again at the Venn diagram which shows two sets A and B in a general position, where we don't have any information about how they may be related.

In A?  In B?  Region
Y      Y      iii
Y      N      ii
N      Y      iv
N      N      i

The first two columns in the table above show the four sets of possible answers to the questions Are you in A? and Are you in B? for two sets A and B; the Roman numerals in the third column show the corresponding region in the Venn diagram in Fig. 7.

Intersection
Region iii, where the two loops overlap (the region corresponding to 'Y' followed by 'Y'), is called the intersection of the sets A and B. It is denoted by A ∩ B. So we can define intersection as follows: the intersection of two sets A and B, written A ∩ B, is the set of elements that are in A and in B. (Note that in symbolic logic, a similar symbol, ∧, is used to connect two logical propositions with the AND operator.) For example, if A = {1, 2, 3, 4} and B = {2, 4, 6, 8}, then A ∩ B = {2, 4}. We can say, then, that we have combined two sets to form a third set using the operation of intersection.

Union
In a similar way we can define the union of two sets as follows: the union of two sets A and B, written A ∪ B, is the set of elements that are in A or in B (or both). The union, then, is represented by regions ii, iii and iv in Fig. 7. (Again, in logic a similar symbol, ∨, is used to connect two propositions with the OR operator.) So, for example, {1, 2, 3, 4} ∪ {2, 4, 6, 8} = {1, 2, 3, 4, 6, 8}. You'll see, then, that in order to get into the intersection, an element must answer 'Yes' to both questions, whereas to get into the union, either answer may be 'Yes'. The ∪ symbol looks like the first letter of 'Union' and like a cup that will hold a lot of items. The ∩ symbol looks like a spilled cup that won't hold a lot of items, or possibly the letter 'n', for i'n'tersection. Take care not to confuse the two.

Difference
The difference of two sets A and B (also known as the set-theoretic difference of A and B, or the relative complement of B in A) is the set of elements that are in A but not in B. This is written A - B, or sometimes A \ B. The elements in the difference, then, are the ones that answer 'Yes' to the first question Are you in A?, but 'No' to the second Are you in B?. This combination of answers is on row 2 of the above table, and corresponds to region ii in Fig. 7. For example, if A = {1, 2, 3, 4} and B = {2, 4, 6, 8}, then A - B = {1, 3}.

Complement
So far, we have considered operations in which two sets combine to form a third: binary operations.
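These three binary operations map directly onto Python's built-in set type, using the same example sets as above:

```python
A = {1, 2, 3, 4}
B = {2, 4, 6, 8}

print(A & B)   # intersection: {2, 4}
print(A | B)   # union: {1, 2, 3, 4, 6, 8}
print(A - B)   # difference: {1, 3}
```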
Now we look at a unary operation - one that involves just one set. The set of elements that are not in a set A is called the complement of A. It is written A′ (or sometimes A^C, or Ā). Clearly, this is the set of elements that answer 'No' to the question Are you in A?. For example, if U = N and A = {odd numbers}, then A′ = {even numbers}. Notice the spelling of the word complement: its literal meaning is 'a complementary item or items'; in other words, 'that which completes'. So if we already have the elements of A, the complement of A is the set that completes the universal set.

Properties of set operations

1. Commutative
A ∪ B = B ∪ A
A ∩ B = B ∩ A

2. Associative
A ∪ (B ∪ C) = (A ∪ B) ∪ C
A ∩ (B ∩ C) = (A ∩ B) ∩ C

3. Distributive
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

4. Special properties of complements
(A′)′ = A
U′ = ∅
∅′ = U
A ∩ B′ = A − B

5. De Morgan's Law
(A ∩ B)′ = A′ ∪ B′
(A ∪ B)′ = A′ ∩ B′

Summary

Intersection: things that are in A and in B
Union: things that are in A or in B (or both)
Difference: things that are in A and not in B
Symmetric Difference: things that are in A or in B but not both
Complement: things that are not in A

Cardinality

Finally, in this section on Set Operations we look at an operation on a set that yields not another set, but an integer. The cardinality of a finite set A, written |A| (sometimes #(A) or n(A)), is the number of (distinct) elements in A. So, for example: if A = {lower case letters of the alphabet}, then |A| = 26.

Generalized set operations

If we want to denote the intersection or union of n sets, A1, A2, ..., An (where we may not know the value of n), then the following generalized set notation may be useful:

A1 ∩ A2 ∩ ... ∩ An = ⋂_{i=1}^{n} Ai
A1 ∪ A2 ∪ ... ∪ An = ⋃_{i=1}^{n} Ai

In the symbol ⋂_{i=1}^{n} Ai, then, i is a variable that takes values from 1 to n, to indicate the repeated intersection of all the sets A1 to An.
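The operations above map directly onto Python's built-in set type, so a small, illustrative sketch may help for experimentation; the universal set U and the element values below are arbitrary choices for the example and are not part of the text above.

```python
U = set(range(1, 11))          # a small universal set for the example
A = {1, 2, 3, 4}
B = {2, 4, 6, 8}

print(A & B)                   # intersection: {2, 4}
print(A | B)                   # union: {1, 2, 3, 4, 6, 8}
print(A - B)                   # difference: {1, 3}
print(A ^ B)                   # symmetric difference: {1, 3, 6, 8}
print(U - A)                   # complement of A relative to U

# De Morgan's laws, checked on this example:
assert U - (A & B) == (U - A) | (U - B)
assert U - (A | B) == (U - A) & (U - B)

print(len(A))                  # cardinality |A| = 4

# Generalized intersection and union of n sets:
sets = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]
print(set.intersection(*sets)) # {3}
print(set.union(*sets))        # {1, 2, 3, 4, 5}
```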
A parity map of framed chord diagrams Contributors: Ilyutko, Denis, Manturov, Vassily ... We consider framed chord diagrams, i.e. chord diagrams with chords of two types. It is well known that chord diagrams modulo 4T-relations admit Hopf algebra structure, where the multiplication is given by any connected sum with respect to the orientation. But in the case of framed chord diagrams a natural way to define a multiplication is not known yet. In the present paper, we first define a new module $\mathcal{M}_2$ which is generated by chord diagrams on two circles and factored by $4$T-relations. Then we construct a "covering" map from the module of framed chord diagrams into $\mathcal{M}_2$ and a weight system on $\mathcal{M}_2$. Using the map and weight system we show that a connected sum for framed chord diagrams is not a well-defined operation. In the end of the paper we touch linear diagrams, the circle replaced by a directed line. Quantum Markov chains: description of hybrid systems, decidability of equivalence, and model checking linear-time properties Contributors: Li, Lvzhou, Feng, Yuan ... In this paper, we study a model of quantum Markov chains that is a quantum analogue of Markov chains and is obtained by replacing probabilities in transition matrices with quantum operations. We show that this model is very suited to describe hybrid systems that consist of a quantum component and a classical one, although it has the same expressive power as another quantum Markov model proposed in the literature. Indeed, hybrid systems are often encountered in quantum information processing; for example, both quantum programs and quantum protocols can be regarded as hybrid systems. Thus, we further propose a model called hybrid quantum automata (HQA) that can be used to describe these hybrid systems that receive inputs (actions) from the outer world. We show the language equivalence problem of HQA is decidable in polynomial time. Furthermore, we apply this result to the trace equivalence problem of quantum Markov chains, and thus it is also decidable in polynomial time. Finally, we discuss model checking linear-time properties of quantum Markov chains, and show the quantitative analysis of regular safety properties can be addressed successfully. Nonlinear optics determination of the symmetry group of a crystal using structured light Contributors: Jauregui, Rocio, Torres, Juan P. ... We put forward a technique to unveil to which symmetry group a nonlinear crystal belongs, making use of nonlinear optics with structured light. We consider as example the process of spontaneous parametric down-conversion. The crystal, which is illuminated with a special type of Bessel beam, is characterized by a nonlinear susceptibility tensor whose structure is dictated by the symmetry group of the crystal. The observation of the spatial angular dependence of the lower-frequency generated light provides direct information about the symmetry group of the crystal. Testing magnetic helicity conservation in a solar-like active event Contributors: Pariat, E., Valori, G., Démoulin, P., Dalmasse, K. ... Magnetic helicity has the remarkable property of being a conserved quantity of ideal magnetohydrodynamics (MHD). Therefore, it could be used as an effective tracer of the magnetic field evolution of magnetized plasmas.
Theoretical estimations indicate that magnetic helicity is also essentially conserved with non-ideal MHD processes, e.g. magnetic reconnection. This conjecture has however been barely tested, either experimentally or numerically. Thanks to recent advances in magnetic helicity estimation methods, it is now possible to test numerically its dissipation level in general three-dimensional datasets. We first revisit the general formulation of the temporal variation of relative magnetic helicity on a fully bounded volume when no hypothesis on the gauge are made. We introduce a method to precisely estimate its dissipation independently of the type of non-ideal MHD processes occurring. In a solar-like eruptive event simulation, using different gauges, we compare its estimation in a finite volume with its time-integrated flux through the boundaries, hence testing the conservation and dissipation of helicity. We provide an upper bound of the real dissipation of magnetic helicity: It is quasi-null during the quasi-ideal MHD phase. Even when magnetic reconnection is acting the relative dissipation of magnetic helicity is also very small (30 times larger). We finally illustrate how the helicity-flux terms involving velocity components are gauge dependent, hence limiting their physical meaning. Strong three-meson couplings of $J/\psi$ and $\eta_c$ Contributors: Lucha, Wolfgang, Melikhov, Dmitri, Sazdjian, Hagop, Simula, Silvano ... We discuss the strong couplings $g_{PPV}$ and $g_{VVP}$ for vector ($V$) and pseudoscalar ($P$) mesons, at least one of which is a charmonium state $J/\psi$ or $\eta_c$. The strong couplings are obtained as residues at the poles of suitable form factors, calculated in a broad range of momentum transfers using a dispersion formulation of the relativistic constituent quark model. The form factors obtained in this approach satisfy all constraints known for these quantities in the heavy-quark limit. Our results suggest sizably higher values for the strong meson couplings than those reported in the literature from QCD sum rules. Open quantum system parameters from molecular dynamics Contributors: Wang, Xiaoqing, Ritschel, Gerhard, Wüster, Sebastian, Eisfeld, Alexander ... We extract the site energies and spectral densities of the Fenna-Matthews-Olson (FMO) pigment protein complex of green sulphur bacteria from simulations of molecular dynamics combined with energy gap calculations. Comparing four different combinations of methods, we investigate the origin of quantitative differences regarding site energies and spectral densities obtained previously in the literature. We find that different forcefields for molecular dynamics and varying local energy minima found by the structure relaxation yield significantly different results. Nevertheless, a picture averaged over these variations is in good agreement with experiments and some other theory results. Throughout, we discuss how vibrations external- or internal to the pigment molecules enter the extracted quantities differently and can be distinguished. Our results offer some guidance to set up more computationally intensive calculations for a precise determination of spectral densities in the future. These are required to determine absorption spectra as well as transport properties of light-harvesting complexes. Librational solution for dust particles in mean motion resonances under the action of stellar radiation Contributors: Pastor, Pavol ...
This paper presents a librational solution for evolutions of parameters averaged over a synodic period in mean motion resonances in planar circular restricted three-body problem (PCR3BP) with non-gravitational effects taken into account. The librational solution is derived from a linearization of modified Lagrange's planetary equations. The presented derivation respects properties of orbital evolutions in the mean motion resonances within the framework of the PCR3BP. All orbital evolutions in the PCR3BP with the non-gravitational effects can be described by four varying parameters. We used the semimajor axis, eccentricity, longitude of pericenter and resonant angular variable. The evolutions are found for all four parameters. The solution can be applied also in the case without the non-gravitational effects. We compared numerically and analytically obtained evolutions in the case when the non-gravitational effects are the Poynting-Robertson effect and the radial stellar wind. The librational solution is good approximation when the libration amplitude of the resonant angular variable is small. Waveforms produced by a scalar point particle plunging into a Schwarzschild black hole: Excitation of quasinormal modes and quasibound states Contributors: Decanini, Yves, Folacci, Antoine, Hadj, Mohamed Ould El ... With the possibility of testing massive gravity in the context of black hole physics in mind, we consider the radiation produced by a particle plunging from slightly below the innermost stable circular orbit into a Schwarzschild black hole. In order to circumvent the difficulties associated with black hole perturbation theory in massive gravity, we use a toy model in which we replace the graviton field with a massive scalar field and consider a linear coupling between the particle and this field. We compute the waveform generated by the plunging particle and study its spectral content. This permits us to highlight and interpret some important effects occurring in the plunge regime which are not present for massless fields such as (i) the decreasing and vanishing, as the mass parameter increases, of the signal amplitude generated when the particle moves on quasicircular orbits near the innermost stable circular orbit and (ii) in addition to the excitation of the quasinormal modes, the excitation of the quasibound states of the black hole. On the probability of the collision of a Mars-sized planet with the Earth to form the Moon Contributors: Dvorak, Rudolf, Loibnegger, Birgit, Maindl, Thomas I. ... The problem of the formation of the Moon is still not explained satisfactorily. While it is a generally accepted scenario that the last giant impact on Earth between some 50 to 100 million years after the starting of the formation of the terrestrial planets formed our natural satellite, there are still many open questions like the isotopic composition which is identical for these two bodies. In our investigation we will not deal with these problems of chemical composition but rather undertake a purely dynamical study to find out the probability of a Mars-sized body to collide with the Earth shortly after the formation of the Earth-like planets. For that we assume an additional massive body between Venus and Earth, respectively Earth and Mars which formed there at the same time as the other terrestrial planets. 
We have undertaken massive n-body integrations of such a planetary system with 4 inner planets (we excluded Mercury but assumed one additional body as mentioned before) for up to tens of millions of years. Our results led to a statistical estimation of the collision velocities as well as the collision angles which will then serve as the basis of further investigation with detailed SPH computations. We find a most probable origin of the Earth impactor at a semi-major axis of approx. 1.16 AU. Inflation, evidence and falsifiability Contributors: Gubitosi, Giulia, Lagos, Macarena, Magueijo, Joao, Allison, Rupert ... In this paper we consider the issue of paradigm evaluation by applying Bayes' theorem along the following nested chain of progressively more complex structures: i) parameter estimation (within a model), ii) model selection and comparison (within a paradigm), iii) paradigm evaluation. In such a chain the Bayesian evidence works both as the posterior's normalization at a given level and as the likelihood function at the next level up. Whilst raising no objections to the standard application of the procedure at the two lowest levels, we argue that it should receive an essential modification when evaluating paradigms, in view of the issue of falsifiability. By considering toy models we illustrate how unfalsifiable models and paradigms are always favoured by the Bayes factor. We argue that the evidence for a paradigm should not only be high for a given dataset, but exceptional with respect to what it would have been, had the data been different. We propose a measure of falsifiability (which we term predictivity), and a prior to be incorporated into the Bayesian framework, suitably penalising unfalsifiability. We apply this measure to inflation seen as a whole, and to a scenario where a specific inflationary model is hypothetically deemed as the only one viable as a result of information alien to cosmology (e.g. Solar System gravity experiments, or particle physics input). We conclude that cosmic inflation is currently difficult to falsify and thus to be construed as a scientific theory, but that this could change were external/additional information to cosmology to select one of its many models. We also compare this state of affairs to bimetric varying speed of light cosmology.
Evaluation of deep geothermal exploration drillings in the crystalline basement of the Fennoscandian Shield Border Zone in south Sweden

Jan-Erik Rosberg (ORCID: orcid.org/0000-0002-4610-2374) and Mikael Erlström

Geothermal Energy, volume 9, Article number: 20 (2021)

The 3.1- and 3.7-km-deep FFC-1 and DGE-1 geothermal exploration wells drilled into the Precambrian crystalline basement on the southern margin of the Fennoscandian Shield are evaluated regarding experiences from drilling, geological conditions, and thermal properties. Both wells penetrate an approximately 2-km-thick succession of sedimentary strata before entering the crystalline basement, dominated by orthogneiss, metabasite and amphibolite of the (1.1–0.9 Ga) Eastern Interior Sveconorwegian Province. The upper c. 400 m of the basement in FFC-1 is severely fractured and water-bearing, which disqualified the use of percussion air drilling; conventional rotary drilling was therefore performed for the rest of the borehole. The evaluation of the rotary drillings in FFC-1 and DGE-1 showed that the average bit life was very similar, 62 m and 68 m, respectively. Similarly, the average ROP varied between 2 and 4 m/h without any preference regarding bit type (PDC or TCI) or geology. A bottomhole temperature of 84.1 °C was measured in the FFC-1 borehole, with gradients varying between 17.4 and 23.5 °C/km for the main part of the borehole. The calculated heat flow varies between 51 and 66 mW/m2 and the average heat production is 3.0 µW/m3. The basement in FFC-1 is, overall, depleted in uranium and thorium in comparison to DGE-1, where the heat productivity is overall higher, with an average of 5.8 µW/m3. The spatial distribution of fractures was successfully mapped using borehole imaging logs in FFC-1 and shows a dominance of N–S oriented open fractures, a fracture frequency varying between 0.85 and 2.49 frac/m and a fracture volumetric density between 1.68 and 3.39 m2/m3. The evaluation of the two boreholes provides insight and new empirical data on the thermal properties and fracturing of the concealed crystalline basement in the Fennoscandian Shield Border Zone that, previously, had only been assessed by assumptions and modelling. The outcome of the drilling operation has also provided insight regarding the drilling performance in the basement and statistical data on the various drill bits used. The knowledge gained is important in feasibility studies of deep geothermal projects in the crystalline basement in south Sweden.

Enhanced Geothermal Systems (EGS) are defined as geothermal reservoirs that are created to be able to extract commercial amounts of heat from tight sedimentary formations or crystalline basement (Tester et al. 2006). Research on hot dry rock projects, the precursors to the EGS concept, started already in the 1970s (e.g., Armstead and Tester 1987; Brown 2009). Numerous EGS projects have since been performed, mainly in relatively hot crustal regions, from which knowledge has been, and is being, developed on establishing deep geothermal reservoirs in tight rock. Recent examples are the FORGE project in Utah (Allis et al. 2016) and the United Downs Deep Geothermal Power hot granitic rock project in Cornwall, UK (Ledingham et al. 2019). By contrast, EGS feasibility studies in colder stable crustal regions, such as the Fennoscandian Shield, are sparse.
The concept of creating a deep EGS in relatively cold crust for direct heat exchange is challenging but will, if successful, enable the use of geothermal heat in areas which, so far, have been disqualified for high-enthalpy geothermal systems. However, knowledge on the EGS-related properties of the crystalline bedrock at greater depth in these areas and in Sweden is constrained and every new deep drilling project as the ones presented here provide invaluable new information and knowledge. In 2002–2003 Lund Energy AB (today Kraftringen AB) and the Department of Engineering Geology at the Lund University performed a geothermal exploration drilling project in which the DGE-1 well was drilled. The aim of the project was to find hot water in fractured crystalline bedrock associated with the Romeleåsen Fault Zone on the margins of the Fennoscandian Shield in Skåne, south Sweden (Rosberg and Erlström 2019). However, the project ended after encountering less hydraulically conductive rock mass than expected. Around 10 years after the DGE-1 project in Lund, the interest for EGS applications in the crystalline basement increased. This has primarily been driven by the St1 EGS exploration project in Espoo, Finland, which aims to build a commercial EGS at c. 6 km depth in the Fennoscandian crystalline bedrock (e.g., Leary et al. 2017; Kukkonen and Pentti 2021). The St1 project in Finland triggered E.ON (a large European electric utility company) in 2016 to investigate the potential for EGS as a future sustainable heat source in the city of Malmö, Skåne, south Sweden. The drilling operation of the basement section of FFC-1 occurred in 2020 and was the result from several years of feasibility and pre-investigation studies. In 2021, Gothenburg Energy on the Swedish west coast has also (besides E.ON) started EGS investigations, including drilling of test holes to 1–2 km depth, which furthermore highlights the increasing interest in EGS in Sweden. This increasing interest leads to the necessity to build knowledge on technical issues related to drilling, rock mass composition, geo-mechanical properties and, thermal and hydraulic conditions of the crystalline bedrock at great depths. In line with this, the aim of this study is to evaluate and compare the results from the drilling operation and measurements conducted in the crystalline basement in FFC-1 and the results from the neighbouring DGE-1 well in Lund (Rosberg and Erlström 2019). Our study evaluates the drilling performance, rock mass composition and physical properties as well as fracturing and thermal properties, which all play a significant role in assessing the bedrock prerequisites for EGS in south Sweden. Furthermore, since the wells are no more than c. 20 km apart, it is obvious that a comparison will increase their scientific value in a regional assessment of the EGS prerequisites in Skåne. A regional assessment is included in this study which provides data on geological and thermal properties of similar rock types exposed in Dalby quarry located on the Romeleåsen Ridge. Apart from being the third and the fourth deepest well in Sweden, both DGE-1 and FFC-1 are unique sources of information since they represent the only ones this far south on the Fennoscandian Shield that give information of the crystalline basement at great depths. Only the nearly 7-km-deep drillings at Stenberg-1 and Gravberg-1 in the Siljan impact structure in central Sweden (see Juhlin et al. 1998) are deeper. 
Furthermore, there are only a few other wells that reach deeper than 1 km in the Fennoscandian basement, e.g., Outokumpu R-2500 in Finland (Kukkonen 2011), the deep wells OTN1-3 in Espoo Finland (Leary et al. 2017; Malin et al. 2021; Kukkonen and Pentti 2021), the Kola super deep borehole in Russia (Arshavskaya et al. 1984) and the COSC boreholes in mid-Sweden (Lorenz et al. 2015). Rosberg and Erlström (2019) compared the thermal properties measured and evaluated from DGE-1 with the values obtained from the wells mentioned above. In the same paper, a comparison was also made with data from wells located in the upper crystalline crust, but outside the Fennoscandian Shield, such as the KTB in Germany (Emmermann and Lauterjung 1997) and the Hunt well in Canada (Majorowicz et al. 2014). These previously made comparisons will, in this paper, be updated with the new data acquired from FFC-1. In addition, comparisons are made with the 1700-m-deep borehole, KLX02, at Laxemar in Sweden (Andersson 1994) and the 1820-m-deep borehole, Bh32012, in Lake Vättern (Sundberg et al. 2016), which both are located within the Fennoscandian Shield. Cost-efficient drilling is essential for building competitive EGS business cases, in comparison to other energy resources, especially in areas where the target depths in relatively cold shield areas approach depths of 6–7 km. In EGS projects the drilling cost represents often more than 50% of the total cost (Garabetian 2019). It is important to increase drill bit longevity and reduce the drill bit consumption for all drilling projects and, especially, for deep EGS drillings as a step to reduce the total cost. The time to replace a drill bit around, e.g., 3 km depth can take around 18 h and the daily drilling cost can be 30,000 Euros or higher (E.ON 2021). It is therefore valuable to use experience from other drilling operations in similar geological settings, which in this case is the crystalline basement, when designing the drilling program and selecting the best suited drill bits. The two drillings in Malmö and Lund provide a unique opportunity to evaluate the drilling performance in the crystalline rocks of the Fennoscandian Shield that can guide future similar drilling projects in assessing the best drilling strategy. The Precambrian crystalline bedrock in the central and western parts of Skåne belongs to the Sveconorwegian Eastern interior and boundary segments of the Fennoscandian Shield (Fig. 1). The rocks are dominated by various orthogneisses with lenses and layers of amphibolite and metabasite. The protolith rocks are c. 1.74–1.66 Ga old granitoid rocks with minor dioritoid components and c. 1.4 and 1.2 Ga intrusive rocks, which were affected by tectonometamorphic reworking during the 1.1–0.9 Ga Sveconorwegian Orogeny (Ulmius et al. 2018; Stephens and Wahlgren 2020). Outcrops on Kullen and Söderåsen (Fig. 1) exemplify c. 1.7 Ga migmatitic gneisses, and 1.7–0.9 Ga amphibolites of the interior segment (Ulmius et al. 2018). The Eastern boundary segment, exemplified by outcrops on the Romeleåsen Ridge (e.g., Dalby quarry), and in the DGE-1 well (Rosberg and Erlström 2019), illustrates a somewhat more varied bedrock composition of 1.8–1.7 Ga orthogneiss, 1.4–1.2 Ga granite, syenite and metamorphic equivalents. In addition, the Eastern interior and the boundary segments display a high frequency of Permo-Carboniferous (c. 300 Ma) NW-oriented dolerite dykes, which constitute up to c. 10% of the rock mass. 
Geological setting of the southwestern part of the Fennoscandian Shield and the location of the DGE-1 and FFC-1 wells

Skåne is in the outer margin of the Fennoscandian Shield Border Zone, which defines the weakened southwestern part of the Fennoscandian Shield (Erlström 2020). The zone extends from north Jutland, across Kattegat and Skåne, and includes numerous fault zones of which many are incorporated in the Sorgenfrei-Tornquist Zone (Fig. 1). The border zone portrays the transition from older stable craton areas to the northeast and younger geological provinces to the southwest and south. Towards the south and southwest the crustal thickness likewise decreases from c. 45 to 35 km (Thybo 1990), which contributes to an increasing heat flow and higher temperature gradients across the border zone (Balling 1995). The Fennoscandian crystalline bedrock is also successively covered by several-kilometres-thick sedimentary strata to the south. Thus, the crystalline bedrock in SW Skåne is concealed by a c. 2-km-thick sedimentary succession belonging to the marginal part of the Danish Basin (Sivhed et al. 1999; Erlström 2020). Prior to the drilling of FFC-1 and DGE-1, there were only a few scattered observations on the composition of the topmost metres of the crystalline basement in SW Skåne. These came primarily from oil and gas prospecting wells touching the basement beneath the thick sedimentary cover (Sivhed et al. 1999) and give only indications of a gneiss-dominated bedrock. The main source of neighbouring information on the crystalline bedrock comes from outcrops in north and northwest Skåne, e.g., the Kullen and Söderåsen ridges, and especially on the Romeleåsen Ridge in south central Skåne (Norling and Wikman 1990; Wikman et al. 1993; Sivhed et al. 1999; Erlström et al. 2004, cf. Fig. 1). These outcrops display a tectonic pattern and structure of the crystalline basement which mirrors a complex mixture of inherited Precambrian tectonic signatures and younger tectonic events. The latter correlate to the Phanerozoic break-up of the southwestern margin of the Fennoscandian Shield, including Skåne, which created a complex crustal transition zone between the stable shield and the tectonically active Phanerozoic continental Europe (Liboriussen et al. 1987; EUGENO-s Working Group 1998; Erlström et al. 1997; Thybo 1990; Erlström 2020). Rifting, strike–slip faulting, and crustal shortening, i.e., the Caledonian, Variscan and Alpine orogenic deformation phases, have resulted in a rock mass that is highly intersected by shear fractures, extensional fractures, joints, fissures, and veins. This predominantly brittle deformation is portrayed in most outcrops as heavily crushed bedrock. There is also a distinct positive Bouguer gravity anomaly in the deeper crust coinciding with the Sorgenfrei-Tornquist Zone in Skåne. This is interpreted to consist of high-density rock associated with the intrusion of magmatic dolerite dyke swarms across Skåne during the Permo-Carboniferous Variscan rift-induced setting (cf. Balling 1990; Thybo 1990; Erlström 2020). Consequently, a highly fractured rock mass, relatively thin crust, deep-seated young fault zones and magmatic bodies provide perhaps, in comparison to other areas on the Fennoscandian Shield, the best geological prerequisites for EGS in Sweden.
Our study is based on data, observations and documentation from the drilling operations, e.g., the real-time Geo-data acquisition system on rig-related properties (e.g., rate of penetration, weight on bit, rpm), well-site geological monitoring and sampling, daily drilling reports, geophysical wire-line logging, and end of well reports. Complementary analyses on cuttings from FFC-1 have included analyses of thermal conductivity, matrix density, chemical composition and petrographical studies on thin sections. Rock samples from the Dalby quarry on the Romeleåsen Ridge of comparable rock types have in addition been analysed regarding the thermal conductivity and density. Drilling of FFC-1 In 2002, a 2110-m-deep well, FFC-1, was drilled and cased as a part of geothermal exploration project in Malmö, Sweden. The target was extraction of geothermal heat from the 50–60 °C warm Mesozoic sandstone reservoirs between 1600 and 2100 m depth. Two potential production zones, 1615–1828 m and 1862–2071 m, were found and in total 315 m of the 9 5/8″ (244 mm) casing was perforated to access and test the two zones. The drilling stopped after reaching c. 18 m into the crystalline basement. Information about the drilling and the testing is found in DONG reports (2003, 2006). A schematic well completion is illustrated in Fig. 2. Schematic description of the well design of DGE-1 and FFC-1 In 2020, FFC-1 was re-entered, with the aim to deepen it to 4000 m and with the target to gather information about the crystalline basement, such as drilling performance, rock types, fractures, mechanical and thermal properties, as a step before deciding if the location is suitable for a full-scale EGS-plant or not. The deepening of FFC-1 started on June 27 and the total depth of 3133 m was reached on August 25. The drill rig used was HAS-Innova, type Herrenknecht Vertical (Hook load 4100 kN). The first planned drilling method was percussion drilling using air. To be able to use this method a 7 5/8″ (194 mm) casing was installed from surface to 2150 m depth. The main purpose of the casing installation was to seal the older perforated intervals and to minimize the risk for collapse of the existing casing. The new casing was cemented from the casing shoe (2142 m) to 1488 m depth, 127 m above the topmost perforations. One of the objectives with deepening the well was to evaluate the applicability of percussion drilling using air in the crystalline basement. The used dimension of the percussion drill bit was 170 mm and the method was used for around 90 m of drilling to 2241 m depth. However, high-water influx disqualified continuation of this underbalanced drilling method. Hence, the subsequent drilling had to rely on conventional rotary drilling using a solids free salt polymer mud. The initial drill bit dimension was 6 1/2 in. (165 mm) and used to 2652 m depth and 6 1/8 in. (156 mm) drill bit dimension was used for the following drilling to total depth 3133 m; see Fig. 2. The reason for changing the drill bit dimension was only due to a shortage of 6 1/2 in. drill bits. Both tungsten carbide insert (TCI) roller cone bits and polycrystalline diamond compact (PDC) bits were used and 15 bottom hole assemblies (BHA) were tested. A downhole motor was included in seven of the these. Information about the BHAs can be found in E.ON (2021), as well as a more detailed description of the entire drilling operation. Drilling of DGE-1 The DGE-1 drilling started on October 19, 2002 and finished on March 19, 2003. 
The total depth reached was 3701.8 m, of which the lower 1756 m were drilled in the crystalline basement. The well is completed with casing down to 3198 m and has an 8 1/2″ (216 mm) open main target section down to the total depth. Four different drilling methods were used: conventional mud rotary drilling, air rotary drilling, percussion drilling using air and percussion drilling using mud. A detailed description of the drilling can be found in the final well report (Howard-Orchard 2003) and in the publication by Rosberg and Erlström (2019). A schematic well design is illustrated in Fig. 2.

Logging campaign in FFC-1

The Weatherford logging operation between 2138 and 3109 m was performed 3 months after the drilling terminated and included the following sensors: Gamma Ray, Compact Cross Dipole (CXD), Compact Spectral Gamma Ray, Compact PhotoDensity and Slim Compact Micro Imager (SCMI), with multi-arm Caliper and borehole deviation. The SCMI tool is a high-resolution resistivity tool for imaging borehole features such as fractures. The SCMI tool had a dual design which optimized the quality of the borehole resistivity image. The CXD tool is an acoustic tool that gives information on the mechanical properties and wellbore anisotropic conditions related to fracturing and in situ stresses. The Spectral Gamma Ray tool was used since it had proven very useful in DGE-1 for separating mafic rocks such as amphibolite from felsic rocks such as gneiss and granite. The spectral data on potassium (K), thorium (Th) and uranium (U), in combination with density data from the Compact PhotoDensity log, are necessary for calculating the heat productivity. The logging operation in DGE-1 did not include any borehole imaging tools or density logging. Primarily Caliper, Sonic and Spectral Gamma Ray and a Production log (PLT) were used to characterize this well (Rosberg and Erlström 2019).

Temperature survey and thermal properties

The temperature was measured in FFC-1 around 3 months after the drilling ended; however, there is a risk that the borehole was not fully thermally stabilized. Nevertheless, the temperature still gives an indication of the temperature gradient and the heat flow in the investigated crystalline basement. The heat flow (Q) is calculated using Eq. 1, where the depth-compensated thermal conductivity (kin-situ) is multiplied by the temperature gradient (Tgrad):

$$Q = k_{\text{in-situ}} \cdot T_{\text{grad}}$$ (1)

The temperature gradient is calculated over 10-m intervals using linear regression. The correction formula (Eq. 2) presented by Chapman and Furlong (1992) is used for correcting the thermal conductivity (k0), measured at a room temperature of 20 °C, to the in situ thermal conductivity (kin-situ), which depends on the borehole temperature (T) and the depth (z):

$$k_{\text{in-situ}} = k_{0}\,\frac{1 + cz}{1 + b\,(T - 20)}$$ (2)

Since the drilled Precambrian section in FFC-1 is dominated by orthogneiss, the same coefficient values for c and b are used in this paper as the ones given by Chapman and Furlong (1992), i.e., c = 1.5·10–6/m and b = 1.5·10–3/K, representative for granitic upper crust.
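To make the thermal workflow in Eqs. (1) and (2) concrete, the short sketch below shows one possible implementation, assuming a temperature log given as depth-temperature pairs. The function and variable names are illustrative choices, not taken from the paper: the gradient is obtained by linear regression over 10-m windows, the laboratory conductivity is corrected to in situ conditions with the Chapman and Furlong (1992) coefficients quoted above, and the heat flow then follows from Eq. (1).

```python
import numpy as np

# Chapman & Furlong (1992) coefficients for granitic upper crust,
# as quoted in the text: c = 1.5e-6 1/m, b = 1.5e-3 1/K
C_DEPTH = 1.5e-6
B_TEMP = 1.5e-3

def k_in_situ(k0, depth_m, temp_c):
    """Eq. (2): correct lab conductivity (measured at 20 degC) to in situ conditions."""
    return k0 * (1.0 + C_DEPTH * depth_m) / (1.0 + B_TEMP * (temp_c - 20.0))

def gradient_per_window(depth_m, temp_c, window=10.0):
    """Temperature gradient (degC/m) from linear regression over fixed depth windows."""
    depth_m = np.asarray(depth_m)
    temp_c = np.asarray(temp_c)
    results = []
    z0 = depth_m.min()
    while z0 + window <= depth_m.max():
        mask = (depth_m >= z0) & (depth_m < z0 + window)
        if mask.sum() >= 2:
            slope, _ = np.polyfit(depth_m[mask], temp_c[mask], 1)
            results.append((z0 + window / 2.0, slope))
        z0 += window
    return results  # list of (mid-depth, gradient in degC/m)

def heat_flow(k0, depth_m, temp_c, grad_c_per_m):
    """Eq. (1): Q = k_in-situ * T_grad, returned in mW/m2."""
    return k_in_situ(k0, depth_m, temp_c) * grad_c_per_m * 1000.0

# Illustrative numbers only: a gneiss interval with k0 = 3.85 W/(mK)
# at 2500 m depth, 70 degC, and a gradient of 17.4 degC/km (0.0174 degC/m).
print(round(heat_flow(3.85, 2500.0, 70.0, 0.0174), 1), "mW/m2")
```

With the gneiss conductivity of 3.85 W/(mK) and the gradient of 17.4 °C/km quoted later in the text, this sketch returns a heat flow of roughly 60 mW/m2, i.e., of the same order as the values reported below.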
The heat production (A) is calculated using the concentrations of the radiogenic isotopes of uranium (U), thorium (Th) and potassium (K) from the spectral gamma ray log. Most of the geothermal heat generated in the crust derives from decay of these isotopes (Wollenberg and Smith 1987). Therefore, it is a common approach to use spectral gamma ray logging data for evaluating the heat production in deep boreholes in the upper crust, as exemplified in Majorowicz et al. (2014), Jiang et al. (2016), and Rosberg and Erlström (2019). The empirical formula (Eq. 3), presented in Bücker and Rybach (1996), is used for calculating the heat production:

$$A = 10^{-5} \cdot \rho \cdot \left(9.52\,\mathrm{U_{ppm}} + 2.56\,\mathrm{Th_{ppm}} + 3.48\,\mathrm{K_{percent}}\right)$$ (3)

The density values (ρ) are obtained from the photo-density logging.

Drilling experiences

The rate of penetration (ROP) acquired in FFC-1 when drilling in the crystalline basement illustrates that high penetration rates, up to 15 m/h, were obtained during the attempts with percussion drilling using air, between 2150 and 2242 m depth (Fig. 3). Cost-consuming trips to inspect the hammer, difficulties in monitoring the downhole hammer function, and problems with hole cleaning due to heavily fractured rock and inflow of formation fluid led to the abandonment of the percussion drilling method. The Caliper log also verifies that the basement down to c. 2500 m included several zones of weak and unstable rock that resulted in significant break-out and greater borehole diameter (Fig. 3). These zones were also associated with inflow of water, which made it difficult to clean the well and to get the hammer to operate properly. However, it is not concluded whether the method could have worked at greater depth with less fractured rock and less inflow of water.

Composite log of FFC-1 including rate of penetration (ROP), borehole diameter (Caliper), natural gamma radiation, density, temperature, calculated heat flow and evaluated linear and volumetric fracture density from the SCMI logging data

The ROP for the conventional mud rotary drilling varied between 1 and 4 m/h; the values are similar to the ROPs acquired during the drilling of the crystalline basement in DGE-1 (Rosberg and Erlström 2019). Larger bit dimensions and, initially, a different drilling method (air rotary) were used in the crystalline basement in that well. Cardoe et al. (2021) report average ROPs between 2.3 and 2.6 m/h for the sections drilled with rotary techniques in the two deep wells in Espoo, Finland. In total, 15 drill bits were used in FFC-1, seven PDC and eight TCI roller cone bits; see Fig. 4. Two of the used drill bits, one TCI roller cone and one PDC, are shown in Fig. 5. Drill bit manufacturers often use their own product names and nomenclature, and to be able to compare the drill bits presented in Fig. 4, the IADC code (International Association of Drilling Contractors) is used. Information about the IADC code can be found in the IADC Drilling Manual (2000), and there are also so-called IADC calculators available on the web. On average the drill bits drilled 58 m before being changed, or 66 m if the two bits that only lasted for 2 and 3 m are excluded. Three drill bits lasted more than 100 m and were all used in combination with a downhole motor. The longest distance drilled with one bit was 146 m, and it was achieved using a PDC drill bit. The operational parameters varied within the following intervals during the bit runs: WOB: 2.3–10 ton, bit RPM: 165–215, ROP: 2–6 m/h, torque: 3–10 kNm (E.ON 2021).
Three of the PDC bits also resulted in the highest average ROP obtained during the different bit runs, 3.1–3.3 m/h.

Drill bit consumption in FFC-1 and DGE-1

Examples of the used drill bits in FFC-1. To the left, PDC (Z613) used between 2319 and 2338 m, and to the right TCI roller cone bit (637Y) used between 2413 and 2416 m. Photo J-E Rosberg

The drill bit summary from the drilling in the crystalline basement in DGE-1 is also included in Fig. 4. The drill bits used before and after the whipstock installation are presented separately. Air rotary drilling was used before and mud rotary after the whipstock installation, and in both cases larger drill bit sizes than in FFC-1 were used. Detailed information about the used drilling methods, drilling dimensions and the whipstock installation can be found in Rosberg and Erlström (2019). In total, 21 drill bits were consumed before the whipstock installation, and on average the drill bits lasted for 68 m. Three drill bits lasted more than 100 m, and the longest distance drilled with one of the TCI roller cone bits was 152 m. It can also be seen in Fig. 4 that there is a learning curve: after around 175 m of drilling into the crystalline basement, the metres drilled by each drill bit increased. In this case it is more likely due to experience gained during the drilling operation than to geological changes. In total, nine drill bits were used after the whipstock installation, and on average the drill bits lasted for 56 m, or 62 m if the water hammer prototype bit is excluded. Two of the drill bits lasted for around 110 m. The lesson learnt from comparing the drill bit consumption during the two deep crystalline basement drillings in Skåne is that the drill bits lasted on average between 62 and 68 m. These are quite similar results, despite the different bit dimensions and drilling fluids used. This is somewhat unexpected, since a larger dimension TCI bit normally lasts longer than a smaller one because of larger and more robust bearings. The rate of penetration is also quite similar when comparing the two drillings. It seems like PDC bits were a better option than TCI roller cone bits in FFC-1, but to make this evaluation correctly, information about the bottom hole assemblies (BHAs), operational parameters and the rock composition must also be included in the evaluation. It can, for example, be noted that the best bit performance was achieved using a downhole motor in the FFC-1 drilling. In this paper, the drill bit consumption in the two wells has been compared, but not the drill bit performance. One way to do this is to compare the mechanical specific energy (MSE), initially presented by Teale (1965), for the different bit runs. The MSE is the ratio between the mechanical energy input from the drill rig and the responding ROP, and it can also be explained as the energy required to remove a unit volume of rock (a minimal formula sketch is given after this paragraph). The fact that different drill bit dimensions were used could then be accounted for, since the MSE is normalized by the bit area. There is room for improvement to decrease the drill bit consumption when drilling new deep wells in the Fennoscandian Shield Border Zone. Longer bit life and lower drill bit consumption will markedly decrease the total well cost, or, expressed per metre, the footage cost. Cardoe et al. (2021) reported that the average rotary bit life in the first deep well drilled in Espoo, Finland was 69 m, which is similar to the values obtained in FFC-1 and DGE-1.
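As a pointer for such a follow-up evaluation, the sketch below shows Teale's mechanical specific energy in its commonly used rotary form, together with a simple footage-cost estimate. All function names and input values are illustrative assumptions and not data from FFC-1 or DGE-1; in particular, the bit price and the use of surface-measured torque are hypothetical choices.

```python
import math

def mse_pa(wob_n, torque_nm, rpm, rop_m_per_h, bit_diameter_m):
    """Teale (1965) mechanical specific energy for rotary drilling, in Pa (J/m3).

    MSE = WOB/A + (2*pi*N*T) / (A*ROP), with N in rev/s and ROP in m/s.
    Note: surface-measured torque includes drill-string friction, so this
    overstates the energy actually delivered to the bit.
    """
    area = math.pi * (bit_diameter_m / 2.0) ** 2
    rop_m_per_s = rop_m_per_h / 3600.0
    n_rev_per_s = rpm / 60.0
    return wob_n / area + (2.0 * math.pi * n_rev_per_s * torque_nm) / (area * rop_m_per_s)

def footage_cost(day_rate_eur, rop_m_per_h, bit_cost_eur, bit_life_m, trip_hours):
    """Rough cost per metre: rig time while drilling plus bit and trip cost spread over the bit life."""
    drilling = day_rate_eur / 24.0 / rop_m_per_h
    bit_and_trip = (bit_cost_eur + trip_hours * day_rate_eur / 24.0) / bit_life_m
    return drilling + bit_and_trip

# Illustrative values loosely based on the operational ranges quoted in the text
# (6 1/2 in. bit, 8 t WOB, 7 kNm surface torque, 190 rpm, 3 m/h):
print(f"MSE ~ {mse_pa(8000 * 9.81, 7000, 190, 3.0, 0.165) / 1e6:.0f} MPa")

# 30,000 EUR/day rig cost and an 18 h bit trip as quoted above; the 10,000 EUR
# bit price is a hypothetical figure.
print(f"~{footage_cost(30000, 3.0, 10000, 65, 18):.0f} EUR/m at 65 m bit life")
print(f"~{footage_cost(30000, 3.0, 10000, 130, 18):.0f} EUR/m at 130 m bit life")
```

Doubling the bit life in this toy calculation lowers the footage cost by roughly a quarter, which illustrates the statement above that longer bit life markedly reduces the cost per metre.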
The same paper (Cardoe et al. 2021) also describes that a special bit design was made before rotary drilling was applied in the second deep well, which increased the average bit life to 116 m. Recently, the Energy and Geoscience Institute at the University of Utah (2021) reported exceptional results for a 9145 ft (2787 m) deep well, where three 8.75″ (222 mm) PDC bits were changed after drilling 332 m, 368 m and 376 m, respectively, in granodiorite. The longest bit life of the TCI bits used in the DGE-1 well, with almost the same dimension, 8.5 in. (216 mm), was around 110 m. Despite the difference in bit type and the much younger rock, the distance drilled is still at least three times shorter than that achieved by the PDC bits used in Utah. The evaluation of the percussion drilling using air is omitted from this paper, but one observation can still be made: the drilling method cannot be applied if the water influx is high. Percussion drilling using air has been successfully used in the deep EGS drillings in Espoo, Finland (e.g., Think Geoenergy 2019), and during the drilling of DGE-1 both air rotary drilling and the testing of percussion drilling were applied down to 3365 m, as described in Rosberg and Erlström (2019). The main difference when these two drillings are compared with FFC-1 is that problems with water influx are only reported from FFC-1.

Bedrock composition

The description of the cuttings (examples are shown in Fig. 6) and correlation with the Spectral Gamma Ray log (Fig. 7) reveal that the crystalline basement in FFC-1 is dominated by two rock suites. The relative distributions of the various rock types in FFC-1 are compared with DGE-1 in Table 1.

Photographs showing examples of cuttings from the dominant rock types in FFC-1 and DGE-1. Photo M. Erlström

Composite logs of DGE-1 and FFC-1 based on descriptions of cuttings and interpretation of wire-line logging data, and calculated heat productivity

Table 1 Relative distribution of rock types in FFC-1 and DGE-1

The bulk part (80.6%) of the penetrated rock mass in FFC-1 is composed of different gneisses. These felsic rocks (> 70% SiO2) are dominated by quartz, feldspar, and minor amounts of minerals such as mica and hornblende. The colour varies from dark red to light red and grey. Darker red banded gneiss, i.e. with numerous concordant thin layers of meta-mafic rocks, is predominantly found in the upper part of the FFC-1 borehole down to c. 2300 m depth. The varieties occurring below are less foliated and relatively quartz-rich and greyish. There are also scattered thin intervals with very quartz-rich and mica-rich (muscovite) gneiss at 2630–2640 m, 2756–2686 m and 2835–2841 m.
The mafic rocks constitute 19.4% of the penetrated rock mass in FFC-1 and occur both as thick bodies in the upper and lowermost parts of FFC-1, and as metre-thick and thinner bands/streaks within the gneiss-dominated intervals. The density ranges between 2.9 and 3.2 kg/dm3, which is significantly higher than that of the gneisses, with densities between 2.6 and 2.7 kg/dm3. This difference is also clearly pictured in the PhotoDensity log (Fig. 3), which furthermore helps to differentiate these rock types in the borehole. Larger bodies of dense mafic rocks also provide conditions for delivering strong seismic reflectors, which is verified in the older seismic data in Malmö, where the thick amphibolite at the bottom of the borehole (3025–3133 m) links up to a seismic reflector (Hammar et al. 2021). Likewise, strong seismic reflectors in the basement in DGE-1 were confirmed to originate from changes in the acoustic impedance at the gneiss–amphibolite boundaries (Rosberg and Erlström 2019). The two dominating rock suites in FFC-1 and DGE-1 correspond to the Sveconorwegian gneiss-dominated terrain in SW Sweden, exemplified by the outcrop areas on the Romeleåsen Ridge and in northwest Skåne, e.g., Kullen and Söderåsen (Fig. 1). The gneisses are quite similar in both wells. The main difference is that there is a greater proportion of dark red–brown, foliated and banded gneiss in DGE-1 and that the quartz-rich variety in FFC-1 is not identified in DGE-1. Another notable observation is the absence of Permo-Carboniferous dolerites in FFC-1. These rocks are a common part of the rock mass in DGE-1, as well as on the Romeleåsen Ridge and within the Sorgenfrei-Tornquist Zone in Skåne. This difference could be explained by the main phase of Variscan rifting and emplacement of NW–SE dolerite dykes in Permo-Carboniferous time being concentrated within the Sorgenfrei-Tornquist Zone, with dykes less frequent outside this zone to the southwest. There is also a notable difference in the overall thorium and uranium content in the rock mass in FFC-1 in comparison to DGE-1 (cf. Spectral Gamma log in Fig. 7). The significantly lower values in FFC-1 are interpreted to represent the relatively U–Th-depleted rock mass in the Eastern interior Sveconorwegian segment in comparison to the eastern boundary and transition segments (data from the radiometric map of Sweden; Geological Survey of Sweden).

Fracturing and structure of the rock mass

The upper c. 500 m of the crystalline bedrock in FFC-1 is found to be severely fractured. The Caliper log shows an over-sized borehole and very poor borehole conditions (Fig. 3), which affects the quality of the geophysical logs in this section. Below 2500 m the fracturing affects the borehole less, which provided good operational conditions for the CXD and the SCMI tools. The results from these gave unique and novel data on the fracturing of the rock mass at these depths. The first study of the fracturing, performed by Weatherford on commission from E.ON, shows that there is conclusive evidence of a conductive fractured zone between 2562 and 2695 m (Badulescu and Ciuperca 2021). The fracture volumetric density as well as the fracture frequency in this zone are significantly higher (on average 3.39 m2/m3 and 2.49 frac/m) than below, where they are generally less than 2 m2/m3 and 1–2 frac/m (Fig. 3, Table 2). The fracture frequency is commonly > 4 frac/m in the most fractured part of the zone, where individual fracture apertures also reach up to 12 mm (Ciuperca et al. 2021).
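As a minimal illustration of how a linear fracture frequency like the one in Table 2 below is obtained from image-log picks, the sketch that follows simply counts fractures per metre over user-defined depth intervals. The function name and the example depths are purely illustrative and are not FFC-1 data; the volumetric density (m2/m3) quoted above additionally requires fracture orientations and sizes and is only hinted at in a comment.

```python
from bisect import bisect_left, bisect_right

def linear_fracture_frequency(fracture_depths_m, intervals):
    """Linear fracture frequency (fractures per metre) for (top, base) depth intervals."""
    depths = sorted(fracture_depths_m)
    out = {}
    for top, base in intervals:
        count = bisect_right(depths, base) - bisect_left(depths, top)
        out[(top, base)] = count / (base - top)
    return out

# A handful of invented picks, just to show the call for two depth intervals;
# real image logs contain hundreds of picked fractures.
example_picks = [2565.2, 2566.1, 2570.4, 2580.9, 2601.3, 2700.5, 2750.8, 2801.1]
print(linear_fracture_frequency(example_picks, [(2562, 2695), (2695, 3106)]))
# A volumetric fracture density (m2/m3) would in addition need each fracture's
# orientation and area, typically with a Terzaghi-type correction for the
# sampling bias of a near-vertical borehole.
```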
Table 2 Calculated linear fracture frequency for three intervals in FFC-1 A calculation of the linear fracture frequency for three main borehole intervals including the fractured zone (2562–2695 m) and the interval above (2450–2522 m) and below (2695–3106 m) is shown in Table 2. No values could be calculated above 2450 m due to poor borehole conditions. The average linear fracture frequencies in these three major intervals range between 0.85 and 2.49 frac/m and average volumetric fracture densities between 1.68 and 3.39 m2/m3. Borehole data on the spatial distribution and orientation of fractures are, besides in situ stress and hydraulic data, essential parameters to build Discrete Fracture Network (DFN) models. However, it is not a straightforward process to assess the linear fracture frequency and the volumetric fracture density in FFC-1 with respect to the feasibility of the rock mass to be hydraulically stimulated and made suitable as a geothermal reservoir. There are also few reference values on the linear fracture intensity for the Fennoscandian crystalline basement at greater depths in Sweden. Existing data come primarily from the Swedish Nuclear Fuel and Waste Management Company (SKB) borehole investigations down to c. 1000 m depth. Their data from boreholes in granitoid rocks at Laxemar, on the east coast of Sweden, give an open fracture frequency generally below 3 frac/m and a volumetric fracture density between 1.4 and 4.6 m2/m3 (La pointe et al. 2008; SKB 2009). Another reference data set comes from image-log data of the spatial distribution of fractures in the crystalline basement in the deep geothermal projects in Basel in Switzerland, Soultz-sous-Forêts in France and the Rosemanowes site in the UK. These give fracture spatial distribution signatures for depths ranging between c. 2000 and 5000 m which are mostly below c. 1 frac/m (Meet 2019). In addition, < 0.3 frac/m are reported between 6900 and 7135 m depth in the KTB deep borehole (Zimmerman et al. 2000). Comparable reference values on the fracture volumetric density are even more scarce. A study by Rogers et al. (2015), even if this is on relative shallow rock masses, indicates that the transition from a massive rock mass to a more kinematically governed blocky rock mass occurs at c. 2–2.5 m2/m3. With respect to the scarcity of data and difference in geological setting between the various reference values, it appears that the FFC-1 data fall within what is generally observed for other deep drilling projects in crystalline rocks. The dominant strike of the SCMI-identified hydraulically conductive borehole cross-cutting fractures, between 2154 and 3106 m is N–S. There is also a less dominant NE–SW strike noted. Remarkably there is no significant NW–SE strike, which is otherwise the general direction of the main faults in Skåne (Badulescu and Ciuperca 2021). The conductive fracture sets are interpreted to be open and correspond to the NW–SE strike–slip and associated N–S extension in Skåne since the Permo-Carboniferous (Bergerat et al. 2007). These initial interpretations will, however, be further scrutinized in an ongoing in-depth analysis. Scattered iron oxide coatings on the fracture/fissure planes, are identified on the cuttings, especially for the section down to c. 2500 m. Only few of these were noted in the deeper part of the borehole. Besides this, a white, soft non-calcareous clayey material is found in most samples from the gneiss-dominated intervals. 
XRD and chemical analysis show that it is mainly composed of feldspars, quartz and mixed-layer clay minerals with chlorite. A possible explanation is that this is low-temperature alteration of feldspars, a process that is common in granitoids (Plümper and Putnis 2009; Morad et al. 2010). It is notable that the same type of material also occurs frequently in the DGE-1 cuttings (Rosberg and Erlström 2019). But it still needs to be clarified how these likely hydrothermal alterations are distributed in the rock mass. Are they primarily found in association with fractures, or are they more evenly dispersed in the rock? There are also frequent occurrences of calcite fracture fillings in the cuttings, especially in the metabasite and amphibolite intervals. Besides these mineralizations, there are also greenish undulating fracture fillings with epidote and chlorite found in the gneiss. The overall fracture fillings found in FFC-1 and DGE-1 agree well with the fracture mineralogy in rocks from the Dalby quarry (Halling 2015). The overall structure (foliation, banding and folding) of the bedrock in Skåne is interpreted by Ulmius et al. (2018) and by Wikman et al. (1993) to be steeper in the boundary zone and gradually more horizontal in the interior part of the Eastern segment in the Sveconorwegian Province. A dominant low-angle to horizontal foliation, identified in the CXD and SCMI logs in FFC-1, fits with the overall structure of the interior part of the Eastern segment.

Thermal conductivity, density, specific heat capacity and calculated in situ thermal conductivity

Thermal conductivity, density and specific heat capacity measured on cutting samples from FFC-1 are presented in Table 3, based on analysis by Klitzsch and Ahrensmeier (2021). Results, except specific heat capacity, from similar bedrock samples from the Dalby quarry on the Romeleåsen Ridge are also included in Table 3. The measured thermal conductivity values on cuttings dominated by gneiss varied between 3.85 and 3.91 W/(mK), and between 2.54 and 2.59 W/(mK) for cuttings dominated by amphibolite/metabasite. In addition, the thermal conductivity was measured to 3.1 W/(mK) on cuttings dominated by foliated and banded gneiss with a relatively high amount of mafic minerals. The values are used in Eq. 2 for calculating the in situ thermal conductivity. The calculated average kin-situ is 3.57 W/(mK) for gneiss, 2.35 W/(mK) for amphibolite/metabasite and 2.85 W/(mK) for the banded gneiss. Based on the geological classification, the thermal conductivity for the banded gneiss is later used for calculating the heat flow in the open hole section down to 2693 m, and the higher value for the gneiss is used below this depth. In addition, the thermal conductivity value for amphibolite/metabasite is used for intervals dominated by these rock types.

Table 3 Compilation of thermal conductivity and density measured on cutting samples from FFC-1 and bedrock samples from Dalby quarry on the Romeleåsen Ridge

It can be seen in Table 3 that samples dominated by gneiss have a significantly higher thermal conductivity and lower density than samples dominated by amphibolite/metabasite. There is also a good agreement between the density values obtained from the cuttings and the values obtained from the outcrop samples. The thermal conductivity values measured on the cuttings from FFC-1 are relatively close to, but higher than, the values measured on the outcrop samples. The specific heat capacity is an important parameter for future thermal modelling of an EGS system.
Klitzsch and Ahrensmeier (2021) measured the parameter using a calorimeter at different temperatures. Unfortunately, there are no measurements on the outcrop samples. In Table 3, the values for gneiss-dominated samples are quite similar to the values obtained for amphibolite/metabasite. When the values are expressed as volumetric heat capacity, the average values for amphibolite/metabasite, 2.32 and 2.62 MJ/m3K, are greater than the averages for the gneiss-dominated samples, 2.05 and 2.33 MJ/m3K, measured at 30 °C and 100 °C, respectively. The values for the FFC-1 samples correlate with what is known from cored boreholes in granitoid rocks at Laxemar on the east coast of Sweden, thus further into the Fennoscandian Shield. Sundberg et al. (2009) present heat capacities for these rocks that are between 2.16 and 2.23 MJ/m3K. Thermal gradient The bottomhole temperature in the FFC-1 borehole is 84.1 °C. In the upper part, above 2610 m, the mean temperature gradient is 23.5 °C/km, and in the lower part, below 2880 m, the mean temperature gradient is 17.4 °C/km; see Fig. 3. The zone in between seems to be thermally disturbed, since the temperature gradient dropped from 23.5 to 7 °C/km. Below the zone the gradient increased again to 17.4 °C/km (Fig. 3). The temperature anomaly is interpreted to be caused by water influx through conjugated open natural fractures, which are identified in the logs by lower density, sonic anisotropy, changes in the brittleness index polarity, an increase of the volumetric fracture density, increasing fracture aperture and the presence of Stoneley chevron up-going and down-going reflections (Ciuperca et al. 2021). Another explanation could be that parts of this interval were acting as a loss zone during the drilling operation, meaning that the colder drilling fluid propagated into and cooled parts of this formation interval. In comparison with the average temperature gradient of 22 °C/km in DGE-1, the upper part of FFC-1, above 2610 m, has an approximately 2 °C/km higher gradient, whereas in its lower part, below 2880 m, the gradient is about 5 °C/km lower. The bottomhole temperature in DGE-1 was 85.1 °C at around 3700 m depth, and the bottomhole temperature in FFC-1 was just 1 °C less, but it was measured at 3100 m depth. The extrapolated temperature in FFC-1 at 3700 m is 94.3 °C, using the lower temperature gradient of 17 °C/km. Rosberg and Erlström (2019) reported that the temperature gradient in the sedimentary succession in DGE-1 was much lower than expected, which is an explanation for the relatively low bottomhole temperature in DGE-1. In FFC-1, the temperature gradient in the sedimentary succession was similar to the gradients, between 28 and 32 °C/km, observed in other wells located in the sedimentary succession in southwest Skåne (Erlström et al. 2018). The lower gradient in FFC-1, 17.4 °C/km, is comparable with gradients measured in other deep wells in the Fennoscandian basement, such as the gradient in the 6957-m-deep borehole Gravberg-1, which varies between 14 and 18 °C/km (Juhlin et al. 1998), and the gradient of 14–17 °C/km in the Outokumpu R-2500 research borehole in Finland (Kukkonen et al. 2011). Similar values are also presented in Sundberg et al. (2009) from measurements down to around 1400 m depth at Laxemar in southeast Sweden, and gradients between 15 and 20 °C/km were obtained in the 1820-m-deep borehole Bh32012 in Lake Vättern, Sweden. The gradient in the deep wells OTN1-3 in Espoo, Finland, is also 17 °C/km (Kukkonen and Pentti 2021).
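As a quick arithmetic check on the extrapolated temperature quoted above (a worked example added here for clarity, not part of the original analysis), projecting the measured bottomhole temperature of 84.1 °C at 3100 m downward over the remaining 600 m with the lower gradient of 17 °C/km gives

$$T(3700\ \text{m}) \approx 84.1\ ^\circ\text{C} + 17\ ^\circ\text{C/km} \times 0.6\ \text{km} \approx 94.3\ ^\circ\text{C},$$

which reproduces the 94.3 °C reported for FFC-1 at the approximate total depth of DGE-1.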
The higher gradient in FFC-1, 23.5 °C/km, exceeds the gradients measured in other deep wells in the Fennoscandian basement, as well as the gradient of 20 °C/km that was observed for the 2500-m-deep COSC-1 borehole in the Swedish Caledonides in west central Sweden (Lorenz et al. 2015). The higher gradient in FFC-1 and the one measured in DGE-1 are more similar to the gradients of 21–28 °C/km measured in the KTB borehole in the upper central European crust in Germany (Emmermann and Lauterjung 1997) and the gradient of 20 °C/km measured in the Precambrian Canadian Shield (Majorowicz et al. 2014). Heat flow In the upper part of the open hole section, above 2610 m, most of the calculated heat flow is between 60 and 70 mW/m2, with an average of 66 mW/m2; see Fig. 3. In the lower part of the open hole section, most of the calculated heat flow varies between 40 and 60 mW/m2, with an average around 51 mW/m2. The zone in between seems to be thermally disturbed, as mentioned previously, and the calculated heat flow values for this section will most probably differ from values obtained under undisturbed thermal conditions. Therefore, those values are not considered in the further evaluation of the calculated heat flow values. The heat flow values in the upper part of FFC-1 are close to the values reported for DGE-1 (Rosberg and Erlström 2019). The heat flow values are also close to the values reported for other deep boreholes in the Danish Basin and to the heat flow model along the EUGENO-S transect for the southwest margin of the Fennoscandian Shield (EUGENO-s Working Group 1998; Balling 1995). Balling (1995) also reports that the values for the central parts of the shield are lower, around 40–50 mW/m2, which corresponds more closely to the values calculated for the lower part of the open hole section in FFC-1. Similar values are also presented in Aldahan et al. (1991) for the Gravberg-1 borehole in central Sweden, the Bh32012 drilling in Lake Vättern (Sundberg et al. 2016), the Outokumpu R-2500 research borehole in Finland (Kukkonen et al. 2011) and for the OTN-1 well in Espoo (Kukkonen and Pentti 2021). Heat production The calculated heat production (A) for the crystalline section in FFC-1, using the concentrations of the radiogenic isotopes of uranium (U), thorium (Th) and potassium (K) from the spectral gamma ray log in Fig. 7 and the photo-density log, is presented in Fig. 3. The average heat production is around 3.0 µW/m3. However, the heat production is considerably lower in intervals dominated by metabasite/amphibolite, around 1.5 µW/m3. In addition, the upper part down to around 2340 m depth also shows lower values, around 2.4 µW/m3, which reflect the low uranium content measured over this section. It can also be seen in Fig. 7 that the concentrations of potassium and thorium differ significantly between the metabasite–amphibolite and gneiss–granite rock types, but the uranium content is quite similar. The same pattern can be seen in the spectral gamma ray log from DGE-1; see Fig. 7. The heat production in DGE-1 is higher, with values reaching up to 8 µW/m3, in the open hole section down to around 3040 m depth. Below this depth the average heat productivity is around 3.5 µW/m3, and the values are of the same order as those calculated for FFC-1. The difference is explained by the relatively higher uranium content in the red–brown gneiss interval between 2160 and 3040 m in DGE-1, which contributes to a higher heat productivity compared to FFC-1.
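For readers unfamiliar with how heat production is derived from logging data, the relation of Bücker and Rybach (1996), which is included in the reference list and is presumably the one applied here (an assumption on our part), computes the radiogenic heat production A from the bulk density ρ and the U, Th and K concentrations obtained from the spectral gamma ray and density logs:

$$A\ [\mu\text{W/m}^3] = 10^{-5}\,\rho\ [\text{kg/m}^3]\left(9.52\,c_{U}\ [\text{ppm}] + 2.56\,c_{Th}\ [\text{ppm}] + 3.48\,c_{K}\ [\%]\right).$$

With a typical gneiss density of c. 2650 kg/m3 and a few ppm of U and Th, this relation yields values of the order of a few µW/m3, consistent with the averages reported above.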
The average value for the entire open section in DGE-1 is 5.4 µW/m3. Heat production values within the same range as the ones calculated for FFC-1 can also be found in other wells located within the Fennoscandian Shield, such as the Gravberg-1 borehole in central Sweden, Laxemar in southeast Sweden and the Outokumpu R-2500 research borehole in Finland (Aldahan et al. 1991; Sundberg et al. 2009; Kukkonen et al. 2011). Summary of thermal data from FFC-1 in relation to other deep boreholes The previously presented thermal data from FFC-1 are summarized in Table 4, together with a comparison to data from other deep wells drilled in the upper crystalline crust within the Fennoscandian Shield. A comparison is also made with the Hunt well, which represents a similar geological setting on another Precambrian shield margin, i.e., the Canadian Shield. Table 4 Comparison of thermal data from the FFC-1 well with other deep boreholes drilled in the crystalline upper crust The thermal data acquired in FFC-1 are a valuable contribution, since there is a limited number of deep wells located in the Fennoscandian Shield, as well as in the upper crystalline crust in general. In addition, the thermal data have been acquired from the only deep crystalline basement well in the Danish Basin. However, new temperature measurements are required to explain the different temperature gradients obtained in the well and the calculated heat flow. It seems that the temperature survey was conducted under thermal conditions different from the pre-drilling conditions. However, the acquired values still give an indication of the general thermal regime in the well. Unfortunately, new temperature measurements will be both expensive and difficult to conduct, because it is a deep well and special logging equipment is required for entering the well since there is an offset in the borehole. In other words, only a few companies can perform the additional logging, making it even more expensive. The geological investigations of FFC-1 and DGE-1 verify that the crystalline basement below the c. 2-km-thick sedimentary cover is dominated by various gneisses and mafic intrusive rocks belonging to the Sveconorwegian Province of southwestern Sweden. The density contrast, between 2.6 and 3.1 kg/dm3 for the two dominating rock types (gneiss and amphibolite), provides favourable conditions for creating seismic reflectors, which is also verified by the thick amphibolite in gneiss-dominated surroundings at the bottom of the FFC-1 borehole. The upper c. 400 m of the basement in FFC-1 is severely fractured and water-bearing, which ruled out continued use of the planned percussion air drilling; conventional rotary drilling was therefore performed for the rest of the borehole. The evaluation of the conventional rotary drilling in FFC-1 and DGE-1 showed high drill bit consumption and low ROPs in both boreholes. The average bit life was 62 m and 68 m in the two wells, with an average ROP between 2 and 4 m/h and no clear preference regarding bit type (PDC or TCI). The geological investigations show a comparable composition of the crystalline basement at the two sites, which is therefore not interpreted to account for any difference in drilling performance between them. The bottomhole temperature in the FFC-1 borehole is 84.1 °C. In the upper part of the crystalline basement, 2150–2610 m, the temperature gradient of 24 °C/km and the calculated heat flow of 66 mW/m2 are considerably higher than reported from other deep wells in the Fennoscandian basement.
The corresponding values for the deeper parts of the FFC-1 well are 17 °C/km and 51 mW/m2, which are more like the values reported from other deep wells in the Fennoscandian basement. However, inflow of colder formation fluid from hydraulically active fractures strongly influences the temperature conditions in the well below 2562 m. New measurements under equilibrated temperature conditions are required to evaluate how much the low gradient is influenced by the hydraulically active fractures between 2562 and 2695 m. The average heat production is 3.0 µW/m3 in FFC-1 and generally lower values are observed for intervals dominated by amphibolite/metabasite, around 1.5 µW/m3. The rocks in FFC-1 are overall depleted in uranium and thorium in comparison to the rocks in the DGE-1 borehole. The heat productivity in DGE-1 is, overall, higher with an average of 5.8 µW/m3. The spatial distribution of fractures has successfully been mapped using borehole imaging logs. The evaluation of the data and calculation of the linear fracture frequency for three main borehole intervals including the fractured zone (2562–2695 m) and the interval above (2450–2522 m) and below (2695–3106 m) show a dominance of N–S oriented open fractures, a fracture frequency varying between 0.85 and 2.49 frac/m and a fracture volumetric density between 1.68 and 3.39 m2/m3. Scattered thinner intervals with higher values are also noted, but overall these data correlate to what is noted for similar, though scarce reference data from other deep wells in crystalline rocks. In sum, the FFC-1 and DGE-1 boreholes have provided insight and new empirical data on the concealed crystalline basement in the Fennoscandian Shield Border Zone that previously only had been assessed by assumptions and modelling. The outcome of the drilling operation has also provided insight regarding the drilling performance of the basement and statistical data on various drill bits used. The logging data, cuttings material and well reports used for this study are available at E.ON Malmö, the Geological Survey of Sweden and at the Department of Engineering Geology at Lund University. The FFC-1 well is presently accessible and located in a locked concrete cellar, even though there is an off set in the well at 2263 m, which requires special logging tool strings to pass. Aldahan AA, Castañ J, Collini B, Gorody T, Juhlin C, Sandstedt H. Scientific summary report of the deep gas drilling project in the Siljan Ring Impact structure. Vattenfall Rep. p. 1–257. 1991. Allis R, Moore J, Davatzes N, Gwynn M, Hardwick C, Kirby S, McLennan J, Pankow, K, Potter S, Simmons, S. EGS concept testing and development at the Milford, Utah FORGE Site. In: proceedings, 41st Workshop on geothermal reservoir engineering Stanford University, Stanford, California, February 22–24, 2016 SGP-TR-209. p. 13. 2016. Andersson O. Deep drilling KLX 02-drilling and documentation of a 1700 m deep borehole at Laxemar, Sweden. SKB Rep TR 94-19. p. 46. 1994. Armstead HCH, Tester JW. Heat mining. London: E. and F. N. Spon; 1987. Arshavskaya NI, Galdin NE, Karus EW, Kuznetsov OL, Lubimova EA, Milanovsky SY, Nartikoev VD, Semashko SA, Smirnova EV. Geothermic investigations. In: Kozlovsky YA, editor. The superdeep well of the Kola Peninsula. Berlin: Springer; 1984. p. 387–93. Badulescu C, Ciuperca C. FFC-1 data interpretation report IAES weatherford report, internal project report. 2021. Balling N. 
Heat flow and lithospheric temperature along the northern segment of the European Geotraverse, In: Freeman R, Mueller St, editors. In: proceedings of the 6th workshop of the European Geotraverse (EGT) Project, European Science Foundation, Strasbourg. 1990; 405–16. Balling N. Heat flow and thermal structure of the lithosphere across the Baltic Shield and northern Tornquist Zone. Tectonophysics. 1995;244:13–50. Bergertat F, Angelier J, Andreasson P-G. Evolution of paleostress fields and deformation of the Tornquist Zone in Scania (Sweden) during Permo-Mesozoic and Cenozoic times. Tectonophysics. 2007;444:93–110. Brown DW. Hot dry rock geothermal energy: important lessons from Fenton Hill. In: proceedings 34th workshop on geothermal reservoir engineering, SGP-TR-187. Stanford, CA; 2009. p. 139–42. Bücker C, Rybach L. A simple method to determine heat production from gamma-ray logs. Mar Pet Geol. 1996;13:373–5. Cardoe J, Nygaard G, Lane C, Saarno T, Bird M. Oil and gas drill bit technology and drilling application in engineering saves 77 drilling days on the World's deepest engineered geothermal systems EGS wells. SPE/IADC-204121-MS. 2021. Chapman DS, Furlong KP. Thermal state of the continental lower crust. In: Arculus DM, Kay KW, editors. Continental lower crust. Amsterdam: Elsevier; 1992. p. 179–99. Ciuperca C-L, Camil B, Erlström M, Hammar A, Egard, M. An integrated formation evaluation approach evaluated the basement temperature anomaly. EAGE 82nd conference and exhibition, amsterdam, extended abstract: p. 4. 2021. DONG. Final testing report FFC-1, geothermal project Malmö, E.ON project report. 2003. DONG. Final well report FFC-1, Geothermal project Malmö, E.ON project report. 2006. E.ON. End of well report—air hammer FFC 1 20201221, internal project report. 2021. Emmermann R, Lauterjung J. The German continental deep drilling program KTB: overview and major results. J Geophys Res. 1997;102:18179–201. Energy and Geoscience Institute at the University of Utah. Utah FORGE: well 56-32 drilling data and logs. 2021. https://doi.org/10.15121/1777170. Erlström M. Chapter 24: carboniferous-neogene tectonic evolution of the Fennoscandian transition zone, southern Sweden. In: Stephens M-B, Weihed JB, editors. Sweden lithotectonic framework, tectonic evolution and mineral resources, vol. 50. London: Geological Society Memoir; 2020. p. 603–20. Erlström M, Deeks N, Sivhed U, Thomas S. Structure and evolution of the Tornquist Zone and adjacent sedimentary basins in Scania and the southern Baltic Sea area. Tectonophysics. 1997;271:191–215. Erlström M, Sivhed U, Wikman H, Kornfält KA. Beskrivning till berggrundskartorna 2D Tomelilla NV, NO, SV, SO, 2E Simrishamn NV, NO, 1E Örnahusen NV. Sveriges geologiska undersökning; Af 212–214:1–141. 2004 (in Swedish with an English summary). Erlström M, Boldreel LO, Lindström S, Kristensen L, Mathiesen A, Andersen MS, Nielsen LH. Stratigraphy and geothermal assessment of Mesozoic sandstone reservoirs in the Øresund Basin—exemplified by well data and seismic profiles. Bull Geol Soc Den. 2018;66:123–49. EUGENO-s Working Group. Crustal structure and tectonic evolution of the transition between the Baltic Shield and the north German Caledonides. Tectonophysics. 1998;150:253–348. Garabetian T (2019) Report on competitiveness of the geothermal industry. Tech Rep, European technology and innovation platform on deep geothermal (ETIP-DG). http://www.etip-dg.eu/front/wp-content/uploads/D4.6-Report-on-Competitiveness.pdf. Accessed 10 May 2021. Halling J. 
Inventering av sprickmineraliseringar i en del av Sorgenfrei-Tornquistzonen, Dalby stenbrott, Skåne. Master of Science thesis Department of Geology, Lund University. 2015; 448:1–36 (in Swedish). Hammar A, Erlström M, Heikkinen P, Juhlin C, Malin P. Malmö EGS site basement-reflection seismic reinterpretation and drill confirmation. Abstract, geothermal resources council annual meeting and expo, San Diego. 2021. Howard-Orchard D. Final well report Lund DGE#1 deep geothermal energy project, Lunds Energi, technical report. 2003. IADC. IADC drilling manual eBook version (V.11). International association of drilling contractors. 2000. Jiang G, Tang X, Rao S, Gao P, Zhang L, Zhao P, Hu S. High-quality heat flow determination from the crystalline basement of south-east margin of north China Craton. J Asian Earth Sci. 2016;118:1–10. Juhlin C, Wallroth T, Smellie J, Leijon B, Eliasson T, Ljunggren C, Beswick J. The very deep hole concept—geoscientific appraisal of conditions at great depth. Swedish Nuclear Waste Programme Technical Report SKB-TR-98-05. p. 128. Klitzsch N, Ahrensmeier L. Thermal matrix properties measured on cuttings from the Malmö well. Brief report about test measurements on 5 cuttings. RWTH Aachen Univ Rep. 2021. Kukkonen IT, editor. Outokumpu deep drilling project 2003–2010. Geological Survey of Finland, Special paper 51. 2011. Kukkonen IT, Pentti M. St1 deep heat project: geothermal energy to the district heating network in Espoo. IOP Conf Ser Earth Environ Sci. 2021;703: 012035. Kukkonen IT, Rath V, Kivekäs L, Šafanda J, Cermak V. Geothermal studies of the Outokumpu Deep Drill Hole, Finland: vertical variation in heat flow and paleoclimatic implications. Phys Earth Planet Inter. 2011;188:9–25. Leary P, Malin P, Saarno T, Kukkonen I. Prospects for assessing enhanced geothermal system (EGS) basement rock flow stimulation by wellbore temperature data. Energies. 2017;10(12):1–33. https://doi.org/10.3390/en10121979. Ledingham P, Cotton L, Law R (2019) The united downs deep geothermal power project. In: proceedings, 44th workshop on geothermal reservoir engineering Stanford University, Stanford, California, February 11–13, 2019 SGP-TR-214, p. 11. Liboriussen J, Ashton P, Tygesen T. The tectonic evolution of the Fennoscandian Border Zone. Tectonophysics. 1987;137:21–9. Lorenz H, Rosberg J-E, Juhlin C, Bjelm L, Almqvist B, Berthet T, Conze T, Gee D, Klonowska I, Pascal C, Pedersen K, Roberts N, Tsang C-F. COSC-1-drilling of a subduction-related allochthon in the Palaeozoic Caledonide orogen of Scandinavia. Sci Dril. 2015;19:1–11. Majorowicz J, Chan J, Crowell J, Gosnold W, Heaman LM, Kück J, Nieuwenhuis G, Schmitt DR, Unsworth M, Walsh N, Weiders S. The first deep heat flow determination in crystalline basement rocks beneath the Western Canadian Sedimentary Basin. Geophys J Int. 2014;197:731–47. Malin P, Saarno T, Kwiatek G, Kukkonen I, Leary P, Heikkinen P. Six Kilometers to Heat: Drilling, Characterizing & Stimulating the OTN-3 Well in Finland. In: Proceedings: World Geothermal Congress. Reykjavik, Iceland; 2021. Meet. Deliverable D3.2. 1D/2D DFN models of borehole fractures and hydraulic circulation simulations. H2020 Grant Agreement No 792037. 2019. https://www.meet-h2020.com/wp-content/uploads/2020/07/MEET_Deliverable_D3.2_25042019_VF.pdf. Accessed 15 Apr 2021. Morad S, El-Gahli MAK, Caja MA, Sirat M, Al-Ramadan K, Mansurbeg H. Hydrothermal alteration of plagioclase in granitic rocks of proterozoic basement SE Sweden. Geol J. 2010. https://doi.org/10.1002/gj1078. 
Norling E, Wikman H. Beskrivning till berggrundskartan Höganäs NO/Helsingborg NV. Sveriges Geologiska Undersökning Af. 1990;129:1–123 (in Swedish with an English summary). Plümper O, Putnis A. The complex hydrothermal history of granitic rocks: multiple feldspar replacement reactions under subsolidus conditions. J Petrol. 2009;50:967–87. La Pointe P, Fox A, Hermanson J, Öhman J (2008) Geological discrete fracture network model for the Laxemar site—site descriptive modelling SDM-Site Laxemar. SKB report R-08-55. p. 260. Rogers S, Elmo D, Webb G, Catalan A. Volumetric fracture intensity measurement for improved rock mass characterisation and fragmentation assessment in block caving operations. Rock Mech Rock Eng. 2015;48:633–49. Rosberg J-E, Erlström M. Evaluation of the Lund deep geothermal exploration project in the Romeleåsen fault zone, south Sweden: a case study. Geotherm Energy. 2019;7:10. Sivhed U, Wikman H, Erlström M. Beskrivning till berggrundskartorna 1C Trelleborg NV, NO, 2C Malmö SV, SO, NV, NO. Sveriges geologiska Undersökning. Af 191–194, 196, 198:1–143. 1999. (in Swedish with an English summary). SKB (2009) Site description of Laxemar at completion of the site investigation phase SDM-site Laxemar. SKB Rep TR 09-01. p. 637. Stephens MB, Wahlgren C-H. Chapter 15: polyphase continental crust eastern segment, sveconorwegian orogeny. In: Stephens M-B, Weihed JB, editors. Sweden lithotectonic framework, tectonic evolution and mineral resources, vol. 50. London: Geological Society Memoir; 2020. p. 351–96. Sundberg J, Back P-E, Ländell M, Sundberg A (2009) Modelling of temperature in deep boreholes and evaluation of geothermal heat flow at Forsmark and Laxemar. SKB Techn Rep. TR-09-14. 2009; p. 87. Sundberg J, Näslund J-O, Claesson Liljedahl L, Wrafter J, O'Regan M, Jakobsson M, Preto, P, Larson S-Å. Thermal data for paleoclimate calculations from boreholes at Lake Vättern. SKB Rep. P-16-03. 2016; p. 160. Teale R. The concept of specific energy in rock drilling. Int J Rock Mech Min Sci. 1965;2:57–73. Tester JW, Anderson BJ, Batchelor AS, Blackwell DD, DiPippo R, Drake EM, Garnish J, Livesay B, Moore MC, Nichols K, Petty S, Toksoz MN, Veatch RW. The Future of Geothermal Energy Impact of Enhanced Geothermal Systems (EGS) on the United States in the 21st Century, Massachusetts Institute of Technology; 2006. https://www1.eere.energy.gov/geothermal/pdfs/future_geo_energy.pdf. Accessed 5 Apr 2021. Think Geoenergy. Drilling Finland's deepest well on record—the geothermal well at Otanemi, Richter A. 2019. https://www.thinkgeoenergy.com/drilling-finlands-deepest-well-on-record-the-geothermal-well-at-otaniemi/. Accessed 26 April 2021. Thybo H. A seismic model along the EGT profile - from the North German Basin into the Baltic Shield. In: Proceedings of the 5th Study Centre on the European Geotraverse Project. Strasbourg: ESF; 1990. p. 99–108. Ulmius J, Möller C, Page L, Johansson L, Ganerod M. The eastern boundary of Sveconorwegian reworking in the Baltic Shield defined by 40Ar/39Ar geochronology across the southernmost Sveconorwegian Province. Precambr Res. 2018;307:201–17. Wikman H, Bergström J, Sivhed U. Beskrivning till berggrundskartan Helsingborg SO. Sveriges Geologiska Undersökning Af. 1993;180:1–114 (in Swedish with an English summary). Wollenberg HA, Smith AR. Radiogenic heat production of crustal rocks: an assessment based on geochemical data. Geophys Res Lett. 1987;16:295–8. Zimmermann G, Alexander K, Burkhardt H. Hydraulic pathways in the crystalline rock in KTB. Geophys J Int. 
2000;142:4–14. We sincerely thank E.ON for sharing the FFC-1 data with us. Special thanks to the E.ON testhole project group with Mats Egard, Axel Hammar, Peder Berne, Jorge Torres and Christian Helgesson. We also want to recognize Mats Renntun and Mats Åbjörnsson, who now are retired, for their efforts in initiating the EGS exploration project in Malmö. Hanna Kervall, Frans Lundberg, Per Wahlquist, Mustafa Abaoutaka and Tobias Erlström are thanked for their valuable contributions during the FFC-1 well site and mudlogging operation. Personnel from Geoop/Ross Engineering are thanked for their openness to share information and knowledge during the drilling of FFC-1. Open access funding provided by Lund University. The used background data on the FFC-1 well come primarily from the E.ON testhole project, that was partly funded by the Swedish Energy Agency (Project No: 49110-1). The performed appraisal by the authors and complementary analyses have been funded by the Geological Survey of Sweden and the departments of Engineering Geology and Geology at Lund University. Engineering Geology, Faculty of Engineering, Lund University, Box 118, 221 00, Lund, Sweden Jan-Erik Rosberg Geological Survey of Sweden, Kiliansgatan 10, 223 50, Lund, Sweden Mikael Erlström Department of Geology, Lund University, Sölvegatan 12, 223 62, Lund, Sweden Both authors contributed to the synopsis, background, and method descriptions. ME compiled and interpreted the geological and geophysical data. JER compiled and evaluated the drilling and the thermal data. Both authors contributed to the discussion, conclusions, and illustrations. Both authors read and approved the final manuscript. Correspondence to Jan-Erik Rosberg. Rosberg, JE., Erlström, M. Evaluation of deep geothermal exploration drillings in the crystalline basement of the Fennoscandian Shield Border Zone in south Sweden. Geotherm Energy 9, 20 (2021). https://doi.org/10.1186/s40517-021-00203-1 FFC-1 DGE-1 Drilling operation and performance Geophysical logging Sveconorwegian Fracturing
Estimation of average treatment effect based on a multi-index propensity score Jiaqin Xu, Kecheng Wei, Ce Wang, Chen Huang, Yaxin Xue, Rui Zhang, Guoyou Qin & Yongfu Yu BMC Medical Research Methodology volume 22, Article number: 337 (2022) Estimating the average effect of a treatment, exposure, or intervention on health outcomes is a primary aim of many medical studies. However, unbalanced covariates between groups can lead to confounding bias when using observational data to estimate the average treatment effect (ATE). In this study, we proposed an estimator to correct confounding bias and provide multiple protection for estimation consistency. With reference to the kernel function-based double-index propensity score (Ker.DiPS) estimator, we proposed the artificial neural network-based multi-index propensity score (ANN.MiPS) estimator. The ANN.MiPS estimator employs the artificial neural network to estimate the MiPS, which combines the information from multiple candidate models for the propensity score and the outcome regression. A Monte Carlo simulation study was designed to evaluate the performance of the proposed ANN.MiPS estimator. Furthermore, we applied our estimator to real data to illustrate its practicability. The simulation study showed that the bias of the ANN.MiPS estimator is very small and its standard error is similar whenever any one of the candidate models is correctly specified, under all evaluated sample sizes, treatment rates, and covariate types. Compared to the kernel function-based estimator, the ANN.MiPS estimator usually yields a smaller standard error when the correct model is incorporated in the estimator. The empirical study indicated that the point estimate of the ATE and its bootstrap standard error from the ANN.MiPS estimator are stable under different model specifications. The proposed estimator extends the combination of information from two models to multiple models and achieves multiply robust estimation for the ATE. Extra efficiency is gained compared to the kernel-based estimator. The proposed estimator provides a novel approach for estimating causal effects in observational studies. Estimating the average treatment effect (ATE) is essential for assessing causal effects of treatments or interventions in biometrics, epidemiology, econometrics, and sociology. The ATE can be estimated by directly comparing mean outcomes between treated and controlled groups in randomized controlled trials [1]. However, randomized controlled trials are usually difficult to implement because of budget restrictions, ethics, and subjects' noncompliance. Therefore, observational studies are increasingly used for estimating the ATE. However, baseline covariates are commonly unbalanced between treated and controlled groups in observational studies, and simply comparing mean outcomes may induce confounding bias [2]. Inverse probability weighting (IPW) under the potential outcome framework is a popular approach for correcting confounding bias [3,4,5]. The IPW approach specifies a propensity score (PS) model to estimate subjects' PS and uses the inverse of the PS to balance baseline covariates between groups [6, 7]. For a binary treatment, the most commonly used PS model is logistic regression. Some machine learning models, such as decision trees [8] and artificial neural networks [9,10,11,12], are also used to estimate the PS. Another widely used approach is outcome regression (OR) [13].
The OR approach specifies an OR model, such as generalized linear model [14] to model the outcome as a function of the treatment and covariates to correct confounding bias directly. Some machine learning models, such as random forest [15] and artificial neural network [16] are also used as the OR model. Both IPW and OR approaches yield consistent estimation only if the corresponding model is correctly specified, but neither can be verified by the data alone. Doubly robust approach, combining the models of PS and OR, can yield consistent estimation when any one of these two models is correctly specified (not necessarily both). Recently, a variety of doubly robust estimators for ATE have been proposed, such as augmented estimating equations estimator [17] and target maximum likelihood estimator [18]. The kernel function-based double-index propensity score (Ker.DiPS) estimator proposed by Cheng et al. [19] is one of the weighting-based doubly robust estimators. They used the Nadaraya-Watson-type kernel function to combine the information from one PS model and one OR model to obtain an integrated PS, which they named as double-index propensity score (DiPS). Using IPW approach based on the DiPS, the Ker.DiPS estimator achieved doubly robust estimation for ATE. However, the integrated PS estimated by Nadaraya-Watson-type kernel may be out of range between 0 to 1. The unreasonable PS violates the causal inference assumption and may yield uncertain estimation. Moreover, the Ker.DiPS estimator allows only two opportunities for estimation consistency. To provide more protection on estimation consistency, we would like to develop an estimator allowing specifying multiple candidate models and can achieve estimation consistency when any one model is correctly specified. Such type of estimator is defined as multiply robust estimator [20, 21]. When combining the information from multiple candidate models to obtain the multi-index propensity score (MiPS), the Nadaraya-Watson-type kernel function may yield unstable estimation as it suffers from the "curse of dimensionality" [22,23,24]. With the development of scalable computing and optimization techniques [25, 26], the use of machine learning, such as artificial neural network (ANN) has been one of the most promising approaches in connection with applications related to approximation and estimation of multivariate functions [27, 28]. The ANN has the potential of overcoming the curse of dimensionality [29, 30] and has been used as a universal approximators for various functional representations [31,32,33]. Therefore, we replaced the kernel function with ANN to conduct nonparametric regression to estimate the MiPS. We aim to achieve multiply robust estimation for ATE using the ANN-based MiPS. The rest of the article is organized as follows. In the Notations and assumptions section, we introduce necessary notations and causal inference assumptions. In the Some existing approaches section, we introduce some existing estimators that leads to the development of our estimator. In the Proposed multi-index propensity score section, we describe the origin and construction of the proposed estimator in detail. In the Simulation studies section, we perform simulations to evaluate the performance of the proposed estimator. A real data analysis was conducted in the Application to NHEFS data section. We make further discussion in the Discussion section and conclude the paper in the Conclusions section. 
Notations and assumptions Suppose that \({\mathbf{Z}}_{i}={\left({Y}_{i},{A}_{i},{\mathbf{X}}_{i}^{{\top }}\right)}^{{\top }}, i=1,\dots ,n\) be the observed data for \({i}^{\mathrm{th}}\) subject from independent and identically distributed copies of \(\mathbf{Z}={\left(Y,A,{\mathbf{X}}^{{\top }}\right)}^{{\top }}\), where \(Y\) is the outcome, \(A\) is the binary indicator of treatment (\(A=1\) if treated and \(A=0\) if controlled), and \(\mathbf{X}\) is the p-dimensional vector of pretreatment covariates. Let \({Y}^{1}\) and \({Y}^{0}\) represent the potential outcomes if a subject was assigned to treated or controlled group, respectively. The formula for average treatment effect (ATE) is $$\Delta ={\mu }_{1}-{\mu }_{0}=E\left({Y}^{1}\right)-E\left({Y}^{0}\right).$$ Under causal inference framework, the identifiability assumptions are usually assumed, that is [6], Assumption 1. Consistency: \(Y=A{Y}^{1}+(1-A){Y}^{0}\) with probability 1; Assumption 2. Ignorability: (Y 1, Y 0) ⫫ A | X, ⫫ denotes statistical independence; Assumption 3. Positivity: \(0<\pi \left(\mathbf{X}\right)<1\), where \(\pi \left(\mathbf{X}\right)=P\left(A=1 \right| \mathbf{X})\) denotes the propensity score. Some existing approaches The IPW estimator is usually used for correcting confounding bias. The propensity score (PS) \(\pi \left(\mathbf{X}\right)=P\left(A=1 \right| \mathbf{X})\) can be modeled as \(\pi \left(\mathbf{X};\boldsymbol{\alpha }\right)={g}_{\pi }\left({\alpha }_{0}+{\boldsymbol{\alpha }}_{1}^{\mathrm{T}}\mathbf{X}\right)\), where \({g}_{\pi }\left(\cdot \right)\) is a specified link function, for example, the inverse of the logit function for the logistic regression, and \(\boldsymbol{\alpha }={\left({\alpha }_{0},{\boldsymbol{\alpha }}_{1}^{\mathrm{T}}\right)}^{\mathrm{T}}\) are the unknown parameters and can be estimated from maximum likelihood estimation. Under causal inference assumptions, the ATE can be estimated by the IPW estimator $$\begin{array}{c}{\widehat\Delta}_{IPW}=\left(\sum\limits_{i=1}^n\frac{A_i}{\pi\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}\right)}\right)^{-1}\sum\limits_{i=1}^n\frac{A_i}{\pi\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}\right)}Y_i-\\ \left(\sum\limits_{i=1}^n\frac{1-A_i}{1-\pi\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}\right)}\right)^{-1}\sum\limits_{i=1}^n\frac{1-A_i}{1-\pi\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}\right)}Y_i,\end{array}$$ where \(\widehat{\boldsymbol{\alpha }}\) is the estimated value of \(\boldsymbol{\alpha }\). If \(\pi \left(\mathbf{X};\boldsymbol{\alpha }\right)\) is correctly specified, \({\widehat{\Delta }}_{IPW}\) is a consistent estimator of \(\Delta\). The OR estimator is another commonly used approach for correcting confounding bias. Let \({\mu }_{A}\left(\mathbf{X}\right)=E\left(Y \right| \mathbf{X},A)\) denote outcome regression (OR), where \(A\in \{\mathrm{0,1}\}\). It can be modeled as \({\mu }_{A}\left(\mathbf{X};{\varvec{\beta}}\right)={g}_{\mu }\left({\beta }_{0}+{{\varvec{\beta}}}_{1}^{T}\mathbf{X}+{\beta }_{2}A\right)\), where \({g}_{\mu }(\cdot )\) is a specified link function, for example, the identity function for the linear regression, \({\varvec{\beta}}={\left({\beta }_{0},{{\varvec{\beta}}}_{1}^{{\top }},{\beta }_{2}\right)}^{{\top }}\) are the unknown parameters and can be estimated from maximum likelihood estimation. Interactions between \(A\) and \(\mathbf{X}\) in OR model can also be accommodated by estimating the OR separately by treated and controlled groups [19]. 
Under causal inference assumptions, the ATE also can be estimated by the OR estimator $${\widehat{\Delta }}_{OR}=\frac{1}{n}\sum_{i=1}^{n} {\mu }_{1}\left({\mathbf{X}}_{i};\widehat{{\varvec{\beta}}}\right)-\frac{1}{n}\sum_{i=1}^{n} {\mu }_{0}\left({\mathbf{X}}_{i};\widehat{{\varvec{\beta}}}\right),$$ where \(\widehat{{\varvec{\beta}}}\) is the estimated value of \({\varvec{\beta}}\). If \(\mu \left(\mathbf{X},A;{\varvec{\beta}}\right)\) is correctly specified, \({\widehat{\Delta }}_{OR}\) is a consistent estimator of \(\Delta\). If the PS model for IPW estimator or the OR model for OR estimator is incorrectly specified, the estimation consistency of \({\widehat{\Delta }}_{IPW}\) or \({\widehat{\Delta }}_{OR}\) with \(\Delta\) can not be guaranteed. To provide protection against model misspecification, Cheng et al. [19] considered integrating the information of PS \(\pi \left(\mathbf{X};\boldsymbol{\alpha }\right)\) and OR \({\mu }_{a}\left(\mathbf{X};{\varvec{\beta}}\right)\) to construct double-index propensity score (DiPS), which is denoted by \(\pi \left(\mathbf{X};{\boldsymbol{\alpha }}_{1},{{\varvec{\beta}}}_{1}\right)=E\left[A | {\boldsymbol{\alpha }}_{1}^{\mathrm{T}}\mathbf{X},{{\varvec{\beta}}}_{1}^{\mathrm{T}}\mathbf{X}\right]\). In order to estimate this conditional expectation, Cheng et al. [19] firstly got the estimated value \({\widehat{\boldsymbol{\alpha }}}_{1}\) of PS model and the estimated value \({\widehat{{\varvec{\beta}}}}_{1}\) of OR model, then used the Nadaraya-Watson kernel estimator [34] to conduct nonparametric regression of \(A\) on \({\widehat{\boldsymbol{\alpha }}}_{1}^{\mathrm{T}}\mathbf{X}\) and \({\widehat{{\varvec{\beta}}}}_{1}^{\mathrm{T}}\mathbf{X}\), to get the estimated value of DiPS as $$\widehat{\pi }\left(\mathbf{X};{\widehat{\boldsymbol{\alpha }}}_{1},{\widehat{{\varvec{\beta}}}}_{1}\right)=\frac{\sum_{j=1}^{n} {\mathcal{K}}_{\mathbf{H}}\left\{\left({\widehat{\mathbf{S}}}_{j}-\widehat{\mathbf{S}}\right)\right\}{A}_{j}}{\sum_{j=1}^{n} {\mathcal{K}}_{\mathbf{H}}\left\{\left({\widehat{\mathbf{S}}}_{j}-\widehat{\mathbf{S}}\right)\right\}}$$ where \({\widehat{\mathbf{S}}}_{i}=\left({\widehat{\boldsymbol{\alpha }}}_{1}^{\mathrm{T}}{\mathbf{X}}_{i},{\widehat{{\varvec{\beta}}}}_{1}^{\mathrm{T}}{\mathbf{X}}_{i}\right)\) and \(\widehat{\mathbf{S}}=\left({\widehat{\boldsymbol{\alpha }}}_{1}^{\mathrm{T}}\mathbf{X},{\widehat{{\varvec{\beta}}}}_{1}^{\mathrm{T}}\mathbf{X}\right)\) are bivariate regressors, which is named double-index. \({\mathcal{K}}_{\mathbf{H}}\left(\bullet \right)\) is a kernel function with a bandwidth \(\mathbf{H}\) of \(2\times 2\) matrix. Using the estimated DiPS \(\widehat{\pi }\left(\mathbf{X};{\widehat{\boldsymbol{\alpha }}}_{1},{\widehat{{\varvec{\beta}}}}_{1}\right)\), the ATE can be estimated by $$\begin{array}{c}{\widehat\Delta}_{DiPS}=\left(\sum\limits_{i=1}^n\frac{A_i}{\widehat\pi\left({\mathbf X}_i;{\widehat{\boldsymbol\alpha}}_1,{\widehat{\beta}}_1\right)}\right)^{-1}\sum\limits_{i=1}^n\frac{A_i}{\widehat\pi\left({\mathbf X}_i;{\widehat{\boldsymbol\alpha}}_1,{\widehat{\beta}}_1\right)}Y_i-\\ \left(\sum\limits_{i=1}^n\frac{1-A_i}{1-\widehat\pi\left({\mathbf X}_i;{\widehat{\boldsymbol\alpha}}_1,{\widehat{\beta}}_1\right)}\right)^{-1}\sum\limits_{i=1}^n\frac{1-A_i}{1-\widehat\pi\left({\mathbf X}_i;{\widehat{\boldsymbol\alpha}}_1,{\widehat{\beta}}_1\right)}Y_i.\end{array}$$ Cheng et al. 
[19] demonstrated that \({\widehat{\Delta }}_{DiPS}\) is a doubly robust estimator: it is consistent when \(\pi \left(\mathbf{X};\boldsymbol{\alpha }\right)\) is correctly specified, or \({\mu }_{A}\left(\mathbf{X};{\varvec{\beta}}\right)\) is correctly specified, but not necessarily both. Proposed multi-index propensity score Although \({\widehat{\Delta }}_{DiPS}\) in (3) can achieve doubly robust estimation for ATE, the DiPS estimated by the Nadaraya-Watson kernel estimator in (2), which may make the estimated probability outside the range of 0 to1, then the above Assumption 3 is violated. Furthermore, \({\widehat{\Delta }}_{DiPS}\) in (3) only allows a single model for PS and a single model for OR, the estimation consistency cannot be guaranteed when both models are incorrect. To provide more protection on estimation consistency, we would like to develop an approach that allows multiple candidate models for PS and/or OR, to achieve multiple robustness: the estimator is consistent when any model for PS or any model for OR is correctly specified. Specifically, we consider multiple candidate models for PS \(\{{\pi }^{k}\left(\mathbf{X};{\boldsymbol{\alpha }}^{k}\right)={g}_{\pi }\left({\alpha }_{0}^{k}+{\boldsymbol{\alpha }}_{1}^{k\mathrm{T}}\mathbf{X}\right),k=1,\dots ,K\}\) and multiple candidate models for OR \(\left\{{\mu }_{A}^{l}\left(\mathbf{X};{{\varvec{\beta}}}^{l}\right)={g}_{\mu }\left({\beta }_{1}^{l}+{{\varvec{\beta}}}_{1}^{l\mathrm{T}}\mathbf{X}+{\beta }_{2}^{l}A\right),l=1,\dots ,L\right\}\), probably with different choices or functional forms of covariates. Then we integrate the information from multiple PS models and multiple OR models to construct multi-index propensity score (MiPS), which is denoted by \(\pi \left(\mathbf{X};{\boldsymbol{\alpha }}_{1}^{1},...,{\boldsymbol{\alpha }}_{1}^{K},{{\varvec{\beta}}}_{1}^{1},...,{{\varvec{\beta}}}_{1}^{L}\right)=E\left[A | {\boldsymbol{\alpha }}_{1}^{1\mathrm{T}}\mathbf{X},...{\boldsymbol{\alpha }}_{1}^{K\mathrm{T}}\mathbf{X},{{\varvec{\beta}}}_{1}^{1\mathrm{T}}\mathbf{X},...,{{\varvec{\beta}}}_{1}^{L\mathrm{T}}\mathbf{X}\right]\). 
In order to estimate this conditional expectation, we firstly get the estimated values \({\widehat{\boldsymbol{\alpha }}}_{1}^{1}\),…, \({\widehat{\boldsymbol{\alpha }}}_{1}^{K}\) of multiple PS models and the estimated values \({\widehat{{\varvec{\beta}}}}_{1}^{1}\),…, \({\widehat{{\varvec{\beta}}}}_{1}^{L}\) of multiple OR models, then a naive idea is to use the multivariate Nadaraya-Watson kernel estimator to conduct nonparametric regression of \(A\) on \({\widehat{\boldsymbol{\alpha }}}_{1}^{1\mathrm{T}}\mathbf{X}\),…, \({\widehat{\boldsymbol{\alpha }}}_{1}^{K\mathrm{T}}\mathbf{X}\) and \({\widehat{{\varvec{\beta}}}}_{1}^{1\mathrm{T}}\mathbf{X}\),…, \({\widehat{{\varvec{\beta}}}}_{1}^{L\mathrm{T}}\mathbf{X}\) to get the estimated value of MiPS as $${\widehat{\pi }}^{Ker}\left(\mathbf{X};{\widehat{\boldsymbol{\alpha }}}_{1}^{1},...,{\widehat{\boldsymbol{\alpha }}}_{1}^{K},{\widehat{{\varvec{\beta}}}}_{1}^{1},...,{\widehat{{\varvec{\beta}}}}_{1}^{L}\right)=\frac{\sum_{j=1}^{n} {\mathcal{K}}_{\mathbf{H}}\left\{\left({\widehat{\mathbf{S}}}_{j}-\widehat{\mathbf{S}}\right)\right\}{A}_{j}}{\sum_{j=1}^{n} {\mathcal{K}}_{\mathbf{H}}\left\{\left({\widehat{\mathbf{S}}}_{j}-\widehat{\mathbf{S}}\right)\right\}},$$ where \({\widehat{\mathbf{S}}}_{j}=\left({\widehat{\boldsymbol{\alpha }}}_{1}^{1\mathrm{T}}{\mathbf{X}}_{j},\dots , {\widehat{\boldsymbol{\alpha }}}_{1}^{K\mathrm{T}}{\mathbf{X}}_{j},{\widehat{{\varvec{\beta}}}}_{1}^{1\mathrm{T}}{\mathbf{X}}_{j},\dots , {\widehat{{\varvec{\beta}}}}_{1}^{L\mathrm{T}}{\mathbf{X}}_{j}\right)\) and \(\widehat{\mathbf{S}}=\left({\widehat{\boldsymbol{\alpha }}}_{1}^{1\mathrm{T}}\mathbf{X},\dots , {\widehat{\boldsymbol{\alpha }}}_{1}^{K\mathrm{T}}\mathbf{X},{\widehat{{\varvec{\beta}}}}_{1}^{1\mathrm{T}}\mathbf{X},\dots , {\widehat{{\varvec{\beta}}}}_{1}^{L\mathrm{T}}\mathbf{X}\right)\) are multivariate regressors, which is named multi-index. \({\mathcal{K}}_{\mathbf{H}}\left(\bullet \right)\) is a kernel function with a bandwidth \(\mathbf{H}\) of \(\left(K+L\right)\times \left(K+L\right)\) matrix. Using the estimated kernel-based MiPS \({\widehat{\pi }}^{Ker}\left(\mathbf{X};{\widehat{\boldsymbol{\alpha }}}_{1}^{1},...,{\widehat{\boldsymbol{\alpha }}}_{1}^{K},{\widehat{{\varvec{\beta}}}}_{1}^{1},...,{\widehat{{\varvec{\beta}}}}_{1}^{L}\right)\), the ATE can be estimated by $$\begin{array}{c}\widehat\Delta_{MiPS}^{Ker}=\left(\sum\limits_{i=1}^n\frac{A_i}{\widehat\pi^{Ker}\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}_1^1,...,\widehat{\boldsymbol\alpha}_1^K,\widehat{\beta}_1^1,...,\widehat{\beta}_1^L\right)}\right)^{-1}\sum\limits_{i=1}^n\frac{A_i}{\widehat\pi^{Ker}\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}_1^1,...,\widehat{\boldsymbol\alpha}_1^K,\widehat{\beta}_1^1,...,\widehat{\beta}_1^L\right)}Y_i-\\ \left(\sum\limits_{i=1}^n\frac{1-A_i}{1-\widehat\pi^{Ker}\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}_1^1,...,\widehat{\boldsymbol\alpha}_1^K,\widehat{\beta}_1^1,...,\widehat{\beta}_1^L\right)}\right)^{-1}\sum\limits_{i=1}^n\frac{1-A_i}{1-\widehat\pi^{Ker}\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}_1^1,...,\widehat{\boldsymbol\alpha}_1^K,\widehat{\beta}_1^1,...,\widehat{\beta}_1^L\right)}Y_i.\end{array}$$ However, if there are no additional assumptions about the regression structure, the performance of Nadaraya-Watson kernel estimator in (5) degrades as the number of regressors increases. This degradation in performance is often referred to as the "curse of dimensionality" [22,23,24]. 
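To make the kernel-based construction above concrete, the following minimal R sketch (our illustration, not the authors' code) computes a product-Gaussian Nadaraya-Watson estimate of the multi-index propensity score from an n × (K + L) matrix of estimated indexes and the treatment vector; the scaling of the indexes and the rule-of-thumb bandwidth are simplifying assumptions.

```r
# Nadaraya-Watson estimate of the multi-index propensity score (MiPS).
# S: n x (K+L) matrix of estimated linear indexes (alpha_hat'X and beta_hat'X columns)
# A: binary treatment vector of length n
# Returns the fitted P(A = 1 | indexes) at every observation.
nw_mips <- function(S, A, bandwidth = NULL) {
  S <- scale(S)                                          # put all indexes on a common scale
  n <- nrow(S); d <- ncol(S)
  if (is.null(bandwidth)) bandwidth <- n^(-1 / (d + 4))  # rule-of-thumb choice (assumption)
  pi_hat <- numeric(n)
  for (i in seq_len(n)) {
    u <- sweep(S, 2, S[i, ]) / bandwidth                 # scaled differences to observation i
    w <- exp(-0.5 * rowSums(u^2))                        # product Gaussian kernel weights
    pi_hat[i] <- sum(w * A) / sum(w)                     # locally weighted mean of A
  }
  pi_hat
}
```

The pairwise weight computation makes the cost quadratic in n, and as the number of indexes K + L grows the kernel weights concentrate on ever fewer neighbours, which is the practical face of the curse of dimensionality discussed above.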
Our following simulation results also show that \({\widehat{\Delta }}_{MiPS}^{Ker}\) has obvious bias when multiple candidate models are included in \({\widehat{\pi }}^{Ker}\left(\mathbf{X};{\widehat{\boldsymbol{\alpha }}}_{1}^{1},...,{\widehat{\boldsymbol{\alpha }}}_{1}^{K},{\widehat{{\varvec{\beta}}}}_{1}^{1},...,{\widehat{{\varvec{\beta}}}}_{1}^{L}\right)\), even if the correct PS and/or OR model is covered. With the development of scalable computing and optimization techniques [25, 26], the use of machine learning has been one of the most promising approaches in connection with applications related to approximation and estimation of multivariate functions [27, 28]. Artificial neural network (ANN) is one of machine learning approaches. Benefiting from its flexible structure, the ANN becomes a universal approximator of a variety of functions [31,32,33]. The ANN comprises an input layer, a researcher-specified number of hidden layer(s), and an output layer. The hidden layer(s) and output layer consist of a number of neurons (also specified by researchers) with activation functions [35]. The operation of ANN includes following steps: 1) Information is input from the input layer, which passes it to the hidden layer; 2) In the hidden layer(s), the information is multiplied by the weight and a bias is added, and then passed to the next layer after transforming by the activation function; 3) The information is passed layer by layer until the last layer, where it is multiplied by the weight and then transformed by the activation function to provide the output; and 4) Calculate the error between the output and the actual value, and minimize the error by optimizing the weight parameters and bias parameters through the backpropagation algorithm [36]. In addition to having the potential of overcoming the "curse of dimensionality" [29, 30], the ANN is capable of automatically capturing complex relationships between variables [27]. It may be suited for modeling the relationship between treatment and multi-index because interactions commonly exist between indexes due to shared covariates in candidate PS and/or OR models. Therefore, we replaced the kernel function by ANN and proposed our ANN-based MiPS (ANN.MiPS) estimator. Now we propose the ANN-based MiPS. We firstly get the estimated values \({\widehat{\boldsymbol{\alpha }}}_{1}^{1}\),…, \({\widehat{\boldsymbol{\alpha }}}_{1}^{K}\) of multiple PS models and the estimated values \({\widehat{{\varvec{\beta}}}}_{1}^{1}\),…, \({\widehat{{\varvec{\beta}}}}_{1}^{L}\) of multiple OR models, then use the ANN to conduct nonparametric regression of \(A\) on multiple indexes \({\widehat{\boldsymbol{\alpha }}}_{1}^{1\mathrm{T}}\mathbf{X}\),…, \({\widehat{\boldsymbol{\alpha }}}_{1}^{K\mathrm{T}}\mathbf{X}\) and \({\widehat{{\varvec{\beta}}}}_{1}^{1\mathrm{T}}\mathbf{X}\),…, \({\widehat{{\varvec{\beta}}}}_{1}^{L\mathrm{T}}\mathbf{X}\) to get the estimated value of MiPS as \({\widehat{\pi }}^{Ann}\left(\mathbf{X};{\widehat{\boldsymbol{\alpha }}}_{1}^{1},...,{\widehat{\boldsymbol{\alpha }}}_{1}^{K},{\widehat{{\varvec{\beta}}}}_{1}^{1},...,{\widehat{{\varvec{\beta}}}}_{1}^{L}\right)\). 
Then the ATE can be estimated by $$\begin{array}{c}\widehat\Delta_{MiPS}^{Ann}=\left(\sum\limits_{i=1}^n\frac{A_i}{\widehat\pi^{Ann}\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}_1^1,...,\widehat{\boldsymbol\alpha}_1^K,\widehat{\beta}_1^1,...,\widehat{\beta}_1^L\right)}\right)^{-1}\sum\limits_{i=1}^n\frac{A_i}{\widehat\pi^{Ann}\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}_1^1,...,\widehat{\boldsymbol\alpha}_1^K,\widehat{\beta}_1^1,...,\widehat{\beta}_1^L\right)}Y_i-\\ \left(\sum\limits_{i=1}^n\frac{1-A_i}{1-\widehat\pi^{Ann}\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}_1^1,...,\widehat{\boldsymbol\alpha}_1^K,\widehat{\beta}_1^1,...,\widehat{\beta}_1^L\right)}\right)^{-1}\sum\limits_{i=1}^n\frac{1-A_i}{1-\widehat\pi^{Ann}\left({\mathbf X}_i;\widehat{\boldsymbol\alpha}_1^1,...,\widehat{\boldsymbol\alpha}_1^K,\widehat{\beta}_1^1,...,\widehat{\beta}_1^L\right)}Y_i.\end{array}$$ Our following simulations indicate the multiple robustness of \({\widehat{\Delta }}_{MiPS}^{Ann}\): its bias is ignorable when any model for PS or any model for OR is correctly specified. We implemented the ANN that contains 2 hidden layers with 4 neurons in each hidden layer using AMORE package [37] for ANN.MiPS estimator. Therefore, the total number of parameters to be estimated in the ANN is \(4*(K+L)+32\), including \(4*(K+L)+24\) weight parameters and 8 bias parameters. The learning rate is set as 0.001 [10, 12]. The momentum is set as 0.5, the default value in the AMORE package. The hyperbolic tangent function was specified as the activation function for hidden layer. The sigmoid function was specified as the activation function for output layer to ensure the estimated ANN-based MiPS is between 0 to 1 [38]. To examine the performance stability of the estimator, we performed a sensitivity analysis using different hyperparameter selections. The simulations, real data analysis, and all statistical tests were conducted using R software (Version 4.1.0) [39]. A zip file of AMORE package and an example code for implementing the ANN.MiPS approach can be found in the attachment. Simulation studies We conducted simulation studies to evaluate the performance of (i) single model-based estimators: IPW estimator in (1) and OR estimator in (2); (ii) doubly robust estimators: augmented inverse probability weighting (AIPW) [17] and target maximum likelihood estimator (TMLE) [18], which allows a single model for PS and a single model for OR; (iii) multiple models-based estimators: kernel-based estimator in (6) and ANN-based estimator in (7), which allows multiple candidate models for PS and/or OR. Ten covariates \({X}_{1}-{X}_{10}\) were generated from standard normal distribution, and the correlation between them are shown in Fig. 1. The binary treatment indicator \(A\) was generated from a Bernoulli distribution according to the following propensity score The simulation data structure in our simulation studies $$\begin{array}{c}\mathrm{logit}\left[\pi\left(\mathbf X;\alpha\right)\right]=\alpha_0+0.16X_1-0.05X_2+0.12X_3-\\ 0.1X_4-0.16X_5-0.1X_6+0.15X_7\end{array}$$ \({\alpha }_{0}\) was set to be 0 or -1.1 to make approximately 50% or 25% subjects entering the treatment group. The continuous outcome \(Y\) was generated from $$\begin{array}{c}Y=-3.85-0.4A-0.8X_1-0.36X_2-0.73X_3-\\ 0.2X_4+0.71X_8-0.19X_9+0.26X_{10}+\varepsilon,\end{array}$$ where \(\varepsilon\) follows the standard normal distribution. The true ATE was \(\Delta =E\left({Y}^{1}\right)-E\left({Y}^{0}\right)=-0.4\). 
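The data-generating mechanism just described can be reproduced with the short R sketch below (our illustration; for simplicity the ten covariates are drawn as independent standard normals, whereas Fig. 1 specifies the correlation structure actually used in the paper).

```r
set.seed(2022)
n <- 1000          # sample size (300 was also investigated)
alpha0 <- 0        # 0 gives ~50% treated, -1.1 gives ~25% treated

# Ten standard normal covariates (independence assumed in this sketch)
X <- matrix(rnorm(n * 10), nrow = n)
colnames(X) <- paste0("X", 1:10)

# Binary treatment from the true propensity score model
lin_ps <- alpha0 + 0.16 * X[, 1] - 0.05 * X[, 2] + 0.12 * X[, 3] -
  0.10 * X[, 4] - 0.16 * X[, 5] - 0.10 * X[, 6] + 0.15 * X[, 7]
A <- rbinom(n, 1, plogis(lin_ps))

# Continuous outcome from the true outcome model; the true ATE is -0.4
Y <- -3.85 - 0.4 * A - 0.8 * X[, 1] - 0.36 * X[, 2] - 0.73 * X[, 3] -
  0.2 * X[, 4] + 0.71 * X[, 8] - 0.19 * X[, 9] + 0.26 * X[, 10] + rnorm(n)

dat <- data.frame(Y = Y, A = A, X)
```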
In the estimation, two estimation models were specified $${\mathbb{A}}=\left\{\begin{array}{c}logit\left[{\pi }^{1}\left(\mathbf{X};{\boldsymbol{\alpha }}^{1}\right)\right]=\left(1,{X}_{1},{X}_{2},{X}_{3},{X}_{4},{X}_{5},{X}_{6},{X}_{7}\right){\boldsymbol{\alpha }}^{1}\\ logit\left[{\pi }^{2}\left(\mathbf{X};{\boldsymbol{\alpha }}^{2}\right)\right]=\left(1,{X}_{1}^{2},{X}_{2}^{2},{X}_{3}^{2},{X}_{4}^{2},{X}_{5}^{2},{X}_{6}^{2},{X}_{7}^{2}\right){\boldsymbol{\alpha }}^{2}\end{array}\right\}$$ for propensity score, and two estimation models were specified $${\mathbb{B}}=\left\{\begin{array}{c}{{\mu }_{A}}^{1}\left(\mathbf{X};{{\varvec{\beta}}}^{1}\right)=\left(1,{A,X}_{1},{X}_{2},{X}_{3},{X}_{4},{X}_{8},{X}_{9},{X}_{10}\right){{\varvec{\beta}}}^{1}\\ {{\mu }_{A}}^{2}\left(\mathbf{X};{{\varvec{\beta}}}^{2}\right)=\left(1,{A,X}_{1}^{2},{X}_{2}^{2},{X}_{3}^{2},{X}_{4}^{2},{X}_{8}^{2},{X}_{9}^{2},{X}_{10}^{2}\right){{\varvec{\beta}}}^{2}\end{array}\right\}$$ for outcome regression. According to the data-generating mechanism, \({\pi }^{1}\left(\mathbf{X};{\boldsymbol{\alpha }}^{1}\right)\) and \({{\mu }_{A}}^{1}\left(\mathbf{X};{{\varvec{\beta}}}^{1}\right)\) were correct PS and correct OR models, whereas \({\pi }^{2}\left(\mathbf{X};{\boldsymbol{\alpha }}^{2}\right)\) and \({{\mu }_{A}}^{2}\left(\mathbf{X};{{\varvec{\beta}}}^{2}\right)\) were incorrect PS and incorrect OR models, due to the mis-specified functional forms of covariates. To distinguish these estimation methods, each estimator is denoted as "method-0000". Each of the four numbers, from left to right, represents if \({\pi }^{1}\left(\mathbf{X};{\boldsymbol{\alpha }}^{1}\right)\), \({\pi }^{2}\left(\mathbf{X};{\boldsymbol{\alpha }}^{2}\right)\), \({{\mu }_{A}}^{1}\left(\mathbf{X};{{\varvec{\beta}}}^{1}\right)\) or \({{\mu }_{A}}^{2}\left(\mathbf{X};{{\varvec{\beta}}}^{2}\right)\) is included in the estimator, where "1" indicates yes and "0" indicates no. We investigated sample sizes of \(n=300\) and \(n=1000\) with 1000 replications in all settings. Tables 1 and 2 show the estimation results of all estimators, along with five evaluation measures including percentage of bias (BIAS, in percentage), root mean square error (RMSE), Monte Carlo standard error (MC-SE), bootstrapping standard error (BS-SE) based on 100 resamples, and coverage rate of 95% Wald confidence interval (CI-Cov). Our bootstrapping procedure resamples from the original sample set with replacement until the bootstrapping sample size reaches the original sample size. Fig. S1 shows the distribution of the estimated ATEs of Ker.MiPS and ANN.MiPS estimators. The following conclusions can be obtained. For estimation bias, If specifying one model for PS or one for OR: The IPW, Ker.MiPS, and ANN.MiPS estimators all have a small bias if the PS model is correctly specified (IPW.correct, Ker.MiPS-1000, ANN.MiPS-1000). The OR, Ker.MiPS, and ANN.MiPS estimators all have a small bias if the OR model is correctly specified (IPW.correct, Ker.MiPS-0010, ANN.MiPS-0010). If specifying one model for PS and one model for OR: The AIPW, TMLE, Ker.MiPS and ANN.MiPS estimators all have a small bias if the PS model is correctly specified (AIPW-1010, AIPW-1001, Ker.MiPS-1010, Ker.MiPS-1001, ANN.MiPS-1010, ANN.MiPS-1001), or if the OR model is correctly specified (AIPW-1010, AIPW-0110, Ker.MiPS-1010, Ker.MiPS-0110, ANN.MiPS-1010, ANN.MiPS-0110). 
If specifying multiple candidate models for PS and OR: The multiple robustness property of the ANN.MiPS estimator is well demonstrated by the ignorable bias of ANN.MiPS-1110, ANN.MiPS-1101, ANN.MiPS-1011, ANN.MiPS-0111, and ANN.MiPS-1111. On the contrary, the biases of the Ker.MiPS estimators under all model specifications are close to or larger than 5%. Table 1 Estimation results under 50% treated based on 1000 replications For estimation efficiency, If models for both PS and OR are correctly specified: The MC-SE of AIPW-1010, TMLE-1010, and ANN.MiPS-1010 estimators are all smaller than that of IPW.correct and ANN.MiPS-1000 estimators. The improved efficiency may benefit from the information of the correct OR model. If multiple candidate models incorporate the correct PS and OR models: The MC-SE of ANN.MiPS-1110, ANN.MiPS-1011, and ANN.MiPS-1111 estimators are all close to ANN.MiPS-1010. To evaluate the performance of the MiPS estimator when the number of specified models increases, we have considered three additional estimators: MiPS-1111-2PS, adding two additional incorrect PS models \(\left\{\begin{array}{c}logit\left[{\pi }^{3}\left(\mathbf{X};{\boldsymbol{\alpha }}^{3}\right)\right]=\left(1,{X}_{1},{X}_{2},{X}_{3}\right){\boldsymbol{\alpha }}^{3}\\ logit\left[{\pi }^{4}\left(\mathbf{X};{\boldsymbol{\alpha }}^{4}\right)\right]=\left(1,{X}_{1}^{2},{X}_{2}^{2},{X}_{3}^{2}\right){\boldsymbol{\alpha }}^{4}\end{array}\right\}\) on the basis of the MiPS-1111; MiPS-1111-2OR, adding two additional incorrect OR models \(\left\{\begin{array}{c}{\mu }_{A}^{3}\left(\mathbf{X};{{\varvec{\beta}}}^{3}\right)=\left(1,{X}_{1},{X}_{2},{X}_{3},A\right){{\varvec{\beta}}}^{3}\\ {\mu }_{A}^{4}\left(\mathbf{X};{{\varvec{\beta}}}^{4}\right)=\left(1,{X}_{1}^{2},{X}_{2}^{2},{X}_{3}^{2},A\right){{\varvec{\beta}}}^{4}\end{array}\right\}\) on the basis of the MiPS-1111; MiPS-1111-2PS-2OR, adding two additional incorrect PS models \({\pi }^{3}\left(\mathbf{X};{\boldsymbol{\alpha }}^{3}\right)\) and \({\pi }^{4}\left(\mathbf{X};{\boldsymbol{\alpha }}^{4}\right)\) and two additional incorrect OR models \({\mu }_{A}^{3}\left(\mathbf{X};{{\varvec{\beta}}}^{3}\right)\) and \({\mu }_{A}^{4}\left(\mathbf{X};{{\varvec{\beta}}}^{4}\right)\) on the basis of the MiPS-1111. Table 3 shows the estimation results. The following conclusions can be obtained. The estimation bias of ANN.MiPS-1111-2PS, ANN.MiPS-1111-2OR, and ANN.MiPS-1111-2PS2OR estimators is still ignorable. The estimation efficiency of these estimators is hardly degraded compared to ANN.MiPS-1010 estimator. The estimation bias of Ker.MiPS-1111-2PS, Ker.MiPS-1111-2OR, and Ker-1111-2PS2OR estimators is close to or larger than 10%. The MC-SE of these estimators is obviously larger than that of Ker.MiPS-1010 estimator. Table 3 Estimation results for multi-index propensity score estimator incorporating extra incorrect models based on 1000 replications We also evaluated the performance of ANN.MiPS estimator under the simulation scenario with both continuous and discrete covariates. The simulation setting was described in Supplementary Document. Similar conclusions can be obtained as the above scenario with all continuous covariates (Table S1, S2). The sensitivity analysis of hyperparameters selection in ANN revealed the performance stability of ANN.MiPS estimator (Table S3). 
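To make the estimation workflow concrete, the sketch below (our simplified illustration, not the authors' released code) strings together the steps of an ANN.MiPS-1010-type estimator on the simulated data generated above: fit one candidate PS model and one candidate OR model, collect their estimated linear indexes, regress the treatment on the indexes with a small neural network, and plug the fitted MiPS into the weighted difference of means. A single-hidden-layer network from the nnet package is used here as a stand-in for the two-hidden-layer AMORE network described by the authors; additional candidate models would simply contribute extra columns to the index matrix.

```r
library(nnet)  # single-hidden-layer neural network, used as a simple stand-in

# Step 1: candidate parametric models
ps1 <- glm(A ~ X1 + X2 + X3 + X4 + X5 + X6 + X7, family = binomial, data = dat)
or1 <- lm(Y ~ A + X1 + X2 + X3 + X4 + X8 + X9 + X10, data = dat)

# Step 2: linear indexes alpha_hat'X and beta_hat'X (covariate coefficients only)
Xmat <- as.matrix(dat[, paste0("X", 1:10)])
idx_ps1 <- Xmat[, c("X1", "X2", "X3", "X4", "X5", "X6", "X7")] %*% coef(ps1)[-1]
idx_or1 <- Xmat[, c("X1", "X2", "X3", "X4", "X8", "X9", "X10")] %*%
  coef(or1)[c("X1", "X2", "X3", "X4", "X8", "X9", "X10")]
S <- scale(cbind(idx_ps1, idx_or1))   # multi-index matrix (one column per model)

# Step 3: nonparametric regression of A on the indexes -> fitted MiPS in (0, 1)
fit_ann <- nnet(x = S, y = dat$A, size = 4, decay = 1e-4,
                maxit = 500, entropy = TRUE, trace = FALSE)
mips <- as.numeric(predict(fit_ann, S))

# Step 4: Hajek-type inverse probability weighting with the fitted MiPS
w1 <- dat$A / mips
w0 <- (1 - dat$A) / (1 - mips)
ate_hat <- sum(w1 * dat$Y) / sum(w1) - sum(w0 * dat$Y) / sum(w0)
ate_hat
```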
Application to NHEFS data
To illustrate the proposed method, we analyzed a subset of real data from the National Health and Nutrition Examination Survey I Epidemiologic Follow-up Study (NHEFS) (wwwn.cdc.gov/nchs/nhanes/nhefs/). The dataset consists of 1,507 participants aged 25–74 who smoked at the first survey and were followed for approximately 10 years. The empirical study aimed to estimate the ATE of smoking cessation (coded as quitting and non-quitting, with non-quitting as the reference group) on weight gain. Participants were categorized as treated if they quit smoking during follow-up, and as controls otherwise. Weight gain for each individual was measured as weight at the end of follow-up minus weight at the baseline survey (in kilograms). During the 10-year follow-up, 379 (25.15%) participants quit smoking. The average weight gain was greater for those who quit smoking, with an unadjusted difference of 2.4 kg. Table 4 summarizes the baseline characteristics of quitters and non-quitters, including age, gender, race, baseline weight, active life level, education level, exercise, smoking intensity, smoking years, and ever use of weight loss medication. As the table shows, the distributions of age, gender, race, education level, smoking intensity, and smoking years differed between quitters and non-quitters. When estimating the ATE of smoking cessation on weight gain, these factors should be adjusted for if they are confounders.
Table 4 The NHEFS data analysis: baseline characteristics between non-quitters and quitters
To identify candidate models for the ANN.MiPS estimator, we explored the association of smoking cessation with all potential risk factors by logistic regression, and the association of weight gain with all potential risk factors by linear regression. The covariates in model 1 and model 2, for both the PS and the OR, were identified at significance levels of 0.05 and 0.1, respectively. The covariates in PS models 1 and 2 were (i) age, gender, race, smoking intensity, and smoking years; and (ii) age, gender, race, smoking intensity, smoking years, education level, and exercise situation. The covariates in OR models 1 and 2 were (i) age, weight at baseline, smoking intensity, education level, and active life level; and (ii) age, weight at baseline, smoking intensity, education level, active life level, and family income level. We applied the single model-based IPW estimator, the single model-based OR estimator, and the proposed ANN.MiPS estimator to estimate the ATE. The four digits in the ANN.MiPS estimator name, from left to right, indicate whether PS model 1, PS model 2, OR model 1, or OR model 2 is included in the estimator, with "1" indicating yes and "0" indicating no. For example, "ANN.MiPS-1010" indicates that PS model 1 and OR model 1 are included in the estimator. The standard error was estimated from 500 bootstrap resamples. The estimation results in Table 5 indicate that all estimators suggest quitting smoking significantly increased participants' weight gain. Most of the adjusted effect estimates were greater than the unadjusted estimate of 2.4 kg and, having accounted for confounding, should be more reliable. The point estimate of the ATE and its bootstrap standard error for the ANN.MiPS estimator were stable under different model specifications.
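The bootstrap standard errors quoted above (500 resamples here, 100 in the simulations) can be reproduced generically; the sketch below is our own, with hypothetical names, and simply resamples individuals with replacement, re-runs any ATE estimator, and takes the standard deviation of the replicate estimates.

```python
import numpy as np

def bootstrap_se(estimate_ate, data, n_boot=500, seed=0):
    """Nonparametric bootstrap SE: resample rows with replacement, re-estimate, take the SD."""
    rng = np.random.default_rng(seed)
    n = len(data)
    replicates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # bootstrap sample of the original size
        replicates.append(estimate_ate(data[idx]))   # re-run the ATE estimator on the resample
    return np.std(replicates, ddof=1)
```

Here `estimate_ate` is any function mapping a dataset (an array of rows) to an ATE estimate, for instance a wrapper around the `ann_mips_ate` sketch given earlier.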
Table 5 The NHEFS data analysis: estimated average treatment effect of quitting smoking on weight gain (not quitting smoking as reference)
In this paper, we considered causal inference in observational studies, where effect estimation is susceptible to confounding bias due to imbalanced covariates between groups. Building on the Ker.DiPS estimator [19], we proposed the ANN.MiPS estimator to provide more opportunities to correct the confounding bias. We evaluated the performance of the estimator in simulation scenarios with small (\(n=300\)) or large (\(n=1000\)) sample size, with a treatment rate of 25% or 50%, and with covariates that were either all continuous or a mix of continuous and discrete types. The results demonstrated the multiple robustness property of the estimator: the estimation bias is small if any model for the PS or any model for the OR is correctly specified. In addition to achieving multiply robust estimation of the ATE, the proposed estimator showed higher estimation efficiency than the kernel-based estimator when any model for the PS or OR is correctly specified, especially when only the OR model is correctly specified.
One limitation of our approach is that the multiple candidate models for the PS \(\{{\pi }^{k}\left(\mathbf{X};{\boldsymbol{\alpha }}^{k}\right)={g}_{\pi }\left({\alpha }_{0}^{k}+{\boldsymbol{\alpha }}_{1}^{kT}\mathbf{X}\right),k=1,\dots ,K\}\) and for the OR \(\left\{{\mu }^{l}\left(\mathbf{X},A;{{\varvec{\beta}}}^{l}\right)={g}_{\mu }\left({\beta }_{0}^{l}+{{\varvec{\beta}}}_{1}^{lT}\mathbf{X}+{\beta }_{2}^{l}A\right),l=1,\dots ,L\right\}\) need to be parametric, since the MiPS is defined as \(\pi \left(\mathbf{X};{\boldsymbol{\alpha }}_{1}^{1},...,{\boldsymbol{\alpha }}_{1}^{K},{{\varvec{\beta}}}_{1}^{1},...,{{\varvec{\beta}}}_{1}^{L}\right)=E\left[A |{\boldsymbol{\alpha }}_{1}^{1T}\mathbf{X},...{\boldsymbol{\alpha }}_{1}^{KT}\mathbf{X},{{\varvec{\beta}}}_{1}^{1T}\mathbf{X},...,{{\varvec{\beta}}}_{1}^{LT}\mathbf{X}\right]\), in which we need to conduct a nonparametric regression of \(A\) on \({\widehat{\boldsymbol{\alpha }}}_{1}^{1\mathrm{T}}\mathbf{X}\),…, \({\widehat{\boldsymbol{\alpha }}}_{1}^{K\mathrm{T}}\mathbf{X}\) and \({\widehat{{\varvec{\beta}}}}_{1}^{1\mathrm{T}}\mathbf{X}\),…, \({\widehat{{\varvec{\beta}}}}_{1}^{L\mathrm{T}}\mathbf{X}\). Therefore, nonparametric models such as kernel functions, ANNs, and random forests are not suitable as candidate models for the MiPS estimator, because the coefficients of the covariates cannot be obtained. When the candidate models are constructed nonparametrically, other multiply robust approaches may be adopted to integrate the information from multiple candidate models, such as the regression-based estimator under the least-squares framework [40], the estimator based on empirical likelihood weighting [20], and the estimator based on model mixture procedures [41]. In this direction, the double/debiased machine learning approach may be extended to multiple/debiased machine learning to obtain valid inference about the ATE [42]. Although the performance of the ANN.MiPS estimator remains stable when eight candidate models are specified, an excessive number of models can impose a heavy computational burden.
Therefore, in practical applications we recommend carefully constructing a comprehensive set of reasonable but mutually dissimilar candidate models, so as to keep the number of models manageable, using both subject-matter knowledge and reliable data-driven tools such as causal diagrams [43], variable selection techniques [44], and covariate balance diagnostics [45]. Finally, we offer an intuitive discussion of the theoretical properties of the proposed estimator. Referring to the proof in Cheng et al. [19], \({\widehat{\Delta }}_{MiPS}^{ANN}\) is consistent for
$${\overline{\Delta } }_{MiPS}^{ANN}=\frac{E\left\{\frac{{A}_{i}{Y}_{i}}{{\overline{\pi }}^{ANN}\left({\mathbf{X}}_{i};{\overline{\boldsymbol{\alpha }} }_{1}^{1},...,{\overline{\boldsymbol{\alpha }} }_{1}^{K},{\overline{{\varvec{\beta}}} }_{1}^{1},...,{\overline{{\varvec{\beta}}} }_{1}^{L}\right)}\right\}}{E\left\{\frac{{A}_{i}}{{\overline{\pi }}^{ANN}\left({\mathbf{X}}_{i};{\overline{\boldsymbol{\alpha }} }_{1}^{1},...,{\overline{\boldsymbol{\alpha }} }_{1}^{K},{\overline{{\varvec{\beta}}} }_{1}^{1},...,{\overline{{\varvec{\beta}}} }_{1}^{L}\right)}\right\}}-\frac{E\left\{\frac{\left(1-{A}_{i}\right){Y}_{i}}{\left[1-{\overline{\pi }}^{ANN}\left({\mathbf{X}}_{i};{\overline{\boldsymbol{\alpha }} }_{1}^{1},...,{\overline{\boldsymbol{\alpha }} }_{1}^{K},{\overline{{\varvec{\beta}}} }_{1}^{1},...,{\overline{{\varvec{\beta}}} }_{1}^{L}\right)\right]}\right\}}{E\left\{\frac{\left(1-{A}_{i}\right)}{\left[1-{\overline{\pi }}^{ANN}\left({\mathbf{X}}_{i};{\overline{\boldsymbol{\alpha }} }_{1}^{1},...,{\overline{\boldsymbol{\alpha }} }_{1}^{K},{\overline{{\varvec{\beta}}} }_{1}^{1},...,{\overline{{\varvec{\beta}}} }_{1}^{L}\right)\right]}\right\}}$$
where \({\widehat{\boldsymbol{\alpha }}}_{1}^{1},...,{\widehat{\boldsymbol{\alpha }}}_{1}^{K},{\widehat{{\varvec{\beta}}}}_{1}^{1},...,{\widehat{{\varvec{\beta}}}}_{1}^{L}\) converge to \({\overline{\boldsymbol{\alpha }} }_{1}^{1},...,{\overline{\boldsymbol{\alpha }} }_{1}^{K},{\overline{{\varvec{\beta}}} }_{1}^{1},...,{\overline{{\varvec{\beta}}} }_{1}^{L}\), and \({\widehat{\pi }}^{ANN}\left(\bullet \right)\) converges to \({\overline{\pi }}^{ANN}\left(\bullet \right)\). According to theoretical results on ANNs, under certain conditions, \({\overline{\pi }}^{ANN}\left(\mathbf{X};{\overline{\boldsymbol{\alpha }} }_{1}^{1},...,{\overline{\boldsymbol{\alpha }} }_{1}^{K},{\overline{{\varvec{\beta}}} }_{1}^{1},...,{\overline{{\varvec{\beta}}} }_{1}^{L}\right)=\pi \left(\mathbf{X};{\overline{\boldsymbol{\alpha }} }_{1}^{1},...,{\overline{\boldsymbol{\alpha }} }_{1}^{K},{\overline{{\varvec{\beta}}} }_{1}^{1},...,{\overline{{\varvec{\beta}}} }_{1}^{L}\right)\). In that case, when one of the candidate models for the PS \(\{{\pi }^{k}\left(\mathbf{X};{\boldsymbol{\alpha }}^{k}\right)={g}_{\pi }\left({\alpha }_{0}^{k}+{\boldsymbol{\alpha }}_{1}^{kT}\mathbf{X}\right),k=1,\dots ,K\}\) is correctly specified, \(\pi \left(\mathbf{X};{\overline{\boldsymbol{\alpha }} }_{1}^{1},...,{\overline{\boldsymbol{\alpha }} }_{1}^{K},{\overline{{\varvec{\beta}}} }_{1}^{1},...,{\overline{{\varvec{\beta}}} }_{1}^{L}\right)=\pi \left(\mathbf{X}\right)\) and \({\overline{\Delta } }_{MiPS}^{ANN}=\Delta\).
On the other hand, when one of the candidate models for the OR \(\left\{{\mu }_{A}^{l}\left(\mathbf{X};{{\varvec{\beta}}}^{l}\right)={g}_{\mu }\left({\beta }_{0}^{l}+{{\varvec{\beta}}}_{1}^{lT}\mathbf{X}+{\beta }_{2}^{l}A\right),l=1,\dots ,L\right\}\) is correctly specified, \(E\left[Y |{\overline{\boldsymbol{\alpha }} }_{1}^{1T}\mathbf{X},...{\overline{\boldsymbol{\alpha }} }_{1}^{KT}\mathbf{X},{\overline{{\varvec{\beta}}} }_{1}^{1T}\mathbf{X},...,{\overline{{\varvec{\beta}}} }_{1}^{LT}\mathbf{X},A \right]={\mu }_{A}\left(\mathbf{X}\right)\) and \({\overline{\Delta } }_{MiPS}^{ANN}=\Delta\). As for the asymptotic distribution of the proposed estimator, the variability of \({\widehat{\Delta }}_{MiPS}^{ANN}\) mainly comes from (1) the estimated coefficients \({\widehat{\boldsymbol{\alpha }}}_{1}^{1}\),…, \({\widehat{\boldsymbol{\alpha }}}_{1}^{K}\) of the multiple PS models and \({\widehat{{\varvec{\beta}}}}_{1}^{1}\),…, \({\widehat{{\varvec{\beta}}}}_{1}^{L}\) of the multiple OR models, and (2) the estimated nonparametric function \({\widehat{\pi }}^{ANN}\left(\bullet \right)\) obtained with the ANN. For the first source of variability, if the parameters are estimated by maximum likelihood, the asymptotic normality of the estimators was established by White [46]. For the second, error bounds and convergence rates have been discussed in theoretical work [29, 47]. Rigorously and systematically establishing the theoretical properties of the \({\widehat{\Delta }}_{MiPS}^{ANN}\) estimator will be a topic of our future research.
In this study, we proposed the ANN.MiPS estimator to correct confounding bias when using observational data to estimate the ATE. The proposed estimator allows multiple candidate models for the PS and the OR, and guarantees that the estimated integrated PS lies between 0 and 1. The multiple robustness property of the estimator was illustrated through simulation studies, and extra efficiency was gained compared to the kernel function-based estimator. The proposed estimator provides a new option for multiply robust estimation of the ATE in observational studies.
The simulated data can be generated from the example code in the attachment. The real-world data can be accessed from https://wwwn.cdc.gov/nchs/nhanes/nhefs/default.aspx/.
Abbreviations
ATE: Average treatment effect
IPW: Inverse probability weighting
PS: Propensity score
OR: Outcome regression
AIPW: Augmented inverse probability weighting
TMLE: Targeted maximum likelihood estimator
DiPS: Double-index propensity score
Ker.DiPS: Kernel function-based double-index propensity score
MiPS: Multi-index propensity score
ANN: Artificial neural network
ANN.MiPS: Artificial neural network-based multi-index propensity score
Ker.MiPS: Kernel function-based multi-index propensity score
RMSE: Root mean square error
MC-SE: Monte Carlo standard error
BS-SE: Bootstrapping standard error
95CI-Cov: 95% confidence interval coverage rate
NHEFS: National Health and Nutrition Examination Survey I Epidemiologic Follow-up Study
References
1. Kovesdy CP, Kalantar-Zadeh K. Observational studies versus randomized controlled trials: avenues to causal inference in nephrology. Adv Chronic Kidney Dis. 2012;19(1):11–8.
2. Imbens GW, Rubin DB. Causal inference in statistics, social, and biomedical sciences. New York: Cambridge University Press; 2015.
3. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55.
4. Wooldridge JM. Inverse probability weighted M-estimators for sample selection, attrition, and stratification. Port Econ J. 2002;1(2):117–39.
5. Lunceford JK, Davidian M. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Stat Med. 2004;23(19):2937–60.
6. Hernán MA, Robins JM. Causal Inference: What If. Boca Raton: Chapman & Hall/CRC; 2020.
7. Joffe MM, Ten Have TR, Feldman HI, Kimmel SE. Model selection, confounder control, and marginal structural models: review and new applications. Am Stat. 2004;58(4):272–9.
8. Lee BK, Lessler J, Stuart EA. Improving propensity score weighting using machine learning. Stat Med. 2010;29(3):337–46.
9. Keller B, Kim JS, Steiner PM. Neural networks for propensity score estimation: simulation results and recommendations. In: Quantitative psychology research. Wisconsin: Springer; 2015. p. 279–291.
10. Collier ZK, Leite WL, Zhang H. Estimating propensity scores using neural networks and traditional methods: a comparative simulation study. Commun Stat-Simul Comput. 2021:1–16.
11. Collier ZK, Zhang H, Liu L. Explained: artificial intelligence for propensity score estimation in multilevel educational settings. Pract Assess Res Eval. 2022;27(1):3.
12. Setoguchi S, Schneeweiss S, Brookhart MA, Glynn RJ, Cook EF. Evaluating uses of data mining techniques in propensity score estimation: a simulation study. Pharmacoepidemiol Drug Saf. 2008;17(6):546–55.
13. Elwert F, Winship C. Effect heterogeneity and bias in main-effects-only regression models. In: Heuristics, probability and causality: a tribute to Judea Pearl. 2010. p. 327–336.
14. Vansteelandt S, Goetghebeur E. Causal inference with generalized structural mean models. J Roy Stat Soc Ser B (Stat Method). 2003;65(4):817–35.
15. Lu M, Sadiq S, Feaster DJ, Ishwaran H. Estimating individual treatment effect in observational data using random forest methods. J Comput Graph Stat. 2018;27(1):209–19.
16. Chen X, Liu Y, Ma S, Zhang Z. Efficient estimation of general treatment effects using neural networks with a diverging number of confounders. 2020. arXiv preprint arXiv:200907055.
17. Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. J Amer Statistical Assoc. 1994;89(427):846–66.
18. Van Der Laan MJ, Rubin D. Targeted maximum likelihood learning. Int J Biostat. 2006;2(1):1–38.
19. Cheng D, Chakrabortty A, Ananthakrishnan AN, Cai T. Estimating average treatment effects with a double-index propensity score. Biometrics. 2020;76(3):767–77.
20. Han P, Wang L. Estimation with missing data: beyond double robustness. Biometrika. 2013;100(2):417–30.
21. Han P. Multiply robust estimation in regression analysis with missing data. J Amer Statistical Assoc. 2014;109(507):1159–73.
22. Bellman RE. Curse of dimensionality. In: Adaptive control processes: a guided tour. New Jersey: Princeton University Press; 1961.
23. Donoho DL. High-dimensional data analysis: the curses and blessings of dimensionality. AMS Math Challenges Lecture. 2000;2000(1):32.
24. Rodríguez G. Smoothing and non-parametric regression. New Jersey: Princeton University; 2001.
25. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M. TensorFlow: large-scale machine learning on heterogeneous distributed systems. 2016. arXiv preprint arXiv:160304467.
26. Kingma DP, Ba J. Adam: a method for stochastic optimization. 2014. arXiv preprint arXiv:14126980.
27. Mitchell TM. Machine learning, vol. 1. New York: McGraw-Hill; 1997.
28. Bzdok D, Krzywinski M, Altman N. Machine learning: a primer. Nat Methods. 2017;14(12):1119.
29. Bauer B, Kohler M. On deep learning as a remedy for the curse of dimensionality in nonparametric regression. Ann Stat. 2019;47(4):2261–85.
30. Chen X, White H. Improved rates and asymptotic normality for nonparametric neural network estimators. IEEE Trans Inf Theory. 1999;45(2):682–91.
31. White H, Gallant AR. Artificial neural networks: approximation and learning theory. Oxford: Blackwell; 1992.
32. Hornik K, Stinchcombe M, White H, Auer P. Degree of approximation results for feedforward networks approximating unknown mappings and their derivatives. Neural Comput. 1994;6(6):1262–75.
33. Yarotsky D. Optimal approximation of continuous functions by very deep ReLU networks. In: 2018; Stockholm: PMLR. p. 639–649.
34. Conn D, Li G. An oracle property of the Nadaraya-Watson kernel estimator for high-dimensional nonparametric regression. Scand J Stat. 2019;46(3):735–64.
35. Hart PE, Stork DG, Duda RO. Pattern classification. New Jersey: Wiley Hoboken; 2000.
36. Hecht-Nielsen R. Theory of the backpropagation neural network. In: Neural networks for perception. California: Academic Press; 1992. p. 65–93.
37. Limas MC, Meré JBO, Marcos AG, Ascacíbar FJMdP, Espinoza AVP, Elias F, Ramos JMP. AMORE: A MORE flexible neural network package. 2014.
38. Kyurkchiev N, Markov S. Sigmoid functions: some approximation and modelling aspects. Saarbrucken: LAP LAMBERT Academic Publishing; 2015. p. 4.
39. Team RC. R: A language and environment for statistical computing. 2013.
40. Chan KCG. A simple multiply robust estimator for missing response problem. Stat. 2013;2(1):143–9.
41. Li W, Gu Y, Liu L. Demystifying a class of multiply robust estimators. Biometrika. 2020;107(4):919–33.
42. Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, Newey W, Robins J. Double/debiased machine learning for treatment and structural parameters. Oxford, UK: Oxford University Press; 2018.
43. Pearl J. Causal diagrams for empirical research. Biometrika. 1995;82(4):669–88.
44. VanderWeele TJ. Principles of confounder selection. Eur J Epidemiol. 2019;34(3):211–9.
45. Austin PC. Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Stat Med. 2009;28(25):3083–107.
46. White H. Maximum likelihood estimation of misspecified models. Econometrica: J Econom Society. 1982;50(1):1–25.
47. Schmidt-Hieber J. Nonparametric regression using deep neural networks with ReLU activation function. Ann Stat. 2020;48(4):1875–97.
This work was funded by National Natural Science Foundation of China (No.82173612, No.82273730), Shanghai Rising-Star Program (21QA1401300), Shanghai Municipal Natural Science Foundation (22ZR1414900), Shanghai Special Program: Clinical Multidisciplinary Treatment System and Systems Epidemiology Research, and Shanghai Municipal Science and Technology Major Project (ZD2021CY001). The sponsors had no role in study design, data collection, data analysis, data interpretation, or writing of this report.
Jiaqin Xu and Kecheng Wei contributed equally to this work.
Department of Biostatistics, School of Public Health, Fudan University, Shanghai, China: Jiaqin Xu, Kecheng Wei, Ce Wang, Chen Huang, Yaxin Xue, Rui Zhang, Guoyou Qin & Yongfu Yu
Key Laboratory of Public Health Safety of Ministry of Education, Fudan University, Shanghai, China: Guoyou Qin & Yongfu Yu
Shanghai Institute of Infectious Disease and Biosecurity, Shanghai, China
GYQ and YFY designed the study. JQX and KCW wrote the manuscript. JQX performed simulations and analyzed the real-world data. CW, CH, YXX, and RZ revised the manuscript.
All authors provided critical comments on the draft, and read and approved the final manuscript.
Correspondence to Guoyou Qin or Yongfu Yu.
Since the simulated datasets did not involve any human data, ethics approval was not applicable. Since the real data are publicly available, ethics approval was not required. The authors declared no conflict of interest.
Fig. S1. The distribution of the estimated average treatment effect for the kernel-based MiPS estimator and the artificial neural network-based MiPS estimator in 1000 simulated data sets. The range of the y-axis is restricted to -1.4 to 0.6 because the kernel-based MiPS estimator yields highly biased estimates under some model specifications. The dashed line denotes the true average treatment effect.
Table S1. Estimation results for the scenario with both continuous and discrete covariates under 50% treated based on 1000 replications.
Table S2. Estimation results of the multi-index propensity score estimator incorporating extra incorrect models under the scenario with both continuous and discrete covariates.
Table S3. Sensitivity analysis of the ANN.MiPS estimator with different tuning parameter selections for the ANN under the scenario of all continuous covariates and 50% treated.
Xu, J., Wei, K., Wang, C. et al. Estimation of average treatment effect based on a multi-index propensity score. BMC Med Res Methodol 22, 337 (2022). https://doi.org/10.1186/s12874-022-01822-3
Keywords: Multiply robust
The European Physical Journal C, July 2019, 79:590
Frequency variation for in vacuo photon propagation in the Standard-Model Extension
José A. Helayël-Neto, Alessandro D. A. M. Spallicci
In the presence of Lorentz Symmetry Violation (LSV) associated with the Standard-Model Extension (SME), we have recently shown the non-conservation of the energy-momentum tensor of a light-wave crossing an Electro-Magnetic (EM) background field even when the latter and the LSV are constant. Incidentally, for a space-time dependent LSV, the presence of an EM field is not necessary. Herein, we infer that in a particle description, the energy non-conservation for a photon implies violation of frequency invariance in vacuo, giving rise to a red or blue shift. We discuss the potential consequences on cosmology.
1 Introduction
The Standard-Model (SM) describes, through a Lagrangian, three interactions among fundamental particles: Electro-Magnetic (EM), weak and strong. The SM is a very successful model, but it neither includes massive neutrinos nor incorporates the particles corresponding to a yet-to-be-found dark universe. Furthermore, we remark that the photon is the only free massless particle in the SM. An attempt to extend the SM is Super-Symmetry (SuSy); see [1] for a review. This theory predicts the existence of new particles that are not included in the SM. In any case, the physics we describe herein is also valid in the absence of a SuSy scenario; in this respect, the role of SuSy is solely to provide a microscopic origin for the LSV. The SM is assumed to be Lorentz Symmetry (LoSy)1 invariant. This prediction is likely valid only up to certain energy scales, beyond which a LoSy Violation (LSV) might occur. There is a general framework known as the SM Extension (SME) [2, 3, 4] that allows us to test the low-energy manifestations of LSV. In two recent works [5, 6] on the SME, we considered violations of LoSy differing in the handedness of the Charge conjugation-Parity-Time reversal (CPT) symmetry and in whether the impact of photinos on photon propagation is taken into account; we came up with four classes. For the CPT-odd classes (\(k^\mathrm{AF}_{\alpha }\) breaking vector) associated with the Carroll–Field–Jackiw (CFJ) model, the dispersion relations (DRs) and the Lagrangian show for the photon an effective mass, gauge-invariant, and proportional to \(|{\vec k}^\mathrm{AF}|\). The group velocity exhibits a deviation from the speed of light c. The deviation depends on the inverse of the frequency squared, as predicted by de Broglie [7]. For the CPT-even classes (\(k_\mathrm{F}^{\alpha \nu \rho \sigma }\) breaking tensor), when the photino is considered, the DRs also display a massive behaviour, inversely proportional to a coefficient in the Lagrangian and to a term linearly dependent on \(k_\mathrm{F}^{\alpha \nu \rho \sigma }\). All DRs feature an angular dependence and lack LoSy invariance. Complex or simply imaginary frequencies and super-luminal speeds may appear in defined cases. Furthermore, we have shown the emergence of birefringence. Finally, for both CPT sectors, we have pointed out the non-conservation of the photon energy-momentum tensor in vacuo [6]. Hereafter, we deal with the latter result and give an order of magnitude of the energy change that light would undergo through propagation in a LSV universe. The energy variations, if losses, would translate into frequency damping if the excitation were a photon.
Generally, the wave-particle correspondence, even for a single photon [8], leads us to consider that the non-conservation of energy corresponds to a photon energy variation and thereby a red or a blue shift. Before stepping into the equations, we intend to present the physical reason for why the non-conservation arises even in case of a constant EM background and of a constant LSV breaking vector (the breaking tensor appears either under a derivative or coupled to a derivative of the EM background). We recall that the CFJ equations of motion and the action are gauge-invariant but they originate from a Lagrangian density which is not gauge-invariant. Indeed, the gauge dependence of the Lagrangian density is a surface term to be neglected in the action. Conversely, gauge invariance is not acquired when processing the Lagrangian density of the classical massive electromagnetism of de Broglie-Proca. Concerning the non-conservation, the action contains a contribution \(\epsilon ^{\kappa \lambda \mu \nu }k^\mathrm{AF}_\kappa A_\lambda F_{\mu \nu }\), such that even if the EM background is constant, the corresponding background four-potential is not, \(A_\beta = x^\alpha F_{\alpha \beta }\). Thereby, there is an explicit \(x^\alpha \) dependence at the level of the Lagrangian. This determines a source of energy-momentum non-conservation, according to the Noether theorem. Otherwise put, there is an exchange of energy-momentum between the photon and the EM background. The latter is external to the system and does not follow the dynamics dictated by the photon action. We also remark that the four-curl of \(k^\mathrm{AF}\) is zero. This guarantees gauge invariance of the action and, in a simply connected space, \(k^\mathrm{AF}\) may be expressed as the four-gradient of a scalar function. 2 Energy-momentum non-conservation Our most general scenario is composed of \(k^\mathrm{AF}_\alpha \) and \(k_\mathrm{F}^{\alpha \nu \rho \sigma }\); \(f_{\alpha \nu }\) represents the photon field and \(a_\nu \) is the four-potential; \(F_{\alpha \nu }\) the EM background field, \(j^{\nu }\) the external current independent of the latter. The symbol * stands for the dual field. Starting from the field equation [6] in SI units (\(\mu _0 = 4\pi \times 10^{-7}\) NA\(^{-2}\)), where we used \(\partial _\alpha k^\mathrm{AF}_{\nu } - \partial _\nu k^\mathrm{AF}_{\alpha } = 0\) for the virtue of gauge invariance, and where \(\mathcal{F}^{\mu \nu } = F^{\mu \nu } + f^{\mu \nu }\) is the total field $$\begin{aligned} \partial _{\alpha }\mathcal{F}^{\alpha \nu }+ k^\mathrm{AF}_{\alpha }\ ^{*}\mathcal{F}^{\alpha \nu }+\left( \partial _{\alpha }k_\mathrm{F}^{\alpha \nu \kappa \lambda }\right) \mathcal{F}_{\kappa \lambda } +k_\mathrm{F}^{\alpha \nu \kappa \lambda }\partial _{\alpha }F_{\kappa \lambda } = \mu _0 j^{\nu },\nonumber \\ \end{aligned}$$ and adopting the identities indicated in [6], we worked out the photon energy-momentum tensor $$\begin{aligned} \theta _{\ \rho }^{\alpha }&= \frac{1}{\mu _0} \left( f^{\alpha \nu }f_{\nu \rho } + \frac{1}{4} \delta _{\rho }^{\alpha }f^{2} - \frac{1}{2} k^\mathrm{AF}_{\rho }\ ^{*}f^{\alpha \nu }a_{\nu }\right. \nonumber \\&\quad +\left. 
k_\mathrm{F}^{\alpha \nu \kappa \lambda }f_{\kappa \lambda }f_{\nu \rho } +\frac{1}{4} \delta _{\rho }^{\alpha }k_\mathrm{F}^{\kappa \lambda \alpha \beta }f_{\kappa \lambda }f_{\alpha \beta }\right) , \end{aligned}$$ and its non-conservation $$\begin{aligned} \partial _{\alpha }\theta _{\ \rho }^{\alpha }&= j^{\nu }f_{\nu \rho } - \frac{1}{\mu _0} \bigg [\left( \partial _{\alpha }F^{\alpha \nu }\right) f_{\nu \rho } + k^\mathrm{AF}_{\alpha }\ ^{*}F^{\alpha \nu }f_{\nu \rho } \nonumber \\&\quad + \frac{1}{2}\left( \partial _{\alpha }k^\mathrm{AF}_{\rho }\right) \ ^{*}f^{\alpha \nu }a_{\nu } - \frac{1}{4}\left( \partial _{\rho }k_\mathrm{F}^{\alpha \nu \kappa \lambda }\right) f_{\alpha \nu }f_{\kappa \lambda } \nonumber \\&\quad + \left( \partial _{\alpha }k_\mathrm{F}^{\alpha \nu \kappa \lambda }\right) F_{\kappa \lambda }f_{\nu \rho } + k_\mathrm{F}^{\alpha \nu \kappa \lambda }\left( \partial _{\alpha }F_{\kappa \lambda }\right) f_{\nu \rho }\phantom {+ \frac{1}{2}\left( \partial _{\alpha }k^\mathrm{AF}_{\rho }\right) \ ^{*}f^{\alpha \nu }a_{\nu } - \frac{1}{4}\left( \partial _{\rho }k_\mathrm{F}^{\alpha \nu \kappa \lambda }\right) f_{\alpha \nu }f_{\kappa \lambda } }\bigg ]. \end{aligned}$$ As mentioned, although derived in a SuSy framework embedding LSV, Eqs. (2, 3) are applicable without any reference to SuSy. Few remarks appear necessary for appreciating Eqs. (2, 3). The right-hand side of Eq. (3) exhibits all types of terms that describe the exchange of energy between the photon, the LSV parameters, the EM background field and the external current, taking into account an \(x^\alpha \)-dependence of the LSV parameters and of the EM background field. In Eq. (3), the first two right-hand side terms are purely Maxwellian. The energy-momentum tensor in Eq. (2) loses its symmetry, and thereby \(\theta _{0i}\ne \theta _{i0}\). This tells us that the momentum density \(\theta _{0i}\) does not correspond any longer to the extended Poynting vector \(\theta _{i0}\). Setting \(\rho = i\) in Eq. (3), we have \(\partial _{\alpha }\theta _{\ \rho }^{\alpha } = \partial _{0}\theta _{\ i}^{0} + \partial _{j}\theta _{\ i}^{j} = j^\nu f_{\nu i} + \cdots = j^0 f_{0 i} + j^k f_{k i} + \cdots \), so that the density of the Lorentz force appears at the right-hand side. Therefore, we interpret \(\theta _{\ i}^{0}\) as the momentum density of the wave (the time derivative of the momentum provides the force). We return to a comment made in the Introduction. In Eq. (3) the term \(k^\mathrm{AF}_{\alpha }\ ^{*}F^{\alpha \nu }f_{\nu \rho }\) is space-time independent. Indeed, \(k^\mathrm{AF}_{\alpha }\) from the CFJ Lagrangian [9] depends on the four-potential. By splitting the total field in background and photon fields, an explicit dependence on the EM background potentials appears now in the CFJ Lagrangian [10]. But, if the background field is constant, the background potential must necessarily show a linear dependence on \(x_\mu \) and translation invariance of the Lagrangian is thereby lost. The term \(k_\mathrm{F}^{\alpha \nu \kappa \lambda }\left( \partial _{\alpha }F_{\kappa \lambda }\right) f_{\nu \rho }\), even if \(k_\mathrm{F}^{\alpha \nu \kappa \lambda }\) is constant, breaks translation invariance due to the space-time dependence of the EM background field. We finally notice that there is energy non-conservation even in absence of an EM background field and of an external current. This is due to the presence of LSV space-time dependent terms. Since we are focusing on energy, we can tailor Eq. 
(3) to our needs, and thereby we set \(\rho = 0\). Since the EM field tensor has no diagonal terms, where this is applicable \(\nu \) takes only the spatial values i. We have
$$\begin{aligned} \partial _{\alpha }\theta _{\ 0}^{\alpha }= & {} j^{i}f_{i 0} - \frac{1}{\mu _0} \left[ \left( \partial _{\alpha }F^{\alpha i}\right) f_{i 0} + k^\mathrm{AF}_{\alpha }\ ^{*}F^{\alpha i}f_{i 0} \right. \nonumber \\&+ \frac{1}{2}\left( \partial _{\alpha }k^\mathrm{AF}_{0}\right) \ ^{*}f^{\alpha \nu }a_{\nu } - \frac{1}{4}\left( \partial _{0}k_\mathrm{F}^{\alpha i\kappa \lambda }\right) f_{\alpha \nu }f_{\kappa \lambda } \nonumber \\&+ \left. \left( \partial _{\alpha }k_\mathrm{F}^{\alpha i\kappa \lambda }\right) F_{\kappa \lambda }f_{i 0} + k_\mathrm{F}^{\alpha i \kappa \lambda }\left( \partial _{\alpha }F_{\kappa \lambda }\right) f_{i 0}\right] . \end{aligned}$$
Table 1 provides the upper limits of the LSV terms.
Table 1 Upper limits of the LSV parameters (the last value is in SI units). \(^\mathrm{a}\)Energy shifts in the spectrum of the hydrogen atom [11]; \(^\mathrm{b}\)rotation of the polarisation of light in resonant cavities [11]; \(^\mathrm{c,e}\)astrophysical observations [12] (such estimates are close to the Heisenberg limit on the smallest measurable energy, mass or length for a given time t, set equal to the age of the universe); \(^\mathrm{d}\)rotation of the polarisation of light in resonant cavities [11]; \(^\mathrm{f}\)typical value [12]
\(|\vec {k}^\mathrm{AF}|\): \(^\mathrm{a}\) \(< 10^{-10}\) eV \( = 1.6 \times 10^{-29}\) J\(;~ 5.1 \times 10^{-4}\) m\(^{-1}\); \(^\mathrm{b}\) \(< 8\times 10^{-14}\) eV \( = 1.3 \times 10^{-32}\) J\(;~ 4.1 \times 10^{-7}\) m\(^{-1}\); \(^\mathrm{c}\) \(< 10^{-34}\) eV \( = 1.6 \times 10^{-53}\) J\(;~ 5.1 \times 10^{-28}\) m\(^{-1}\)
\(k^\mathrm{AF}_0\): \(^\mathrm{d}\), \(^\mathrm{e}\)
\(k_\mathrm{F}\): \(^\mathrm{f}\) \(\simeq 10^{-17}\)
3 Sizing the EM background field
For the magnetic fields, we refer to [13, 14].
a. Spatial dependence of the magnetic field in the Milky Way
The inter-stellar magnetic field in the Milky Way has a strength of around 500 pT. It has regular and fluctuating components of comparable strengths. The Galactic disk contains the regular field, which is approximately horizontal and parallel, being spirally shaped with a generally small opening angle of about \(p = 10^{\circ }\). In cylindrical coordinates, \(B \simeq B_r \cdot e_r + B_\phi \cdot e_\phi \), with \(B_r = B_\phi \tan p\). In the Galactic halo, the regular field is not horizontal, probably holding an X-shape, as observed in spiral galaxies. The fluctuating field varies over a whole range of spatial scales, from 100 parsecs down to very small scales.
b. Time dependence of the magnetic field in the Milky Way
The regular field evolves over very long time scales, such as 1 Gyr. It likely increases exponentially in time until equi-partition with the kinetic energy is achieved, at which point it saturates. Indeed, the time derivative of the magnetic field obeys an equation containing spatial derivatives of B, whose coefficients are independent of B until the counter-reaction of the small-scale turbulent motion of the inter-stellar fluid comes into play. Physically, the galaxy's large-scale shearing of the poloidal field generates an azimuthal field, which in turn generates a poloidal field. The solution of this type of equation is indeed exponential in time: it is a dynamo mechanism. The fluctuating field varies over much shorter time scales, probably 1 Myr.
c. Other galaxies
External galaxies also possess inter-stellar magnetic fields, with strengths of several hundred pT. While in spiral galaxies the fields resemble those of our own Galaxy, in elliptical galaxies the regular component is absent and only fluctuating components are present.
d. Inter-galactic space
No certain conclusion can be drawn about the Inter-Galactic Medium (IGM). The medium between galaxies inside a cluster of galaxies hosts a fluctuating field with a typical strength of a few nT. The IGM outside of clusters of galaxies may also contain magnetic fields. Claims of detection of such fields have been made, but confirmation is missing.
e. Electric field
The inter-stellar and inter-galactic media are good electric conductors, such that magnetic fields are frozen into the plasma. Thereby, the electric field is given by \({\vec {E}} \propto {\vec v}_p \times {\vec {B}}\), where \(v_p\) is the plasma velocity. In general, \(v_p \ll c\), thus \(E \ll B\), and the electric field is thereby neglected herein. This assumption may not hold locally, and photons may pass through intense electric fields.
4 Sizing the energy non-conservation
In Eq. (4), we neglect the tensorial perturbation \(k_\mathrm{F}\), on the grounds that it is less likely to condensate, i.e. to take an expectation value different from zero, in contrast to the vectorial CFJ perturbation. If we consider that SuSy is a viable path beyond the SM, it is shown in [15, 16] that \(k_\mathrm{F}\) emerges as the product of multiple SuSy condensates, in contrast to \(k_\mathrm{AF}\), which consists of a single SuSy condensate; therefore the latter dominates over \(k_\mathrm{F}\). On the other hand, independently of the considerations based on SuSy, we justify neglecting the \(k_\mathrm{F}\) term on other grounds. This term is quadratic in the field strength and in the frequency, whereas the CFJ term contains a single derivative and is thereby linear in the frequency. If we confine our investigation to low frequencies, as we do here, it is reasonable that the \(k_\mathrm{AF}\) term yields the dominating contribution; for very high frequencies, instead, we expect the \(k_{F}\) term to dominate. Further, we suppose that \(k^\mathrm{AF}_0\) is constant. We thus get
$$\begin{aligned} \partial _{\alpha }\theta _{\ 0}^{\alpha } = j^{i}f_{i 0} -\frac{1}{\mu _0} \left[ \left( \partial _{\alpha }F^{\alpha i}\right) f_{i 0} + k^\mathrm{AF}_{\alpha }\ ^{*}F^{\alpha i}f_{i 0} \right] . \end{aligned}$$
We are interested in the change of energy along the line of sight x, where the photon path lies. We now render the terms in Eq. (5) explicit. In the absence of an electric field, only the spatial components of the EM background field tensor are present, as well as the mixed space-time components of the dual EM background field tensor, that is, the magnetic field. We also suppose the absence of an external current.
Equation (5) is approximated by (\(\vec e\) and \(\vec b\) are the electric and magnetic field of the photon, respectively) $$\begin{aligned}&\frac{1}{2}\frac{\partial }{\partial t}\left( \epsilon _0 {\vec e}^{\!~2} - \frac{k^\mathrm{AF}_{0}}{\mu _0c}{\vec e}\cdot {\vec a} + \frac{{\vec b}^{\!~2}}{\mu _0}\right) + \frac{1}{\mu _0} \frac{\partial }{\partial x} \left( {\vec e} \times {\vec b}\right) _x \nonumber \\&\quad = -\frac{c}{\mu _0} \left[ \left( \partial _{x}F^{xi}\right) f_{i 0} + k^\mathrm{AF}_{0}\ ^{*}F^{0 i}f_{i 0} \right] \nonumber \\&\quad = -\frac{c}{\mu _0} \left[ \underbrace{\partial _{x}F^{xi}}_\mathrm{First~term} + \underbrace{k^\mathrm{AF}_{0}\ ^{*}F^{0 i}}_\mathrm{Second~term}\right] f_{i 0}. \end{aligned}$$ The dimensions in Eq. (6) are Jm\(^{-3}\)s\(^{-1}\). The left-hand side of Eq. (6) corresponds to the expansion of \(\partial _{\alpha }\theta _{\ 0}^{\alpha }\), with \(\theta _{\ 0}^{\alpha }\) given by Eq. (2) where the \(k_\mathrm{F}\) contribution has been neglected for the reasons previously stated. We assume the absence of IGM magnetic field fluctuations over long time scales, that amounts to considering only the time fluctuations in the emitting galaxy and in our Milky Way, estimated as \(10^{21}\) m in size. The first term is estimated as \(5\times 10^{-10}/10^{21}\) Tm\(^{-1}\), and thereby dropped henceforth. Under all these assumptions, the energy variation comes chiefly from the second term. The \(k^\mathrm{AF}_{0}\) component of the LSV vector extends to the entire universe and thus it is not confined to a limited region. We need to integrate over the light travel time. For a source at \(z = 0.5\), the look-back time is \(t_{LB}= {5} \times 10^9\) yr = \(1.57 \times 10^{17}\) s [17], having taken a somewhat mean value among different values of the cosmological parameters (\(H_0 = 70\) km/s per Mpc Hubble–Humason constant, \(\Omega _m {= 0.3}\) matter density, \(\Omega _\Lambda {= 0.7}\) dark energy density). We set an arbitrary safe margin \(\varrho \), defined as positive, to take into account that the many magnetic fields, estimated at \(B = 5 \times 10^{-10} - 5 \times 10^{-9}\) T each, and crossed by light from the source to us, have likely different orientations and partly compensate their effects on the wave energy.2 Thus, the energy density change of the wave due to the second term, \(\Delta \) E, is (\(B = 2.75\) nT) $$\begin{aligned} |\Delta {\textsf {E}}|_{z=0.5} = \frac{c}{\mu _0} k^\mathrm{AF}_0 |B f_{i 0}| {\varrho }~t_{LB} \approx { 1.02 \times 10^{23}}~k^\mathrm{AF}_0 {\varrho } |f_{i 0}|. \end{aligned}$$ For \(h=6.626 \times 10^{-34}\) Js, the frequency change \(\Delta \nu \) is $$\begin{aligned} |\Delta \nu |_{z=0.5} = \frac{ 1.02 \times 10^{23}}{h}k^\mathrm{AF}_0 {\varrho } {|f_{i 0}|} = { 1.55 \times 10^{56}}~k^\mathrm{AF}_0 {\varrho } |f_{i 0}|. \end{aligned}$$ We now need to compute \(|f_{i 0}| = |{ {\vec e}}|/c\), the electric field of the photons. We consider the Maxwellian - in first approximation - classic intensity \(I = \epsilon _0 c {e^2} = \epsilon _0 c^3 f_{i0}^2\) (\(c{ b} = {e}\)). The frequency \(\nu = 4.86 \times 10^{14}\) Hz corresponds to the Silicon absorption line at 6150 Å, of 1a Super-Nova (SN) type. The monochromatic AB magnitude is based on flux measurements that are calibrated in absolute units [18]. 
It is defined as the logarithm of a spectral flux density3 SFD with the usual scaling of astronomical magnitudes and about 3631 Jy as zero-point4 $$\begin{aligned} m_\mathrm{AB} = -2.5 \log _{10}SFD - 48.6, \end{aligned}$$ in cgs units. For \(m_\mathrm{AB} = - 19\) (appropriate for SN Ia around the maximal light in this wave band), we get \(SFD = {1.44 \times } 10^{-15}\) Js\(^{-1}\) Hz\(^{-1}\) m\(^{-2}\) having been converted to SI units. We integrate over the frequency width of a bin, that is 30 Å or5 2.37 THz and get \(I= {3.4 } \times 10^{-3}\) Js\(^{-1}\) m\(^{-2}\). For \(\epsilon _0 = 8.85 \times 10^{-12}\) Fm\(^{-1}\), we have $$\begin{aligned} f_{i 0} = \sqrt{\frac{I}{\epsilon _0c^3}} = {3.79} \times 10^{-9} \mathrm{Vsm}^{-2}. \end{aligned}$$ Finally, from Eq. (8), we get $$\begin{aligned} |\Delta \nu |_{z=0.5}^{\nu = 486 \mathrm{THz}} = { 5.87 \times 10^{47}}~k^\mathrm{AF}_0 {\varrho }. \end{aligned}$$ The large range of values of \(k^\mathrm{AF}_0\) and \(\varrho \) render the range of values for the estimate in Eq. (11) also large. We recall that \(z = \Delta \nu /\nu _o\) where \(\Delta \nu = \nu _e - \nu _o\) is the difference between the observed \(\nu _o\) and emitted \(\nu _e\) frequencies. For \(z_c = 0.5\), where \(z_c\) is the redshift due to the expansion of the universe, \(|\Delta \nu |_{z=0.5}^{\nu = 486 \mathrm{THz}} = 1.62\) THz. We ask whether we can get a similar value for \(z_{LSV}\), the shift due to LSV. We consider two numerical applications. For \(k^\mathrm{AF}_0 = 10^{-10}\)m\(^{-1}\), Table 1, and \({ \varrho } { ~\approx 10^{-23}}\), which represents an extreme misalignment of the magnetic fields, or for \(k^\mathrm{AF}_0 = {5.1 \times 10^{-28}}\)m\(^{-1}\), Table 1, and \({ \varrho } { ~ \approx 10^{-6}}\), we get \(|\Delta \nu |_{z=0.5}^{\nu = 486 \mathrm{THz}}\) in the range of \(10^{14}\) Hz. This would strongly influence the measurement of the redshift due to the expansion of the universe, since \(z_{LSV}\) would be comparable to \(z_c\). Instead, combining the astrophysical upper limit on the size of \(k^\mathrm{AF}_0\) with a value of \({ \varrho } \ll 10^{-7}\), it will conversely produce a small effect. 5 Impact on cosmology We have determined an expression for an LSV frequency shift. The size of the effect may be negligible for cosmology, but just of relevance for the foundations of physics. Nevertheless, the rough estimates in the previous section seem to point to a large impact. Here below, we consider that the LSV shift takes a value large enough to be considered, and thereby to be superposed to the cosmological redshift. Which interpretation should we adopt in analysing spectra from distant sources? The parameter z is given by \(z = \Delta /\lambda _e\) where \(\Delta {\lambda } = \lambda _o - \lambda _e\) is the difference between the observed \(\lambda _o\) and emitted \(\lambda _e\) wavelengths. Expansion causes \(\lambda _e\) to stretch to \(\lambda _c\) that is \(\lambda _c = (1+z_c)\lambda _e\). The wavelength \(\lambda _c\) could be further stretched or shrunk for the supposed LSV shift to \(\lambda _o = (1+z_{LSV})\lambda _c = (1+z_{LSV}) (1+z_c) \lambda _e\). But since \(\lambda _o = (1+z)\lambda _e\), finally we have $$\begin{aligned} 1 + z = (1+z_{LSV}) (1+z_c) = 1 + z_{LSV} + z_c + \mathcal{O}(z^2). \end{aligned}$$ A reverse estimate process would instead set an error of the redshift measurement and assess upper limits to the LSV parameters. 
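As a check on the orders of magnitude, the short script below (our own, using only the constants and inputs quoted in Sects. 4 and 5, with names of our choosing) reproduces the chain from Eq. (7) to Eq. (11): it converts the AB magnitude into a spectral flux density, integrates over the 30 Å bin to obtain the intensity, derives the photon field strength \(f_{i 0}\), and returns the coefficient multiplying \(k^\mathrm{AF}_0 \varrho\) in Eq. (11).

```python
import numpy as np

# Physical constants (SI)
c, mu0, eps0, h = 2.998e8, 4e-7 * np.pi, 8.85e-12, 6.626e-34

B = 2.75e-9            # background magnetic field [T]
t_LB = 1.57e17         # look-back time to z = 0.5 [s]
m_AB = -19             # SN Ia AB magnitude near maximum light
nu = 4.86e14           # Si absorption line at 6150 Angstrom [Hz]
dnu = nu * 30 / 6150   # 30 Angstrom bin width, about 2.37e12 Hz

# Eq. (9): AB magnitude -> spectral flux density (cgs), then converted to SI
sfd_cgs = 10 ** (-(m_AB + 48.6) / 2.5)   # erg s^-1 cm^-2 Hz^-1
sfd = sfd_cgs * 1e-7 / 1e-4              # J s^-1 m^-2 Hz^-1, about 1.44e-15
I = sfd * dnu                            # intensity over the bin, about 3.4e-3 W m^-2

# Eq. (10): photon field strength
f_i0 = np.sqrt(I / (eps0 * c**3))        # about 3.8e-9 V s m^-2

# Eqs. (7), (8), (11): coefficient of k0_AF * rho in the frequency shift
coeff = (c / mu0) * B * t_LB * f_i0 / h  # about 5.9e47
print(f"|Delta nu| ~ {coeff:.2e} * k0_AF * rho  [Hz]")
```

With \(k^\mathrm{AF}_0 = 10^{-10}\) m\(^{-1}\) and \(\varrho \approx 10^{-23}\), or with \(k^\mathrm{AF}_0 = 5.1 \times 10^{-28}\) m\(^{-1}\) and \(\varrho \approx 10^{-6}\), the script returns a shift of order \(10^{14}\) Hz, matching the estimate quoted in the text.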
6 Conclusions, discussion and perspectives
We have introduced a new frequency shift for in vacuo propagation of a photon in a LSV scenario. The physical situation is as follows. We have neglected time variations of the LSV breaking terms and of the magnetic fields. Thus, the time averaging of the LSV shift differs from zero. Along the line of sight, the space averaging is also never zero, unless, obviously, there is no LoSy breaking or the magnetic field vectors perfectly cancel one another. But, for the observer, there is an angular dependence of the LSV frequency shift, due to the LSV itself. For each direction, there is a value of \(k^\mathrm{AF}_0\) and of \(\varrho \), and thereby a direction-dependent LSV shift. The issue is whether the LSV shift is large enough to have an impact on the observations. We certainly need to put stringent model-independent observational and experimental upper limits on \(z_{LSV}\) through constraints on the LSV parameters and on the EM field values. We question whether the sign of \(z_{LSV}\), and thereby a red or blue shift, could not be determined a priori on the basis of perturbation theory. Undoubtedly, the orientations and scale lengths of the LSV parameters, as well as the photon path crossing multiple background EM fields differently oriented, render this shift very dependent on the trajectory. We remark that the discrepancy between the luminosity distance derived with standard cosmology and the data, nowadays mostly explained by assuming dark energy, should be reviewed in light of this additional frequency shift. Classic electromagnetism has been well tested, as has general relativity. This has not impeded the proposition of alternative formulations of gravitation during the last century, and lately to circumvent the need for dark matter and dark energy. We point out that revisiting astrophysical data with non-Maxwellian electromagnetism opens the door to radically new interpretations. For instance, if we suppose that a static source bursts, and that at the start it emits higher frequencies than at the end, this may mimic a time dilation effect from a receding source, if massive photons are considered. Indeed, for the CPT-odd handedness classes associated with LSV which entail massive photons, the deviation from c of the group velocity is inversely proportional to the square of the frequency. Thereby, the photons emitted towards the end of the burst will take more time than the initial photons to reach an observer. Incidentally, the dependence of the group velocity on the frequency allows us to set upper limits on the photon mass from Fast Radio Bursts (FRBs) [19, 20, 21, 22, 23]. Generally, there is continuing interest in testing non-Maxwellian electromagnetism, be it massive or non-linear. The official upper limit on the photon mass is 10\(^{-54}\) kg [24]6, but see [26] for comments on the reliability of such a limit and for an experiment with solar wind satellite data. While opening a new low-frequency radio-astronomy window with a swarm of nano-satellites would be desirable [22], terrestrial experiments are faster to implement [27]. Among the non-linear effects, the last one to be detected is photon–photon scattering at CERN [28].
Footnotes
1. Poincaré has equally contributed to the establishment of these symmetries.
2. We have not considered the potential presence of a strong magnetic field at the source.
3. The spectral flux density is the quantity that describes the rate at which energy density is transferred by EM radiation per unit wavelength.
4. In radio-astronomy, the jansky is a non-SI unit of spectral flux density equivalent to \(10^{-26}\) W m\(^{-2}\) Hz\(^{-1}\).
5. For the frequency width, we have computed \(\frac{\Delta \lambda }{\lambda }=\frac{\Delta \nu }{\nu }\).
6. de Broglie estimated the photon mass to be lower than 10\(^{-53}\) kg already in 1923, with a stroke of genius.
The authors thank K. Ferrière (Toulouse) for the information on galactic and inter-galactic EM fields; M. López Corredoira (La Laguna) and M. Nicholl (Edinburgh) for comments on SNs. Thanks are also due to L. P. R. Ospedal (CBPF) for long and fruitful discussions on aspects of LSV, to M. Makler (CBPF) for remarks on dark energy, and to J. Sales de Lima (São Paulo) for his interest. The anonymous referee provided constructive remarks.
References
1. S.P. Martin, A supersymmetry primer (2016). arXiv:hep-ph/9709356v7
2. D. Colladay, V.A. Kostelecký, CPT violation and the standard model. Phys. Rev. D 55, 6760 (1997)
3. D. Colladay, V.A. Kostelecký, Lorentz-violating extension of the standard model. Phys. Rev. D 58, 116002 (1998)
4. S. Liberati, Tests of Lorentz invariance: a 2013 update. Class. Quantum Gravity 30, 133001 (2013)
5. L. Bonetti, L.R. dos Santos Filho, J.A. Helayël-Neto, A.D.A.M. Spallicci, Effective photon mass from Super and Lorentz symmetry breaking. Phys. Lett. B 764, 203 (2017)
6. L. Bonetti, L.R. dos Santos Filho, J.A. Helayël-Neto, A.D.A.M. Spallicci, Photon sector analysis of Super and Lorentz symmetry breaking: effective photon mass, bi-refringence and dissipation. Eur. Phys. J. C 78, 811 (2018)
7. L. de Broglie, La Mécanique Ondulatoire du Photon. Une Nouvelle Théorie de la Lumière (Hermann, Paris, 1940)
8. A. Aspect, P. Grangier, Wave-particle duality for single photons. Hyperfine Interact. 37, 1 (1987)
9. S.-S. Chern, J. Simons, Characteristic forms and geometric invariants. Ann. Math. 99, 48 (1974)
10. S.M. Carroll, G.B. Field, R. Jackiw, Limits on a Lorentz- and parity-violating modification of electrodynamics. Phys. Rev. D 41, 1231 (1990)
11. Y.M.P. Gomes, P.C. Malta, Lab-based limits on the Carroll–Field–Jackiw Lorentz-violating electrodynamics. Phys. Rev. D 94, 025031 (2016)
12. V.A. Kostelecký, N. Russell, Data tables for Lorentz and CPT violation. Rev. Mod. Phys. 83, 11 (2011)
13. M. Alves, K. Ferrière, The interstellar magnetic field from galactic scales down to the heliosphere, in Proc. 42nd COSPAR Scientific Assembly, abstract id. D1.2-2-126-30, 14-22 July 2018, Pasadena
14. K. Ferrière, private communications, 4 December 2018 and 22 May 2019
15. H. Belich, L.D. Bernald, P. Gaete, J.A. Helayël-Neto, The photino sector and a confining potential in a supersymmetric Lorentz-symmetry-violating model. Eur. Phys. J. C 73, 2632 (2013)
16. H. Belich, L.D. Bernald, P. Gaete, J.A. Helayël-Neto, F.J.L. Leal, Aspects of CPT-even Lorentz-symmetry violating physics in a supersymmetric scenario. Eur. Phys. J. C 75, 291 (2015)
17. E.L. Wright, A cosmology calculator for the World Wide Web. Publ. Astron. Soc. Pac. 118, 1711 (2006)
18. J.B. Oke, J.E. Gunn, Secondary standard stars for absolute spectrophotometry. Astrophys. J. 266, 713 (1983)
19. L. Bonetti, J. Ellis, N.E. Mavromatos, A.S. Sakharov, E.K. Sarkisian-Grinbaum, A.D.A.M. Spallicci, Photon mass limits from fast radio bursts. Phys. Lett. B 757, 548 (2016)
20. X.-F. Wu, S.-B. Zhang, H. Gao, J.-J. Wei, Y.-C. Zou, W.-H. Lei, B. Zhang, Z.-G. Dai, P. Mészáros, Constraints on the photon mass with fast radio bursts. Astrophys. J. Lett. 822, L15 (2016)
21. L. Bonetti, J. Ellis, N.E. Mavromatos, A.S. Sakharov, E.K. Sarkisian-Grinbaum, A.D.A.M. Spallicci, FRB 121102 casts new light on the photon mass. Phys. Lett. B 768, 326 (2017)
22. M.J. Bentum, L. Bonetti, A.D.A.M. Spallicci, Dispersion by pulsars, magnetars, fast radio bursts and massive electromagnetism at very low radio frequencies. Adv. Space Res. 59, 736 (2017)
23. L. Shao, B. Zhang, Bayesian framework to constrain the photon mass with a catalog of fast radio bursts. Phys. Rev. D 95, 123010 (2017)
24. M. Tanabashi, Particle Data Group, Review of particle physics. Phys. Rev. D 98, 030001 (2018)
25. L. de Broglie, Ondes et quanta. Comptes Rendus Hebd. Séances Acad. Sc. Paris 177, 507 (1923)
26. A. Retinò, A.D.A.M. Spallicci, A. Vaivads, Solar wind test of the de Broglie–Proca massive photon with Cluster multi-spacecraft data. Astropart. Phys. 82, 49 (2016)
27. J. Rosato, Retaining hypothetical photon mass in atomic spectroscopy models. Eur. Phys. J. D 73, 7 (2019)
28. M. Aaboud et al. (ATLAS Collaboration), Evidence for light-by-light scattering in heavy-ion collisions with the ATLAS detector at the LHC. Nat. Phys. 13, 852 (2017)
Funded by SCOAP3
1. Departamento de Astrofísica, Cosmologia e Interações Fundamentais (COSMO), Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brasil
2. Observatoire des Sciences de l'Univers en région Centre (OSUC) UMS 3116, Université d'Orléans, Orléans, France
3. Collegium Sciences et Techniques (CoST), Pôle de Physique, Université d'Orléans, Orléans, France
4. Laboratoire de Physique et Chimie de l'Environnement et de l'Espace (LPC2E) UMR 7328, Centre National de la Recherche Scientifique (CNRS), Orléans, France
Helayël-Neto, J.A. & Spallicci, A.D.A.M. Eur. Phys. J. C (2019) 79: 590. https://doi.org/10.1140/epjc/s10052-019-7105-9
EPJC is an open-access journal funded by SCOAP3 and licensed under CC BY 4.0
Lewis Fry Richardson: His Intellectual Legacy and Influence in the Social Sciences, pp 25-34
The Influence of the Richardson Arms Race Model
Ron P. Smith
First Online: 15 November 2019
Part of the Pioneers in Arts, Humanities, Science, Engineering, Practice book series (PAHSEP, volume 27)
This chapter reviews the Richardson arms race model: a pair of differential equations which capture an action-reaction process. Whereas many of Richardson's equations were quite specific about what they referred to, the arms race model was not. This lack of specificity was both a strength and a weakness. Its strength was that, with different interpretations, it could be applied as an organising structure in a wide variety of contexts. Its weakness was that the model could not be estimated or tested without some auxiliary interpretation. The chapter considers the impact of these issues of interpretation and empirical application on the influence of the Richardson arms race model.
This is a revised version of a paper presented at the Richardson Panel at ISA 2018, Saturday 7 April 2018. I am grateful to participants of the panel for comments.
There are many definitions of arms races, but for the purpose of this chapter they can be thought of as enduring rivalries between pairs of hostile powers which prompt competitive acquisition of military capability. Two approaches to modelling arms races have been particularly influential. One is as a two-person game, in particular the Prisoner's dilemma, where the choices are to arm or not to arm, and the dominant strategy, for both to arm, is not Pareto optimal. The other, which is the focus of this chapter, is the Richardson model of the arms race as an action-reaction process, represented by a pair of differential equations. Just as the two supply and demand equations have structured thought about the dynamics of markets for most economists, the two Richardson equations have structured thought about the dynamics of arms races for most subsequent analysts. Not only did he develop the model, he attempted to test it using data on military expenditure prior to World War I. One of the strengths of the model is that it has prompted a range of questions, many of which Richardson himself posed. This chapter reviews the influence of the Richardson arms race model on the subsequent literature through these questions, which include: What are the characteristics of the solution to this model? What are the variables and actors? What is the time dimension? How do arms races relate to wars? How should the parameters be interpreted? How can the model allow for strategic factors and budget constraints? How should the model be related to the data? How do you stop arms races? Given the variety of ways that his arms race model has been used, I was tempted to call this piece 'variations on a theme by Richardson', but specifying the theme precisely proved problematic. The voluminous arms race literature that arose from his work has many themes. In trying to identify the themes, I found the papers in the collection edited by Gleditsch & Njølstad (1990), hereafter G&N, very useful. G&N provides an overview that lies roughly half way between the publication of Richardson (1960a), which brought his arms race model to a wider audience, and the present day. With the end of the Cold War, interest in arms races declined somewhat and many of the themes that are in that book remain central.
It is difficult to say anything new in this area, and Wiberg (1990) makes many of the same points as I make below. There is a more technical discussion of many of the issues mentioned here in Dunne & Smith (2007). As most readers of this chapter will probably know, Lewis Fry Richardson (1881–1953) was a Quaker physicist, a Fellow of the Royal Society, who made major contributions in the mathematics of meteorology, turbulence and psychology as well as his work on quantifying conflict. The significance of the work on conflict was only widely recognised posthumously with the publication of Richardson (1960a), Arms and Insecurity, which introduced the arms race model, and Richardson (1960b) Statistics of Deadly Quarrels, which looked at the distribution of conflict deaths. Richardson was a very careful scientist. When he was investigating the hypothesis that the probability of war between two countries was a function of the length of their common border, he double-checked the data and noticed that adjoining countries gave different lengths for their common border: the smaller country tending to think the border longer than the larger. This was because the measured length was a function of the size of the ruler or scale of the map used; small countries tended to use smaller rulers and larger scale maps. Richardson's subsequent studies on this phenomenon introduced the idea of non-integer dimensions and prompted Mandelbrot's work on fractals. A common border may increase the probability of war but, as Richardson recognised, it also tends to increase the amount of trade, which may have a pacifying effect. Richardson approached the analysis as a physicist. He often used differential equations to characterise the dynamics, and tried to match the models to data, often using probabilistic techniques. His work provides excellent teaching material in applied mathematics. Students find the arms race model a nice motivation for a neat system of differential equations which has a range of interesting solutions. Korner (1996) makes pedagogical use of a number of examples of Richardson's work to motivate the applications of mathematics, as well as discussing Richardson's life and influence. The teaching aspect is also noted in a recent paper, Beckmann, Gattke, Lechner & Reimer (2016: 22–23), say about the Richardson equations: 'our objective was to see whether this old staple can be brought back from the world of teaching (where it serves as an example for solving systems of differential equations) into modern research on conflict dynamics.' While serving in the Friends Ambulance Unit during World War I, Richardson began to try to describe the causes of war in systems of equations, which he published as Richardson (1919). Maiolo (2016: 1) says the term arms race originated in the 19th century and was commonly presented as one of the causes of World War I. He quotes Lord Grey, who had been Foreign Secretary when Britain went to war, as writing after the war that 'Great armaments lead inevitably to war. If there are armaments on one side, there must be armaments on the other sides' (Maiolo, 2016: 2). Richardson also cited Grey and thus his equations captured a common perception of the cause of that war. However, as Maiolo (2016: 4) also notes, the sporting metaphor can be misleading: in athletics races have clear start and finish lines, arms races do not. Richardson was not alone in trying to develop mathematical models of conflict. 
About the same time, Lanchester (1916), based on articles published in Engineering in 1914, developed models of the evolution of different types of battle. The models examined the role of the quantity and quality of forces deployed. Lanchester also used a pair of differential equations, though to different ends. One might distinguish a Lanchester tradition, in operational research, of mathematical modelling to win wars, from a Richardson tradition, in peace research, of mathematical modelling to stop wars. MacKay (2020, in this volume) discusses a combination of Richardson's arms race equations with Lanchester's attritional dynamics.

3.2 The Equations

The Richardson model describes the path over time, t, of the levels of arms, x and y, of two countries, A and B:

$$\begin{aligned} dx/dt &= ky - \alpha x + g \\ dy/dt &= lx - \beta y + h \end{aligned}$$

The rate of change of the arms of each country is the sum of a positive reaction to the arms of the other country, a negative reaction to the level of its own arms through a 'fatigue' factor, and a constant component through a 'grievance' factor. Setting it up in this way prompts a set of questions which are internal to the mathematical structure of solving linear differential equations. Does an equilibrium exist? Is it unique? Is it stable? Are there boundary conditions, e.g. x, y > 0? In equilibrium dx/dt = dy/dt = 0, so the equilibrium reaction functions are two straight lines:

$$\begin{aligned} 0 &= ky - \alpha x + g \\ 0 &= lx - \beta y + h \end{aligned}$$

If αβ = kl the lines are parallel; otherwise they intersect once at an equilibrium, which may involve negative values. Because of linearity, if the equilibrium exists, it is unique, and one can then consider how stability varies as a function of the parameters. Here arms race stability refers to the nature of the solution to these equations. An unstable solution would diverge from an equilibrium, for instance exhibiting exponential growth by both countries. Arms race stability is not the same thing as strategic stability, which can itself have many meanings. Richardson related them by suggesting that exponential growth could lead to war, though in principle it could lead to bankruptcy. Diehl (2020, in this volume) discusses the links between arms races and war.

Again, within the internal mathematical structure it is natural to ask if the model generalises. What happens if there are three or more actors? What happens if one relaxes the assumption of linearity? There is a large literature on both these questions. Broadly, as in the three-body problem in physics, the neat simplicity of the conclusions is lost when the model is generalised, and multiple equilibria may exist. For instance, among three countries, the equations for each pair of nations may be stable, but the triplet unstable.

The model has an immediate common-sense plausibility as a description of an interaction between hostile neighbours. This is what makes it such a nice teaching tool. There are also historical examples of such reaction functions, for instance the British policy before World War I of maintaining a fleet as large as the next two largest navies combined. But the model has no unambiguous interpretation.
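To illustrate these solution properties, the following is a minimal Python sketch (not from the chapter; the parameter values are invented purely for illustration) that computes the equilibrium of the two linear reaction functions, checks the stability condition αβ > kl via the eigenvalues of the coefficient matrix, and integrates one trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter values, chosen only to illustrate a stable case.
k, l = 0.9, 0.8          # reaction coefficients
alpha, beta = 1.2, 1.1   # fatigue coefficients
g, h = 0.3, 0.2          # grievance terms

def richardson(t, z):
    x, y = z
    return [k * y - alpha * x + g,
            l * x - beta * y + h]

# Equilibrium of the linear system: A @ [x, y] = [-g, -h].
A = np.array([[-alpha, k], [l, -beta]])
x_eq, y_eq = np.linalg.solve(A, [-g, -h])

# Stability: both eigenvalues of A must have negative real parts,
# which for this system reduces to alpha * beta > k * l.
eigs = np.linalg.eigvals(A)
print("equilibrium:", (x_eq, y_eq), "stable:", bool(np.all(eigs.real < 0)))

# A trajectory from low initial armament levels converges to the equilibrium.
sol = solve_ivp(richardson, (0, 50), [0.1, 0.1])
print("state at t=50:", sol.y[:, -1])
```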
In the physical sciences, when Richardson used equations, for instance in fluid dynamics, he knew exactly what the variables were, what measures they corresponded to, and the time dimension of the dynamic processes involved. Little interpretation was needed. But in the social sciences the interpretation of mathematics is rarely unambiguous. 3.3 Interpreting the Equations The arms race equations prompted a number of questions about the interpretation. There were questions about how to interpret the measures of arming, x and y. In a symmetric arms race, they were the same variable, such as military expenditures or number of warships. In an asymmetric arms race, they could be different types of variable; historically there was an arms race between castle design and siege train technology. They might be quantitative, number of warheads, or qualitative, accuracy of the missiles. They might be given a more psychological interpretation as hostility or friendliness. There were many possibilities. There was also a question about how to interpret the nature of the actors, A and B, and the motives for their actions. They might be countries, alliances, decision makers or non-state actors like terrorists. Their actions might be the result of rational calculations or bureaucratic rules of thumb and there were many possible sources of their hostility. Some, like Intriligator (1975), felt the need to motivate the equations with an explicit objective function for the actors. There were also questions about the time period, months or centuries, over which the interactions were taking place and the extent to which the parameters could be regarded as stable. Finally, from the policy perspective there was the crucial question: how might you stop the process? This lack of specificity was both a strength and a weakness. Its strength was that the model could be applied in a wide variety of domains, by giving the variables x and y and the actors A and B different interpretations. As it was imported into a particular domain, other questions would arise. For instance, an economic interpretation would immediately prompt questions about the nature of the budget constraint. Economists tended to allow for the budget constraint by adding income as an extra variable, but there were many other ways, for instance Wiberg (1990: 366–367) assumes a fixed amount of resources available. The weakness of the lack of specificity was that there was little clarity either about the precise predictions of the model or about the evidence that would falsify it. As a specific example, the parameters α and β could be interpreted as representing: (a) a measure of fatigue, as Richardson did: increased spending exhausts a country depressing the growth of arms; (b) the speed of adjustment towards a desired level in a stock adjustment model or (c) a measure of bureaucratic inertia; or perhaps some combination of the three. The form of the equation would be identical, but the story one told about the parameters would be different in each case. This was important, since in practice these parameters were estimated statistically and needed interpretation. If one does not know where the parameters came from or why they might differ, between the countries or over time, it is difficult to judge whether the statistical estimates are sensible. Just as the term arms race is a metaphor, any model is a metaphor (the equations are interpreted as being like the world in some respects) and there is an issue as to how literally to take these equations. 
Some, like Beckmann et al. (2016) in a critique of the Richardson equations, treat them very literally. If they do not hold exactly, then the Richardson model is wrong or it is a different model. Others treat the model as being more loosely defined and are happy to label any set of equations involving action-reaction processes as a Richardson type model. Intriligator (1975) and Dunne & Smith (2007) take this approach. Economists, treating them like supply and demand curves, organising principles rather than exact specifications, seem inclined to take them less literally. Of course, some do not accept the action-reaction description itself. Senghaas (1990: 15) rejects the explanation of the arms race as an other-directed reciprocal escalation spiral: 'As much research on the biography of weapons systems has shown, the action-reaction scheme is at least highly dubious, if not completely false.' Instead he sees it as inner-directed by the self-centred imperatives of national armaments policy. Gleditsch (1990: 8–9) lists a very large number of explanations for arms acquisitions, organised under four levels: (1) internal factors, such as particular interest groups; (2) actor characteristics such as being an alliance leader or authoritarian rule; (3) relational characteristics, such as action reaction or relations to allies and (4) system characteristics, such as upswings in long economic waves and technological imperatives. While the focus in this chapter is on action-reaction explanations of arms races, much of the work on other explanations was prompted by the desire to criticise the Richardson action-reaction explanation. On whose behaviour they described, Richardson (1960a: 12) was enigmatic about the interpretation of the equations: 'the equations are merely a description of what people would do if they did not stop to think'. Intriligator (1975) derives Richardson type equations from optimising strategies in a nuclear war. Brito & Intriligator (1999) argue that new military technologies which imply increasing returns should mean the end of the Richardson paradigm with its implicit assumption of constant or declining returns to scale. The behaviour of participants and the research questions in increasing returns to scale systems are very different. For instance, multiple equilibria are possible, and arms control may have the potential to move the system from a high to low equilibrium. Increasing returns to scale increases the dominance of the dominant actor in its chosen technology, providing incentives for the non-dominant actors to choose alternative technologies such as terrorist attacks. Relating the equations to data Richardson evaluated the model through an examination of the growth in military expenditures, 1908–14, of the two belligerent alliances, the Entente and Central powers, prior to World War I. He took the observed exponential growth as an indication of support for his models. He interpreted x and y as measuring military expenditures, A and B as coalitions, and the relevant time period as 7 years. However, he noted that other conflicts were not preceded by arms races. As has been widely noted, e.g. by Gleditsch (1990: 9–10), there is an identification problem: quite different models can give observationally equivalent predictions. While one solution of the arms race model is exponential growth, exponential growth may equally well result from purely internal processes within each country, such as a military industrial complex, with no action-reaction component. 
Exponential growth may also result from both countries responding to a third country. Expectations further complicate the matter, as discussed in Dunne & Smith (2007). The empirical literature separated into a number of distinct tracks. One track looked at whether arms races, suitably defined, preceded conflicts, again suitably defined. Diehl (2020) reviews this track. Another track looked at estimating the Richardson equations directly to see whether they showed action-reaction features: significant coefficients on the arms of the other country. This was usually done from time series, though there are also some cross-section and panel papers looking at arms race interactions.

To estimate the Richardson equations directly from time-series data, various modifications were required. The equations had to be converted from continuous into discrete time, with corresponding judgements about the time-scale involved, how many lags were required, and the interval over which one might expect the parameters to be stable. Typically, the lagged dependent variable, arms in the previous period, is a very strong predictor of the current value. Specific measures had to be chosen for x and y. The logarithms of military expenditures and the shares of military expenditure in GDP were popular choices, but there were many other possibilities, including physical measures like the number of warheads. Even when using military expenditures, the estimates could be quite sensitive to other measurement issues, such as the choice of exchange rate used to make them comparable. Of course, expenditures are an input measure rather than an output, or capability, measure. Countries may differ in their efficiency: the amount of military expenditure required to achieve a particular level of capability. The equations are deterministic and had to be supplemented by a stochastic specification. Typically, 'well-behaved' error terms were added to the equations, but again there were many other possibilities, depending, for instance, on how one treated the endogeneity that resulted from the variables being jointly determined, serial correlation and heteroskedasticity. Supplemental variables might be added to control for other factors, e.g. GDP to allow for the budget constraint.

Given all these decisions, it could be difficult to judge what light these estimates threw on the Richardson equations. Firstly, as noted above, since Richardson provided little in the way of interpretation of the coefficients, it was not always clear whether the statistical estimates were consistent with his model or not. Secondly, there is the Duhem-Quine problem: any test involves a joint hypothesis. What is being tested is both the substantive hypothesis, the validity of the Richardson model in this case, and a set of auxiliary hypotheses, such as those about the choice of measure, dynamics and functional form. One never knows whether it is the substantive or the auxiliary hypotheses that have led to rejection. McKenzie (1990) discusses the Duhem-Quine problem in the context of the sociology of nuclear weapons technologies. The converse of this problem is that, since the Richardson model is not very specific, it allows great freedom for specification search over such things as measures for x and y, functional forms, dynamics, estimation methods, sample period and control variables included. This search can continue until one finds a specification that confirms one's prior beliefs.
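The discretisation step just described can be made concrete with a small sketch. The following Python fragment is illustrative only: the two series are simulated, not Richardson's data, and the data-generating parameters are invented. It fits the discrete-time analogue of one Richardson equation by ordinary least squares, where the coefficient on the rival's lagged arms is the action-reaction term and one minus the coefficient on the own lag plays the role of the fatigue parameter:

```python
import numpy as np

# Simulated series standing in for (log) military expenditures of two rivals.
rng = np.random.default_rng(0)
T = 60
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    # Hypothetical data-generating parameters, not estimates from any real arms race.
    x[t] = 0.05 + 0.80 * x[t - 1] + 0.15 * y[t - 1] + rng.normal(scale=0.02)
    y[t] = 0.04 + 0.75 * y[t - 1] + 0.20 * x[t - 1] + rng.normal(scale=0.02)

# Discrete-time Richardson-type equation for country A:
#   x_t = a0 + a1 * x_{t-1} + a2 * y_{t-1} + e_t
# a2 is the action-reaction coefficient; (1 - a1) corresponds to the fatigue term.
X = np.column_stack([np.ones(T - 1), x[:-1], y[:-1]])
a0, a1, a2 = np.linalg.lstsq(X, x[1:], rcond=None)[0]
print(f"grievance ~ {a0:.3f}, fatigue ~ {1 - a1:.3f}, reaction ~ {a2:.3f}")
```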
Despite these qualifications, most surveys of this literature, including Dunne & Smith (2007), conclude that there is limited time-series evidence for stable equations, of the Richardson type, describing the interaction of quantitative measures of military expenditure or capability. That article discusses the case of India and Pakistan, where there had been more evidence of a stable Richardson-type action-reaction process between constant-dollar military expenditures, 1962–97, but it seemed to have broken down after 1997, about the time both powers went nuclear. Empirical estimates of Richardson-type equations are sensitive to the choice of measure of military expenditure and to many aspects of specification, such as the other covariates included and the functional form used.

3.4 Other Arms Races

The arms race metaphor has spread beyond military interactions, and a comparison with its use in another area is revealing. We changed the title of Dunne & Smith (2007) from the one we had been given, 'The econometrics of arms races', to 'The econometrics of military arms races', because on putting the term 'arms races' into Google Scholar the top paper was Dawkins & Krebs (1979) 'Arms races between and within species', followed by many highly cited biological papers. The comparison between Dawkins & Krebs and Richardson is interesting both for the similarity in the process and the difference in approach: they are much more specific, much less metaphorical, than Richardson. They do not cite Richardson but have a very similar process in mind: 'An adaptation in one lineage (e.g. predators) may change the selection pressure on another lineage (e.g. prey), giving rise to a counter-adaptation. If this occurs reciprocally, an unstable runaway escalation or 'arms race' may result' (Dawkins & Krebs, 1979: 489).

They begin by using a military analogy and clarifying the time scales considered: 'Foxes and rabbits race against each other in two senses. When an individual fox chases an individual rabbit, the race occurs on the time scale of behaviour. It is an individual race, like that between a particular submarine and the ship it is trying to sink. But there is another kind of race, on a different time scale. Submarine designers learn from earlier failures. As technology progresses, later submarines are better equipped to detect and sink ships and later-designed ships are better equipped to resist. This is an 'arms race' and it occurs over a historical time scale. Similarly, over the evolutionary time scale the fox lineage may evolve improved adaptations for catching rabbits, and the rabbit lineage improved adaptations for escaping. Biologists often use the phrase 'arms race' to describe this evolutionary escalation of ever more refined counter-adaptations' (Dawkins & Krebs, 1979: 489–490). They cite use of the term arms race in a biological context in a 1940 biology paper, though, as noted above, the term arms race goes back to the 19th century. They are also specific about who is involved: 'In all of this discussion it is important to realize who are the parties that are 'racing' against one another. They are not individuals but lineages' (Dawkins & Krebs, 1979: 492). They distinguish between symmetric and asymmetric arms races, arguing that asymmetric arms races are more likely between species and symmetric ones within species, e.g. male-male competition for females. They propose the 'life-dinner principle': when a fox chases a rabbit, the fox is running for its dinner, the rabbit is running for its life.
Thus, the incentives and the evolutionary selection pressures on the rabbit are greater. This principle has obvious military analogies in cases such as Vietnam, where the weak defeat the strong because the weak have more at stake. They do not have any equations in the paper, but much of the work they cite, such as that by William D Hamilton and John Maynard Smith, is mathematical, often involving game theory, particularly evolutionarily stable games. Dawkins & Krebs are quite specific about the time scales, parties and mechanisms involved in their biological arms races. This is like Richardson's treatment of physical processes and statistics of deadly quarrels, but unlike his more metaphorical treatment of the mathematics of military arms races.

Military arms races are perceived as common and usually regarded as a bad thing. Wiberg (1990: 353) suggests that they are matters of concern because of the risk of war, the waste of resources, the threat to other states, and the danger that they can breed militarism. The two main tools we have for understanding the process have been game theory, particularly the Prisoner's Dilemma, and the Richardson model. The Richardson model has motivated much more empirical work than game theory, where papers tend to use illustrative historical examples to motivate the mathematics rather than to attempt to test the theory. The strength and the weakness of the Richardson arms race model were that it was not very specific. This was a strength in that, by giving the variables and actors different interpretations, it could be applied in a wide variety of contexts and prompt a range of interesting questions. It was a weakness in that it made it difficult to evaluate the theory. Richardson's models for the distribution of conflict statistics, power laws for size and Poisson distributions for frequency, were more like physical results and have been widely replicated. It may be that arms races, representing historically specific human decisions, are not subject to systemic regularities, so being prompted to ask the right questions is helpful in itself. As Gates, Gleditsch & Shortland (2016: 345) put it: 'Richardson's formal dynamic model of arms races may not be very useful as a description of the data or as an explanation of conflict – indeed, no decision to use force per se appears in the model. Still it is clear that it has helped move the field ahead and stimulate new research and interest in formal models of conflict.'

References

Beckmann, Klaus; Susan Gattke, Anja Lechner & Lennart Reimer (2016) A critique of the Richardson Equations. Economics Working Paper, Helmut Schmidt University (162).
Brito, Dagobert L & Michael D Intriligator (1999) Increasing returns to scale and the arms race: The end of the Richardson paradigm? Defence and Peace Economics 10(1): 39–54.
Dawkins, Richard & John R Krebs (1979) Arms races between and within species. Proceedings of the Royal Society of London, Series B, Biological Sciences 205(1161): 489–511.
Diehl, Paul F (2020) What Richardson got right (and wrong) about arms races and war. Ch. 4 in this volume.
Dunne, J Paul & Ron P Smith (2007) The econometrics of military arms races. Ch. 28 in: Todd Sandler & Keith Hartley (eds) Handbook of Defence Economics, 2. Amsterdam: Elsevier, 914–941.
Gates, Scott; Kristian Skrede Gleditsch & Anja Shortland (2016) Winner of the 2016 Lewis Fry Richardson Award: Paul Collier.
Peace Economics, Peace Science and Public Policy 22(4): 338–346.
Gleditsch, Nils Petter (1990) Research on arms races. Ch. 1 in: Nils Petter Gleditsch & Olav Njølstad (eds) Arms Races. London: Sage, 1–14.
Gleditsch, Nils Petter & Olav Njølstad (eds) (1990) Arms Races: Technological and Political Dynamics. London: Sage.
Intriligator, Michael D (1975) Strategic considerations in the Richardson model of arms races. Journal of Political Economy 83(2): 339–353.
Korner, Thomas W (1996) The Pleasures of Counting. Cambridge: Cambridge University Press.
Lanchester, Frederick W (1916) Aircraft in Warfare: The Dawn of the Fourth Arm. London: Constable.
MacKay, Niall (2020) When Lanchester met Richardson: The interaction of warfare with psychology. Ch. 9 in this volume.
Maiolo, Joseph (2016) Introduction. Ch. 1 in: Thomas Mahnken, Joseph Maiolo & David Stevenson (eds) Arms Races in International Politics: From the Nineteenth to the Twenty-first Century. Oxford: Oxford University Press, 1–10.
McKenzie, Donald (1990) Towards an historical sociology of nuclear weapons. Ch. 8 in: Nils Petter Gleditsch & Olav Njølstad (eds) Arms Races. London: Sage, 121–139.
Richardson, Lewis F (1919) The Mathematical Psychology of War. Oxford: Hunt.
Richardson, Lewis F (1960a) Arms and Insecurity: A Mathematical Study of the Causes and Origins of War. Pittsburgh, PA: Boxwood.
Richardson, Lewis F (1960b) Statistics of Deadly Quarrels. Pittsburgh, PA: Boxwood.
Senghaas, Dieter (1990) Arms race dynamics and arms control. Ch. 18 in: Nils Petter Gleditsch & Olav Njølstad (eds) Arms Races. London: Sage, 346–351.
Smith, Ron P (1987) Arms races. In: John Eatwell, Murray Milgate & Peter Newman (eds) The New Palgrave Dictionary of Economics, vol. I, A to D. London: Macmillan, 113–114.
Wiberg, Håkan (1990) Arms races, formal models and quantitative tests. Ch. 2 in: Nils Petter Gleditsch & Olav Njølstad (eds) Arms Races. London: Sage, 31–57.

© The Author(s) 2020. Open Access: this chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Author affiliation: Birkbeck, University of London, London, UK.

Smith, Ron P (2020) The Influence of the Richardson Arms Race Model. In: Gleditsch, Nils Petter (ed.) Lewis Fry Richardson: His Intellectual Legacy and Influence in the Social Sciences. Pioneers in Arts, Humanities, Science, Engineering, Practice, vol 27. Springer, Cham. https://doi.org/10.1007/978-3-030-31589-4_3
CommonCrawl
First direct observation of muon antineutrino disappearance (1104.0344) MINOS collaboration: P. Adamson, C. Andreopoulos, D. J. Auty, D. S. Ayres, C. Backhouse, G. Barr, M. Bishai, A. Blake, G. J. Bock, D. J. Boehnlein, D. Bogert, S. Cavanaugh, D. Cherdack, S. Childress, B. C. Choudhary, J. A. B. Coelho, S. J. Coleman, L. Corwin, D. Cronin-Hennessy, I. Z. Danko, J. K. de Jong, N. E. Devenish, M. V. Diwan, M. Dorman, C. O. Escobar, J. J. Evans, E. Falk, G. J. Feldman, M. V. Frohne, H. R. Gallagher, R. A. Gomes, M. C. Goodman, P. Gouffon, N. Graf, R. Gran, N. Grant, K. Grzelak, A. Habig, D. Harris, J. Hartnell, R. Hatcher, A. Himmel, A. Holin, C. Howcroft, X. Huang, J. Hylen, J. Ilic, G. M. Irwin, Z. Isvan, D. E. Jaffe, C. James, D. Jensen, T. Kafka, S. M. S. Kasahara, G. Koizumi, S. Kopp, M. Kordosky, A. Kreymer, K. Lang, G. Lefeuvre, J. Ling, P. J. Litchfield, L. Loiacono, P. Lucas, W. A. Mann, M. L. Marshak, N. Mayer, A. M. McGowan, R. Mehdiyev, J. R. Meier, M. D. Messier, W. H. Miller, S. R. Mishra, J. Mitchell, C. D. Moore, J. Morfín, L. Mualem, S. Mufson, J. Musser, D. Naples, J. K. Nelson, H. B. Newman, R. J. Nichol, T. C. Nicholls, J. A. Nowak, J. P. Ochoa-Ricoux, W. P. Oliver, M. Orchanian, R. Ospanov, J. Paley, R. B. Patterson, G. Pawloski, G. F. Pearce, D. A. Petyt, S. Phan-Budd, R. K. Plunkett, X. Qiu, J. Ratchford, T. M. Raufer, B. Rebel, P. A. Rodrigues, C. Rosenfeld, H. A. Rubin, M. C. Sanchez, J. Schneps, P. Schreiner, P. Shanahan, A. Sousa, P. Stamoulis, M. Strait, N. Tagg, R. L. Talaga, E. Tetteh-Lartey, J. Thomas, M. A. Thomson, G. Tinti, R. Toner, G. Tzanakos, J. Urheim, P. Vahle, B. Viren, A. Weber, R. C. Webb, C. White, L. Whitehead, S. G. Wojcicki, T. Yang, R. Zwaska Aug. 15, 2011 hep-ex This letter reports the first direct observation of muon antineutrino disappearance. The MINOS experiment has taken data with an accelerator beam optimized for muon antineutrino production, accumulating an exposure of $1.71\times 10^{20}$ protons on target. In the Far Detector, 97 charged current muon antineutrino events are observed. The no-oscillation hypothesis predicts 156 events and is excluded at $6.3\sigma$. The best fit to oscillation yields $\Delta \bar{m}^{2}=(3.36^{+0.46}_{-0.40}\textrm{(stat.)}\pm0.06\textrm{(syst.)})\times 10^{-3}\,\eV^{2}$, $\sin^{2}(2\bar{\theta})=0.86^{+0.11}_{-0.12}\textrm{(stat.)}\pm0.01\textrm{(syst.)}$. The MINOS muon neutrino and muon antineutrino measurements are consistent at the 2.0% confidence level, assuming identical underlying oscillation parameters. Measurement of the neutrino mass splitting and flavor mixing by MINOS (1103.0340) The MINOS Collaboration: P. Adamson, C. Andreopoulos, R. Armstrong, D. J. Auty, D. S. Ayres, C. Backhouse, G. Barr, M. Bishai, A. Blake, G. J. Bock, D. J. Boehnlein, D. Bogert, S. Cavanaugh, D. Cherdack, S. Childress, B. C. Choudhary, J. A. B. Coelho, S. J. Coleman, L. Corwin, D. Cronin-Hennessy, I. Z. Danko, J. K. de Jong, N. E. Devenish, M. V. Diwan, M. Dorman, C. O. Escobar, J. J. Evans, E. Falk, G. J. Feldman, M. V. Frohne, H. R. Gallagher, R. A. Gomes, M. C. Goodman, P. Gouffon, N. Graf, R. Gran, N. Grant, K. Grzelak, A. Habig, D. Harris, J. Hartnell, R. Hatcher, A. Himmel, A. Holin, X. Huang, J. Hylen, J. Ilic, G. M. Irwin, Z. Isvan, D. E. Jaffe, C. James, D. Jensen, T. Kafka, S. M. S. Kasahara, G. Koizumi, S. Kopp, M. Kordosky, A. Kreymer, K. Lang, G. Lefeuvre, J. Ling, P. J. Litchfield, R. P. Litchfield, L. Loiacono, P. Lucas, W. A. Mann, M. L. Marshak, N. Mayer, A. M. McGowan, R. Mehdiyev, J. R. Meier, M. 
D. Messier, D. G. Michael, W. H. Miller, S. R. Mishra, J. Mitchell, C. D. Moore, J. Morfín, L. Mualem, S. Mufson, J. Musser, D. Naples, J. K. Nelson, H. B. Newman, R. J. Nichol, J. A. Nowak, W. P. Oliver, M. Orchanian, R. Ospanov, J. Paley, R. B. Patterson, G. Pawloski, G. F. Pearce, D. A. Petyt, S. Phan-Budd, R. K. Plunkett, X. Qiu, J. Ratchford, T. M. Raufer, B. Rebel, P. A. Rodrigues, C. Rosenfeld, H. A. Rubin, M. C. Sanchez, J. Schneps, P. Schreiner, P. Shanahan, C. Smith, A. Sousa, P. Stamoulis, M. Strait, N. Tagg, R. L. Talaga, J. Thomas, M. A. Thomson, G. Tinti, R. Toner, G. Tzanakos, J. Urheim, P. Vahle, B. Viren, A. Weber, R. C. Webb, C. White, L. Whitehead, S. G. Wojcicki, T. Yang, R. Zwaska March 2, 2011 hep-ex Measurements of neutrino oscillations using the disappearance of muon neutrinos from the Fermilab NuMI neutrino beam as observed by the two MINOS detectors are reported. New analysis methods have been applied to an enlarged data sample from an exposure of $7.25 \times 10^{20}$ protons on target. A fit to neutrino oscillations yields values of $|\Delta m^2| = (2.32^{+0.12}_{-0.08})\times10^{-3}$\,eV$^2$ for the atmospheric mass splitting and $\rm \sin^2\!(2\theta) > 0.90$ (90%\,C.L.) for the mixing angle. Pure neutrino decay and quantum decoherence hypotheses are excluded at 7 and 9 standard deviations, respectively. TMVA - Toolkit for Multivariate Data Analysis (physics/0703039) A. Hoecker, P. Speckmayer, J. Stelzer, J. Therhaag, E. von Toerne, H. Voss, M. Backes, T. Carli, O. Cohen, A. Christov, D. Dannheim, K. Danielowski, S. Henrot-Versille, M. Jachowski, K. Kraszewski, A. Krasznahorkay Jr., M. Kruk, Y. Mahalalel, R. Ospanov, X. Prudent, A. Robert, D. Schouten, F. Tegenfeldt, A. Voigt, K. Voss, M. Wolter, A. Zemla July 7, 2009 physics.data-an In high-energy physics, with the search for ever smaller signals in ever larger data sets, it has become essential to extract a maximum of the available information from the data. Multivariate classification methods based on machine learning techniques have become a fundamental ingredient to most analyses. Also the multivariate classifiers themselves have significantly evolved in recent years. Statisticians have found new ways to tune and to combine classifiers to further gain in performance. Integrated into the analysis framework ROOT, TMVA is a toolkit which hosts a large variety of multivariate classification algorithms. Training, testing, performance evaluation and application of all available classifiers is carried out simultaneously via user-friendly interfaces. With version 4, TMVA has been extended to multivariate regression of a real-valued target vector. Regression is invoked through the same user interfaces as classification. TMVA 4 also features more flexible data handling allowing one to arbitrarily form combined MVA methods. A generalised boosting method is the first realisation benefiting from the new framework.
CommonCrawl
GIS-based approaches on the accessibility of referral hospital using network analysis and the spatial distribution model of the spreading case of COVID-19 in Jakarta, Indonesia

Florence Elfriede Sinthauli Silalahi (ORCID: orcid.org/0000-0001-9817-2190), Fahrul Hidayat (ORCID: orcid.org/0000-0002-8634-0641), Ratna Sari Dewi (ORCID: orcid.org/0000-0003-3396-2954), Nugroho Purwono (ORCID: orcid.org/0000-0002-8027-9712) & Nadya Oktaviani (ORCID: orcid.org/0000-0003-4563-2071)

BMC Health Services Research volume 20, Article number: 1053 (2020)

Abstract

The outbreak of the novel coronavirus (COVID-19) has spread rapidly, causing millions of confirmed cases, thousands of deaths, and economic losses. The number of COVID-19 cases in Jakarta is the largest in Indonesia. Furthermore, Jakarta is the capital city of Indonesia and has the densest population in the country. There is a need for geospatial analysis to evaluate demand in contrast to the capacity of referral hospitals and to model the spread of COVID-19 in order to support and organize an effective health service. We used data published by the local government for COVID-19 as a trusted available source. Using these verifiable observational data from the local government, we estimated the spatial pattern of the distribution of cases in order to estimate the growth of cases. We performed service area and Origin-Destination (OD) cost matrix analyses in support of the existing referral hospitals, and created a Standard Deviational Ellipse (SDE) model to determine the spatial distribution of COVID-19. We identified that more than 12.4 million people (86.7%), based on the distance-based service area, live in the well-served area of the referral hospitals. A total of 2637 positive-infected cases were identified, highly concentrated in West Jakarta (1096 cases). The results of the OD cost matrix within a range of 10 km show a total of 908 unassigned cases from 24 patient centroids, highly concentrated in West Jakarta. Our results indicate the need for additional referral hospitals specializing in the treatment of COVID-19 and provide a spatial illustration of the growth of COVID-19 cases in support of the implementation of social distancing in Jakarta.

Background

The outbreak of the novel corona virus (COVID-19) has rapidly spread as a global pandemic, causing millions of confirmed cases, thousands of deaths, and economic losses. The World Health Organization (WHO) assessed the risk of the spread of the novel corona virus COVID-19 as very high at the global level, with a total of 2,241,359 confirmed cases and 152,551 deaths reported on April 19, 2020. WHO has released operational planning guidelines to support country preparedness and response, guidance on the use of masks in communities, during home care, and in healthcare settings, and further advice for the public [1]. The first Indonesian case of COVID-19 was identified on March 2, 2020, in Depok, West Java. Later, the number of cases rapidly increased, resulting in 8607 cases at the national level, including 720 deaths, as of April 25, 2020 [2]. Depok is geographically adjacent to Jakarta, the capital city of Indonesia. The outbreak was identified in Jakarta on March 3, 2020, and its explosive spread followed human mobility patterns. Jakarta has a dense population and has transportation links, including airplanes, trains, interstate buses, and private transportation modes.
The number of COVID-19 cases in Jakarta is the largest in Indonesia, with the total number of confirmed COVID-19 cases at 3681, including 350 deaths, as of April 22, 2020 (Fig. 1).

Fig. 1 Graph depicting the increasing trend of COVID-19 cases between Jakarta and the national level (source: Provincial Government of DKI Jakarta [3])

The outbreak of COVID-19 has taken not just lives, but has also devastated workers' hours and earnings in Indonesia. The economic impact of COVID-19 in Indonesia is fundamentally affecting macro-economic stability and employment. The retail industry in seven cities has been affected by COVID-19, with the five most impacted being West and Central Jakarta, South Tangerang (Banten Province), and Depok and Bandung (West Java Province). The biggest decline, with a 32% fall in daily earnings per outlet, was recorded in West Jakarta. According to an International Labour Organization (ILO) report, the outbreak of COVID-19 is expected to wipe out 6.7% of working hours globally in the second quarter of 2020, equivalent to 195 million full-time workers [4].

COVID-19 primarily attacks the lungs and is transmitted through close contact. Essential health services are being confronted with rapidly increasing demand generated by COVID-19 cases, which grow every day. The Indonesian Ministry of Health has released a national list of 132 referral hospitals, through a decree of the Minister of Health of Indonesia number HK.01.07/MENKES/169/2020 on the establishment of referral hospitals for certain emerging infectious diseases, to optimize treatments and health services for infected patients covering all regencies and cities in Indonesia. To limit direct mortality and avoid increased indirect mortality from an outbreak, a prepared and well-organized health service must be provided by the Government of Indonesia to maintain equitable access to essential treatments. Health systems in several countries are now struggling to defeat COVID-19, and some of them are approaching collapse. When health systems are overwhelmed by high demand, both direct and indirect mortality rates increase dramatically. Many researchers are now competing to find solutions, for example a tool for estimating the capacity of a health system to manage COVID-19 cases per day in a certain catchment area [5].

The accessibility of referral hospitals and their capacities are of great importance for protecting the basic human right to health care and maintaining social stability [6, 7], especially in relation to the spread of infectious diseases, which can happen at any time and whose consequences are serious [8]. Government needs to maintain population trust in the health system to provide essential needs and to control infection risk [9,10,11]. In relation to the supply of healthcare facilities, a study found that critical bed capacity across Asia varied widely depending on country income, and that Indonesia has 2.7 critical beds per 100,000 population (the 8th lowest rank of 23 Asian countries) [12]. The escalation of positive COVID-19 cases has become a demand surge that is shocking healthcare facilities in this pandemic. Short-term demand forecasting is therefore needed, using data from around the world and considering population density [13].
To strengthen medical response and infection control practices, local or regional health offices should prepare a list of COVID-19 patients in their respective area or review the list of patients through the Public Health Emergency Operating Center (PHEOC). Indonesian Ministry of Health has issued a Decree number HK.01/07/MENKES/446/2020 on reimbursement procedures for hospitals treating COVID-19 cases and this is in line with a Decree of Minister of Health number HK.01.07/MENKES/413/2020 on the guidelines for prevention and control of COVID-19. Hospitals that can claim their services in handling COVID-19 patients are hospitals that provide certain emerging infectious disease services such as referral hospitals and other hospitals that have facilities for COVID-19 patients. The services include service administration, accommodation (rooms and services in the emergency room, inpatient rooms, intensive care rooms, and isolation rooms), doctor services, surgery, the use of ventilators, laboratory diagnostic and radiology according to medical indications, medical consumables, medication, medical devices including personal protective equipment, ambulance, human corpse handling and other health services according to medical indications. So, in order to get full service without worry for the payment, COVID-19 patients must get treated in hospitals that provide certain emerging infectious disease services such as referral hospitals [14, 15]. The health service criteria as per Decree number HK.01/07/MENKES/446/2020 are as follows, 1) criteria for outpatients: a) suspected patients with or without comorbidity should provide evidence of routine blood laboratory test and chest x-ray images. The chest x-ray is excluded for pregnant women and patients with certain medical conditions, including mental disorder and anxiety, as evidenced by a written statement from medical doctor, and b) COVID-19 confirmed patients with or without comorbidity should provide evidence of the results of rapid test-Polymerase Chain Reaction (PCR) laboratory test from hospitals or from other health care facilities; 2) criteria for inpatients: a) suspected patients who are sixty years old with or without comorbidity, patients who are younger than sixty years old with comorbidity, and patients with severe acute respiratory infection (ARI)/severe pneumonia who need hospital care and without other causes based on clinical analysis; b) probable patients, and; c) confirmed patients, includes asymptomatic confirmed patients, who do not have facilities for self-isolation at their residence or public facilities prepared by the Government as evidenced by a written statement from the head of the community health center (Puskesmas), confirmed patients without symptoms and with comorbidity, and confirmed patients with mild, moderate, or severe/critical symptoms; and d) suspected/ probable/ confirmed patients with co-incidence. The criteria for outpatients and inpatients apply to Indonesian citizens and foreign citizens, including health workers and workers who have contracted COVID-19 due to work, who are treated in hospitals in the territory of the Republic of Indonesia. Besides, the patient's identity must be proven by 1) for foreigners: passport, temporary stay permit (KITAS) or UNHCR identification number; 2) For Indonesian citizens: citizenship identification number (NIK), family card, or a written statement from the village; 3) For displaced persons (orang terlantar): a written statement from the social service. 
In the event that the identity documents cannot be provided, proof of identity can be a written statement on the patient's data signed by the head of the local health office and stamped by the local health office. The statement is requested by the hospital from the local health office. In the event that such documents cannot be provided, proof of identity can be a guarantee letter from the head of the hospital [15].

Besides the implementation of the Decree of the Minister of Health number HK.01/07/MENKES/446/2020 and the Decree of the Minister of Health number HK.01.07/MENKES/413/2020 at the national level, the Governor of DKI Jakarta has issued regulations to prevent the spread of COVID-19 by implementing large-scale social restrictions (PSBB – Pembatasan Sosial Berskala Besar) in DKI Jakarta through Regulation No. 33 of 2020 at the provincial level. The PSBB regulation has been implemented since April 9, 2020. The purposes of this regulation are to restrict certain activities and movements of people and/or goods to suppress the spread of COVID-19, to increase anticipation of the escalation of the spread of COVID-19, to strengthen health management efforts due to COVID-19, and to deal with the social and economic impacts of the spread of COVID-19.

Much of the literature has discussed the significant effects being caused by COVID-19, i.e., the limitations of and needs for improvement in infrastructure and utilities [7, 12, 16], community participation [6, 17], and human movement through tracking and movement modelling [9, 10, 18]. However, during this pandemic, it is important to have information on the condition of existing health care capacities, especially the referral hospitals for COVID-19 in Jakarta, in contrast to the demand, because the demand is growing day by day and there have been issues of COVID-19 patients being rejected by some local hospitals in other regions of Indonesia. In this research, it is therefore critical to conduct site selection, routing, and statistical analysis, not only to know the existing demand and referral hospital capacities, but also to predict the best routes from patients to all the possible referral hospitals within certain distances and/or reachable within certain times. In addition, it is important to have information on recommended additional hospital locations with their capacities, and to show which region has the highest predicted growth of COVID-19 cases according to a statistical model, so that the local government and local people strictly implement social distancing and the health protocol in their region. Most death cases of COVID-19 were identified as being accompanied by at least one underlying comorbidity in patients, such as hypertension, diabetes mellitus, and respiratory diseases such as asthma and chronic obstructive pulmonary disease [19].

To make a cautious analysis of service areas under certain distance and travel-time assumptions, in contrast to the supply of healthcare facilities in our study area, each type of analysis that we performed was carried out using a geospatial approach, particularly geographical information system (GIS) network analysis and modelling. A common task in spatial analysis is to estimate travel time and distance from a set of origins to destinations in a network. We used the Origin-Destination (OD) cost matrix, which searches a feasible solution space and meets demand flow estimation with a definite constraint [18, 20,21,22,23], as we discuss in the methods section.
Besides, we used the SDE model as a long-serving technique for analysing the concentration tendency of the features of concern, their orientation, dispersion trends, and distribution differences, delineating the geographic distribution of the features of concern, and providing a comparable estimate of individuals' activity spaces [24,25,26,27]. Our paper reports on the growth of COVID-19 cases in Jakarta, Indonesia, and our objectives are as follows: 1) to introduce the geographic distribution of COVID-19 based on GIS spatial technology; 2) to recommend the nearest healthcare facilities by service area and OD cost matrix; and 3) to create an SDE model of the study area.

Jakarta is the capital city of Indonesia, located on Java Island, and has the densest population in the country. The central point of the area is at geographical coordinates 6°12′29″ S and 106°50′20″ E. As the state capital, Jakarta has a special status and special autonomy granted under Law of Indonesia Number 29 Year 2007. Administratively, the area of Jakarta is divided into five municipalities and one district administration [28]. The administrative regions below them are divided into 44 subdistricts and 267 administrative villages (Fig. 2). Its economic structure in 2018 was dominated by the wholesale and retail trade and repair of motor vehicles and motorcycles sectors, which reached 16.93% of the total Gross Regional Domestic Product of the DKI Jakarta Province. The total number of workers in Jakarta was 25,121 people in 2017, not including job seekers who come from, for instance, Depok, Bogor, Tangerang, and Bekasi, which are regions adjacent to Jakarta. Its population density in 2017 was 15,663 people/km2, with a population growth rate of 0.94% per year. West Jakarta has the highest population density, at 19,516 people/km2 [28].

Fig. 2 The 2014 topographic map of DKI Jakarta published at scale 1:25,000, obtained from the Indonesian Geospatial Information Agency

The study began with the compilation of feature datasets such as road lines, administrative boundaries, and hospital and settlement layers from the topographic map of Jakarta at a scale of 1:25,000, and the 2015 population density dataset with a spatial resolution of 100 × 100 m2 from the Indonesian National Board for Disaster Management [29]. All the data are public domain, provided by the Government of Indonesia. A total of 261 village centroids representing the positions of observed COVID-19 patients were mapped in the area as of March 22, 2020. The local government used the centroid of the administrative regency to show the series of confirmed cases of COVID-19. The centroid, or geometric centre, is a point feature representing vector data (multipoint, line, and area features). The centroid is located inside, or contained by the bounds of, the input feature. Since Jakarta is a large urban city, the geometric centres are fairly evenly distributed because open areas are limited. We used the centroids of villages as patient origins, based on the positive-infected cases reported in each village. In Indonesia, there is a restricted protocol for health data publication and access to the data is limited. Therefore, in order to make this research represent the actual conditions, we used the centroids of villages in Jakarta as the origins, with updated attribute data on each centroid as of April 16, 2020. In this research, the majority of our maps were obtained from data analysis using the Network Analyst tool extension in ArcGIS 10.5 and the Spatial Statistics toolbox in ArcGIS Pro 2.5.
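As a small illustration of the centroid step described above (not the authors' actual workflow; the polygons, names and case counts below are invented), village centroids can be derived from an administrative polygon layer with GeoPandas after projecting to a metric CRS:

```python
import geopandas as gpd
from shapely.geometry import Polygon

# Two invented village polygons standing in for the administrative boundary layer.
villages = gpd.GeoDataFrame(
    {"village": ["Village A", "Village B"], "positive_cases": [12, 7]},
    geometry=[Polygon([(0, 0), (0, 1), (1, 1), (1, 0)]),
              Polygon([(1, 0), (1, 1), (2, 1), (2, 0)])],
    crs="EPSG:32748",  # a metric CRS (UTM zone 48S, which covers Jakarta)
)

# Reduce each village to its geometric centre; the point keeps the case-count attribute.
villages["geometry"] = villages.geometry.centroid
print(villages)  # each row is now a single origin point for the network analyses
```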
We used ArcGIS Desktop Advanced 10.5 (concurrent use) and ArcGIS Pro Advanced under subscription ID 2422920474, owned and licensed by the Geospatial Information Agency of Indonesia. We obtained the daily series of confirmed cases of COVID-19 in Jakarta, from the first case identified on March 3, 2020, to April 16, 2020, which are publicly available from the Provincial Government of DKI Jakarta [3]. The total number of confirmed cases as of April 22, 2020, is 3681 positive-infected cases, with 1947 patients hospitalized, 334 patients discharged, 1050 cases in self-quarantine, and 350 patients deceased. In our research, we focussed only on analysing the positive-infected cases. The total number of confirmed and positive-infected cases as of April 16, 2020, as well as the classification of cases by gender and age, is presented in Table 1.

Table 1 The total number of positive-infected cases as of April 16, 2020, as well as the distribution by gender and age [3]

The number of COVID-19 cases in Jakarta quickly ascended following an exponential growth trend. We obtained a graph visualizing the data accumulation for the overall COVID-19 cases in Jakarta from March 1, 2020, to April 17, 2020 (Fig. 3).

Fig. 3 Graph depicting data accumulation for overall COVID-19 cases in Jakarta based on the numbers of recovered, deceased, self-quarantined, and hospitalized cases reported (source: [3])

In addition, we collected referral health care facility information as data attributes containing related information about infrastructure, such as bed allocation for isolation rooms, bed allocation for the Emergency Department, and the capacities of Intensive Care Unit (ICU), Paediatric Intensive Care Unit (PICU), Neonatal Intensive Care Unit (NICU), Intensive Coronary Care Unit (ICCU), and High Care Unit (HCU) facilities. A total of eight referral hospitals were located in Jakarta (Fig. 4), with detailed capacities as attached in Additional file 1. We also collected information related to health workers involved in COVID-19 control programs, such as specialists and doctors, as well as other health workers who have been working to assist in the COVID-19 response [31]. Figure 5 represents the general framework of this research, including data collection, spatial analysis, the output, and recommendations for COVID-19 cases in Jakarta.

Fig. 4 Locations of referral hospitals for COVID-19 and other hospitals and community health centres in DKI Jakarta (source: Indonesian Ministry of Health [30])

Fig. 5 Flowchart of this research process

Descriptions of each step are as follows:

Network dataset creation

Networks are a specific type of vector data primarily composed of edges, junctions, and nodes. Network datasets are used to choose paths and are commonly used to model transportation networks. Transportation networks are used for modelling interactions such as allocating demand to facilities and modelling travel to facilities [22, 32, 33]. Many methods have been used to model the spatial accessibility of facilities, including allocating demand to those facilities, such as ArcGIS Network Analyst [34, 35], Frequency Ratio and Analytical Hierarchy Process [36], the two-step floating catchment model [37, 38], and Space Syntax and location-based methods [39].
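To make the idea of a routable network dataset concrete, here is a minimal sketch in Python with networkx rather than the ArcGIS tools used in the study; all node labels, segment lengths and road classes are invented. Junctions are nodes, road segments are attributed edges, and a shortest network path is the basic operation behind routing, service areas and demand allocation:

```python
import networkx as nx

# Toy road network: junctions as nodes, road segments as edges with attributes.
G = nx.Graph()
G.add_edge("A", "B", length_m=1200, road_class="arterial")
G.add_edge("B", "C", length_m=800,  road_class="collector")
G.add_edge("C", "D", length_m=1500, road_class="local")
G.add_edge("B", "D", length_m=2500, road_class="highway")

# Shortest network path by length from junction A to junction D.
print(nx.shortest_path(G, "A", "D", weight="length_m"))
print(nx.shortest_path_length(G, "A", "D", weight="length_m"))
```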
Network analysis derived from route network data, and the delineation of population catchment areas and referral hospitals, using geographic information systems (GIS) on a travel-time and/or distance basis, is commonly and effectively used to assess the spatial accessibility of healthcare services [34, 36,37,38]. A study on the relationship between active mobility activities, such as walking, cycling and active transport journeys, and environmental factors and patterns of social interaction using spatial models showed that intersection density and land use are strongly associated with active and public transport accessibility; accessibility is also improved by infrastructure that supports active travel and by regulation that limits vehicular speed [36]. For this research, we performed an analysis on our road network dataset from the topographical map of Jakarta provided by the Geospatial Information Agency, using the ArcGIS Network Analyst extension. We also created two scenarios for service area delineation, and we used not only a finite-size buffer ring around each centroid but also an OD cost matrix for modelling the spatial distribution of travel demand (next step). This road feature dataset has limitations in providing the data requirements of network elements for a transportation network, such as one-way streets, link impedances, turn impedances, underpasses, and overpasses. For example, for CurbApproach we did not specify a side, so we permitted the vehicle to depart from the origin on either side. Therefore, we had to make assumptions when performing our service area analysis.

Service area delineation

After the road network dataset of road lines was built, we generated service areas based on two scenarios to show the locations which can be reached within a certain distance or time. Using different distance or time approaches will influence the optimal service areas to be developed. In fact, both the distance-based and the time-based models for creating service areas are interrelated in defining the quality of health services in certain locations [40, 41]. By using the distance-based scenario, we aimed to evaluate healthcare services and determine the impact of distance on the emergency condition of patients, since increased distance might mean less opportunity to survive certain illnesses in emergency situations [42]. However, using only distance in developing the service area tends to produce less optimal results due to possible traffic jams or local traffic patterns [41]. We defined the referral hospital locations as facilities, or points on a map, as the starting points (points of origin) from which service areas would be developed. This analysis helps us to evaluate coverage and accessibility based on how much of something is within a referral hospital's neighbourhood or region. Service areas are important for evaluating healthcare services, accessibility, and population-based health indicators for disease burden [43]. The method that we used for the service area analysis is network-based [44]. We set two scenarios to develop the service areas, namely: 1) a distance-based service area of 5 and 10 km (distance scenario); and 2) a time-based service area (driving scenario) using time parameters derived from distance (geometry) and speed data adopted from the Indonesian Ministry of Transportation regulation [45, 46].
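Building on the toy network from the previous sketch, the time-based scenario can be illustrated as follows. This is not the ArcGIS workflow; the 20 km/h value mirrors the congested-speed assumption described in the next paragraph, while the 80 km/h highway speed and the graph itself are invented placeholders:

```python
import networkx as nx

# Assumed driving speeds (km/h) per road class; 20 km/h reflects heavy congestion.
SPEED_KMH = {"highway": 80, "arterial": 20, "collector": 20, "local": 20, "other": 20}

def add_travel_time(G):
    # Convert each segment's length and class-specific speed into minutes of driving.
    for u, v, data in G.edges(data=True):
        speed = SPEED_KMH.get(data.get("road_class", "other"), 20)
        data["minutes"] = data["length_m"] / 1000.0 / speed * 60.0

def service_area(G, facility, cutoff_minutes):
    # All junctions reachable from the facility within the driving-time cut-off.
    reachable = nx.single_source_dijkstra_path_length(
        G, facility, cutoff=cutoff_minutes, weight="minutes")
    return set(reachable)

G = nx.Graph()
G.add_edge("A", "B", length_m=1200, road_class="arterial")
G.add_edge("B", "C", length_m=800, road_class="collector")
G.add_edge("B", "D", length_m=2500, road_class="highway")
add_travel_time(G)
print(service_area(G, "A", cutoff_minutes=5))  # 5-minute drive-time service area
```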
Since most COVID-19 deaths occurred in patients with at least one underlying comorbidity, requiring quick treatment and response, we adopted distance-based service areas of 5 and 10 km and driving-time-based service areas of 5, 10 and 15 min. Some adjustments were made because the road classes in the regulation differ from the road classes provided by the topographic map. Table 2 shows the speed for each road class in Indonesia based on the Indonesian Ministry of Transportation regulation, while Table 3 presents the speed adjustments we made for each road class in this research. Here, we modified the speeds to include the effect of traffic on driving speed, given the severe congestion in Jakarta. Following research conducted by the Directorate General of Land Transportation [46], we set the driving speed to 20 km/h for arterial, collector, local and other roads; we assumed that there is no congestion on the highway. Table 2 Driving speed details for the road classes in Indonesia used to develop the service area of each referral hospital [45] Table 3 Justification of the driving speed chosen for each road class provided in the topographic map Origin-destination cost matrix creation Next, we performed an origin-destination (OD) cost matrix analysis to create a matrix of network distances and to assign the locations of confirmed COVID-19 patients to the nearest referral hospital. The OD cost matrix was chosen because it has been used to find solutions, within a feasible solution space, that are close to the real matrix and that meet demand-flow constraints, for example in traffic planning, management and control [18, 20,21,22,23]. The OD cost matrix is a traditional and widely used approach for modelling the spatial and temporal distribution of travel demand [18, 23], aiming to find a solution that satisfies traffic flow constraints and is close to the real matrix within the feasible solution space [21, 22]. We performed the OD cost matrix analysis using the Network Analyst extension in ArcGIS Pro 2.5 with a logistics service hosted by ArcGIS Online. This logistics service provided a high-quality solver based on a worldwide network dataset stored in the ArcGIS Online cloud [47]. After the network dataset was added to the project, we defined settings and constraints as network parameters [33]: 1) mode, which refers to the type of transportation and to distance versus time; here we used rural driving distance; 2) cut-off, the maximum time or distance within which the analysis searches for destinations.
In this research, we set the cut-off to 10 km from each origin and searched for destinations without barriers; 3) origins: we used 261 village centroids as patient origins, based on the positive-infected cases reported in each village, with the attribute data of each centroid updated as of April 16, 2020; 4) destinations: the number of destinations to find can be specified, but we did not set one, so the output gives the travel costs of the paths from each origin to every possible destination; 5) search tolerance: we used 5 m to locate the features on the network, so features outside the search tolerance are left unlocated; 6) CurbApproach: we permitted the vehicle to depart from the origin on either side; and 7) output: we chose straight lines to represent the resulting paths along the network. The OD cost matrix solver output is represented as straight lines on the map, but the values stored in the line attribute table reflect the network distance from origins to destinations. The OD cost matrix output is more appropriate than a straight-line cost, and it often becomes input for other spatial analyses [48]. The matrix output is a table listing the total impedance (which could be distance, time, or money) of the shortest network path between each centroid of COVID-19 patients (origins) and each referral hospital as a demand point (destinations). The distance stored in the origin-destination cost matrix is calculated over the street network, and each demand point is assigned to the facility to which it is closest. Additional service area delineation An additional service area was developed to find the most accessible hospitals that can be reached by unassigned patients or patients who live in rural areas. In this case, we developed 5, 10 and 15 min time-based service areas from the unassigned patients (as origin points). Next, we identified more accessible hospitals located within these 5, 10 and 15 min bands. We gave priority to hospitals located within 5 min, assuming that a less accessible hospital increases the cost to patients [49]; however, the capacity of the hospitals was also part of our consideration. Standard Deviational Ellipse (SDE) modelling We used the SDE model to investigate the spatial distribution, in particular the dispersion pattern and directional changes, of COVID-19 cases in Jakarta. We employed the SDE tool in ArcGIS to better understand the geographical aspects of the COVID-19 phenomenon and to identify the causes of this event from its specific geographic patterns, in particular through elliptic spatial scan statistics [24]. In this research, the SDE model for a given area is depicted by the gradient colour or tone of the ellipses, with brighter colours indicating more recent data. The characteristic pattern of events illustrates their central tendency toward a certain direction, with the SDE axis appearing skewed toward the direction of the distribution. The SDE model depicts a standard deviation ellipse on the X and Y axes of the feature locations, centred on the geometric mean value (mean centre) of the features.
The SDE formula was first suggested by Lefever [50] and corrected in subsequent publications [51], so that it is now generally stated as: $$ {SDE}_X=\sqrt{\frac{\sum_{i=1}^n{\left(x_i-\overline{X}\right)}^2}{n}},\qquad {SDE}_Y=\sqrt{\frac{\sum_{i=1}^n{\left(y_i-\overline{Y}\right)}^2}{n}} $$ where $x_i$ and $y_i$ are the coordinates of feature $i$, $\{\overline{X},\overline{Y}\}$ is the mean centre of the features in the Cartesian coordinate system, and $n$ is the total number of features. The mean centre is essential for understanding the average location of many point distributions [27, 50,51,52,53]. To measure the dispersion of the original observations, the major and minor axes of the SDE can be determined as follows: $$ {\sigma}_x=\sqrt{2}\sqrt{\frac{\sum_{i=1}^n{\left(\tilde{x}_i\cos\alpha-\tilde{y}_i\sin\alpha\right)}^2}{n}},\qquad {\sigma}_y=\sqrt{2}\sqrt{\frac{\sum_{i=1}^n{\left(\tilde{x}_i\sin\alpha+\tilde{y}_i\cos\alpha\right)}^2}{n}} $$ Furthermore, for weighted data an additional shape equation is used to evaluate the ellipse direction [52, 53]. The angle of rotation is calculated as: $$ \tan\alpha=\frac{p+q}{r} $$ in which $$ p=\sum_{i=1}^n\tilde{x}_i^2-\sum_{i=1}^n\tilde{y}_i^2,\qquad q=\sqrt{\left(\sum_{i=1}^n\tilde{x}_i^2-\sum_{i=1}^n\tilde{y}_i^2\right)^2+4\left(\sum_{i=1}^n\tilde{x}_i\tilde{y}_i\right)^2},\qquad r=2\sum_{i=1}^n\tilde{x}_i\tilde{y}_i $$ where $\tilde{x}_i$ and $\tilde{y}_i$ are the deviations of the x- and y-coordinates from the mean centre (see Fig. 6). An ellipse rotated through the clockwise angle (modified from Wang et al. [27]) To depict the dispersion of COVID-19 cases over a 23-day period across the five regions of DKI Jakarta, the SDE provides descriptive statistics of the features around their centre. The ArcGIS implementation assumes that the features follow a spatially normal distribution, and the ellipse is mainly determined by three measures: average location, dispersion (or concentration) and orientation [27]. Service area of referral hospitals Identifying the movement and transport of people, disease, or goods, along with migration patterns and commuting behaviour, is a challenging spatial visualization problem. Physical time and space are the two dimensions that define a function of moving objects in transportation. We used GIS to process our maps and models; it is not only well suited to building a transportation model, but also provides a capable environment for the management, analysis, and visualization of spatial data, e.g. for integrating various data sources [32]. Figure 7 shows the overall distance-based service areas developed for the eight referral hospitals. Under the 5 km scenario (dark purple), the referral hospitals can cover only about half of Jakarta Province. Hence, we extended the coverage of the hospitals to 10 km (light purple). On the one hand, the new coverage included more people; on the other hand, it resulted in overlapping service areas because of the relatively close positions of the referral hospitals.
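For concreteness, the SDE quantities defined above (mean centre, rotation angle and axes) can be computed with a short NumPy routine. The sketch below uses hypothetical coordinates, omits the $\sqrt{2}$ factor and the two-standard-deviation magnification applied in the study, and follows one common sign convention for the rotation (conventions differ between references); it is an illustration, not the ArcGIS Directional Distribution tool used by the authors.

```python
import numpy as np

def standard_deviational_ellipse(x, y):
    """Mean centre, rotation angle (degrees) and axes of a standard deviational ellipse."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mean_x, mean_y = x.mean(), y.mean()          # mean centre
    dx, dy = x - mean_x, y - mean_y              # deviations from the mean centre

    # Rotation angle: tan(alpha) = (p + q) / r, as in the equations above.
    p = (dx**2).sum() - (dy**2).sum()
    q = np.sqrt(p**2 + 4.0 * (dx * dy).sum()**2)
    r = 2.0 * (dx * dy).sum()
    alpha = np.arctan2(p + q, r)

    # Axes from the projected deviations (no sqrt(2) factor or 2-StDev scaling here).
    sigma_x = np.sqrt(((dx * np.cos(alpha) - dy * np.sin(alpha))**2).mean())
    sigma_y = np.sqrt(((dx * np.sin(alpha) + dy * np.cos(alpha))**2).mean())
    return (mean_x, mean_y), np.degrees(alpha), sigma_x, sigma_y

# Example on an elongated synthetic point cloud.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0], [0.0, 1.0]])
print(standard_deviational_ellipse(pts[:, 0], pts[:, 1]))
```

Scaling both axes by a chosen number of standard deviations (two in this study) enlarges the ellipse to cover roughly 95% of the points under the normality assumption.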
The other main drawback of the distance-based scenario was that the time needed for patients to reach the referral hospitals remained unknown. Furthermore, previous studies have indicated that it is important to include actual traffic conditions in the time element, such as speed limits, traffic jams, and one-way streets [54], particularly for emergency services for COVID-19 patients, for example patients experiencing shortness of breath. Graph depicting the service areas of the COVID-19 referral hospitals in Jakarta based on distances of 5 and 10 km (source: own elaboration, 2020) To overcome this drawback, Fig. 8 presents the second scenario, which incorporates a time component in developing the service areas of the referral hospitals. For the second scenario, the calculated service area of each referral hospital was divided into three categories: 0-5, 5-10, and 10-15 min. From Fig. 8, we can see that all of Central Jakarta and most of South Jakarta lie in areas well served by the referral hospitals. In contrast, 55 villages, representing 21% of the total number of villages in Jakarta, were not covered by the referral hospitals. These villages were located in North, East and West Jakarta. This implies that the local government needs to designate more referral hospitals to cover people who live far from the existing ones. Graph depicting the service areas of the COVID-19 referral hospitals in Jakarta based on driving times of 5, 10 and 15 min (source: own elaboration, 2020) To further evaluate the effectiveness of the chosen distance-based and time-based scenarios, we superimposed the service areas on the population data to obtain the number of people living in each service area. Of the approximately 14.3 million people living in Jakarta, more than 12.4 million (86.7%) live in the well-served area of the referral hospitals under the distance-based service area. Meanwhile, under the time-based service area, fewer than 9.4 million people (65.8%) live in the catchment, as can be seen in Table 4. Table 4 Number of people living in each service area of the referral hospitals The information in Table 4 gives only a general picture of the number of people served by the referral hospitals. We also need to evaluate the capacity of the existing referral hospitals with respect to the number of positive-infected COVID-19 patients. To do so, we superimposed the service areas on the positive-infected COVID-19 cases, as shown in Table 5. The term "unserved area" in Table 5 is defined as the area outside the service areas of the referral hospitals. Using both scenarios in Table 5, we can see that the areas best served by the hospitals were mainly located in Central and South Jakarta. Table 5 Number of positive-infected COVID-19 cases in the served and unserved areas of the referral hospitals OD matrix analysis results To evaluate the accessibility of patients to the referral hospitals, we used the OD matrix method, as shown in Fig. 9. The OD cost matrix table with a cut-off of 10 km has 549 rows with information on the paths to each of the eight referral hospitals. The matrix shows the total travel time, the distances from shortest to longest, and the destination rank from each origin to each destination. Based on this matrix, we derived detailed information on the services that each referral hospital would be required to provide, as shown in Table 6.
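The kind of origin-destination table summarized in Table 6 can be sketched on a toy road graph as below: network distances are computed from each origin with a 10 km cutoff, and the reachable hospitals are ranked by distance. The graph, names and distances are hypothetical, and this is not the ArcGIS Online logistics solver used in the study.

```python
import networkx as nx
import pandas as pd

# Toy road graph with edge weights in network kilometres (all values made up).
G = nx.Graph()
G.add_weighted_edges_from([
    ("village_1", "x", 2.0), ("x", "hosp_A", 3.0), ("x", "hosp_B", 9.0),
    ("village_2", "x", 6.5), ("village_2", "hosp_B", 4.0),
], weight="km")

origins = ["village_1", "village_2"]
destinations = ["hosp_A", "hosp_B"]
cutoff_km = 10.0

rows = []
for o in origins:
    # Network distances from this origin, truncated at the 10 km cutoff.
    dists = nx.single_source_dijkstra_path_length(G, o, cutoff=cutoff_km, weight="km")
    reachable = [(d, dists[d]) for d in destinations if d in dists]
    # Rank reachable destinations by network distance (1 = closest). Origins with
    # no reachable destination produce no rows, i.e. they remain "unassigned".
    for rank, (d, km) in enumerate(sorted(reachable, key=lambda t: t[1]), start=1):
        rows.append({"origin": o, "destination": d, "km": km, "rank": rank})

od_matrix = pd.DataFrame(rows)
print(od_matrix)
```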
Graph depicting the OD cost matrix from the centroids of positive-infected COVID-19 patients (origins) to the referral hospitals (destinations) in Jakarta within 10 km (source: own elaboration, 2020) Table 6 Total COVID-19 cases assigned to referral hospitals based on the OD matrix analysis From Table 6, we can see that two hospitals in Central Jakarta were assigned to receive positive-infected COVID-19 patients from various villages within the range of the OD cost matrix. These hospitals were RSPAD Gatot Subroto and RSAL Mintoharjo, which served 264 and 319 first-rank cases of COVID-19 from 107 and 111 villages, respectively. Meanwhile, RSUP Fatmawati in South Jakarta was assigned to serve the largest number of first-rank cases (462 patients) from 42 villages. In contrast, in terms of total positive-infected cases, RSUD Pasar Minggu was assigned to serve the highest number of positive-infected patients, especially third-rank cases, from 63 villages. From the results, we also found that a total of 2637 positive-infected cases were concentrated in West Jakarta (1096 cases), followed by South Jakarta (715 cases), East Jakarta (414 cases), Central Jakarta (238 cases) and North Jakarta (174 cases) as of April 16, 2020. In addition, there were 908 unassigned cases from 24 patient centroids, highly concentrated in West Jakarta, within the 10 km range (see Table 7). We found the highest number of cases in the village of Jelambar Baru, West Jakarta, with 774 positive-infected cases; this patient centroid had not been assigned to any destination (referral hospital). Of the 1096 positive-infected cases in West Jakarta, 798 cases (72.81%) were unassigned. Moreover, there is only one referral hospital in this area (RSUD Cengkareng), and it could not meet the demand (see Fig. 10). The second-highest count was 385 cases from origin 11, Bintaro, South Jakarta; within the 10 km OD matrix range this patient centroid was assigned to destinations 62 (RSUP Fatmawati) and 65 (RSUD Pasar Minggu). Meanwhile, we found one case at Petukangan Utara, South Jakarta, which lay outside the OD cost matrix range. In addition, the combined capacity of RSUP Fatmawati and RSUD Pasar Minggu could not meet the demand. Table 7 Number of positive-infected COVID-19 cases versus bed allocations and unassigned cases in each district Fig. 10 Graph depicting demand in contrast to the capacity of the referral hospitals and the unassigned positive-infected cases generated from the OD cost matrix in Jakarta within a 10 km radius of the patient centroids Additional hospitals The COVID-19 outbreak has left many countries, including Indonesia, struggling with insufficient accessibility and long waiting times at emergency departments and primary healthcare centres. Rising attendance at emergency departments with acute admissions, while we are confronting COVID-19, has adverse impacts and exposes patients unnecessarily to the risks of hospital admission. Crowding at an emergency department can create risks for patients by causing delays in transport and treatment that affect patient mortality [6, 16]. Thus, by mapping the capacity of referral hospitals and additional hospitals against the case distribution, we aim to underline the importance of applying a geographical or spatial perspective in support of primary healthcare research and practice.
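A minimal sketch of the capacity-versus-demand comparison behind Tables 6, 7 and 8 is given below, using made-up hospital names and numbers rather than the study's data; it simply flags hospitals whose assigned cases exceed their bed allocations, whose overflow would then be considered for the alternate hospitals identified by the additional service areas described below.

```python
import pandas as pd

# Hypothetical bed allocations and assigned case counts (not the study's data).
hospitals = pd.DataFrame({
    "hospital": ["hosp_A", "hosp_B", "hosp_C"],
    "isolation_beds": [120, 60, 45],
})
assigned = pd.DataFrame({
    "hospital": ["hosp_A", "hosp_B", "hosp_C"],
    "assigned_cases": [150, 35, 90],
})

summary = hospitals.merge(assigned, on="hospital")
summary["shortfall"] = (summary["assigned_cases"] - summary["isolation_beds"]).clip(lower=0)
summary["exceeds_capacity"] = summary["shortfall"] > 0
print(summary)
# Hospitals with a positive shortfall are candidates for extra bed allocations,
# or their overflow demand can be redirected to nearby alternate hospitals.
```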
Based on what we have explained above, this should lead to the most equitable and accessible arrangements possible in, by, and for communities to withstand the outbreak [17]. Figure 11 shows the additional service areas developed from the 24 unassigned patient centroids. From the results, we can see that, in general, there were five clusters of additional hospitals. We grouped the hospitals into clusters because the service areas were developed on the basis of travel time, so patients could be assigned to hospitals in neighbouring areas. Additional service areas developed for unassigned positive-infected COVID-19 patients using time-based service areas with 5, 10, and 15 min travel time from the patient centroids as origins (source: own elaboration, 2020) To choose which hospitals could be suggested as additional hospitals for the unassigned COVID-19 patients, we superimposed the additional hospitals located within the 5, 10, and 15 min coverage of the additional centroids. First priority was given to the hospitals located within 5 min of the centroid of unassigned positive-infected patients, and we evaluated the capacity of those hospitals as in Table 8. However, if no suitable hospital was available within that band, or its capacity did not meet the required capacity, we extended our observation to the second band, 10 min from the origin, and so on. Table 8 The bed allocations of each alternate hospital in contrast to the unassigned patients Information on the bed allocations of each alternate hospital is given in Table 8. In cluster 1, the two alternate hospitals were very far from being able to meet the demand. Similarly, in cluster 3, RS Islam Jakarta Timur could not provide the required bed allocations. For the other clusters, in this period, the alternate hospitals could provide the services needed for the unassigned COVID-19 patients. However, this situation needs to be updated because of the dynamics of COVID-19 cases in Jakarta. When developing new hospitals is not an option, the Provincial Government of DKI Jakarta needs to allocate more health equipment to each alternate hospital; otherwise, it needs to identify other nearby healthcare facilities that can be prepared to provide services to COVID-19 patients. The shape of the study region, as the parent distribution, is an important factor that may affect the shape and orientation of a point distribution and its ellipse. If the point set is distributed over only part of the base area, the boundary shape is likely to have little effect; however, calculations for point distributions that cover a whole sample area must take the shape of that region into consideration [53]. In this study, we set the distribution area to the districts of DKI Jakarta Province. In accordance with several trials we carried out, we specified two standard deviations (2 StDev) as the adjustment factor in order to generate ellipses containing 95% of the data points. The results of the SDE model for the confirmed COVID-19 cases in each district of DKI Jakarta Province show various elliptical geometry values. These ellipses represent where COVID-19 was occurring most severely. The spread of the cases is illustrated by ellipses encompassing the distribution of the features, each with a particular orientation. Their sizes and directions vary between districts (see Fig. 12), underlining the geographic distribution of COVID-19 cases in these areas.
Evidently, most of the confirmed COVID-19 cases occurred in Central Jakarta and the nearby area, shown by the yellow ellipse as the centre of the distribution, since this district lies in the area where the other ellipses overlap most. Furthermore, these confirmed COVID-19 cases appeared in densely populated areas such as Kemayoran and Petamburan villages in Central Jakarta (360 people/ha), Matraman in East Jakarta (500 people/ha), Duri Kepa in West Jakarta (540 people/ha), and Pademangan Barat in North Jakarta (550 people/ha). This is likely related to population movement, since Jakarta is densely populated and has extensive transportation links, including airplanes, trains, interstate buses, and private transportation modes. Standard deviational ellipses indicating the trend of COVID-19 spread during March 25 - April 16, 2020 in the five districts of DKI Jakarta Province (source: own elaboration, 2020) We characterized these standard deviational ellipses by area and, for comparison, present the two cases of March 25 and April 16, 2020. The centre locations of the two cases changed over the observed period (see Table 9). In West Jakarta, the centre moved from Kedoya Utara to Jelambar village, while in Central Jakarta it moved from Kwitang to Senen village. Furthermore, in South Jakarta, the centre shifted from Cipete Utara to Gandaria Selatan. In contrast, in East and North Jakarta, the centres remained in the same villages, namely Halim Perdana Kusuma and Papanggo, respectively (see Fig. 13a and b). Similarly, the ellipse orientations changed from March 25 to April 16, 2020. A larger change was evident in West Jakarta and South Jakarta (see Fig. 13a and b), while the ellipse directions of the other areas changed only slightly (see Table 9 and Fig. 13). A detailed visualisation of the change in ellipse directions is provided in Additional file 2. From the ellipse sizes, we can see that the ellipse of West Jakarta became smaller, while the ellipse of South Jakarta grew larger. This implies that the dispersion and spatial orientation of the deviational ellipses changed with the trend in the number of confirmed cases over time. Table 9 The characteristics of the standard deviational ellipses by area Centre locations of each standard deviational ellipse of COVID-19 cases on (a) March 25, 2020; (b) April 16, 2020; and (c) the mean centre based on all observed data in the five districts of DKI Jakarta Province (source: own elaboration, 2020) We summarize the average results of the SDE model for the observed time series from March 25 to April 16, 2020 (see Fig. 13c and Table 10). The mean centres of the ellipses were located in Kedoya Utara village (West Jakarta), Kwitang (Central Jakarta), Cipete Utara (South Jakarta), Halim Perdana Kusuma (East Jakarta), and Papanggo (North Jakarta), as can be seen in Fig. 13c. In addition, the sizes and rotations of each ellipse are provided in Table 10. Table 10 The average characteristics of the standard deviational ellipse for each district This study presented the geographic distribution of COVID-19 using GIS spatial technology and recommends alternate healthcare facilities, identified by service area, to support the existing referral hospitals. We compared two methods of deriving the service areas, based on distance and on travel time; the two methods yield slightly different service areas for the referral hospitals.
By using the time-based service area, we obtained more realistic results and an estimate of the time needed for patients to reach the nearest referral hospital. As mentioned by van Wee [54], it is important to include actual traffic conditions in the time component when generating service areas, especially for COVID-19 patients who may experience shortness of breath and severe fever. Moreover, in the case of emergency services, the time it takes to receive a diagnosis is directly related to survival. Meanwhile, previous studies have shown that increasing distance from a service is related to a lower willingness of patients to attend that hospital. In the case of COVID-19 in Indonesia, some recovered patients testified that they preferred to be hospitalized at a nearby hospital rather than at a suggested hospital far from their homes; Turnbull et al. [55] likewise note that people prefer to reach healthcare facilities near their neighbourhood. Therefore, the government of DKI Jakarta Province needs to take these factors into consideration when selecting alternate hospitals for COVID-19 patients. To our knowledge, efforts have been made by the Indonesian government to inform the public about the need to assign COVID-19 patients to referral hospitals and to ensure the capability of those referral hospitals to handle COVID-19 patients. The first was the implementation of the Decree of the Minister of Health [15] defining the health service criteria, including the criteria for outpatients and for inpatients with COVID-19. The second was the implementation of the Decree of the Minister of Health [14] on the guidelines for the prevention and control of COVID-19 at the national level. The third was planning and monitoring the availability of appropriate logistics to ensure the supply of medical devices, including personal protective equipment, ventilators, laboratory supplies and medicines. In this case, the Ministry of Health monitors the medicine distribution chain at the national scale, while the Provincial and District Health Offices not only ensure that the drug supply meets the needs of the primary and referral health facilities, but are also expected to anticipate delays in logistics delivery caused by travel restrictions during the COVID-19 pandemic, through early and more routine preparation of drug requests. The OD cost matrix uses least-cost paths and provides information on the best routes from multiple origins to multiple destinations. The lines in Fig. 9 represent the paths, and the attribute tables contain information about each line. However, because we had difficulties building the road network dataset from the available data resources, we did not perform the analysis with a time cut-off in the OD cost matrix: we could not determine the number of traffic lights, U-turns, and one-way or two-way road types along the road network, and Jakarta also restricts odd- and even-numbered licence plates on certain days of the week. Therefore, we used only a distance cut-off in the OD cost matrix. The OD cost matrix solver output is represented as straight lines on our map, but the values stored in the line attribute table reflect the network distance from origins to destinations.
The matrix output is a table listing the total impedance (which could be distance, time, or money) of the shortest network path between each centroid of COVID-19 patients (origins) and each referral hospital as a demand point (destinations). The distance stored in the origin-destination cost matrix is calculated over the street network, and each demand point is assigned to the facility to which it is closest. The search tolerance we used to locate the input features on the network is about 5 m, with the rural driving distance mode and a cut-off from origin to destination of about 10 km. We chose rural driving distance because this mode models the movement of cars and other similar small automobiles, such as pick-up trucks, and finds solutions that optimize travel distance. This travel mode obeys one-way roads, avoids illegal turns, and follows other rules specific to cars, but does not discourage travel on unpaved roads; its restrictions include Avoid Carpool Roads, Avoid Express Lanes, Avoid Gates, Avoid Private Roads, Driving an Automobile, Roads Under Construction Prohibited, and Through Traffic Prohibited [33]. There is no set number of destinations to find, so we obtain the rank, by distance, from each origin within 10 km to all possible destinations. The SDE model was developed under the assumption that the observed COVID-19 cases follow a normal distribution, meaning that the cases are densest at the centre and become less dense toward the periphery. Hence, the use of the SDE tool must consider the variance of the data and the shape of the study area, as also noted by Wang et al. [27]. In this research, based on our experiments, we defined two standard deviations (2 StDev) as the magnification factor, which raises the confidence level so that about 95% of the observed data, i.e. the COVID-19 cases, are covered. Failing to choose this standard deviation parameter correctly will produce errors, since the SDE model may then not represent the dispersion patterns and directions of COVID-19 cases in Jakarta. Therefore, the SDE method must be used with circumspection when calculating the geographic distribution of the features concerned. For instance, the region delineated by the SDE does not indicate the limits of the data spread and may obscure other features of the distribution. Each deviational ellipse not only represents the direction and orientation of the COVID-19 outbreak during the observation period, but also indicates the compactness of the features. Furthermore, the area of each ellipse indicates the concentration of the data: when the ellipse is small relative to the study area, the point set is clustered, as in Central Jakarta, whereas when it is large, as in East Jakarta, the data are widely distributed across the area. Based on the demand relative to the capacity of the referral hospitals and on the unassigned positive-infected cases generated from the OD cost matrix within a 10 km radius of the patient centroids, we can conclude whether there is a need to develop new referral hospitals or to allocate more health equipment to each alternate hospital; otherwise, other nearby healthcare facilities, which we have clustered by service area analysis, need to be prepared to provide services to COVID-19 patients. The unassigned positive-infected cases are highly concentrated in West Jakarta, which has the highest population density, at 19,516 people/km2.
However, based on the spatiotemporal concentration of COVID-19 cases that we evaluated over 23 days of reported data, the concentration appears to be growing toward Central Jakarta and the nearby area, shown by the yellow ellipse as the centre of the distribution, since this region lies within the ellipses of all the other regions. The SDE model can be used to investigate the dispersion patterns and directions of COVID-19 cases and to identify risk factors during a certain period. Therefore, as part of a disease control program, the SDE model can be applied to specific information in order to support effective health decisions. Furthermore, COVID-19 patients and health workers must work together to reduce COVID-19 transmission in crowded settings and in healthcare services, following the recommendations of the Indonesian Ministry of Health and the WHO. Queueing must be minimized, especially in places where patients tend to gather, such as registration counters, laboratory queues and pharmacies where patients collect their drugs. There are several limitations to our study. First, the analysis was based on the centroids of the villages, which may be less accurate than patients' home addresses; this can reduce the accuracy of the results, as it may influence the routes taken and hence the distances and travel times. Second, with a small number of confirmed cases (from March 3, 2020, to April 16, 2020), our SDE model might not represent the true distribution patterns and directions of COVID-19 cases. The limitations of the OD cost matrix implementation in this research are: 1) the OD cost matrix generates results quickly but cannot return the true shape of routes or their driving directions; 2) the OD cost matrix analysis solves least-cost paths along the network and represents each origin-destination path as a straight line to visualize the matrix on the map, so the actual network paths cannot be displayed; 3) links to the actual online data cannot be provided, since the software we used does not permit the reproduction of the results. On the positive side, the OD cost matrix solver reduces the computation time from each origin to all destinations, and its destination rank helps us categorize and identify the unassigned positive-infected cases and where they are located. Open-access data are available directly from the primary source at https://corona.jakarta.go.id/id/data. For the topographic map and hospital information, even though the data are in the public domain, a Foreign Research Permit is required for international readers or users who want to use the data. OD Cost Matrix: Origin-Destination Cost Matrix SDE: Standard Deviational Ellipse GIS: Geographical Information System PHEOC: Public Health Emergency Operating Center ARI: KITAS: Kartu Izin Tinggal Terbatas (Temporary Stay Permit Card in Indonesia) NIK: Indonesian citizenship identification number PICU: Paediatric intensive care unit NICU: Neonatal Intensive Care Unit ICCU: Intensive Coronary Care Unit HCU: High Care Unit World Health Organization (WHO) Situation Report-90 HIGHLIGHTS; Geneva PP - Geneva, 2020. Available online: https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200419-sitrep-90-covid-19.pdf?sfvrsn=551d47fd_4. Indonesian National Board for Disaster Management Peta Sebaran (Distribution Map of COVID-19) Available online: https://covid19.go.id/peta-sebaran (Accessed on Mar 10, 2020).
Provincial Government of DKI Jakarta Data Pemantauan Covid-19 DKI Jakarta (Jakarta Covid-19 Monitoring Data) Available online: https://corona.jakarta.go.id/id/data (Accessed on Apr 1, 2020). The Jakarta Post 1.2 million Indonesian workers furloughed, laid off as COVID-19 crushes economy. 2020. Available online: https://www.thejakartapost.com/news/2020/04/09/worker-welfare-at-stake-as-covid-19-wipes-out-incomes.html. Giannakeas V, Bhatia D, Warkentin MT, Bogoch II, Stall NM. Estimating the maximum capacity of COVID-19 cases manageable per day given a health care System's constrained resources; 2020. McGeoch G, Shand B, Gullery C, Hamilton G, Reid M. Hospital avoidance: an integrated community system to reduce acute hospital demand. Prim Health Care Res Dev. 2019;20:e144. Walker PGT, Whittaker C, Watson OJ, Baguelin M, Winskill P, Hamlet A, Djafaara BA, Cucunubá Z, Olivera Mesa D, Green W, et al. The impact of COVID-19 and strategies for mitigation and suppression in low- and middle-income countries. Science. 2020;369:eabc0035. Hou J, Tian L, Zhang Y, Liu Y, Li J, Wang Y. Study of influential factors of provincial health expenditure-analysis of panel data after the 2009 healthcare reform in China. BMC Health Serv Res. 2020;20. available online: https://doi.org/10.1186/s12913-020-05474-1. Kamel Boulos MN, Geraghty EM. Geographical tracking and mapping of coronavirus disease COVID-19/severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) epidemic and associated events around the world: how 21st century GIS technologies are supporting the global fight against outbr. Int J Health Geogr. 2020;19:1–12. Fan J, Liu X, Pan W, Douglas MW, Bao S. Epidemiology of coronavirus disease in Gansu Province, China, 2020. Emerg Infect Dis. 2020;26:1257–65. Zheng Z, Xia H, Ambinakudige S, Qin Y, Li Y, Xie Z, Zhang L, Gu H. Spatial accessibility to hospitals based on web mapping API: An empirical study in Kaifeng, China. Sustain. 2019;11. Available online: https://doi.org/10.3390/su11041160. Phua J, Faruq MO, Kulkarni AP, Redjeki IS, Detleuxay K, Mendsaikhan N, Sann KK, Shrestha BR, Hashmi M, Palo JEM, et al. Critical care bed capacity in Asian countries and regions. Crit Care Med. 2020;48. Available online: https://doi.org/10.1097/ccm.0000000000004222. Bohmer RMJ, Pisano GP, Sadun R, Tsai TC. Harvard Business Review How Hospitals Can Manage Supply Shortages as Demand Surges. Available online: https://hbr.org/2020/04/how-hospitals-can-manage-supply-shortages-as-demand-surges. Indonesian Ministry of Health Decree of Minister of Health number HK.01.07/MENKES/413/2020 on the guidelines for prevention and control of COVID-19. Indonesia: Indonesian Ministry of Health; 2020. Available online in Bahasa (Indonesianlanguage): https://covid19.go.id/storage/app/media/Regulasi/KMK%20No.%20HK.01.07-MENKES-413-2020%20ttg%20Pedoman%20Pencegahan%20dan%20Pengendalian%20COVID-19.pdf. Indonesian Ministry of Health Decree of Minister of Health number HK.01/07/MENKES/446/2020 on reimbursement procedures for hospitals treating COVID-19 cases; 2020. Available online in Bahasa (Indonesian language): https://infeksiemerging.kemkes.go.id/download/KMK_No._HK.01.07-MENKES-446-2020_ttg_Petunjuk_Teknis_Klaim_Biaya_Pasien_Infeksi_Emerging_Tertentu_Bagi_RS_Pelayanan_COVID-19.pdf. Viktorsson L, Yngman-Uhlin P, Törnvall E, Falk M. Healthcare utilisation and health literacy among young adults seeking care in Sweden: findings from a cross-sectional and retrospective study with questionnaire and registry-based data. Prim Heal Care Res Dev. 
2019;20. Available online: https://doi.org/10.1017/S1463423619000859. Crooks VA, Andrews GJ. Community, equity, access: Core geographic concepts in primary health care. Prim Heal Care Res Dev. 2009;10:270–3. Cik M, Fellendorf M. Cell phone based origin-destination matrices for transport Modelling. Transp. Res. Procedia. 2019;41:551–3. Sanyaolu A, Okorie C, Marinkovic A, Patidar R, Younis K, Desai P, Hosein Z, Padda I, Mangat J, Altaf M. Comorbidity and its impact on patients with COVID-19. SN Compr Clin Med. 2020;2:1069–76. Bonnel P, Hombourger E, Olteanu-Raimond AM, Smoreda Z. Passive mobile phone dataset to construct origin-destination matrix: potentials and limitations. Transp Res Procedia. 2015;11:381–98. Deng Q, Cheng L. Research review of origin-destination trip demand estimation for subnetwork analysis. Procedia Soc. Behav. Sci. 2013;96:1485–93. Wang F, Xu Y. Estimating O–D travel time matrix by Google maps API: implementation, advantages, and implications. Ann GIS. 2011;17:199–209. Osorio C. Dynamic origin-destination matrix calibration for large-scale network simulators. Transp Res Part C Emerg Technol. 2019;98:186–206. Eryando T, Susanna D, Pratiwi D, Nugraha F. Standard deviational ellipse (SDE) models for malaria surveillance, case study: Sukabumi district-Indonesia, in 2012. Malar J. 2012;11:P130. Saadallah DM. Utilizing participatory mapping and PPGIS to examine the activities of local communities. Alexandria Eng J. 2020;59:263–74. Tewara MA, Mbah-Fongkimeh PN, Dayimu A, Kang F, Xue F. Small-area spatial statistical analysis of malaria clusters and hotspots in Cameroon;2000–2015. BMC Infect Dis. 2018;18:1–15. Wang B, Shi W, Miao Z. Confidence analysis of standard deviational ellipse and its extension into higher dimensional Euclidean space. PLoS One. 2015. Available online: https://doi.org/10.1371/journal.pone.0118537. Statistics of DKI Jakarta DKI Jakarta Province in Figures 2019 (Statistik Daerah Provinsi DKI Jakarta 2019); 2019. Available online in Bahasa (Indonesian language): https://jakarta.bps.go.id/publication/download.html?nrbvfeve=M2NhNmY0ZDVhZmVkYmMyMTA2MzJhODBk&xzmn=aHR0cHM6Ly9qYWthcnRhLmJwcy5nby5pZC9wdWJsaWNhdGlvbi8yMDE5LzA5LzI2LzNjYTZmNGQ1YWZlZGJjMjEwNjMyYTgwZC9zdGF0aXN0aWstZGFlcmFoLXByb3ZbnNpLWRraS1qYWthcnRh. Indonesian National Board for Disaster Management InaRisk Available online: http://inarisk.bnpb.go.id/ (Accessed on Mar 10, 2020). Indonesian Ministry of Health Infeksi Emerging Available online: https://covid19.kemkes.go.id/situasi-infeksi-emerging/info-corona-virus/menteri-kesehatan-tetapkan-132-rumah-sakit-rujukan-covid-19/#.X4w1PNAzaMo (Accessed on Apr 1, 2020). Indonesian Ministry of Health The Center of Health Crisis Available online: http://pusatkrisis.kemkes.go.id/spasial/# (Accessed on Mar 13, 2020). Loidl M, Wallentin G, Cyganski R, Graser A, Scholz J, Haslauer E. GIS and transport modeling-strengthening the spatial perspective. ISPRS Int. J. Geo-Information. 2016;5. Available online: https://doi.org/10.3390/ijgi5060084. Mitchell A. The Esri Guide to GIS Analysis, Volume 3 | Modeling Suitability, Movement, and Interaction ESRI Press. 2012. pg. 326-365. ISBN 978-158948-305-7 (manual book). Mulrooney T, Beratan K, McGinn C, Branch B. A comparison of raster-based travel time surfaces against vector-based network calculations as applied in the study of rural food deserts. Appl Geogr. 2017. Available online: https://doi.org/10.1016/j.apgeog.2016.10.006. Sánchez-García S, Athanassiadis D, Martínez-Alonso C, Tolosana E, Majada J, Canga E. 
A GIS methodology for optimal location of a wood-fired power plant: quantification of available woodfuel, supply chain costs and GHG emissions. J Clean Prod. 2017. Available online: https://doi.org/10.1016/j.jclepro.2017.04.058. Zannat KE, Adnan MSG, Dewan A. A GIS-based approach to evaluating environmental influences on active and public transport accessibility of university students. J Urban Manag. 2020. Available online: https://doi.org/10.1016/j.jum.2020.06.001. Jamtsho S, Corner R, Dewan A. Spatio-temporal analysis of spatial accessibility to primary health care in Bhutan. ISPRS Int J Geo-Inform. 2015;4:1584–604. Cheng Y, Wang J, Rosenberg MW. Spatial access to residential care resources in Beijing, China. Int J Health Geogr. 2012. Available online: https://doi.org/10.1186/1476-072X-11-32. Morales J, Flacke J, Morales J, Zevenbergen J. Mapping urban accessibility in data scarce contexts using space syntax and location-based methods. Appl Spat Anal Policy. 2019. Available online: https://doi.org/10.1007/s12061-017-9239-1. Niedzielski MA, Eric Boschmann E. Travel time and distance as relative accessibility in the journey to work. Ann Assoc Am Geogr. 2014. Available online: https://doi.org/10.1080/00045608.2014.958398. Algharib SM. Distance and coverage: an assessment of location-allocation models for fire stations in Kuwait City. Kuwait: Kent State University; 2011. Nicholl J, West J, Goodacre S, Turner J. The relationship between distance to hospital and patient mortality in emergencies: an observational study. Emerg Med J. 2007. Available online: http://dx.doi.org/10.1136/emj.2007.047654. Zinszer K, Charland K, Kigozi R, Dorsey G, Kamya MR, Buckeridge DL. Determining health-care facility catchment areas in Uganda using data on malaria-related visits. Bull World Health Organ. 2014;92:178–86. ESRI Service area analysis Available online: https://desktop.arcgis.com/en/arcmap/latest/extensions/network-analyst/service-area.htm. (Accessed on Apr 10, 2020). Indonesian Ministry of Transportation Regulation of Ministry of Transportation Number 111/2015 concerning Procedure for Establishing Speed Limits. Indonesia: Indonesian Ministry of Law and Human Rights; 2015. Available online in Bahasa (Indonesian language): http://jdih.dephub.go.id/assets/uudocs/permen/2015/PM_111_Tahun_2015.pdf. Indonesian Ministry of Transportation Project for the Study on Jabodetabek Public Transportation Policy Implementation Strategy in the Republic of Indonesia (JAPTraPIS); 2012. Available online: https://openjicareport.jica.go.jp/pdf/12079000_01.pdf. ESRI Overview of Network analysis layers Available online: https://desktop.arcgis.com/en/arcmap/latest/extensions/network-analyst/overview-of-network-analysis-layers.htm. (Accessed on Apr 1, 2020). ESRI OD cost matrix analysis Available online: https://desktop.arcgis.com/en/arcmap/latest/extensions/network-analyst/od-cost-matrix.htm. Accessed 1 Apr 2020. Gulliford M, Morgan M. Access to health care; 2013; eISBN: 9780203867952 available online: https://doi.org/10.4324/9780203867952. Lefever DW. Measuring geographic concentration by means of the standard deviational ellipse. Am J Sociol. 1926;32:88–94. Furfey PH. A Note on Lefever's "Standard Deviational Ellipse". Am J Sociol. 1927;33:94–8. Gong J. Clarifying the standard deviational ellipse. Geogr Anal. 2002;34:155–67. Yuill RS. The Standard Deviational Ellipse; An Updated Tool for Spatial Description. Geogr. Ann. Ser. B, Hum. Geogr. 1971;53:28–39. van Wee B. Accessible accessibility research challenges. J Transp Geogr. 
2016. Available online: https://doi.org/10.1016/j.jtrangeo.2015.10.018. Turnbull J, Martin D, Lattimer V, Pope C, Culliford D. Does distance matter? Geographical variation in GP out-of-hours service use: an observational study. Br J Gen Pract. 2008. Available online: https://doi.org/10.3399/bjgp08X319431. We thank our colleagues from the Indonesian National Board for Disaster Management, the Ministry of Health of Indonesia, the Provincial Government of DKI Jakarta, the Centre for Management and Dissemination of Geospatial Information, the Centre for Thematic Mapping and Integration, and the Centre for Regional Planning Mapping and Atlas, who provided data, related documents, insights and expert input that greatly assisted the research, although they may not agree with all of the interpretations and conclusions of this paper. This work was supported by the Geospatial Information Agency of Indonesia (Badan Informasi Geospasial) through DIPA 2020 (3539.967.054A.521219) to the third author. The funders of the study had no role in data collection, data analysis and processing, data interpretation, or writing of the report. Research Division of Geospatial Information Agency of Indonesia, Jalan Raya Bogor Km. 46, Cibinong, Bogor, Jawa Barat, 16911, Indonesia Florence Elfriede Sinthauli Silalahi, Fahrul Hidayat, Ratna Sari Dewi, Nugroho Purwono & Nadya Oktaviani Florence Elfriede Sinthauli Silalahi Fahrul Hidayat Ratna Sari Dewi Nugroho Purwono Nadya Oktaviani FS, RD, FH and NP processed and analysed the data. FS, RD, FH and NP prepared the references. NP and NO retrieved and managed the data. RD supervised the model and all the maps. FS and RD wrote the first draft of the paper. FS, RD, FH and NP contributed equally as the main contributors to this work. All authors read and approved the final manuscript. Correspondence to Florence Elfriede Sinthauli Silalahi. Not required. The capacity of the Referral Hospitals. Standard Deviational Ellipse per day from March 25, 2020 to April 16, 2020. Data and information regarding patients and COVID-19 trends in Jakarta can be accessed directly at https://corona.jakarta.go.id/id/data. For the topographic map and hospital information, even though the data are in the public domain, a Foreign Research Permit is required for international readers or users who want to use the data. Silalahi, F.E.S., Hidayat, F., Dewi, R.S. et al. GIS-based approaches on the accessibility of referral hospital using network analysis and the spatial distribution model of the spreading case of COVID-19 in Jakarta, Indonesia. BMC Health Serv Res 20, 1053 (2020). https://doi.org/10.1186/s12913-020-05896-x Referral hospital Healthcare needs and demand
CommonCrawl
Do you prove all theorems whilst studying? When you come across a new theorem, do you always try to prove it first before reading the proof within the text? I'm a CS undergrad with a bit of an interest in maths. I've not gone very far in my studies -- sequence two of Calculus -- but what I'm trying to understand right now is how one actually goes about studying so that, when finished with a good text, there's more of an intuitive understanding than a superficial one. After reading "The Art Of Problem Solving" from the Final Perspectives section of part eight in 'The Princeton Companion to Mathematics', it seems to hint at approaching studying in that very way. A quote in particular, from Eisenstein, that caught my attention was the following -- I'm not going to paraphrase much: The method used by the director was as follows: each student had to prove the theorems consecutively. No lecture took place at all. No one was allowed to tell his solutions to anybody else and each student received the next theorem to prove, independent of the other students, as soon as he had proved the preceding one correctly, and as long as he had understood the reasoning. This was a completely new activity for me, and one which I grasped with incredible enthusiasm and an eagerness for knowledge. Already, with the first theorem, I was far ahead of the others, and while my peers were still struggling with the eleventh or twelfth, I had already proved the hundredth. There was only one young fellow, now a medicine student, who could come close to me. While this method is very good, strengthening, as it does, the powers of deduction and encouraging autonomous thinking and competition among students, generally speaking, it can probably not be adapted. For as much as I can see its advantages, one must admit that it isolates a certain strength, and one does not obtain an overview of the whole subject, which can only be achieved by a good lecture. Once one has acquired a great variety of material through [...] For students, this method is practicable only if it deals with small fields of easily, understandable knowledge, especially geometric theorems, which do not require new insights and ideas. I feel that this type of environment is something you don't often see, especially in the US -- perhaps that's why so many of our greats are foreign born. As I understand it, he does go on to say that he wouldn't particularly recommend that method of study for higher mathematics, though. A similar question was posed to mathoverflow where Tim Gowers (Fields Medal) went on to say that he recommended similar methods to study: link I'm not quite certain that I understood the context of it all, though. Upon asking a few people whose opinion mattered to me, I was told that if time were precious to me, it would be a waste going about studying mathematics in that way, so I'd like to get some perspective from you, math.stackexchange. How do you go about studying your texts? Edit: Broken link fixed. soft-question intuition education self-learning learning There are two kinds of mathematicians: those who do not try to prove all theorems before reading the proofs and liars. – Georges Elencwajg Mar 9 '12 at 9:16 I tried to. But I don't have that kind of time. – Jeff Mar 9 '12 at 10:49 The Real Analysis classes I took years (decades!) ago at Bowdoin College were lecture-less. The professor (William Barker) divided us into small groups and gave us (hand-written!)
handouts and problem sets that guided us through proving everything in the course. Naturally, this approach took longer than normal --we had extra-long and extra-many class periods to attend-- but, doggonit, we built Calculus with our own hands! Very satisfying. – Blue Mar 9 '12 at 11:58 Crossposted to Reddit: reddit.com/r/math/comments/qoocl . The broken link to the MathOverflow question is here. – user856 Mar 9 '12 at 17:02 As far as studying, I don't think you should ever try to learn mathematics linearly. As a prof of mine once remarked, it's written linearly to be easily checked for correctness, not to be understood. – Neal Mar 10 '12 at 2:48 There is a continuum in the way one understands a theorem. At one end of the spectrum mathematicians just try to understand the statement and use it as a black box. At the other end they understand the theorem so well that they improve on it: this is called research. An important thing to keep in mind is that your attitude toward a result is not fixed for ever: you may first consider it as a black box and solve exercises by blindly using it, then see how it is quoted in proving corollaries or other theorems and finally come back to it and realize that it is actually quite natural. Professors have the advantage that they really have to understand a theorem if they want to teach it well and answer the students' questions. One of the great aspects of this site is that everybody can be a teacher: I strongly advise you to try and answer questions here. They are at all possible levels and I am sure you can find some that you will answer very competently. A paradoxical way of expressing what it means to have understood a theorem is to say that ideally you have to reach the stage where you consider that all its proofs in the literature are "wrong": it is a patently absurd statement but it conveys the idea that the theorem is now yours because you have integrated it into your own mathematical world. Since Neal asks about this in his comment, let me emphasize that when I say that proofs in the literature are "wrong" I mean that, although they are technically 100% correct, they don't correspond to the subjective way one has organized one's understanding of the subject. For example, the definition I like for a finite field extension $K/k$ to be separable is that it is étale i.e. that the tensor product with an algebraic closure of $k$ is split: $K\otimes_k\bar k\cong \overline{k}^n$. I know this is rather idiosyncratic and of course I know the equivalence with the usual definition, but then I feel that long proofs that $\mathbb C\otimes _\mathbb R\mathbb C$ is not a field are "wrong" since I know, by the definition of separable I have interiorized, that $\mathbb C\otimes _\mathbb R\mathbb C=\mathbb C^2$. Let me emphasize that all this is a completely personal and secret [till today :-)] attitude within myself and that I absolutely don't advocate that other mathematicians should change their definition of separable. – Georges Elencwajg Another big +1. As a student of mathematics, it is important to realize (i) there are many different levels of understanding.
If you try to ignore this you will either get weighed down on very basic stuff and never move out of that small domain or (worse) will not have a deep understanding of anything; and (ii) the level of one's understanding of any given part of mathematics is a function of time...which is not even guaranteed to be non-decreasing, I'm sorry to say. – Pete L. Clark Mar 9 '12 at 17:23 Thanks, @Pete: it feels great for me to know that we hold similar views on the subject. – Georges Elencwajg Mar 9 '12 at 18:02 I did try to prove propositions in my textbooks before reading the proofs when I was studying subjects like linear algebra and real analysis... But later on, when things became more and more difficult, I had to give it up, and today I am still wondering whether it was worthwhile... Because, on one hand, there are many benefits of doing so: you may gain a better understanding of the statement, improve your skill at proving things, and you get great joy when you work something out! On the other hand, you will progress very slowly (if not, then congratulations!), and there is a lot more to explore. However, I do have some small suggestions: if you develop the habit of proving everything on your own, do not forget to zoom out from your proof of a specific statement and review the big picture of a lecture, a section or even a subject once in a while. If you don't want to prove something, read others' proofs; don't skip them. Maybe you can pick one or two subjects which are most important or interesting for you. – Huachen Chen See my question on mathoverflow for books that are designed for this sort of study: https://mathoverflow.net/questions/12709/are-there-any-books-that-take-a-theorems-as-problems-approach Also, from a short bio of S. Ramanujan: At sixteen, Ramanujan borrowed the English text "Synopsis of Pure Mathematics". This work was to prove a deep influence on Ramanujan's development as a mathematician, for it offered mathematical theorems without accompanying proofs, thereby prompting Ramanujan to prove the material by his own mathematical cunning. Full text here: http://myhero.com/go/hero.asp?hero=s_ramanujan – aspy591 Regarding Ramanujan and Carr's Synopsis, while it is an inspiring story, one should bear in mind how exceptional Ramanujan was as a mathematician. (I think that most would agree that this was not caused by his learning from Carr's Synopsis, but rather that his ability to extract what he did from Carr's Synopsis was a sign of his remarkable nature). Regards, – Matt E Mar 10 '12 at 4:53
Analysis of a stochastic SIRS model with interval parameters
Kangbo Bao 1, Libin Rong 2, and Qimin Zhang 3 (corresponding author)
1 School of Mathematics and Statistics, Ningxia University, Yinchuan, 750021, China
2 Department of Mathematics, University of Florida, Gainesville, FL 32611, USA
Received February 2018; Revised September 2018; Published February 2019

Many studies of mathematical epidemiology assume that model parameters are precisely known. However, they can be imprecise due to various uncertainties. Deterministic epidemic models are also subject to stochastic perturbations. In this paper, we analyze a stochastic SIRS model that includes interval parameters and environmental noises. We define the stochastic basic reproduction number, which is shown to govern disease extinction or persistence. When it is less than one, the disease is predicted to die out with probability one. When it is greater than one, the model admits a stationary distribution. Thus, larger stochastic noises (resulting in a smaller stochastic basic reproduction number) are able to suppress the emergence of disease outbreaks. Using numerical simulations, we also investigate the influence of parameter imprecision and of the susceptible individuals' response to disease information, which may change individual behavior and protect the susceptible from infection. These parameters can greatly affect the long-term behavior of the system, highlighting the importance of incorporating parameter imprecision into epidemic models and the role of information intervention in the control of infectious diseases.

Keywords: Stochastic SIRS model, stochastic basic reproduction number, interval parameter, information intervention, extinction and persistence.
Mathematics Subject Classification: Primary: 37H10; Secondary: 34F05.
Citation: Kangbo Bao, Libin Rong, Qimin Zhang. Analysis of a stochastic SIRS model with interval parameters. Discrete & Continuous Dynamical Systems - B, 2019, 24 (9): 4827-4849. doi: 10.3934/dcdsb.2019033
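The model equations themselves are not reproduced on this page. As a rough illustration of how such systems are usually simulated numerically (an Euler-Maruyama time-stepping scheme), the Python sketch below integrates a generic SIRS system with multiplicative environmental noise. The drift terms, noise structure, and all parameter values are illustrative assumptions and not the authors' interval-parameter model; the information variable Z that appears in the figure captions is omitted.

```python
# Minimal Euler-Maruyama sketch of a noise-perturbed SIRS model.
# NOTE: the drift/diffusion terms and every parameter value below are
# illustrative assumptions for a generic stochastic SIRS system; they are
# NOT the specific model or parameters analyzed in the paper.
import numpy as np

def simulate_sirs(T=150.0, dt=0.01, seed=0,
                  Lambda=5.0, beta=0.002, mu=0.01, gamma=0.1, delta=0.05,
                  sigma1=0.02, sigma2=0.02, sigma3=0.02,
                  S0=479.0, I0=20.0, R0=1.0):
    """Simulate S, I, R with multiplicative environmental noise."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    S, I, R = np.empty(n + 1), np.empty(n + 1), np.empty(n + 1)
    S[0], I[0], R[0] = S0, I0, R0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=3)   # independent Brownian increments
        dS = (Lambda - beta * S[k] * I[k] - mu * S[k] + delta * R[k]) * dt + sigma1 * S[k] * dW[0]
        dI = (beta * S[k] * I[k] - (mu + gamma) * I[k]) * dt + sigma2 * I[k] * dW[1]
        dR = (gamma * I[k] - (mu + delta) * R[k]) * dt + sigma3 * R[k] * dW[2]
        S[k + 1] = max(S[k] + dS, 0.0)   # crude positivity guard for this sketch
        I[k + 1] = max(I[k] + dI, 0.0)
        R[k + 1] = max(R[k] + dR, 0.0)
    return S, I, R

S, I, R = simulate_sirs()
print(f"I(150) = {I[-1]:.2f}")   # repeated runs give an empirical distribution of I(150)
```

In such a simulation, increasing the noise intensities tends to drive the infected class toward extinction, in line with the abstract's observation that larger stochastic noises suppress outbreaks.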
Figure 1. The path of $S(t)$, $I(t)$, $R(t)$ and the histogram of the probability density function of $I(150)$ assuming p = 0.1, $(S_0,I_0,R_0,Z_0) = (479.0,20.0,1.0,10.0)$ under different noise intensities.
Figure 2. The path of $S(t)$, $I(t)$, $R(t)$ assuming p = 0.1, $(S_0,I_0,R_0,Z_0) = (479.0,20.0,1.0,10.0)$ with different noise intensities.
Figure 3. Variation of $\mathscr{R}_0$ and $\mathscr{R}_s$ as $p$ varies.
Figure 4. The path of $S(t)$, $I(t)$, $R(t)$ with initial $(S_0,I_0,R_0,Z_0) = (479.0,20.0,1.0,10.0)$ for p = 0.2, p = 0.4 and p = 0.6, respectively.
Figure 5. The path of $I(t)$ with initial $(S_0,I_0,R_0,Z_0) = (479.0,20.0,1.0,10.0)$ under different noise intensities and imprecise parameter p.
MOF-Like 3D Graphene-Based Catalytic Membrane Fabricated by One-Step Laser Scribing for Robust Water Purification and Green Energy Production
Water treatment and desalination
Xinyu Huang, Liheng Li, Shuaifei Zhao, Lei Tong, Zheng Li, Zhuiri Peng, Runfeng Lin, Li Zhou, Chang Peng, Kan-Hao Xue, Lijuan Chen, Gary J. Cheng, Zhu Xiong & Lei Ye
Nano-Micro Letters, volume 14, Article number: 174 (2022)

Highlights
A novel multifunctional three-dimensional graphene-metal frame film combines water purification with hydrogen production from water splitting, offering excellent purification efficiency and extremely low energy consumption.
A "green" one-step laser scribing technology requires no organic solvent in the whole preparation process.
Metal nanoparticles acting as active sites can be loaded in the graphene film and wrapped by graphene, preventing undesirable aggregation and increasing the number of catalytic active sites.

Abstract
Meeting the increasing demands for both clean water and green energy is a grand challenge of our age. Here, we fabricate a novel multifunctional 3D graphene-based catalytic membrane (3D-GCM) loaded with active metal nanoparticles (AMNs) for simultaneous water purification and clean energy generation, via a "green" one-step laser scribing technology. The as-prepared 3D-GCM shows high porosity and a uniform distribution of AMNs, and exhibits high permeate fluxes (over 100 L m−2 h−1) and versatile super-adsorption capacity for removing recalcitrant organic pollutants from wastewater under an ultra-low driving pressure (0.1 bar). After adsorption saturation, the AMNs in the 3D-GCM actuate an advanced oxidation process that self-cleans the fouled membrane via catalysis and restores the adsorption capacity for the next separation cycle. Most importantly, the laser-scribed welding of the 3D-GCM withstands the lateral shear forces that damage membranes during long-term separation. Moreover, under light irradiation the 3D-GCM emits plentiful hot electrons from the AMNs, enabling catalytic water-splitting reactions within the membrane for hydrogen energy generation. This "green" precision manufacturing by laser scribing provides a feasible route to fabricate highly efficient and robust 3D-GCM microreactors for challenging wastewater purification and sustainable clean energy production.

The ever-increasing demands for energy and clean water are two major issues currently facing many countries [1, 2]. This has led to extensive research on contaminated water purification as well as hydrogen production from water splitting [1,2,3,4,5], which often relies on advanced functional nanomaterials. Catalytic membranes combine filtration and catalytic properties and are among the most promising materials for sewage treatment and hydrogen production [2,3,4]. Among them, materials scientists have paid much attention to graphene oxide (GO) membranes because of their high structural stability, exceptional water permeation, and molecular sieving properties [5]. Owing to these advantages, GO nanosheets have acted as the "bricks" to construct stacked 3D porous membranes for fast and efficient production of clean water from wastewater and saline water [5, 6].
Unlike traditional polymeric membranes, 3D graphene-based membranes (3D-GMs) enable ultrafast transport of water through defects or nanochannels between individual GO nanosheets; meanwhile, their narrow interlayer spacings and large specific surface areas provide high removal performance for various organics and soluble metal ions via rejection or adsorption [5,6,7]. Compared with traditional membranes, 3D-GMs are better at breaking the trade-off between permeation flux and rejection precision [8,9,10]. However, traditional 3D-GMs are prepared from GO dispersions with mass fractions below 1.0 wt.% via solution-processed self-assembly, electric-field assistance, or vapor deposition, which often costs much time and effort to stack the GO nanosheets. Moreover, such GO membranes suffer from low stability in aqueous media [11,12,13]. When GO membranes are immersed in aqueous media and subjected to certain hydraulic pressures, polar water easily enters, swells, and separates the GO nanosheets from each other, leading to delamination of the membranes within a few hours [11,12,13]. Especially in practical separation systems, powerful cross-flow shearing can severely damage GO membranes within a very short time [12]. Although a number of strategies, such as cross-linking by chemical cross-linkers or multivalent metal cations [14, 15], hydroiodic acid or hydrazine reduction [16, 17], and insertion of polyelectrolytes [18] or epoxy resins [19], have been proposed to improve the stability of GO membranes, these modification routes again require large amounts of solvents to disperse the modifiers for cross-linking reactions within the narrow interlayer spacings of the GO nanosheets. Thus, both architectural engineering and cross-linking modification of 3D-GMs cause additional environmental risks because of the large amounts of chemicals and waste solvents involved. Unfortunately, there is currently no better choice than solution self-assembly to fabricate advanced 3D-GMs, let alone to modify or functionalize them. For example, researchers have recently focused on engineering 3D graphene-based catalytic membranes (3D-GCMs) by incorporating active metal nanoparticles (MNPs) into the membrane pores [20, 21]. Based on the encapsulated MNPs, 3D-GCMs exhibit various state-of-the-art properties such as clean energy production of H2 [21]. To enhance H2 production efficiency, plasmonic photocatalysis is widely implemented to generate hot electrons, which accelerate the catalytic dissociation of water at the surface of the MNPs [22, 23]. Meanwhile, electron transfer at the surface of the MNPs can also activate many kinds of peroxides to generate reactive oxygen species (ROS), whose strong oxidizing power degrades recalcitrant organic pollutants (organic dyes and antibiotics) in water [20]. However, the intrinsically ultrafast recombination of photoinduced electron–hole pairs within a metal nanoparticle can hinder hot-electron transfer to the outside. This phenomenon commonly results from large nanoparticle diameters and reduces the efficiency of plasmonic photocatalysis [24]. To maximize the separation of electron–hole pairs, it is essential to prevent undesirable aggregation of the MNPs and to couple them to effective electron acceptors with high electron mobility [25].
To obtain excellent photocatalytic 3D-GCMs, the MNP precursors are usually premixed with a GO dispersion [20, 23,24,25]. By virtue of the functional groups of the GO nanosheets, the MNP precursors are evenly anchored on the GO surfaces and then sequentially undergo crystal growth, reduction, and solution self-assembly, ultimately yielding the 3D-GCMs [20, 21, 23]. This fabrication process is time-consuming and wastes solvents, chemical agents, and metal catalysts. To this end, pursuing a "green" synthesis approach plays a vital role in shaping the structural properties of scalable 3D-GCMs for applications involving hydrogen production and water purification. Herein, we introduce a green, cost-effective, and ultrafast strategy to construct robust 3D-GCMs via advanced laser scribing technology [26, 27]. The ultrafast heating of the GO generates endogenous pyrolysis, which not only welds the GO nanosheets into a robust 3D-GM but also sinters the MNPs in situ on the GO nanosheet surfaces. The clever intercalation of MNPs into the graphene sheets prevents aggregation of the noble MNPs and enhances the efficiency of electron transfer. The green fabrication process does not require any solvent or chemical reagents. More interestingly, after laser scribing the microstructure of the 3D-GCMs resembles that of porous metal–organic frameworks (MOFs), with high porosity, strong mechanical strength, and large surface area. Correspondingly, our novel 3D-GCM, with its excellent dynamic adsorption capacity, can remove various trace contaminants and generate a large amount of clean water at a very low hydraulic pressure and in a short time, which greatly reduces energy consumption and improves efficiency (Fig. 1a). The loaded active MNPs also enable self-cleaning of the 3D-GCM through an advanced oxidation process (AOP) [20]. Besides, non-radiative decay of the surface plasmon resonance of the MNPs generates hot electrons, which continuously produce hydrogen under ultraviolet irradiation. Therefore, our work provides an efficient, cost-effective, and green technology for developing novel membranes for industrial applications that require advanced wastewater purification and energy regeneration.

Fig. 1: a Schematic diagram of sewage treatment in Cu/Pd@3D-GCM. b XRD patterns, c size distribution of MNPs, d XPS spectra of Cu/Pd@3D-GCM, and e Raman spectra of Cu/Pd@3D-GCM. f SEM image, g TEM image, and h high-resolution TEM images of Cu/Pd@3D-GCM.

Experimental Section
Preparation of 3D-GCM
MOF powder, fixed by a copper sheet mold, was sandwiched between two glass slides, and laser scribing was then performed on the MOF powder. A nanosecond pulsed laser (YLP, IPG Photonics) was used as the precise energy source (1064 nm wavelength, 80 ns pulse width). The laser can be focused to a spot size of 50–200 μm. Any pattern can be programmed on the MOF layer through programmatic control of the beam direction with a galvanometer (6240H, Cambridge Technology Inc.), enabling preparation of large-area membranes.

X-ray diffraction (XRD) was conducted on a PANalytical B.V. X-ray diffractometer. A field-emission transmission electron microscope (FTEM, Tecnai G2 F30, Netherlands) equipped with energy-dispersive spectroscopy (EDS) was used to analyze the membrane morphology. Elements on the sample surfaces were characterized by X-ray photoelectron spectroscopy (XPS, AXIS-ULTRA DLD-600 W, Japan).
Raman spectroscopy (LabRAM HR800, France) was performed to confirm the graphene nanocomposites. An ultraviolet–visible–near-infrared spectrophotometer (UV–Vis-NIR, SolidSpec-3700, Japan) was used to estimate the optical response of the as-synthesized samples. Fourier-transform infrared (FTIR) spectra were collected on a VERTEX 70 instrument (Germany). N2 adsorption isotherms were measured using an Autosorb-IQ2 (America).

Performance Tests of 3D-GCM
A cross-flow system was employed to evaluate the permeability of 3D-GCM under a given hydraulic pressure. The effective area of the membrane was 3.14 cm2. Deionized (DI) water was used to compact the membrane until it reached a steady state at 0.1 bar. After 5 h of compaction, the pure water flux was automatically recorded every 5 min for 60 min under 0.1 bar at 25 °C. To obtain the most accurate water flux and reduce deviation, we recorded five sets of data and averaged them, and calculated the water flux and the ultrafiltration coefficient (Kuc) of Cu/Pd@3D-GCM by Eqs. (1) and (2):

$$ J = \frac{V}{S \times t} \tag{1} $$

$$ K_{uc} = \frac{J}{P} \tag{2} $$

where V, S, and t are the volume of the permeate (L), the area of the membrane (m2), and the duration of the permeation process (h), respectively, and P is the driving pressure. The rejection properties of 3D-GCM were investigated using Rhodamine B (RhB, 20 ppm) as a representative foulant suitable for rejection and antifouling tests. A peristaltic pump (BT300-1F, Longer Pump, China) was used to regulate the flow rate. The mixed solution was passed through the membrane under 0.1 bar. After the system stabilized, the flux data were recorded every 10 min and the filtrate was collected. At the end of the filtration test, sufficient amounts of permeate and retentate were collected and sealed separately. An ultraviolet–visible spectrophotometer (UV–Vis, TU1810, Persee, China) was used to determine the RhB concentration in the cumulative sample from its absorption peak at 554 nm. The removal efficiency was calculated by Eq. (3):

$$ R = \left( 1 - \frac{C}{C_{0}} \right) \times 100\% \tag{3} $$

where R is the foulant rejection (%), and C and C0 are the foulant concentrations (mg L−1) in the permeate and feed solution, respectively. The adsorption capacity of 3D-GCM was calculated by Eq. (4):

$$ q = \frac{\left( C_{0} - C_{t} \right)V}{m} \tag{4} $$

where C0 and Ct are the initial and real-time concentrations of pollutants in water during the membrane separation process, respectively, V is the volume of feed solution passed at that time, and m is the weight of the 3D-GCM.

Photocatalytic Degradation Performance of 3D-GCM for Organic Pollutants
For dynamic photocatalytic degradation, RhB solution (10 ppm) was added to the membrane reactor for filtration. The solution flowed through the catalytic membrane in the dark for the first 30 min to reach saturated adsorption, establishing an adsorption–desorption equilibrium at the surface of the 3D-GCM. After saturated adsorption of RhB in the dark, the RhB feed was stopped. Then an H2O2 solution (5 µL H2O2 per 10 mL water) was flowed through the catalytic membrane under UV light (8 W, 350 nm). The filtrate was analyzed by UV–Vis spectroscopy (TU1810, Persee, China) to record complete rejection and adsorption of the organic pollutant. Furthermore, after given time intervals, the color change of the feed solution under light irradiation was observed and the UV–Vis spectrum of the filtrate was recorded.
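For readers who want to reproduce the bookkeeping, the short Python sketch below evaluates Eqs. (1)-(4) from raw measurements. All numerical inputs are hypothetical placeholders, not measured values from this work.

```python
# Minimal sketch applying Eqs. (1)-(4); all numerical inputs below are
# hypothetical placeholders, not measured values from this work.

def water_flux(V_L, area_m2, time_h):
    """Eq. (1): J = V / (S * t), in L m^-2 h^-1."""
    return V_L / (area_m2 * time_h)

def ultrafiltration_coefficient(J, pressure_bar):
    """Eq. (2): Kuc = J / P, in L m^-2 h^-1 bar^-1."""
    return J / pressure_bar

def removal_efficiency(C_permeate, C_feed):
    """Eq. (3): R = (1 - C/C0) * 100%."""
    return (1.0 - C_permeate / C_feed) * 100.0

def adsorption_capacity(C0, Ct, V_L, mass_g):
    """Eq. (4): q = (C0 - Ct) * V / m, in mg g^-1 for mg L^-1 inputs."""
    return (C0 - Ct) * V_L / mass_g

# Example with placeholder numbers (membrane area 3.14 cm^2 = 3.14e-4 m^2):
J = water_flux(V_L=0.0075, area_m2=3.14e-4, time_h=0.1)           # ~239 L m^-2 h^-1
Kuc = ultrafiltration_coefficient(J, pressure_bar=0.1)            # ~2390 L m^-2 h^-1 bar^-1
R = removal_efficiency(C_permeate=0.2, C_feed=20.0)               # 99 %
q = adsorption_capacity(C0=20.0, Ct=10.0, V_L=0.05, mass_g=0.05)  # 10 mg g^-1
print(J, Kuc, R, q)
```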
The dynamic photocatalytic degradation was repeated for five times to evaluate the stability of the catalyst membrane. Photocatalytic Activity of 3D-GCM for Hydrogen Evolution The hydrogen production experiment was carried out in a photocatalytic reactor at room temperature (300 K). First, argon was used to purge the reactor for 30 min. Then a 300 W xenon arc lamp with an intensity of 100 mW cm−2 was used for illumination. Here, 3D-GCM is suspended in 10 mL of aqueous solution, including 9 mL of DI water and 1 mL of methanol. Gas chromatography is used to analyze gas samples. In the stability test, the hydrogen evolution experiment of 3D-GCM was performed every 4 h and repeated 5 times. Structure and Morphology of 3D-GCMs MOFs possess highly ordered porous structures, diverse metal, or organic compositions, and adjustable crystal morphologies. In this work, the MOF-like 3D-GCMs were prepared directly through a one-step laser scribing process. During the preparation process, the metal ions were reduced to MNPs in 3D-GCMs with tailorable size under various laser power and pulse conditions. Meanwhile, MOF layer was designed as a precursor, with metal ions assembled from two types of ions (mass ratio of Cu and Pd, 10:1), which was fixed with a metal copper sheet mold by two glass slides. A nanosecond pulsed laser (1064 nm) with a pulse duration of 80 ns was used as a precise energy source to scan the MOF layer. This enabled scalable manufacturing of large-area filter membranes (see Supporting Information for the details of the preparation methods). After laser scribing, the metal material and GO nanosheets were converted into a continuous and mechanically strong membrane with an area of 3.14 cm2 and a thickness of 500 μm only limited by the Cu mold. Figure S1 illustrates the process for the preparation of metal-loaded 3D-GCM by laser scribing. Different from direct thermal treatment, MNPs wrapped with graphene can achieve the purpose of non-agglomeration and higher density distribution on the carrier. To confirm the crystal structure of the synthesized Cu and Pd MNPs, the XRD pattern is presented in Fig. 1b. The XRD pattern of 3D-GCM exhibits three characteristic peaks at 2θ = 44.0°, 51.0°, and 74.5° corresponding to the crystalline planes of (111), (200), and (220) of Cu MNPs, respectively [28]. For the XRD peaks of Pd, two characteristic peaks at 2θ = 40.0° and 46.5°, corresponding to the crystalline planes of (111) and (220) of Pd MNPs [29]. Hence, the MNPs samples are composed of Cu and Pd. The specific surface areas and pore structures of the samples were investigated by measuring the nitrogen adsorption isotherms (Fig. S2a). Due to the high metal content, the obtained 3D-GCM shows a surface area of 50 m2 g−1. Such surface area can be attributed to the minimal damage to the material during the ultrafast laser scribing process, which benefits the adsorption for the organic pollutants. The Cu/Pd MNPs with an average particle diameter of 12.5 nm are uniformly distributed in 3D-GCM (Fig. 1c). To further prove the reduction in metal ion, we carried out high-resolution XPS characterization. The XPS spectra of the C 1s, O 1s, Cu 2p and Pd 3d core levels for the 3D-GCM are shown in Fig. 1d. The XPS spectra shows a characteristic peak of C 1s at 284.3 eV and O 1s at 530.9 eV. Cu 2p3/2 and Cu 2p1/2 peaks of 3D-GCM are located at bonding energy of 933.0 and 952.8 eV, respectively (Fig. S2c), consistent with metallic Cu0 [30]. 
The Pd 3d5/2 and Pd 3d3/2 for Cu/Pd@3D-GCM also shows the corresponding XPS characteristic peaks, respectively [31]. The sp2 bonded carbon (C–C, 284.8 eV) at C 1s spectra of MNP-G confirm that the particles of MNPs coexist in 3D-GCM. Compared with the FTIR spectra of Cu-MOF in the reference [32], the decrease in intensity for O–H stretching frequency in Cu/Pd@3D-GCM (approximately 3000–3400 cm−1) can be attributed to the interactions between MNPs and OH groups, indicating a strong interaction between the graphene and MNPs (Fig. S2d). Furthermore, we resorted to the Raman spectra to demonstrate the quality and stability of the prepared graphene. The Raman spectra of 3D-GCM for few-layer graphene are illustrated in Fig. 1e, with peaks located at 2650, 1590, and 1325 cm−1, corresponding to 2D, G, and D modes, respectively [33]. The supreme structural quality of graphene has been confirmed, showing the non-negative influence of the attached MNPs. Additionally, the intensity ratio of the G and D peaks, characterizing the quantities of defects in graphene-based materials [33,34,35], is 1.05 under 4.5 W laser power, which confirms the formation of highly crystalline graphene with few layers. The SEM image of 3D-GCM in Fig. 1f clearly illustrates a coralline-like morphology with a porous structure, which is different from the product obtained by traditional pyrolysis [36]. A regular arrangement of dark spherical spots, which correspond to nanoparticles with ~ 12 nm diameter, is clearly observed along with the support in the TEM micrographs of 3D-GCM (Fig. 1g). Each nanoparticle is uniformly dispersed without significant aggregation. Furthermore, the lattice fringe corresponding to the Cu (111) and Pd (111) structure observed from the high magnification TEM (Fig. 1h, inset), is coherently extended for each nanoparticle, indicating the formation of an individual crystal structure. In addition, the layer distance of 0.34 nm is observed from the high-resolution TEM image (Fig. 1h, inset), which also confirms the formation of few-layered graphene and the efficient reduction in 3D graphene. Moreover, the MNPs produced by laser scribing are uniformly distributed on the whole framework and the layered pores are clearly visible and abundant. In addition, two kinds of morphology, including coralline-like products filled with ultrafine nanoparticles and dense nanobubble-like products, can be observed (Fig. 1f), because MNPs on the surface and inside 3D-GCM were subjected to different laser intensities. Since the laser energy suffers decay in the irradiation direction, the MNPs formed on the outer surface are prone to melting and evaporation from the graphene shell under sufficient irradiation to form graphene nanobubbles, while the inner MNPs are preserved and wrapped by the graphene shell layer. The MNPs also contribute to the formation of graphene during laser scribing due to their unique catalytic effect on graphene growth [37]. Figure S3 illustrates the EDS mapping results. The color intensities indicate the uniformity and density of the MNPs consisting of mostly MNPs and C elements. Membrane Dynamic Adsorptive and Photo-regenerative Performance To investigate the separation performance and antifouling performance of the 3D-GCM, a cross-flow system was employed to evaluate the permeability of Cu/Pd@3D-GCM (Fig. 2a). Figure 2b records the permeability of the pure water flux until it reaches a steady state. For our sample, it was basically unchanged after reaching a steady state (4050 L m−2 h−1 bar−1). 
Water flux relative to pressure applied on the Cu/Pd@3D-GCM is shown in the inset of Fig. 2b. The high flux (239 L m−2 h−1) under extremely low driving pressure (0.1 bar) indicates very low energy consumption in operation. The ultrafiltration coefficient (Kuc) has been used to reflect the water permeability of the membrane. According to Eq. (2), Kuc reaches 4050 L m−2 h−1 bar−1, showing the supreme permeability at such a low driving pressure. We also tested conventional polyethersulfone (PES, 120 μm thickness, 0.1 μm average pore size, Membrana) and polypropylene (PP, 100 μm thickness, 0.22 μm average pore size, Membrana) membranes for comparison (Fig. 2b). The Cu/Pd@3D-GCM provided the highest maximum value in pure water flux compared to PES ultrafiltration and PP microfiltration membranes, as clearly demonstrated in Fig. 2b. In a typical experiment, RhB (20 ppm) was added to the cross-flow system to perform the dynamic filtration at 0.1 bar. The 100% removal rate in the first 10 min indicated complete rejection of dyes by 3D-GCM. After adsorption for 3 h, the removal rate dropped to 50%, indicating the gradual saturation of adsorption site. These results confirm the excellent separation performance of the membrane (Fig. S4). Generally, for textile wastewater treatment, the accumulation of organic pollutants leads to serious membrane fouling. The prepared Cu/Pd@3D-GCM exhibited high catalytic activity due to the existence of metal-reducing nanoparticles, which equipped the membranes with self-cleaning and antifouling capabilities. To evaluate the self-cleaning performance of Cu/Pd@3D-GCM, a dynamic measurement was performed to combine filtration and catalytic degradation into one process in the presence of H2O2 (10 μL per 50 mL water) for photocatalysis under the UV light. During the cross-flow filtration process, RhB molecules were completely retained by the nanopores of Cu/Pd@3D-GCM. Then, the adsorbed RhB molecules can be efficiently degraded. During the process, reactive oxygen-containing radicals were obtained via the activation of H2O2, which was confirmed by electron paramagnetic resonance spectroscopy (EPR) (Fig. 2c). The hydroxyl radicals were captured by spin traps 5-tert-butoxy carbonyl 5-methyl-1-pyrroline N-oxide (BMPO) and 5, 5-dimethyl N-oxide pyrroline (DMPO). Because of the hyperfine interaction between the electron spin of the free radicals and the orbital spin of N atoms in spin traps, the EPR spectra of DMPO/·OH was a 4-line spectrum with an intensity ratio of 1:2:2:1 with a spacing of 14 G in the magnetic field. Moreover, the characteristic spectrum of BMPO/·O2− splits into 4 single lines with relative intensities of 1:1:1:1. The results indicate that ·OH and ·O2− were simultaneously generated during the degradation process. Therefore, after four times of regeneration cycles, the dynamic RhB removal of Cu/Pd@3D-GCM could almost recover the initial value and the adsorption capacity of Cu/Pd@3D-GCM increased slightly (Figs. 2d and S4). Figure S5 shows that Cu/Pd@3D-GCM had stable removal efficiency and H2 evolution during 540 min under five cycles of reaction. The mechanical property of Cu/Pd@3D-GCM has the highest tensile strength value of 36.97 kPa and the highest elongation of 2.30% (Fig. S6). Synergistic results showed that the self-cleaning capability, high stability, and reproducibility of Cu/Pd@3D-GCM could minimize membrane cleaning and operating costs, showing great promise in the practical application of wastewater purification. 
Fig. 2: a Schematic diagram of the cross-flow system; inset: schematic illustration of the catalytic membrane loaded with MNPs. b Water fluxes of Cu/Pd@3D-GCM (black), PP microfiltration (red), and PES ultrafiltration membrane (blue); inset: relationship between pure water flux and driving pressure for Cu/Pd@3D-GCM. c EPR spectra of DMPO/·OH adducts and BMPO/·O2− adducts over 3D-GCM. d Adsorption capacity for RhB of the Cu/Pd@3D-GCM in the presence of H2O2 after consecutive regeneration cycles. e–h Time-dependent flux, adsorption capacity, and removal efficiency of each pollutant by Cu/Pd@3D-GCM. i Performance comparison of 3D-GCM with other membranes reported in the literature.

We investigated Cu/Pd@3D-GCM for the removal of various organic pollutants of industrial significance, including organic dyes and active pharmaceutical ingredients (APIs) with different molecular sizes and functionalities (Fig. 2e-h). Three distinct organic dyes, namely RhB, methyl orange (MO), and methylene blue (MB), and four APIs, namely p-nitrophenol (PNP), amoxicillin (AMX), dichlorophenol (DCP), and bisphenol A (BPA), were selected as model aqueous contaminants for the filtration tests [38]. In the experiments, the removal rate for the three dyes reached 100% within 10 min, and the rejection values for the APIs were more than 90%. At the same time, the only slight decrease in flux indicated good antifouling performance. In addition, the adsorption kinetics of different pollutants on Cu/Pd@3D-GCM regenerated multiple times are shown in Tables S1 and S2. The pseudo-first-order and pseudo-second-order models were used to describe the adsorption process and mechanism [38]. The pseudo-first-order correlation coefficients of R2 > 0.99 indicate that external diffusion mass transfer dominated the rate of the adsorption process. At the same time, the pseudo-second-order kinetic curves also showed an excellent fit, indicating that chemical adsorption dominated the adsorption process [39]. According to these calculations, the adsorption of organic pollutants and APIs on 3D-GCM was mainly controlled by external diffusion and chemical adsorption. Based on these experiments, the 3D-GCM, with its satisfactory pollutant removal efficiencies, should also be feasible for removing other pollutants. The adsorption properties of typical graphene-based catalysts were compared (Table S3). The superior activity can be understood from the following aspects. The 3D-GCM, with abundant pores and nanoscale metal particles, maximizes the surface area of the active phase. Meanwhile, we also take advantage of dynamic catalysis, in which degradation of the contaminant mainly occurs inside the membrane, further increasing the accessible active sites. In addition, owing to the reduced metal particle size and high loading, the active components are highly dispersed and the utilization efficiency of the metal catalyst is greatly improved. The cost of 3D-GCM is also a key factor for its commercialization; the functional cost of each 3D-GCM with an area of 3.14 cm2 is roughly estimated to be less than $0.5. In addition, the operational energy consumption and removal efficiency of Cu/Pd@3D-GCM compare favorably with other membranes (Fig. 2i), indicating great potential for practical application of our membrane.
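The pseudo-first-order and pseudo-second-order fits referred to above are conventionally performed as in the sketch below. The kinetic data here are synthetic placeholders (the underlying measurements are in Tables S1 and S2, which are not reproduced), and scipy.optimize.curve_fit is used for the nonlinear regression.

```python
# Standard pseudo-first-order (PFO) and pseudo-second-order (PSO) adsorption
# models. The data below are synthetic placeholders, not the measurements
# behind Tables S1 and S2.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    # q_t = q_e * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    # q_t = (q_e^2 * k2 * t) / (1 + q_e * k2 * t)
    return (qe**2 * k2 * t) / (1.0 + qe * k2 * t)

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)   # contact time, min
q = np.array([3.1, 5.4, 8.2, 10.6, 11.5, 12.0, 12.2])     # uptake, mg g^-1 (synthetic)

p1, _ = curve_fit(pseudo_first_order, t, q, p0=[12.0, 0.05])
p2, _ = curve_fit(pseudo_second_order, t, q, p0=[13.0, 0.005])
for name, model, p in [("PFO", pseudo_first_order, p1), ("PSO", pseudo_second_order, p2)]:
    residuals = q - model(t, *p)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((q - q.mean())**2)
    print(f"{name}: qe = {p[0]:.2f} mg g^-1, k = {p[1]:.4f}, R^2 = {r2:.4f}")
```

Comparing the two R^2 values (and the fitted equilibrium capacities against the measured plateau) is the usual way to judge whether film diffusion or chemisorption controls the uptake, as discussed above.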
Photocatalytic H2 Evolution Performance of 3D-GCM in Separation of Wastewater To demonstrate the universality of the preparation method and to confirm the photocatalytic performance based on plasmon enhancement, two other membranes anchored with Cu and Cu/Ag (5:1) alloys were successfully prepared by laser scribing, respectively. Based on the above results, the concept of 3D-GCM photocatalytic microreactors has been proposed. Figure 3a schematically illustrates the photodegradation process and photocatalytic hydrogen evolution of the microreactor. Compared with the traditional photocatalysis setup, the suspension of the catalyst often needs to be mechanically stirred [40]. Such step ensures complete mixing of catalyst and reagent. By contrast, the fabricated 3D-GCM directly guides the reactants into its open porous structures through adsorption, in close proximity to the photocatalytic sites, which simplifies the whole reaction setup. The SEM images and XRD patterns are shown in Figs. S7-S9. To estimate the optical response of the as-synthesized 3D-GCM, the UV–Vis-NIR was carried out. As shown in Fig. 3b, The UV–Vis-NIR diffuse reflectance spectra show extremely strong absorption in the region of approximately 200–2500 nm, indicating that they have considerable light absorption capacity, which most likely stems from the close packing between MNPs and 3D-GCM. In addition, Cu/Ag@3D-GCM and Cu/Pd@3D-GCM exhibit more enhanced absorption in the 200–550 nm range, compared with the smoother absorption intensity of Cu@3D-GCM over the entire wavelength range. This is due to the large size of MNPs, which suggests the efficient use of solar energy. a Schematic diagram of the photocatalysis process in a 3D-GCM. b UV–Vis-NIR absorption spectra, c, e photocatalytic degradation behaviors and H2 evolution rates of three membranes (Cu@3D-GCM, Cu/Ag@3D-GCM, and Cu/Pd@3D-GCM). d The plot of ln(Ct/C0) of RhB versus time Similar to the photocatalytic degradation of Cu/Pd@3D-GCM, a UV light irradiation source was applied on three membranes loaded with different types of metal atoms in the presence of H2O2 after the dark period of RhB adsorption. Figure 3c represents the amount of photocatalytically removed pollutant versus the total amount fed into the reactor. Cu/Pd@3D-GCM showed the highest photocatalytic activity, and the removal rate of RhB reached 98.4% under 90 min irradiation. According to the pseudo-first-order kinetics results, the surface catalytic reaction rate constant (k) of the Cu/Pd@3D-GCM sample was calculated to be 0.0459 min−1, suggesting impressive catalytic performance (Fig. 3d). In addition, the existence of the porous structures allows light waves to penetrate deeply inside the photocatalyst and leads to more catalytic active sites, thereby enhancing the photocatalytic activity. The hydrogen evolution from the water under visible light irradiation was carried out by using ethanol as the sacrificial electron donor. Typically, Cu/Pd@3D-GCM gave an H2 production activity of 1.3474 mmol g−1 h−1, which is about 7.0-fold (0.1927 mmol g−1 h−1) higher than that of the Cu@3D-GCM (Fig. 3e). In Table S4, we compare the photodegradation and photocatalytic H2 evolution performance of typical graphene-based catalysts. Our approach demonstrates several apparent advantages, including high hydrogen evolution performance, large degradation rate, and easy membrane regeneration. 
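The rate constant quoted above follows from the standard pseudo-first-order treatment, $\ln(C_t/C_0) = -kt$ (Fig. 3d). A minimal sketch of extracting k from concentration-time data is shown below; the concentrations are synthetic placeholders, not the actual Fig. 3d data.

```python
# Pseudo-first-order photodegradation: ln(Ct/C0) = -k * t, so k is the
# negative slope of ln(Ct/C0) versus t. The concentrations below are
# synthetic placeholders, not the Fig. 3d data.
import numpy as np

t = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)   # irradiation time, min
Ct = np.array([10.0, 5.1, 2.6, 1.3, 0.66, 0.34, 0.17])    # RhB concentration, ppm (synthetic)

y = np.log(Ct / Ct[0])                  # ln(Ct/C0)
slope, intercept = np.polyfit(t, y, 1)  # linear fit of ln(Ct/C0) vs t
k = -slope
print(f"k = {k:.4f} min^-1")            # ~0.045 min^-1 for these synthetic data
```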
The high photocatalytic activity of Cu/Pd@3D-GCM is attributed to stronger electron affinity of Pd MNPs and the lower hydrogen overvoltage of Pd, making electron transfer from Cu to Pd possible to suppress charge reorganization [41]. Photocatalytic Mechanism of 3D-GCM in Cleaning Water and H2 Production It is well known that the generated strong electric fields can be non-radiatively damped through generating hot electron–hole pairs via Landau damping [42]. Furthermore, high-concentration hot electrons generated at the surface of the MNPs can interact with phonons to increase the lattice temperature to catalyze chemical reactions in a few hundred femtoseconds [43]. In order to reveal the underlying mechanism, a 3D finite element analysis method was conducted to calculate the intensity of electric near-field and temperature change of the catalytic membrane. In Fig. 4, the color scale bar represents the normalized electric field intensity and temperature (see Supporting Information for the details of the simulation methods). Figures 4a and b and S10 show the electric field distributions and sample temperature variations under 350 nm light irradiation. The enhancement of electric field intensity (|E|2) was clearly observed at the surface of MNPs. In addition, according to the light absorption or PL spectra, we also added the simulation results under other different wavelengths (375, 420, 500, and 750 nm), and shown as following (Fig. S11). The electric field strength distribution of the cross section and the enhancement of the electric field strength value along the central axis also showed fairly strong electric field localization due to its plasmon resonance. Simulation results also indicated an increase in the sample temperature for a short time. The excitation of LSPR greatly enhanced the absorption of light to generate more hot electrons and the electrons tend to be more energetic with the temperature rise. This plays a vital role in activating chemical bonds in chemical transformations. a, b Simulated electric field distributions and temperature variation of the sample under light irradiation of 350 nm by three-dimensional finite element analysis. c PL spectra of 3D-GCM. d The calculated free-energy diagram of the HER on C, Cu@C, and Cu/Pd@C; e the charge density redistributions of a 3D-GCM system (The blue, brown, and white atoms represent Cu, C, and Pd, respectively). Blue represents the loss of charge, and yellow represents the gain of electrons in charge density isosurface plot. f A schematic diagram of hot-electron generation. The graphene layer is used as a high-efficiency electron acceptor to degrade pollutants and release hydrogen Based on the above results, the generated electrons need to be rapidly transferred to graphene with high mobility to facilitate hot-electron charge separation from the surface of the nanoparticle, in order to increase the charge lifetime. In addition, the different hydrogen overvoltage between Cu-Pd or Cu-Ag would cause inter-metal electron transfer, further suppressing the charge recombination [41, 42]. To further verify the conjecture, the photoluminescence (PL) emission spectra of the three photocatalytic membranes were measured (the excitation wavelength of 532 nm). As shown in Fig. 4c, Cu/Pd@3D-GCM shows a significantly decreased PL intensity compared with others, demonstrating that the Cu/Pd@3D-GCM sample inhibited the photogenerated carrier recombination. 
These results verify that the charge transfer in Cu/Pd@3D-GCM can effectively accelerate the separation and transport of photoexcited electrons and holes, thereby inhibiting their recombination. Hence, the photocatalytic performance is improved, which is consistent with the results of the H2 generation experiment. As discussed above, 3D-GCM is highly efficient toward the hydrogen evolution reaction (HER). Density functional theory calculations were conducted to examine the synergistic performance of 3D-GCM. The adsorption Gibbs free energies (∆G) of the adsorbed intermediate H* and the charge density distributions during the HER process were systematically estimated for several models, including bare graphene (C) and MNPs encapsulated by graphene (Cu@C, Cu/Pd@C); here * represents an adsorption site (see Supporting Information for the details of the methods). Figure 4d illustrates the ∆G values for the HER on the three samples. Bare graphene showed weak H* adsorption and low HER activity. However, upon encapsulating Cu within graphene, the ∆G value was effectively increased to yield better HER performance. Furthermore, according to the ∆G variation, appropriate Pd doping also effectively improves the HER activity. Thus, Cu/Pd@C shows the best HER activity among the three samples according to the Gibbs free energy calculations. To further clarify the enhancement of catalytic performance after MNP doping, Fig. 4e shows the charge redistributions in the Cu-C and Cu/Pd-C systems, respectively. Specifically, the Cu-C contact leads to charge transfer between Cu and C. Furthermore, appropriate Pd doping rebalances the charge density of the system and further improves the charge transfer efficiency. Therefore, the combination of Cu/Pd and C achieves optimal catalytic performance. The results show that the active sites for both pollutant degradation and H2 generation are mainly located on the metal nanoparticles. To compare the reactions with an intact active MNP surface and with a substantial loss of active MNP surface, we performed a control experiment (Fig. S12). According to the results, the photodegradation efficiency and H2 evolution of 3D-Graphene are much lower than those of Cu/Pd@3D-GCM. The weak catalytic performance of 3D-Graphene is attributed to the presence of a small amount of graphene oxide caused by incomplete reduction during laser scribing. On the basis of previous work and the results above, we propose the following mechanism of hot-electron generation, as shown in Fig. 4f. Upon light illumination, the excited LSPR of the MNPs yields energetic charge carriers (hot electrons and holes) during the LSPR-decay process. Owing to the high conductivity of graphene, electrons can be efficiently transferred from the MNPs to graphene. The electrons on graphene can reduce surface H2O2 and generate large amounts of ·OH and ·O2− radicals to degrade organic pollutant molecules. During hydrogen production, the electrons transferred to graphene exhibit strong reducing power, and H+ can be easily reduced to H2, as confirmed by the H2 generation experiment. Meanwhile, the calculation results confirmed that Pd MNPs can significantly enhance the photocatalytic performance, which is attributed to the strong electron affinity of the Pd MNPs.
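For context, the hydrogen adsorption free energy used in this kind of HER screening is conventionally evaluated with the computational hydrogen electrode approach; the expression below is that standard definition, and the specific zero-point-energy and entropy corrections adopted by the authors are not specified in the text:

$$ \Delta G_{\mathrm{H}^*} = \Delta E_{\mathrm{H}^*} + \Delta E_{\mathrm{ZPE}} - T\Delta S_{\mathrm{H}}, $$

where $\Delta E_{\mathrm{H}^*}$ is the DFT adsorption energy of H relative to $\tfrac{1}{2}\mathrm{H}_2$, $\Delta E_{\mathrm{ZPE}}$ is the zero-point energy change, and $\Delta S_{\mathrm{H}}$ is the entropy change upon adsorption; the correction term $\Delta E_{\mathrm{ZPE}} - T\Delta S_{\mathrm{H}}$ is commonly approximated as about +0.24 eV near room temperature, and an ideal HER catalyst has $\Delta G_{\mathrm{H}^*}\approx 0$.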
In short, H2 evolution over Pd is easier than that over Cu and Ag, resulting in the enhancement of photoactivity of the Cu/Pd@3D-GCM. In summary, we developed a green, efficient, and universal strategy to fabricate a novel 3D catalytic membrane microreactor. The 3D catalytic membrane microreactor embedded with ultrafine MNPs was engineered from a MOF precursor by laser scribing. Compared with traditional GO microreactors and solvent-based catalysts, our catalytic membrane exhibited superior removal efficiency at ultra-low energy consumption and favorable robustness for aqueous pollutant degradation. The catalytic membrane also showed excellent performance for hydrogen evolution. The superior performance in pollutant degradation and hydrogen evolution of the catalytic membrane was contributed to the synergistic mechanism of Cu, Ag, Pd, and graphene layers that was explored systematically. This work enriches the bimetallic/graphene catalyst system and provides new insights into the catalytic mechanism. Also, the preparation route of large-area 3D-GCM composites provide an efficient and low-cost strategy for addressing environmental concerns and supplying renewable energy. D. Zheng, J. Li, S. Ci, P. Cai, Y. Ding et al., Three-birds-with-one-stone electrolysis for energy-efficiency production of gluconate and hydrogen. Appl. Catal. B 277, 119178 (2020). https://doi.org/10.1016/j.apcatb.2020.119178 I. Ihsanullah, Potential of MXenes in water desalination: current status and perspectives. Nano-Micro Lett. 12, 72 (2020). https://doi.org/10.1007/s40820-020-0411-9 B. Wang, Y. Han, X. Wang, N. Bahlawane, H. Pan et al., Prussian blue analogs for rechargeable batteries. iScience 3, 110–133 (2018). https://doi.org/10.1016/j.isci.2018.04.008 S. Shen, J. Fu, J. Yi, L. Ma, F. Sheng et al., High-efficiency wastewater purification system based on coupled photoelectric-catalytic action provided by triboelectric nanogenerator. Nano-Micro Lett. 13, 194 (2021). https://doi.org/10.1007/s40820-021-00695-3 H. Wang, X. Mi, Y. Li, S. Zhan, 3D graphene-based macrostructures for water treatment. Adv. Mater. 32(3), 1806843 (2019). https://doi.org/10.1002/adma.201806843 Y. Sun, F. Yu, C. Li, X. Dai, J. Ma, Nano-/micro-confined water in graphene hydrogel as superadsorbents for water purification. Nano-Micro Lett. 12, 2 (2019). https://doi.org/10.1007/s40820-019-0336-3 T. Xu, M.A. Shehzad, X. Wang, B. Wu, L. Ge et al., Engineering leaf-like UiO-66-SO3H membranes for selective transport of cations. Nano-Micro Lett. 12, 51 (2020). https://doi.org/10.1007/s40820-020-0386-6 Y. Yang, X. Yang, L. Liang, Y. Gao, H. Cheng et al., Large-area graphene-nanomesh/carbon-nanotube hybrid membranes for ionic and molecular nanofiltration. Science 364, 1057 (2019). https://doi.org/10.1126/science.aau5321 K.H. Thebo, X. Qian, Q. Zhang, L. Chen, H.M. Cheng et al., Highly stable graphene-oxide-based membranes with superior permeability. Nat. Commun. 9, 1486 (2018). https://doi.org/10.1038/s41467-018-03919-0 Z. Zhang, S. Li, B. Mi, J. Wang, J. Ding, Surface slip on rotating graphene membrane enables the temporal selectivity that breaks the permeability-selectivity trade-off. Sci. Adv. 6(34), eaba9471 (2020). https://doi.org/10.1126/sciadv.aba9471 C.N. Yeh, K. Raidongia, J. Shao, Q.H. Yang, J. Huang, On the origin of the stability of graphene oxide membranes in water. Nat. Chem. 7, 166 (2014). https://doi.org/10.1038/nchem.2145 Y. Zhong, S. Mahmud, Z. He, Y. Yang, Z. 
Zhang et al., Graphene oxide modified membrane for highly efficient wastewater treatment by dynamic combination of nanofiltration and catalysis. J. Hazard. Mater. 397, 122774 (2020). https://doi.org/10.1016/j.jhazmat.2020.122774 L. Wan, M. Long, D. Zhou, L. Zhang, W. Cai, Preparation and characterization of freestanding hierarchical porous TiO2 monolith modified with graphene oxide. Nano-Micro Lett. 4, 90 (2012). https://doi.org/10.1007/bf03353698 M. Zhang, Y. Mao, G. Liu, G. Liu, Y. Fan et al., Molecular bridges stabilize graphene oxide membranes in water. Angew. Chem. Int. Ed. 59(4), 1689–1695 (2020). https://doi.org/10.1002/anie.201913010 L. Chen, G. Shi, J. Shen, B. Peng, B. Zhang et al., Ion sieving in graphene oxide membranes via cationic control of interlayer spacing. Nature 550, 380 (2017). https://doi.org/10.1038/nature24044 A.B. Alayande, H.D. Park, J.S. Vrouwenvelder, I.S. Kim, Implications of chemical reduction using hydriodic acid on the antimicrobial properties of graphene oxide and reduced graphene oxide membranes. Small 15(28), 1901023 (2019). https://doi.org/10.1002/smll.201901023 Y. Cao, Z. Xiong, F. Xia, G.V. Franks, L. Zu et al., New structural insights into densely assembled reduced graphene oxide membranes. Adv. Funct. Mater. (2022). https://doi.org/10.1002/adfm.202201535 M. Zhang, K. Guan, Y. Ji, G. Liu, W. Jin et al., Controllable ion transport by surface-charged graphene oxide membrane. Nat. Commun. 10, 1253 (2019). https://doi.org/10.1038/s41467-019-09286-8 J. Abraham, K.S. Vasu, C.D. Williams, K. Gopinadhan, Y. Su et al., Tunable sieving of ions using graphene oxide membranes. Nat. Nanotechnol. 12, 546 (2017). https://doi.org/10.1038/nnano.2017.21 C. Liu, L. Liu, X. Tian, Y. Wang, R. Li et al., Coupling metal–organic frameworks and g-CN to derive Fe@N-doped graphene-like carbon for peroxymonosulfate activation: upgrading framework stability and performance. Appl. Catal. B 255, 117763 (2019). https://doi.org/10.1016/j.apcatb.2019.117763 S. Ramakrishnan, J. Balamurugan, M. Vinothkannan, A.R. Kim, S. Sengodan et al., Nitrogen-doped graphene encapsulated FeCoMoS nanoparticles as advanced trifunctional catalyst for water splitting devices and zinc–air batteries. Appl. Catal. B 279, 119381 (2020). https://doi.org/10.1016/j.apcatb.2020.119381 H.J. Huang, J.C.S. Wu, H.P. Chiang, Y.F.C. Chau, Y.S. Lin et al., Review of experimental setups for plasmonic photocatalytic reactions. Catalysts 10(1), 46 (2019). https://doi.org/10.3390/catal10010046 W. Zhang, W. Li, Y. Li, S. Peng, Z. Xu, One-step synthesis of nickel oxide/nickel carbide/graphene composite for efficient dye-sensitized photocatalytic H2 evolution. Catal. Today 335, 326 (2019). https://doi.org/10.1016/j.cattod.2018.12.016 S. Linic, U. Aslam, C. Boerigter, M. Morabito, Photochemical transformations on plasmonic metal nanoparticles. Nat. Mater. 14, 567–576 (2015). https://doi.org/10.1038/nmat4281 H. Luo, Z. Zeng, G. Zeng, C. Zhang, R. Xiao et al., Recent progress on metal-organic frameworks based- and derived-photocatalysts for water splitting. Chem. Eng. J. 383, 123196 (2020). https://doi.org/10.1016/j.cej.2019.123196 Y.C. Qiao, Y.H. Wei, Y. Pang, Y.X. Li, D.Y. Wang et al., Graphene devices based on laser scribing technology. Jpn. J. Appl. Phys. 57, 04fa01 (2018). https://doi.org/10.7567/jjap.57.04fa01 R. You, Y.Q. Liu, Y.L. Hao, D.D. Han, Y.L. Zhang et al., Laser fabrication of graphene-based flexible electronics. Adv. Mater. 32(15), e1901981 (2019). https://doi.org/10.1002/adma.201901981 L. Fan, F. Zhao, Z. 
This project was supported by the National Scientific Foundation of China (No. 61974050, 61704061, 51805184, 61974049), the Key Laboratory of Non-ferrous Metals and New Materials Processing Technology of Ministry of Education/Guangxi Key Laboratory of Optoelectronic Materials and Devices Open Fund (20KF-9), the Natural Science Foundation of Hunan Province of China (No. 2018TP2003), the Excellent Youth Project of the Hunan Provincial Department of Education (No. 18B111), and the State Key Laboratory of Crop Germplasm Innovation and Resource Utilization (No. 17KFXN02). The authors thank the Analytical and Testing Center at Huazhong University of Science and Technology for technical support. Open access funding provided by Shanghai Jiao Tong University. Xinyu Huang, Liheng Li and Shuaifei Zhao contributed equally to this work.
Author affiliations:
1. School of Optical and Electronic Information and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China (Xinyu Huang, Liheng Li, Lei Tong, Zheng Li, Zhuiri Peng, Runfeng Lin, Kan-Hao Xue, Lei Ye)
2. Hubei Yangtze Memory Laboratories, Wuhan 430205, People's Republic of China (Xinyu Huang, Lei Ye)
3. Institute for Frontier Materials, Deakin University, Geelong, VIC 3216, Australia (Shuaifei Zhao, Zhu Xiong)
4. Key Laboratory of New Processing Technology for Nonferrous Metal and Materials (Ministry of Education), Guangxi Key Laboratory of Optical and Electronic Materials and Devices, College of Materials Science and Engineering, Guilin University of Technology, Guilin 541004, People's Republic of China (Li Zhou)
5. College of Chemistry and Materials Science, Hunan Agricultural University, Hunan 410128, People's Republic of China (Chang Peng)
6. School of Material Science and Engineering, Hunan University of Science and Technology, Xiangtan, Hunan Province, People's Republic of China (Lijuan Chen)
7. School of Industrial Engineering and Birck Nanotechnology Centre, Purdue University, West Lafayette, IN 47907, USA (Gary J. Cheng)
8. Institute of Environmental Research at Greater Bay, Key Laboratory for Water Quality and Conservation of the Pearl River Delta, Ministry of Education, Guangzhou University, Guangzhou 510006, Guangdong, People's Republic of China (Zhu Xiong)
Correspondence to Gary J. Cheng, Zhu Xiong or Lei Ye.
Electronic supplementary material: Supplementary file 1 (PDF 1953 kb).
Huang, X., Li, L., Zhao, S. et al.: MOF-Like 3D Graphene-Based Catalytic Membrane Fabricated by One-Step Laser Scribing for Robust Water Purification and Green Energy Production. Nano-Micro Lett. 14, 174 (2022). https://doi.org/10.1007/s40820-022-00923-4
Keywords: 3D graphene, laser scribing, catalytic membrane
Numerische Mathematik, August 2016, Volume 133, Issue 4, pp 707–742
Maximum-norm a posteriori error estimates for singularly perturbed elliptic reaction-diffusion problems
Alan Demlow, Natalia Kopteva
Residual-type a posteriori error estimates in the maximum norm are given for singularly perturbed semilinear reaction-diffusion equations posed in polyhedral domains. Standard finite element approximations are considered. The error constants are independent of the diameters of mesh elements and the small perturbation parameter. In our analysis, we employ sharp bounds on the Green's function of the linearized differential operator. Numerical results are presented that support our theoretical findings.
Mathematics Subject Classification: 65N15, 65N30
The first author was partially supported by National Science Foundation Grants DMS-1016094 and DMS-1318652. The second author was partially supported by DAAD Grant A/13/05482 and Science Foundation Ireland Grant SFI/12/IA/1683.
Appendix A: Sharpness of log factors
In this section we prove that there are cases in which the logarithmic factor in the a posteriori upper bound (1.3) is necessary. Using an idea of Durán [14], we first prove a priori upper bounds and a posteriori upper and lower bounds for \(u-u_h\) in a modified BMO norm in the case that \(\Omega \) is a convex polygonal domain. These estimates are essentially the same as our \(L_\infty \) bounds, but with no logarithmic factors present. The proof is completed by employing the counterexample of Haverkamp [20] showing that a similar logarithmic factor is necessary in \(L_\infty \) a priori upper bounds for piecewise linear finite element methods. Note that our counterexample is only valid for piecewise linear elements. Logarithmic factors are not present in standard a priori \(L_\infty \) bounds for elements of degree two or higher on quasi-uniform grids, and it remains unclear whether there are cases for which they are necessary in the corresponding \(L_\infty \) a posteriori bounds. In addition, both the result of Durán [14] and ours below only consider Poisson's problem and not the broader class of problems described in (1.1).
A.1: Adapted Hardy and BMO spaces
We begin by describing operator-adapted BMO and Hardy spaces, following [13]. Let \(-\Delta \) denote the Dirichlet Laplacian on \(\Omega \), i.e., the Laplacian with domain restricted to functions which vanish on \(\partial \Omega \). Let $$\begin{aligned} \Vert v \Vert _{\mathrm{bmo}_\Delta (\Omega )}= & {} \sup _{B(x,r): x \in \Omega , 0<r<1} \left[ \frac{1}{|B \cap \Omega |} \int _{B \cap \Omega } | (I-(I-r^2 \Delta )^{-1})v(x)|^2 \, \mathrm{d}x \right] ^{1/2} \nonumber \\&+ \sup _{B(x,r):x \in \Omega , r \ge 1} \left[ \frac{1}{|B \cap \Omega |} \int _{B \cap \Omega } | v(x)|^2 \, \mathrm{d}x \right] ^{1/2}. \end{aligned}$$ (5.1) The space \(\mathrm{bmo}_\Delta (\Omega )\) then consists of functions \(v \in L_2(\Omega )\) for which \(\Vert v\Vert _{\mathrm{bmo}_\Delta (\Omega )} <\infty \). Note that the resolvent \((I-r^2 \Delta )^{-1}\) replaces the usual average over B in the definition of BMO.
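To make the resolvent-based averaging in (5.1) more concrete, the following minimal numerical sketch (ours, not part of the paper) approximates the first supremum in one space dimension, replacing the Dirichlet Laplacian by a standard finite-difference approximation on \(\Omega =(0,1)\) and sampling a few ball centres and radii; the function names, the test function v and the sampling choices are illustrative assumptions only.

```python
# Minimal sketch (not from the paper): discrete analogue of the first term of (5.1)
# in 1D, using a finite-difference Dirichlet Laplacian on Omega = (0, 1).
import numpy as np

def dirichlet_laplacian(n):
    """Finite-difference Dirichlet Laplacian on (0,1) with n interior nodes."""
    h = 1.0 / (n + 1)
    main = -2.0 * np.ones(n)
    off = np.ones(n - 1)
    return (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

def local_resolvent_oscillation(v, x, x0, r, lap):
    """Approximate [ (1/|B cap Omega|) * int_{B cap Omega} |(I - (I - r^2 Lap)^{-1}) v|^2 ]^{1/2}."""
    n = len(v)
    A = np.eye(n) - r**2 * lap           # I - r^2*Laplacian (positive definite)
    w = np.linalg.solve(A, v)            # (I - r^2*Laplacian)^{-1} v
    mask = np.abs(x - x0) < r            # grid points in B(x0, r) intersected with Omega
    diff = v[mask] - w[mask]
    return np.sqrt(np.mean(diff**2))     # local L2 average of v minus its resolvent average

n = 400
x = np.linspace(0, 1, n + 2)[1:-1]       # interior nodes
lap = dirichlet_laplacian(n)
v = np.sign(x - 0.5)                     # a bounded function with a jump

# Crude approximation of the first supremum in (5.1): sample centres and radii.
sup_term = max(
    local_resolvent_oscillation(v, x, x0, r, lap)
    for x0 in np.linspace(0.05, 0.95, 19)
    for r in (0.01, 0.05, 0.1, 0.5)
)
print(f"sampled sup of the resolvent oscillation: {sup_term:.3f}")
```

For a bounded v, both terms in (5.1) are controlled by a multiple of \(\Vert v\Vert _{\infty }\) (the resolvent of the Dirichlet Laplacian does not increase the maximum norm), which is one way to see how maximum-norm and \(\mathrm{bmo}_\Delta \) quantities compare in what follows.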
We also define an operator-adapted atomic Hardy space \(h_\Delta ^1\) whose dual is \(\mathrm{bmo}_\Delta \). A bounded, measurable function a supported in \(\Omega \) is a local atom if there is a ball B centered in \(\Omega \) with radius \(r<2 \mathrm{diam}(\Omega )\) such that \(\Vert a\Vert _{2; \, \mathbb {R}^n} \le |B \cap \Omega |^{-1/2}\) and either \(r>1\), or \(r \le 1\) and there exists b in the domain of the Dirichlet Laplacian such that \(a = -\Delta b\), \(\mathrm{supp}(b) \cup \mathrm{supp} (-\Delta b) \subset B \cap \overline{\Omega }\), and $$\begin{aligned} \Vert (-r^2 \Delta )^k b\Vert _{2; \, \mathbb {R}^n} \le r^2 |B \cap \Omega |^{-1/2}, ~~k=0,1. \end{aligned}$$ (5.2) An atomic representation of w is a series \(w = \sum _{j} \lambda _j a_j\), where \(\{\lambda _j\}_{j=0}^\infty \in \ell ^1\), each \(a_j\) is a local atom, and the series converges in \(L_2(\Omega )\). We then define the norm $$\begin{aligned} \Vert w\Vert _{h_\Delta ^1(\Omega )} = \inf \left\{ \sum _{j=0}^\infty |\lambda _j| : w = \sum _{j=0}^\infty \lambda _j a_j \hbox { is an atomic representation of } w \right\} . \end{aligned}$$ (5.3) The Hardy space \(h_\Delta ^1(\Omega )\) is the completion in \((\mathrm{bmo}_\Delta (\Omega ))^*\) of the set of functions having an atomic representation with respect to the metric induced by the above norm. In addition, \(\mathrm{bmo}_\Delta \) is the dual space of \(h_\Delta ^1\) in the sense that if \(w= \sum _{j=0}^\infty \lambda _j a_j \in h_\Delta ^1(\Omega )\), then $$\begin{aligned} w \mapsto v(w) := \lim _{k \rightarrow \infty } \sum _{j=0}^k \lambda _j \int _\Omega a_j v \, \mathrm{d}x \end{aligned}$$ (5.4) is a well-defined and continuous linear functional for each \(v \in \mathrm{bmo}_\Delta (\Omega )\) whose norm is equivalent to \(\Vert v\Vert _{\mathrm{bmo}_\Delta (\Omega )}\). In addition, each continuous linear functional on \(h_\Delta ^1(\Omega )\) has this form (cf. Theorem 3.11 of [13]).
We finally list an essential regularity result; cf. Theorem 4.1 of [13].
Lemma 9 Let \(\Omega \) be a bounded, simply connected, semiconvex domain in \(\mathbb {R}^n\), and let G be the Dirichlet Green's function for \(-\Delta \). Let \(\mathbb {G}_\Delta \) be the corresponding Green operator given by \(\mathbb {G}_\Delta (v)(x) = \int _\Omega G(x,y) v(y) \, \mathrm{d}y\). Then the operators \(\frac{\partial ^2 \mathbb {G}_\Delta }{\partial x_i \partial x_j}\) are bounded from \(h_\Delta ^1(\Omega )\) to \(L_1(\Omega )\). In other terms, given \(u \in H_0^1(\Omega )\) with \(-\Delta u \in h_\Delta ^1(\Omega )\), we have \(u \in W_1^2(\Omega )\) with $$\begin{aligned} |u|_{2, 1 ; \, \Omega } \lesssim \Vert \Delta u \Vert _{h_\Delta ^1(\Omega )}. \end{aligned}$$ (5.5)
We remark that the above regularity result does not in general hold on nonconvex Lipschitz (or even \(C^1\)) domains; cf. Theorem 1.2.b of [21]. It is not clear whether (5.5) holds on nonconvex polyhedral domains, but a different approach to the analysis than that taken in [13] would in any case be needed to establish this. Such a result would allow us to extend the a posteriori estimates in \(\mathrm{bmo}_{\Delta }\) that we obtain below for convex polyhedral domains to general polyhedral domains, which would be desirable since the corresponding \(L_\infty \) estimates also hold on general polyhedral domains. However, for our immediate purpose of providing a counterexample it suffices to consider convex domains.
A.2: A priori and a posteriori estimates in \(\mathrm{bmo}_{\Delta }\)
In [14], Durán proved that given a smooth convex domain \(\Omega \subset \mathbb {R}^2\) and a piecewise linear finite element solution \(u_h\) on a quasi-uniform mesh of diameter h, \(\Vert u-u_h\Vert _{\mathrm{BMO}(\Omega )} \lesssim h^2 |u|_{W_\infty ^2(\Omega )}\). Here BMO\((\Omega )\) is the classical BMO space; cf. [14] for a definition. We prove the same on convex polyhedral domains in arbitrary space dimension, but with BMO replaced by its operator-adapted counterpart. For notational simplicity we also consider only piecewise linear finite element spaces below, but our a priori and a posteriori bounds easily generalize to arbitrary polynomial degree.
Lemma 10 Assume that \(\Omega \subset \mathbb {R}^n\) is convex and polyhedral, and \(u \in W_\infty ^2(\Omega )\). Let also \(u_h\) be the piecewise linear finite element approximation to u with respect to a quasi-uniform simplicial mesh of diameter h. Then $$\begin{aligned} \Vert u-u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )} \lesssim h^2 |u|_{2, \infty ; \, \Omega }. \end{aligned}$$ (5.6)
Proof Let \(\sum _{j=0}^k \lambda _j a_j = z \in h_\Delta ^1(\Omega )\) with k arbitrary but finite. Such functions are dense in \(h_\Delta ^1\), so to prove our claim it suffices by the duality of \(\mathrm{bmo}_\Delta \) and \(h_\Delta ^1\) to show that \(\int _\Omega (u-u_h) z \, \mathrm{d}x \lesssim h^2 |u|_{2, \infty ; \, \Omega } \Vert z\Vert _{h_\Delta ^1(\Omega )}\). Let \(-\Delta v=z\) with \(v=0\) on \(\partial \Omega \). Letting \(I_h v\) be a Scott-Zhang interpolant of v, we have $$\begin{aligned} (u-u_h, z)&= (u-u_h, -\Delta v) = (\nabla (u-u_h), \nabla (v-I_h v)) \nonumber \\&\lesssim h \Vert u-u_h\Vert _{W_\infty ^1(\Omega )} |v|_{2, 1; \, \Omega } \lesssim h\Vert u-u_h\Vert _{1, \infty ; \, \Omega } \Vert z\Vert _{h_\Delta ^1(\Omega )}. \end{aligned}$$ (5.7) The proof is completed by recalling the \(W_\infty ^1\) error estimate \(\Vert u-u_h\Vert _{1, \infty ; \, \Omega } \lesssim h |u|_{2, \infty ; \, \Omega }\); cf. [12, 19, 36]. \(\square \)
We next prove a posteriori upper and lower bounds for \(\Vert u-u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )}\). Note that the a posteriori lower bound for the error is critical in establishing that the logarithmic factor in (1.3) is necessary.
Lemma 11 Assume that \(\Omega \subset \mathbb {R}^n\) is convex and polyhedral. Let also \(u_h\) be the piecewise linear finite element approximation to u with respect to a shape-regular simplicial mesh, where \(u \in H_0^1(\Omega )\) with \(-\Delta u = f \in L_\infty (\Omega )\). Then $$\begin{aligned}&\Vert u -u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )} + \max _{T \in \mathcal {T}_h} h_T^2 \Vert f-f_T\Vert _{\infty ; \, T} \nonumber \\&\quad \simeq \max _{T \in \mathcal {T}_h} [ h_T^2 \Vert f + \Delta u_h\Vert _{\infty ; \, T} + h_T \Vert \llbracket \nabla u_h \rrbracket \Vert _{\infty ; \, \partial T}]. \end{aligned}$$ (5.8) Here \(f_T = \frac{1}{|T|} \int _T f \,\mathrm{d}x\).
Proof The upper bound for \(\Vert u-u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )}\) follows by first noting that \(h_T^2 \Vert f-f_T \Vert _{\infty ; \, T} \le h_T^2 \Vert f+ \Delta u_h\Vert _{\infty ; \, T}\) and then employing a duality argument precisely as in the preceding lemma; one must only substitute standard residual error estimation techniques for the a priori error analysis techniques above. In order to prove the lower bound we employ a discrete \(\delta \)-function; cf. (A.5) of [38].
Given \(x_0 \in T \in \mathcal {T}_h\), let \(\delta _{x_0}\) be a smooth, fixed function compactly supported in T such that \((v_h, \delta _{x_0})=v_h(x_0)\) for all \(v_h \in S_h\). \(\delta _{x_0}\) may be constructed to satisfy \(\Vert \delta _{x_0}\Vert _{m, p ; \, T} \lesssim h_T^{-m-n(1-\frac{1}{p})}\) with constant independent of \(x_0\). A short computation shows that \(-c h_T^2 \Delta \delta _{x_0}\) is an atom satisfying (5.2) with the required value of c and the constant in \(r \simeq h_T\) independent of essential quantities. Thus $$\begin{aligned} h_T^2 \Vert f + \Delta u_h\Vert _{\infty ; \, T}&\le h_T^2 \Vert f-f_T\Vert _{\infty ; \, T} + h_T^2 (f_T + \Delta u_h, \delta _{x_0}) \nonumber \\&\lesssim h_T^2 \Vert f-f_T\Vert _{\infty ; \, T} - h_T^2 (\Delta (u-u_h), \delta _{x_0}) \nonumber \\&= h_T^2 \Vert f-f_T\Vert _{\infty ; \,T} - h_T^2 (u-u_h, \Delta \delta _{x_0}) \nonumber \\&\lesssim h_T^2 \Vert f-f_T\Vert _{\infty ; \, T} + \Vert u-u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )}. \end{aligned}$$ (5.9)
To bound \(h_T \Vert \llbracket \nabla u_h \rrbracket \Vert _{\infty ; \, e}\) on a face e of the triangulation, let \(e = T_1 \cap T_2\) with \(T_1,T_2 \in \mathcal {T}_h\). Modest modification of the arguments in (A.5) of [38] yields that for \(x_0 \in e\) and fixed polynomial degree \(r-1\), there is a function \(\widetilde{\delta }_{x_0}\) compactly supported in \(T_1 \cup T_2\) such that \(v_h (x_0) = \int _e \widetilde{\delta }_{x_0} v_h \, \mathrm{d}s\) for \(v_h \in \mathbb {P}_{r-1}\), and in addition, \(\Vert \widetilde{\delta }_{x_0} \Vert _{m,p; \,T_1 \cup T_2} \lesssim h_T^{-m+1-n (1-\frac{1}{p})}\). Similar to above, \(-c h_T \Delta \widetilde{\delta }_{x_0}\) is an atom with \(r \simeq h_T\). Thus $$\begin{aligned} h_T \Vert \llbracket \nabla u_h \rrbracket \Vert _{\infty ; \, e}&= h_T \int _e \llbracket \nabla u_h \rrbracket \widetilde{\delta }_{x_0} \, \mathrm{d}s \nonumber \\&= h_T \int _{T_1 \cup T_2} \nabla (u-u_h) \nabla \widetilde{\delta }_{x_0} \, \mathrm{d}x - h_T \int _{T_1 \cup T_2} ( \Delta u_h+f) \widetilde{\delta }_{x_0} \, \mathrm{d}x \nonumber \\&\lesssim \int _{T_1 \cup T_2} (u-u_h) (-h_T \Delta \widetilde{\delta }_{x_0}) \, \mathrm{d}x + h_T \Vert \Delta u_h +f\Vert _{\infty ; \, T_1 \cup T_2} \Vert \widetilde{\delta }_{x_0} \Vert _{1; \, T_1 \cup T_2} \nonumber \\&\lesssim \Vert u-u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )} + h_T^2 \Vert f + \Delta u_h \Vert _{\infty ; \, T_1 \cup T_2}. \end{aligned}$$ (5.10)
Combining (5.9) and (5.10) completes the proof. \(\square \)
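The right-hand side of (5.8) is the standard maximum-norm residual estimator. As a purely illustrative aside (ours, not part of the paper), the following sketch computes this quantity for piecewise linear elements in one space dimension, where \(\Delta u_h\) vanishes on each element and the jump \(\llbracket \nabla u_h \rrbracket \) reduces to the jump of \(u_h'\) at the interior nodes; the solver, the test problem and all names are illustrative assumptions.

```python
# 1D illustration (our own sketch) of the residual estimator on the right of (5.8)
# for P1 finite elements: max_T [ h_T^2 * max|f + u_h''| + h_T * max |jump of u_h'| ],
# with u_h'' = 0 on each element and jumps living at interior nodes.
import numpy as np

def solve_p1_poisson(nodes, f):
    """P1 FEM for -u'' = f on (0,1), u(0)=u(1)=0, on the grid `nodes`."""
    n = len(nodes)
    h = np.diff(nodes)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n - 1):                          # assemble element by element
        A[k:k+2, k:k+2] += np.array([[1, -1], [-1, 1]]) / h[k]
        mid = 0.5 * (nodes[k] + nodes[k+1])
        b[k:k+2] += 0.5 * h[k] * f(mid)             # midpoint quadrature for the load
    A[0, :] = A[-1, :] = 0; A[0, 0] = A[-1, -1] = 1; b[0] = b[-1] = 0
    return np.linalg.solve(A, b)

def residual_estimator(nodes, u, f, samples=20):
    """Max over elements of h^2*max|f| (element residual) plus h*|jump of u_h'| (faces)."""
    h = np.diff(nodes)
    du = np.diff(u) / h                             # u_h' on each element (constant)
    eta = 0.0
    for k in range(len(h)):
        xs = np.linspace(nodes[k], nodes[k+1], samples)
        elem = h[k]**2 * np.max(np.abs(f(xs)))      # h_T^2 * ||f + u_h''||, u_h'' = 0
        jump_l = abs(du[k] - du[k-1]) if k > 0 else 0.0
        jump_r = abs(du[k+1] - du[k]) if k < len(h) - 1 else 0.0
        eta = max(eta, elem + h[k] * max(jump_l, jump_r))
    return eta

f = lambda x: np.pi**2 * np.sin(np.pi * x)          # exact solution u = sin(pi*x)
nodes = np.linspace(0.0, 1.0, 33)
u_h = solve_p1_poisson(nodes, f)
print(f"residual estimator: {residual_estimator(nodes, u=u_h, f=f):.4f}")
```

For this smooth test problem the estimator decreases roughly like \(h^2\) under uniform refinement, consistent with the a priori rate in Lemma 10.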
A.3: Necessity of logarithmic factors
In this section we show that logarithmic factors are necessary in maximum-norm a posteriori upper bounds, at least in the case of piecewise linear function spaces in two space dimensions. In [20], Haverkamp showed that given a convex polygonal domain \(\Omega \) and a quasi-uniform mesh of size h, there exists u (which depends on h) such that \(\Vert u-u_h\Vert _{ \infty ; \, \Omega } \gtrsim h^2 \log h^{-1} |u|_{2, \infty ; \, \Omega }\). Given such a u, employing this result, (1.3), and the preceding two lemmas yields $$\begin{aligned} h^2 \log h^{-1} |u|_{2, \infty ; \, \Omega }&\lesssim \Vert u-u_h\Vert _{\infty ; \, \Omega } \nonumber \\&\lesssim \log h^{-1}\max _{T \in \mathcal {T}_h} [ h^2 \Vert f + \Delta u_h\Vert _{\infty ; \, T} + h \Vert \llbracket \nabla u_h \rrbracket \Vert _{\infty ; \, \partial T}] \nonumber \\&\lesssim \log h^{-1} [\Vert u -u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )} + \max _{T \in \mathcal {T}_h} h^2 \Vert f-f_T\Vert _{\infty ; \, T}] \nonumber \\&\lesssim h^2 \log h^{-1}|u|_{2, \infty ; \, \Omega }. \end{aligned}$$ (5.11) We have thus proved the following lemma.
Lemma 12 The a posteriori error bound $$\begin{aligned} \Vert u-u_h\Vert _{\infty ; \, \Omega } \lesssim \log \underline{h}^{-1} \max _{T \in \mathcal {T}_h} [h_T^2 \Vert f+ \Delta u_h\Vert _{\infty ; \, T} + h_T \Vert \llbracket \nabla u_h \rrbracket \Vert _{\infty ; \, \partial T}] \end{aligned}$$ (5.12) does not in general hold if the term \(\log \underline{h}^{-1}\) is omitted.
We now also remark on two further important consequences of Lemma 11. First, the standard a priori and a posteriori upper bounds for \(L_\infty \) are $$\begin{aligned}&\Vert u -u_h\Vert _{\infty ; \, \Omega } \nonumber \\&\quad \lesssim \log \underline{h}^{-1} \max _{T \in \mathcal {T}_h} [ h_T^2 \Vert f + \Delta u_h\Vert _{\infty ; \, T} + h_T \Vert \llbracket \nabla u_h \rrbracket \Vert _{\infty ; \, \partial T} ] \nonumber \\&\quad \lesssim \log \underline{h}^{-1} \left( \Vert u-u_h\Vert _{\infty ; \, \Omega } + \max _{T \in \mathcal {T}_h} h_T^2 \Vert f-f_T\Vert _{\infty ; \, T} \right) . \end{aligned}$$ (5.13) Lemma 12 establishes that the logarithmic factor in the first inequality above is necessary. Our estimates also show that the logarithmic factor in the second inequality (efficiency estimate) sometimes is not sharp, since \(\Vert u-u_h\Vert _{\infty ; \, \Omega }\) in the third line above may be replaced by \(\Vert u-u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )}\) and the latter may grow strictly (logarithmically) slower than the former.
Secondly, an interesting question that has yet to be successfully approached in the literature is the proof of convergence of adaptive FEM for controlling maximum errors. Among other difficulties, the presence of the logarithmic factor in the a posteriori bounds for the maximum error makes adaptation of standard AFEM convergence and optimality proofs much more challenging. Because logarithmic factors are global, they play no role in AFEM marking schemes, so the natural AFEM for controlling \(\Vert u-u_h\Vert _{\infty ; \, \Omega }\) is precisely the same as that for controlling \(\Vert u-u_h\Vert _{\mathrm{bmo}_\Delta (\Omega )}\). Lemma 11 indicates that, at least for convex domains, the BMO norm of the error is more directly controlled by the standard \(L_\infty \) AFEM since the bounds involve no logarithmic factors.
References
1. Abramowitz, M., Stegun, I.A. (eds.): Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover Publications Inc., New York (1992). Reprint of the 1972 edition
2. Ainsworth, M., Babuška, I.: Reliable and robust a posteriori error estimating for singularly perturbed reaction-diffusion problems. SIAM J. Numer. Anal. 36, 331–353 (1999)
3. Ainsworth, M., Vejchodský, T.: Fully computable robust a posteriori error bounds for singularly perturbed reaction-diffusion problems. Numer. Math. 119, 219–243 (2011)
4. Andreev, V.B., Kopteva, N.: Pointwise approximation of corner singularities for a singularly perturbed reaction-diffusion equation in an \(L\)-shaped domain. Math. Comput. 77, 2125–2139 (2008)
5. Blatov, I.A.: Galerkin finite element method for elliptic quasilinear singularly perturbed boundary problems. I. Differ. Uravn. 28, 1168–1177 (1992) (translation Differ. Equ. 28, 931–940 (1992))
6. Brézis, H., Strauss, W.A.: Semi-linear second-order elliptic equations in \(L^{1}\). J. Math. Soc. Jpn. 25, 565–590 (1973)
7. Chadha, N.M., Kopteva, N.: Maximum norm a posteriori error estimate for a 3d singularly perturbed semilinear reaction-diffusion problem. Adv. Comput. Math. 35, 33–55 (2011)
8. Chen, L.: iFEM: An Innovative Finite Element Method Package in Matlab. Tech. rep., University of California-Irvine, California (2009)
9. Clavero, C., Gracia, J.L., O'Riordan, E.: A parameter robust numerical method for a two dimensional reaction-diffusion problem. Math. Comput. 74, 1743–1758 (2005)
10. Dari, E., Durán, R.G., Padra, C.: Maximum norm error estimators for three-dimensional elliptic problems. SIAM J. Numer. Anal. 37, 683–700 (2000) (electronic)
11. Demlow, A., Georgoulis, E.: Pointwise a posteriori error control for discontinuous Galerkin methods for elliptic problems. SIAM J. Numer. Anal. 50, 2159–2181 (2012)
12. Demlow, A., Leykekhman, D., Schatz, A.H., Wahlbin, L.B.: Best approximation property in the \({W}_\infty ^1\) norm for finite element methods on graded meshes. Math. Comput. 81, 743–764 (2012)
13. Duong, X.T., Hofmann, S., Mitrea, D., Mitrea, M., Yan, L.: Hardy spaces and regularity for the inhomogeneous Dirichlet and Neumann problems. Rev. Mat. Iberoam. 29, 183–236 (2013)
14. Durán, R.G.: A note on the convergence of linear finite elements. SIAM J. Numer. Anal. 25, 1032–1036 (1988)
15. Eriksson, K.: An adaptive finite element method with efficient maximum norm error control for elliptic problems. Math. Models Methods Appl. Sci. 4, 313–329 (1994)
16. Fellner, K., Kovtunenko, V.: A Singularly Perturbed Nonlinear Poisson-Boltzmann Equation: Uniform and Super-Asymptotic Expansions. Tech. rep., SFB F 32/Universität Graz (2014)
17. Gilbarg, D., Trudinger, N.S.: Elliptic Partial Differential Equations of Second Order, 2nd edn. Springer-Verlag, Berlin (1998)
18. Grinberg, G.A.: Some topics in the mathematical theory of electrical and magnetic phenomena (Nekotorye voprosy matematicheskoi teorii elektricheskih i magnitnyh yavlenii). USSR Academy of Sciences, Leningrad (1943) (Russian)
19. Guzmán, J., Leykekhman, D., Rossmann, J., Schatz, A.H.: Hölder estimates for Green's functions on convex polyhedral domains and their applications to finite element methods. Numer. Math. 112, 221–243 (2009)
20. Haverkamp, R.: Eine Aussage zur \(L_{\infty }\)-Stabilität und zur genauen Konvergenzordnung der \(H^{1}_{0}\)-Projektionen. Numer. Math. 44, 393–405 (1984)
21. Jerison, D., Kenig, C.E.: The inhomogeneous Dirichlet problem in Lipschitz domains. J. Funct. Anal. 130, 161–219 (1995)
22. Juntunen, M., Stenberg, R.: A residual based a posteriori estimator for the reaction-diffusion problem. C. R. Math. Acad. Sci. Paris 347, 555–558 (2009)
23. Juntunen, M., Stenberg, R.: Analysis of finite element methods for the Brinkman problem. Calcolo 47, 129–147 (2010)
24. Kopteva, N.: Maximum norm a posteriori error estimate for a 2d singularly perturbed reaction-diffusion problem. SIAM J. Numer. Anal. 46, 1602–1618 (2008)
25. Kopteva, N.: Maximum-norm a posteriori error estimates for singularly perturbed reaction-diffusion problems on anisotropic meshes. SIAM J. Numer. Anal. (2015, to appear). http://www.staff.ul.ie/natalia/pdf/rd_ani_R1_mar15.pdf
26. Kopteva, N., Linß, T.: Maximum norm a posteriori error estimation for parabolic problems using elliptic reconstructions. SIAM J. Numer. Anal. 51, 1494–1524 (2013)
27. Kreuzer, C., Siebert, K.G.: Decay rates of adaptive finite elements with Dörfler marking. Numer. Math. 117, 679–716 (2011)
28. Kunert, G.: Robust a posteriori error estimation for a singularly perturbed reaction-diffusion equation on anisotropic tetrahedral meshes. Adv. Comput. Math. 15, 237–259 (2001)
29. Kunert, G.: A posteriori \(H^1\) error estimation for a singularly perturbed reaction diffusion problem on anisotropic meshes. IMA J. Numer. Anal. 25, 408–428 (2005)
30. Leykekhman, D.: Uniform error estimates in the finite element method for a singularly perturbed reaction-diffusion problem. Math. Comput. 77(261), 21–39 (2008)
31. Lin, R., Stynes, M.: A balanced finite element method for singularly perturbed reaction-diffusion problems. SIAM J. Numer. Anal. 50, 2729–2743 (2012)
32. Nochetto, R.H.: Pointwise a posteriori error estimates for elliptic problems on highly graded meshes. Math. Comput. 64, 1–22 (1995)
33. Nochetto, R.H., Schmidt, A., Siebert, K.G., Veeser, A.: Pointwise a posteriori error estimates for monotone semilinear problems. Numer. Math. 104, 515–538 (2006)
34. Nochetto, R.H., Siebert, K.G., Veeser, A.: Pointwise a posteriori error control for elliptic obstacle problems. Numer. Math. 95, 163–195 (2003)
35. Nochetto, R.H., Siebert, K.G., Veeser, A.: Fully localized a posteriori error estimators and barrier sets for contact problems. SIAM J. Numer. Anal. 42, 2118–2135 (2005) (electronic)
36. Rannacher, R., Scott, R.: Some optimal error estimates for piecewise linear finite element approximations. Math. Comput. 38, 437–445 (1982)
37. Schatz, A.H., Wahlbin, L.B.: On the finite element method for singularly perturbed reaction-diffusion problems in two and one dimensions. Math. Comput. 40, 47–89 (1983)
38. Schatz, A.H., Wahlbin, L.B.: Interior maximum-norm estimates for finite element methods, Part II. Math. Comput. 64, 907–928 (1995)
39. Shishkin, G.I.: Grid approximation of singularly perturbed elliptic and parabolic equations. Ur. O Ran, Ekaterinburg (1992) (Russian)
40. Stevenson, R.P.: The uniform saturation property for a singularly perturbed reaction-diffusion equation. Numer. Math. 101, 355–379 (2005)
41. Tikhonov, A.N., Samarskiĭ, A.A.: Equations of Mathematical Physics. Dover Publications Inc., New York (1990). Translated from the Russian by A.R.M. Robson and P. Basu; reprint of the 1963 translation
42. Verfürth, R.: Robust a posteriori error estimators for a singularly perturbed reaction-diffusion equation. Numer. Math. 78, 479–493 (1998)
Author affiliations: 1. Department of Mathematics, Texas A&M University, College Station, USA; 2. Department of Mathematics and Statistics, University of Limerick, Limerick, Ireland
Demlow, A. & Kopteva, N.: Numer. Math. (2016) 133: 707. https://doi.org/10.1007/s00211-015-0763-0. Revised 10 July 2015
Transient reductions in milk fat synthesis and their association with the ruminal and metabolic profile in dairy cows fed high-starch, low-fat diets
E. C. Sandri, J. Lévesque, A. Marco, Y. Couture, R. Gervais, D. E. Rico
Journal: animal / Volume 14 / Issue 12 / December 2020. Published online by Cambridge University Press: 23 June 2020, pp. 2523-2534
Sub-acute ruminal acidosis (SARA) is sometimes observed along with reduced milk fat synthesis. Inconsistent responses may be explained by dietary fat levels. Twelve ruminally cannulated cows were used in a Latin square design investigating the timing of metabolic and milk fat changes during Induction and Recovery from SARA by altering starch levels in low-fat diets. Treatments were (1) SARA Induction, (2) Recovery and (3) Control. Sub-acute ruminal acidosis was induced by feeding a diet containing 29.4% starch, 24.0% NDF and 2.8% fatty acids (FAs), whereas the Recovery and Control diets contained 19.9% starch, 31.0% NDF and 2.6% FA. Relative to Control, DM intake (DMI) and milk yield were higher in SARA from days 14 to 21 and from days 10 to 21, respectively (P < 0.05). Milk fat content was reduced from days 3 to 14 in SARA (P < 0.05) compared with Control, while greater protein and lactose contents were observed from days 14 to 21 and 3 to 21, respectively (P < 0.05). Milk fat yield was reduced by SARA on day 3 (P < 0.05), whereas both protein and lactose yields were higher on days 14 and 21 (P < 0.05). The ruminal acetate-to-propionate ratio was lower, and the concentrations of propionate and lactate were higher in the SARA treatment compared with Control on day 21 (P < 0.05). Plasma insulin increased during SARA, whereas plasma non-esterified fatty acids and milk β-hydroxybutyrate decreased (P < 0.05).
Similarly to fat yield, the yield of milk preformed FA (>16C) was lower on day 3 (P < 0.05) and tended to be lower on day 7 in SARA cows (P < 0.10), whereas yield of de novo FA (<16C) was higher on day 21 (P < 0.01) in the SARA group relative to Control. The t10- to t11-18:1 ratio increased during the SARA Induction period (P < 0.05), but the concentration of t10-18:1 remained below 0.5% of milk fat, and t10,c12 conjugated linoleic acid remained below detection levels. Odd-chain FA increased, whereas branched-chain FA was reduced during SARA Induction from days 3 to 21 (P < 0.05). Sub-acute ruminal acidosis reduced milk fat synthesis transiently. Such reduction was not associated with ruminal biohydrogenation intermediates but rather with a transient reduction in supply of preformed FA. Subsequent rescue of milk fat synthesis may be associated with higher availability of substrates due to increased DMI during SARA.
1849 – Young And Suicide Prevention Programs Through Internet And Media: Supreme
M. D'aulerio, V. Carli, M. Iosue, F. Basilico, A.M. De Marco, L. Recchia, J. Balazs, A. Germanavicius, R. Hamilton, C. Masip, N. Mschin, A. Varnik, C. Wasserman, C. Hoven, M. Sarchiapone, D. Wasserman
Journal: European Psychiatry / Volume 28 / Issue S1 / 2013. Published online by Cambridge University Press: 15 April 2020, p. 1
Research shows a rapid growth of mental disorders among adolescents and young adults, often co-occurring with risk behaviours such as suicide, which is one of the leading causes of death among people aged 15-34. It is therefore necessary to use tools that can bring mental health promotion into young people's lives, such as the Internet and the media. SUPREME (Suicide Prevention by Internet and Media Based Mental Health Promotion) aims to strengthen the prevention of risk behaviours and the promotion of mental health through the use of mass media and the Internet. The main expected outcome is to improve mental health among European adolescents. In each participating European country, a sample of 300 students (average age 15 years) will be selected. The prevention program will be a highly interactive website that will address topics such as raising awareness about mental health and suicide, combating stigma, and stimulating peer support. The program will use different means of referral to the intervention website: "Adolescent related" and "Professional related". A questionnaire will be administered to the pupils to collect data on lifestyles, values and attitudes, psychological well-being, family relationships and friendship. Some websites managed by mental health professionals have produced encouraging results regarding their use in preventing risk behaviours and increasing well-being, especially in youth with low self-esteem and low life satisfaction. With the implementation of the SUPREME project we will be able to identify best practices for promoting mental health through the Internet and the media.
EPA-0325 – Serum s100b Protein Levels in First-episode Psychosis
S. Yelmo-Cruz, A.L. Morera-Fumero, G. Díaz-Marrero, J. Suárez-Jesús, D. Paico-Rodríguez, M. Henry-Benítez, P. Abreu-González, R. Gracia-Marco
Journal: European Psychiatry / Volume 28 / Issue S1 / 2013. Published online by Cambridge University Press: 15 April 2020, p. 1
S100B is a calcium-binding protein produced by astrocytes that has been used as a biomarker of brain inflammation. S100B has been implicated in the pathophysiology of schizophrenia and is considered a marker of state and prognosis. The aim was to study the relationship between serum S100B levels and psychopathology in first-episode psychosis (FEP).
At admission and discharge, serum S100B levels were measured in 20 never-medicated FEP in-patients and 20 healthy controls. Psychopathology was assessed with the PANSS (Positive and Negative Syndrome Scale). The total, positive, negative and general psychopathology scores were assessed. Results are presented as mean±SD, with S100B levels in pg/ml. At admission, patients had significantly higher serum S100B concentrations than healthy subjects (39.2±6.4 vs. 33.3±0.98, p<0.02). S100B levels increased from admission to discharge (39.2±6.4 vs. 40.0±6.8, p=0.285), but the change did not reach statistical significance. There were no correlations between PANSS (total, positive, negative and general) scores and S100B at admission and discharge. Item-by-item PANSS correlations with S100B showed a positive correlation with P5 (grandiosity) (r=0.486, p=0.030) and G5 (mannerisms/posturing) (r=0.514; p=0.02) at discharge. There was also a positive trend with G7 (motor retardation) (r=0.409; p=0.073) at discharge. FEP in-patients have significantly increased serum levels of S100B protein, suggesting an activation of glial cells that may be associated with a neurodegenerative/inflammatory process. Apart from the study of total scale scores, the analysis of individual items is also recommended. The long-term treatment effect (one year or more) may be relevant for clarifying its relationship to S100B levels.
EPA-0289 – Psychopathology Sex Differences in Asthmatics
M. Henry, A. Morera, A. Henry, E. Diaz-Mesa, S. Yelmo-Cruz, J. Suarez-Jesus, D. Paico-Rodriguez, G. Diaz-Marrero, R. Gracia-Marco, I. Gonzalez-Martin
Although asthma has been one of the most investigated topics in psychosomatics, studies and papers on psychopathology in asthma are fairly scarce and their findings heterogeneous. Furthermore, psychopathology according to sex in asthma is not a common research topic. Aim: This study aims to analyze psychopathology sex differences in asthmatics. The psychopathology profile in a sample of 84 adult asthmatics in a hospital outpatient facility, mean age 34.62 (s.d. 12.78), 36 male / 48 female, is studied. The Symptom Checklist-90-R (SCL-90-R) Self-Report Questionnaire was administered. The symptomatic profile is characterized by higher scores in women, with a main elevation in the dimensions of Somatization (1.92), Depression (1.66), Obsession-Compulsion (1.62) and Anxiety (1.44), whereas lower scores are recorded in men, with a profile dominated by Hostility (1.70), Anxiety (1.68), Interpersonal Sensitivity (1.58) and Depression (1.44). These scores mainly contribute to the psychopathology pattern according to sex. The possible clinical implications of the observed psychopathology sex differences should be taken into account in the management of these patients.
EPA-0356 – Tuberous Sclerosis Complex and Psychiatric Comorbidity: Two Case Reports
J. Suarez-Jesus, S. Yelmo-Cruz, D. Paico-Rodriguez, N. Suarez-Benitez, G. Diaz-Marrero, M. Henry-Benitez, R. Gracia-Marco
Tuberous Sclerosis Complex (TSC) is an inherited genetic disease characterized by hamartomatous growths in several organs such as the brain, skin, kidneys, heart and eyes. The estimated incidence is approximately 1:6000 live births. The diagnosis is made clinically. Seizures are present in 87% of patients. Psychiatric comorbidity has been reported. We report the clinical course of two patients with a previous diagnosis of TSC. In both cases, psychiatric symptoms started in adulthood, without a history of seizures and in the absence of a subependymal giant cell tumor (SGCT).
The evolution and clinical features are described. Case 1: a married 33-year-old woman with two children affected by TSC. She was diagnosed after a headache presentation in 2011. Initial MRI showed periventricular glioneuronal hamartomas. In January 2013 she began to show self-injurious behaviour (swallowing of objects) and autistic behaviours, as well as making several hospital emergency room visits. In addition, the patient presented with dull mood, emotional indifference and intellectual impairment, with no response to medication. Case 2: a married 43-year-old woman with a daughter affected by TSC. The diagnosis was made in 1999, and psychotic symptoms (delusional beliefs and auditory hallucinations) started in 2011 without previous psychiatric history. The MRI in 2013 showed subependymal nodules. Treatment with risperidone was effective. Psychiatric symptoms are very often associated with the physical findings of TSC, even when the diagnosis is made in adulthood. Psychiatric comorbidities are well described in the literature: about 10-20% of adult patients with TSC present clinically significant behavioural problems such as self-injury, frequently associated with SGCT. The European Expert Panel recommended regular assessment of cognitive development and behaviour and symptomatic treatment.
12 - From Parts to a Whole? Exploring Changes in Funerary Practices at Çatalhöyük, from Part IV - Greater Awareness of an Integrated Personal Self
By Scott D. Haddow, Eline M. J. Schotsmans, Marco Milella, Marin A. Pilloud, Belinda Tibbetts, Christopher J. Knüsel. Edited by Ian Hodder, Stanford University, California
Book: Consciousness, Creativity, and Self at the Dawn of Settled Life. Published online: 22 February 2020. Print publication: 05 March 2020, pp 250-272
Death is a universal and profoundly emotive human experience with social and economic implications that extend to communities as a whole. As such, the act of disposing of the dead is typically laden with deep meaning and significance. Archaeological investigations of funerary practices are thus important sources of information on the social contexts and worldviews of ancient societies. Changes in funerary practices are often thought to reflect organisational or cosmological transformations within a society (Carr 1995; Robb 2013). The focus of this volume is the role of cognition and consciousness in the accelerated sociocultural developments of the Neolithic Period in the Near East. In the introduction to this volume, Hodder identifies three commonly cited cognitive changes that can be measured against various archaeological datasets from Çatalhöyük. The funerary remains at Çatalhöyük are an obvious source of data for validating Hodder's third measure of change: a shift from a fluid and fragmented conception of the body and of selfhood to a greater awareness of an integrated, bounded personal self.
Equivalency of the diagnostic accuracy of the PHQ-8 and PHQ-9: a systematic review and individual participant data meta-analysis – ERRATUM
Yin Wu, Brooke Levis, Kira E. Riehm, Nazanin Saadat, Alexander W. Levis, Marleine Azar, Danielle B. Rice, Jill Boruff, Pim Cuijpers, Simon Gilbody, John P.A. Ioannidis, Lorie A. Kloda, Dean McMillan, Scott B. Patten, Ian Shrier, Roy C. Ziegelstein, Dickens H. Akena, Bruce Arroll, Liat Ayalon, Hamid R. Baradaran, Murray Baron, Charles H. Bombardier, Peter Butterworth, Gregory Carter, Marcos H. Chagas, Juliana C. N. Chan, Rushina Cholera, Yeates Conwell, Janneke M. de Man-van Ginkel, Jesse R. Fann, Felix H. Fischer, Daniel Fung, Bizu Gelaye, Felicity Goodyear-Smith, Catherine G.
Greeno, Brian J. Hall, Patricia A. Harrison, Martin Härter, Ulrich Hegerl, Leanne Hides, Stevan E. Hobfoll, Marie Hudson, Thomas Hyphantis, Masatoshi Inagaki, Nathalie Jetté, Mohammad E. Khamseh, Kim M. Kiely, Yunxin Kwan, Femke Lamers, Shen-Ing Liu, Manote Lotrakul, Sonia R. Loureiro, Bernd Löwe, Anthony McGuire, Sherina Mohd-Sidik, Tiago N. Munhoz, Kumiko Muramatsu, Flávia L. Osório, Vikram Patel, Brian W. Pence, Philippe Persoons, Angelo Picardi, Katrin Reuter, Alasdair G. Rooney, Iná S. Santos, Juwita Shaaban, Abbey Sidebottom, Adam Simning, Lesley Stafford, Sharon Sung, Pei Lin Lynnette Tan, Alyna Turner, Henk C. van Weert, Jennifer White, Mary A. Whooley, Kirsty Winkley, Mitsuhiko Yamada, Andrea Benedetti, Brett D. Thombs Journal: Psychological Medicine / Volume 50 / Issue 16 / December 2020 Published online by Cambridge University Press: 19 August 2019, p. 2816 Association of a priori dietary patterns with depressive symptoms: a harmonised meta-analysis of observational studies Mary Nicolaou, Marco Colpo, Esther Vermeulen, Liset E. M. Elstgeest, Mieke Cabout, Deborah Gibson-Smith, Anika Knuppel, Giovana Sini, Danielle A. J. M. Schoenaker, Gita D. Mishra, Anja Lok, Brenda W. J. H. Penninx, Stefania Bandinelli, Eric J. Brunner, Aiko H. Zwinderman, Ingeborg A. Brouwer, Marjolein Visser, Journal: Psychological Medicine / Volume 50 / Issue 11 / August 2020 Published online by Cambridge University Press: 14 August 2019, pp. 1872-1883 Print publication: August 2020 Review findings on the role of dietary patterns in preventing depression are inconsistent, possibly due to variation in assessment of dietary exposure and depression. We studied the association between dietary patterns and depressive symptoms in six population-based cohorts and meta-analysed the findings using a standardised approach that defined dietary exposure, depression assessment and covariates. Included were cross-sectional data from 23 026 participants in six cohorts: InCHIANTI (Italy), LASA, NESDA, HELIUS (the Netherlands), ALSWH (Australia) and Whitehall II (UK). Analysis of incidence was based on three cohorts with repeated measures of depressive symptoms at 5–6 years of follow-up in 10 721 participants: Whitehall II, InCHIANTI, ALSWH. Three a priori dietary patterns, Mediterranean diet score (MDS), Alternative Healthy Eating Index (AHEI-2010), and the Dietary Approaches to Stop Hypertension (DASH) diet were investigated in relation to depressive symptoms. Analyses at the cohort-level adjusted for a fixed set of confounders, meta-analysis used a random-effects model. Cross-sectional and prospective analyses showed statistically significant inverse associations of the three dietary patterns with depressive symptoms (continuous and dichotomous). In cross-sectional analysis, the association of diet with depressive symptoms using a cut-off yielded an adjusted OR of 0.87 (95% confidence interval 0.84–0.91) for MDS, 0.93 (0.88–0.98) for AHEI-2010, and 0.94 (0.87–1.01) for DASH. Similar associations were observed prospectively: 0.88 (0.80–0.96) for MDS; 0.95 (0.84–1.06) for AHEI-2010; 0.90 (0.84–0.97) for DASH. Population-scale observational evidence indicates that adults following a healthy dietary pattern have fewer depressive symptoms and lower risk of developing depressive symptoms. POSITIVE GROUND STATES FOR A CLASS OF SUPERLINEAR $(p,q)$ -LAPLACIAN COUPLED SYSTEMS INVOLVING SCHRÖDINGER EQUATIONS Elliptic equations and systems J. C. DE ALBUQUERQUE, JOÃO MARCOS DO Ó, EDCARLOS D. 
SILVA
Journal: Journal of the Australian Mathematical Society / Volume 109 / Issue 2 / October 2020. Published online by Cambridge University Press: 29 July 2019, pp. 193-216. Print publication: October 2020
We study the existence of positive ground state solutions for the following class of $(p,q)$-Laplacian coupled systems $$\left\{\begin{array}{ll}-\Delta _{p}u+a(x)|u|^{p-2}u=f(u)+\alpha \lambda (x)|u|^{\alpha -2}u|v|^{\beta }, & x\in \mathbb {R}^{N},\\ -\Delta _{q}v+b(x)|v|^{q-2}v=g(v)+\beta \lambda (x)|v|^{\beta -2}v|u|^{\alpha }, & x\in \mathbb {R}^{N},\end{array}\right.$$ where $1<p\leq q<N$. Here the coefficient $\lambda (x)$ of the coupling term is related to the potentials by the condition $|\lambda (x)|\leq \delta a(x)^{\alpha /p}b(x)^{\beta /q}$, where $\delta \in (0,1)$ and $\alpha /p+\beta /q=1$. Using a variational approach based on minimization over the Nehari manifold, we establish the existence of positive ground state solutions for a large class of nonlinear terms and potentials.
Equivalency of the diagnostic accuracy of the PHQ-8 and PHQ-9: a systematic review and individual participant data meta-analysis
Yin Wu, Brooke Levis, Kira E. Riehm, Nazanin Saadat, Alexander W. Levis, Marleine Azar, Danielle B. Rice, Jill Boruff, Pim Cuijpers, Simon Gilbody, John P.A. Ioannidis, Lorie A. Kloda, Dean McMillan, Scott B. Patten, Ian Shrier, Roy C. Ziegelstein, Dickens H. Akena, Bruce Arroll, Liat Ayalon, Hamid R. Baradaran, Murray Baron, Charles H. Bombardier, Peter Butterworth, Gregory Carter, Marcos H. Chagas, Juliana C. N. Chan, Rushina Cholera, Yeates Conwell, Janneke M. de Man-van Ginkel, Jesse R. Fann, Felix H. Fischer, Daniel Fung, Bizu Gelaye, Felicity Goodyear-Smith, Catherine G. Greeno, Brian J. Hall, Patricia A. Harrison, Martin Härter, Ulrich Hegerl, Leanne Hides, Stevan E. Hobfoll, Marie Hudson, Thomas Hyphantis, Masatoshi Inagaki, Nathalie Jetté, Mohammad E. Khamseh, Kim M. Kiely, Yunxin Kwan, Femke Lamers, Shen-Ing Liu, Manote Lotrakul, Sonia R. Loureiro, Bernd Löwe, Anthony McGuire, Sherina Mohd-Sidik, Tiago N. Munhoz, Kumiko Muramatsu, Flávia L. Osório, Vikram Patel, Brian W. Pence, Philippe Persoons, Angelo Picardi, Katrin Reuter, Alasdair G. Rooney, Iná S. Santos, Juwita Shaaban, Abbey Sidebottom, Adam Simning, Lesley Stafford, Sharon Sung, Pei Lin Lynnette Tan, Alyna Turner, Henk C. van Weert, Jennifer White, Mary A. Whooley, Kirsty Winkley, Mitsuhiko Yamada, Andrea Benedetti, Brett D. Thombs
Journal: Psychological Medicine / Volume 50 / Issue 8 / June 2020. Published online by Cambridge University Press: 12 July 2019, pp. 1368-1380. Print publication: June 2020
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed equivalency of total score correlations and the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9. We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01). PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar. A 2D scintillator-based proton detector for high repetition rate experiments M. Huault, D. De Luis, J. I. Apiñaniz, M. De Marco, C. Salgado, N. Gordillo, C. Gutiérrez Neira, J. A. Pérez-Hernández, R. Fedosejevs, G. Gatti, L. Roso, L. Volpe Journal: High Power Laser Science and Engineering / Volume 7 / 2019 Published online by Cambridge University Press: 02 December 2019, e60 Print publication: 2019 We present a scintillator-based detector able to measure the proton energy and the spatial distribution with a relatively simple design. It has been designed and built at the Spanish Center for Pulsed Lasers (CLPU) in Salamanca and tested in the proton accelerator at the Centro de Micro-Análisis de Materiales (CMAM) in Madrid. The detector is capable of being set in the high repetition rate (HRR) mode and reproduces the performance of the radiochromic film detector. It represents a new class of online detectors for laser–plasma physics experiments in the newly emerging high power laser laboratories working at HRR. Scenarios of Land Use and Land Cover Change and Their Multiple Impacts on Natural Capital in Tanzania Claudia Capitani, Arnout van Soesbergen, Kusaga Mukama, Isaac Malugu, Boniface Mbilinyi, Nurdin Chamuya, Bas Kempen, Rogers Malimbwi, Rebecca Mant, Panteleo Munishi, Marco Andrew Njana, Antonia Ortmann, Philip J. Platts, Lisen Runsten, Marieke Sassen, Philippina Sayo, Deo Shirima, Elikamu Zahabu, Neil D. Burgess, Rob Marchant Journal: Environmental Conservation / Volume 46 / Issue 1 / March 2019 Published online by Cambridge University Press: 18 September 2018, pp. 17-24 Print publication: March 2019 Reducing emissions from deforestation and forest degradation plus the conservation of forest carbon stocks, sustainable management of forests and enhancement of forest carbon stocks in developing countries (REDD+) requires information on land-use and land-cover changes (LULCCs) and carbon emission trends from the past to the present and into the future. Here, we use the results of participatory scenario development in Tanzania to assess the potential interacting impacts on carbon stock, biodiversity and water yield of alternative scenarios where REDD+ is or is not effectively implemented by 2025, a green economy (GE) scenario and a business as usual (BAU) scenario, respectively. Under the BAU scenario, LULCCs will cause 296 million tonnes of carbon (MtC) national stock loss by 2025, reduce the extent of suitable habitats for endemic and rare species (mainly in encroached protected mountain forests) and change water yields. In the GE scenario, national stock loss decreases to 133 MtC. 
In this scenario, consistent LULCC impacts occur within small forest patches with high carbon density, water catchment capacity and biodiversity richness. Opportunities for maximizing carbon emission reductions nationally are largely related to sustainable woodland management, but also contain trade-offs with biodiversity conservation and changes in water availability. Probability of major depression diagnostic classification using semi-structured versus fully structured diagnostic interviews Brooke Levis, Andrea Benedetti, Kira E. Riehm, Nazanin Saadat, Alexander W. Levis, Marleine Azar, Danielle B. Rice, Matthew J. Chiovitti, Tatiana A. Sanchez, Pim Cuijpers, Simon Gilbody, John P. A. Ioannidis, Lorie A. Kloda, Dean McMillan, Scott B. Patten, Ian Shrier, Russell J. Steele, Roy C. Ziegelstein, Dickens H. Akena, Bruce Arroll, Liat Ayalon, Hamid R. Baradaran, Murray Baron, Anna Beraldi, Charles H. Bombardier, Peter Butterworth, Gregory Carter, Marcos H. Chagas, Juliana C. N. Chan, Rushina Cholera, Neerja Chowdhary, Kerrie Clover, Yeates Conwell, Janneke M. de Man-van Ginkel, Jaime Delgadillo, Jesse R. Fann, Felix H. Fischer, Benjamin Fischler, Daniel Fung, Bizu Gelaye, Felicity Goodyear-Smith, Catherine G. Greeno, Brian J. Hall, John Hambridge, Patricia A. Harrison, Ulrich Hegerl, Leanne Hides, Stevan E. Hobfoll, Marie Hudson, Thomas Hyphantis, Masatoshi Inagaki, Khalida Ismail, Nathalie Jetté, Mohammad E. Khamseh, Kim M. Kiely, Femke Lamers, Shen-Ing Liu, Manote Lotrakul, Sonia R. Loureiro, Bernd Löwe, Laura Marsh, Anthony McGuire, Sherina Mohd Sidik, Tiago N. Munhoz, Kumiko Muramatsu, Flávia L. Osório, Vikram Patel, Brian W. Pence, Philippe Persoons, Angelo Picardi, Alasdair G. Rooney, Iná S. Santos, Juwita Shaaban, Abbey Sidebottom, Adam Simning, Lesley Stafford, Sharon Sung, Pei Lin Lynnette Tan, Alyna Turner, Christina M. van der Feltz-Cornelis, Henk C. van Weert, Paul A. Vöhringer, Jennifer White, Mary A. Whooley, Kirsty Winkley, Mitsuhiko Yamada, Yuying Zhang, Brett D. Thombs Journal: The British Journal of Psychiatry / Volume 212 / Issue 6 / June 2018 Published online by Cambridge University Press: 02 May 2018, pp. 377-385 Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification. To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics. Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit. A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). 
Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97). The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated. Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication. Sunspot data collection of Specola Solare Ticinese in Locarno Renzo Ramelli, Marco Cagnotti, Sergio Cortesi, Michele Bianda, Andrea Manna Journal: Proceedings of the International Astronomical Union / Volume 13 / Issue S340 / February 2018 Published online by Cambridge University Press: 27 November 2018, pp. 129-132 Print publication: February 2018 Sunspot observations and counting are carried out at the Specola Solare Ticinese in Locarno since 1957 when it was built as an external observing station of the Zurich observatory. When in 1980 the data center responsibility was transferred from ETH Zurich to the Royal Observatory of Belgium in Brussels, the observations in Locarno continued and Specola Solare Ticinese got the role of pilot station. The data collected at Specola cover now the last 6 solar cycles. The aim of this presentation is to discuss and give an overview about the Specola data collection, the applied counting method and the future archiving projects. The latter includes the publication of all data and drawings in digital form in collaboration with the ETH Zurich University Archives, where a parallel digitization project is ongoing for the document of the former Swiss Federal Observatory in Zurich collected since the time of Rudolph Wolf. 
Folate and vitamin B12 concentrations are associated with plasma DHA and EPA fatty acids in European adolescents: the Healthy Lifestyle in Europe by Nutrition in Adolescence (HELENA) study I. Iglesia, I. Huybrechts, M. González-Gross, T. Mouratidou, J. Santabárbara, V. Chajès, E. M. González-Gil, J. Y. Park, S. Bel-Serrat, M. Cuenca-García, M. Castillo, M. Kersting, K. Widhalm, S. De Henauw, M. Sjöström, F. Gottrand, D. Molnár, Y. Manios, A. Kafatos, M. Ferrari, P. Stehle, A. Marcos, F. J. Sánchez-Muniz, L. A. Moreno Journal: British Journal of Nutrition / Volume 117 / Issue 1 / 14 January 2017 Print publication: 14 January 2017 This study aimed to examine the association between vitamin B6, folate and vitamin B12 biomarkers and plasma fatty acids in European adolescents. A subsample from the Healthy Lifestyle in Europe by Nutrition in Adolescence study with valid data on B-vitamins and fatty acid blood parameters, and all the other covariates used in the analyses such as BMI, Diet Quality Index, education of the mother and physical activity assessed by a questionnaire, was selected resulting in 674 cases (43 % males). B-vitamin biomarkers were measured by chromatography and immunoassay and fatty acids by enzymatic analyses. Linear mixed models elucidated the association between B-vitamins and fatty acid blood parameters (changes in fatty acid profiles according to change in 10 units of vitamin B biomarkers). DHA, EPA) and n-3 fatty acids showed positive associations with B-vitamin biomarkers, mainly with those corresponding to folate and vitamin B12. Contrarily, negative associations were found with n-6:n-3 ratio, trans-fatty acids and oleic:stearic ratio. With total homocysteine (tHcy), all the associations found with these parameters were opposite (for instance, an increase of 10 nmol/l in red blood cell folate or holotranscobalamin in females produces an increase of 15·85 µmol/l of EPA (P value <0·01), whereas an increase of 10 nmol/l of tHcy in males produces a decrease of 2·06 µmol/l of DHA (P value <0·05). Positive associations between B-vitamins and specific fatty acids might suggest underlying mechanisms between B-vitamins and CVD and it is worth the attention of public health policies. Targets for high repetition rate laser facilities: needs, challenges and perspectives On the Cover of HPL Target Fabrication (2017) I. Prencipe, J. Fuchs, S. Pascarelli, D. W. Schumacher, R. B. Stephens, N. B. Alexander, R. Briggs, M. Büscher, M. O. Cernaianu, A. Choukourov, M. De Marco, A. Erbe, J. Fassbender, G. Fiquet, P. Fitzsimmons, C. Gheorghiu, J. Hund, L. G. Huang, M. Harmand, N. J. Hartley, A. Irman, T. Kluge, Z. Konopkova, S. Kraft, D. Kraus, V. Leca, D. Margarone, J. Metzkes, K. Nagai, W. Nazarov, P. Lutoslawski, D. Papp, M. Passoni, A. Pelka, J. P. Perin, J. Schulz, M. Smid, C. Spindloe, S. Steinke, R. Torchio, C. Vass, T. Wiste, R. Zaffino, K. Zeil, T. Tschentscher, U. Schramm, T. E. Cowan Published online by Cambridge University Press: 24 July 2017, e17 A number of laser facilities coming online all over the world promise the capability of high-power laser experiments with shot repetition rates between 1 and 10 Hz. Target availability and technical issues related to the interaction environment could become a bottleneck for the exploitation of such facilities. In this paper, we report on target needs for three different classes of experiments: dynamic compression physics, electron transport and isochoric heating, and laser-driven particle and radiation sources. 
We also review some of the most challenging issues in target fabrication and high repetition rate operation. Finally, we discuss current target supply strategies and future perspectives to establish a sustainable target provision infrastructure for advanced laser facilities. Biogeophysical properties of an expansive Antarctic supraglacial stream Michael D. SanClements, Heidi J. Smith, Christine M. Foreman, Marco Tedesco, Yu-Ping Chin, Christopher Jaros, Diane M. McKnight Journal: Antarctic Science / Volume 29 / Issue 1 / February 2017 Published online by Cambridge University Press: 20 October 2016, pp. 33-44 Supraglacial streams are important hydrologic features in glaciated environments as they are conduits for the transport of aeolian debris, meltwater, solutes and microbial communities. We characterized the basic geomorphology, hydrology and biogeochemistry of the Cotton Glacier supraglacial stream located in the McMurdo Dry Valleys of Antarctica. The distinctive geomorphology of the stream is driven by accumulated aeolian sediment from the Transantarctic Mountains, while solar radiation and summer temperatures govern melt in the system. The hydrologic functioning of the Cotton Glacier stream is largely controlled by the formation of ice dams that lead to vastly different annual flow regimes and extreme flushing events. Stream water is chemically dilute and lacks a detectable humic signature. However, the fluorescent signature of dissolved organic matter (DOM) in the stream does demonstrate an extremely transitory red-shifted signal found only in near-stream sediment leachates and during the initial flushing of the system at the onset of flow. This suggests that episodic physical flushing drives pulses of DOM with variable quality in this stream. This is the first description of a large Antarctic supraglacial stream and our results provide evidence that the hydrology and geomorphology of supraglacial streams drive resident microbial community composition and biogeochemical cycling. Post-common envelope PN, fundamental or irrelevant? Orsola De Marco, T. Reichardt, R. Iaconi, T. Hillwig, G. H. Jacoby, D. Keller, R. G. Izzard, J. Nordhaus, E. G. Blackman Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S323 / October 2016 Published online by Cambridge University Press: 08 August 2017, pp. 213-217 One in 5 PN are ejected from common envelope binary interactions but Kepler results are already showing this proportion to be larger. Their properties, such as abundances can be starkly different from those of the general population, so they should be considered separately when using PN as chemical or population probes. Unfortunately post-common envelope PN cannot be discerned using only their morphologies, but this will change once we couple our new common envelope simulations with PN formation models. Detection of secondary eclipses of WASP-10b and Qatar-1b in the Ks band and the correlation between Ks-band temperature and stellar activity. Patricia Cruz, David Barrado, Jorge Lillo-Box, Marcos Diaz, Mercedes López-Morales, Jayne Birkby, Jonathan J. Fortney, Simon Hodgkin Published online by Cambridge University Press: 12 September 2017, pp. 363-370 The Calar Alto Secondary Eclipse study was a program dedicated to observe secondary eclipses in the near-IR of two known close-orbiting exoplanets around K-dwarfs: WASP-10b and Qatar-1b. 
Such observations reveal hints on the orbital configuration of the system and on the thermal emission of the exoplanet, which allows the study of the brightness temperature of its atmosphere. The observations were performed at the Calar Alto Observatory (Spain). We used the OMEGA2000 instrument (Ks band) at the 3.5m telescope. The data was acquired with the telescope strongly defocused. The differential light curve was corrected from systematic effects using the Principal Component Analysis (PCA) technique. The final light curve was fitted using an occultation model to find the eclipse depth and a possible phase shift by performing a MCMC analysis. The observations have revealed a secondary eclipse of WASP-10b with depth of 0.137%, and a depth of 0.196% for Qatar-1b. The observed phase offset from expected mid-eclipse was of −0.0028 for WASP-10b, and of −0.0079 for Qatar-1b. These measured offsets led to a value for |ecosω| of 0.0044 for the WASP-10b system, leading to a derived eccentricity which was too small to be of any significance. For Qatar-1b, we have derived a |ecosω| of 0.0123, however, this last result needs to be confirmed with more data. The estimated Ks-band brightness temperatures are of 1647 K and 1885 K for WASP-10b and Qatar-1b, respectively. We also found an empirical correlation between the (R′HK) activity index of planet hosts and the Ks-band brightness temperature of exoplanets, considering a small number of systems. The TcTASV proteins are novel promising antigens to detect active Trypanosoma cruzi infection in dogs N. FLORIDIA-YAPUR, M. MONJE RUMI, P. RAGONE, J. J. LAUTHIER, N. TOMASINI, A. ALBERTI D'AMATO, P. DIOSQUE, R. CIMINO, J. D. MARCO, P. BARROSO, D. O. SANCHEZ, J. R. NASSER, V. TEKIEL Journal: Parasitology / Volume 143 / Issue 11 / September 2016 Published online by Cambridge University Press: 13 May 2016, pp. 1382-1389 Print publication: September 2016 In regions where Chagas disease is endemic, canine Trypanosoma cruzi infection is highly correlated with the risk of transmission of the parasite to humans. Herein we evaluated the novel TcTASV protein family (subfamilies A, B, C), differentially expressed in bloodstream trypomastigotes, for the detection of naturally infected dogs. A gene of each TcTASV subfamily was cloned and expressed. Indirect enzyme-linked immunosorbent assays (ELISA) were developed using recombinant antigens individually or mixed together. Our results showed that dogs with active T. cruzi infection differentially reacted against the TcTASV-C subfamily. The use of both TcTASV-C plus TcTASV-A proteins (Mix A+C-ELISA) enhanced the reactivity of sera from dogs with active infection, detecting 94% of the evaluated samples. These findings agree with our previous observations, where the infected animals exhibited a quick anti-TcTASV-C antibody response, coincident with the beginning of parasitaemia, in a murine model of the disease. Results obtained in the present work prove that the Mix A+C-ELISA is a specific, simple and cheap technique to be applied in endemic areas in screening studies. The Mix A+C-ELISA could help to differentially detect canine hosts with active infection and therefore with high impact in the risk of transmission to humans.
Quantifying the effect of complications on patient flow, costs and surgical throughputs
Ahmed Almashrafi (ORCID: orcid.org/0000-0003-1072-8057) & Laura Vanderbloemen
BMC Medical Informatics and Decision Making volume 16, Article number: 136 (2016)

Postoperative adverse events are known to increase length of stay and cost. However, research on how adverse events affect patient flow and operational performance has been relatively limited to date. Moreover, there is a paucity of studies on the use of simulation in understanding the effect of complications on care processes and resources. In hospitals with scarce resources, postoperative complications can exert a substantial influence on hospital throughputs. This paper describes an evaluation method for assessing the effect of complications on patient flow within a cardiac surgical department. The method is illustrated by a case study where actual patient-level data are incorporated into a discrete event simulation (DES) model. The DES model uses patient data obtained from a large hospital in Oman to quantify the effect of complications on patient flow, costs and surgical throughputs. We evaluated the incremental increase in resources due to treatment of complications using Poisson regression. Several types of complications were examined, such as cardiac complications, pulmonary complications, infection complications and neurological complications. 48 % of the patients in our dataset experienced one or more complications. The most common types of complications were ventricular arrhythmia (16 %), followed by new atrial arrhythmia (15.5 %) and prolonged ventilation longer than 24 h (12.5 %). The total number of additional days associated with infections was the highest, while cardiac complications resulted in the lowest number of incremental days of hospital stay. Complications had a significant effect on perioperative operational performance such as surgery cancellations and waiting time. The effect was profound when complications occurred in the Cardiac Intensive Care Unit (CICU), where a limited capacity was observed. The study provides evidence supporting the need to incorporate adverse event data in resource planning to improve hospital performance.

Several studies have associated postoperative complications with increased cost and Length of Stay (LOS). However, less is known about the effect of adverse events on patient flow and surgical throughputs. In hospitals with sufficient resources, complications may have less influence on overall productivity. However, when resources are constrained, complications can exert a series of sequential effects that might limit the availability of resources for other patients. To the best of our knowledge, there is no paper that has explicitly examined the relationship between complications and operational performance using simulation modelling. An optimum bed capacity is a key factor for smoothing patient flow. However, managing beds is difficult as patient stays tend to be influenced by uncertainty. This includes the occurrence of complications, which triggers the use of several resources. A hospital's efforts to manage complications are challenged by the fact that complications are difficult to predict [1]. At a certain level of capacity, a high rate of complications can substantially constrain patient flow and could reduce hospital responsiveness to urgent cases.
In many resource planning approaches, there is a tendency to focus on average utilisation of a single resource such as the operating theatre without consideration to its relationship with downstream services such as intensive care unit beds [2]. Since many hospital services are interconnected, the effect of complications should be evaluated across the patient hospital journey. Quantifying the effect of complications on patient flow permits the hospital management to evaluate key performance indicator (KPI) targets based on the existing rate of complications. This understanding can yield several benefits such as focusing efforts on reducing certain complications and building a business case for investing in quality and safety programmes. Further, given the current economic climate, it is necessary to operate hospitals in a more efficient way. Hospitals can incur significant costs in treating complications (e.g. nosocomial infections) and might not be compensated in return. Discrete Event Simulation (DES) has been applied to numerous health policy issues related to staffing, scheduling, and capacity management [3–5]. Much of the enthusiasm in using DES in healthcare stems from its capability to capture complexity and uncertainty. A substantial body of literature has focused on measuring patient flow improvements under alternative solutions, with the intent to provide quantitative evidence to support decisions. However, it is often that intangible elements such as complications tend to be ignored in DES [6]. This might be the case because it is easier to focus on measurable processes. Moreover, modellers might not have access to sensitive patient data including the details of adverse events. Because DES offers the flexibility to track interconnected and uncertain events across multiple parts of the system [7], we believe that DES is an appropriate tool for evaluating the inherent uncertainty surrounding postoperative complications and their impact on resource utilisation. Hospital managers need to be able to evaluate efforts to reduce adverse events based on the added benefits to the patients' health and the hospital in general. Complications that occur in the Cardiac Intensive Care Unit (CICU) might lead the care givers to allocate extra resources (e.g. more bed days). As a result, other surgeries may be cancelled due to lack of available beds. Failing to manage the ratio of beds to operating rooms (OR) results in one of the resources being underutilised [8]. Additionally, cardiac surgical patients with complications can undergo re-exploration if, for example, a postoperative haemorrhage is identified [9], potentially resulting in postponement of less urgent cases. Furthermore, patients already transferred from CICU may need to return if they experience a critical complication. Postoperative complications following cardiac surgery Several factors related to patients and surgical procedures can increase the risk of complications. For example, patients with concomitant surgery (CABG and valve) are more likely to experience complications than patients with isolated surgery [10]. Patients undergoing an operation with a cardiopulmonary machine are more likely to experience an inflammatory response [11]. Blood transfusion during surgery is also associated with increased rate of morbidity [12]. The probability of complications exponentially increases as patients spend more time in the CICU [13]. 
On the other hand, high patient severity has been linked to occurrence of adverse events which in turn mediates on subsequent LOS [14]. For instance, Toumpoulis et al. [15] found that as severity (measured by the EuroSCORE) increases, the risk of postoperative complications tends to increase. Cardiac post-surgical complications include some life threatening complications such as myocardial infarction. Another potentially fatal complication is postoperative bleeding which will require reoperation. Studies suggest that the reoperation rate for bleeding is in the range of 2–9 % [16, 17]. The majority of the patients will be re-operated within 24 h of the surgery. When patients experience one or more postoperative complications, their conditions can rapidly deteriorate given that most patients are above 60 years old. Patients and data collection To evaluate the effect of complications on resource use, we utilised data from 600 patients who underwent a cardiac surgery at a major referral hospital. These data were drawn from a prospectively collected database. We included all types of cardiac surgeries such as isolated Coronary Artery Bypass Surgery (CABG), isolated valve surgery, combined surgery (CABG & valve), and other types of cardiac surgery. The rationale behind this is that cardiac surgical patients, irrespective of their surgery, share the same resources (e.g. operating theatre, critical care beds, etc.) and disregarding a certain type would compromise the analysis around capacity and throughputs. The type of collected data included demographic information, comorbidities, LOS detail, surgery detail, and postoperative complications. Several types of complications were examined such as cardiac complications, pulmonary complications, infection complications and neurological complications. In addition to the clinical data, we collected several parameters related to system operation such as surgery waiting times, non-surgical admissions, inter-arrival of elective and non-elective patients, and surgery duration. Non-surgical admissions refer to patients who are admitted for reasons other than surgery such as admission for follow-up. Elective patients are scheduled patients who are admitted based on prior appointments. Non-elective patients consist of emergency cases that require immediate care and urgent patients that are less severe than emergency cases and whom care can be delayed for few days. To inform the simulation model building, we first examined the relationship between resource use and complications. We performed Poisson regression in order to: 1) evaluate whether complications can independently explain variation in LOS. 2) inform our simulation model building by selecting the most influential complications and 3) quantify the excess LOS and cost associated with each type of complication so they can be used in our model. To evaluate the independent effect of complications on postoperative LOS, we adjusted the model for basic demographic characteristics, comorbidities, and type of surgery. Poisson regression has been previously found to be suitable for modelling intensive care unit and postoperative LOS data that are heavily skewed [18–20]. All analyses were two sided. A value of P <0.05 was considered statistically significant. Respectively, the incremental cost associated with hospital charges was estimated using the same methodology. 
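As a rough illustration of this estimation step (the study used Stata for the actual analysis, and the marginal-effects approach is described in more detail just below), a minimal Python sketch might look like the following. The input file and column names are hypothetical, and the same pattern would apply to the hospital-charge model.

```python
# Minimal sketch of the excess-LOS estimation described above; this is not the
# authors' Stata code. The file name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

patients = pd.read_csv("cardiac_patients.csv")  # one row per patient (hypothetical file)

# Poisson regression of postoperative LOS on demographics, comorbidity,
# surgery type and binary complication indicators.
fit = smf.poisson(
    "post_los ~ age + sex + diabetes + C(surgery_type)"
    " + pneumonia + stroke + septicaemia + prolonged_ventilation",
    data=patients,
).fit()
print(fit.summary())

# Marginal Effects at the Means (MEMS): for a binary complication flag, the
# change in expected LOS when the flag moves from 0 to 1 with all other
# covariates held at their means -- the "excess days" later fed into the DES model.
mems = fit.get_margeff(at="mean", dummy=True)
print(mems.summary())
```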
Hospital charges were calculated based on an existing fee schedule (year 2015) for room, surgery, and investigation services (radiological and laboratory). Excess length of stay was assessed through the marginal effect of each significant factor. Marginal Effects at the Means (MEMS) measures the changes in the response variable in relation to change in a covariate. For binary variables, the effect of discrete changes (i.e. from 0 to 1) is computed holding all other variables at their means [21]. In effect, the margins are computed for all variables related to the patient mix, the surgical characteristics and complications. Thus, they reflect the marginal changes related to the specific cohort of patients which the model was derived from. All statistical analysis was done in Stata Statistical Software: Release 12. College Station, TX: StataCorp LP. The hospital under study is one of only two hospitals authorised to perform cardiac surgery in the country. Patients are referred for surgery from all other regions. Patients are also referred internally for surgery from the cardiac catheterization laboratory which is a major gateway for cardiac surgery. Following decision to operate, patients are placed in a waiting list. There is no pre-assessment clinic in the hospital which means patients have to be admitted a few days prior to their surgeries where an anaesthetist can assess their fitness for operation. Late cancellations due to unsuitability for surgery can arise, resulting in underutilisation of operating theatre time. A common surgical patient's pathway through the system is depicted in Fig. 1. Death can occur at any stage of patient care. An overview of patient flow in the cardiothoracic department There are three important components of the cardiothoracic surgical system: Operating theatre: There is only a single operating theatre at the hospital that is solely dedicated to cardiovascular surgeries. Surgeries are performed 4 days a week (Sunday to Wednesday) from 8:00 am to 2:30 pm. An In-call staff can utilise the OR 24 h, 7 days a week to accommodate emergency cases which can disrupt the normal daily OR schedule. Only a single elective patient is operated per day. Coronary Intensive Care Unit (CICU): This unit provides intensive care to patients immediately after surgery. Patients are kept in the CICU for at least 48 h after the surgery where they will be extubated and continuously monitored. Level of pain, vital signs, ventilation, and surgical site are carefully monitored. CICU stay is an important milestone in the patient journey. Patients who are stable can be transferred to the cardiothoracic ward to continue their recovery. Patients cannot be checked into the OR unless a CICU bed is available. The limited number of CICU beds (only five beds) have restricted OR operations in the past. The patient to nurse ratio is 1:1 in this unit. The cardiothoracic ward: This is the ward where patients are initially admitted preoperatively. Some admitted patients will not be scheduled for operations for reasons such as patient refusal or unfitness for surgery. Following a surgery, operated patients who required a lesser degree of care are transferred from CICU to this ward where they will continue their recovery. For most patients the ward is the last destination before discharge. There are 18 beds available. Developing the DES model We developed a DES model using SIMUL8 software package release 2015 (SIMUL8 corporation, Boston, MA). 
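For orientation, the capacity and scheduling figures just described can be gathered into a small configuration block. This is simply a restatement of the numbers given above in code form; it is not part of the SIMUL8 model itself.

```python
# Capacity and scheduling figures from the description above, collected in one
# place. These mirror the text; they are not taken from the SIMUL8 model files.
DEPARTMENT_CONFIG = {
    "operating_theatres": 1,            # single theatre dedicated to cardiac surgery
    "elective_or_days": ["Sun", "Mon", "Tue", "Wed"],
    "elective_or_hours": ("08:00", "14:30"),
    "elective_cases_per_day": 1,        # only one elective patient operated on per day
    "emergency_or_access": "24/7 via on-call staff",
    "cicu_beds": 5,                     # patient-to-nurse ratio 1:1
    "min_cicu_stay_hours": 48,
    "ward_beds": 18,
}
```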
The software has its own internal language known as Visual Logic which enabled us to capture a complex representation of the real system. The model collects various statistics concerning patient types, their urgency level, duration of operation, pre and post LOS, occupancy rate, surgery cancellation, and time beds were blocked. Figure 2 provides an image of the model. Model screenshot Whenever a patient enters the model, a random sample of the same type is selected from a distribution based on historical data. Type of patients included: patients with isolated CABG, isolated valve, combined CABG and valve, and other surgeries. The model then generates a profile for each type of complication based on results obtained from the Poisson regression. Once a patient is admitted, a preoperative bed will be assigned for both surgical and non-surgical patients. Preoperative LOS is determined based on distribution derived from historical data. The model then checks for CICU bed availability before selecting patients for surgery. If all beds are occupied, the model calculates the time a bed was blocked. Once a bed becomes available, priority is given to non-elective patients. Postoperative length of stay was allowed to vary based on the type of surgery (e.g. CABG, combined surgery, isolated valve). Therefore, four types of distributions corresponding to postoperative LOS were set. From our analysis, there was an association between surgery type and postoperative LOS, sufficient to justify adding this level of detail to the model. Decisions for reoperation can be made any time post-surgery. Patients in the reoperation pathway are given priority over elective patients for surgery. The arrival rate of elective patients in our model is well approximated by the Poisson process. It is a common approach to model arrival to a system using this type of distribution [22]. We verified this selection using the Kolmogorov-Smirnov (K-S) test. The K-S was used for fitting other distributions. The distribution that best fits the data should produce the smallest K-S values that should be below the critical K-S statistics. Inputs parameters for the model are shown in Table 1. Table 1 Input parameters used to calibrate the model In practice, patients can experience complications during any time of their hospital stay. In the model this is governed by the same probabilities obtained from the data. Once a patient experiences a complication, the model moves that patient to the complication state. In the model, the postoperative LOS distribution was estimated based on the LOS of patients who didn't experience complications. However, any patient who develops a complication will then be assigned an additional LOS corresponding to excess LOS that is equal to the marginal effect of the specific complication. For example, the additional LOS for a patient with pneumonia is 6.3 days, 23 days for stroke, and so on. In order to obtain a steady state and improve output reliability, the model warm-up period and replications number were calculated. The warm-up period was determined by visually inspecting a time-series graph of surgery waiting times [7]. The value for a warm-up period was found to be approximately 6 months. The variable selected for measuring the warm-up period was the waiting time for surgery. Data were collected only after a steady state was achieved. We determined that 30 replications were required. Decision on the model scope and level of detail are referred to as simplification and abstraction [23]. 
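As a side note on the distribution fitting mentioned above, a minimal check of the Poisson-arrival assumption (equivalently, exponential inter-arrival times) could be sketched as follows. The input file is hypothetical, and this is not the fitting workflow actually used alongside SIMUL8.

```python
# Rough sketch of a goodness-of-fit check for the Poisson-arrival assumption:
# if arrivals follow a Poisson process, inter-arrival times are exponential.
# The input file is hypothetical.
import numpy as np
from scipy import stats

arrival_days = np.loadtxt("elective_arrival_days.txt")
interarrival = np.diff(np.sort(arrival_days))

# Standard K-S test against an exponential with the rate estimated from the
# sample. (Estimating the parameter from the same data makes the test
# conservative; a Lilliefors-type correction would be stricter.)
scale = interarrival.mean()
ks_stat, p_value = stats.kstest(interarrival, "expon", args=(0, scale))
print(f"K-S statistic = {ks_stat:.3f}, p = {p_value:.3f}")
# A statistic below the critical value for the chosen alpha supports modelling
# elective arrivals as a Poisson process.
```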
In our model, it was important to include the right level of detail and system components that were directly associated with examining the problem at hand. Collection of outcome measures The effect of complications on the system operation was captured through collecting key performance indicators. In this section, we explain how these measures were derived: Number of surgery cancellations: When a patient with a complication is identified in the model, a series of visual logic codes are triggered. For instance, the model inspects if a surgery was cancelled due to a complication or any other reasons as cancellations can also happen for reasons such as unavailability of theatre times or CICU beds. In the model, the following conditions must be satisfied for a cancellation to occur. All of the CICU beds are full. At least one of the patients in the CICU is having a complication. An admitted patient is ready and waiting a surgery. The operating room is available during the regular working hour. To distinguish between the types of surgery cancellations, the model records the number of cancellations due to unavailability of operating room sessions, unavailability of CICU bed as well as cancellations due to patients developing complications. At this stage, a patient is delayed from proceeding to the next event in the simulation. However, they will take precedence over other patients for surgery. Bed turnover ratio: Bed turnover ratio is a measure of productivity of hospital beds and represents the number of patients treated per bed in a given period. It is computed according to the following formula (1). $$ \frac{Total\ number\ of\ discharges\ \left( including\ deaths\right)\ for\ a\ given\ time\ period}{Average\ bed\ count\ for\ the\ same\ period} $$ We further calculated the "lost bed days due to complications" by observing the number of bed days that have been lost due to complication. The lost bed day rate is the forgone opportunity of admitting a new patient when a bed was not available. Waiting time and waiting list: Waiting time can be a manifestation of insufficient capacity or inappropriate bed management [24]. Although complications might affect waiting time indirectly, it is important to trace their effect on waiting times to assess the hospital responsiveness. We only considered the waiting time related to patients scheduled for surgery. It should be noted that there are other elective patients who were admitted for non-surgical reasons. In the model, the order of the patient on the waiting list is updated each time a new patient enters the waiting list. At the end of the simulation, the model records both the average waiting time and the average waiting list size. Surgical throughputs: We define surgical throughputs as the number of patients who successfully received operations in a given time period. This measure can be related to the surgical cancellation measure discussed previously. However, it is possible that one type of complication can lead to surgery cancellation, yet the overall surgery throughputs remain unchanged. The previous outcome measures are also influenced by capacity and resources, as illustrated in Fig. 3, which will determine the degree to which complications can affect these measures. 
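To make the interplay between the flow logic and these outcome measures concrete, a deliberately simplified, self-contained sketch in SimPy is shown below (the study itself used SIMUL8 with Visual Logic). All rates, LOS values and complication probabilities are placeholders rather than the fitted inputs of Table 1, and non-elective priority, reoperation and rescheduling of cancelled cases are omitted.

```python
# Deliberately simplified SimPy sketch of the patient flow and KPI counting
# described above. Numbers are placeholders, not the study's fitted inputs;
# non-elective priority, reoperation and rescheduling of cancelled cases are
# omitted for brevity.
import random
import simpy

SIM_DAYS = 365
CICU_BEDS = 5
WARD_BEDS = 18

# Placeholder complication table: probability and excess postoperative days.
COMPLICATIONS = {"pneumonia": (0.07, 6.3), "stroke": (0.06, 23.0), "septicaemia": (0.03, 23.0)}

kpi = {"operated": 0, "cancelled_no_cicu_bed": 0, "discharged": 0}


def excess_los(rng):
    """Draw complications independently and sum their marginal excess days."""
    return sum(days for prob, days in COMPLICATIONS.values() if rng.random() < prob)


def patient(env, rng, cicu, ward):
    extra = excess_los(rng)

    with ward.request() as bed:          # pre-operative ward stay
        yield bed
        yield env.timeout(rng.expovariate(1 / 5.0))

    with cicu.request() as cicu_bed:     # surgery requires a downstream CICU bed
        got = yield cicu_bed | env.timeout(1.0)
        if cicu_bed not in got:
            kpi["cancelled_no_cicu_bed"] += 1
            return
        kpi["operated"] += 1
        # At least 48 h in the CICU, plus the share of excess LOS assumed to be
        # incurred there (40 % in the baseline assumption described below).
        yield env.timeout(2.0 + 0.4 * extra)

    with ward.request() as bed:          # post-operative ward recovery
        yield bed
        yield env.timeout(rng.expovariate(1 / 6.0) + 0.6 * extra)
    kpi["discharged"] += 1


def arrivals(env, rng, cicu, ward):
    while True:
        yield env.timeout(rng.expovariate(1 / 1.5))   # placeholder: ~1 arrival every 1.5 days
        env.process(patient(env, rng, cicu, ward))


rng = random.Random(42)
env = simpy.Environment()
cicu = simpy.Resource(env, capacity=CICU_BEDS)
ward = simpy.Resource(env, capacity=WARD_BEDS)
env.process(arrivals(env, rng, cicu, ward))
env.run(until=SIM_DAYS)
print(kpi)  # e.g. bed turnover = discharges / average bed count over the year
```

A fuller sketch would use a priority resource for non-elective cases, reschedule cancelled patients, and record blocked-bed time and waiting times in the same way.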
Relationships between complications, capacity and performance metrics Model assumptions Owing to unavailability of some data, we made the following assumptions to simplify the model: We assumed that 40 % of the postoperative complications occurred while patients were treated in the CICU unit and 60 % occurred in the ward. This assumption was made since we didn't have relevant data regarding the location and time of where and when complications have occurred during the patient hospitalisation. However, since prolonged ventilation >24 h was more likely to occur among CICU patients, this complication was limited to the CICU stay. All patients were categorised as elective or non-elective. In reality, another type of 'urgent patient' is considered in the hospital priority system. Only one surgery can take place each day. Non-elective patients are given priority and are operated on the next day. Scenarios evaluation We evaluated several policies that we thought might offer some potential operational improvements. These were divided into the following two categories. Modifying the rate of complications: An extreme scenario was assumed to eliminate all types of complications. Only complications deemed to be preventable were eliminated. In this case we focus on complications related to infections. Complications that are associated with the highest marginal hospital costs were eliminated. A marginal cost equal to or greater than the 75th percentile was used as a cut-off to indicate a high charge. This was equal to 1057.48 USD. The types of complication that met this cut-off were: permanent stroke, prolonged ventilation >24 h, other pulmonary complications, and septicaemia. Indirect strategies that can mitigate the effect of complications: Scheduling more surgeries by increasing the number of days when surgeries are performed. Adding more capacity to the CICU unit. Lowering ward postoperative LOS: results have shown that only 5 % of patients were discharged after the 5th postoperative day which may reflect that the LOS was influenced by local practices rather than clinical reasons. Results from statistical analysis In our dataset, 48 % of the patients experienced one or more complications. The most common types of complications were ventricular arrhythmia (16 %) followed by new atrial arrhythmia (15.5 %), prolonged ventilation longer than 24 h (12.5 %). The distribution of complications based on type is shown in Fig. 4. Cardiac complications occurred in 26 % of the patients, pulmonary complications occurred in 17 %, neurological complications affected 9.5 %, while 16 % of the patients had infections. The underlying distribution of the postoperative LOS of patients with and without complication was statistically significant (z = −9.320, P < 0.001). On average, patients with complications spent eight more postoperative days. The median postoperative hospital LOS was 8 days. Distribution of complications among the patients who experienced complications during their hospitalisation A Kruskal-Wallis H test revealed that there was a statistically significant difference between the types of surgeries and postoperative LOS. χ2(3) = 41, p < 0.001. Therefore, we further examined postoperative LOS distributions for each type individually and reflected this in the DES model. The excess LOS due to complications Table 2 lists the additional postoperative days associated with complications after adjusting for demographic variables and major comorbidities. 
The total number of additional days associated with infections was the highest, while cardiac complications resulted in the lowest number of incremental days of hospital stay.

Table 2 Marginal effect of complications on postoperative LOS

From Table 2, only two types of complications were not significant: neuropsychiatry complications (p = 0.36) and new atrial arrhythmia (p = .55). Surprisingly, ventricular arrhythmia, which was the commonest type of complication (see Fig. 5), was associated with only one extra day of postoperative LOS. The extra postoperative LOS attributable to stroke and septicaemia (both at 23 days) was the highest. Likewise, the corresponding average change in LOS associated with pneumonia was 6 days.

Boxplot graph for postoperative LOS distributions among patients with different complications

The Omani Riyal was fixed to the US dollar. Thus, the total costs were converted to US dollars (USD) by a multiplication factor of 2.56, which was the existing exchange rate at the start of the study (June 2013). Cardiac surgery was associated with a sizable number of expensive complications (Table 3). The highest marginal effect for hospital charges was related to stroke (3211 USD). The extra hospital charges associated with ventricular arrhythmia were only 170 USD, despite its high prevalence. Septicaemia and other pulmonary complications had significant associated costs (2452 and 2457 USD, respectively). On average, patients with pulmonary complications had the highest additional cost, 1415 USD, followed by 1375 USD for neurological complications, 561 USD for cardiac complications, and 793 USD for infection. The results confirmed the need to use individual complications instead of aggregating them (e.g. cardiac), as some complications were proportionally higher than others in the same category.

Table 3 Marginal costs associated with different types of complications

Results from the simulation model

For each scenario, the simulation model was run for 1 year with patient waiting times, surgery cancellations, surgery throughput, bed turnover, and cost as the outputs of interest. Comparison of averages over multiple simulation runs was necessary to accommodate the effect of random variation (e.g. LOS duration, arrival of new patients, etc.). A close inspection of the results revealed that patients occupying a bed due to a complication had a significant effect on several outcome measures. It was intuitive to compare the effect on the outcome measures when all complications were eliminated (scenario 1). Table 4 provides a comparison between a hypothetical state of no complications and the existing state.

Table 4 The effect of eliminating all complications on operational performance

The purpose of scenario 1 (i.e. eliminating all complications), albeit unrealistic, was to estimate the burden of complications on outcome measures and provide a sense of scale of this burden. A change in all statistical indicators was observed when complications were eliminated (Table 4). For example, waiting time for surgery fell from 5 to 1.36 days, a decrease of almost 73 %. In the model with zero complications, 23 more surgeries were performed. While ward bed turnover improved by a reasonable amount (+7.45), CICU bed turnover improved by a lesser amount (+2.57). This is due to the limited number of beds in the CICU unit. The total number of bed days lost due to complications was 310. On average, each bed in the cardiothoracic department was occupied 15 days a year by patients with complications.
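A minimal sketch of how such a scenario comparison can be summarised over independent replications is shown below; the replication outputs are fabricated placeholders, loosely centred on the throughput figures quoted above, not the study's results.

```python
# Sketch of summarising a scenario comparison over independent replications.
# The replication outputs below are fabricated placeholders for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(loc=174, scale=6, size=30)          # surgeries/year, baseline
no_complications = rng.normal(loc=197, scale=6, size=30)  # surgeries/year, scenario 1

diff = no_complications.mean() - baseline.mean()
se = np.sqrt(no_complications.var(ddof=1) / 30 + baseline.var(ddof=1) / 30)
t_stat, p_value = stats.ttest_ind(no_complications, baseline, equal_var=False)

print(f"extra surgeries per year ~ {diff:.1f} "
      f"(approx. 95% CI {diff - 1.96 * se:.1f} to {diff + 1.96 * se:.1f}), "
      f"Welch p = {p_value:.3g}")
```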
We further examined the effect of each type of complication on the system performance by adding each type to the model separately. Complications were aggregated based on four types (cardiac, pulmonary, infection, and neurological). Additionally, in order to estimate the effect of complications occurring in the CICU and ward separately, complications were only allowed to occur in the respective location in the model. The results are shown in Table 5. Table 5 The effect of each type of postoperative complications on operation metrics based on the location where patients experienced complications As can be seen from Table 5, pulmonary complications were the most common type associated with surgery cancellations. This is the case because pulmonary complications were common in the CICU and consequently they reduced availability of beds leading to surgery cancellations. According to the model output, it was unlikely that a surgery would be cancelled if patients were treated for complications in the ward. A notable exception was when patients experienced pulmonary complications in the ward which resulted in approximately five surgery cancellations. The category "other pulmonary complications" which constitutes 4.5 % of the total type of complications was associated with substantial postoperative excess LOS (11.34 days). These complications were consequently responsible for delaying patient transfer from the CICU unit. Pulmonary complications had also reduced the surgery throughputs more than any other type of complications. Scenario experimentations In this section, we provide results from other scenario experiments. Tables 6, 7, and 8 list the results from the six scenarios. Table 6 The effect of the six scenarios on waiting for surgery Table 7 The effect of the six scenarios on theatre performance Table 8 The effect of the six scenarios on bed turnover A substantial system improvement can be gained by lowering the rate of infections. The only outcome measure that was not improved by eliminating infections was surgery cancellation, which increased by one cancellation compared to the baseline scenario. Since septicaemia was associated with a very high incremental LOS, we examined the effect of reducing this complication by 50 %. The number of bed days that can be essentially saved by eliminating septicaemia are (23 days × the number of patients experiencing septicaemia). In the model, 50 % reduction in septicaemia resulted in reduced waiting times by 9 % from the baseline. Scenario 3 examined the elimination of high cost complications. As such, the results compared favourably across all outcomes. The rest of the scenarios were related to modifying the existing system. An increase in OR operating days dramatically increased the number of throughputs (204 vs 174 in the baseline). However, this increase was offset by the increase in surgery cancellations (15 vs. 9 in the baseline). Additionally, waiting time improved modestly (4 days vs. 5 days). In contrast, the addition of one extra CICU bed decreased waiting list size and cancellations. It also resulted in increased surgery throughputs and bed turnover. The proportion of patients who waited for surgery fell considerably when an extra bed was added. Finally, the reduction of postoperative LOS by 40 % reduced waiting times. However, it stimulated more cancellations than any other scenario. 
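One way to organise such experiments is as parameter overrides applied to a single baseline configuration, each run for a fixed number of replications. The sketch below uses a stub in place of a real simulation run; all names, values and the stub's outputs are illustrative, and only a subset of the six scenarios is shown.

```python
# Sketch of organising the scenario experiments as overrides on a baseline
# parameter set. `run_model` is a stub standing in for one DES replication
# (e.g. the SimPy sketch earlier); names and values are illustrative only.
import random
import statistics

BASELINE = {
    "or_days_per_week": 4,
    "cicu_beds": 5,
    "ward_postop_los_scale": 1.0,
    "complication_scale": 1.0,   # 1.0 = observed complication rates
}

SCENARIOS = {
    "baseline": {},
    "no_complications": {"complication_scale": 0.0},
    "extra_or_day": {"or_days_per_week": 5},
    "extra_cicu_bed": {"cicu_beds": 6},
    "shorter_ward_los": {"ward_postop_los_scale": 0.6},
}


def run_model(seed, **params):
    """Stub: a real version would run the simulation with `params` and `seed`
    and return KPIs such as mean waiting time; this one returns fabricated values
    so the loop is runnable."""
    rng = random.Random(hash((seed, frozenset(params.items()))))
    return {"mean_wait_days": rng.uniform(1.0, 6.0)}


for name, overrides in SCENARIOS.items():
    params = {**BASELINE, **overrides}
    waits = [run_model(seed, **params)["mean_wait_days"] for seed in range(30)]
    print(f"{name:18s} mean wait = {statistics.mean(waits):.2f} "
          f"(sd {statistics.stdev(waits):.2f}) over 30 replications")
```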
We attempted to estimate the variability of the results when the location of complications is changed using different assignment alternatives, such as 75 %:25 % for the CICU and ward respectively. We observed large discrepancies between the model results and historical data. Therefore, we concluded that the current assignment (40 % CICU:60 % ward) of where the complications originate and are treated is the best alternative for establishing the baseline model.

In a previous paper by the first author and colleagues using the same patient dataset as the current study [25], people who died (4 % of the patient population) were found to have a similar postoperative LOS distribution to those who survived. Therefore, the death rate in our study has a negligible impact on resource utilisation. In hospitals where mortality is higher, the impact of death on patient flow and operational performance can be profound. Therefore, the effect of death, as a complication, cannot be overlooked in modelling patient flow and resource use.

There are several methods to validate a simulation model. These include comparison with historical data, face validity, input-output transformation, and sensitivity analysis, among others [26, 27]. To validate our model we first met with the surgeons to ensure conceptual validity of the model (face validity). The aim was to verify that the simulation model was a credible representation of the system and that the theory behind its construction was acceptable. Second, historical data from 1 year were compared against predicted data (average from 30 simulation runs) [23]. To this end, the first step was to identify the key parameters with which to validate the model. We identified two important indicators, which are presented in the first column of Table 9. A t-test was used to test the null hypothesis that there is no statistical difference between the real and simulated sets. The null hypothesis of the two-tailed test is not rejected if |T| ≤ t(α/2, n − 1). Results of this test are presented in Table 9.

Table 9 Validation of the model against some historical indicators using hypothesis testing. Results of 30 replications

The observed and simulated datasets were similar, with small discrepancies. Thus, it can be concluded that the baseline model adequately represented the behaviour of the real-world system. We could not validate the number of cancellations that occurred due to complications as there were no records kept by the hospital. However, the average number of cancellations obtained from the simulation runs was reviewed by the surgeons and found to be reasonable and to approximate reality.

Our goal was to examine the effect of complications on some essential patient flow metrics. The findings from this study suggest that several postoperative complications were independently associated with increased hospital stay. Moreover, the marginal LOS attributable to these adverse events was a significant source of surgery cancellations, lower bed turnover rates, and extended waiting lists. The research was motivated by the lack of an existing mechanism to measure the impact of complications on operational performance. The feasibility of modelling adverse events and their effect on hospital resources, and thus operations, can provide compelling evidence for quality improvement initiatives. Furthermore, given the current economic climate in Oman, it is necessary to understand how adverse events such as infections would impact bed occupancy and accessibility levels.
The use of DES modelling in this paper to assess the effect of complications on operational performance was a novel approach. The main challenge was to trace the impact of adverse events across several processes and to quantify their effect on operations. In the DES model, we integrated process characteristics such as uncertainties surrounding patient arrivals along with existing complication data. We demonstrated the utility of the DES in quantifying the effect of complications on performance measures. This modelling approach permits decision makers to understand the specific impact of a particular complication on resources (e.g. bed usage) and to provide empirical evidence on the effect on performance. Our research extends the use of DES as a methodology for operational problems involving sequential events [28] by incorporating the incremental LOS associated with complications in the patient flow. Adverse events are directly linked to increased cost [29], and LOS [30]. The economic gain from reducing complications is well documented [31]. A study in the United States found that pneumonia following valve surgery was associated with a $29,692 increase in hospital costs and a 10.2-day increase in median LOS [32]. Post-CABG complications resulted in an incremental increase of 5.3 days in LOS among Medicare beneficiaries [29]. Patients with excessive postoperative haemorrhage were at risk of experiencing a stay in the CICU longer than 3 days, receiving ventilation longer than 24 h, and a return to the operating room for reexploration [33]. The effect of complications on patient flow and operational performance There is a scarcity of literature around the effect of complications on hospital performance beyond LOS and costs. However, we found that the incremental LOS associated with complications was a source of variation that affected operations. The variation was introduced as a result of a series of events triggered by complications. Much of the reduced operational performance was related to the occurrence of pulmonary complications. This can be attributable to two reasons. First, pulmonary complications such as postoperative respiratory failure are common following cardiac surgery [34–36]. This was also reflected in our dataset. For example, pneumonia and the need for prolonged ventilation were among the most commonly reported complications. Second, these complications are often associated with prolonged LOS [37, 38]. Likewise, neurological complications significantly increased waiting time and surgery cancellations. Much of this effect is related to stroke, which remains a devastating complication despite advances in perioperative care [39, 40]. Six percent of the patients in our dataset developed stroke and their LOS was among the highest of all patients. Unlike previous studies that have found significant LOS attributable to atrial fibrillation [41, 42], the excess LOS associated with atrial fibrillation in our study was less than 7 h. Improvement in the standard treatment of this complication might have contributed toward lowering patient LOS. In general, cardiac complications had the lowest effect on waiting time, surgery throughputs and surgery cancellations. The results also demonstrated that adverse events which occurred early in the CICU had a higher impact than those that have occurred in the ward. This was due to the limited number of beds in the CICU unit. 
The risk factors of some of the adverse events, such as stroke and pulmonary complications, are known, and improvement in operational performance can be realised by effectively dealing with potentially modifiable risk factors [43, 44].

In our model, we had two waiting lists (for surgical and non-surgical patients). Surgical patients were given priority over non-surgical patients. The average waiting time for surgical patients was considerably lower, as waiting time for cardiac surgery was not an issue in this particular hospital. However, waiting for cardiac surgery has been considered one of the most important issues in many hospitals [45]. We incorporated waiting time in our model as many operational issues eventually manifest in the form of extended waiting times. There are several factors that affect waiting time. Previous research has not linked them to the occurrence of adverse events; the focus has instead been on determining the effect of prolonged waiting time on morbidity and mortality [46, 47].

Under the six scenarios in this study, waiting times were compared to the existing state. We observed that adding an extra CICU bed did not improve waiting time considerably. This mainly occurred as a result of the increased number of patients. It is known that demand for resources in healthcare is dependent on supply [48]. Hence the expression 'if you build it they will come' can be relevant in this situation. Extra capacity can induce demand for services, and unless the complication rate can be reduced, adding physical capacity might not be the optimal solution; previous research has found that average waiting time may increase at higher levels of utilisation [49]. The pressure that a given level of utilisation places on a service can be expressed by the simple ratio $$ \mathrm{utilisation}/\left(1-\mathrm{utilisation}\right). $$ For example, the utilisation of CICU beds in our example was .82. The ratio of .82/(1−.82) equates to 5.55. When an extra bed was added, this ratio increased to .86/(1−.86) = 5.85.

In the model, eliminating infections or high-cost complications is a viable option that can save lives, improve patient satisfaction and contribute toward improving hospital productivity. The selection between adding more resources, such as one extra CICU bed, and investing in quality programmes to reduce complications should be evaluated based on how much potential cost will be avoided (e.g. costs associated with the extra LOS). While ICU capacity strain is linked to increased morbidity and lost hospital revenue, increasing the number of ICU beds increases the hospital's fixed costs at the same time [50]. Based on our results, some efficiency can be gained by reducing complications. This would allow existing resources to be used to produce the greatest output.

The CICU services at the facility were in constant high demand from surgical and non-surgical patients. With a limited number of CICU beds in the country, a non-refusal policy for CICU access is critical for ensuring an unimpeded flow of patients. Theoretically, most infections are preventable. In for-profit hospitals, the extra cost that might be incurred to finance quality initiatives aimed at reducing infections, for example, could be defrayed in part by the increased revenue from the additional admissions made possible by improved bed turnaround (scenarios 1, 2, and 3). However, it should be noted that high bed occupancy might leave units understaffed and, in turn, increase the number of patients experiencing complications [51].
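Returning to the utilisation ratio quoted above, a tiny illustration of why extra capacity buys progressively less relief as occupancy rises (the values below are generic, not the model's outputs):

```python
# Toy illustration of the utilisation ratio discussed above: the quantity
# utilisation / (1 - utilisation) grows steeply as occupancy approaches 1,
# which is why a single extra CICU bed bought relatively little relief here.
for u in (0.50, 0.70, 0.80, 0.90, 0.95, 0.99):
    print(f"utilisation {u:.2f} -> ratio {u / (1 - u):5.1f}")
```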
While our intention was to model postoperative complications, postoperative LOS appeared to be an issue in this hospital. Less than 5 % of the patients were discharged home after the 5th day post-surgery which could reflect the influence of local practice rather than the medical conditions of the patients. We chose to test a scenario where postoperative LOS was reduced by 40 %. The decrease was coupled with an increased cancellation rate. The freed capacity in the ward stimulated an increase in the number of patients who were treated in the CICU, thus contributing to the high utilisation of beds and leading to a higher cancellations rate. Respectively, preoperative LOS was considerably high, averaging 5 days. This has been recognised as a problem in many healthcare systems [52]. The move toward "same-day surgery" programs was a response to avoid unnecessary LOS that adds cost and might not add value to the patient's care [53]. In general, prolonged hospitalisation is associated with an increased risk of complications [54] and may indicate shortcomings in patient safety [55]. Limitations of the statistical models One potential limitation of this study is the extent to which its results can be generalisable. The data pertain to a specific population and specific setting, therefore, results might not be generalisable to other populations or settings with different characteristics. However, the method and interpretation of the models are generalisable. There are various factors affecting LOS and resource utilization aside from complications, such as physician judgments, hospital policy, and adequacy of resources. The current study was limited by availability of data that were routinely collected. Therefore, the factors that were not accounted for when calculating the excess LOS attributable to each type of complication might have a significant effect. However, we think the existing data were sufficient to provide an overall measure for predicting excess LOS, as evidenced by high discriminative power. Limitations of the simulation model One of the limitations of the simulation model was the absence of data on the location where each complication originated. This can have a significant impact on results concerning resource utilisation in the CICU and the ward. As such, complications leading to prolonged LOS in the CICU would have a greater impact on patient flow than complications occurring in the ward. Second, it was difficult to track whether a surgery cancellation was due to the occurrence of complications in the downstream beds or to other reasons. Instead, we obtained subjective expert opinions to compensate for this missing variable. The reader should be aware that the number of cardiac surgeries in the hospital under study was relatively low. The implication for this is that the pressure on resources was relatively less compared to other hospitals. Thus, the hospital might not have the incentive to expedite patient discharge. Moreover, hospitals in Oman are not required to meet specific waiting time targets for cardiac surgery. In healthcare systems where waiting times are closely monitored, LOS are expected to be shorter to accommodate more patients from the waiting list. The study provides evidence supporting the need to incorporate adverse events in resource planning to improve hospital performance. We attempted to quantify the effect of complications using DES. We found a significant impact of complications on LOS, surgery cancellations, and waiting list size. 
The effect on operational performance was profound when complications occurred in the CICU, where capacity was limited. Excess LOS spent in the hospital constitutes a lost opportunity for admitting more patients. A marked decrease in adverse events would be required to deal effectively with the negative consequences for system performance. The growth of cardiac care services in Oman has been slow relative to the population density. Maximising existing resources would be an option, as adding more resources might not guarantee a higher level of service. One way to accomplish this is by reducing avoidable complications; in our model this not only reduced cost but also significantly improved performance on other metrics. As there is a scarcity of research quantifying the effect of complications on patient flow and overall operational performance, we recommend further research in this area. An explicit measure of this effect should be an integral part of hospital resource planning to improve resource utilisation and the perioperative patient experience. Hospitals may consider integrating the method discussed in this study into existing health information systems. While our study was based on cardiac surgical patients, the methodology can be applied to other specialties. For further development, researchers should investigate the effect of complications related to other, higher-volume procedures such as general surgery. Moreover, modellers should consider surgical complications that occur in the OR: in hospitals with a high demand for operating theatres, unexpected complications can lead to surgical time exceeding the allocated slot, which eventually results in the postponement of other surgeries. Secondly, in this study we did not model the relationship between prolonged hospital stay and the increased likelihood of morbidity; future research might consider this relationship. Finally, hospital-wide modelling of complications is needed. A system-wide approach such as this would allow a better understanding of how complications affect resources and hospital performance.

CABG: Coronary artery bypass grafting
CICU: Cardiac intensive care unit
K-S: Kolmogorov–Smirnov test
MEMS: Marginal effects at the means

Melendez JA, Carlon VA. Cardiopulmonary risk index does not predict complications after thoracic surgery. CHEST J. 1998;114(1):69–75. Azari-Rad S, Yontef A, Aleman DM, Urbach DR. A simulation model for perioperative process improvement. Oper Res Health Care. 2014;3(1):22–30. Katsaliaki K, Mustafee N. Applications of simulation within the healthcare context. J Oper Res Soc. 2011;62(8):1431–51. Fone D, Hollinghurst S, Temple M, Round A, Lester N, Weightman A, Roberts K, Coyle E, Bevan G, Palmer S. Systematic review of the use and value of computer simulation modelling in population health and health care delivery. J Public Health. 2003;25(4):325–35. Eldabi T, Paul R, Young T. Simulation modelling in healthcare: reviewing legacies and investigating futures. J Oper Res Soc. 2006;58(2):262–70. Taylor K, Lane D. Simulation applied to health services: opportunities for applying the system dynamics approach. J Health Serv Res Policy. 1998;3(4):226–32. Robinson S. Simulation: the practice of model development and use. Chichester: Wiley; 2004. Bowers J. Balancing operating theatre and bed capacity in a cardiothoracic centre. Health Care Manag Sci. 2013;16(3):236–44. Ranucci M, Bozzetti G, Ditta A, Cotza M, Carboni G, Ballotta A.
Surgical reexploration after cardiac operations: why a worse outcome? Ann Thorac Surg. 2008;86(5):1557–62. Fowler VG, O'Brien SM, Muhlbaier LH, Corey GR, Ferguson TB, Peterson ED. Clinical predictors of major infections after cardiac surgery. Circulation. 2005;112(9 suppl):I-358–65. Butler J, Rocker GM, Westaby S. Inflammatory response to cardiopulmonary bypass. Ann Thorac Surg. 1993;55(2):552–9. Al‐Khabori M, Al‐Riyami A, Mukaddirov M, Al‐Sabti H. Transfusion indication predictive score: a proposed risk stratification score for perioperative red blood cell transfusion in cardiac surgery. Vox Sang. 2014;107(3):269–75. Graf K, Ott E, Vonberg R-P, Kuehn C, Haverich A, Chaberny IF. Economic aspects of deep sternal wound infections. Eur J Cardiothorac Surg. 2010;37(4):893–6. Samore MH, Shen S, Greene T, Stoddard G, Sauer B, Shinogle J, Nebeker J, Harbarth S. A simulation-based evaluation of methods to estimate the impact of an adverse event on hospital length of stay. Med Care. 2007;45(10):S108–15. Toumpoulis IK, Anagnostopoulos CE, Swistel DG, DeRose JJ. Does EuroSCORE predict length of stay and specific postoperative complications after cardiac surgery? Eur J Cardiothorac Surg. 2005;27(1):128–33. Smárason NV, Sigurjónsson H, Hreinsson K, Arnorsson T, Gudbjartsson T. Reoperation for bleeding following open heart surgery in Iceland. Laeknabladid. 2009;95(9):567–73. Kristensen KL, Rauer LJ, Mortensen PE, Kjeldsen BJ. Reoperation for bleeding in cardiac surgery. Interact Cardiovasc Thorac Surg. 2012;14(6):709–13. Austin PC, Rothwell DM, Tu JV. A comparison of statistical modeling strategies for analyzing length of stay after CABG surgery. Health Serv Outcome Res Methodol. 2002;3(2):107–33. Verburg IW, de Keizer NF, de Jonge E, Peek N. Comparison of Regression Methods for Modeling Intensive Care Length of Stay. PLoS One. 2014;9:e109684. Osler TM, Rogers FB, Hosmer DW. Estimated additional hospital length of stay caused by 40 individual complications in injured patients: an observational study of 204,388 patients. J Trauma Acute Care Surg. 2013;74(3):921–5. Williams R. Using the margins command to estimate and interpret adjusted predictions and marginal effects. Stata J. 2012;12(2):308. Kelton WD, Law AM. Simulation modeling and analysis. Boston: McGraw Hill; 2000. Robinson S. Conceptual Modelling for Discrete-Event Simulation. Florida: CRC Press INC; 2010. HOPE. Measuring and comparing waiting lists a study in four European countries. In: Brussels standing committee of the hospitals of the European Union. 2004. Almashrafi A, Alsabti H, Mukaddirov M, Balan B, Aylin P. Factors associated with prolonged length of stay following cardiac surgery in a major referral hospital in Oman: a retrospective observational study. BMJ Open. 2016;6(6):e010764. Nelson BL, Carson JS, Banks J. Discrete event system simulation. New Jersey: Prentice Hall; 2001. Sargent RG. Verification and validation of simulation models. In: Proceedings of the 37th conference on Winter simulation: 2005. Winter Simulation Conference; New Jersey, 2005. p. 130–43. Bayer S. Simulation modelling and resource allocation in complex services. BMJ Qual Saf. 2014;23(5):353–5. Brown PP, Kugelmass AD, Cohen DJ, Reynolds MR, Culler SD, Dee AD, Simon AW. The frequency and cost of complications associated with coronary artery bypass grafting surgery: results from the United States Medicare program. Ann Thorac Surg. 2008;85(6):1980–6. Herwaldt LA, Cullen JJ, Scholz D, French P, Zimmerman MB, Pfaller MA, Wenzel RP, Perl TM. 
A prospective study of outcomes, healthcare resource utilization, and costs associated with postoperative nosocomial infections. Infect Control. 2006;27(12):1291–8. Eappen S, Lane BH, Rosenberg B, Lipsitz SA, Sadoff D, Matheson D, Berry WR, Lester M, Gawande AA. Relationship between occurrence of surgical complications and hospital finances. JAMA. 2013;309(15):1599–606. Iribarne A, Burgener JD, Hong K, Raman J, Akhter S, Easterwood R, Jeevanandam V, Russo MJ. Quantifying the incremental cost of complications associated with mitral valve surgery in the United States. J Thorac Cardiovasc Surg. 2012;143(4):864–72. Christensen MC, Krapf S, Kempel A, von Heymann C. Costs of excessive postoperative hemorrhage in cardiac surgery. J Thorac Cardiovasc Surg. 2009;138(3):687–93. Ji Q, Mei Y, Wang X, Feng J, Cai J, Ding W. Risk factors for pulmonary complications following cardiac surgery with cardiopulmonary bypass. Int J Med Sci. 2013;10(11):1578. Badenes R, Lozano A, Belda FJ. Postoperative Pulmonary Dysfunction and Mechanical Ventilation in Cardiac Surgery. Crit Care Res Prac. 2015;2015:420513. Wynne R, Botti M. Postoperative pulmonary dysfunction in adults after cardiac surgery with cardiopulmonary bypass: clinical significance and implications for practice. Am J Crit Care. 2004;13(5):384–93. Collins TC, Daley J, Henderson WH, Khuri SF. Risk factors for prolonged length of stay after major elective surgery. Ann Surg. 1999;230(2):251. Lazar HL, Fitzgerald C, Gross S, Heeren T, Aldea GS, Shemin RJ. Determinants of length of stay after coronary artery bypass graft surgery. Circulation. 1995;92(9):20–4. Bucerius J, Gummert JF, Borger MA, Walther T, Doll N, Onnasch JF, Metz S, Falk V, Mohr FW. Stroke after cardiac surgery: a risk factor analysis of 16,184 consecutive adult patients. Ann Thorac Surg. 2003;75(2):472–8. Stamou SC, Hill PC, Dangas G, Pfister AJ, Boyce SW, Dullum MK, Bafi AS, Corso PJ. Stroke after coronary artery bypass incidence, predictors, and clinical outcome. Stroke. 2001;32(7):1508–13. Aranki SF, Shaw DP, Adams DH, Rizzo RJ, Couper GS, VanderVliet M, Collins JJ, Cohn LH, Burstin HR. Predictors of atrial fibrillation after coronary artery surgery current trends and impact on hospital resources. Circulation. 1996;94(3):390–7. LaPar DJ, Speir AM, Crosby IK, Fonner Jr E, Brown M, Rich JB, Quader M, Kern JA, Kron IL, Ailawadi G, et al. Postoperative atrial fibrillation significantly increases mortality, hospital readmission, and hospital costs. Ann Thorac Surg. 2014;98(2):527–33. discussion 533. John R, Choudhri AF, Weinberg AD, Ting W, Rose EA, Smith CR, Oz MC. Multicenter review of preoperative risk factors for stroke after coronary artery bypass grafting. Ann Thorac Surg. 2000;69(1):30–5. Agostini P, Cieslik H, Rathinam S, Bishay E, Kalkat M, Rajesh P, Steyn R, Singh S, Naidu B. Postoperative pulmonary complications following thoracic surgery: are there any modifiable risk factors? Thorax. 2010;65(9):815–8. NHS. A guide to commissioning cardiac surgical services. London: 2010. http://www.scts.org/_userfiles/resources/634586925235172296_Cardiac_Surgery_Commissioning_Guide.pdf. Koomen EM, Hutten BA, Kelder JC, Redekop WK, Tijssen JG, Kingma JH. Morbidity and mortality in patients waiting for coronary artery bypass surgery. Eur J Cardiothorac Surg. 2001;19(3):260–5. Sampalis J, Boukas S, Liberman M, Reid T, Dupuis G. Impact of waiting time on the quality of life of patients awaiting coronary artery bypass grafting. Can Med Assoc J. 2001;165(4):429–33. Proudlove N, Boaden R. 
Using operational information and information systems to improve in-patient flow in hospitals. J Health Organ Manag. 2005;19(6):466–77. Terwiesch C, Diwas K, Kahn JM. Working with capacity limitations: operations management in critical care. Crit Care. 2011;15(4):308. Kahn JM. The risks and rewards of expanding ICU capacity. Crit Care. 2012;16(5):1–2. Dang D, Johantgen ME, Pronovost PJ, Jenckes MW, Bass EB. Postoperative complications: does intensive care unit staff nursing make a difference? Heart Lung: J Acute Crit Care. 2002;31(3):219–28. Luigi S, Michael B, Valerie M. OECD Health Policy Studies Waiting Time Policies in the Health Sector What Works? Paris: OECD Publishing; 2013. Cella AS, Bush CA, Codignotto B. Same-day admission for cardiac surgery: a benefit to patient, family, and institution. J Cardiovasc Nurs. 1993;7(4):14–29. Hassan M, Tuckman HP, Patrick RH, Kountz DS, Kohn JL. Hospital length of stay and probability of acquiring infection. Int J Pharm Healthc Mark. 2010;4(4):324–38. Borghans I, Hekkert KD, den Ouden L, Cihangir S, Vesseur J, Kool RB, Westert GP. Unexpectedly long hospital stays as an indicator of risk of unsafe care: an exploratory study. BMJ Open. 2014;4(6):e004773.

The authors would like to thank Dr Hilal Al-Sabti, senior consultant cardiothoracic surgeon at the Sultan Qaboos University, for his guidance and support in data collection. We would also like to thank the two referees for their helpful comments. This research was part of AA's PhD, which was funded by the government of Oman. The dataset will not be shared as a supplementary file, as our ethical approval does not permit sharing of the entire dataset. AA participated in the data collection, performed the literature review, wrote the first draft of the manuscript and conducted the statistical analysis. LV helped draft the manuscript and appraised the study quality. Both authors read and approved the final manuscript. Ethical approval to conduct this study was granted by the Sultan Qaboos University Hospital and the research ethics committee of the Sultan Qaboos University (Reference number: MREC#798). Data for this study came from a prospectively collected database maintained by the cardiothoracic department at the hospital. Consent was obtained from patients or surrogates to collect their data for research purposes.

Department of Primary Care and Public Health, School of Public Health, Imperial College London, Charing Cross Campus, Reynolds Building, St Dunstans Road, London, W6 8RP, UK
Ahmed Almashrafi & Laura Vanderbloemen
Correspondence to Ahmed Almashrafi.

Almashrafi, A., Vanderbloemen, L. Quantifying the effect of complications on patient flow, costs and surgical throughputs. BMC Med Inform Decis Mak 16, 136 (2016) doi:10.1186/s12911-016-0372-6

Incremental LOS
May 2016's Entries

Good News: Good news about four former students.
Castles in the Air: Some thoughts on the value of nonrigorous mathematics.
The HoTT Effect: HoTT conference season.
E8 as the Symmetries of a PDE: Work by Dennis The that expresses E8 as the symmetries of an explicit PDE.
The Works of Charles Ehresmann: Charles Ehresmann's complete works, over 3000 pages of mathematics, are now available for free online.
Man Ejected from Flight for Solving Differential Equation: Yes, really.
Which Paths Are Orthogonal to All Cycles?: A question about the first homology group of a graph.
Categorifying Lucas' Equation: Gavin Wraith asks: can you find a nice bijective proof that $1^2 + \cdots + 24^2 = 70^2$?

Various bits of good news concerning my former students Alissa Crans, Derek Wise, Jeffrey Morton and Chris Rogers. Continue reading "Good News" …

The most recent issue of the Notices includes a review by Slava Gerovitch of a book by Amir Alexander called Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. As the reviewer presents it, one of the main points of the book is that science was advanced the most by the people who studied and worked with infinitesimals despite their apparent formal inconsistency. The following quote is from the end of the review:

If… maintaining the appearance of infallibility becomes more important than exploration of new ideas, mathematics loses its creative spirit and turns into a storage of theorems. Innovation often grows out of outlandish ideas, but to make them acceptable one needs a different cultural image of mathematics — not a perfectly polished pyramid of knowledge, but a freely growing tree with tangled branches.

The reviewer makes parallels to more recent situations such as quantum field theory and string theory, where the formal mathematical justification may be lacking but the physical theory is meaningful, fruitful, and made correct predictions, even for pure mathematics. However, I couldn't help thinking of recent examples entirely within pure mathematics as well, and particularly in some fields of interest around here. Continue reading "Castles in the Air" …

Martin-Löf type theory has been around for years, as have category theory, topos theory and homotopy theory. Bundle them all together within the package of homotopy type theory, and philosophy suddenly takes a lot more interest. If you're looking for places to go to hear about this new interest, you are spoilt for choice:

CFA: Foundations of Mathematical Structuralism, Munich, 12-14 October 2016 (see below for a call for papers).
FOMUS, Foundations of Mathematics: Univalent foundations and set theory, Bielefeld, 18-23 July 2016.
Homotopy Type Theory in Logic, Metaphysics and Philosophy of Physics, Bristol, 13-15 September 2016.

For an event which delves back also to pre-HoTT days, try my Type Theory and Philosophy, Canterbury, 9-10 June 2016. Continue reading "The HoTT Effect" …

Posted by John Huerta

My friend Dennis The recently gave a new description of the Lie algebra of $\mathrm{E}_8$ (as well as all the other complex simple Lie algebras, except $\mathfrak{sl}(2,\mathbb{C})$) as the symmetries of a system of partial differential equations. Even better, when he writes down his PDE explicitly, the exceptional Jordan algebra makes an appearance, as we will see.

Dennis The, Exceptionally simple PDE.
This is a story with deep roots: it goes back to two very different models for the Lie algebra of $\mathrm{G}_2$, one due to Cartan and one due to Engel, which were published back-to-back in 1893. Dennis figured out how these two results are connected, and then generalized the whole story to nearly every simple Lie algebra, including $\mathrm{E}_8$. Continue reading "E8 as the Symmetries of a PDE" …

Charles Ehresmann's complete works are now available for free here: Charles Ehresmann: Oeuvres complètes et commentées. There are 630 pages on algebraic topology and differential geometry, 800 pages on local structures and ordered categories, and their applications to topology, 900 pages on structured categories and quotients and internal categories and fibrations, and 850 pages on sketches and completions and sketches and monoidal closed structures. That's 3180 pages! On top of this, more issues of the journal he founded, Cahiers de Topologie et Géométrie Différentielle Catégoriques, will become freely available online. Continue reading "The Works of Charles Ehresmann" …

A professor of economics was escorted from an American Airlines flight and questioned by secret police after the woman in the next seat spotted him writing page after page of mysterious symbols. It's all over the internet. Press reports do not specify which differential equation it was. Although his suspiciously Mediterranean appearance may have contributed to his neighbour's paranoia, the professor has the privilege of not having an Arabic name and says he was treated with respect. He's Italian. The flight was delayed by an hour or two, he was allowed to travel, and no harm seems to have been done. Unfortunately, though, this story is part of a genre. It's happening depressingly often in the US that Muslims (and occasionally others) are escorted off planes and treated like criminals on the most absurdly flimsy pretexts. Here's a story where some passengers were afraid of the small white box carried by a fellow passenger. It turned out to contain baklava. Here's one where a Berkeley student was removed from a flight for speaking Arabic, and another where a Somali woman was ejected because a flight attendant "did not feel comfortable" with her request to change seats. The phenomenon is now common enough that it has acquired a name: "Flying while Muslim".

Greg Egan and I have been thinking about topological crystallography, and I bumped into a question about the homology of a graph embedded in a surface, which I feel someone should have already answered. Do you know about this? I'll start with some standard stuff. Suppose we have a directed graph $\Gamma$. I'll write $e : v \to w$ when $e$ is an edge going from the vertex $v$ to the vertex $w$. We get a vector space of 0-chains $C_0(\Gamma,\mathbb{R})$, which are formal linear combinations of vertices, and a vector space of 1-chains $C_1(\Gamma,\mathbb{R})$, which are formal linear combinations of edges. We also get a boundary operator $$\partial : C_1(\Gamma,\mathbb{Z}) \to C_0(\Gamma,\mathbb{Z})$$ sending each edge $e : v \to w$ to the difference $w - v$. A 1-cycle is a 1-chain $c$ with $\partial c = 0$. There is an inner product on 1-chains for which the edges form an orthonormal basis. Any path in the graph gives a 1-chain. When is this 1-chain orthogonal to all 1-cycles? To make this precise, and interesting, I should say a bit more. Continue reading "Which Paths Are Orthogonal to All Cycles?" …
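To make the definitions just quoted concrete, here is a small Python sketch (not from the post itself) that builds the boundary matrix of a toy directed graph, computes a basis of the 1-cycles as the null space of the boundary operator, and tests whether the 1-chain of a chosen edge path is orthogonal to every cycle; the example graph and all names are ours.

```python
# Toy illustration of the setup above: boundary matrix of a directed graph,
# cycle space as the null space of the boundary operator, and an orthogonality
# test for the 1-chain of an edge path. The graph below is made up.
import numpy as np
from scipy.linalg import null_space

vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]   # e : v -> w

# Boundary operator: each edge e : v -> w maps to w - v.
boundary = np.zeros((len(vertices), len(edges)))
for j, (v, w) in enumerate(edges):
    boundary[vertices.index(v), j] -= 1
    boundary[vertices.index(w), j] += 1

cycles = null_space(boundary)            # columns form a basis of the 1-cycles
path_chain = np.array([1, 1, 0, 1])      # 1-chain of the path a -> b -> c -> d

# The path is orthogonal to all cycles iff these inner products all vanish.
print(np.allclose(cycles.T @ path_chain, 0))   # False for this example
```

Here the triangle a → b → c → a spans the cycle space and the chosen path overlaps it, so the test fails; a path that only used the bridge edge c → d would pass.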
In 1875, Édouard Lucas challenged his readers to prove this: A square pyramid of cannon balls contains a square number of cannon balls only when it has 24 cannon balls along its base. In other words, the 24th square pyramid number is also a perfect square: $$1^2 + 2^2 + \cdots + 24^2 = 70^2$$ and this is only true for 24. Nitpickers will note that it's also true for 0 and 1. However, these are the only three natural numbers $n$ such that $1^2 + 2^2 + \cdots + n^2$ is a perfect square. This fact was only proved in 1918, with the help of elliptic functions. Since then, more elementary proofs have been found. It may seem like much ado about nothing, but actually this fact about the number 24 underlies the simplest construction of the Leech lattice! So, understanding it better may be worthwhile. Gavin Wraith has a new challenge, which is to find a bijective proof that the number 24 has this property. But part of this challenge is to give a precise statement of what counts as success! I'll let him explain… Continue reading "Categorifying Lucas' Equation" …
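For readers who want to see the numerical side of the claim (the bijective challenge is of course another matter), here is a quick brute-force check in Python, written for this note rather than taken from the post:

```python
# Brute-force check: for which n is 1^2 + 2^2 + ... + n^2 a perfect square?
from math import isqrt

total, hits = 0, []
for n in range(1, 100_001):
    total += n * n                      # running square pyramidal number
    if isqrt(total) ** 2 == total:      # exact perfect-square test
        hits.append((n, isqrt(total)))

print(hits)   # [(1, 1), (24, 70)] -- and nothing else up to 100000
```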
Feasibility study of early prediction of postoperative MRI findings for knee stability after anterior cruciate ligament reconstruction

Jianqiang Zhang, Jiyao Ma, Juan Huang, Guoliang Wang, Yilong Huang, Zhenhui Li, Jun Yan, Xiaomin Zeng, Hongli Zhu, Wei Zhao, Yanlin Li & Bo He

At present, the most effective and mature treatment after ACL injury and tear is ACL reconstruction, but the rehabilitation process after ACL reconstruction is very long, so it is important to identify early positive MRI findings of knee instability. We retrospectively collected the clinical and imaging data of 70 patients who underwent ACL reconstruction from January 2016 to December 2019. Based on clinical criteria, the patients were divided into a stable group (n = 57) and an unstable group (n = 13). We measured MRI evaluation indexes including the position of the bone tunnel, graft status and anatomical factors, and statistical methods were used to compare the differences in these imaging indexes between the two groups. A prediction equation was constructed, and ROC curves were used to compare the predictive performance of the independent predictors and the prediction equation. There were significant differences between the stable and unstable knee joint groups after ACL reconstruction in the abnormal position of the tibial tunnel entrance, the percentage position of the tibial tunnel entrance, the position of the tibial tunnel exit, the lateral tibial posterior slope (LTPS) and the width of the intercondylar notch (P < 0.05). The position of the tibial tunnel exit, the lateral tibial posterior slope (LTPS) and the sagittal obliquity of the graft were independent predictors among the surgical and anatomical factors (P < 0.05). The prediction equation for postoperative knee stability was established: Logit(P) = −1.067 − 0.231 × position of tibial tunnel exit + 0.509 × lateral tibial posterior slope (LTPS) − 2.105 × sagittal obliquity of the graft. The prediction equation predicted knee instability with an AUC of 0.915, a sensitivity of 84.6% and a specificity of 91.2%. We found that abnormalities of the position of the exit of the bone tunnel, the lateral tibial posterior slope (LTPS) and the sagittal obliquity of the graft were early positive MRI findings of knee instability after ACL reconstruction, which can help clinicians predict the stability of the knee joint after ACL reconstruction.

The anterior cruciate ligament (ACL) and posterior cruciate ligament (PCL) are important static structures that maintain the stability of the knee joint. They connect the femur and tibia, maintain the stability of the knee joint and limit forward movement of the tibia during movements that require a change of direction. If there is too much pressure on the knee, the ACL may be damaged or torn completely. Post-traumatic osteoarthritis often occurs after ACL injury regardless of surgical intervention; if left untreated, the knee joint becomes unstable and wear of the articular cartilage and meniscus is aggravated [1]. The most effective and mature treatment for ACL tears is ACL reconstruction, which has become one of the most common operations in orthopedics and sports medicine. Although arthroscopic ACL reconstruction is a mature technique, its failure rate remains 6%-12% [2]. Because of the biomechanical properties of ACL reconstruction grafts, ACL reconstruction does not restore knee function immediately.
It requires a long recovery process and the entire recovery cycle is as long as two years [3]. The common reasons for the early failure of ACL reconstruction surgery are poor surgical techniques, failure of graft maturation, and wrong postoperative rehabilitation methods. Graft disruption is the disruption of the graft fiber continuity. The causes of late onset failure include re-injury after transplantation, graft disruption, etc. [4]. Therefore, follow-up in the postoperative rehabilitation process has become an important part to evaluate the situation of the patient's recovery, and clinicians will adjust the corresponding rehabilitation plan according to the results of the patient's follow-up. Postoperative follow-up mainly includes clinical physical examination and imaging investigation. Physical examination is limited by the clinician's experience and subjective judgment, and it cannot truly reflect the biomechanical properties of the graft and the postoperative recovery, but Imaging can provide objective evidence for clinicians. Scholars such as Wilson et al., Clatworthy et al. and Fahey et al. [5,6,7] proposed that the graft is most likely to be damaged in 4–8 months after the reconstruction, and the enlargement of the inner diameter of the tunnel occurred in 3 months after the operation. Previous studies have described imaging features linked to the stability of the knee and ACL injury [8]. In this study, the basic imaging manifestation in the early stage of postoperative patient was summarized and analyzed by reexamining MRI within one week after operation, comparing the correlation between postoperative MRI evaluation and clinical assessment, early predicting the stability of the knee joint after ACL reconstruction and providing an objective evidence for the clinic, preventing and treating postoperative complications as soon as possible, and increasing the probability of success after surgery and improving patient quality of life. Research object A total of 348 patients were collected who underwent ACL reconstruction operation in the Sports Medicine Department of the First Affiliated Hospital of Kunming Medical University from January 2016 to December 2019. Among them, 223 patients were selected who underwent MRI within one week after surgery. The inclusion and exclusion criteria of patients are shown in (Fig. 1). After exclusion of unsuitable patients, a total of 70 patients were included in this study. Inclusion process of enrolled patients Patient inclusion criteria After ACL injury and arthroscopic ACL reconstruction, the patient's sex and age were not limited; ACL reconstruction of unilateral knee joint only; The reconstruction autograft is a composite tendon of semitendinosus and gracilis; Clinical physical examination after ACL reconstruction: assessment of knee joint function and stability. Patient exclusion criteria Patients with other important organ injuries such as brain, chest and abdomen before operation; Combined with other injuries of the knee joint, such as posterior cruciate ligament injury, medial and lateral ligament injury, meniscus injury and so on, which will affect the stability of the knee joint. Combined with other systemic diseases, such as rheumatoid arthritis, gouty arthritis, etc., which will affect functional recovery. Re-rupture of the graft caused by secondary injury after surgery. The patient underwent ACL revision reconstruction. Patients with complications such as infection after surgery. 
Those patients who cannot evaluate bone tunnels and intra-articular grafts due to artifacts. Clinical assessment criteria and grouping of knee joint function and stability All selected patients underwent clinical knee joint stability examination before MR: Lachman test [9]: The patient lay flat on the examination bed, keep their muscles relaxed, make their injured knee flexed to 20 ~ 30 degrees and feet flat; Then the examiner uses one hand to stabilize the distal femur while using the other hand to grasp the proximal tibia; Next, push back and forth in the opposite direction; Compared with the healthy side, if there is more forward movement than the healthy side, it is regarded as positive. If the Lachman test is positive, it is considered that there is instability in the knee joint. Take the patient's clinical assessment as the standard, patients were divided into stable knee joint group and unstable knee joint group according to postoperative mobility restriction or pathologic laxity of the knee joint. MRI Image acquisition Scanning equipment and experimental process Three different MRI machines are usedin this study: Philips superconducting magnetic resonance (Achieva, Philips, Best, Netherlands 3.0 T), GE (Discovery MR 750 3.0 T) and GE 1.5 T Magnetic Resonance Scanner (SignaHDxt), Knee joint coil, supine position, foot side first. The scanning sequences included: Axial T2WI FS, Sagittal PDWI FS, Sagittal T1WI and Coronal PDWI FS. Specific scanning parameters for each machine are as follows: (1) Philips 3.0 T with 8-channel knee coils: ①Axial T2WI FS: TR = 2425 ms, TE = 65 ms, FOV = 150 × 162mm2, Matrix = 316 pixels × 209 pixels, Phase encoding direction: R > > L, NEX = 2, Acquisition time = 2min11s. ②Sagittal PDWI FS: TR = 4203 ms, TE = 30 ms, FOV = 180 × 190mm2, Matrix = 300 pixels × 250 pixels, Phase encoding direction: F > > H, NEX = 1, Acquisition time = 2min56s. ③Sagittal T1WI: TR = 633 ms, TE = 20 ms, FOV = 190 × 180mm2, Matrix = 456 pixels × 355 pixels, Phase encoding direction: F > > H, NEX = 1, Acquisition time = 1min48s. ④Coronal PDWI FS: TR = 3329 ms, TE = 30 ms, FOV = 190 × 171mm2, Matrix = 316 pixels × 209 pixels, Phase encoding direction: R > > L, NEX = 2, Acquisition time = 2min19s. (2) GE 3.0 T with 16-channel knee coils: ①Axial T2WI FS: TR = 2710 ms, TE = 48 ms, FOV = 160 × 160mm2, Matrix = 320 pixels × 224 pixels, Phase encoding direction: R > > L, NEX = 2, Acquisition time = 1min20s. ②Sagittal PDWI FS: TR = 2693 ms, TE = 35 ms, FOV = 160 × 160mm2, Matrix = 320 pixels × 224 pixels, Phase encoding direction: F > > H, NEX = 2, Acquisition time = 2min25s. ③Sagittal T1WI: TR = 718 ms, TE = 13 ms, FOV = 160 × 160mm2, Matrix = 320 pixels × 224 pixels, Phase encoding direction: F > > H, NEX = 1, Acquisition time = 1min15s. ④Coronal PDWI FS: TR = 2354 ms, TE = 35 ms, FOV = 160 × 160mm2, Matrix = 320 pixels × 224 pixels, Phase encoding direction: R > > L, NEX = 2, Acquisition time = 2min15s. (3) GE 1.5 T with HD Trknee PA: ①Axial T2WI FS: TR = 2540 ms, TE = 60 ms, FOV = 170 × 170mm2, Matrix = 320 pixels × 224 pixels, Phase encoding direction: S > > I, NEX = 2, Acquisition time = 2min16s. ②Sagittal PDWI FS: TR = 2900 ms, TE = 30 ms, FOV = 170 × 170mm2, Matrix = 320 pixels × 224 pixels, Phase encoding direction: A > > P, NEX = 2, Acquisition time = 2min30s. ③Sagittal T1WI: TR = 500 ms, TE = 10 ms, FOV = 170 × 170mm2, Matrix = 320 pixels × 224 pixels, Phase encoding direction: A > > P, NEX = 2, Acquisition time = 1min42s. 
④Coronal PDWI FS: TR = 2540 ms, TE = 30 ms, FOV = 170 × 170mm2, Matrix = 320 pixels × 224 pixels, Phase encoding direction: A > > P, NEX = 2, Acquisition time = 1min37s. The slice thickness and gap of different machines were consistent. The slice thickness was 4 mm and the slice spacing was 0.4 mm. Patients enrolled in this study were examined by MRI of the knee joint within one week after operation. The experimental process is shown in (Fig. 2): The position of tibial tunnel and femoral tunnel were recorded within one week after operation; Coronal and sagittal obliquity of grafts; Anatomical factors, including: the lateral tibial posterior slope (LTPS), the medial tibial posterior slope (MTPS), the depth of the medial tibial plateau, the width of intercondylar notch, the cross-sectional area of the intercondylar notch, the notch width index (NWI).The above data were evaluated and recorded by two physicians in the musculoskeletal imaging diagnosis group. The contents of the MR examinations records of the enrolled patients MR image analysis The evaluation criteria of imaging evaluation indicators related to surgery are as follows: Position of bone tunnel The Sagittal PDWI FS series images were used to evaluate the position of the femoral and tibial bone tunnels one week after the operation. Position of femoral tunnel: Firstly, the lateral femoral condyle was selected to display the first slice of the graft tunnel; Secondly, the height and width of the lateral femoral condyle at this slice were measured; Then, the distance between the center of the tunnel and the lower and posterior edges of the lateral femoral condyle were measured, and compared it with the height and width of the lateral femoral condyle as the height ratio and the width ratio of the intra-femoral graft tunnel; Finally, the position of the intra-femoral tunnel were defined in the anterior and posterior, superior and inferior position of the lateral femoral condyle [10]. Anchoring point opening of femoral tunnel: On the sagittal plane of the knee joint, it was located at the intersection of the lateral wall of the intercondylar notch and the posterior femoral cortex [11]. On the coronal plane of the knee joint, the position of the opening of femoral tunnel was measured by the clock face method centered on the intercondylar notch. The femoral tunnel of the right knee joint approximately opens at 10–11 o'clock position; the femoral tunnel of the left knee joint approximately opens at 1–2 o'clock position [12]. Position of tibial tunnel entrance: On the sagittal plane, the positional relationship between the Blumensaat's line and the front border of the inner opening of the tibial tunnel was measured. The front border of the inner opening of the tibial tunnel should fall behind the tangential line of the Blumenstaat's line [13]. The distance from the inner opening center of the tibial tunnel to the anterior edge of the tibial plateau accounts for the percentage of the anteroposterior diameter of the tibial plateau. The opening center of the tibial tunnel should ideally be located around the 42% mark of the entire distance of the anteroposterior diameter of the tibial plateau on the sagittal plane [14]. Position of tibial tunnel exit: On the sagittal plane of the external opening of the tibial tunnel was located. Then, the distance from the upper edge of the external opening of the tibial tunnel to the cortical bone of the anterior and superior edge of the tibial plateau was measured [10]. 
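As a small illustration of the tunnel-position criteria just described, the sketch below turns the sagittal measurements into the percentage used for the tibial tunnel entrance and flags placements far from the roughly 42% target; the function name, threshold handling and example distances (in mm) are ours, not from the study.

```python
# Illustrative check of the sagittal tibial-tunnel-entrance position described
# above: the centre of the inner opening is expressed as a percentage of the
# anteroposterior (AP) diameter of the tibial plateau, with ~42% as the target.
# All names and example measurements are invented for demonstration.

def tunnel_entrance_percentage(distance_to_anterior_edge_mm: float,
                               plateau_ap_diameter_mm: float) -> float:
    """Distance from the anterior plateau edge to the tunnel centre, as % of the AP diameter."""
    return 100.0 * distance_to_anterior_edge_mm / plateau_ap_diameter_mm

pct = tunnel_entrance_percentage(distance_to_anterior_edge_mm=14.0,
                                 plateau_ap_diameter_mm=50.0)
note = " (anterior to the ~42% target)" if pct < 42 else ""
print(f"tunnel entrance at {pct:.0f}% of the AP diameter{note}")
```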
The status of the graft The Sagittal PDWI FS and Coronal PDWI FS series images were used to evaluate the status of the graft. The obliquity of the graft: The coronal obliquity of the graft: The angle between a line drawn along the long axis of the intra-articular graft and the plane of the tibial articular surface at the inner opening of the tibial tunnel was measured on the coronal image. The obliquity of the graft in the coronal plane should be less than 75° [15]. The sagittal obliquity of the graft: The angle between the vertical line of the tibia long axis and the intra-articular long axis of the graft on the sagittal image was measured. The sagittal obliquity of the graft after ACL reconstruction should be between 50–60°, and not exceed 60° [16]. Anatomical factors The Sagittal PDWI FS and Coronal PDWI FS series images were used to evaluate the anatomical factors. Measurement of the medial and lateral slope of tibial plateau [17, 18]: The largest slice of the tibial plateau was selected on the axial image of the knee joint and the plane with the largest anteroposterior diameter of the medial and lateral condyle was selected as the measurement plane of the medial and lateral slope of tibial plateau. The lateral tibial posterior slope (LTPS) and medial tibial posterior slope (MTPS) were measured on the corresponding sagittal images and the long axis of the tibia on the central sagittal image was determined. The lateral tibial posterior slope (LTPS) was at the plane of the lateral tibial plateau and the angle between the vertical line of the long axis of the tibia and the cortical bone line of the posterior edge of the lateral tibial plateau was measured. The medial tibial posterior slope (MTPS) was at the plane of the medial tibial plateau and the angle between the vertical line of the long axis of the tibia and the bone cortical line of the posterior edge of the medial tibial plateau was measured. Measurement of the depth of the medial tibial plateau: the line connecting the cortical bone of the anterior and posterior edge of the tibial plateau was made at the maximum slice of the medial tibial plateau, and the vertical distance from the most concave point of the proximal tibial plateau to the line was Medial Tibial Depth [19]. Measurement of the width of intercondylar notch [20]: measured on the coronal image corresponding to the midpoint slice of the Blumensaat's line on the sagittal image, and crossed the popliteal groove to make the parallel line connecting the medial and lateral condyle cortex of the distal femur. The width of the parallel line was occupied by the intercondylar notch, so it's the width of the intercondylar notch [21]. The width of the intercondylar notch can be expressed by the notch width index (NWI) [22], that is, the ratio of the width of the intercondylar notch to the width of line connecting the medial and lateral femoral condyles at the level of the popliteal groove. The cross-sectional area of the intercondylar notch: the vertical line from the top of the intercondylar notch to the line of the cortical bone of the medial and lateral condyle is the height of the intercondylar notch. The cross-sectional area of the intercondylar notch was measured by multiplying width and height. Statistical analysis was performed using SPSS 22.0 statistical software. 
Independent-sample t-tests were used to detect differences between the stable and unstable groups in: age, postoperative days, percentage position of the tibial tunnel entrance, position of the tibial tunnel exit, position of the femoral tunnel (height ratio and width ratio), lateral tibial posterior slope (LTPS), medial tibial posterior slope (MTPS), medial tibial depth, width of the intercondylar notch, notch width index (NWI), cross-sectional area of the intercondylar notch, coronal obliquity of the graft and sagittal obliquity of the graft. Differences in the position of the entrances of the tibial and femoral tunnels between the two groups were compared with the χ2 test. The Kappa test was used to assess the consistency between MR findings and clinical presentation: > 0.75 indicates good consistency, 0.40-0.75 medium consistency and < 0.40 poor consistency. The sensitivity, specificity, accuracy, positive predictive value and negative predictive value of the above MR findings in the diagnosis of postoperative knee stability were calculated. Independent predictors were screened by logistic regression analysis and a prediction equation was constructed; P < 0.05 was considered statistically significant. ROC curves were used to compare the predictive performance of the independent predictors and the prediction equation. We used the t-test, rank-sum test and chi-square test to select variables with statistically significant differences; significant variables were then passed to a multivariable logistic regression model.

Clinical evaluation and grouping

A total of 70 patients were enrolled in this study: 48 males and 22 females, aged 14 to 64 years, with a mean age of 32.5 ± 10.69 years. During clinical follow-up within two years, 13 cases had a positive Lachman test after ACL reconstruction, denoting an unstable knee joint on clinical assessment; these patients were assigned to the postoperative unstable knee group, and the remaining 57 cases to the postoperative stable knee group. The analysis of basic characteristics of the stable and unstable knee joint groups is shown in Table 1. There were no statistically significant differences between the stable and unstable groups after ACL reconstruction in age, sex, surgical site or time of postoperative reexamination imaging (P > 0.05).

Table 1 Analysis of basic conditions between stable and unstable knee groups after ACL reconstruction

MR image analysis: comparison of the first reexamination findings within one week after ACL reconstruction

The comparison of MRI findings within one week after ACL reconstruction is shown in Table 2. There were significant differences between the stable and unstable knee joint groups in the abnormal position of the tibial tunnel entrance, the percentage position of the tibial tunnel entrance, the position of the tibial tunnel exit, the lateral tibial posterior slope (LTPS) and the width of the intercondylar notch (P < 0.05).
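The two-step analysis described above (univariate screening followed by a multivariable logistic model and ROC analysis) was run in SPSS; purely as an illustration of the workflow, here is a Python sketch on synthetic data, since the study dataset is not available and all variable names and values here are invented.

```python
# Sketch of the two-step analysis: univariate screening, multivariable logistic
# regression on the retained variables, then ROC analysis. Data are synthetic.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 70
df = pd.DataFrame({
    "ltps_deg": rng.normal(8, 3, n),             # lateral tibial posterior slope
    "tunnel_exit_mm": rng.normal(20, 5, n),      # tibial tunnel exit position
    "graft_sagittal_deg": rng.normal(55, 6, n),  # sagittal obliquity of the graft
})
# Synthetic outcome: instability made to depend mainly on LTPS.
df["unstable"] = (df["ltps_deg"] + rng.normal(0, 3, n) > 10).astype(int)

predictors = ["ltps_deg", "tunnel_exit_mm", "graft_sagittal_deg"]

# Step 1: univariate screening with independent-sample t-tests (keep P < 0.05).
kept = [c for c in predictors
        if stats.ttest_ind(df.loc[df["unstable"] == 1, c],
                           df.loc[df["unstable"] == 0, c]).pvalue < 0.05]

# Step 2: multivariable logistic regression on the retained variables.
model = LogisticRegression(max_iter=1000).fit(df[kept], df["unstable"])

# Step 3: ROC analysis of the fitted model's predicted probabilities.
auc = roc_auc_score(df["unstable"], model.predict_proba(df[kept])[:, 1])
print("retained:", kept, " AUC:", round(auc, 3))
```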
Table 2 Statistical analysis of magnetic resonance imaging findings in the stable and unstable groups after operation (within one week)

Comparison of the consistency between MR findings and clinical evaluation

The value of positive MRI findings after ACL reconstruction for evaluating postoperative knee stability is shown in Table 3. The positive MRI findings of the position of the tibial tunnel entrance, the percentage position of the tibial tunnel entrance, the position of the tibial tunnel exit, the lateral tibial posterior slope (LTPS) and the width of the intercondylar notch differed significantly between the stable and unstable groups when used to evaluate knee function (P < 0.05).

Table 3 Positive MRI findings after ACL reconstruction to evaluate the stability of the knee joint after operation

The positive MRI findings of the position of the tibial tunnel entrance and the lateral tibial posterior slope (LTPS) are generally consistent with the clinical physical examination, indicating that the two methods cannot replace each other and should be evaluated together.

Comparison of operative factors and anatomical factors for postoperative knee stability

After multivariable binary logistic regression analysis, the position of the tibial tunnel exit, the lateral tibial posterior slope (LTPS) and the sagittal obliquity of the graft were independent predictors among the surgical and anatomical factors (P < 0.05), as shown in Table 4. The OR value of the lateral tibial posterior slope (LTPS) was greater than 1, indicating that it is a risk factor for postoperative knee instability; the OR values of the position of the tibial tunnel exit and the sagittal obliquity of the graft were less than 1, indicating that both are protective factors for postoperative knee stability.

Table 4 Multivariable binary logistic regression analysis of surgical factors and anatomical factors for postoperative stability of the knee joint (OR > 1 risk factor, OR < 1 protective factor)

According to the statistical analysis, the lateral tibial posterior slope (LTPS) is a risk factor for postoperative knee instability, whereas the position of the tibial tunnel exit and the sagittal obliquity of the graft are protective factors for postoperative knee stability (P < 0.05). Based on the multivariable logistic regression, the prediction equation for postoperative knee stability was established:

$$\mathrm{Logit}(P) = -1.067 - 0.231 \times \text{position of tibial tunnel exit} + 0.509 \times \text{lateral tibial posterior slope (LTPS)} - 2.105 \times \text{sagittal obliquity of the graft}$$

ROC curve comparison of the predictive performance of the individual predictors and the prediction equation

ROC curves were constructed for the prediction equation, the lateral tibial posterior slope (LTPS), the position of the tibial tunnel exit and the sagittal obliquity of the graft, as shown in Fig. 3 and Table 5.

Fig. 3 ROC curves of the prediction equation, surgical factors and anatomical factors

Table 5 ROC curve analysis of the prediction equation, lateral tibial posterior slope (LTPS), position of tibial tunnel exit and sagittal obliquity of the graft

The prediction equation predicted knee instability with an AUC of 0.915, a sensitivity of 84.6% and a specificity of 91.2%.
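Purely to show how such an equation would be applied, the sketch below plugs hypothetical values into the reported coefficients and converts the logit to a probability with the inverse-logit; note that the units and coding of the three variables (e.g. millimetres versus a normal/abnormal indicator) must match those used when the model was fitted, which the text does not fully specify, so the inputs here are placeholders.

```python
# Applying the reported prediction equation to one hypothetical patient.
# Coefficients are those reported above; the input values are invented and the
# variable coding (units, normal/abnormal indicators) is an assumption.
import math

def predicted_instability_logit(tunnel_exit, ltps, sagittal_obliquity):
    return (-1.067
            - 0.231 * tunnel_exit
            + 0.509 * ltps
            - 2.105 * sagittal_obliquity)

def logit_to_probability(logit):
    return 1.0 / (1.0 + math.exp(-logit))   # inverse-logit (sigmoid)

logit = predicted_instability_logit(tunnel_exit=15.0, ltps=12.0, sagittal_obliquity=1.0)
print(round(logit_to_probability(logit), 3))
```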
The AUC of the position of tibial tunnel exit, lateral tibial posterior slope (LTPS) and abnormal sagittal obliquity of the graft to predict knee instability were all less than 0.80. This study found that the early positive findings of knee instability after anterior cruciate ligament reconstruction using magnetic resonance includes three factors: the position of tibial tunnel exit, the lateral tibial posterior slope (LTPS) and the sagittal obliquity of the graft. The study used a predictive model composed of the above factors to predict knee instability after ACL reconstruction, and the AUC was 0.915. The incidence rate of ACL injury is related to occupational and biomechanical characteristics at the time of injury. Shelbourne pointed out that [23] the incidence rate of ACL injury in the general population is about 38/100000, and that of professional athletes is about 60–70/100000, with more women than men. The failure of ACL reconstruction is marked by pathological relaxation of graft or limitation of joint movement [4]. The incidence rate of primary knee instability after ACL reconstruction was 3%—10% [24]. The graft is most prone for injury 4–8 months after reconstruction, and the vast majority of early surgical failures occur within half a year after operation, so it is particularly important to predict knee joint stability after ACL reconstruction by early postoperative MRI findings. The surgical factors that affect the stability of the knee joint after ACL reconstruction and the patient's own anatomical factors were summarized in this study. The positive findings of MR were used to predict the stability of the knee joint within one week after the operation. According to statistical analysis, there were significant differences in the abnormal position of tibial tunnel entrance, percentage of the position of tibial tunnel entrance, position of tibial tunnel exit, lateral tibial posterior slope (LTPS), width of intercondylar notch between stable knee joint group and unstable knee joint group after ACL reconstruction (P < 0.05). After the multi-factor binary logistical regression analysis, the position of tibial tunnel exit and the lateral tibial posterior slope (LTPS) and the sagittal obliquity of the graft were independent predictors among surgical factors and self-anatomical factors (P < 0.05). Based on multi-factor logistical regression, the prediction equation of postoperative stability of knee joint was established. Logit(P) = -1.067–0.231*position of tibial tunnel exit + 0.509*lateral tibial posterior slope (LTPS)-2.105*sagittal obliquity of the graft. The prediction equation predicted that the AUC of knee instability was 0.915, the sensitivity was 84.6%, and the specificity was 91.2%. The AUC of the position of tibial tunnel exit, the lateral tibial posterior slope (LTPS) and the abnormal sagittal obliquity of the graft to predict knee instability were all less than 0.80. Thus, it can be said that the prediction equation of knee joint postoperative stability is more effective in predicting knee joint instability. For ACL reconstruction, the entrance of the femoral tunnel should be should be located at the intersection of the lateral wall of the intercondylar notch and the posterior femoral cortex on the sagittal position, and the position of the inner femoral tunnel should be at the back and upper part of the lateral condyle of the femur, that is, the height ratio of the graft in the first slice of the lateral condyle of the femur should be larger and the width ratio. 
Although statistics showed that there was little correlation between the height ratio and width ratio of the femoral tunnel and the stability. This data is directly related to the inner opening of the tibial tunnel and the obliquity of the graft, and these two factors are related to stability. The front border of the opening of the tibial tunnel should fall behind the tangential line of the Blumenstaat's line. This position is approximately 42% of the anteroposterior diameter of the tibial plateau. The front of the tibial entrance tunnel is the main factor causing the graft impingement. If the tibial tunnel is too forward (partly or entirely in front of the Blumenstaat's line), the graft will hit the top of the intercondylar notch. The tibial tunnel is not near the intercondylar eminence and the graft will hit the lateral wall of the intercondylar notch [14, 25, 26]. The tibial tunnel exit should not be too close to the articular surface of the anterior upper edge of the tibia. If the tibial tunnel exit being too close to the articular surface can cause the tunnel and graft to lean backward, increasing the risk of instability, especially on the sagittal plane, which makes the graft lose its biomechanical properties, and it is easy to cause fractures of the tibial plateau. Because of the tunnel through the area, the cortical bone becomes relatively thinner. If the distance from the tibial tunnel exit to the articular surface of the anterior upper edge of the tibia is too close, the bones are vulnerable to secondary fractures. In this study, the probability of abnormal position of bone tunnel in knee instability group was higher, the sensitivity and specificity of knee instability caused by abnormal position of bone tunnel were higher, and the evaluation of clinical consistency was moderate. It can be concluded that the position of bone tunnel is one of the indexes to judge the stability of knee joint. Among the abnormal position of bone tunnel, the incidence rate of abnormal position of tibial tunnel was the highest. The anterior tibial tunnel accounted for more than half of the abnormal position of the tibial tunnel. In this study, the average percentage of the inner opening of the tibial tunnel at the anteroposterior diameter of the tibial plateau in the knee instability group was about 26%, approximately at the front 1/4 of the anteroposterior diameter of the tibial plateau, while the stable group was 30%. It can be seen that the position of the inner opening of the tibial tunnel in the unstable group was significantly anterior than in the stable group. In addition, the instability accounted for 18.57% in this study, which was significantly higher than that in the literature. The anterior tibial tunnel should be an important reason. The correct position of the femoral and tibial tunnel is very important for the stability of the graft and good clinical results, so each patient underwent ACL reconstruction should be evaluated and documented. Tomczak, Sanders et al. [11, 13] pointed out that the anterior position of the tibial tunnel is the most likely to cause the intercondylar notch impingement. In the case of intercondylar notch impingement, MRI showed an increase in signal intensity at the injured site, mostly in the first two thirds of the graft, which was different from the increased signal intensity of vascularization and synovialization of the graft during the postoperative recovery period. 
Because the position of increased signal caused by the impingement is more limited, if the anatomical risk factors are not removed, the high signal of the impingement will always exist, which will cause a tear in the graft. Papakonstantino believes that [12] when the femoral tunnel is too far forward, the length and tension of the graft will increase when the knee is bent, which will increase the risk of injury. In this study, the intercondylar notch impingement occurred in the anterior graft of the tibial tunnel, and the impingement site was near the midpoint of the graft, and the localized signal increased during the follow-up. In the knee joint stability group, the reason why there was no impingement due to the forward movement of the femoral tunnel and tibial tunnel may be related to the anatomical morphology of the intercondylar notch, and the impingement is more likely to occur in the narrow intercondylar notch. The position of the inner opening of the tibial and femoral tunnel determines the obliquity of the graft. Mall and Saupe [15, 16] proposed that the posterior obliquity of the graft should be between 50°- 60°on the sagittal plane, and should not exceed 60°on the sagittal plane, and the coronal obliquity of the graft should be less than 75°. If the limit is exceeded, the graft will relax. In this study, the average of the sagittal obliquity of the graft in the unstable group was 48°only, and the minimum value was 24°, which was equivalent to lying horizontally in the articular cavity, and made the reconstructed ligament lose its biomechanical function. The impingement of the graft will cause a variety of complications, including graft tear, graft fibrosis, graft myxoid degeneration. The most common impingement site is the impingement between the Blumenstaat's line and the middle and lower part of the graft, which is characterized by increased local signal and local edema. Fibrosis of the graft is the excessive proliferation of fibrous tissue in the joint cavity after surgery [12, 13, 27, 28]. The fibrotic area limits the movement of the knee joint, which is the main reason for the limitation of knee joint extension. Two types of fibrosis can be observed on MRI; diffuse fibrosis and localized fibrosis. Localized fibrosis is the most common complication of ACL reconstruction. Low signal nodules in the distal anterior part of the graft can be seen on magnetic resonance images, which is called "cyclops sign". Diffuse fibrosis is characterized by synovial thickening around the graft, because of the infiltration of inflammatory cells, which can extend to the articular capsule, resulting in the thickening of the articular capsule. Fibrosis is caused by the injury of the graft, which can cause mechanical disturbance of the end of the graft. The areas with low signal of T1WI and low signal of T2WI were showed on MR. In the process of graft maturation, the continuity of the graft should be kept intact in order to maintain the stability of the knee joint. However, if the knee joint is injured again in the process of rehabilitation, the increased signal of the injury site overlaps with the high signal in the process of ligamentization, which will increase the difficulty for diagnosis. Therefore, we need to pay attention to the continuity of the graft. If there is a discontinuity of the tendon fiber bundle, it indicates the injury or tear of the reconstructed ligament. Horton [29] suggested that the continuous coronal image of ligament is more accurate. 
The patient's own anatomical factors also have a certain influence on the stability of postoperative knee joint function. Some scholars have found that [30,31,32]: Narrow intercondylar notch, increased tibial posterior slope (TPS) and deeper medial tibial plateau all affect the biomechanical properties of knee joint. These three factors are already the risk factors of ACL injury in normal human knee joint injury. It has been confirmed that the tibial posterior slope (TPS) increases, especially the lateral tibial posterior slope (LTPS). When it is more than 10°, the risk of graft injury after ACL reconstruction is significantly increased. As shown in (Fig. 4), this study also confirmed that there was a significant difference in the lateral tibial posterior slope (LTPS) between the stable and unstable groups of the knee joint, with no difference between male and female. If the lateral tibial posterior slope (LTPS) increases, and the risk of knee instability increases. There was no significant difference in anatomical risk factors between males and females in this group. 49-year-old female patients with ACL reconstruction: a one week after operation, MRI showed that the inner opening of tibial tunnel was anterior and the lateral tibial posterior slope (LTPS) was 14°; b half a year after operation, the continuity of ACL reconstruction was interrupted and fluid was accumulated at the internal opening of tibial tunnel The risk of graft impingement in narrow intercondylar notch was also significantly increased. Fujii et al. [22] have shown that when the intercondylar notch width index (NWI) is less than 0.21, the risk of graft impingement after reconstruction significantly increases as shown in (Fig. 5). Postoperative follow-up of 26-year-old male patients with ACL reconstruction: a Reexamination of magnetic resonance imaging one week after operation, the intercondylar notch width index (NWI) was 0.19; b half a year after operation, MRI showed that reconstruction of ACL impingement and peripheral synovial hyperplasia It can be seen that the preoperative evaluation of the anatomical factors of the knee joint is beneficial to the formulation of the surgical approach and provides objective evidences for the correction of the tibial posterior slope (TPS) or the plasty of the intercondylar notch. Postoperative imaging evaluation can early indicate the possibility of clinical complications. This study has the following limitations:(1) The number of this sample was small; (2) This study was a single-center retrospective study. However, our research results are helpful to the multi-center, prospective, large sample research design in the future, so as to verify our research results. (3) Our prediction equation was obtained from the experimental data of the experimental group of this study, and it should only be applicable to the predictive analysis of patients in this experimental group. Therefore, independent new experimental data should be used to verify the prediction performance of this model. In conclusion, we found that abnormalities of the position of the exit of the bone tunnel, lateral tibial posterior slope (LTPS) and sagittal obliquity of the graft were the early MRI positive findings of knee instability after ACL reconstruction. It is helpful for clinicians to predict the stability of knee joint after ACL reconstruction, to deal with postoperative complications as soon as possible, so as to increase the success rate of operation and improve the quality of life of patients. 
References
1. Fabricant PD, Lakomkin N, Cruz A, Spitzer E, Lawrence JTR, Marx RG. Early ACL reconstruction in children leads to less meniscal and articular cartilage damage when compared with conservative or delayed treatment. ISAKOS. 2016;1:10–5.
2. Crawford SN, Waterman BR, Lubowitz JH. Long-term failure of anterior cruciate ligament reconstruction. Arthroscopy. 2013;29:1566–71.
3. Janssen RP, Scheffler SU. Intra-articular remodelling of hamstring tendon grafts after anterior cruciate ligament reconstruction. Knee Surg Sports Traumatol Arthrosc. 2014;22:2102–8.
4. Meyers AB, Haims AH, Menn K, et al. Imaging of anterior cruciate ligament repair and its complications. AJR Am J Roentgenol. 2010;194:476–84.
5. Wilson TC, Kantaras A, Atay A, Johnson DL. Tunnel enlargement after anterior cruciate ligament surgery. Am J Sports Med. 2004;32:543–9.
6. Clatworthy MG, Annear P, Bulow JU, Bartlett RJ. Tunnel widening in anterior cruciate ligament reconstruction: a prospective evaluation of hamstring and patella tendon grafts. Knee Surg Sports Traumatol Arthrosc. 1999;7:138–45.
7. Fahey M, Indelicato PA. Bone tunnel enlargement after anterior cruciate ligament replacement. Am J Sports Med. 1994;22:410–4.
8. Hosseinzadeh S, Kiapour AM. Sex differences in anatomic features linked to anterior cruciate ligament injuries during skeletal growth and maturation. Am J Sports Med. 2020;48:2205–12.
9. Collins NJ, Misra D, Felson DT, et al. Measures of knee function: International Knee Documentation Committee (IKDC) Subjective Knee Evaluation Form, Knee Injury and Osteoarthritis Outcome Score (KOOS), Knee Injury and Osteoarthritis Outcome Score Physical Function Short Form (KOOS-PS), Knee Outcome Survey Activities of Daily Living Scale (KOS-ADL), Lysholm Knee Scoring Scale, Oxford Knee Score (OKS), Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), Activity Rating Scale (ARS), and Tegner Activity Score (TAS). Arthritis Care Res (Hoboken). 2011;63:S208–28.
10. Kiapour AM, Ecklund K, Murray MM, et al. Changes in cross-sectional area and signal intensity of healing ACLs and ACL grafts in the first two years after surgery. Orthop J Sports Med. 2019;47:1831–43.
11. Tomczak RJ, Hehl G, Mergo PJ, et al. Tunnel placement in anterior cruciate ligament reconstruction: MRI analysis as an important factor in the radiological report. Skeletal Radiol. 1997;26:409–13.
12. Papakonstantinou O, Chung CB, Chanchairujira K, et al. Complications of anterior cruciate ligament reconstruction: MR imaging. Eur Radiol. 2003;13:1106–17.
13. Sanders TG. MR imaging of postoperative ligaments of the knee. Semin Musculoskelet Radiol. 2002;6:19–33.
14. Howell SM, Berns GS, Farley TE. Unimpinged and impinged anterior cruciate ligament grafts: MR signal intensity measurements. Radiology. 1991;179:639–43.
15. Saupe N, White LM, Chiavaras MM, Essue J, Weller I, Kunz M, Hurtig M, Marks P. Anterior cruciate ligament reconstruction grafts: MR imaging features at long-term follow-up: correlation with functional and clinical evaluation. Radiology. 2008;249:581–90.
16. Mall NA, Matava MJ, Wright RW, Brophy RH. Relation between anterior cruciate ligament graft obliquity and knee laxity in elite athletes at the National Football League combine. Arthroscopy. 2012;28:1104–13.
17. Christensen JJ, Krych AJ, Engasser WM, Vanhees MK, Collins MS, Dahm DL. Lateral tibial posterior slope is increased in patients with early graft failure after anterior cruciate ligament reconstruction. Am J Sports Med. 2015;43:2510–4.
18. Webb JM, Salmon LJ, Leclerc E, Pinczewski LA, Roe JP. Posterior tibial slope and further anterior cruciate ligament injuries in the anterior cruciate ligament-reconstructed patient. Am J Sports Med. 2013;41:2800–4.
19. Grassi A, Bailey JR, Signorelli C, et al. Magnetic resonance imaging after anterior cruciate ligament reconstruction: a practical guide. World J Orthop. 2016;7:638–49.
20. Gohil S, Annear PO, Breidahl W. Anterior cruciate ligament reconstruction using autologous double hamstrings: a comparison of standard versus minimal debridement techniques using MRI to assess revascularisation. A randomised prospective study with a one-year follow-up. J Bone Joint Surg. 2007;89:1165–71.
21. Sonnery-Cottet B, Archbold P, Cucurulo T, Fayard JM, Bortolletto J, Thaunat M, Prost T, Chambat P. The influence of the tibial slope and the size of the intercondylar notch on rupture of the anterior cruciate ligament. J Bone Joint Surg Br. 2011;93:1475–8.
22. Fujii M, Furumatsu T, Miyazawa S, Okada Y, Tanaka T, Ozaki T, Abe N. Intercondylar notch size influences cyclops formation after anterior cruciate ligament reconstruction. Knee Surg Sports Traumatol Arthrosc. 2015;23:1092–9.
23. Shelbourne KD, Kerr B. The relationship of femoral intercondylar notch width to height, weight, and sex in patients with intact anterior cruciate ligaments. Am J Knee Surg. 2001;14:92–6.
24. Bach BR, et al. Revision anterior cruciate ligament surgery. Arthroscopy. 2003;19:14–29.
25. Frank RM, Seroyer ST, Lewis PB, Bach BR, Verma NN. MRI analysis of tibial position of the anterior cruciate ligament. Knee Surg Sports Traumatol Arthrosc. 2010;18:1607–11.
26. Lee S, Kim H, Jang J, Seong SC, Lee MC. Intraoperative correlation analysis between tunnel position and translational and rotational stability in single- and double-bundle anterior cruciate ligament reconstruction. Arthroscopy. 2012;28:1424–36.
27. Recht MP, Kramer J. MR imaging of the postoperative knee: a pictorial essay. Radiographics. 2002;22:765–74.
28. White LM, Kramer J, Recht MP. MR imaging evaluation of the postoperative knee: ligaments, menisci, and articular cartilage. Skeletal Radiol. 2005;34:431–52.
29. Horton LK, Jacobson JA, Lin J, Hayes CW. MR imaging of anterior cruciate ligament reconstruction graft. AJR Am J Roentgenol. 2000;175:1091–7.
30. Sabzevari S, Rahnemai-Azar AA, et al. Increased lateral tibial posterior slope is related to tibial tunnel widening after primary ACL reconstruction. Knee Surg Sports Traumatol Arthrosc. 2017;25:3906–13.
31. Tanaka MJ, Jones KJ, Gargiulo AM, Delos D, Wickiewicz TL, Potter HG, Pearle AD. Passive anterior tibial subluxation in anterior cruciate ligament-deficient knees. Am J Sports Med. 2013;41:2347–52.
32. Bisson LJ, Gurske-DePerio J. Axial and sagittal knee geometry as a risk factor for noncontact anterior cruciate ligament tear: a case-control study. Arthroscopy. 2010;26:901–6.
Thanks to all the researchers who contributed to the design, data collection and analysis, and writing of this article. Jianqiang Zhang and Jiyao Ma are co-first authors and contributed equally to this work.
Author information: Medical Imaging Department, First Affiliated Hospital of Kunming Medical University, Kunming Medical University, Kunming, China: Jianqiang Zhang, Jiyao Ma, Juan Huang, Guoliang Wang, Yilong Huang, Jun Yan, Xiaomin Zeng, Hongli Zhu, Wei Zhao, Yanlin Li & Bo He. Medical Imaging Department, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China: Zhenhui Li. Author contributions: (I) Conception and design: Y Huang, Z Li, J Ma, J Zhang, B He; (II) Administrative support: W Zhao, Y Li, B He; (III) Provision of study materials or patients: G Wang, Y Li, B He; (IV) Collection and assembly of data: J Zhang, J Ma, J Yan, X Zeng, H Zhu; (V) Data analysis and interpretation: J Huang, B He; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors. Correspondence to Yanlin Li or Bo He. The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. This retrospective study was carried out with the approval of the institutional review/ethics committee of the First Affiliated Hospital of Kunming Medical University and was conducted in accordance with the Declaration of Helsinki (as revised in 2013). Informed consent to participate in the study and to allow the use of their clinical and imaging data was obtained from all participants and from the parent/legal guardian of minor participants (younger than 16 years). All authors have completed the ICMJE uniform disclosure form. The authors declare that they have no competing interests. Citation: Zhang, J., Ma, J., Huang, J. et al. Feasibility study of early prediction of postoperative MRI findings for knee stability after anterior cruciate ligament reconstruction. BMC Musculoskelet Disord 22, 649 (2021). https://doi.org/10.1186/s12891-021-04507-y. Keywords: Anterior cruciate ligament (ACL) reconstruction; Stability of knee joint; Postoperative magnetic resonance (MR) examination; MRI findings; Early.
Proposing new indicators for glaucoma healthcare service Yuan Bo Liang1,2,5, Ye Zhang3, David C. Musch4 & Nathan Congdon2 Glaucoma is the first leading cause of irreversible blindness worldwide with increasing importance in public health. Indicators of glaucoma care quality as well as efficiency would benefit public health assessments, but are lacking. We propose three such indicators. First, the glaucoma coverage rate (GCR), which is the number of people known to have glaucoma divided by the total number of people with glaucoma as estimated from population-based studies multiplied by 100%. Second, the glaucoma detection rate (GDR), which is number of newly diagnosed glaucoma patients in one year divided by the population in a defined area in millions. Third, the glaucoma follow-up adherence rate (GFAR), calculated as the number of patients with glaucoma who visit eye care provider(s) at least once a year over the total number of patients with glaucoma in given eye care provider(s) in a specific period. Regularly tracking and reporting these three indicators may help to improve the healthcare system performance at national or regional levels. Assessing healthcare quality and efficiency has become increasingly important. In the past 20 years, substantial improvements have been seen in cataract blindness prevention. Indicators of cataract surgical rate (CSR) and cataract surgery coverage (CSC) played important roles in evaluating and promoting cataract blindness prevention programs [1]. These indicators provide an evidence base for evaluating the output of all sectors: government, non-governmental organizations, and private sectors. As performance indicators, they measure the extent of the effort to control cataract blindness and allow for comparisons between countries and regions. They also indicate the availability, accessibility, and affordability of cataract services. Such indicators are not yet available for glaucoma even though glaucoma is increasingly important in public health. Glaucoma is the leading cause of irreversible blindness worldwide. A recent meta-analysis by Tham et al. estimated that the global pooled prevalence of glaucoma among persons aged 40–80 years is 3.54% [2]. In 2013, the number of people with glaucoma worldwide was estimated to be 64.3 million and will increase to 76.0 million in 2020, disproportionately affecting people residing in Asia and Africa [2]. Glaucoma accounts for 12.3% of blindness worldwide [3]. According to Quigley et al., bilateral blindness will be present in 5.9 million people with primary open angle glaucoma (POAG) and 5.3 million people with primary angle closure glaucoma (PACG) in 2020 [4]. With the reduction in blindness due to age-related cataract as access to effective treatment increases [5], glaucoma and diabetic retinopathy will become the two major blindness-causing eye diseases [6, 7]. Thus, glaucoma is a considerable public health issue globally. Glaucoma can be regarded as a group of chronic eye diseases that have as a common end-point a characteristic optic neuropathy, which is determined by both structural changes (optic disk appearance) and functional deficit (measured by visual field change), with or without an increased intraocular pressure (IOP) [8]. Glaucoma usually affects both eyes, although they may be affected to varying degrees. 
The public health challenge is that if detected and treated properly with currently available ophthalmic treatments such as hypotensive eye drops, laser or surgery, the disease process can be significantly delayed or possibly prevented. Lack of such treatment is particularly a problem for underserved populations. Detection and treatment of glaucoma fall within the purview of eye care providers, so it is important to evaluate the effectiveness of eye care delivery in glaucoma. We suggest the glaucoma coverage rate (GCR), the glaucoma detection rate (GDR), and the glaucoma follow-up adherence rate (GFAR) as new indicators for evaluating glaucoma care. Main text Glaucoma coverage rate (GCR) and glaucoma detection rate (GDR) Even though glaucoma-related blindness is largely preventable with early detection and appropriate treatment regimens, many people who have glaucoma are not diagnosed. For example, in India, studies have found that 91% of persons with open-angle glaucoma were unaware, and 20.3% were already blind bilaterally or unilaterally, respectively, due to glaucoma [9]. In China, results of the Handan Eye Study showed that over 90% of participants with primary angle closure (PAC), over half with PACG and more than 95% of POAG cases had not previously been diagnosed or treated, while 65.6% of PACG, and 4.5% for POAG were blind in at least one eye [10, 11]. Even in developed nations, as many as half of those with glaucoma are unaware that they have the disease [12,13,14]. Reasons include inadequate screening, unavailability or low utilization of eye care services, and lack of awareness due to the absence of symptoms in the early stages of glaucoma. The GCR could serve as an important index for evaluating glaucoma healthcare. It is calculated by dividing the number of people in the population with known glaucoma by the total number of people with glaucoma as estimated from population-based studies. However, this parameter can only be obtained through conduct of or access to results from well-designed population-based studies. Practically, we would suggest using the number of patients with newly-detected glaucoma in one year in a defined region divided by the number of people in that defined region, which represents the GDR. With increasing improvement in medical care systems in many countries, the number of detected glaucoma cases can be accurately tracked [15]. The GDR and GCR will vary among populations based on public awareness of the disease, the accessibility and capacity of the regional/national eye care system, existence of user fees, willingness to pay and other related factors. Although population screening for open angle glaucoma has not been found to be cost effective [16, 17], healthcare planners can use the GDR to track the impact of other, more practical methods to increase glaucoma detection, such as community education [18], screening of targeted high risk groups (including relatives of known glaucoma patients) [19] and enhanced clinic-based case finding through training and incentivizing clinicians to carry out the complete examinations needed to detect asymptomatic glaucoma [20]. With access to estimates of GDR across nations and regions, focused attention could be applied to areas with low GDRs and influence those responsible for allocation of healthcare resources to intervene [21]. Access to a well-established cross-hospital medical information system would provide an important resource to track the number of newly-diagnosed cases. 
The formula for GCR/GDR would be:
$$\mathrm{GCR} = \frac{\text{Number of people with known glaucoma}}{\text{Total number of patients with glaucoma as estimated from population-based studies}} \times 100\%$$
$$\mathrm{GDR} = \frac{\text{Number of people with newly-detected glaucoma in one year}}{\text{Number of people in a given area (in millions)}}$$
Glaucoma follow-up adherence rate (GFAR) As glaucoma is a chronic eye disease, and IOP is the only well-proven modifiable risk factor, lifelong ocular hypotensive medical, laser or surgical treatment is indicated to prevent progression in most cases. Even when glaucoma is detected and treated, inadequate response to therapy and/or IOP fluctuation can cause further damage. This creates the important need for regular follow-up by eye care professionals to monitor glaucomatous optic nerve damage and visual field defects, adjusting therapy as needed [22, 23]. According to recommended clinical practice, even patients with suspected glaucoma and modest risk for progression should be seen at least every 12–24 months, whereas patients with diagnosed glaucoma should have a follow-up visit every 3–6 months [24]. Poor adherence with recommended glaucoma follow-up care serves as a major obstacle to proper disease management. Jin et al. reported follow-up rates at 6, 12 and 48 months after 1186 glaucoma operations in Xian, China as 68.5, 62.1 and 48.8%, respectively [25]. Main risk factors for failed follow-up included low annual income, old age, inability to read, long distance from hospital, and poor disease awareness. Liu et al. reported the follow-up rate in cases of PACG in Handan City, China, at 6, 12 and 48 months after trabeculectomy as 41.1, 21.3 and 13.3%, respectively [26]. They also found that poor knowledge about glaucoma, rural residence, and having poor vision were associated with lower follow-up rates [26]. A recent short-term prospective study found that poor adherence to recommended post-trabeculectomy follow-up was associated with lower education, unawareness of the importance of follow-up, lack of an accompanying person, low family annual income, and not requiring removal of scleral flap sutures postoperatively [27]. The problem of sub-optimal adherence with both post-operative care and medical therapy for glaucoma in developed countries is also well-documented [28,29,30]. Additional reasons for poor follow-up adherence were identified, such as difficulty on the part of the patient or escort to get time off from work for appointments, long waiting times in the clinics, unfamiliarity with treatment requirements, lack of knowledge regarding the permanency of glaucoma-induced vision loss, cost of examination being too high, and legal blindness [31, 32]. Adherence to follow-up is an essential component of effective care for glaucoma.
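Returning to the GCR and GDR formulas given at the top of this section, the following short Python sketch (our own illustration, not part of the original proposal) shows how the two indicators would be computed from aggregate counts; the function names and the sample numbers are hypothetical.

```python
def glaucoma_coverage_rate(known_cases, estimated_total_cases):
    """GCR: people with known glaucoma / total estimated from population-based studies, as a percentage."""
    return 100.0 * known_cases / estimated_total_cases

def glaucoma_detection_rate(newly_detected_in_one_year, population):
    """GDR: newly detected glaucoma cases in one year per million population."""
    return newly_detected_in_one_year / (population / 1_000_000)

# Hypothetical region of 5 million people: 120,000 estimated cases, 30,000 already known,
# and 6,000 newly diagnosed this year.
print(f"GCR = {glaucoma_coverage_rate(30_000, 120_000):.1f}%")                    # 25.0%
print(f"GDR = {glaucoma_detection_rate(6_000, 5_000_000):.0f} per million/year")  # 1200
```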
The follow-up adherence rate can be calculated as the number of follow-up visits that take place within a defined period of time divided by the number of expected/planned visits. The latter number varies greatly due to the differing practice patterns of clinicians and the stage of glaucoma. For example, during the early post-operative period, more frequent visits are needed, whereas less frequent visits are needed when a patient's glaucoma status is stable. From a public health perspective, we recommend GFAR, calculated as the number of patients with glaucoma who visit eye care provider(s) at least once in a year's period divided by the total number of patients with glaucoma diagnosed in given eye care center(s), as another essential index for the evaluation of glaucoma healthcare. The formula for GFAR would be:
$$\mathrm{GFAR} = \frac{\text{Number of glaucoma patients with at least one visit a year}}{\text{Number of patients with glaucoma diagnosed in given eye care center(s)}} \times 100\%$$
There are some strategies that can be adopted within the healthcare system to improve adherence to follow-up in glaucoma patients. Suggested measures include: 1) educating current and in-training eye care providers on proven communication strategies for improving follow-up; 2) reducing or eliminating fees for post-operative examinations and considering incentives such as provision of free medication at postoperative visits, given the particular importance of good compliance during this period; 3) providing visit reminders (e.g., via text or telephone) or a support network such as a case manager or glaucoma patient club to help patients adhere to the management requirements of their eye condition. The GDR, GCR, and GFAR, as proposed above, are sometimes very difficult for a country or region to estimate, especially so for those with limited healthcare systems and less accurate data to rely upon. Governments in most countries are responsible for covering at least some portion of eye care costs, and investment in low vision rehabilitation and care, as well as in monitoring and improving the GDR, GCR, and GFAR, is likely to reduce healthcare costs in the long run. A limitation of using these indicators is the lack of a threshold value for judging whether they reflect good or inadequate detection and care of glaucoma, given the limited studies available. However, once measured, these indicators can be used for self-comparison or cross-regional comparison. In conclusion, from a public health perspective, we need standard indices to compare and evaluate the level of glaucoma care across different countries and regions, with the goal to improve the prevention and treatment outcome of glaucoma, which is the leading cause of irreversible blindness. We propose that GDR, GCR, and GFAR may be particularly useful in this respect.
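To complement the GFAR formula above, here is a minimal Python sketch (again our own illustration) that computes the indicator from a list of visit records; the data layout, function name, and example dates are all hypothetical.

```python
from datetime import date, timedelta

def gfar(diagnosed_patient_ids, visits, period_end, period_days=365):
    """GFAR as a percentage: diagnosed patients with at least one visit in the period / all diagnosed patients.
    `visits` is an iterable of (patient_id, visit_date) pairs."""
    period_start = period_end - timedelta(days=period_days)
    seen = {pid for pid, d in visits
            if pid in diagnosed_patient_ids and period_start <= d <= period_end}
    return 100.0 * len(seen) / len(diagnosed_patient_ids)

# Hypothetical example: 4 diagnosed patients, 3 of whom visited in the year ending 2021-09-01.
patients = {1, 2, 3, 4}
visits = [(1, date(2021, 3, 1)), (2, date(2021, 7, 15)),
          (3, date(2020, 12, 20)), (3, date(2021, 5, 2))]
print(f"GFAR = {gfar(patients, visits, period_end=date(2021, 9, 1)):.0f}%")  # 75%
```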
CSC: Cataract surgical coverage CSR: Cataract surgery rate GCR: Glaucoma coverage rate GDR: Glaucoma detection rate GFAR: Glaucoma follow-up adherence rate IOP: PAC: Primary angle closure PACG: Primary angle closure glaucoma POAG: Primary open angle glaucoma http://www.iapb.org/vision-2020/what-is-avoidable-blindness/cataract. Accessed 12 June 2016. Tham YC, Li X, Wong TY, Quigley HA, Aung T, Cheng CY. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology. 2014;121:2081–90. Giangiacomo AC. The epidemiology of glaucoma. In: Grehn F, Stamper R, editors. Glaucoma. Berlin, Germany: Springer; 2009. p. 13–21. Quigley HA, Broman AT. The number of people with glaucoma worldwide in 2010 and 2020. Br J Ophthalmol. 2006;90:262–7. Khairallah M, Kahloun R, Bourne R, Limburg H, Flaxman SR, Jonas JB, et al. Number of People Blind or Visually Impaired by Cataract Worldwide and in World Regions, 1990 to 2010. Invest Ophthalmol Vis Sci. 2015;56:6762–9. WHO has estimated that 4.5 million people are blind due to glaucoma. http://www.iapb.org/vision-2020/what-is-avoidable-blindness/glaucoma. Accessed 7 Apr 2016. Ting DS, Cheung GC, Wong TY. Diabetic retinopathy: global prevalence, major risk factors, screening practices and public health challenges: a review. Clin Exp Ophthalmol. 2016;44(4):260–77. Foster PJ, Buhrmann R, Quigley HA, Johnson GJ. The definition and classification of glaucoma in prevalence surveys. Br J Ophthalmol. 2002;86:238–42. Vijaya L, George R, Arvind H, Baskaran M, Raju P, Ramesh SV, et al. Prevalence and causes of blindness in the rural population of the Chennai Glaucoma Study. Br J Ophthalmol. 2006;90:407–10. Liang Y, Friedman DS, Zhou Q, Yang XH, Sun LP, Guo L, et al. Prevalence and characteristics of primary angle-closure diseases in a rural adult Chinese population: the Handan Eye Study. Invest Ophthalmol Vis Sci. 2011;52:8672–9. Liang YB, Friedman DS, Zhou Q, Yang X, Sun LP, Guo LX, et al. Prevalence of primary open angle glaucoma in a rural adult Chinese population: the Handan eye study. Invest Ophthalmol Vis Sci. 2011;52:8250–7. Tielsch JM, Sommer A, Katz J, Royall RM, Quigley HA, Javitt J. Racial variations in the prevalence of primary open-angle glaucoma. The Baltimore Eye Survey. JAMA. 1991;266:369–74. Mitchell P, Smith W, Attebo K, Healey PR. Prevalence of open-angle glaucoma in Australia. The Blue Mountains Eye Study. Ophthalmology. 1996;103:1661–9. Varma R, Ying-Lai M, Francis BA, Nguyen BB, Deneen J, Wilson MR, et al. Prevalence of open-angle glaucoma and ocular hypertension in Latinos: the Los Angeles Latino Eye Study. Ophthalmology. 2004;111:1439–48. Hurt L. Glaucoma, active component, U.S. Armed Forces, 1998-2013. MSMR. 2014;21:17–23. Moyer VA, U.S. Preventive Services Task Force. Screening for glaucoma: U.S. Preventive Services Task Force Recommendation Statement. Ann Intern Med. 2013;159:484–9. Harasymowycz P, Kamdeu Fansi A, Papamatheakis D. Screening for primary open-angle glaucoma in the developed world: are we there yet? Can J Ophthalmol. 2005;40:477–86. Thapa SS, Kelley KH, Rens GV, Paudyal I, Chang L. A novel approach to glaucoma screening and education in Nepal. BMC Ophthalmol. 2008;8:21. Vistamehr S, Shelsta HN, Palmisano PC, Filardo G, Bashford K, Chaudhri K, et al. Glaucoma screening in a high-risk population. J Glaucoma. 2006;15:534–40. Levi L, Schwartz B. Glaucoma screening in the healthcare setting. Surv Ophthalmol. 1983;28:164–74. 
Renzi C, Sorge C, Fusco D, Agabiti N, Davoli M, Perucci CA. Reporting of quality indicators and improvement in hospital performance: the P.Re.Val.E. Regional Outcome Evaluation Program. Health Serv Res. 2012;47:1880–901. Friedman DS, Nordstrom B, Mozaffari E, Quigley HA. Glaucoma management among individuals enrolled in a single comprehensive insurance plan. Ophthalmology. 2005;112:1500–4. Ung C, Murakami Y, Zhang E, Alfaro T, Zhang M, Seider MI, et al. The association between compliance with recommended follow-up and glaucomatous disease severity in a county hospital population. Am J Ophthalmol. 2013;156:362–9. Prum Jr BE, Rosenberg LF, Gedde SJ, Mansberger SL, Stein JD, Moroi SE, et al. Primary Open-Angle Glaucoma Preferred Practice Pattern® Guidelines. Ophthalmology. 2016;123:P41–111. Jin QX, Li JQ, Lin WJ, Wang ZW, Wang CM, Xin LH, et al. Investigation and analysis on the causes of failing in follow-up after antiglaucoma operation. J Clin Ophthalmol. 2006;14(6):516–8. Liu K, Rong SS, Liang YB, Fan SJ, Liu WR, Sun X, et al. A retrospective survey of long-term follow-up on primary angle closure glaucoma after trabeculectomy. Ophthalmology in China. 2011;20(1):50–4. Yang K, Jin L, Li L, Zeng S, Dan A, Chen T, et al. Preoperative characteristics and compliance with follow-up after trabeculectomy surgery in rural southern China. Br J Ophthalmol. 2017;101(2):131–7. Kosoko O, Quigley HA, Vitale S, Enger C, Kerrigan L, Tielsch JM. Risk factors for noncompliance with glaucoma follow-up visits in a residents' eye clinic. Ophthalmology. 1998;105:2105–11. Ngan R, Lam DL, Mudumbai RC, Chen PP. Risk factors for noncompliance with follow-up among normal-tension glaucoma suspects. Am J Ophthalmol. 2007;144:310–1. Lee PP, Feldman ZW, Ostermann J, Brown DS, Sloan FA. Longitudinal rates of annual eye examinations of persons with diabetes and chronic eye diseases. Ophthalmology. 2003;110:1952–9. Murakami Y, Lee BW, Duncan M, Kao A, Huang JY, Singh K, et al. Racial and ethnic disparities in adherence to glaucoma follow-up visits in a county hospital population. Arch Ophthalmol. 2011;129:872–8. Thompson AC, Thompson MO, Young DL, Lin RC, Sanislo SR, Moshfeghi DM, et al. Barriers to Follow-up and Strategies to Improve Adherence to Appointments for Care of Chronic Eye Diseases. Invest Ophthalmol Vis Sci. 2015;56:4324–31. This study is funded by Wenzhou Medical University R&D Fund, No. QTJ13009 and Health Innovation Talents in Zhejiang Province (2016). No. 25. YBL conceived the idea of GDR, GCR and GFAR and revised and finalized the manuscript. YZ drafted the manuscript. DM suggested critical revisions to the manuscript. NC suggested critical improvements of the concepts and the manuscript's revision. All authors read and approved the final manuscript. Yuanbo Liang is a Professor in Ophthalmology, Director of the Centre for Clinical & Epidemiological Eye Research, Eye Hospital of Wenzhou Medical University, Zhejiang, China, and a joint-appointed reader for the Global Eye Health Unit, Centre for Public Health, Queen's University Belfast, UK. Clinical and Epidemiological Eye Research Center, The Eye Hospital of Wenzhou Medical University, Wenzhou, China Yuan Bo Liang Centre for Public Health, Queens University, Belfast, UK & Nathan Congdon Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China Ye Zhang Kellogg Eye Center, University of Michigan, Ann Arbor, MI, USA David C. Musch Eye Hospital, School of Optometry and Ophthalmology, Wenzhou Medical College, No. 
270, Xue Yuan Xi Road, Wenzhou, Zhejiang, 3250027, China. Correspondence to Yuan Bo Liang. Citation: Liang, Y.B., Zhang, Y., Musch, D.C. et al. Proposing new indicators for glaucoma healthcare service. Eye and Vis 4, 6 (2017). https://doi.org/10.1186/s40662-017-0071-0. Keywords: Healthcare indicator; Epidemiology of important eye diseases.
On the computational complexity of curing non-stoquastic Hamiltonians. Milad Marvian, Daniel A. Lidar & Itay Hen (ORCID: orcid.org/0000-0002-7009-7739). Nature Communications volume 10, Article number: 1571 (2019). Quantum many-body systems whose Hamiltonians are non-stoquastic, i.e., have positive off-diagonal matrix elements in a given basis, are known to pose severe limitations on the efficiency of Quantum Monte Carlo algorithms designed to simulate them, due to the infamous sign problem. We study the computational complexity associated with 'curing' non-stoquastic Hamiltonians, i.e., transforming them into sign-problem-free ones. We prove that if such transformations are limited to single-qubit Clifford group elements or general single-qubit orthogonal matrices, finding the curing transformation is NP-complete. We discuss the implications of this result. The "negative sign problem", or simply the "sign problem" [1], is a central unresolved challenge in quantum many-body simulations, preventing physicists, chemists, and material scientists alike from being able to efficiently simulate many of the most profound macroscopic quantum physical phenomena of nature, in areas as diverse as high-temperature superconductivity and material design through neutron stars to lattice quantum chromodynamics. More specifically, the sign problem slows down quantum Monte Carlo (QMC) algorithms [2, 3], which are in many cases the only practical method available for studying large quantum many-body systems, to the point where they become practically useless. QMC algorithms evaluate thermal averages of physical observables by the (importance-) sampling of quantum configuration space via the decomposition of the partition function into a sum of easily computable terms, or weights, which are in turn interpreted as probabilities in a Markovian process. Whenever this decomposition contains negative terms, QMC methods tend to converge exponentially slowly. Most dishearteningly, it is typically the systems with the richest quantum-mechanical behavior that exhibit the most severe sign problem. In defining the scope under which QMC methods are sign-problem free, the concept of "stoquasticity", first introduced by Bravyi et al. [4], has recently become central.
The most widely used definition of a local stoquastic Hamiltonian is Definition 1 [5]: A local Hamiltonian, \(H = \mathop {\sum}\nolimits_{a = 1}^M H_a\), is called stoquastic with respect to a basis \({\cal{B}}\), iff all the local terms Ha have only non-positive off-diagonal matrix elements in the basis \({\cal{B}}\). In the basis \({\cal{B}}\), the partition function decomposition of stoquastic Hamiltonians leads to a sum of strictly nonnegative weights and such Hamiltonians hence do not suffer from the sign problem. For example, in the path-integral formulation of QMC with respect to a basis \({\cal{B}} = \{ b\}\), the partition function Z at an inverse temperature β is reduced to an L-fold product of sums over complete sets of basis states, {b1}, …, {bL}, which are weighted by the size of the imaginary-time slice Δτ = β/L and the matrix elements of e−ΔτH. Namely, \(Z = \mathop {\prod}\nolimits_{l = 1}^L \mathop {\sum}\nolimits_{b_l} {\langle {b_l| \,{\mathrm{e}}^{ - \Delta \tau H}| b_{l + 1}} \rangle }\), where L is the number of slices and periodic boundary conditions are assumed. For a Hamiltonian H that is stoquastic in the basis \({\cal{B}}\), all the matrix elements of e−ΔτH are nonnegative for any Δτ, leading to nonnegative weights for each time slice. On the other hand, non-stoquastic Hamiltonians, whose local terms have positive off-diagonal entries, induce negative weights and generally lead to the sign problem [1, 6] unless certain symmetries are present. The concept of stoquasticity is also important from a computational complexity-theory viewpoint. For example, the complexity class StoqMA, associated with the problem of deciding whether the ground-state energy of stoquastic local Hamiltonians is above or below certain values, is expected to be strictly contained in the complexity class QMA, which poses the same decision problem for general local Hamiltonians [4]. In addition, StoqMA appears as an essential part of the complexity classification of the two-local qubit Hamiltonian problem [7]. However, stoquasticity does not imply efficient (i.e., polynomial-time) equilibration. For example, finding the ground-state energy of a classical Ising model, which is trivially stoquastic, is already NP-hard [8]. Conversely, non-stoquasticity does not imply inefficiency: there exist numerous cases where an apparent sign problem (i.e., non-stoquasticity) is the result of a naive basis choice that can be transformed away, resulting in efficient equilibration [7, 9, 10, 11]. Here, we focus on the latter, i.e., whether non-stoquasticity can be "cured". To this end, we first propose an alternative definition of stoquasticity that is based on the computational complexity associated with transforming non-stoquastic Hamiltonians into stoquastic ones. Then, we proceed by proving that finding such a transformation for general local Hamiltonians, even if restricted to the single-qubit Clifford group or the single-qubit orthogonal group, is computationally hard. Along the way, we provide several results of independent interest, in particular an algorithm to efficiently group local Hamiltonian terms, and an algorithm to efficiently decide the curing problem using Pauli operators. We conclude by discussing some implications of our results, employing planted solution ideas, and also some potential cryptographic applications. Computationally stoquastic Hamiltonians To motivate our alternative definition, we first note that any Hamiltonian can trivially be presented as stoquastic via diagonalization.
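As a small numerical aside (our own illustration, not taken from the paper), the following numpy sketch makes Definition 1 and the diagonalization remark concrete for an arbitrarily chosen two-qubit toy Hamiltonian: it checks the sign of the off-diagonal elements in the computational basis, shows that the Hamiltonian becomes trivially stoquastic in its own eigenbasis, and also shows a local Hadamard cure for this particular example (a trick discussed in more detail in the next paragraphs).

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
W = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)  # Hadamard

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def is_stoquastic(H, tol=1e-12):
    """True iff all off-diagonal elements of H, in the basis in which it is written, are <= 0."""
    off = H - np.diag(np.diag(H))
    return bool(np.all(off <= tol))

# Toy two-qubit Hamiltonian with a positive off-diagonal (X X) term:
H = kron(X, X) - kron(Z, Z)
print(is_stoquastic(H))             # False: the X X term puts +1 entries on the anti-diagonal

# Trivial cure: in its own eigenbasis H is diagonal, hence stoquastic.
evals, V = np.linalg.eigh(H)
print(is_stoquastic(V.T @ H @ V))   # True

# A local cure also exists for this example: Hadamards swap X and Z, giving Z Z - X X.
U = kron(W, W)
print(is_stoquastic(U @ H @ U))     # True
```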
However, the complexity of finding the diagonalizing basis generally grows exponentially with the size of the system (as noted in ref. 6) and the new basis will generally be highly nonlocal and hence not efficiently representable. We also note that it is straightforward to construct examples where apparent non-stoquasticity may be transformed away. For example, consider the n-spin Hamiltonian \(H_{XZ} = {\sum} \tilde J_{ij}X_iX_j - {\sum} J_{ij}Z_iZ_j\) with nonnegative \(J_{ij},\tilde J_{ij}\), where Xi and Zi are the Pauli matrices acting on spin i. This Hamiltonian is non-stoquastic, but can easily be converted into a stoquastic form. Denoting the Hadamard gate (which swaps X and Z) by W, consider the transformed Hamiltonian \(H_{ZX} = W^{ \otimes n}H_{XZ}W^{ \otimes n} = - {\sum} J_{ij}X_iX_j + {\sum} \tilde J_{ij}Z_iZ_j\), which is stoquastic. The sign problem of the original Hamiltonian, HXZ, can thus be efficiently cured by a unit-depth circuit of single-qubit rotations. Moreover, thermal averages are invariant under unitary transformations; namely, defining \(\left\langle A \right\rangle _H \equiv \frac{{{\mathrm{Tr}}(e^{ - \beta H}A)}}{{{\mathrm{Tr}}(e^{ - \beta H})}}\), it is straightforward to check that \(\left\langle A \right\rangle _H = \left\langle {UAU^\dagger } \right\rangle _{UHU^\dagger }\). Therefore, if QMC is run on the transformed, stoquastic Hamiltonian, it is no longer slowed down by the sign problem. Finally, note that Definition 1 implies that a local Hamiltonian, \(H = \mathop {\sum}\nolimits_{a = 1}^M H_a\), is stoquastic if all terms Ha are stoquastic. However, there always remains some arbitrariness in the manner in which the total Hamiltonian is decomposed into the various terms. Consider, e.g., H = −2X1 + X1Z2. The second term separately is non-stoquastic, whereas the sum is stoquastic. This suggests that the grouping of terms matters (see the Methods section, "Grouping terms without changing the basis"). The above considerations motivate a reexamination of the concept of stoquasticity from a complexity-theory perspective, which can have important consequences for QMC simulations. (A related approach was discussed in ref. 12.) For example, given a k-local non-stoquastic Hamiltonian \(H = \mathop {\sum}\nolimits_a H_a\) (where each summand is a k-local term, i.e., a tensor product of at most k non-identity single-qubit Pauli operators), similarly to ref. 13, we may ask whether there exists a constant-depth quantum circuit U such that \(H^{\prime} = UHU^\dagger\) can be written as a k′-local stoquastic Hamiltonian \(H^{\prime} = \mathop {\sum}\nolimits_a H^{\prime}_a\) and if so, what the complexity associated with finding it is. It is the answer to the latter question that determines whether the Hamiltonian in question should be considered computationally stoquastic, i.e., whether it is feasible (in a complexity theoretic sense) to find a "curing" transformation U, which would then allow QMC to compute thermal averages with H by replacing it with H′. More formally, we propose the following definition: Definition 2 A unitary transformation U "cures" a non-stoquastic Hamiltonian H (i.e., removes its sign problem) represented in a given basis if \(H^{\prime} = UHU^\dagger\) is stoquastic, i.e., its off-diagonal elements in the given basis are all non-positive. 
A family of local Hamiltonians {H} represented in a given basis is efficiently curable (or, equivalently, computationally stoquastic) if there exists a polynomial-time classical algorithm such that for any member of the family H, the algorithm can find a unitary U and a Hamiltonian H′ with the property that \(H^{\prime} = UHU^\dagger\) is local and stoquastic in the given basis. As an example, the Hamiltonian HXZ considered above is efficiently curable. General local Hamiltonians are unlikely to be efficiently curable, as this would imply the implausible result that QMA=StoqMA [13]. Note that given some class of basis transformations, our definition distinguishes between the ability to cure a Hamiltonian efficiently or in principle. For example, deciding whether a Pauli group element \(U = \mathop { \otimes }\nolimits_{i = 1}^n u_i\), where ui belongs to the single-qubit Pauli group \({\cal{P}}_1 = \{ I,X,Y,Z\} \times \{ \pm 1, \pm i\}\), can cure each term {Ha} of a k-local Hamiltonian \(H = \mathop {\sum}\nolimits_a H_a\) can be solved in polynomial time (see the Methods section, "Curing using Pauli operators"). However, the Hamiltonian H = X1Z2 cannot be made stoquastic in principle using a Pauli group element, as conjugating it with Pauli operators results in ±X1Z2, both of which are non-stoquastic (see ref. 13 for results on an intrinsic sign problem for local Hamiltonians). Therefore, the fact that general local Hamiltonians are not considered to be computationally stoquastic does not imply that curing is computationally hard for a given class of transformations. The last example illustrates that, while the curing problem can be efficiently decided for the Pauli group, this group can cure only a very limited family of Hamiltonians. This motivates us to consider the curing problem beyond the Pauli group. Our main result is a proof that even for particularly simple local transformations such as the single-qubit Clifford group and real-valued rotations, the problem of deciding whether a family of local Hamiltonians is curable cannot be solved efficiently, in the sense that it is equivalent to solving 3SAT and is hence NP-complete. We assume that a k-local Hamiltonian \(H = \mathop {\sum}\nolimits_a H_a\) is described by specifying each of the local terms Ha, and the goal is to find a unitary U that cures each of these local terms. In general, a unitary U that cures the total Hamiltonian H may not necessarily cure all Ha separately. However, for all of the constructions in this paper, we prove that a unitary U cures H if and only if it cures all Ha separately. The decomposition {Ha} is merely used to guarantee that verification is efficient and that the problem is contained in NP. Complexity of curing for the single-qubit Clifford group To study the computational complexity associated with finding a curing transformation U, we shall consider for simplicity single-qubit unitaries \(U = \mathop { \otimes }\nolimits_{i = 1}^n u_i\) and only real-valued Hamiltonian matrices. As we shall show, even subject to these simplifying restrictions, the problem of finding a curing transformation U is computationally hard when \(U\) is not in the Pauli group. We begin by considering the computational complexity of finding local rotations from a discrete and restricted set of rotations.
Specifically, we consider the single-qubit Clifford group \({\cal{C}}_1\) (with group action defined as conjugation by one of its elements), defined as \({\cal{C}}_1 = \{ U|UgU^\dagger \in {\cal{P}}_1\;\forall g \in {\cal{P}}_1\}\), i.e., the normalizer of \({\cal{P}}_1\). It is well known that \({\cal{C}}_1\) is generated by W and the phase gate P = diag(1, i)14. Theorem 1 Let \(U = \mathop { \otimes }\nolimits_{i = 1}^n u_i\), where uibelongs to the single-qubit Clifford group. Deciding whether there exists a curing unitary U for 3-local Hamiltonians is NP-complete. We prove this theorem by reducing the problem to the canonical NP-complete problem known as 3SAT (3-satisfiability)15, beginning with the following lemma: Lemma 1 Let ui ∈ {I, W}, where I is the identity operation and W is the Hadamard gate. Deciding whether there exists a curing unitary \(U = \mathop { \otimes }\nolimits_{i = 1}^n u_i\) for 3-local Hamiltonians is NP-complete. To prove Lemma 1, we first introduce a mapping between 3SAT and 3-local Hamiltonians. Our goal is to find an assignment of n binary variables xi ∈ {0, 1} such that the unitary \(W(x) \equiv \mathop { \otimes }\nolimits_{i = 1}^n W_i^{x_i}\) [where x ≡ (x1, …, xn)] rotates an input Hamiltonian to a stoquastic Hamiltonian. We use the following 3-local Hamiltonian as our building block: $$H_{ijk}^{(111)} = Z_iZ_jZ_k - 3(Z_i + Z_j + Z_k) - (Z_iZ_j + Z_iZ_k + Z_jZ_k),$$ where i, j, and k are three different qubit indices. It is straightforward to check that $$W(x)H_{ijk}^{(111)}W^\dagger (x) = W_i^{x_i} \otimes W_j^{x_j} \otimes W_k^{x_k}(H_{ijk}^{(111)})W_i^{x_i} \otimes W_j^{x_j} \otimes W_k^{x_k}$$ is stoquastic ("True") for any combination of the binary variables (xi, xj, xk) except for (1, 1, 1), which makes Eq. (2) non-stoquastic ("False"). This is precisely the truth table for the 3SAT clause \((\bar x_i \vee \bar x_j \vee \bar x_k)\), where ∨ denotes the logical disjunction and the bar denotes negation (see Table 1). We can define the other seven possible 3SAT clauses by conjugating \(H_{ijk}^{(111)}\) with Hadamard or identity gates: $$H_{ijk}^{(\alpha \beta \gamma )} = W_i^{\bar \alpha } \otimes W_j^{\bar \beta } \otimes W_k^{\bar \gamma } \cdot H_{ijk}^{(111)} \cdot W_i^{\bar \alpha } \otimes W_j^{\bar \beta } \otimes W_k^{\bar \gamma }.$$ Table 1 Mapping the problem of finding a suitable change of basis to the Boolean satisfiability problem The Hamiltonian \(W(x)H_{ijk}^{(\alpha \beta \gamma )}W^\dagger (x)\) is non-stoquastic (corresponds to a clause that evaluates to False) only when (xi, xj, xk) = (α, β, γ), and is stoquastic (True) for any other choice of the variables x. We have thus established a bijection between 3-local Hamiltonians \(H_{ijk}^{(\alpha \beta \gamma )}\), with (α, β, γ) ∈ {0, 1}3, and the eight possible 3SAT clauses on three variables (xi, xj, xk) ∈ {0, 1}3. We denote these clauses, which evaluate to False iff (xi, xj, xk) = (α, β, γ), by \(C_{ijk}^{(\alpha \beta \gamma )}\). The final step of the construction is to add together such "3SAT-clause Hamiltonians" to form $$H_{{\mathrm{3SAT}}} = \mathop {\sum}\limits_C {H_{ijk}^{(\alpha \beta \gamma )}},$$ where C is the set of all M clauses in the given 3SAT instance \(\wedge C_{ijk}^{(\alpha \beta \gamma )}\). 
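The truth-table claim above is easy to verify numerically. The following numpy sketch (our own check, not code from the paper) builds the clause Hamiltonian \(H_{ijk}^{(111)}\) on three qubits and conjugates it with \(W^{x_1} \otimes W^{x_2} \otimes W^{x_3}\) for all eight assignments; it should report a stoquastic result for every assignment except (1, 1, 1).

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
Z = np.array([[1., 0.], [0., -1.]])
W = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)  # Hadamard

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def is_stoquastic(H, tol=1e-12):
    off = H - np.diag(np.diag(H))
    return bool(np.all(off <= tol))

# The clause Hamiltonian H^(111) on qubits (1, 2, 3), as defined above:
H111 = (kron(Z, Z, Z)
        - 3 * (kron(Z, I2, I2) + kron(I2, Z, I2) + kron(I2, I2, Z))
        - (kron(Z, Z, I2) + kron(Z, I2, Z) + kron(I2, Z, Z)))

# Conjugate by W^{x1} (x) W^{x2} (x) W^{x3} for every assignment and test stoquasticity.
for x in product([0, 1], repeat=3):
    U = kron(*[W if xi else I2 for xi in x])   # real and symmetric, so U H U equals U H U^dagger
    print(x, is_stoquastic(U @ H111 @ U))      # expected: True for all x except (1, 1, 1)
```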
Having established a bijection between 3SAT clauses and 3SAT-clause Hamiltonians, the final step is to show that finding x such that $$H^\prime = W(x)H_{{\mathrm{3SAT}}}W(x)$$ is stoquastic for every H3SAT, is equivalent to solving the NP-complete problem of finding satisfying assignments x for the corresponding 3SAT instances. To prove the equivalence, we show (i) that satisfying a 3SAT instance implies that the corresponding H3SAT is cured, and (ii) that if H3SAT is cured, this implies that the corresponding 3SAT instance is satisfied. Note that any assignment x that satisfies the given 3SAT instance also satisfies each individual clause. It follows from the bijection we have established that such an assignment cures each corresponding 3SAT-clause Hamiltonian individually. The stoquasticity of H′ then follows by noting that the tensor product of a stoquastic Hamiltonian with the identity matrix is still stoquastic and the sum of stoquastic Hamiltonians is stoquastic. To simplify the argument, we assume that each clause has exactly three variables. This version of 3SAT, sometimes called EXACT-3SAT, remains NP-hard15. We prove that an unsatisfied 3SAT instance implies that the corresponding H3SAT is not cured. It suffices to focus on a particular clause \(C_{ijk}^{(\alpha \beta \gamma )}\). The choice of variables that makes this clause False rotates the corresponding 3SAT-clause Hamiltonian to one that contains a non-stoquastic +XiXjXj term, which generates positive off-diagonal elements in specific locations in the matrix representation of H3SAT. In what follows, we show that no other 3SAT-clause Hamiltonian in H3SAT contains a ±XiXjXk term and therefore these positive off-diagonal elements cannot be canceled out or made negative regardless of the choice of the other variables in the assignment. To see this, we first note that a 3SAT-clause Hamiltonian that does not contain xi, xj, and xk, cannot generate a ±XiXjXk term. Second, a choice of assignment for xi, xj, and xk that does not satisfy \(C_{ijk}^{(\alpha \beta \gamma )}\) would satisfy any other 3SAT clause on these three variables. A satisfied 3SAT clause also does not generate an XiXjXk term. Therefore, the rotated Hamiltonian is guaranteed to be non-stoquastic. This establishes that the problem is NP-Hard. Checking whether a given U cures all the local terms {Ha} is efficient and therefore the problem is NP-complete. To complete the proof of Theorem 1, let us consider the modified Hamiltonian $$\tilde H_{{\mathrm{3SAT}}} = H_{{\mathrm{3SAT}}} + cH_0,H_0 = - \mathop {\sum}\limits_{i = 1}^n {(X_i + Z_i)},$$ where H0 is manifestly stoquastic and c is any number larger than the maximum number of clauses that any variable appears in. As there are M clauses, we simply choose c = O(1)M, which is still a polynomial in the number of variables. (Note that even a restricted variant of 3SAT with each variable restricted to appear at most in a constant number of clauses is still NP-Complete16 and therefore we can take c to be a constant.) The goal is to find a unitary \(U = \mathop { \otimes }\nolimits_{i = 1}^n u_i\) with \(u_i \in {\cal{C}}_1\) that cures \(\tilde H_{{\mathrm{3SAT}}}\). Note first that any choice of \(U = \otimes _{i = 1}^nW_i^{x_i}\) that cures H3SAT also cures \(\tilde H_{{\mathrm{3SAT}}}\). 
Second, note that choosing any ui that keeps H0 stoquastic is equivalent to choosing one of the elements of \({\cal{C}}_1^\prime \equiv \{ I,X,W,XW\} \subset {\cal{C}}_1\) (e.g., the phase gate, which is an element of \({\cal{C}}_1\), maps X to Y so it is excluded, as is WX, which maps Z to −X). Therefore, by choosing c to be large enough, any choice of \(u_i \in {\cal{C}}_1\backslash {\cal{C}}_1^\prime\) would transform \(\tilde H_{{\mathrm{3SAT}}}\) into a non-stoquastic Hamiltonian. It follows that if \(u_i \in {\cal{C}}_1\) and is to cure \(\tilde H_{{\mathrm{3SAT}}}\) then in fact it must be an element of \({\cal{C}}_1^\prime\). Next, we note that conjugating a matrix by a tensor product of X or identity operators only shuffles the off-diagonal elements, but never changes their values (for a proof see the Methods section, "Conjugation by a product of X operators"). Therefore, for the purpose of curing a Hamiltonian, applying X is equivalent to applying I and applying XW is equivalent to applying W. With this observation, the set of operators that can cure a Hamiltonian is effectively reduced from {I, W, XW, X} to {I, W}. According to Lemma 1, deciding whether such a curing transformation exists is NP-complete. Complexity of curing for the single-qubit orthogonal group Similarly, we can use Lemma 1 to show that the problem of curing the sign problem remains NP-complete when the set of allowed rotations is extended to the continuous group of single-qubit orthogonal matrices, i.e., transformations of the form \(Q = \mathop { \otimes }\nolimits_{i = 1}^n q_i\), where \(q_i^Tq_i = I\) ∀i. Namely: Theorem 2 Deciding whether there exists a curing orthogonal transformation Q for 6-local Hamiltonians is NP-complete. See the Methods section, "Proof of Theorem 2", for the proof. In analogy to the proof of Theorem 1, the crucial step is to show that by promoting each Z, X, and W to a two-qubit operator, the continuous set of possible curing transformations reduces to the discrete set considered in Lemma 1. Implications and applications An immediate and striking implication of Theorem 1 is that even under the promise that a non-stoquastic Hamiltonian can be cured by one-local Clifford unitaries (corresponding to trivial basis changes), the problem of actually finding this transformation is unlikely to have a polynomial-time solution. An interesting implication of Theorem 2 is the possibility of constructing "secretly stoquastic" Hamiltonians. That is, one may generate stoquastic quantum many-body Hamiltonians Hstoq, but present these in a "scrambled" non-stoquastic form \(H_{{\mathrm{nonstoq}}} = UH_{{\mathrm{stoq}}}U^\dagger\), where U is a tensor product of single-qubit orthogonal matrices (or in the general case a constant- depth quantum circuit). We conjecture that the latter Hamiltonians will be computationally hard to simulate using QMC by parties that have no access to the "descrambling" circuit U. In other words, it is possible to generate efficiently simulable spin models that might be inefficient to simulate unless one has access to the "secret key" to make them stoquastic (see Fig. 1). This observation may potentially have cryptographic applications (see the Methods section, "Encryption based on secretly stoquastic Hamiltonians"). A classical bit string connecting a stoquastic Hamiltonian to a seemingly non-stoquastic Hamiltonian. 
One may generate an n-qubit stoquastic Hamiltonian Hstoq and then transform it using a randomly chosen unitary (specified by a classical bit string) to bring it into a seemingly non-stoquastic form. Unlike the generated stoquastic Hamiltonian, the simulation of the seemingly non-stoquastic Hamiltonian can be computationally hard. Also, as discussed here, given a non-stoquastic Hamiltonian, finding the bit string that converts it into a stoquastic Hamiltonian can be computationally hard in general. Therefore, the classical bit string can serve as a secret key, without which certain properties of Hstoq cannot be efficiently simulated Our work also has implications for the connection between the sign problem and the NP-hardness of a QMC simulation. A prevailing view of this issue associates the origin of the NP-hardness of a QMC simulation to the relation between a ("fermionic") Hamiltonian that suffers from a sign problem and the corresponding ("bosonic") Hamiltonian obtained by replacing every coupling coefficient by its absolute value. Consider the following example:. \(H_X = \mathop {\sum}\nolimits_{ij} J_{ij}X_iX_j\), with Jij randomly chosen from the set {0, ±J} on a three-dimensional lattice, has a sign problem. Deciding whether its ground-state energy is below a given bound is NP-complete8. Deciding the same for its bosonic and sign-problem-free version \(H_{|X|} = \mathop {\sum}\nolimits_{ij} |J_{ij}|X_iX_j\) is in BPP (classical polynomial time with bounded error) since this Hamiltonian is that of a simple ferromagnet. The conclusion drawn in ref. 6 was that since the bosonic version is easy to simulate, the sign problem is the origin of the NP-hardness of a QMC simulation of this model (HX). The view we advocate here is that a solution to the sign problem is to find an efficiently computable curing transformation that removes it in such a way that the model has the same physics (in general the fermionic and bosonic versions of the same Hamiltonian do not), i.e., conserves thermal averages. In the above example, computing thermal averages via a QMC simulation of HX is the same as for \(H_Z = W^{ \otimes n}H_XW^{ \otimes n} = \mathop {\sum}\nolimits_{ij} J_{ij}Z_iZ_j\), which is stoquastic. Thus, the sign problem of HX is efficiently curable, after which (when it is presented as HZ) deciding its ground-state energy remains NP-hard. We have proposed an alternative definition of stoquasticity (or absence of the sign problem) of quantum many-body Hamiltonians that is motivated by computational complexity considerations. We discussed the circumstances under which non-stoquastic Hamiltonians can in fact be made stoquastic by the application of single-qubit rotations and in turn potentially become efficiently simulable by QMC algorithms. We have demonstrated that finding the required rotations is computationally hard when they are restricted to the one-qubit Clifford group or one-qubit continuous orthogonal matrices. These results raise multiple questions of interest. It is important to clarify the computational complexity of finding the curing transformation in the case of constant-depth circuits that also allow two-body rotations, whether discrete or continuous. Also, since our NP-completeness proof involved 3- and 6-local Hamiltonians, it is interesting to try to reduce it to 2-local building blocks. Another direction into which these results can be extended is to relax the constraints on the off-diagonal elements and require that they are smaller than some small ε > 0. 
This is relevant when some small positive off-diagonal elements can be ignored in a QMC simulation. Finally, it is natural to reconsider our results from the perspective of quantum computing. Namely, for non-stoquastic Hamiltonians that are curable, do there exist quantum algorithms that cure the sign problem more efficiently than is possible classically? With the advent of quantum computers, specifically quantum annealers, it may be the case that these can be used as quantum simulators, and as such they will not be plagued by the sign problem. Will such physical implementations of quantum computers offer advantages over classical computing even for problems that are incurably non-stoquastic? We leave these as open questions to be addressed in future studies. Proof of Theorem 2 The proof builds on that of Theorem 1, but first we note that the clause Hamiltonians introduced in Eq. (2) can have curing solutions that are orthogonal rotations outside the Clifford group (see Fig. 2). To deal with this richer set of rotations—which is now a continuous group—we promote each Z, X, and W in the clause Hamiltonians to a two-qubit operator: \(Z_i \ \mapsto \ \bar Z_i \equiv Z_{2i - 1}Z_{2i}\), \(X_i \ \mapsto \ \bar X_i \equiv X_{2i - 1}X_{2i}\), \(W_i^\alpha \ \mapsto \ \overline W_i^\alpha \equiv W_{2i - 1}^\alpha \otimes W_{2i}^\alpha\). Thus Eq. (1) becomes $$\bar H_{ijk}^{(111)} = \bar Z_i\bar Z_j\bar Z_k - 3(\bar Z_i + \bar Z_j + \bar Z_k) - (\bar Z_i\bar Z_j + \bar Z_i\bar Z_k + \bar Z_j\bar Z_k).$$ Orthogonal rotations that can cure the clause Hamiltonians introduced in Eq. (2). The yellow region depicts the angles of rotations that can cure \(H_{123}^{(111)}\) and the blue region depicts the angles of the curing rotations for \(H_{123}^{(000)}\) Let \(\bar W(x) \equiv \mathop { \otimes }\nolimits_{i = 1}^n \bar W_i^{x_i}\). It is again straightforward to check that \(\bar W(x)\bar H_{ijk}^{(111)}\bar W^\dagger (x)\) is stoquastic for any combination of the binary variables (xi, xj, xk) except for (1, 1, 1). Likewise, generalizing Eq. (3), we define $$\bar H_{ijk}^{(\alpha \beta \gamma )} = \bar W_i^{\bar \alpha } \otimes \bar W_j^{\bar \beta } \otimes \bar W_k^{\bar \gamma } \cdot \bar H_{ijk}^{(111)} \cdot \bar W_i^{\bar \alpha } \otimes \bar W_j^{\bar \beta } \otimes \bar W_k^{\bar \gamma }.$$ \(\bar H_{ijk}^{(\alpha \beta \gamma )}\) is a clause Hamiltonian corresponding to a clause in the 3SAT instance. Similarly, \(\bar W(x)\bar H_{ijk}^{(\alpha \beta \gamma )}\bar W(x)\) is stoquastic for any combination of the binary variables (xi, xj, xk) except for (α, β, γ). Generalizing Eq. (4) we define $$\bar H_{{\mathrm{3SAT}}} = \mathop {\sum}\limits_C {\bar H_{ijk}^{(\alpha \beta \gamma )}},$$ where C denotes the corresponding set of clauses in a 3SAT instance, constructed just as in the proof of Lemma 1. This, again, establishes a bijection between 3SAT clauses and "3SAT-clause Hamiltonians", now of the form \(\bar H_{{\mathrm{3SAT}}}\). In lieu of Eq. (4), we consider the 6-local Hamiltonian \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) $$\widetilde {\bar H}_{{\mathrm{3SAT}}} = \bar H_{{\mathrm{3SAT}}} + cH_0^\prime,\quad H_0^\prime = - \mathop {\sum}\limits_{i = 1}^n 2 \bar Z_i + \bar X_i,$$ where c = O(1). 
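To make the construction above concrete, here is a small numerical check. It is a sketch written for this text rather than code from the paper; the helper names (bar, kron_all, is_stoquastic) and the restriction to a single clause on logical variables 1, 2, 3 (six physical qubits) are choices made here. It builds the promoted clause Hamiltonian and confirms that conjugating by \(\bar W(x)\) leaves it stoquastic for every assignment of (x1, x2, x3) except (1, 1, 1).

```python
# Illustrative check (not from the paper) of the promoted clause Hamiltonian
# \bar H^{(111)}_{123} on 2*3 = 6 qubits: conjugation by \bar W(x) leaves it
# stoquastic for every assignment (x1,x2,x3) except (1,1,1).
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
W = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def bar(op, i, n_vars=3):
    # \bar O_i acts as op on the pair of physical qubits carrying logical site i (0-indexed).
    ops = [I2] * (2 * n_vars)
    ops[2 * i] = op
    ops[2 * i + 1] = op
    return kron_all(ops)

def is_stoquastic(H, tol=1e-9):
    off = H - np.diag(np.diag(H))
    return np.all(off <= tol)

Zb = [bar(Z, i) for i in range(3)]
H111 = (Zb[0] @ Zb[1] @ Zb[2]
        - 3 * (Zb[0] + Zb[1] + Zb[2])
        - (Zb[0] @ Zb[1] + Zb[0] @ Zb[2] + Zb[1] @ Zb[2]))

for x in itertools.product([0, 1], repeat=3):
    Wx = kron_all([W if x[i] else I2 for i in range(3) for _ in range(2)])
    print(x, "stoquastic after conjugation by W(x):", is_stoquastic(Wx @ H111 @ Wx))
# Expected: True for every assignment except (1, 1, 1).
```

Running the same loop on the rotated clause Hamiltonians \(\bar H_{ijk}^{(\alpha \beta \gamma )}\) illustrates the analogous statement quoted above for general clauses.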
Just as in the proof of Theorem (1), we prove that (i) any satisfying assignment of a 3SAT instance provides a curing Q for the corresponding Hamiltonian \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\), and (ii) any Q that cures \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) provides a satisfying assignment for the corresponding 3SAT instance. Because of the relation between a single-qubit orthogonal matrix and a single-qubit rotation, it suffices to prove the hardness only for pure rotations (see the next subsection for the relation between a single-qubit real–orthogonal matrix and a single-qubit rotation); we let \(R(\theta _i) = \left[ {\begin{array}{*{20}{c}} {{\mathrm{cos}}\theta _i} & { - {\mathrm{sin}}\theta _i} \\ {{\mathrm{sin}}\theta _i} & {{\mathrm{cos}}\theta _i} \end{array}} \right]\) denote a rotation by angle θi. Let P(x) denote the product of 2n single-qubit rotations such that if xi = 0 then qubits 2i − 1 and 2i are unchanged, or if xi = 1 then they are both rotated by \(R\left( {\frac{\pi }{4}} \right)\): $$P(x) \equiv \otimes _{i = 1}^n\left( {R\left( {\frac{\pi }{4}} \right)^{x_i} \otimes R\left( {\frac{\pi }{4}} \right)^{x_i}} \right),$$ where x is a n-bit string x = (x1, …, xn). Let us now show that if the 3SAT instance has a satisfying assignment x* then \(P(x^ \ast )\widetilde {\bar H}_{{\mathrm{3SAT}}}P^T(x^ \ast )\) is stoquastic. Note that \(R\left( {\frac{\pi }{4}} \right) = XW\), and as we discussed in the article (under "Complexity of curing for the single-qubit Clifford group") it is equivalent to W for curing. To prove the claim, note that x* necessarily satisfies each individual clause of the 3SAT instance, and therefore makes the corresponding clause Hamiltonian stoquastic, i.e., \(P(x^ \ast )\bar H_{ijk}^{(\alpha \beta \gamma )}P^T(x^ \ast )\) is stoquastic \(\forall C_{ijk}^{(\alpha \beta \gamma )}\). Also, \(P(x)H_0^\prime P^T(x)\) is clearly stoquastic for any x, where \(H_0^\prime = - \mathop {\sum}\nolimits_{i = 1}^n 2\bar Z_i + \bar X_i\) [Eq. (10)]. The stoquasticity of \(P(x^ \ast )\widetilde {\bar H}_{{\mathrm{3SAT}}}P^T(x^ \ast )\) then follows immediately. We need to prove that any rotation that cures \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) provides a satisfying assignment for the corresponding 3SAT instance. We do this in two steps: Below, under "A useful lemma", we prove that for any \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\), any curing rotation \(R = \mathop { \otimes }\nolimits_{i = 1}^{2n} R(\theta _i)\) has to satisfy the condition that \((\theta _{2i - 1},\theta _{2i}) \in \left\{ {\left( {\frac{\pi }{2},\frac{\pi }{2}} \right),\left( {\frac{\pi }{4},\frac{\pi }{4}} \right),(0,0),\left( {\frac{{ - \pi }}{4},\frac{{ - \pi }}{4}} \right)} \right\}\) ∀i. This is the crucial step, since it reduces the problem from a continuum of angles to a discrete set. If \((\theta _{2i - 1},\theta _{2i}) \in \left\{ {(0,0),\left( {\frac{\pi }{2},\frac{\pi }{2}} \right)} \right\}\), we assign xi = 0, while if \((\theta _{2i - 1},\theta _{2i}) \in \left\{ {\left( { - \frac{\pi }{4}, - \frac{\pi }{4}} \right),\left( {\frac{\pi }{4},\frac{\pi }{4}} \right)} \right\}\), we assign xi = 1, since rotations with the angles in each pair have the same effect. If R cures \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\), x = {xi} satisfies the corresponding 3SAT instance. See Fig. 3 for the discrete set of solutions and the corresponding assignments. Reducing the continuous region of solutions to a discrete set. 
The set of possible orthogonal transformations that can cure \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) (introduced in Eq. (10)) reduces to a discrete set. We assign the value of each binary variable satisfying a 3SAT instance depending on the value of the curing transformation. We set xi = 0 if the curing transformation has \((\theta _{2i - 1},\theta _{2i}) \in \left\{ {(0,0),\left( {\frac{\pi }{2},\frac{\pi }{2}} \right)} \right\}\), while we assign xi = 1 if the curing transformation has \((\theta _{2i - 1},\theta _{2i}) \in \left\{ {\left( { - \frac{\pi }{4}, - \frac{\pi }{4}} \right),\left( {\frac{\pi }{4},\frac{\pi }{4}} \right)} \right\}\) To see this, we first note that if R cures \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) it must cure all the clauses separately: Using step (a), we know that any such solution must be one of the four possible cases. Therefore, if R were to cure \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) but does not cure one of the 3SAT-clause Hamiltonians, it would result in a \(\bar X_i\bar X_j\bar X_k\) term in the corresponding clause. Since no other 3SAT-clause Hamiltonian in \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) contains an identical \(\bar X_i\bar X_j\bar X_k\) term, these positive off-diagonal elements cannot be canceled out or made negative, regardless of the choice of the other variables in the assignment. Therefore, if R cures \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\), it also necessarily separately cures all the terms in \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\). By construction, if R cures a term \(\bar H_{ijk}^{(\alpha \beta \gamma )}\), the string x satisfies the corresponding 3SAT clause \(C_{ijk}^{(\alpha \beta \gamma )}\). Thus x satisfies all the clauses in the corresponding 3SAT instance. The decision problem for the existence of R (and hence Q) is therefore NP-hard. Given a unitary U and a set of local terms \(\{ H_a\}\), verifying whether U cures all of the terms is clearly efficient and therefore this problem is NP-complete. Relation between an orthogonal matrix and a rotation The condition \(q_iq_i^T = I\) forces each real–orthogonal matrix qi to be either a reflection or a rotation of the form $$q_i = \left[ {\begin{array}{*{20}{c}} {{\mathrm{cos}}\theta _i} & {a_i\,\sin \theta _i} \\ {{\mathrm{sin}}\theta _i} & { - a_i\,\cos \theta _i} \end{array}} \right]$$ with ai = +1 (a reflection) or ai = −1 (a rotation). The operators X, Z, and Hadamard, are included in the family with ai = 1; I and iY = XZ are in the family with ai = −1. Note that \(\forall H,\forall \theta _i:{\kern 1pt} q_i(\theta _i)Hq_i^T(\theta _i) = q_i(\theta _i + \pi )Hq_i^T(\theta _i + \pi )\). Therefore, the angles that cure a Hamiltonian are periodic with a period of π. Hence, it suffices to consider the curing solutions only in one period: \(\theta _i \in \left. {\left( {\frac{{ - \pi }}{2},\frac{{ + \pi }}{2}} \right.} \right]\). 
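The parametrization just described is easy to sanity-check numerically. The snippet below is purely illustrative (not from the paper); the test Hamiltonian and the angle are arbitrary. It confirms that X, Z, and the Hadamard lie in the ai = +1 (reflection) family, that the identity lies in the ai = −1 (rotation) family, and that conjugation by qi(θi) is unchanged under θi → θi + π.

```python
# Quick numerical sanity check (illustrative, not from the paper) of the
# parametrization of real 2x2 orthogonal matrices and of the pi-periodicity
# of conjugation q(theta) H q(theta)^T.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
W = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard

def q(theta, a):
    # q = [[cos t, a sin t], [sin t, -a cos t]]; a = +1 gives a reflection, a = -1 a rotation.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, a * s], [s, -a * c]])

# Membership of familiar operators in the two families (up to the angle):
print(np.allclose(X, q(np.pi / 2, +1)))    # X        -> a = +1 family
print(np.allclose(Z, q(0.0, +1)))          # Z        -> a = +1 family
print(np.allclose(W, q(np.pi / 4, +1)))    # Hadamard -> a = +1 family
print(np.allclose(np.eye(2), q(0.0, -1)))  # I        -> a = -1 family

# Conjugation by q(theta) only depends on theta mod pi:
rng = np.random.default_rng(1)
H = rng.normal(size=(2, 2))
H = H + H.T          # arbitrary symmetric single-qubit "Hamiltonian"
t = 0.37             # arbitrary angle
for a in (+1, -1):
    lhs = q(t, a) @ H @ q(t, a).T
    rhs = q(t + np.pi, a) @ H @ q(t + np.pi, a).T
    print(a, np.allclose(lhs, rhs))        # True for both families
```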
Next, observe that a reflection by angle θi can be written as $$\left[ {\begin{array}{*{20}{c}} {{\mathrm{cos}}\theta _i} & {{\mathrm{sin}}\theta _i} \\ {{\mathrm{sin}}\theta _i} & { - {\mathrm{cos}}\theta _i} \end{array}} \right] = X\left[ {\begin{array}{*{20}{c}} {{\mathrm{cos}}\left( {\frac{\pi }{2} - \theta _i} \right)} & { - {\mathrm{sin}}\left( {\frac{\pi }{2} - \theta _i} \right)} \\ {{\mathrm{sin}}\left( {\frac{\pi }{2} - \theta _i} \right)} & {{\mathrm{cos}}\left( {\frac{\pi }{2} - \theta _i} \right)} \end{array}} \right] = XR\left( {\frac{\pi }{2} - \theta _i} \right),$$ where $$R(\theta _i) = \left[ {\begin{array}{*{20}{c}} {{\mathrm{cos}}\theta _i} & { - {\mathrm{sin}}\theta _i} \\ {{\mathrm{sin}}\theta _i} & {{\mathrm{cos}}\theta _i} \end{array}} \right].$$ As discussed below (under "Conjugation by a product of X operators"), if \(XR\left( {\frac{\pi }{2} - \theta _i} \right)\) is a curing operator, so is \(R\left( {\frac{\pi }{2} - \theta _i} \right)\). Therefore, any curing \(Q = \mathop { \otimes }\nolimits_{i = 1}^n q_i\) provides a curing \(R = \mathop { \otimes }\nolimits_{i = 1}^n R(\theta _i)\). Hence, the NP-completeness of the decision problem for R implies the NP-hardness of the decision problem for Q, which is the statement of Theorem 2. A useful lemma Here, we prove that any curing rotation \(R = \mathop { \otimes }\nolimits_{i = 1}^{2n} R(\theta _i)\) for any \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\), as introduced in Eq. (10), must satisfy the condition that \((\theta _{2i - 1},\theta _{2i}) \in \left\{ {\left( {\frac{\pi }{2},\frac{\pi }{2}} \right),\left( {\frac{\pi }{4},\frac{\pi }{4}} \right),(0,0),\left( {\frac{{ - \pi }}{4},\frac{{ - \pi }}{4}} \right)} \right\}\) ∀i. In what follows, we show this for i = 1 (local rotations on the first two qubits), but the proof trivially works for any choice of i. Our strategy is to expand any locally rotated \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) on the first two qubits and then to find necessary conditions that any curing rotations on these two qubits must satisfy. With this motivation, we introduce the following lemma: Lemma 2 Let \(\theta _1,\theta _2 \in \left( { - \frac{\pi }{2},\ \frac{\pi }{2}} \right]\). Consider the 2n-qubit Hamiltonian $$H\prime = Z_1 \otimes Z_2 \otimes M_z + X_1 \otimes X_2 \otimes M_x + I_1 \otimes I_2 \otimes M_I,$$ where Mz, Mx, and MI are Hamiltonians on 2n−2 qubits satisfying the following two conditions: (1) the absolute value of at least one element of Mz is different from the absolute value of the corresponding element in Mx; (2) both Mx and Mz have at least one negative element. Then the only rotation R(θ1) ⊗ R(θ2) that can cure H′ has angles given by the following four points: $$(\theta _1,\theta _2) \in \left\{ {\left( {\frac{\pi }{2},\frac{\pi }{2}} \right),\left( {\frac{\pi }{4},\frac{\pi }{4}} \right),(0,0),\left( {\frac{{ - \pi }}{4},\frac{{ - \pi }}{4}} \right)} \right\}.$$ To relate this lemma to our construction, note that any locally rotated \(\widetilde {\bar H}_{{\mathrm{3SAT}}}\) is in the form of H′. To be more precise, we will choose \(H{\prime} = R{\prime}\widetilde {\bar H}_{{\mathrm{3SAT}}}R^{\prime T}\), where \(R{\prime} = \mathop { \otimes }\nolimits_{i = 3}^{2n} R(\theta _i)\). We proceed by first proving the lemma. Then we show that both of the conditions introduced in the lemma are satisfied for our construction. Proof.
It is straightforward to check that $$R(\theta _i)XR( - \theta _i) = {\mathrm{cos}}2\theta _iX - {\mathrm{sin}}2\theta _iZ,$$ $$R(\theta _i)ZR( - \theta _i) = {\mathrm{sin}}2\theta _iX + cos 2\theta _iZ.$$ Using this we have $$\begin{array}{*{20}{l}} {H\prime\prime } \hfill & = \hfill & {[R(\theta _1) \otimes R(\theta _2)]H{\prime}[R( - \theta _1) \otimes R( - \theta _2)]} \hfill \\ {} \hfill & = \hfill & {X_1 \otimes X_2 \otimes [{\mathrm{sin}}2\theta _1\,\sin 2\theta _2M_z + {\mathrm{cos}}2\theta _1\,\cos 2\theta _2M_x]} \hfill \\ {} \hfill & {} \hfill & { + X_1 \otimes Z_2 \otimes [{\mathrm{sin}}2\theta _1\,\cos 2\theta _2M_z - {\mathrm{cos}}2\theta _1\,\sin 2\theta _2M_x]} \hfill \\ {} \hfill & {} \hfill & { + Z_1 \otimes X_2 \otimes [{\mathrm{cos}}2\theta _1\,\sin 2\theta _2M_z - {\mathrm{sin}}2\theta _1\,\cos 2\theta _2M_x]} \hfill \\ {} \hfill & {} \hfill & { + Z_1 \otimes Z_2 \otimes [{\mathrm{cos}}2\theta _1\,\cos 2\theta _2M_z + {\mathrm{sin}}2\theta _1\,\sin 2\theta _2M_x]} \hfill \\ {} \hfill & {} \hfill & { + I_1 \otimes I_2 \otimes M_I.} \hfill \end{array}$$ We next find necessary conditions that θ1 and θ2 must satisfy in order to make H′′ stoquastic. Let A to E be arbitrary matrices, and let [0] denote the all-zero matrix. We note that H′′ can be stoquastic only if the X1Z2 ⊗ B term is zero (where we have dropped the tensor product between the first two qubits for notational simplicity). To see this, we first note that the matrix X1Z2 ⊗ B has both +B and −B as distinct off-diagonal elements for any nonzero matrix B (a similar observation holds for Z1X2 ⊗ C). Second, we note that there are no common off-diagonal elements between X1X2 ⊗ A, X1Z2 ⊗ B, and Z1X2 ⊗ C. Third, there is no common off-diagonal elements between these three matrices and Z1Z2 ⊗ D and I1I2 ⊗ E. Therefore, these terms cannot make the ±B in X1Z2 ⊗ B non-positive. Thus, for H′′ to become stoquastic it is necessary to have B = [0]: $$\sin 2\theta _1\,\cos 2\theta _2M_z - \cos 2\theta _1\,\sin 2\theta _2M_x = [0].$$ Similar reasoning for Z1X2 ⊗ C yields $$\cos 2\theta _1\sin 2\theta _2M_z - \sin 2\theta _1\,\cos 2\theta _2M_x = [0].$$ If the absolute value of at least one element of Mz is different from the absolute value of the corresponding element in Mx (i.e., Condition 1 is satisfied), comparing the corresponding expressions from Eqs. (20) and (21), we can conclude that $$\sin 2\theta _1\,\cos 2\theta _2 = \cos 2\theta _1\sin 2\theta _2 = 0.$$ Equation (22) gives rise to two possible cases: (i) sin 2θ1 = sin 2θ2 = 0, or (ii) cos 2θ1 = cos 2θ2 = 0. For \(\theta _1,\theta _2 \in \left. {\left( { - \frac{\pi }{2},\frac{\pi }{2}} \right.} \right]\). These are the eight possible solutions: $$(\theta _1,\theta _2) \in \left\{ {0,\frac{\pi }{2}} \right\} \times \left\{ {0,\frac{\pi }{2}} \right\},\,{\mathrm{or}}\,(\theta _1,\theta _2) \in \left\{{\hskip -2.5pt}\pm {\hskip -2pt} \frac{\pi }{4} \right\} \times \left\{{\hskip -2.5pt} \pm {\hskip -2pt} \frac{\pi }{4} \right\}.$$ Now, we observe that Condition 2 generates additional constraints on the allowed values of θ1 and θ2. To see this, we consider the X1X2 ⊗ A term in Eq. (19) in the two possible cases. For case (i), when sin 2θ1 = sin 2θ2 = 0 and hence cos 2θ1 = ±1 and cos 2θ2 = ±1, this term becomes X1X2 ⊗ cos 2θ1cos 2θ2Mx. If Mx has any negative element, then the two combinations such that cos 2θ1cos 2θ2 = −1 flip the sign of this element and make the term non-stoquastic. 
That is, if Mx has any negative element, the only rotations that can keep H′′ stoquastic satisfy cos 2θ1 = cos 2θ2 = 1 or cos 2θ1 = cos 2θ2 = −1. Similarly, for case (ii), when cos 2θ1 = cos 2θ2 = 0, if Mz has any negative elements, the only rotations that can keep H′′ stoquastic satisfy sin 2θ1 = sin 2θ2 = 1 or sin 2θ1 = sin 2θ2 = −1. To summarize, if both conditions hold, the solutions are necessarily one of these four points: $$\begin{array}{l}\sin (2\theta _1) = 0,\cos (2\theta _1) = 1,\sin (2\theta _2) = 0,\cos (2\theta _2) = 1\\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \Rightarrow (\theta _1,\theta _2) = (0,0)\end{array}$$ $$\begin{array}{l}\sin(2\theta _1) = 0,\cos(2\theta _1) = - 1,{\mathrm{sin}}(2\theta _2) = 0,\cos(2\theta _2) = - 1\\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \Rightarrow (\theta _1,\theta _2) = \left( {\frac{\pi }{2},\frac{\pi }{2}} \right)\end{array}$$ $$\begin{array}{l}{\mathrm{sin}}(2\theta _1) = 1,{\mathrm{cos}}(2\theta _1) = 0,{\mathrm{sin}}(2\theta _2) = 1,{\mathrm{cos}}(2\theta _2) = 0\\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \Rightarrow (\theta _1,\theta _2) = \left( {\frac{\pi }{4},\frac{\pi }{4}} \right)\end{array}$$ $$\begin{array}{l}{\mathrm{sin}}(2\theta _1) = - 1,{\mathrm{cos}}(2\theta _1) = 0,{\mathrm{sin}}(2\theta _2) = - 1,{\mathrm{cos}}(2\theta _2) = 0\\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \Rightarrow (\theta _1,\theta _2) = \left( {\frac{{ - \pi }}{4},\frac{{ - \pi }}{4}} \right)\end{array}$$ Having proved the lemma, we now proceed with identifying the properties of Mz and Mx for our construction. As mentioned earlier, we choose \(H{\prime} = R{\prime}\widetilde {\bar H}_{{\mathrm{3SAT}}}R^{\prime T}\), where \(R{\prime} = \mathop { \otimes }\nolimits_{i = 3}^{2n} R(\theta _i)\). Mz can be written as Hz−2I, where Hz denotes the terms in Mz coming from \(\bar H_{{\mathrm{3SAT}}}\). Similarly, Mx = Hx − I, where Hx denotes the terms in Mx coming from \(\bar H_{{\mathrm{3SAT}}}\). The term \(\bar Z_1 \otimes H_z\) (recall that \(\bar Z_i \equiv Z_{2i - 1}Z_{2i}\) and \(\bar X_i \equiv X_{2i - 1}X_{2i}\)) is composed of rotated 3SAT-clause Hamiltonians that share \(\bar x_1\) in their corresponding 3SAT clauses. Therefore, we have $$H_z = R\prime \left( {\mathop {\sum}\limits_{C_{1jk}^{(1\beta \gamma )} \in C} {\bar W_j^{\bar \beta }} \otimes \bar W_k^{\bar \gamma }(\bar Z_j\bar Z_k - 3 - \bar Z_j - \bar Z_k)\bar W_j^{\bar \beta } \otimes \bar W_k^{\bar \gamma }} \right)R^{\prime T}.$$ It is straightforward to check that each of the rotated 3SAT-clause Hamiltonians has only non-positive diagonal elements. Namely, using Eq. (17), it is straightforward to check that the max norm, defined as \(\left\Vert A \right\Vert_{max} = max_{ij}|[A]_{ij}|\), of any rotated Pauli operator is at most 1, and therefore the same is true for any tensor product of rotated Pauli operators. In each 3SAT-clause Hamiltonian, there are three non-identity Pauli terms. Therefore, they cannot generate a diagonal element that is larger than 3. There is a −3 term for each clause, guaranteeing that all the diagonal terms remain non-positive. As Hz is a sum of these matrices with all non-positive diagonal elements, we conclude that all the diagonal elements of Hz are non-positive. Hx is similar to Hz, but with a sum over \(C_{1jk}^{(0\beta \gamma )}\). Using similar arguments, we conclude that all the diagonal elements of Hx are non-positive. 
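As a small illustration of Lemma 2 itself (a sketch, not code from the paper): for a toy H′ whose Mz, Mx, and MI are chosen here arbitrarily so that Conditions 1 and 2 hold, a scan over a grid of angles in (−π/2, π/2] finds that only the four angle pairs listed in the lemma leave the rotated Hamiltonian stoquastic.

```python
# Illustrative grid scan (not from the paper): for a toy H' satisfying the two
# conditions of Lemma 2, only the four angle pairs of the lemma cure it.
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def R(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def is_stoquastic(H, tol=1e-9):
    off = H - np.diag(np.diag(H))
    return np.all(off <= tol)

# Toy choices on a single "environment" qubit:
Mz = np.diag([-2.0, -2.0])   # has negative elements (Condition 2)
Mx = np.diag([-1.0, -1.0])   # |entries| differ from Mz's (Condition 1), also negative (Condition 2)
MI = np.zeros((2, 2))        # arbitrary

Hp = (np.kron(np.kron(Z, Z), Mz)
      + np.kron(np.kron(X, X), Mx)
      + np.kron(np.kron(I2, I2), MI))

grid = np.linspace(-np.pi / 2, np.pi / 2, 9)[1:]   # angles in (-pi/2, pi/2]
cured = []
for t1, t2 in itertools.product(grid, repeat=2):
    U = np.kron(np.kron(R(t1), R(t2)), I2)
    if is_stoquastic(U @ Hp @ U.T):
        cured.append((round(np.degrees(t1)), round(np.degrees(t2))))
print(cured)   # expected: [(-45, -45), (0, 0), (45, 45), (90, 90)] (in degrees)
```

For the construction at hand, recall that the diagonal elements of Hz and Hx were just shown to be non-positive.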
Therefore, all the diagonal elements of Mx = Hx − I and Mz = Hz − 2I are negative, and we conclude that Condition 2 is satisfied for our construction. Now, we show that that the first condition also holds. Using the cyclic property of the trace and noting that all the terms in Hz except the −3 are traceless, we have Tr(Hz) = −3k with \(k \in {\Bbb N}_0\) (\(k = 0\) only if \(H_z = 0\), i.e., when there is no \(\bar x_1\) in any of the 3SAT clauses). Therefore we have \({\mathrm{Tr}}(H_z - 2I) = - 3k - 2^{2n - 1}\). Using similar arguments, we conclude that \({\mathrm{Tr}}(H_x) = - 3k{\prime}\) with \(k{\prime} \in {\Bbb N}_0\) and \({\mathrm{Tr}}(H_x - I) = - 3k{\prime} - 2^{2n - 2}\) where \(k{\prime} \in {\Bbb N}_0\). (\(k{\prime} = 0\) only if Hx = 0, i.e., when there is no x1 in any of the 3SAT clauses). Clearly, the two traces cannot be equal for any value of k and k′. From this, in addition to the already-established fact that all the diagonal elements of Hx − I and Hz − 2I are negative, we conclude that at least one diagonal element of Hx−I is different from the corresponding element of Hz − 2I. Therefore, Condition 1 is also satisfied. Grouping terms without changing the basis As discussed here, one ambiguity in the definition of stoquastic Hamiltonians is in the choice of the set {Ha}. With this motivation, and ignoring the freedom in choosing a basis, we address the following question. Problem: We are given the k-local \(H = \mathop {\sum}\nolimits_a {H_a}\), i.e., each Ha acts nontrivially on at most k qubits. In the same basis (without any rotation), find a new set \(H_a^\prime\) satisfying \(H = \mathop {\sum}\nolimits_a {H_a^\prime }\), where each H′a is k′-local and stoquastic (if such a set exists). Obviously, if the total Hamiltonian is stoquastic, then considering the total Hamiltonian as one single Hamiltonian is a valid solution with k′ = n. This description of the Hamiltonian requires a 2n × 2n matrix. We would prefer a k′-local Hamiltonian, i.e., a set consisting of a polynomial number of terms, each 2k′ × 2k′, where k′ is a constant independent of n. Solution: One simple strategy is to consider any k′-local combination of qubits, and to try to find a grouping that makes all of these \(( {\begin{array}{*{20}{c}} n \\ {k\prime } \end{array}} )\) terms stoquastic. To do so, for any k′-local combination of qubits, we generate a set of inequalities. First, for a fixed combination of qubits, we add the terms in \(H = \mathop {\sum}\nolimits_a {H_a}\) that act nontrivially only on those k′ qubits, each with an unknown weight that will be determined later. Then we write down the conditions on the weights to ensure that all the off-diagonal elements are non-positive. This is done for all the \(( {\begin{array}{*{20}{c}} n \\ {k\prime } \end{array}} )\) combinations to get the complete set of linear inequalities. By this procedure, the problem reduces to finding a feasible point for this set of linear inequalities, which can be solved efficiently. (In practice, one can use linear programming optimization tools to check whether such a feasible point exists.) When there is no feasible point for a specific value of k′, we can increase the value of k′ and search again. Example: Assume we are given H = Z1X2−2X2 + X2Z3 and the goal is to find a stoquastic description with k′ = 2. We combine the terms acting on qubits 1 and 2 and then the terms acting on qubits 2 and 3 (there is no term on qubits 1 and 3). We construct h1,2 = α1Z1X2 + α2(−2X2) and h2,3 = α3(−2X2) + α4X2Z3. 
There are two types of constraints: (1) constraints enforcing H = h1,2 + h2,3: $$\alpha _1 = \alpha _4 = 1,\alpha _2 + \alpha _3 = 1,$$ and (2) constraints from stoquasticity of each of the two Hamiltonians: $$1 - 2\alpha _2 \le 0, - 1 - 2\alpha _2 \le 0;$$ $$1 - 2\alpha _3 \le 0, - 1 - 2\alpha _3 \le 0.$$ Simplifying these inequalities, we have 0.5 ≤ α2, α3 and α2 + α3 = 1, which clearly has only one feasible point: α2 = α3 = 0.5. The corresponding terms are H′1 = h1,2 = Z1X2−X2 and H′2 = h2,3 = −X2 + X2Z3. Both of these terms are stoquastic and they satisfy \(H = \mathop {\sum}\nolimits_a {H_a^\prime }\). Curing using Pauli operators In the next subsection, we show that conjugating a Hamiltonian by a tensor product of Pauli X operators or identity operators only shuffles the off-diagonal elements without changing their values. Recalling that Y = iXZ, we thus conclude that choosing between Pauli operators to cure a Hamiltonian is equivalent to choosing between I and Z operators. Therefore, given local terms of a k-local Hamiltonian {Ha} as input, the goal is to find a string x = (x1, …, xn) such that \(U = \otimes _{i = 1}^nZ^{x_i}\) cures each of the local terms {Ha} separately. Each multi-qubit Pauli operator in Ha can be decomposed into X components and Z components. We group all the terms in each Ha that share the same X component. For example, if Ha includes Y1Y2, 3X1X2, and X1X2Z3, we combine them into one single term X1X2(−Z1Z2 + 3 + Z3). Conjugating this term with U yields \(( - 1)^{x_1 + x_2}X_1X_2( - Z_1Z_2 + 3 + Z_3)\). As terms with different X components do not correspond to overlapping off-diagonal elements in Ha, the combined Z part fixes a constraint on {xi} based on the positivity or negativity of all its elements (if the combined Z part has both positive and negative elements, we conclude that there is no U that can cure the input H). In this example, \(( - 1)^{x_1 + x_2}X_1X_2( - Z_1Z_2 + 3 + Z_3)\) becomes stoquastic iff x1 + x2 ≡ 1 mod 2. We combine all these linear equations in mod 2 that are generated from terms with different X components, and solve for a satisfying x. This can be done efficiently, e.g., using Gaussian elimination. The absence of a consistent solution implies the absence of a curing Pauli group element. As the dimension of each of the local terms {Ha} is independent of the number of qubits n, and there are at most poly(n) of these terms, the entire procedure takes poly(n) time. Conjugation by a product of X operators Here, we show that conjugating a Hamiltonian by a tensor product of X's or identity operators only shuffles the off-diagonal elements without changing their values. Lemma 3 Let \(U_X = X^{a_1} \otimes \ldots \otimes X^{a_n}\), where ai ∈ {0, 1}. The set of off-diagonal elements of a general 2n × 2nmatrix B, OFF(B) = {[B]ij|i, j ∈ {0, 1}n, i ≠ j}, is equal to the set of off-diagonal elements of UXBUXfor all possible {ai}. Proof. Let a = (a1,...,an). Similarly, let i and j represent n-bit strings. The elements of the matrix UXBUX are $$\begin{array}{*{20}{l}} {\left\langle {i|\,U_XBU_X\,|j} \right\rangle } \hfill & = \hfill & {\left\langle {i_1, \ldots,i_n|\,U_XBU_X\,|j_1, \ldots,j_n} \right\rangle } \hfill \\ {} \hfill & = \hfill & {\left\langle {i_1 \odot a_1, \ldots,i_n \odot a_1|\,B\,|j_1 \odot a_1, \ldots,j_n \odot a_1} \right\rangle } \hfill \\ {} \hfill & = \hfill & {\left\langle {i \odot a|\,B\,|j \odot a} \right\rangle } \hfill \end{array}$$ where \(\odot\) denotes the XOR operation. 
Clearly for any fixed a, we have \(i \ne j \Leftrightarrow i \odot a \ne j \odot a\) and therefore $$\begin{array}{*{20}{l}} {{\mathrm{OFF}}(U_XBU_X)} \hfill & = \hfill & {\{ [B]_{i \odot a,j \odot a}|i,j \in \{ 0,1\} ^n,i \ne j\} } \hfill \\ {} \hfill & = \hfill & {\{ [B]_{i \odot a,j \odot a}|i \odot a,j \odot a \in \{ 0,1\} ^n,i \odot a \ne j \odot a\} } \hfill \\ {} \hfill & = \hfill & {\{ [B]_{i{\prime}j{\prime}}|i{\prime},j{\prime} \in \{ 0,1\} ^n,i{\prime} \ne j{\prime}\} \{ [B]_{i{\prime}j{\prime}}|i{\prime},j{\prime} \in \{ 0,1\} ^n,i{\prime} \ne j{\prime}\} } \hfill \\ {} \hfill & = \hfill & {{\mathrm{OFF}}(B)} \hfill \end{array}$$ A similar argument proves that UX shuffles the diagonal elements: DIA(UXBUX) = DIA(B). Therefore, conjugating a Hamiltonian by UX does not change the set of inequalities one needs to solve to make a Hamiltonian stoquastic. As a consequence, whenever we want to pick ui as a solution, we can instead choose Xui. One-local rotations are not enough Consider, e.g., the three Hamiltonians $$H_{ij} = Z_iZ_j + X_iX_j,\quad (i,j) \in \{ (1,2),(2,3),(3,1)\} .$$ The sum of any pair of these Hamiltonians can be cured by single-qubit unitaries (e.g., H2 = H1,2 + H2,3 can be cured by applying U = Z2). In contrast, the frustrated Hamiltonian H = H12 + H23 + H13 cannot be cured using any combination of single-qubit rotations. To see this, we first note that the partial trace of a stoquastic Hamiltonian is necessarily stoquastic. By partial trace over single qubits of H, we conclude that in order for H to be stoquastic, all three Hij's must be stoquastic. To find all the solutions that convert each of these Hamiltonians into a stoquastic Hamiltonian, we expand \(R_i(\theta _i) \otimes R_j(\theta _j)H_{ij}R_i^T(\theta _i) \otimes R_j^T(\theta _j)\) and note that it has ±sin 2(θi − θj) and cos 2(θi − θj) as off-diagonal elements. Demanding that the rotated Hamiltonians are all stoquastic [so that sin 2(θi − θj) = 0] forces cos 2(θi − θj) = −1 ∀(i, j) ∈ {(1, 2), (2, 3), (3, 1)}. But this set of constraints does not have a feasible point. To see this note that $$\begin{array}{*{20}{l}} {\cos 2(\theta _1 - \theta _3)} \hfill & = \hfill & {\cos 2(\theta _1 - \theta _2)\cos 2(\theta _1 - \theta _3)} \hfill \\ {} \hfill & {} \hfill & { - \sin 2(\theta _1 - \theta _2)\sin 2(\theta _1 - \theta _3)} \hfill \\ {} \hfill & {} \hfill & { = - 1 \times -\!1 + 0 \times 0 = + 1.} \hfill \end{array}$$ Therefore, H cannot be made stoquastic using 2 × 2 rotational matrices. Because of the relation between rotation and orthogonal matrices discussed above, we conclude that H cannot be made stoquastic using 2 × 2 orthogonal matrices. Encryption based on secretly stoquastic Hamiltonians By generating 3SAT instances with planted solutions (see, e.g., refs. 17,18) and transforming these to non-stoquastic Hamiltonians via the mapping prescribed by Theorems 1 (or 2), one would be able to generate 3-local (or 6-local) Hamiltonians that are stoquastic, but are computationally hard to transform into a stoquastic form. This construction may have cryptographic implications. For example, imagine planting a secret n-bit message in the (unique by design) ground state of a stoquastic Hamiltonian. Since the solution is planted, Alice automatically knows it. She checks that QMC can find the ground state in a prescribed amount of time \(\tau(n)\), and if this is not the case, she generates a new, random, stoquastic Hamiltonian with the same planted solution and checks again, etc., until this condition is met. 
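The frustrated three-qubit example above can also be checked numerically. The following sketch is illustrative (not the paper's code); the grid resolution is an arbitrary choice, and a finite grid only illustrates, rather than proves, the non-existence argument given in the text. It verifies that H12 + H23 is cured by Z2 and that no product of single-qubit rotations on the grid cures the full triangle.

```python
# Numerical illustration (a sketch, not the paper's code) of the frustrated
# three-qubit example: the triangle of Z_iZ_j + X_iX_j terms is not cured by
# any grid point of single-qubit rotations, while a two-edge chain is cured by Z_2.
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def R(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def is_stoquastic(H, tol=1e-9):
    off = H - np.diag(np.diag(H))
    return np.all(off <= tol)

def H_edge(i, j):
    ops_z = [Z if k in (i, j) else I2 for k in range(3)]
    ops_x = [X if k in (i, j) else I2 for k in range(3)]
    return kron3(*ops_z) + kron3(*ops_x)

H_tri = H_edge(0, 1) + H_edge(1, 2) + H_edge(2, 0)
H_two = H_edge(0, 1) + H_edge(1, 2)

U = kron3(I2, Z, I2)                                   # Z on the middle qubit
print("two-edge chain cured by Z_2:", is_stoquastic(U @ H_two @ U))   # True

grid = np.linspace(-np.pi / 2, np.pi / 2, 25)[1:]      # angles in (-pi/2, pi/2]
found = False
for t in itertools.product(grid, repeat=3):
    Q = kron3(R(t[0]), R(t[1]), R(t[2]))
    if is_stoquastic(Q @ H_tri @ Q.T):
        found = True
        break
print("curing rotation found on the grid for the triangle:", found)   # False
```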
Alice and Bob pre-share the secret key, i.e., the curing transformation, and after they separate, Alice transmits only the O(n2) coefficients of the non-stoquastic Hamiltonians (transformed via the mapping prescribed by Theorem 1) for every new message she wishes to send to Bob. To discover Alice's secret message, Bob runs QMC on the cured Hamiltonian. Since Alice verified that QMC can find the ground state in polynomial time, Bob will also find the ground state in polynomial time. This scheme should be viewed as merely suggestive of a cryptographic protocol, since as it stands it contains several potential loopholes: (i) its security depends on the absence of efficient two-or-more qubit curing transformations, as well as the absence of algorithms other than QMC that can efficiently find the ground state of the non-stoquastic Hamiltonians generated by Alice; (ii) the fact that Alice must start from Hamiltonians for which the ground state can be found in polynomial time may make the curing problem easy as well; (iii) this scheme transmits an n-bit message using rn2 message bits, where r is the number of bits required to specify the n2 coefficients of the transmitted non-stoquastic Hamiltonian, so it is less efficient than a one-time pad. Additional research is needed to improve this into a scheme that overcomes these objections. Data sharing was not applicable to this article, as no datasets were generated or analyzed during the current study. Loh, E. Y. et al. Sign problem in the numerical simulation of many-electron systems. Phys. Rev. B 41, 9301–9307 (1990). Landau, D. & Binder, K. A Guide to Monte Carlo Simulations in Statistical Physics. (Cambridge University Press, New York, NY, USA, 2005). Newman, M.E.J. and Barkema, G.T. Monte Carlo Methods in Statistical Physics. (Clarendon Press, New York, Oxford 1999). Bravyi, S., DiVincenzo, D. P., Oliveira, R. I. & Terhal, B. M. The complexity of stoquastic local hamiltonian problems. Quant. Inf. Comp. 8, 0361 (2008). MathSciNet MATH Google Scholar Bravyi, S. & Terhal, B. Complexity of stoquastic frustration-free hamiltonians. SIAM J. Comput. 39, 1462–1485 (2009). Article MathSciNet Google Scholar Troyer, M. & Wiese, U.-J. Computational complexity and fundamental limitations to fermionic quantum monte carlo simulations. Phys. Rev. Lett. 94, 170201 (2005). Cubitt, T. & Montanaro, A. Complexity classification of local hamiltonian problems. SIAM J. Comput. 45, 268–316 (2016). Barahona, F. On the computational complexity of Ising spin glass models. J. Phys. A: Math. Gen. 15, 3241 (1982). Article ADS MathSciNet Google Scholar Okunishi, K. & Harada, K. Symmetry-protected topological order and negative-sign problem for SO(n) bilinear-biquadratic chains. Phys. Rev. B 89, 134422 (2014). Alet, F., Damle, K. & Pujari, S. Sign-problem-free monte carlo simulation of certain frustrated quantum magnets. Phys. Rev. Lett. 117, 197203 (2016). Honecker, A. et al. Thermodynamic properties of highly frustrated quantum spin ladders: Influence of many-particle bound states. Phys. Rev. B 93, 054408 (2016). Terhal, B. Adiabatic quantum computing conference, http://www.smapip.is.tohoku.ac.jp/aqc2017/program.html (2018). Hastings, M. How quantum are non-negative wavefunctions? J. Math. Phys. 57, 015210 (2016). Nielsen, M. A. & Chuang, I. L. Quantum computation and quantum information. (Cambridge University Press, 2000). Karp, R. Reducibility among combinatorial problems. In Miller, R. E. & Thatcher, J. W. (eds.) 
Complexity of Computer Computations, The IBM Research Symposia Series, 85 (Plenum, New York, 1972). Tovey, C. A. A simplified np-complete satisfiability problem. Discret. Appl. Math. 8, 85–89 (1984). Hen, I. et al. Probing for quantum speedup in spin-glass problems with planted solutions. Phys. Rev. A. 92, 042325 (2015). Hamze, F. et al. From near to eternity: spin-glass planting, tiling puzzles, and constraint-satisfaction problems. Phys. Rev. E 97, 043303 (2018). The research is based upon work (partially) supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the U.S. Army Research Office contract W911NF-17-C-0050. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank Ehsan Emamjomeh-Zadeh, Iman Marvian, Evgeny Mozgunov, Ben Reichardt, and Federico Spedalieri for useful discussions. Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA Milad Marvian Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, 90089, USA Milad Marvian & Daniel A. Lidar Center for Quantum Information Science & Technology, University of Southern California, Los Angeles, CA, 90089, USA Milad Marvian, Daniel A. Lidar & Itay Hen Department of Physics and Astronomy, University of Southern California, Los Angeles, CA, 90089, USA Daniel A. Lidar & Itay Hen Department of Chemistry, University of Southern California, Los Angeles, CA, 90089, USA Daniel A. Lidar Information Sciences Institute, University of Southern California, Marina del Rey, CA, 90292, USA Itay Hen I.H. conceived of the project. M.M. devised most aspects of the technical proofs. I.H., D.A.L. and M.M. contributed equally to discussions and writing the paper. Correspondence to Milad Marvian. Journal peer review information: Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Marvian, M., Lidar, D.A. & Hen, I. On the computational complexity of curing non-stoquastic Hamiltonians. Nat Commun 10, 1571 (2019). https://doi.org/10.1038/s41467-019-09501-6 Prospects for quantum enhancement with diabatic quantum annealing E. J. Crosson D. A. Lidar Nature Reviews Physics (2021) Editors' Highlights Nature Communications (Nat Commun) ISSN 2041-1723 (online)
CommonCrawl
Upper bounds and algorithms for parallel knock-out numbers
Haitze J. Broersma, Matthew Johnson, Daniël Paulusma
Discrete Mathematics and Mathematical Programming
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review
We study parallel knock-out schemes for graphs. These schemes proceed in rounds in each of which each surviving vertex simultaneously eliminates one of its surviving neighbours; a graph is reducible if such a scheme can eliminate every vertex in the graph. We show that, for a reducible graph $G$, the minimum number of required rounds is $O(\sqrt{\alpha})$, where $\alpha$ is the independence number of $G$. This upper bound is tight and the result implies the square-root conjecture which was first posed in MFCS 2004. We also show that for reducible $K_{1,p}$-free graphs at most $p - 1$ rounds are required. It is already known that the problem of whether a given graph is reducible is NP-complete. For claw-free graphs, however, we show that this problem can be solved in polynomial time.
In: G. Prencipe & S. Zaks (Eds.), SIROCCO 2007: 14th International Colloquium on Structural Information and Communication Complexity (Lecture Notes in Computer Science, Vol. 4474, No. 1), pp. 328-340. Berlin: Springer. https://doi.org/10.1007/978-3-540-72951-8_26
Full text: http://eprints.eemcs.utwente.nl/secure2/11586/01/fulltext.pdf
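For small graphs, the parallel knock-out process described in the abstract can be explored directly by brute force. The sketch below is illustrative only and is not the algorithm from the paper (which gives bounds and, for claw-free graphs, a polynomial-time procedure); the function name and graph encoding are choices made here. It enumerates, round by round, every way for the surviving vertices to each eliminate a surviving neighbour, and reports the minimum number of rounds needed to eliminate the whole graph, or None if the graph is not reducible.

```python
# Brute-force sketch (exponential; tiny graphs only) of the parallel knock-out
# process: each round, every surviving vertex shoots one surviving neighbour,
# and all targeted vertices are removed simultaneously.
from functools import lru_cache
from itertools import product

def min_ko_rounds(n, edges):
    """Minimum number of rounds to eliminate all of vertices 0..n-1, or None if not reducible."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    @lru_cache(maxsize=None)
    def best(survivors):
        if not survivors:
            return 0
        alive = set(survivors)
        choices = [sorted(adj[v] & alive) for v in survivors]
        if any(not c for c in choices):      # some survivor has no live neighbour to shoot
            return None
        best_r = None
        for targets in product(*choices):    # one target per surviving vertex
            sub = best(frozenset(alive - set(targets)))
            if sub is not None and (best_r is None or sub + 1 < best_r):
                best_r = sub + 1
        return best_r

    return best(frozenset(range(n)))

print(min_ko_rounds(2, [(0, 1)]))                   # single edge -> 1 round
print(min_ko_rounds(1, []))                         # isolated vertex -> None (not reducible)
print(min_ko_rounds(4, [(0, 1), (1, 2), (2, 3)]))   # path P4 -> 1 (pairs shoot each other)
```

The search is exponential in the number of vertices and is only meant for experimenting with tiny examples.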
CommonCrawl
Biology 2015
Meiosis and Sexual Reproduction
Sylvia S. Mader, Michael Windelspecht
If a parent cell has 16 chromosomes, then each of the daughter cells following meiosis will have a. 48 chromosomes. b. 32 chromosomes. c. 16 chromosomes. d. 8 chromosomes.
A bivalent is a. a homologous chromosome. b. the paired homologous chromosomes. c. a duplicated chromosome composed of sister chromatids. d. the two daughter cells after meiosis I. e. the two centrioles in a centrosome.
The synaptonemal complex a. forms during prophase I of meiosis. b. allows synapsis to occur. c. forms between homologous chromosomes. d. All of these are correct.
Crossing-over occurs between a. sister chromatids of the same chromosome. b. two different kinds of bivalents. c. two different kinds of chromosomes. d. nonsister chromatids of a bivalent. e. two daughter nuclei.
Which of the following occurs at metaphase I of meiosis? a. independent assortment b. crossing-over c. interkinesis d. formation of new alleles
At the metaphase plate during metaphase I of meiosis, there are a. chromosomes consisting of one chromatid. b. unpaired duplicated chromosomes. c. bivalents. d. homologous pairs of chromosomes. e. Both c and d are correct.
At the metaphase plate during metaphase II of meiosis, there are
During which phase of meiosis do homologous chromosomes separate? a. prophase I b. telophase I c. anaphase I d. anaphase II
Mitosis _____ chromosome number, whereas meiosis _____ the chromosome number of the daughter cells. a. maintains; increases b. increases; maintains c. increases; decreases d. maintains; decreases
For questions $10-13$, match the statements that follow to the items in the key. Answers may be used more than once, and more than one answer may be used. a. mitosis b. meiosis c. meiosis II d. Both meiosis I and meiosis II are correct. e. All of these are correct.
Involves pairing of duplicated homologous chromosomes
A parent cell with five duplicated chromosomes will produce daughter cells with five chromosomes consisting of one chromatid each.
Nondisjunction may occur, causing abnormal gametes to form.
Involved in growth and repair of tissues
Polar bodies are formed during the process of a. spermatogenesis. b. gametophyte formation. c. sporophyte formation. d. oogenesis. e. None of these are correct.
In humans, gametogenesis results in the formation of a. diploid egg and sperm cells. b. gametophytes. c. sporophytes. d. haploid egg and sperm cells. e. a zygote.
Nondisjunction during meiosis I of oogenesis will result in eggs that have a. the normal number of chromosomes. b. one too many chromosomes. c. one less than the normal number of chromosomes. d. Both b and c are correct.
In which of the following is genetic material moved between nonhomologous chromosomes? a. insertion b. nondisjunction c. deletion d. translocation e. inversion
Which of the following is not an aneuploid condition? a. Turner syndrome b. Down syndrome c. Alagille syndrome d. Klinefelter syndrome
CommonCrawl
Vol. 30, Issue 5 pp.1269-1588 Vol. 30, Issue 4 pp.959-1268 Vol. 9, Issue 5 pp.1081-1433 Vol. 9, Issue 4 pp.843-1080 Vol. 5, Issue 2-4 pp.195-848 A Second-Order Path-Conservative Method for the Compressible Non-Conservative Two-Phase Flow Yueling Jia, Song Jiang, Baolin Tian & Eleuterio F. Toro 10.4208/cicp.OA-2017-0097 Commun. Comput. Phys., 24 (2018), pp. 309-331. Preview Purchase PDF 31 7068 Abstract A theoretical solution of the Riemann problem to the two-phase flow model in non-conservative form of Saurel and Abgrall is presented under the assumption that all the nonlinear waves are shocks. The solution, called 4-shock Riemann solver, is then utilized to construct a path-conservative scheme for numerical solution of a general initial boundary value problem for the two-phase flow model in the non-conservative form. Moreover, a high-order path-conservative scheme of Godunov type is given via the MUSCL reconstruction and the Runge-Kutta technique first in one dimension, based on the 4-shock Riemann solver, and then extended to the two-dimensional case by dimensional splitting. A number of numerical tests are carried out and numerical results demonstrate the accuracy and robustness of our scheme in the numerical solution of the five-equations model for two-phase flow. Deformation of a Sheared Magnetic Droplet in a Viscous Fluid Wellington C. Jesus, Alexandre M. Roma & Hector D. Ceniceros A fully three-dimensional numerical study of the dynamics and field-induced deformation of a sheared, superparamagnetic ferrofluid droplet immersed in a Newtonian viscous fluid is presented. The system is a three-dimensional, periodic channel with top and bottom walls displaced to produce a constant shear rate and with an external, uniform magnetic field perpendicular to the walls. The model consists of the incompressible Navier-Stokes equations with the extra magnetic stress coupled to the static Maxwell's equations. The coupled system is solved with unprecedented resolution and accuracy using a fully adaptive, Immersed Boundary Method. For small droplet distortions, the numerical results are compared and validated with an asymptotic theory. For moderate and strong applied fields, relative to surface tension, and weak flows a large field-induced droplet deformation is observed. Moreover, it is found that the droplet distortion in the vorticity direction can be of the same order as that occurring in the shear plane. This study highlights the importance of the three-dimensional character of a problem of significant relevance to applications, where a dispersed magnetic phase is employed to control the rheology of the system. High-Order Vertex-Centered U-MUSCL Schemes for Turbulent Flows H. Q. Yang & Robert E. Harris Many production and commercial unstructured CFD codes provide no better than 2nd-order spatial accuracy. Unlike structured grid procedures where there is an implied structured connectivity between neighboring grid points, for unstructured grids, it is more difficult to compute higher derivatives due to a lack of explicit connectivity beyond the first neighboring cells. Our goal is to develop a modular high-order scheme with low dissipation flux difference splitting that can be integrated into existing CFD codes for use in improving the solution accuracy and to enable better prediction of complex physics and noise mechanisms and propagation. In a previous study, a 3rd-order U-MUSCL scheme using a successive differentiation method was derived and implemented in FUN3D. 
Verification studies of the acoustic benchmark problems showed that the new scheme can achieve up to 4th-order accuracy. Application of the high-order scheme to acoustic transport and transition-to-turbulence problems demonstrated that with just 10% overhead, the solution accuracy can be dramatically improved by as much as a factor of eight. This paper examines the accuracy of the high-order scheme for turbulent flow over single and tandem cylinders. Considerably better agreement with experimental data is observed when using the new 3rd-order U-MUSCL scheme. A Performance Comparison of Density-of-States Methods Rene Haber & Karl Heinz Hoffmann Nowadays equilibrium thermodynamic properties of materials can be obtained very efficiently by numerical simulations. If the properties are needed over a range of temperatures it is highly efficient to determine the density of states first. For this purpose histogram- and matrix-based methods have been developed. Here we present a performance comparison of a number of those algorithms. The comparison is based on three different benchmarks, which cover systems with discrete and continuous state spaces. For the benchmarks the exact density of states is known, for one benchmark – the FAB system – the exact infinite temperature transition matrix Q is also known. In particular the Wang-Landau algorithm in its standard and 1/t variant are compared to Q-methods, where estimates of the infinite temperature transition matrix are obtained by random walks with different acceptance criteria. Overall the Q-matrix methods perform better or at least as good as the histogram methods. In addition, different methods to obtain the density of states from the Q-matrix and their efficiencies are presented. Joint Optimization of the Spatial and the Temporal Discretization Scheme for Accurate Computation of Acoustic Problems Jitenjaya Pradhan, Saksham Jindal, Bikash Mahato & Yogesh G. Bhumkar Here, a physical dispersion relation preserving (DRP) scheme has been developed by combined optimization of the spatial and the multi-stage temporal discretization scheme to solve acoustics problems accurately. The coupled compact difference scheme (CCS) has been spectrally optimized (OCCS) for accurate evaluation of the spatial derivative terms. Next, the combination of the OCCS scheme and the five stage Runge-Kutta time integration (ORK5) scheme has been optimized to reduce numerical diffusion and dispersion error significantly. Proposed OCCS−ORK5 scheme provides accurate solutions at considerably higher CFL number. In addition, ORK5 time integration scheme consists of low storage formulation and requires less memory as compared to the traditional Runge-Kutta schemes. Solutions of the model problems involving propagation, reflection and diffraction of acoustic waves have been obtained to demonstrate the accuracy of the developed scheme and its applicability to solve complex problems. Numerical Method of Profile Reconstruction for a Periodic Transmission Problem from Single-Sided Data Mingming Zhang & Junliang Lv We are concerned with the profile reconstruction of a penetrable grating from scattered waves measured above the periodic structure. The inverse problem is reformulated as an optimization problem, which consists of two parts: a linear severely ill-posed problem and a nonlinear well-posed problem. A Tikhonov regularization method and a Landweber iteration strategy are applied to the objective function to deal with the ill-posedness and nonlinearity. 
We propose a self-consistent method to recover a potential function and an approximation of grating function in each iterative step. Some details for numerical implementation are carefully discussed to reduce the computational efforts. Numerical examples for exact and noisy data are included to illustrate the effectiveness and the competitive behavior of the proposed method. Integrated Linear Reconstruction for Finite Volume Scheme on Arbitrary Unstructured Grids Li Chen, Guanghui Hu & Ruo Li In [L. Chen and R. Li, Journal of Scientific Computing, Vol. 68, pp. 1172–1197, (2016)], an integrated linear reconstruction was proposed for finite volume methods on unstructured grids. However, the geometric hypothesis of the mesh to enforce a local maximum principle is too restrictive to be satisfied by, for example, locally refined meshes or distorted meshes generated by arbitrary Lagrangian-Eulerian methods in practical applications. In this paper, we propose an improved integrated linear reconstruction approach to get rid of the geometric hypothesis. The resulting optimization problem is a convex quadratic programming problem, and hence can be solved efficiently by classical active-set methods. The features of the improved integrated linear reconstruction include that i). the local maximum principle is fulfilled on arbitrary unstructured grids, ii). the reconstruction is parameter-free, and iii). the finite volume scheme is positivity-preserving when the reconstruction is generalized to the Euler equations. A variety of numerical experiments are presented to demonstrate the performance of this method. A Reduced Basis Method for the Homogenized Reynolds Equation Applied to Textured Surfaces Michael Rom & Siegfried Müller In fluid film lubrication investigations, the homogenized Reynolds equation is used as an averaging model to deal with microstructures induced by rough or textured surfaces. The objective is a reduction of computation time compared to directly solving the original Reynolds equation which would require very fine computational grids. By solving cell problems on the microscale, homogenized coefficients are computed to set up a homogenized problem on the macroscale. For the latter, the discretization can be chosen much coarser than for the original Reynolds equation. However, the microscale cell problems depend on the macroscale film thickness and thus become parameter-dependent. This requires a large number of cell problems to be solved, contradicting the objective of accelerating simulations. A reduced basis method is proposed which significantly speeds up the solution of the cell problems and the computation of the homogenized coefficients without loss of accuracy. The suitability of both the homogenization technique and the combined homogenization/reduced basis method is documented for the application to textured journal bearings. For this purpose, numerical results are presented where deviations from direct solutions of the original Reynolds equation are investigated and the reduction of computational cost is measured. Optimal Convergence Analysis of a Mixed Finite Element Method for Fourth-Order Elliptic Problems Yue Yan, Weijia Li, Wenbin Chen & Yanqiu Wang A Ciarlet-Raviart type mixed finite element approximation is constructed and analyzed for a class of fourth-order elliptic problems arising from solving various gradient systems. 
Optimal error estimates are obtained, using a super-closeness relation between the finite element solution and the Ritz projection of the PDE solution. Numerical results agree with the theoretical analysis. A Fast Finite Difference Method for Tempered Fractional Diffusion Equations Xu Guo, Yutian Li & Hong Wang Using the idea of weighted and shifted differences, we propose a novel finite difference formula with second-order accuracy for the tempered fractional derivatives. For tempered fractional diffusion equations, the proposed finite difference formula yields an unconditionally stable scheme when an implicit Euler method is used. For the numerical simulation and as an application, we take the CGMYe model as an example. The numerical experiments show that second-order accuracy is achieved for both European and American options. Mathematical Model of Freezing in a Porous Medium at Micro-Scale Alexandr Žák, Michal Beneš, Tissa H. Illangasekare & Andrew C. Trautz We present a micro-scale model describing the dynamics of pore water phase transition and associated mechanical effects within water-saturated soil subjected to freezing conditions. Since mechanical manifestations in areas subjected to either seasonal soil freezing and thawing or climate-change-induced thawing of permanently frozen land may have severe impacts on the infrastructure present, further research on this topic is timely and warranted. For a better understanding of the process of soil freezing and thawing at the field scale, a consequent upscaling may help improve our understanding of the phenomenon at the macro-scale. In an effort to investigate the effect of the pore water density change during the propagation of the phase transition front within cooled soil material, we have designed a 2D continuum micro-scale model which describes the solid phase in terms of a heat and momentum balance and the fluid phase in terms of a modified heat equation that accounts for the phase transition of the pore water and a momentum conservation equation for a Newtonian fluid. This model provides information on the force acting on a single soil grain induced by the gradual phase transition of the surrounding medium within a nontrivial (i.e. curved) pore geometry. Solutions obtained by this model show the expected thermal evolution but indicate a non-trivial structural behavior. An Enhanced Finite Element Method for a Class of Variational Problems Exhibiting the Lavrentiev Gap Phenomenon Xiaobing Feng & Stefan Schnake This paper develops an enhanced finite element method for approximating a class of variational problems which exhibits the Lavrentiev gap phenomenon in the sense that the minimum values of the energy functional have a nontrivial gap when the functional is minimized on the spaces $W^{1,1}$ and $W^{1,\infty}$. To remedy the standard finite element method, which fails to converge for such variational problems, a simple and effective cut-off procedure is utilized to design the (enhanced finite element) discrete energy functional. In essence, the proposed discrete energy functional curbs the gap phenomenon by capping the derivatives of its input on a scale of $\mathcal{O}(h^{-\alpha})$ (where $h$ denotes the mesh size) for some positive constant $\alpha$. A sufficient condition is proposed for determining the problem-dependent parameter $\alpha$. Extensive 1-D and 2-D numerical experiment results are provided to show the convergence behavior and the performance of the proposed enhanced finite element method.
CommonCrawl
Cross Validated Chi-Square Test vs Distribution confusion Disclaimer: I'm new to stats so please bear with me. Everywhere I look, the Chi-Square Distribution is explained with a squared z-score, i.e. $$ ((X-\mu)/\sigma)^2, $$ based on a random variable from a STANDARD NORMAL distribution, in which case it becomes $$X^2.$$ However, the Chi-Square Goodness of Fit Test is explained with the following formula: $$(O_i-E_i)^2/E_i,$$ which can be reinterpreted as $$ (X-\mu)^2/\mu. $$ What I'm trying to understand is how the $\chi^2$ critical value for a given p-value that is derived from $$\sum (O_i-E_i)^2/E_i $$ can be equal to the $\chi^2$ critical value that is derived from a $\chi^2$ distribution based on a random variable that follows a standard normal distribution: $$ \sum ((X-\mu)/\sigma)^2 \Rightarrow \sum ((X-0)/1)^2 \Rightarrow \sum X^2. $$ The reason I'm asking is that I was watching this video https://youtu.be/ZNXso_riZag?t=620 where the presenter just plugs in the p-value and the degrees of freedom without specifying any sort of $\mu$, which leads me to believe the critical value he got was from a $\chi^2$ distribution whose random variable was based on a STANDARD normal distribution, and yet the actual normal distribution of his problem is different from a standard normal distribution. So shouldn't he be getting a different $\chi^2$ critical value? $$\sum ((X-\mu)/\sigma)^2 \neq \sum (O_i-E_i)^2/E_i $$ chi-squared-test chi-squared-distribution critical-value – asked by mandem Answer: It can be shown that $$\chi^2 :=\sum_{j=1}^k\left[\frac{(\text{obs}_j-\text{exp}_j)^2}{\text{exp}_j}\right]\overset{\mathscr L}{\to} \chi^2_{k-1}.\tag 1\label 1$$ That is, the asymptotic distribution of the goodness-of-fit (or more formally Pearson) $\chi^2$ statistic is a chi-square distribution with degrees of freedom equal to the number of cells minus one. What the video showed is the usage of $\eqref 1$ in the calculation. Implicit is that the total frequency must be reasonably large. Comment: What is the symbol on top of the arrow? Also, is this trying to say that it becomes a $\chi^2$ distribution with $k-1$ degrees of freedom whose random variable is based on a STANDARD normal distribution? – mandem Comment: It's a curly "L", $\mathscr{L}$. Presumably (when taken along with the arrow) it's standing for something along the lines of "in the limit as $n\to \infty$, is distributed as". – Glen_b
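To make the connection between the two formulas concrete, here is a small numerical check (not part of the original thread; the counts are invented for illustration). It computes the goodness-of-fit statistic $\sum_i (O_i-E_i)^2/E_i$ directly and compares it against the critical value and p-value of a chi-square distribution with $k-1$ degrees of freedom, which is also what scipy's built-in test returns.

```python
import numpy as np
from scipy import stats

# Observed counts and expected counts under the null hypothesis (made-up example)
observed = np.array([18, 22, 29, 31])
expected = np.array([25, 25, 25, 25])

# Pearson goodness-of-fit statistic: sum of (O_i - E_i)^2 / E_i
chi2_stat = np.sum((observed - expected) ** 2 / expected)

# Compare against the chi-square distribution with k-1 degrees of freedom
df = len(observed) - 1
critical_value = stats.chi2.ppf(0.95, df)   # 95% critical value
p_value = stats.chi2.sf(chi2_stat, df)      # upper-tail probability

print(chi2_stat, critical_value, p_value)

# scipy's built-in goodness-of-fit test uses this statistic and reference distribution
print(stats.chisquare(observed, f_exp=expected))
```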
CommonCrawl
The $\boldsymbol{q}$-deformed Campbell-Baker-Hausdorff-Dynkin theorem Rüdiger Achilles, Andrea Bonfiglioli and Jacob Katriel Dipartimento di Matematica, Università degli Studi di Bologna, Piazza di Porta San Donato 5, 40126 Bologna, Italy; Department of Chemistry, Technion – Israel Institute of Technology, Haifa 32000, Israel Electronic Research Announcements, 2015, 22: 32-45. doi: 10.3934/era.2015.22.32 Received August 2014 Published July 2015 We announce an analogue of the celebrated theorem by Campbell, Baker, Hausdorff, and Dynkin for the $q$-exponential $\exp_q(x)=\sum_{n=0}^{\infty} \frac{x^n}{[n]_q!}$, with the usual notation for $q$-factorials: $[n]_q!:=[n-1]_q!\cdot(q^n-1)/(q-1)$ and $[0]_q!:=1$. Our result states that if $x$ and $y$ are non-commuting indeterminates and $[y,x]_q$ is the $q$-commutator $yx-q\,xy$, then there exist linear combinations $Q_{i,j}(x,y)$ of iterated $q$-commutators with exactly $i$ $x$'s, $j$ $y$'s and $[y,x]_q$ in their central position, such that $\exp_q(x)\exp_q(y)=\exp_q\!\big(x+y+\sum_{i,j\geq 1}Q_{i,j}(x,y)\big)$. Our expansion is consistent with the well-known result by Schützenberger ensuring that one has $\exp_q(x)\exp_q(y)=\exp_q(x+y)$ if and only if $[y,x]_q=0$, and it improves former partial results on $q$-deformed exponentiation. Furthermore, we give an algorithm which produces conjecturally a minimal generating set for the relations between $[y,x]_q$-centered $q$-commutators of any bidegree $(i,j)$, and it allows us to compute all possible $Q_{i,j}$. Keywords: $q$-deformed CBHD series, $q$-commutator identities, $q$-calculus, exponential theorem, Campbell-Baker-Hausdorff-Dynkin (CBHD) series. Mathematics Subject Classification: 05A30, 81R50, 16T20 (Primary); 17B37, 05A15 (Secondary). Citation: Rüdiger Achilles, Andrea Bonfiglioli, Jacob Katriel. The $\boldsymbol{q}$-deformed Campbell-Baker-Hausdorff-Dynkin theorem. Electronic Research Announcements, 2015, 22: 32-45. doi: 10.3934/era.2015.22.32
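As a side note (not from the paper above), the $q$-factorial and $q$-exponential conventions stated in the abstract can be sanity-checked symbolically. The sketch below only treats a commuting scalar argument and verifies the classical limit $q\to 1$; it does not touch the noncommutative expansion that is the paper's actual subject.

```python
import sympy as sp

q, x = sp.symbols('q x')

def q_factorial(n):
    # [n]_q! = [n-1]_q! * (q^n - 1)/(q - 1) with [0]_q! = 1, i.e. a product over m = 1..n
    result = sp.Integer(1)
    for m in range(1, n + 1):
        result *= (q**m - 1) / (q - 1)
    return sp.simplify(result)

def q_exp_truncated(arg, order):
    # Truncated q-exponential: sum_{n=0}^{order} arg^n / [n]_q!
    return sum(arg**n / q_factorial(n) for n in range(order + 1))

# As q -> 1, the q-factorial reduces to the ordinary factorial n!
print([sp.limit(q_factorial(n), q, 1) for n in range(6)])   # [1, 1, 2, 6, 24, 120]

# ... and the truncated q-exponential reduces to the Taylor polynomial of exp(x)
print(sp.expand(sp.limit(q_exp_truncated(x, 4), q, 1)))
```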
CommonCrawl
A robust LU polynomial matrix decomposition for spatial multiplexing Moustapha Mbaye1, Moussa Diallo1 & Mamadou Mboup2 EURASIP Journal on Advances in Signal Processing volume 2020, Article number: 45 (2020) This paper considers time-domain spatial multiplexing in a MIMO wideband system, using an LU-based polynomial matrix decomposition. Because the corresponding pre- and post-filters are not paraunitary, the noise output power is amplified and the performance of the system is degraded, compared to the QR-based spatial multiplexing approach. Degradations are significant when the post-filter polynomial matrix is ill-conditioned. In this paper, we introduce simple transformations on the decomposition that solve the ill-conditioning problem. We show that this results in a MIMO spatial multiplexing scheme that is robust to noise and channel estimation errors. In the latter context, the proposed LU-based beamforming compares favorably to the QR-based counterpart in terms of complexity and bit error rate. The multiple-input multiple-output (MIMO) technique can provide high spatial freedom to increase reliability and throughput. This technique has attracted a lot of attention [1] and has been widely used in various wireless communication standards. One of the key advantages of MIMO spatial multiplexing is the fact that it is able to provide additional data capacity. MIMO spatial multiplexing achieves this by exploiting the multiple paths and effectively using them as additional "channels" to carry data. In wideband systems, due to the delay spread of the different multipath components, the received signal can no longer be characterized by just amplitude and phase random processes [2]. The effect of multipath on wideband signals must therefore take into account the multipath delay spread variations. The wireless channel between a single transmit-receive pair is therefore a finite impulse response (FIR) filter in nature. This is due to the transmitted signal arriving at the receiver over multiple paths and with different time delays [3]. This FIR filter will take the form of a polynomial in the indeterminate variable z^{-1}, which is used to represent a unit delay. In this case, for a wireless system consisting of Nt transmit antennas and Nr receive antennas, the multipath channel transfer function can be represented by an Nr×Nt polynomial matrix, denoted H(z). The received signal on each antenna is a superposition of signals from the different transmit antennas, called co-channel interference (CCI). In order to recover the transmitted data sequence corrupted by channel interference, a conventional method is the spatio-temporal vector coding (STVC) [4]. The STVC structure is suggested as a theoretical means for achieving capacity, and a reduced complexity discrete matrix multitone (DMMT) technique is implemented by the authors to exploit the frequency selective MIMO channel. It is based on discrete multitone, which is a technique that uses the discrete Fourier transform (DFT) to implement frequency-division multiplexing (FDM). DMMT is essentially analogous to the OFDM [5] approach: the wideband problem is reduced to a narrowband form by using a DFT or FFT to split the data into narrower frequency bands and applying an SVD at each frequency to decorrelate the signals [6]. This approach ignores correlations between frequency bands, and the SVD will order the output channels according to power in each individual band, leading to a lack of phase coherence [7].
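To make the frequency-domain alternative just described more concrete, the following numpy sketch (illustrative only, not code from the cited works) performs the DMMT/OFDM-style processing: the channel taps are taken to the frequency domain with an FFT and an SVD is applied independently in each bin, which is exactly where the per-band ordering and the resulting lack of phase coherence come from.

```python
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt, L, Nfft = 3, 3, 4, 64   # 3x3 MIMO channel with 4 taps, 64-point FFT

# FIR channel taps H_k, so H(z) = sum_k H_k z^{-k}
H_taps = rng.standard_normal((L, Nr, Nt)) + 1j * rng.standard_normal((L, Nr, Nt))

# One narrowband channel matrix per frequency bin
H_f = np.fft.fft(H_taps, n=Nfft, axis=0)

# Independent SVD in every bin: singular values are sorted per bin, with no
# coupling between bins, hence the loss of phase coherence noted in the text.
sigmas = np.array([np.linalg.svd(H_f[k], compute_uv=False) for k in range(Nfft)])
print(sigmas.shape)   # (64, 3)
```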
An alternative is to consider a time-domain scheme for which the diagonalization of the temporal MIMO channel can be performed once for the entire system [8]. This design, based on polynomial matrix decomposition, transforms the MIMO channel into a number of independent single-input single-output (SISO) subchannels. This is one of the most efficient techniques; it is done by a factorization of the MIMO channel polynomial matrix as: $$ H(z)= U(z) D(z) V(z), $$ where U(z) and V(z) are square matrices of sizes Nr and Nt, respectively. If the inverses of V(z) and U(z), assuming they are stable and causal, are inserted into the transmission chain respectively as pre- and post-filters, then the original MIMO channel becomes equivalent to D(z). Diagonalization of H(z), viz. the factorization in (1) with D(z) diagonal, therefore reduces the MIMO wideband channel to N= min(Nt,Nr) independent SISO subchannels, thereby canceling the CCI. Such a decomposition is most commonly achieved using the popular polynomial singular value decomposition (PSVD) method, leading to paraunitary factors U(z) and V(z). This paraunitaryness assures that the power distributions of the signal and noise remain unaltered after post-filtering. However, given a polynomial matrix, a PSVD factorization as described above does not exist in general [9]. By contrast, the MIMO spatial multiplexing scheme presented in [10, 11] completely eliminates the CCI. This beamforming method is inspired by a blind equalization method exploiting the Bezout identity [12, 13]. It is based on a combination of the classical Smith canonical form and LU (Gauss elimination). The decomposition method in [11], called LU-PMD (LU-polynomial matrix decomposition), is effective and does not require any iteration: the algorithm ends up after a finite and prescribed number of steps, with a matrix D(z) which is exactly diagonal. Moreover, it was shown in [11] that, except for some improbable original MIMO channels, all but the last of the resulting independent SISO subchannels reduce to simple additive noise channels. Therefore, in addition to completely canceling the CCI, this decomposition also inherently avoids the ISI problem. However, the corresponding factors U(z) and V(z) are unimodular and not paraunitary as in the QR-based methods. The loss of the latter property induces a serious limitation consisting of an output noise enhancement. The role of the post-filter in this performance degradation was clarified: the degradation becomes severe as the norm and the condition number of the post-filter matrix-valued transfer function increase. Improving the post-filter matrix conditioning by a simple row balancing was proposed in [14]. Significant improvement of the performance, in terms of bit error rate, has been observed. In this paper, we revisit the LU-based factorization in [11], in combination with the row balancing trick in [14]. We show that the resulting transformations solve the ill-conditioning problem and lead to a MIMO spatial multiplexing scheme that is robust to noise and channel estimation errors (see also [15] for a combination of spatial beamforming and channel estimation). In the latter context, the proposed LU-based beamforming compares favorably to the QR-based counterpart in terms of both complexity and bit error rate. The structure of this paper is as follows. Section 2 is devoted to the LU-based decomposition method for the MIMO spatial multiplexing scheme. The noise enhancement problem is also explained.
Two solutions, and a combination of both, are presented in Section 3. Simulation results showing that the proposed LU-based decomposition significantly reduces the noise enhancement are given in Section 3.2 with comparison with the QR-based scheme. The robustness of the proposed scheme to channel estimation errors is discussed in Section 4 with comparison with the QR-based scheme. Finally, concluding remarks are given in Section 5. MIMO spatial multiplexing scheme Let us consider a MIMO communication system which has Nt transmitting antennas and Nr receiving antennas through a channel represented by its transfer matrix-valued function \(H(z)\in \mathbb {C}^{N_{r} \times N_{t}}\). Let \(\{x_{i,k}\}_{k\in \mathbb {N}}\) denote the equivalent discrete-time causal signal on the transmit antenna i∈{1,⋯,Nt} and define by: $$ x_{i}(z) = \sum_{k \geqslant 0} x_{i,k} z^{-k} $$ its associated Z-transform. We use the boldface notation x(z) for the column vector of size Nt given by \(\phantom {\dot {i}\!}\boldsymbol {x}(z) = [x_{1}(z) \ \cdots \ x_{N_{t}}(z)]^{T}\), where the superscript T stands for the transpose operator. Likewise, we denote by y(z) the vector collecting the z-transforms of the discrete-time signals recorded on the Nr receiving antennas. Then, the MIMO channel input-output relation reads in the z-transform domain as: $$ \boldsymbol{y}(z)= H(z)\boldsymbol{x}(z)+ \boldsymbol{n}(z), $$ where n(z) stands for the z-transform of a sample realization of the noise corruption \(\boldsymbol {n} \in \mathbb {C}^{N_{r} \times 1}\). Assume that the channel's transfer matrix admits a factorization H(z)=U(z)D(z)V(z) as in (1). Then, using the inverse of U(z) and V(z), noted: $$U_{po}(z) {\buildrel \triangle \over =} U(z)^{-1} \quad \text{and} \quad V_{pr}(z) {\buildrel \triangle \over =} V(z)^{-1}$$ respectively, as post- and pre-filters, allow one to reduce the original MIMO channel into the simpler form D(z). Indeed, if the original signal is pre-filtered before transmission as in \(\widehat {\boldsymbol {x}}(z)= V_{pr}(z)\boldsymbol {x}(z)= \left [\widehat {x}_{1}(z) \ \cdots \ \widehat {x}_{N_{t}}(z)\right ]^{T}\), then the corresponding channel's output becomes \(\widehat {\boldsymbol {y}}(z)= H(z)\widehat {\boldsymbol {x}}(z)+ \boldsymbol {n}(z)\). Thus, the post-filtering step \(\widetilde {\boldsymbol {y}}(z)=U_{po}(z)\widehat {\boldsymbol {y}}(z)\) yields the final equivalent system: $$ \widetilde{\boldsymbol{y}}(z) = D(z)\boldsymbol{x}(z) + \widetilde{\boldsymbol{n}}(z), $$ where we have set \(\widetilde {\boldsymbol {n}}(z) {\buildrel \triangle \over =} U_{po}(z) \boldsymbol {n}(z)\) for the noise after post-filtering. The decomposition in Eq. (1) is mostly performed by polynomial matrix SVD decomposition. The corresponding factors V(z) and U(z) are then expected to be paraunitary, which means that they satisfy: $$U(z)^{*} U(z) = I \text{ and} V(z) {V(z)}^{*} = I, \quad \text{for all} z \in \mathbb{C}, $$ where the notation ∗ stands for the para-Hermitian conjugation, that is, \([\!F(z)]^{*} {\buildrel \triangle \over =} \overline {F(1/\bar {z})}^{T}\), and I is the identity matrix of appropriate size. 
Thereby, the pre- and post-filters Vpr(z)=V(z)∗ and Upo(z)=U(z)∗ are also paraunitary, and setting E(·) for the mathematical expectation, we have: $$\begin{array}{*{20}l} \left\|\widetilde{\boldsymbol{n}}(z)\right\|^{2}_{2} {\buildrel \triangle \over =}& \int_{|z|=1} \textsf{E}\left[\widetilde{\boldsymbol{n}}(z)^{*} \widetilde{\boldsymbol{n}}(z)\right] \frac{dz}{z} = \int_{|z|=1} \textsf{E}[\boldsymbol{n}(z)^{*} U_{po}(z)^{*} U_{po}(z) \boldsymbol{n}(z)] \frac{dz}{z}\\ =& \int_{|z|=1} \textsf{E}\left[\boldsymbol{n}(z)^{*} \boldsymbol{n}(z)\right] \frac{dz}{z} {\buildrel \triangle \over =} \|{\boldsymbol{n}}(z)\|^{2}_{2}. \end{array} $$ Likewise, we obtain \(\left \|\widehat {\boldsymbol {x}}(z)\right \|_{2}^{2} = \|\boldsymbol {x}(z)\|^{2}_{2}\) showing that, in this case, the pre- and post-filtering do not modify the mean power of the original signal and noise stochastic processes. Unfortunately, polynomial matrix SVD does not exist in general. Of course, an SVD decomposition is clearly feasible if one relaxes the constrain of the factors being polynomial. But then, the presence of poles can lead to instability. Instead, a common solution is to consider a Laurent polynomial matrix decomposition. Several iterative algorithms have been proposed to obtain approximate Laurent polynomial matrix SVD [16–19]. These methods can only generate approximately diagonal matrices D(z), leading to inevitable residual CCI. The residual CCI may be drastically reduced by increasing the number of iterations in the algorithms but at the expense of large order of the polynomial D(z), which translates into increased complexity and more intersymbol interference (ISI) on each resulting SISO channel. Polynomial order truncation is introduced to limit the degrees of the polynomials. But, this can affect the paraunitary property of the pre- and post-filters (see also [20, 21] where the order growth problem is mitigated). In this regard, a MIMO beamforming scheme based on a combination of the classical Smith canonical form and LU (Gauss elimination) was presented in [11] as an alternative solution. LU-based polynomial matrix decomposition (LU-PMD) The decomposition algorithm is recalled in Section 3.1.1, with a reformulation in two nested recursions. Let us give an overview, meanwhile. Basically, the approach follows the same steps as the classical LU factorization. However, in each step, a preprocessing by the first step of the decomposition in Smith canonical form is considered. This preprocessing solves a Bezout equation in order to reduce the pivot element to a constant. We first obtain: $$ H(z)= U(z) R(z) $$ where U(z) and R(z) are respectively Nr×Nr-unimodular and Nr×Nt-upper triangular polynomial matrices. Next, the same decomposition is applied to R(z)T to obtain: $$ R(z)^{T} = V(z)^{T} D(z), $$ where V(z) is Nt×Nt-unimodular like U(z) in (5). Then, for \(N_{r} \geqslant N_{t}\), a common setting in MIMO systems, the factorization (1) follows with: $$ D(z) = \left[\begin{array}{c} \widetilde{D}(z)\\ O_{N_{r}-N_{t},N_{t}} \end{array}\right]. $$ where \(\widetilde {D}(z)\) is an Nt×Nt-diagonal matrix and Oi,j is the zero matrix of size i×j. 
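As a rough illustration of how the factorization (1) and the filters built from (5)-(7) reduce the channel, the sketch below replaces polynomial matrices by constant ones and uses an ordinary LU factorization in place of the LU-PMD algorithm; the names are illustrative and the Smith/Bezout preprocessing of the actual method is not modeled.

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(0)
Nt = Nr = 3

# Constant-matrix stand-in for the channel H(z); LU-PMD works on polynomial matrices.
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

# Ordinary LU factorization H = P L R plays the role of H(z) = U(z) R(z) in (5);
# splitting off the diagonal of R mimics the final form H = U D V of (1).
P, L, R = lu(H)
U = P @ L
D = np.diag(np.diag(R))
V = np.linalg.inv(D) @ R          # so that H = U D V

V_pre = np.linalg.inv(V)          # pre-filter  V_pr = V^{-1}
U_post = np.linalg.inv(U)         # post-filter U_po = U^{-1}

x = rng.standard_normal((Nt, 1))
n = 0.01 * rng.standard_normal((Nr, 1))

y = H @ (V_pre @ x) + n           # pre-filtered signal sent through the channel
y_tilde = U_post @ y              # post-filtering at the receiver

# The equivalent channel is the diagonal D, as in (4): y_tilde = D x + U_po n
print(np.allclose(y_tilde - U_post @ n, D @ x))
```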
Noise amplification problem First, observe as in [14] that if the channel's output noise n is spatially and temporally white, i.e., with power spectral density matrix \(\textsf {E}\left [\boldsymbol {n}(z) \boldsymbol {n}(z)^{*}\right ] = \sigma ^{2} I_{N_{r}}\), then the post-filtered noise power reads as: $$ \|\widetilde{\boldsymbol{n}}(z)\|_{2}^{2} = \sigma^{2} \|U_{po}(z)\|^{2}. $$ The noise component in the equivalent reduced system (i.e., after pre- and post-filtering) is thus amplified with respect to the original system whenever the norm of the post-filter is high. This is illustrated in Fig. 1. Effect of the post-filter norm on the system's performance: BER vs SNR In this experiment, a complete OFDM communication system is simulated with a 4-QAM modulation. The sequence \(\{x_{i,k}\}_{k \geqslant 0}\) in (2) then represents the ith OFDM signal, including a cyclic prefix. The performance of the LU-based spatial multiplexing is measured by the corresponding bit error rate vs the SNR. Four different 3×3 MIMO channels, each corrupted by a unit-variance spatial-temporal white noise, are considered. The performance significantly degrades as the norm of the post-filter increases. Clearly, this performance loss cannot be explained only by the noise power enhancement since the output signal \(\widetilde {\boldsymbol {y}}\) also undergoes the same post-filtering. Therefore, an analysis based on signal-to-noise ratios is more relevant. To proceed, note that the post-filtering operation amounts to the resolution of the linear perturbed system \(U(z) \widetilde {\boldsymbol {y}}(z) = H(z) \widetilde {\boldsymbol {x}}(z) + \boldsymbol {n}(z)\), with the error term \(\widetilde {\boldsymbol {n}}(z)\). Let us denote by \(\kappa (U) {\buildrel \triangle \over =} \left \|U(z)\right \|\left \|U(z)^{-1}\right \|\) the condition number of the matrix U(z) with respect to the L2 matrix norm: $$\|U(\cdot)\|^{2} {\buildrel \triangle \over =} \frac{1}{2\pi}\int_{0}^{2\pi} \textsf{Tr}\left[\overline{U(e^{i \omega})}^{T} U(e^{i \omega})\right] d\omega = \int_{|z|=1} \textsf{Tr}\left[U(z)^{*} U(z)\right]\frac{dz}{z} $$ where Tr(·) is the trace operator. Now, compare the communication systems in (3) and (4) in the light of classical perturbation analysis [22]. The corresponding noise-to-signal ratios are then related by (see [14]): $$ \frac{\|\widetilde{\boldsymbol{n}}(z)\|_{2}}{\|D(z){\boldsymbol{x}}(z)\|_{2}} \leqslant \kappa(U) \frac{\|\boldsymbol{n}(z)\|_{2}}{\|H(z)\widetilde{\boldsymbol{x}}(z)\|_{2}}. $$ When the post-filter U(z) is ill-conditioned, i.e., κ(U)≫1, the noise-to-signal ratio can be significantly higher for the reduced system than for the original one. This explains the performance drop observed in the experiment reported in Fig. 2. Effect of the postcoder's condition number on the BER vs SNR— 3×3 MIMO channel In this experiment, we have considered again the previous setting, with 3×3 randomly selected MIMO channels with Rayleigh distribution. The system's performance is measured by the corresponding bit error rate vs the SNR. The same experiment is then repeated with 4×4 and 5×5 MIMO channels in Figs. 3 and 4, respectively. All these experiments confirm a drop in performance as the condition number of the post-filter increases. A row balancing of the post-filter was proposed in [14] as a solution to keep both the norm and the condition number of the post-filter low (see [23]). 
This method consists in replacing the preceding post-filter Upo(z) by S(z) of the form: $$ S(z) = W U_{po}(z), $$ where W is a diagonal constant matrix selected such that each row of S(z) has unit norm. The diagonal elements Wi,i of W then read as: $$ W_{i,i} = \frac{1}{\left\|\left[U_{po}(z)\right]_{i}\right\|_{2}}, \quad i= 1, \cdots, N_{r} $$ where [A]i denotes the ith row of matrix A. Accordingly, the channel's output signal after this modified post-filtering would read as: $$ W\widetilde{\boldsymbol{y}}(z) = W D(z) \boldsymbol{x}(z) + S(z) \boldsymbol{n}(z). $$ Good performance in terms of bit error rate was observed. Despite this improvement, the LU-based polynomial matrix decomposition for MIMO beamforming remains less competitive than the state-of-the-art methods because of the post-filter noise amplification. A robust decomposition Source of ill-conditioning To identify where the abovementioned ill-conditioning stems from, let us recall one iteration of the LU-based factorization (see [11]). Indeed, the decomposition of H(z) in (5) can be rephrased by a recursion of the form: $$\begin{array}{*{20}l} H_{k}(z) &= \Phi_{k-1}(z) H_{k-1}(z) = \Phi_{k-1}(z)\Phi_{k-2}(z)\cdots \Phi_{k-m}(z) H_{k-m}(z) \end{array} $$ $$\begin{array}{*{20}l} &=\Phi^{(k-1)}(z) H_{0}(z) \end{array} $$ initialized to H0(z)=H(z) and ending at HN(z)=R(z), with N= min(Nr−1,Nt). The form of the polynomial transition matrix will be given later. Given the (k−1)th iterate with: $$ H_{k-1}(z) = \left[\begin{array}{ccccccc} d_{1}(z) &{h^{(k)}}_{1,2}(z)&\cdots&{h^{(k)}}_{1,k-1}(z)&{h^{(k)}}_{1,k}(z)&\cdots& {h^{(k)}}_{1,N_{t}}(z) \\ 0 &d_{2}(z)&\ddots&\vdots&\vdots&\cdots&\vdots \\ \vdots &&\ddots&{h^{(k)}}_{k-2,k-1}(z)&\vdots&\cdots&\vdots \\ 0 &&\ddots&d_{k-1}(z)&{h^{(k)}}_{k-1,k}(z)&\cdots&{h^{(k)}}_{k-1,N_{t}}(z) \\ 0 &&\ddots&0&{h^{(k)}}_{k,k}(z)&\cdots& {h^{(k)}}_{k,N_{t}}(z) \\ \vdots &&&\vdots&\vdots&&\vdots \\ 0&&\cdots&0&{h^{(k)}}_{N_{r},k}(z)&\cdots& {h^{(k)}}_{N_{r},N_{t}}(z) \end{array}\right], $$ we describe how to get to the next step. First, the kth diagonal entry h(k)k,k(z) is reduced to the greatest common divisor (gcd) of the polynomials h(k)k+ℓ,k(z), ℓ=0,…,Nr−k, through the recursion: $$ \left\{\begin{array}{l} d_{k,0}(z) = {h^{(k)}}_{k,k}(z)\\ d_{k,\ell}(z) = \textsf{gcd}(d_{k,\ell-1}(z), {h^{(k)}}_{k+\ell,k}(z)) \end{array}\right. \quad \ell = 1, \ldots, \ell_{k} $$ which runs until ℓ=ℓk such that either 1≤ℓk<Nr−k and \(d_{k,\ell_{k}}(z) = 1\) or ℓk=Nr−k. Each iteration ℓ of this recursion is implemented in matrix form by a left multiplication by: $$ \overline{A}_{k,\ell}(z) = \left[\begin{array}{cccccc} I_{k-1}&&&&\\ &{h}^{\sharp}_{k,k}(z)&&{h}^{\sharp}_{k+\ell,k}(z)&\\ &&I_{\ell-1}&&\\ &-\widehat{d}_{k,\ell-1}(z)&&{\widehat{h}^{(k)}}_{k+\ell,k}(z)&\\ &&&&I_{N_{r}-k-\ell} \end{array}\right], $$ where \(\widehat{d}_{k,\ell-1}(z)\) and \({\widehat{h}^{(k)}}_{k+\ell,k}(z)\) are respectively the quotients of dk,ℓ−1(z) and h(k)k+ℓ,k(z) by their gcd dk,ℓ(z), and where \({h}^{\sharp}_{k,k}(z)\) and \({h}^{\sharp}_{k+\ell,k}(z)\) are obtained from the Bezout equation: $${h}^{\sharp}_{k,k}(z) d_{k,\ell-1}(z) + {h}^{\sharp}_{k+\ell,k}(z) {h^{(k)}}_{k+\ell,k}(z) = d_{k,\ell}(z).$$ Next, the kth iteration of the recursion (13) is completed by a Gaussian elimination step.
This is achieved by left multiplying \(\overline{A}_{k}(z) = \overline{A}_{k,\ell_{k}}(z) \overline{A}_{k,\ell_{k}-1}(z) \cdots \overline{A}_{k,1}(z)\), obtained at the end of the recursion (16), by the polynomial matrix: $$ \overline{L}_{k}(z) = \left[\begin{array}{ccc} I_{k-1}& & \\ &1 & \\ &\boldsymbol{f}_{k}(z) &I_{N_{r}-k} \end{array}\right] $$ where \(\boldsymbol{f}_{k}(z) = [ \underbrace{0\ \quad \cdots \quad 0}_{\ell_{k}-1 \text{ zeros}}\ {h^{(k)}}_{k+\ell_{k}+1,k}(z) \cdots {h^{(k)}}_{N_{r},k}(z)]^{t} \). The polynomial transition matrix in the recursion (13) then readily reads as \(\Phi_{k}(z) = \overline{L}_{k}(z) \overline{A}_{k}(z)\) and has the block diagonal form: $$ \Phi_{k}(z) = \left[\begin{array}{cc} I_{k-1} & \\ & \Psi_{k}(z) \end{array}\right] $$ for some polynomial matrix Ψk(z). The first k−1 rows of Φ(k)(z)=Φk(z)Φ(k−1)(z) are therefore identical to those of Φ(k−1)(z). Meanwhile, the degrees of the remaining rows are increased, compared to Φ(k−1)(z), because of the left multiplication by the polynomial matrix Ψk(z). As a consequence, the final matrix: $$\Phi^{(N-1)}(z) = \Phi_{N-1}(z) \Phi_{N-2}(z) \cdots \Phi_{0}(z),$$ which coincides with Upo(z)=U(z)−1=Φ(N−1)(z), is badly scaled. This explains why the post-filter is ill-conditioned [23]. A robust post-filter As explained above, the row imbalance induced by the iterations of the decomposition leads to an ill-conditioned post-filter. Observe that the reduction steps of the decomposition, implemented by the multiplications by the polynomial matrices \(\overline{A}_{k}(z)\), are one of the main sources of the row imbalance. Recall that these steps are applied at each iteration k, to reduce the pivot (diagonal element of column k) to the greatest common divisor of the pivot and the polynomials in column k beneath the diagonal. As already mentioned, the iterations described in the preceding subsection are applied to R(z)T to complete the decomposition (1). Consider the iteration k in this context and call d(z) the gcd of the pivot and the polynomials in column k of R(z)T, beneath the diagonal. As a result of the factorization (5) described above, the corresponding pivot is already the greatest common divisor of all the subchannels from the original kth transmit antenna to the receive antennas k,…,Nr. Now, the reduction step for this iteration seeks d(z) as the gcd of (1) all subchannels from the original kth transmit antenna to the receive antennas k,…,Nr and (2) all the subchannels linking the transmit antennas k,…,Nt with the kth receive antenna. Most likely, d(z) will be equal to 1, leading to \(\overline{A}_{k}(z) \equiv I_{N}\). A direct consequence is that the pre-filter Vpr(z) is better conditioned than the post-filter Upo(z). We thus come to the conclusion that the noise amplification can be avoided by a simple modification in the decomposition: swapping the order in which the pre- and post-filters are computed. To see this, let us consider the decomposition in (1) applied to G(z)=H(z)T instead of H(z), i.e., $$ G(z) = H(z)^{T} = U(z) D(z) V(z). $$ Then transposing back again, we obtain: $$ H(z) = V(z)^{T} D(z) U(z)^{T}. $$ The post-filter becomes V(z)T. Now since the design of V(z) is most likely free from the reduction step, the output noise enhancement is avoided. This allows the post-filter to have improved properties. Since the pre-filter has no effect on the noise component, its conditioning properties will not affect the system's performance.
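The algebra behind (20)-(21) can be checked on a constant-matrix toy example: factorize the transposed channel and transpose back, and the roles of the two factors as pre- and post-filters are exchanged. The sketch below is illustrative only; it uses an ordinary LU factorization, so it does not reproduce the gcd/Bezout reduction steps whose near-absence on the V(z) side is the actual reason the swapped post-filter is better conditioned.

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(2)
H = rng.standard_normal((3, 3))

def lu_udv(A):
    # Constant-matrix stand-in for a factorization A = U D V with D diagonal.
    P, L, R = lu(A)
    U = P @ L
    D = np.diag(np.diag(R))
    V = np.linalg.inv(D) @ R
    return U, D, V

# Decompose the transposed channel, as in (20): G = H^T = U D V,
# then transpose back, as in (21): H = V^T D U^T.
U, D, V = lu_udv(H.T)
print(np.allclose(H, V.T @ D @ U.T))          # True

post_filter = np.linalg.inv(V.T)              # post-filter now built from the V factor
pre_filter = np.linalg.inv(U.T)               # pre-filter built from the U factor

# The equivalent channel after pre-/post-filtering is again the diagonal D.
print(np.allclose(post_filter @ H @ pre_filter, D))   # True
```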
The proposed "left-right swapping" scheme is compared to the "row balancing" solution described in [14]. The condition number of this new post-filter V(z)−1 and that of the post-filter S(z) in (10) obtained with the "row balancing" technique are computed. Also, the output noise power after post-filtering is computed via (8) with σ2=1. Thereby, we consider several (p×p)-MIMO systems, for p=3,4,⋯,15. For each system, we thus calculate the average power and the average condition number (Footnote 1) on 100 randomly simulated Rayleigh fading channels H(z). With the row balancing, the output noise power is readily given by \(\sqrt {p}\). Table 1 displays the obtained results. Table 1 Left-right swapping vs row balancing: comparison of powers and condition numbers The results show that the proposed "left-right swapping" scheme provides a better conditioned post-filter matrix, with a reasonable norm (output noise power). It is therefore expected that this translates into enhanced MIMO-OFDM performance. The effect in terms of bit error rate is now studied in a MIMO-OFDM system. For the simulation, we consider a spatial multiplexing scheme using the V-BLAST algorithm, with the ITU Pedestrian-A channel model with the following parameters: 20 MHz of bandwidth, Ns=512 subcarriers, CP=Ns/8=64 for the cyclic prefix length, and 4-QAM modulation. Figures 5 and 6 show the BER comparison in MIMO-OFDM time-domain spatial multiplexing, between the classical LU-PMD post-filtering, the modified post-filter based on "row balancing," and this "left-right swapping" scheme. BER comparison of the two beamformers: indoor ITU channel model BER comparison of the two beamformers: outdoor ITU channel model Significant improvement is obtained with the proposed method in both MIMO 3×3 contexts: indoor (5) and outdoor (6). Observe how the performance gain is very large in the more severe outdoor context. For example, the same BER level of 10^{-3} is reached with the proposed solution with about a 5 dB drop in SNR compared to the "row balancing" trick. This is due to the fact that the post-filter matrix is better conditioned now, while the output filtering power remains reasonably high. Comparison with QR-based spatial multiplexing In this subsection, we compare the performance of the improved scheme in a MIMO-OFDM system with that of the QR-based spatial multiplexing [19]. For the QR decomposition, we have set the tolerance parameter ε=10^{-3} for the off-diagonal elements. With this value, the residual CCI is insignificant. The truncation parameter is selected as μ=10^{-3} to limit the growth of the degrees of the Laurent polynomials in the final reduced equivalent channel D(z). We refer to [18] for more details on the meaning and roles of these parameters. For the purpose of the comparison, we have simulated a complete transmission chain from the encoding/interleaving block of the original binary source to the final demodulation block, through an outdoor pedestrian ITU MIMO 3×3 channel. The different BERs are displayed in Fig. 7. BER comparison of LU-based scheme and QR-based scheme Figure 7 shows that, in terms of BER in MIMO wideband spatial multiplexing, the LU-PMD using "left-right swapping" compares favorably to the QR approach, even for weak SNR. The interesting properties of the LU-PMD decomposition (low complexity, CCI cancelation, and ISI mitigation) are now becoming apparent.
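The powers and condition numbers reported above can be estimated numerically by sampling the unit circle, following the definition of the L2 matrix norm and of κ(U) given earlier; the sketch below is a generic illustration of that computation (not the authors' Matlab/Scilab code), evaluating the inverse frequency by frequency.

```python
import numpy as np

def poly_matrix_eval(taps, z):
    """Evaluate U(z) = sum_k taps[k] * z**(-k) for a list of constant matrix taps."""
    return sum(C * z ** (-k) for k, C in enumerate(taps))

def l2_norm(taps, n_freq=512):
    """L2 norm ||U||: average of Tr[U(e^{jw})^H U(e^{jw})] over the unit circle."""
    acc = 0.0
    for w in 2 * np.pi * np.arange(n_freq) / n_freq:
        Uw = poly_matrix_eval(taps, np.exp(1j * w))
        acc += np.trace(Uw.conj().T @ Uw).real
    return np.sqrt(acc / n_freq)

def condition_number(taps, n_freq=512):
    """kappa(U) = ||U|| * ||U^{-1}||, with U^{-1} evaluated frequency by frequency."""
    acc = 0.0
    for w in 2 * np.pi * np.arange(n_freq) / n_freq:
        Uinv = np.linalg.inv(poly_matrix_eval(taps, np.exp(1j * w)))
        acc += np.trace(Uinv.conj().T @ Uinv).real
    return l2_norm(taps, n_freq) * np.sqrt(acc / n_freq)

# Example: a 2x2 FIR polynomial matrix with two taps, U(z) = C0 + C1 z^{-1}
rng = np.random.default_rng(1)
taps = [np.eye(2) + 0.1 * rng.standard_normal((2, 2)),
        0.05 * rng.standard_normal((2, 2))]
print(l2_norm(taps), condition_number(taps))
```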
A robust and unitary post-filter As already mentioned before and observed in [14], the "row-balancing" trick improves the conditioning of the post-filter matrix. Swapping the pre- and post-filter matrices also results in an improved beamforming system, as argued above. We therefore propose in this section a combination of both improvements, that is, (1) to swap the left and right factors of the decomposition to obtain a better conditioned post-filter at the reception and (2) to apply a row balancing to further improve its conditioning. The final resulting post-filter matrix is subsequently denoted by Q(z). Table 2 shows how this combination further enhances the conditioning of the post-filter matrix. These results are obtained with the same simulation setting as in Table 1. Table 2 Comparison of condition numbers: row balancing schemes, left-right swapping, and combination of both The performance in terms of BER in 3×3, 4×4, and 5×5 MIMO systems is studied with an ITU Pedestrian-A channel model. The results presented respectively in Figs. 8, 9, and 10 confirm the expectation that the combination of the two methods improves the performance compared to each one taken separately. BER performance in MIMO 3×3 by combining "row balancing" and "left-right swapping" methods In a spatial multiplexing problem, a common and underlying assumption is that the coefficients of the polynomial matrix representing the MIMO channel are available. Accordingly, in all the preceding experiments, the pre- and post-filters correspond exactly to the right and left factors of the decomposition of the channel that is actually used to simulate the transmission system. However, the channel coefficients result from an estimation procedure. The pre- and post-filters therefore do not stem from the decomposition of the exact transmission MIMO channel. In this section, we thus study the impact of channel estimation errors in a MIMO wideband system using LU-based spatial multiplexing. The exact channel polynomial matrix is still denoted by H(z) and its estimation will be denoted by \(\widehat {H}(z) = H(z) + \Delta H(z)\). The power of the estimation error ΔH(z) is given by the square of the L2 matrix norm: $$ E = \|\Delta H(z)\|_{2} = \|H(z)-\widehat{H}(z)\|_{2}. $$ In the sequel, we compute the pre- and post-filters from the decomposition of \(\widehat {H}(z)\), but the MIMO transmission system is still simulated using the exact channel matrix H(z). The QR-based decomposition is also implemented in this channel-pre/post-filter mismatch setting for comparison. We evaluate the BER performance for different values of the relative error Er=E/∥H(z)∥2. The results are presented in Figs. 11 and 12. LU-PMD: BER performance with imperfect channel estimation QR: BER performance with imperfect channel estimation For Er=0.01, we observe that for both the LU-based and QR-based methods, the BER curves obtained with the exact channel coincide with those corresponding to the estimated channel. Very small channel estimation errors do not affect the BER for either method. However, the BER performance drops significantly as the estimation errors increase, and this is particularly visible at high SNR, when the noise effect is no longer dominant. The proposed LU spatial multiplexing scheme appears to be more robust to channel estimation imperfections than the QR-based method.
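To reproduce this kind of robustness experiment, one needs channel estimates with a prescribed relative error Er in the sense of (22). The sketch below (not the authors' code) generates such a perturbation for an FIR channel, using the fact that the L2 norm of an FIR polynomial matrix equals the Frobenius norm of its stacked coefficient matrices.

```python
import numpy as np

rng = np.random.default_rng(3)

# FIR channel taps H_k, so H(z) = sum_k H_k z^{-k}; by Parseval, the L2 norm in (22)
# is the Frobenius norm of the stacked taps.
H_taps = rng.standard_normal((4, 3, 3))      # 4 taps of a 3x3 MIMO channel

def l2_norm(taps):
    return np.sqrt(np.sum(np.abs(taps) ** 2))

def perturb(taps, relative_error, rng):
    """Return an estimate H_hat = H + Delta_H with ||Delta_H||_2 = Er * ||H||_2."""
    delta = rng.standard_normal(taps.shape)
    delta *= relative_error * l2_norm(taps) / l2_norm(delta)
    return taps + delta

for Er in (0.01, 0.05, 0.1):
    H_hat = perturb(H_taps, Er, rng)
    E = l2_norm(H_hat - H_taps)
    print(Er, E / l2_norm(H_taps))   # recovers the prescribed relative error
```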
Therefore, the proposed LU-PMD with "left-right swapping" scheme is more realistic than the QR-based approach because it provides better BER performance in the presence of channel estimation errors. Unlike the QR-based decompositions of polynomial matrices, the LU-based decomposition is simple and exact. Nonetheless, this approach was hitherto discarded in MIMO wideband spatial multiplexing applications, due to an amplification of the output noise. We have presented in this paper a simple but effective solution to this problem of output noise enhancement. We have clearly established in previous studies that the performance limitation of the LU-based spatial multiplexing was essentially due to an ill-conditioning of the corresponding post-filter polynomial matrix. A matrix row balancing was then proposed, and a significant reduction of the noise amplification was observed. Here, we have shown that the ill-conditioning of the post-filter matrix is caused by the pivot reduction step during the polynomial matrix factorization. A simple permutation of the left and right factors of the decomposition was sufficient to significantly improve the BER performance compared to the previous row balancing solution. Then, a combination of both solutions results in an LU-based polynomial matrix decomposition approach for MIMO spatial multiplexing in which the noise amplification is now avoided. Finally, we have shown that this proposed LU-based multiplexing scheme compares favorably to the state-of-the-art QR-based methods, in the realistic setting where knowledge of the channel's coefficient matrices is corrupted by estimation errors. The Matlab and Scilab codes used to generate the simulation results are available to reviewers upon request. Footnote 1: Actually, the table displays the median values instead of the mean values because the computed condition numbers exhibit very high variances. MIMO: Multiple-input multiple-output SISO: Single-input single-output LU: "Lower Upper" matrix factorization LU-PMD: LU polynomial matrix decomposition QR: QR matrix factorization FIR: Finite impulse response CCI: Co-channel interference ISI: Intersymbol interference STVC: Spatio-temporal vector coding DMMT: Discrete matrix multitone (D/F)FT: (Discrete/fast) Fourier transform (O)FDM: (Orthogonal) Frequency-division multiplexing CP: Cyclic prefix (P)SVD: (Polynomial) Singular value decomposition QAM: Quadrature amplitude modulation SNR: Signal-to-noise ratio BER: Bit error rate A. J. Paulraj, D. A. Gore, R. U. Nabar, H. Bolcskei, An overview of MIMO communications - a key to gigabit wireless. Proc. IEEE 92(2), 198–218 (2004). G. J. Foschini, M. J. Gans, On limits of wireless communications in a fading environment when using multiple antennas. Wirel. Pers. Commun. 6(3), 311–335 (1998). S. Redif, S. Weiss, J. McWhirter, Relevance of polynomial matrix decompositions to broadband blind signal separation. Signal Proc. 134, 76–86 (2017). G. G. Raleigh, J. M. Cioffi, Spatio-temporal coding for wireless communication. IEEE Trans. Commun. 46(3), 357–366 (1998). H. Bolcskei, MIMO-OFDM wireless systems: basics, perspectives, and challenges. IEEE Wirel. Commun. 13(4), 31–37 (2006). Y. Liang, R. Schober, W. Gerstacker, Time-domain transmit beamforming for MIMO-OFDM systems with finite rate feedback. IEEE Trans. Commun. 57(9), 2828–2838 (2009). P. D. Baxter, J. G. McWhirter, in Proc. 37th Asilomar Conference on Circuits Systems and Computers. Blind signal separation of convolutive mixtures, vol. 1, (2004), pp. 124–128. R. Brandt, M. Bengtsson, in Proc. Int. Symp. Pers.
Indoor Mobile Radio Commun. Wideband MIMO channel diagonalization in the time domain, (2011), pp. 1914–1918. S. Icart, P. Comon, in 9th IMA Intern. Conf. on Math. in Sig. Proc. Some properties of laurent polynomial matrices, (2012), pp. 1–5. M. Mbaye, M. Diallo, M. Mboup, in IEEE SPAWC. Unimodular-upper polynomial matrix decomposition for MIMO spatial multiplexing (Toronto, 2014), pp. 26–29. M. Mbaye, M. Diallo, M. Mboup, LU based beamforming schemes for MIMO system. IEEE Trans. Veh. Technol.66(3), 2214–2222 (2017). M. Mboup, M. Miranda, in International Telecomm. Symp. (ITS2002. A polynomial approach to the blind multichannel deconvolution problem (Natal, Brazil, 2002), pp. 1–6. M. Mboup, in Colloquium GRETSI, 4. Sur la résolution de l'identité de Bezout pour l'égalisation autodidacte de systèmes mono-entrée–multi-sorties, (1999), pp. 1113–1116. M. Mboup, M. Diallo, M. Mbaye, in ICASSP'17. Efficient postcoding filter in LU-based beamforming scheme (New Orleans, USA, 2017). D. Hassan, S. Redif, S. Lambotharan, Polynomial matrix decompositions and semi-blind channel estimation for MIMO frequency-selective channels. IET Signal Proc.13(3), 356–366 (2019). D. Cescato, H. Bolcskei, QR decomposition of Laurent polynomial matrices sampled on the unit circle. IEEE Trans. Inf. Theory. 56:, 4754–4761 (2010). J. G. McWhirter, P. D. Baxter, in Proc. 12th Annual Workshop of Adaptive Sensor Array Signal Processing. A novel technique for broadband SVD (Lexington, USA, 2004). J. A. Foster, J. G. McWhirter, S. Lambotharan, I. K. Proudler, M. Davies, J. Chambers, Polynomial matrix QR decomposition for the decoding of frequency selective multiple-input multiple-output communication channels. IET Signal Proc.6(7), 704–712 (2012). J. A. Foster, J. G. McWhirter, M. R. Davies, J. A. Chambers, An algorithm for calculating the QR and singular value decompositions of polynomial matrices. IEEE Trans. Signal Proc.58(3), 1263–1274 (2010). Z. Wang, J. G. McWhirter, J. Corr, S. Weiss, in 9th IEEE Sensor Array and Multichannel Signal Processing Workshop. Order-controlled multiple shift SBR2 algorithm for para-Hermitian polynomial matrices (Rio de Janeiro, Brazil, 2016), pp. 1–5. F. Coutts, J. Corr, K. Thompson, I. Proudler, S. Weiss, in Sensor Signal Processing for Defence Conference. Divide-and-conquer sequential matrix diagonalisation for parahermitian matrices (London, UK, 2017), pp. 1–5. R. A. Horn, C. R. Johnson, Matrix Analysis, 2nd edn. (Cambridge University Press, USA, 2012). E. E. Osborne, On pre-conditioning of matrices. J. Assoc. Comput. Mach.7:, 338–345 (1960). Université Cheikh Anta Diop, BP 5005, Dakar Fann, Sénégal Moustapha Mbaye & Moussa Diallo Université de Reims Champagne Ardenne, CReSTIC EA 3804, Reims, 51687, France Mamadou Mboup Moustapha Mbaye Moussa Diallo M. Mbaye and M. Mboup wrote the paper. M. Diallo contributed by providing the choice and motivation of the simulation contexts and settings; he also validated the BER vs SNR simulation results. The authors read and approved the final manuscript. Correspondence to Mamadou Mboup. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Mbaye, M., Diallo, M. & Mboup, M. A robust LU polynomial matrix decomposition for spatial multiplexing. EURASIP J. Adv. Signal Process. 2020, 45 (2020). https://doi.org/10.1186/s13634-020-00705-3 Polynomial matrix decomposition LU decomposition Beamforming MIMO wideband system Time-domain spatial multiplexing
CommonCrawl
Performance enhancement of overlapping BSSs via dynamic transmit power control Xiaoying Lei1 & Seung Hyong Rhee1 EURASIP Journal on Wireless Communications and Networking volume 2015, Article number: 8 (2015) In densely deployed wireless local area networks (WLANs), overlapping basic service sets (BSSs) may suffer from severe performance degradations. Mobile stations in a BSS may compete for channel access with stations that belong to another BSS in such an environment, which reduces the overall throughput due to the increased collision probability. In this paper, we propose a new scheme for transmit power control, which enables mobile stations to dynamically adjust their transmit powers. Using our mechanism, those stations in different BSSs will have more chances of simultaneous transmissions and thus improve their performance by enhancing spatial reuse. We develop a Markov chain model to analyze the performance of the proposed scheme and also perform extensive simulations. Both the analytical and simulation results show that our mechanism effectively improves the network performance of WLANs. As IEEE 802.11 wireless local area networks (WLANs) have been widely deployed in homes, offices, and public places [1], the high density of WLANs has raised great concern about the problem of co-channel interference. Thus, the overall network performance of WLANs may be severely degraded unless an efficient scheme is provided to reduce the interference. A WLAN basic service set (BSS) is typically formed by an access point (AP) and a number of stations associated with the AP [2], and in that case, data transmissions are allowed only between the stations and the AP. When the coverage of nearby co-channel BSSs overlaps with each other, they are called overlapping BSSs (OBSSs) [3]. In case a station located in the overlapping area transmits frames, other stations of the neighbor BSS may sense the transmission and refrain from transmitting. Also, if they cannot sense the transmission, then they will become hidden terminals to the transmitter. Therefore, the chance of simultaneous transmissions among OBSSs is reduced, and thus the whole network may suffer from the poor spatial reuse of OBSSs. Many solutions have been suggested so far to dynamically control the transmit power of WLAN stations and thus to improve the overall throughput of the network [1,4-8]. By adopting those schemes, stations are able to reduce their transmission ranges using only a proper amount of transmit power, such that more stations can simultaneously transmit and thus the overall throughput is increased. The previous works, however, may not be adopted in a practical WLAN system: For example, the problem of how to determine the proper power level is not fully investigated in [1,4]. Also, [6] is based on assumptions that may not be possible in the real world, [7] requires the real-time adaptation of a measurement algorithm, and the power control scheme in [8] is limited to use only in ad hoc mode. In this paper, we propose a method for dynamic transmit power control to enhance the throughput of OBSSs. First, we study the four different radio ranges in 802.11 systems and how OBSSs interfere with each other in a dense WLAN. Based on these observations, we propose a new power control scheme such that every station keeps a table for recording the path loss between itself and the neighbor BSS stations from which request to send/clear to send (RTS/CTS) frames can be overheard.
Utilizing this information, those stations adjust their transmit powers, and data frames are delivered using only the proper powers. We develop a discrete-time Markov chain model in order to verify that our proposed method provides the OBSSs with more opportunities for simultaneous transmissions and thus increases spatial reuse. In addition, simulation results are presented to validate our proposed scheme and its analytical model. The remainder of this paper is organized as follows. We first discuss related previous works and study the interference that occurs in OBSSs. We then present the details of our proposed power control method, followed by the Markov chain model. Finally, the extensive simulation results are reported and concluding remarks are drawn.

Problem definition and related works

Problem definition: There are four different radio ranges in 802.11 systems, as illustrated in Figure 1 [9]: transmission range, network allocation vector (NAV) set range, clear channel assessment (CCA) busy range, and interference range. The transmission range is measured from a transmitter (T) and represents the area within which the receiver station (R) can receive a frame successfully. The NAV set range is the area within which the wireless stations (A, B) can set their NAVs correctly, based on the duration/ID information carried in the RTS/CTS frames. The CCA busy range is the area within which the wireless stations (C, D) can physically sense the busy channel during the data transmission. The interference range is measured from a receiver and represents the area within which wireless stations (E) are able to interfere with the reception of data frames at the receiver. (Figure 1: A sketch of the radio ranges during a four-way frame exchange [9].)

Currently, most stations are configured to transmit at their maximum powers, and such a default deployment may result in high interference among OBSSs [1]. A scenario of interference among OBSSs is illustrated in Figure 2, where two BSSs, BSS 1 and BSS 2, overlap with each other. When station A, which belongs to BSS 1 and is located in the overlapping area, begins a transmission, the neighbor AP (i.e., AP 2) and other stations (such as B) will sense the transmission and set their NAVs. Other neighbor stations which are hidden terminals to the sender A, e.g., D and E, may also try to access the channel. In case D successfully transmits a data frame to AP 2, the AP cannot respond with an ACK in time since it has set its NAV. Due to the unsuccessful transmission, D increases its contention window and contends for retransmission. In this example, we can see that a transmission from one BSS can hamper the operation of neighbor BSSs. This problem of 802.11 WLANs comes from the fact that each station must rely on its direct experience in estimating congestion, which often leads to asymmetric views [10]. (Figure 2: A simple scenario of OBSSs. (a) Topology of OBSSs and (b) transmission process for OBSSs.)

Several attempts have been made to improve the performance of the 802.11 MAC by utilizing a transmit power control scheme. Since the transmit power control (TPC) method standardized in IEEE 802.11 suffers from inaccuracies, Oteri et al. [4] propose a fractional CSMA/CA scheme that combines TPC with user grouping and inter-BSS coordination to improve the performance of overlapping BSSs. However, their approach lacks a mechanism for determining the proper transmit power.
In [6], an iterative power control algorithm is proposed to increase the number of concurrent transmissions in dense wireless networks. This proposal is based on the assumption that every node has complete knowledge of the network topology and current configuration, which may not be possible in the real world. In [7], a run-time self-adaptation algorithm based on packet loss differentiation is proposed, which can jointly adapt both the transmit power and the physical carrier sensing (PCS) threshold. The problem with this scheme is that it requires metrics such as the PER and the interference level to be measured in real time, which increases the burden on the system. Cesana et al. [8] present an interference-aware MAC for ad hoc networks, in which each station transmits using the RTS/CTS procedure, and information about the reception powers of RTS frames and the interference levels is inserted into CTS packets. Utilizing this information, stations which overhear a CTS can tune their transmit powers such that they can transmit simultaneously without interfering with each other.

For performance enhancement of OBSSs, many recent works have provided different approaches. Li et al. [11] propose an interference avoidance algorithm to mitigate the interference from a neighbor BSS operating on the same channel. In this scheme, the AP drops its defer threshold to the energy-detect threshold when transmitting to stations located in the overlapping area; thus, a hidden terminal to the AP can sense the AP's transmission, and the collision probability is reduced. Fang et al. [12] propose a PCF-based two-level carrier sensing mechanism which adopts two NAVs in each station, namely the self-BSS network NAV (SBNAV) and the OBSS network NAV (OBNAV). When a transmission is in progress in one of the BSSs, the station which senses it sets the value of its NAV to either SBNAV or OBNAV, whichever is bigger. If there are no OBSSs, the OBNAV is set to 0. In [13], a link-layer interference packet detection scheme is proposed, in which a receiving station that detects interference packets reports the existence of another BSS to its AP. The AP then announces channel switching to all stations in its BSS to avoid the interference; there is, however, no guarantee that the chosen channel is free from interference.

Dynamic transmit power control

Our proposed dynamic TPC (DTPC) scheme is presented in this section. In DTPC, the stations located in the overlapping area are referred to as interference-prone (IP) stations, adopting the notion in [11]. All stations continually monitor the ongoing transmissions, so a station can determine, by combining this with the information recorded in its path loss table, whether it can start a concurrent transmission. All stations which try to start concurrent transmissions then adjust their transmit powers to proper levels and compete for channel access. If a station succeeds in accessing the channel, then, since its transmission uses a low power, more stations may become hidden terminals to the transmitter. Thus, we propose that all stations use the RTS/CTS procedure, where the RTS/CTS frames are exchanged at the maximum power. Our DTPC scheme enables performance enhancement in two aspects. First, when a transmission from an IP station is ongoing, another station which belongs to a neighbor BSS and is not a hidden terminal to the IP station can start a simultaneous transmission after tuning its transmit power.
Second, if a hidden terminal starts a transmission in parallel with the IP station, the neighbor AP can adjust its transmit power for a timely ACK response, which means a successful transmission.

NAV reset timer modification

A timer named RESET_NAV is defined in the IEEE 802.11 MAC for NAV update [14]. The stations overhearing an RTS set their NAVs and also set the timer RESET_NAV with a duration of CTS_Time + 2 × SIFS_Time + 2 × Slot_Time. Here, CTS_Time is calculated from the length of the CTS frame and the rate at which the CTS frame is transmitted. After setting the timer, the stations reset their NAVs if they do not overhear a data frame from the RTS sender before the timer expires. As the RTS/CTS frames are transmitted at the maximum power while data frames are transmitted at a tuned (lower) power, some stations which set their NAVs according to an RTS frame may not overhear the subsequent data frame, and the RESET_NAV timers of these stations will expire. We modify the NAV reset timer as follows: a new timer D_RESET_NAV is added, whose duration is the same as the duration field of the RTS. Thus, if a station overhears an RTS from a station that belongs to its own BSS, it sets D_RESET_NAV; otherwise it sets RESET_NAV. This is feasible because in an 802.11 WLAN a station is supposed to receive all incoming frames and at least decode the MAC header, unless it is in sleep mode. Moreover, in the infrastructure architecture, direct transmission is only possible between the AP and the stations, so a station can check the address fields of a received packet to confirm whether the sender belongs to the domestic BSS. This modification of the NAV reset timer guarantees that the domestic stations which set the D_RESET_NAV timer will not experience a time-out until the ongoing transmission terminates. Our DTPC proposes that, after the RESET_NAV timer expires, stations enter the back-off (BO) process directly. The station whose BO counter decreases to 0 accesses the channel.

Path loss recording

In our proposal, an AP broadcasts the value of the allowable maximum transmit power via beacon frames, and the stations transmit RTS/CTS frames using this maximum power. It is also assumed that all BSSs adopt the same value of maximum transmit power. Two more fields are added to the RTS frame: the reception power and the signal-to-interference-and-noise ratio (SINR). When a station transmits an RTS frame at the maximum power, it piggybacks the reception power of the most recently received beacon and the SINR of that beacon. Note that $${SINR}_{j}=\frac{P_{tra} \times G_{ij}}{N_{j}},$$ where $P_{tra}$ is the transmit power of sender $i$, $G_{ij}$ is the path loss between sender $i$ and receiver $j$, and $N_{j}$ is the noise and interference experienced at $j$. Thus, when the AP receives the RTS, it can calculate the path loss from the sender. A neighbor-BSS station which overhears the RTS packet can likewise calculate the path loss between the sender and itself, using the allowable maximum transmit power. Since the RTS/CTS frames are exchanged at the maximum power, hidden and exposed terminals are prevented over a wide range. After the RTS/CTS exchange, the sender adjusts its transmit power to a low level and delivers a data frame one SIFS later. In DTPC, each station keeps a table recording the reception power of the beacon frame and the path loss between itself and each neighbor-BSS station from which it can overhear an RTS/CTS frame, i.e., records of the form $\langle node_{id},\ pathloss_{ij},\ p_{rev}\rangle$. The AP keeps such a table for its own BSS stations and for the neighbor-BSS stations located in the overlapping area; a minimal sketch of this bookkeeping is given below.
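The following is a minimal Python sketch of the table update just described. The constant value, the function name, and the field names are illustrative assumptions of ours rather than anything specified in the paper; a dB-scale path loss (announced maximum transmit power minus measured reception power) is used for simplicity.

P_MAX_DBM = 20.0  # allowable maximum transmit power announced in beacons (assumed value)

# node_id -> {"path_loss_db": ..., "p_rev_dbm": ...}
neighbor_table = {}

def on_overheard_rts_cts(node_id, rx_power_dbm):
    """Update (or create) the record for a neighbor-BSS station whose RTS/CTS was
    overheard. Since RTS/CTS frames are sent at the maximum power, the path loss
    can be estimated from the measured reception power of the overheard frame."""
    path_loss_db = P_MAX_DBM - rx_power_dbm
    neighbor_table[node_id] = {
        "path_loss_db": path_loss_db,   # estimated loss between us and the sender
        "p_rev_dbm": rx_power_dbm,      # reception power of the overheard frame
    }

on_overheard_rts_cts("STA-B", rx_power_dbm=-62.0)
print(neighbor_table)  # {'STA-B': {'path_loss_db': 82.0, 'p_rev_dbm': -62.0}}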
When a station overhears an RTS/CTS frame, it updates the record related to the sender; if there is no record for the sender, it adds a new record to the table.

Tuning transmit power

In this section, the method for tuning the transmit power is explained. We assume that the SINR threshold $\gamma$ is the same for the stations in all BSSs. The BSS that starts a transmission first is called the primary BSS, and the other BSS overlapping with the primary BSS is referred to as a secondary BSS. Let $P_{i\_re}$ be the power at which station $i$, which belongs to the primary BSS, received a beacon from its AP, and let $P_{j\_tr}$ be the transmit power of station $j$ in the secondary BSS. Also let $I_{i}$ be the noise and interference that station $i$ experiences, let $G_{ij}$ be the path loss between station $i$ and station $j$ (assuming a symmetric channel), and let ${SINR}_{i}$ be the SINR that $i$ experienced when it received a frame from its AP. In order to guarantee that transmissions from $j$ do not disturb the ongoing reception at $i$, the following condition is required:
$$ \frac{P_{i\_re}}{I_{i}+\frac{P_{j\_tr}}{G_{ij}}}> \gamma. \tag{1}$$
As $I_{i} = \frac{P_{i\_re}}{{SINR}_{i}}$, rearranging (1) gives
$$ P_{j\_tr}< \frac{P_{i\_re}\cdot G_{ij}({SINR}_{i}-\gamma)}{{SINR}_{i}\cdot \gamma}. \tag{2}$$
In order to guarantee that the transmission from station $j$ can be received successfully by its AP, the condition below should be satisfied:
$$ \frac{P_{j\_tr}\times G_{AP\_j}}{I_{AP}}> \gamma, \tag{3}$$
where $G_{AP\_j}$ is the path loss between station $j$ and its AP. Rearranging (3) gives
$$ P_{j\_tr}> \frac{I_{AP}\times \gamma }{G_{AP\_j}}. \tag{4}$$
Combining (2) and (4), the transmit power of station $j$ can be adjusted as follows:
$$ \frac{I_{AP}\times \gamma }{G_{AP\_j}}< P_{j\_tr}< \frac{P_{i\_re}\cdot G_{ij}({SINR}_{i}-\gamma)}{{SINR}_{i}\cdot \gamma}. \tag{5}$$

Transmissions in a non-hidden terminal environment

We consider the proposed DTPC in a non-hidden terminal environment. The network topology is given in Figure 2a, and we use Figure 3a to illustrate the radio ranges of the stations and Figure 3b to present the transmission process. As shown in Figure 3b, the RTS frame of station A contains the reception power of the beacon recently received from AP1 and the SINR of that beacon. The stations which belong to BSS2 and overhear this transmission (e.g., station B and AP2) set their NAVs and RESET_NAV timers. After A receives a CTS from AP1, it tunes its transmit power and transmits a data frame. The stations which set their NAVs according to the RTS frame but cannot sense the following data frame will experience the RESET_NAV time-out and enter the BO process. Station B, whose BO counter reaches 0 first, accesses the channel and delivers a data frame after adjusting its transmit power. (Figure 3: Proposed scheme in a non-hidden terminal environment. (a) Radio ranges and (b) transmission process.)

Transmissions in a hidden terminal environment

Now we use Figure 4a and Figure 4b to consider the transmission process of DTPC in a hidden terminal scenario. The network topology is the same as depicted in Figure 2a. When station A transmits an RTS frame, AP2 overhears it and sets its NAV, as shown in Figure 4b. Then, after receiving a CTS frame from AP1, station A adjusts its power level and transmits its data frame at a properly reduced power. As AP2 cannot overhear this data frame, its RESET_NAV timer expires and it does not set its NAV while station A is exchanging data frames with AP1.
Station D, which is a hidden terminal to station A, senses that the channel is idle and transmits a data frame to AP2 during A's ongoing transmission. AP2 adjusts its power level based on (5) and then responds with an ACK one SIFS later. (Figure 4: Proposed scheme in a hidden terminal environment. (a) Radio ranges and (b) transmission process.)

In order to analyze the performance of the proposed scheme compared to the 802.11 MAC, we develop an analytical model using a discrete-time Markov chain in this section.

Markov chain model

While an ongoing transmission in a BSS prevents transmissions in a neighboring OBSS under the legacy 802.11 MAC, in our proposed scheme the OBSSs are allowed to transmit simultaneously. Thus, in order to compare the channel utilization of the proposed scheme (DTPC) and the legacy MAC, we assume that the co-channel is divided into two sub-channels and that each BSS may occupy one of them. Adopting slotted time, and in order to make the model Markovian, we suppose that the packet lengths, which are integer multiples of the slot duration, are independent and geometrically distributed with parameter q (i.e., the packet duration has a mean of 1/q slots) [15]. We also assume that devices always have packets to send to the AP in each time slot and that each device attempts to transmit with probability p. In addition, it is assumed that there are no hidden or exposed terminals in the domestic BSS. Let $X_n$ be the number of transmissions ongoing in the two sub-channels at time n. Since each BSS can process one transmission during a time slot, the state space for the model is given by S = {0, 1, 2}. Note that the value of $X_n$ can be 2 only when both BSSs process a transmission in the same slot. The relationship between $X_{n+1}$ and $X_n$ can be written as follows:
$$X_{n+1} = X_{n}+S_{n}-T_{n}, \quad n \geq 0,$$
where $S_n$ is the number of new transmissions successfully started at time n, and $T_n$ is the number of terminations at time n. Note that $S_n = 1$ if a new transmission starts successfully in time slot n and $S_n = 0$ otherwise. If $X_n = 2$, which means that both BSSs are processing transmissions, then $S_n = 0$ with probability 1. The number of terminations $T_n$ at time n ranges from 0 to $X_n$. If $X_n = 0$, then $T_n = 0$ with probability 1. When a station has a frame to transmit, it attempts to transmit with probability p. If k stations are transmitting in the current slot, then the success probability $D_k$ in the next time slot is
$$D_{k} = Kp(1-p)^{L-1},$$
where L is the number of stations. Also, the probability $R_{k}^{(j)}$ that j transmissions are finished when the system is in state k is given by
$$R_{k}^{(j)} = Pr[j\ \text{transmissions terminate at time}\ t \mid X_{t-1} = k] = {k \choose j}q^{j}(1-q)^{k-j}.$$
Now the transition probability matrix for the model can be computed as follows:
$$\mathbf{P} = \left[ \begin{array}{ccc} 1-D_{0}(1-D_{0})-D_{0}D_{0} & D_{0}(1-D_{0}) & D_{0}D_{0}\\ 1-D_{0}D_{1}R_{1}^{(1)}-D_{1}R_{1}^{(0)} & D_{1}R_{1}^{(0)}(1-D_{0})+D_{0}D_{1}R_{1}^{(0)} & D_{0}D_{1}R_{1}^{(0)}\\ R_{2}^{(2)} & R_{2}^{(1)} & R_{2}^{(0)} \end{array} \right].$$
The state transition diagram of the Markov chain model for OBSS operations is depicted in Figure 5. In the developed Markov model, the legacy 802.11 MAC and our proposed DTPC differ only in the transition probabilities. The detailed transition probabilities of both cases are omitted here due to space limitations.
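As an illustration of how the limiting probabilities of such a three-state chain translate into the utilization and throughput figures used in the capacity analysis below, here is a minimal Python sketch. The transition matrix, the channel bit rate, and the variable names are illustrative assumptions only (in the paper the matrix entries are built from $D_k$ and $R_k^{(j)}$ and the parameters come from Table 1); the sketch merely shows the mechanics of solving $\pi\mathbf{P}=\pi$ and applying $\rho=\sum_i i\pi_i/N$ and $TH=N\times C\times\rho$.

import numpy as np

C = 54e6   # channel bit rate in bit/s (illustrative value, e.g. 802.11a)
N = 2      # number of sub-channels in the model

# Purely illustrative 3-state transition matrix (states 0, 1, 2 busy sub-channels).
P = np.array([
    [0.80, 0.18, 0.02],   # from state 0
    [0.30, 0.60, 0.10],   # from state 1
    [0.25, 0.50, 0.25],   # from state 2
])

# The limiting (stationary) distribution pi solves pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

rho = sum(i * pi[i] for i in range(3)) / N   # average utilization per sub-channel
TH = N * C * rho                             # overall system throughput

print("pi =", np.round(pi, 4), " rho =", round(rho, 4), " TH =", TH / 1e6, "Mbit/s")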
(Figure 5: State transition diagram.)

Capacity analysis

The average utilization $\rho$ per sub-channel can be obtained as
$$\rho = \frac{\sum_{i\in S}{i\pi_{i}}}{N},$$
where $\pi_i$ is the limiting probability that the system is in state $i$, $S$ is the state space of the Markov chain, and $N$ is the number of sub-channels ($N = 2$ in our model). The overall system throughput is then given by
$$TH = N\times C\times\rho,$$
where C is the channel bit rate. We study the performance of our proposed scheme compared to that of the legacy MAC using the parameters shown in Table 1. (Table 1: System parameters.)

Figure 6a shows the variation of the whole-network throughput with the number of stations in each BSS. The throughputs of both the proposed DTPC and the legacy 802.11 MAC decrease as the number of stations increases. The figure demonstrates the effectiveness of our proposed scheme in enhancing the throughput: one can see that the throughput is increased by around 40 Mbps. Figure 6b shows the dependence of the throughput on the transmission probability. The throughputs of both schemes reach their peaks when the transmission probability is around 0.06. The figure shows once again that our proposed scheme improves the network throughput. (Figure 6: Analysis results. (a) Throughput vs. number of stations; (b) throughput vs. transmission probability.)

In the Markov chain model, we have assumed that there are no hidden or exposed stations in the domestic BSS and that every transmission completes successfully. In a practical network, however, hidden and/or exposed stations may exist, and they will introduce collisions. In order to make the analysis more accurate, our future work will include studying how to model the probability that a transmission completes successfully in a time slot. Also, we have analyzed two overlapping BSSs. Modeling the performance of multiple OBSSs becomes more challenging, as the transition probabilities in both the legacy scheme and the proposed solution depend on the network topology. In particular, whether a neighbor BSS can process a concurrent transmission in a time slot depends on the location of the transmitting station, which makes the model more complex. We plan to investigate this issue in our future work.

We conduct extensive simulations of the proposed DTPC using OPNET. The network size is set to 300 × 300 m², and two overlapping BSSs process their transmissions on the same channel. The network topology is given in Figure 7, where station mobility is not assumed. The numbers of member stations in the two BSSs are the same, and all stations periodically transmit constant bit rate (CBR) UDP packets of 1,024 bytes to their AP. IEEE 802.11a is adopted as the WLAN protocol, and the other parameters are presented in Table 2. All results reported here are average values over 20 runs. (Figure 7: Network topology.) (Table 2: Simulation parameters.)

Figure 8a shows that the overall network throughput decreases as the network size increases, where the number of member stations in each BSS varies from 5 to 20. We can see that the simulation results closely follow the analytical data and that our proposed scheme enhances the overall network performance. Note that since stations using the legacy 802.11 MAC contend for channel access with the stations in the other BSS, only one BSS is allowed to transmit at a time.
The proposed DTPC, however, enables the stations to dynamically adjust their transmit powers so that stations in different BSSs have more opportunities for simultaneous transmission. (Figure 8: Simulation results. (a) Network throughput and (b) retransmission attempts.)

Figure 8b presents the retransmission attempts versus the network size. We find that the retransmission attempts of our proposed scheme are lower than those of the legacy one. This is because our scheme transmits the RTS/CTS frames at the maximum power, which prevents hidden and exposed terminals over a wide range. In addition, unlike the legacy scheme, the neighbor AP can adjust its power to a proper level such that it can respond with immediate ACKs.

In this paper, we have presented a dynamic transmit power control scheme, namely DTPC, for enhancing the performance of OBSSs. Stations can dynamically adjust their transmit powers using the proposed DTPC, which enables the overlapping BSSs to transmit simultaneously and enhances spatial reuse. We have developed a Markov chain model to analyze the performance of DTPC, and the simulation results confirm that the analytical model is properly built. Both analytical and simulation results show that the proposed DTPC significantly improves the performance of OBSSs. As future work, we plan to investigate the performance of DTPC in scenarios with more than two OBSSs. The effects of hidden and/or exposed terminals within a BSS and of various network topologies should also be studied.

References
1. W Li, Y Cui, X Cheng, MA Al-Rodhaan, A Al-Dhelaan, Achieving proportional fairness via AP power control in multi-rate WLANs. IEEE Trans. Wireless Commun. 10(11), 3784–3792 (2011).
2. A Jow, C Schurgers, Borrowed channel relaying: a novel method to improve infrastructure network throughput. EURASIP J. Wireless Commun. Netw. 2009, 174730 (2010).
3. B Han, L Ji, S Lee, RR Miller, B Bhattacharjee, in IEEE International Conference on Communications. Channel access throttling for overlapping BSS management (Dresden, 2009), pp. 1–6.
4. O Oteri, P Xia, F LaSita, R Olesen, in 16th International Symposium on Wireless Personal Multimedia Communications, WPMC. Advanced power control techniques for interference mitigation in dense 802.11 networks (Atlantic, 2013), pp. 1–7.
5. X Wang, H Lou, M Ghosh, G Zhang, P Xia, O Oteri, F La Sita, R Olesen, N Shah, in Systems, Applications and Technology Conference, LISAT, 2014 IEEE Long Island. Carrier grade Wi-Fi: air interface requirements and technologies (Farmingdale, 2014), pp. 1–6.
6. X Liu, S Seshan, P Steenkiste, in Proceedings of the Annual Conference of ITA. Interference-aware transmission power control for dense wireless networks (2007), pp. 1–7.
7. H Ma, J Zhu, S Roy, SY Shin, Joint transmit power and physical carrier sensing adaptation based on loss differentiation for high density IEEE 802.11 WLAN. Comput. Netw. 52, 1703–1720 (2008).
8. M Cesana, D Maniezzo, P Bergamo, M Gerla, in Vehicular Technology Conference 2003. Interference aware (IA) MAC: an enhancement to IEEE 802.11b DCF (Orlando, Florida, USA, 2003), pp. 2799–2803.
9. D Qiao, S Choi, A Jain, KG Shin, in Proceedings of the 9th Annual International Conference on Mobile Computing and Networking. MiSer: an optimal low energy transmission strategy for IEEE 802.11a/h (San Diego, California, USA, 2003), pp. 161–175.
10. X Wang, GB Giannakis, CSMA/CCA: a modified CSMA/CA protocol mitigating the fairness problem for IEEE 802.11 DCF. EURASIP J. Wireless Commun. Netw.
2006, 039604 (2006).
11. Y Li, X Wang, SA Mujtaba, in Vehicular Technology Conference, 2003. Co-channel interference avoidance algorithm in 802.11 wireless LANs (Orlando, Florida, USA, 2003), pp. 2610–2614.
12. Y Fang, D Gu, AB McDonald, J Zhang, in The 14th IEEE Workshop on Local and Metropolitan Area Networks. Two-level carrier sensing in overlapping basic service sets (BSSs) (Island of Crete, Greece, 2005), p. 6.
13. T Tandai, K Toshimitsu, T Sakamoto, in Personal, Indoor and Mobile Radio Communications, 2006. Interferential packet detection scheme for a solution to overlapping BSS issues in IEEE 802.11 WLANs (Finland, 2006), pp. 1–5.
14. IEEE 802.11-2012. Part 11, Wireless LAN medium access control (MAC) and physical layer (PHY) specifications. IEEE (2012).
15. J Mo, H-S So, J Walrand, Comparison of multichannel MAC protocols. IEEE Trans. Mobile Comput. 7(1), 60–65 (2008).

Acknowledgments: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2013008855), and in part by the Research Grant of Kwangwoon University in 2013.

Author information: Department of Electronics Convergence Engineering, Kwangwoon University, Wolgye Dong, 447-1, Nowon-Gu, Seoul, Korea. Xiaoying Lei & Seung Hyong Rhee. Correspondence to Xiaoying Lei.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Lei, X., Rhee, S.H. Performance enhancement of overlapping BSSs via dynamic transmit power control. J Wireless Com Network 2015, 8 (2015). doi:10.1186/s13638-014-0232-y

Keywords: IEEE 802.11 MAC, Overlapping BSSs, Transmit power
Longtables Help I want to resize my table for the Appendices in our paper. But I don't know how. Please help me revise it, the output of the table is just half of the paper. Thank you! \usepackage{amsmath,amssymb,amsthm,epsfig,hyperref} \usepackage{graphicx,array,float,longtable,multicol,bigstrut} \begin{longtable}{@{\extracolsep{\fill}}|c|c|c|@{}} \resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} W(3,n) & v(1,1) & v(1,2) & v(1,3) & v(2,1) & v(2,2) & v(2,3) & v(3,1) & v(3,2) & v(3,3) & v(4,1) & v(4,2) & v(4,3) & v(5,1) & v(5,2) & v(5,3) & v(6,1) & v(6,2) & v(6,3) & v(7,1) & v(7,2) & v(7,3) & v(8,1) & v(8,2) & v(8,3) & v(9,1) & v(9,2) & v(9,3) & v(10,1) & v(10,2) & v(10,3) \\ W(3,1) & 2 & 4 & 6 & & & & & & & & & & & & & & & & & & & & & & & & & & & \\ W(3,2) & 2 & 4 & 6 & 3 & 5 & 7 & & & & & & & & & & & & & & & & & & & & & & & &\\ W(3,3) & 2 & 5 & 8 & 3 & 6 & 9 & 4 & 7 & 10 & & & & & & & & & & & & & & & & & & & & &\\ W(3,4) & 2 & 6 & 10 & 3 & 7 & 11 & 4 & 8 & 12 & 5 & 9 & 13 & & & & & & & & & & & & & & & & & &\\ W(3,5) & 2 & 7 & 12 & 3 & 8 & 13 & 4 & 9 & 14 & 5 & 10 & 15 & 6 & 11 & 16 & & & & & & & & & & & & & & &\\ W(3,6) & 2 & 8 & 14 & 3 & 9 & 15 & 4 & 10 & 16 & 5 & 11 & 17 & 6 & 12 & 18 & 7 & 13 & 19 & & & & & & & & & & & &\\ W(3,7) & 2 & 9 & 16 & 3 & 10 & 17 & 4 & 11 & 18 & 5 & 12 & 19 & 6 & 13 & 20 & 7 & 14 & 21 & 8 & 15 & 22 & & & & & & & & &\\ W(3,8) & 2 & 10 & 18 & 3 & 11 & 19 & 4 & 12 & 20 & 5 & 13 & 21 & 6 & 14 & 22 & 7 & 15 & 23 & 8 & 16 & 24 & 9 & 17 & 25 & & & & & &\\ W(3,9) & 2 & 11 & 20 & 3 & 12 & 21 & 4 & 13 & 22 & 5 & 14 & 23 & 6 & 15 & 24 & 7 & 16 & 25 & 8 & 17 & 26 & 9 & 18 & 27 & 10 & 19 & 28 & & &\\ W(3,10) & 2 & 12 & 22 & 3 & 13 & 23 & 4 & 14 & 24 & 5 & 15 & 25 & 6 & 16 & 26 & 7 & 17 & 27 & 8 & 18 & 28 & 9 & 19 & 29 & 10 & 20 & 30 & 11 & 21 & 31\\ W(3,24) & 2 & 26 & 50 & 3 & 27 & 51 & 4 & 28 & 52 & 5 & 29 & 53 & 6 & 30 & 54 & 7 & 31 & 55 & 8 & 32 & 56 & 9 & 33 & 57 & 10 & 34 & 58 & 11 & 35 & 59 \\ W(3,45) & 2 & 47 & 92 & 3 & 48 & 93 & 4 & 49 & 94 & 5 & 50 & 95 & 6 & 51 & 96 & 7 & 52 & 97 & 8 & 53 & 98 & 9 & 54 & 99 & 10 & 55 & 100 & 11 & 56 & 101\\ W(3,46) & 2 & 48 & 94 & 3 & 49 & 95 & 4 & 50 & 96 & 5 & 51 & 97 & 6 & 52 & 98 & 7 & 53 & 99 & 8 & 54 & 100 & 9 & 55 & 101 & 10 & 56 & 102 & 11 & 57 & 103\\ W(3,47) & 2 & 49 & 96 & 3 & 50 & 97 & 4 & 51 & 98 & 5 & 52 & 99 & 6 & 53 & 100 & 7 & 54 & 101 & 8 & 55 & 102 & 9 & 56 & 103 & 10 & 57 & 104 & 11 & 58 & 105\\ W(3,48) & 2 & 50 & 98 & 3 & 51 & 99 & 4 & 52 & 100 & 5 & 53 & 101 & 6 & 54 & 102 & 7 & 55 & 103 & 8 & 56 & 104 & 9 & 57 & 105 & 10 & 58 & 106 & 11 & 59 & 107\\ W(3,49) & 2 & 51 & 100 & 3 & 52 & 101 & 4 & 53 & 102 & 5 & 54 & 103 & 6 & 55 & 104 & 7 & 56 & 105 & 8 & 57 & 106 & 9 & 58 & 107 & 10 & 59 & 108 & 11 & 60 & 109\\ W(3,60) & 2 & 62 & 122 & 3 & 63 & 123 & 4 & 64 & 24 & 5 & 65 & 125 & 6 & 66 & 126 & 7 & 67 & 127 & 8 & 68 & 128 & 9 & 69 & 128 & 10 & 70 & 130 & 11 & 71 & 131\\ \end{tabular}} \normalsize\textbf{Table 1. $L(2,1)$-labeling of each vertex in \\ triangular windmill graph} tables tabularx arrays Angie ViiAngie Vii Why are you nesting a tabular environment in a longtable? – user31729 Feb 27 '18 at 16:10 What kind of information are you trying to convey by printing a table with 31 [!!] columns? Do you even remotely expect anyone to glance at such a table for more than two seconds? – Mico Feb 27 '18 at 16:23 Don't use \resizebox for tables: it leads to inconsistent fontsizes. 
Change the fon,tsize to, say, \footnotesize and play with the value of \tabcolsep (default is 2×6pt). But I doubt that with so many column, it can fit on a page width. Maybe in landscape orientation... – Bernard Feb 27 '18 at 16:24 @Mico: You have actually counted the number of columns? I was too lazy for that – user31729 Feb 27 '18 at 16:24 table is so wide that with tiny font size and \tabcolsize=0pt can not be fit on page in landscape orientation. from latex point of view you should split table (at least) on two parts. before this consider @Mico first comment! – Zarko Feb 27 '18 at 16:33 considered page layout as proposed Mico in his answer for landscape orientation used pdflscape package reorganized columns headers allows font size \small since table after that measures occupy two pages, the longtable is used mwe: \usepackage[a4paper,margin=2.5cm]{geometry} % set page dimensions \usepackage{booktabs, longtable} \newcommand\mc[1]{\multicolumn{3}{@{}c@{}}{#1}} \usepackage{siunitx} \usepackage{pdflscape} \begin{landscape} \captionsetup{font=small} \setlength\LTleft{0pt} \setlength\LTright{0pt} \small \setlength\tabcolsep{0pt} \begin{longtable}{@{\extracolsep{\fill}} *{9}{S[table-format=1.0] S[table-format=2.0] S[table-format=2.1]} \caption{$L(2,1)$-labeling of each vertex in triangular windmill graph} \label{tab: my huge table} \\ & \mc{v(1,i)} & \mc{v(2,i)} & \mc{v(3,i)} & \mc{v(4,i)} & \mc{v(9,i)} & \mc{v(10,i)} \\ \cmidrule{2-4} \cmidrule{5-7} \cmidrule{8-10} \cmidrule{11-13} \cmidrule{14-16}\cmidrule{17-19}\cmidrule{20-22}\cmidrule{23-25} \cmidrule{26-28}\cmidrule{29-31} W(3,n) & {1} & {2} & {3} & {1} & {2} & {3} & {1} & {2} & {3} & {1} & {2} & {3} & {1} & {2} & {3} & {1} & {2} & {3} & {1} & {2} & {3} & {1} & {2} & {3} & {1} & {2} & {3} & {1} & {2} & {3} \\ \midrule \caption{$L(2,1)$-labeling of each vertex in triangular windmill graph (cont.)}\\ \multicolumn{31}{r}{\textit{continue is on the next page}} \bottomrule W(3,1) & 2 & 4 & 6 \\ W(3,2) & 2 & 4 & 6 & 3 & 5 & 7 \\ W(3,3) & 2 & 5 & 8 & 3 & 6 & 9 & 4 & 7 & 10 \\ W(3,4) & 2 & 6 & 10 & 3 & 7 & 11 & 4 & 8 & 12 & 5 & 9 & 13\\ W(3,5) & 2 & 7 & 12 & 3 & 8 & 13 & 4 & 9 & 14 & 5 & 10 & 15 & 6 & 11 & 16\\ \addlinespace W(3,6) & 2 & 8 & 14 & 3 & 9 & 15 & 4 & 10 & 16 & 5 & 11 & 17 & 6 & 12 & 18 & 7 & 13 & 19 \\ W(3,7) & 2 & 9 & 16 & 3 & 10 & 17 & 4 & 11 & 18 & 5 & 12 & 19 & 6 & 13 & 20 & 7 & 14 & 21 & 8 & 15 & 22 \\ W(3,8) & 2 & 10 & 18 & 3 & 11 & 19 & 4 & 12 & 20 & 5 & 13 & 21 & 6 & 14 & 22 & 7 & 15 & 23 & 8 & 16 & 24 & 9 & 17 & 25 \\ W(3,9) & 2 & 11 & 20 & 3 & 12 & 21 & 4 & 13 & 22 & 5 & 14 & 23 & 6 & 15 & 24 & 7 & 16 & 25 & 8 & 17 & 26 & 9 & 18 & 27 & 10 & 19 & 28 \\ \end{landscape} ZarkoZarko +1. Nice touch to insert \addlinespace after every 5th row! – Mico Feb 27 '18 at 20:24 If your document has standard margins (ca 1", or 2.5cm) and if you're willing to use a \tiny font size, along with no vertical lines, very few horizontal lines, and very little whitespace between the columns, it is just, but only just, possible to fit the entire table on one page, by turning it 90 degrees, i.e., by using landscape mode. No need to use a longtable, actually. To improve legibility ever so slightly, I would suggest that you not center-set the contents of the numeric columns but, instead, align the numbers on the (implicit) decimal markers; this may be done by loading the siunitx package and using its S column type. Do ask yourself, please, what a reader is likely to take away from staring at a table that comprises 30 [!] data columns and 60 [!!] 
data rows. If you suspect that the answer might be "nothing at all," do your readers (and yourself!) a big favor by showing far fewer numbers -- but hopefully just the most relevant ones. \usepackage{caption,rotating,booktabs,siunitx} \begin{sidewaystable} \captionsetup{font=scriptsize} \tiny \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l *{30}{S[table-format=3.0]}} W(3,n) & {v(1,1)} & {v(1,2)} & {v(1,3)} & {v(2,1)} & {v(2,2)} & {v(2,3)} & {v(3,1)} & {v(3,2)} & {v(3,3)} & {v(4,1)} & {v(4,2)} & {v(4,3)} & {v(5,1)} & {v(5,2)} & {v(5,3)} & {v(6,1)} & {v(6,2)} & {v(6,3)} & {v(7,1)} & {v(7,2)} & {v(7,3)} & {v(8,1)} & {v(8,2)} & {v(8,3)} & {v(9,1)} & {v(9,2)} & {v(9,3)} & {v(10,1)} & {v(10,2)}& {v(10,3)} \\ \end{tabular*} MicoMico you can obtain some space if you rotate column headers :-). maybe this be sufficient to use \scriptsize font size – Zarko Feb 27 '18 at 17:05 @Zarko - Thanks. The table shown above barely fits in the textblock, vertically as well has horizontally. If one were to rotate the header cells and switch to \scriptsize, one would also have to use a longtable environment and let the table span two pages. Feel free to post an answer that implements these ideas. :-) Of course, one would still be stuck with looking at a 1800-cell table... – Mico Feb 27 '18 at 17:35 I propose this layout, with \footnotesize and landscape orientation. I load ltablex which combines (and loads) longtable and tabularx, so that tabularx environments can break across lines. I added some coloured columns, as they seem to be grouped by three, to improve readability. Last, I placed the caption where it should be – at the beginning of the table, and repeated at the beginning of each new page: \documentclass[12pt, svgnames, table]{report} \usepackage[margin=2cm]{geometry} \usepackage{amsmath, amssymb, amsthm} \usepackage{lscape} \usepackage{graphicx, ltablex, makecell, caption} \newcommand{\rotbox}[1]{\multicolumn{1}{l}{\rlap{\rotatebox[origin =cb]{45}{#1}}}} \newcommand{\colrotbox}[1]{\multicolumn{1}{l}{\rlap{\rotatebox[origin =ct]{45}{\fboxsep =0.5pt\colorbox{WhiteSmoke}{#1}}}}} \usepackage{lipsum} \lipsum[1] \keepXColumns{ \noindent% \footnotesize% \setlength{\tabcolsep}{4pt}% \setlength{\extrarowheight}{2pt} \captionsetup{font = bf, labelsep = period, justification = centerlast} \arrayrulecolor{red}% \begin{tabularx}{\linewidth}{|X|*{10}{ >{\columncolor{WhiteSmoke}}c| p{5.5mm}|>{\arraybackslash}p{5.5mm}|}} %% \caption{\boldmath$L(2,1)$-labeling of each vertex in triangular windmill graph}\\ \multicolumn{1}{l}{\thead[lb]{W(3,n)}} & \colrotbox{v(1,1)} & \rotbox{v(1,2)} & \rotbox{v(1,3)} & \colrotbox{v(2,1)} & \rotbox{v(2,2)} & \rotbox{v(2,3)} & \colrotbox{v(3,1)} & \rotbox{v(3,2)} & \rotbox{v(3,3)} & \colrotbox{v(4,1)} & \rotbox{v(4,2)} & \rotbox{v(4,3)} & \colrotbox{v(5,1)} & \rotbox{v(5,2)} & \rotbox{v(5,3)} & \colrotbox{v(6,1)} & \rotbox{v(6,2)} & \rotbox{v(6,3)} & \colrotbox{v(7,1)} & \rotbox{v(7,2)} & \rotbox{v(7,3)} & \colrotbox{v(8,1)} & \rotbox{v(8,2)} & \rotbox{v(8,3)} & \colrotbox{v(9,1)} & \rotbox{v(9,2)} & \rotbox{v(9,3)} & \colrotbox{v(10,1)} & \rotbox{v(10,2)} & \rotbox{v(10,3)} \multicolumn{31}{c}{\bfseries\boldmath \tablename~\thetable. $L(2,1)$-labeling of each vertex in triangular windmill graph~\textmd{(continued)}}\medskip \\ \noalign{\smallskip} \multicolumn{31}{r}{\scriptsize (to be continued)} \end{tabularx}% BernardBernard Not the answer you're looking for? Browse other questions tagged tables tabularx arrays or ask your own question. 
Expected Value of a Discrete Random Variable

The expected value associated with a discrete random variable $X$, denoted by either $E(X)$ or $\mu$ (depending on context), is the theoretical mean of $X$. For a discrete random variable, this means that the expected value should be identical to the mean value of a set of realizations of this random variable, when the distribution of this set agrees exactly with the associated probability mass function (presuming such a set exists).

As an example, suppose that we were flipping a coin three times and $X$ counted the number of heads seen. The probability mass function for $X$ is shown below. $$\begin{array}{c|c|c|c|c}X & 0 & 1 & 2 & 3\\ \hline P(X) & 1/8 & 3/8 & 3/8 & 1/8\\ \end{array}$$

If we were to do this 200 times, we would "expect" to see
0 heads 1/8th of the time, or 200*(1/8) = 25 times
1 head 3/8ths of the time, or 200*(3/8) = 75 times
2 heads 3/8ths of the time, or 200*(3/8) = 75 times
3 heads 1/8th of the time, or 200*(1/8) = 25 times

The mean of this theoretical distribution would then be $$\mu = \frac{0 \cdot 25 + 1 \cdot 75 + 2 \cdot 75 + 3 \cdot 25}{200}$$ But think about where these numbers came from -- we could write instead: $$\mu = \frac{ 0 \cdot 200 \cdot (1/8) + 1 \cdot 200 \cdot (3/8) + 2 \cdot 200 \cdot (3/8) + 3 \cdot 200 \cdot (1/8)}{200}$$ We can then factor out a 200 from every term in the numerator, which cancels with the 200 in the denominator, yielding $$\mu = 0 \cdot (1/8) + 1 \cdot (3/8) + 2 \cdot (3/8) + 3 \cdot (1/8) = 1.5$$ So we expect to see an average of 1.5 heads throughout our trials.

Notice the complete lack of 200 in the last expression! This wasn't a coincidence -- the same thing would have happened if the 200 had been 1000, 10 million, or 13,798,235,114. As you can see, the "expected value" depends only on the outcome values and the probabilities with which those outcomes occur. We just have to multiply each outcome by its corresponding probability and add the results up! With this in mind, and assuming that this random variable has an outcome/sample space of $S$ and probability mass function $P$, this expected value is given by $$E(X) = \sum_{x \in S} \left[x \cdot P(x)\right]$$

Properties of Expected Value

We will often need to find the expected value of the sum or difference of two or more random variables. To this end, suppose both $X$ and $Y$ are discrete random variables with outcome spaces $S_x = \{x_1, x_2, \ldots\}$ and $S_y = \{y_1, y_2, \ldots\}$, respectively. One can show without too much trouble that the expected value of a sum of two random variables is the sum of their individual expected values. That is to say: if $X$ and $Y$ are random variables, $$E(X \pm Y) = E(X) \pm E(Y)$$ As a first step in seeing why this is true, note that when talking about two random variables, one needs to worry about their joint distribution -- that is to say, rather than dealing with $P(X=x)$, one needs to deal with $P(X=x \textrm{ and } Y=y)$ instead. If it helps, one can think of $X$ and $Y$ as the separate outcomes of two experiments (or games) -- which may or may not be related. For example, $X$ could be how much one wins in one hand of poker, while $Y$ might be how much one wins in another hand. Then $X+Y$ would be how much you won playing both games.
Thus, $$\begin{array}{rcl} E(X+Y) &=& \displaystyle{\sum_{x \in S_x,\,y \in S_y} (x+y) \cdot P(X=x \textrm{ and } Y=y)}\\\\ &=& \displaystyle{\sum_{x \in S_x,\,y \in S_y} x \cdot P(X=x \textrm{ and } Y=y) + \sum_{x \in S_x,\,y \in S_y} y \cdot P(X=x \textrm{ and } Y=y)} \end{array}$$ Consider the first of these sums. Note that $$\begin{array}{rcl} \displaystyle{\sum_{x \in S_x,\,y \in S_y} x \cdot P(X=x \textrm{ and } Y=y)} &=& \displaystyle{\sum_{x \in S_x} \left[ \sum_{y \in S_y} x \cdot P(X=x \textrm{ and } Y=y) \right]}\\\\ &=& \displaystyle{\sum_{x \in S_x} \left[ x \sum_{y \in S_y} P(X=x \textrm{ and } Y=y) \right]}\\\\ &=& \displaystyle{\sum_{x \in S_x} x \cdot P(X=x)}\\\\ &=& E(X) \end{array}$$ Similarly, $$\displaystyle{\sum_{x \in S_x,\,y \in S_y} y \cdot P(X=x \textrm{ and } Y=y) = E(Y)}$$ Combining these, we have $$E(X+Y) = E(X) + E(Y)$$ One can also show (even more quickly) that the expected value of some multiple of a random variable is that same multiple of the expected value of that random variable. That is to say, If $X$ is a random variable and $c$ is some real value, then $$E(cX) = c \cdot E(X)$$ To see this, note $$\begin{array}{rcl} E(cX) &=& \displaystyle{\sum_{x \in S_x} \left[ cx \cdot P(x) \right]}\\\\ &=& c \displaystyle{\sum_{x \in S_x} \left[ x \cdot P(x) \right]}\\\\ &=& c \cdot E(X) \end{array}$$ Combining these two properties (i.e., $E(X + Y) = E(X) + E(Y)$ and $E(cX) = cE(X)$), using $c= -1$, we arrive at the result stated at the beginning of this section $$E(X \pm Y) = E(X) \pm E(Y)$$
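For readers who like to check such identities numerically, here is a small Python sketch (the function and variable names are ours, not part of the original article). It recomputes $E(X)$ for the three-coin-flip distribution above, computes $E(3X)$, and verifies $E(X+Y) = E(X) + E(Y)$ on a joint distribution built from two independent copies of $X$; independence is used only to construct the example joint probabilities, since linearity itself requires no such assumption.

from itertools import product

pmf = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}

def expected_value(p):
    # E(X) = sum over outcomes of x * P(x)
    return sum(x * px for x, px in p.items())

E_X = expected_value(pmf)                                     # 1.5
E_3X = expected_value({3 * x: px for x, px in pmf.items()})   # E(3X) = 3 E(X) = 4.5

# E(X + Y) computed directly from the joint distribution P(X=x and Y=y) = P(x) P(y).
E_sum = sum((x + y) * pmf[x] * pmf[y] for x, y in product(pmf, repeat=2))

print(E_X, E_3X, E_sum)   # 1.5 4.5 3.0  ->  E(X+Y) = E(X) + E(Y)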
Corporate Finance & Accounting Financial Analysis Sales Mix Variance Jake Frankenfield Jake Frankenfield is an experienced writer on a wide range of business news topics and his work has been featured on Investopedia and The New York Times among others. He has done extensive work and research on Facebook and data collection, Apple and user experience, blockchain and fintech, and cryptocurrency and the future of money. How to Value a Company What Is Sales Mix Variance? Sales mix variance is the difference between a company's budgeted sales mix and the actual sales mix. Sales mix is the proportion of each product sold relative to total sales. Sales mix affects total company profits because some products generate higher profit margins than others. Sales mix variance includes each product line sold by the firm. The sales mix compares the sales of a product to that of total sales. The sales mix variance compares budgeted sales to actual sales and helps identify the profitability of a product or product line. Understanding Sales Mix Variance A variance is the difference between budgeted and actual amounts. Companies review sales mix variances to identify which products and product lines are performing well and which ones are not. It tells the "what" but not the "why." As a result, companies use the sales mix variance and other analytical data before making changes. For example, companies use profit margins (net income/sales) to compare the profitability of different products. Assume, for example, that a hardware store sells a $100 trimmer and a $200 lawnmower and earns $20 per unit and $30 per unit, respectively. The profit margin on the trimmer is 20% ($20/$100), while the lawnmower's profit margin is 15% ($30/$200). Although the lawnmower has a higher sales price and generates more revenue, the trimmer earns a higher profit per dollar sold. The hardware store budgets for the units sold and the profit generated for each product the business sells. Sales mix variance is a useful tool in data analysis, but alone it may not give a complete picture of why something is the way it is (root cause). Example of Sales Mix Variances Sales mix variance is based on this formula: SMV = ( AUS × ( ASM − BSM ) ) × BCMPU where: AUS = actual units sold ASM = actual sales mix percentage BSM = budgeted sales mix percentage BCMPU = budgeted contribution margin per unit \begin{aligned} &\text{SMV} = ( \text{AUS} \times ( \text{ASM} - \text{BSM} ) ) \times \text{BCMPU} \\ &\textbf{where:} \\ &\text{AUS} = \text{actual units sold} \\ &\text{ASM} = \text{actual sales mix percentage} \\ &\text{BSM} = \text{budgeted sales mix percentage} \\ &\text{BCMPU} = \text{budgeted contribution margin per unit} \\ \end{aligned} ​SMV=(AUS×(ASM−BSM))×BCMPUwhere:AUS=actual units soldASM=actual sales mix percentageBSM=budgeted sales mix percentageBCMPU=budgeted contribution margin per unit​ Analyzing the sales mix variance helps a company detect trends and consider the impact they on company profits. Assume that a company expected to sell 600 units of Product A and 900 units of Product B. Its expected sales mix would be 40% A (600 / 1500) and 60% B (900 / 1,500). If the company sold 1000 units of A and 2000 units of B, its actual sales mix would have been 33.3% A (1,000 / 3,000) and 66.6% B (2,000 / 3,000). The firm can apply the expected sales mix percentages to actual sales; A would be 1,200 (3,000 x 0.4) and B would be 1,800 (3,000 x 0.6). 
Based on the budgeted sales mix and actual sales, A's sales are under expectations by 200 units (1,200 budgeted units - 1,000 sold). However, B's sales exceeded expectations by 200 units (2,000 sold - 1,800 budgeted units). Assume also that the budgeted contribution margin per unit is $12 for A and $18 for B. The sales mix variance for A = 1,000 actual units sold * (33.3% actual sales mix - 40% budgeted sales mix) * ($12 budgeted contribution margin per unit), or an ($804) unfavorable variance. For B, the sales mix variance = 2,000 actual units sold * (66.6% actual sales mix - 60% budgeted sales mix) * ($18 budgeted contribution margin per unit), or a $2,376 favorable variance.
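As a quick sanity check of the arithmetic above, here is a short Python sketch (the function and variable names are ours). The mix percentages are used rounded to one decimal place, exactly as in the text, which is why A comes out at -$804 rather than the roughly -$800 that exact thirds would give.

def sales_mix_variance(actual_units, actual_mix, budgeted_mix, budgeted_cm_per_unit):
    # SMV = actual units sold * (actual mix - budgeted mix) * budgeted contribution margin per unit
    return actual_units * (actual_mix - budgeted_mix) * budgeted_cm_per_unit

smv_a = sales_mix_variance(1000, 0.333, 0.40, 12)   # -804.0  (unfavorable)
smv_b = sales_mix_variance(2000, 0.666, 0.60, 18)   #  2376.0 (favorable)
print(round(smv_a, 2), round(smv_b, 2))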
Modular Cauchy kernel corresponding to the Hecke curve
Arnold Mathematical Journal. 2018. Vol. 4. No. 3-4. P. 301-313. Sakharova N.
In this paper we construct the modular Cauchy kernel $\Xi_N(z_1, z_2)$, i.e. the modular invariant function of two variables, $(z_1, z_2) \in \mathbb{H} \times \mathbb{H}$, with a first order pole on the curve $$D_N=\left\{(z_1, z_2) \in \mathbb{H} \times \mathbb{H}\ |\ z_2=\gamma z_1, \ \gamma \in \Gamma_0(N) \right\}.$$ The function $\Xi_N(z_1, z_2)$ is used in two cases and for two different purposes. Firstly, we prove a generalization of the Zagier theorem (\cite{La}, \cite{Za3}) for the Hecke subgroups $\Gamma_0(N)$ of genus $g>0$. Namely, we obtain a kind of "kernel function" for the Hecke operator $T_N(m)$ on the space of weight 2 cusp forms for $\Gamma_0(N)$, which is the analogue of the Zagier series $\omega_{m, N}(z_1,\bar{z_2}, 2)$. Secondly, we consider an elementary proof of the formula for the infinite Borcherds product of the difference of two normalized Hauptmoduls, $J_{\Gamma_0(N)}(z_1)-J_{\Gamma_0(N)}(z_2)$, for the genus zero congruence subgroups $\Gamma_0(N)$.
Research target: Mathematics. Priority areas: mathematics. Keywords: modular forms, Borcherds products.

Sabaean Studies (Сабейские этюды). Korotayev A. V. Moscow: Vostochnaya Literatura, 1997.

Dynamics of Information Systems: Mathematical Foundations. Iss. 20. NY: Springer, 2012. This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems" which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundations presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.

Siegel modular forms of genus 2 with the simplest divisor. Gritsenko V., Cléry F. Proceedings of the London Mathematical Society. 2011. Vol. 102. No. 6. P. 1024-1052. We prove that there exist exactly eight Siegel modular forms with respect to the congruence subgroups of Hecke type of the paramodular groups of genus two vanishing precisely along the diagonal of the Siegel upper half-plane. This is a solution of a question formulated during the conference "Black holes, Black Rings and Modular Forms" (ENS, Paris, August 2007). These modular forms generalize the classical Igusa form and the forms constructed by Gritsenko and Nikulin in 1998.

24 faces of the Borcherds modular form Phi_{12}. Gritsenko V. arxiv.org. math. Cornell University, 2012. No. 6503. The fake monster Lie algebra is determined by the Borcherds function Phi_{12} which is the reflective modular form of the minimal possible weight with respect to O(II_{2,26}). We prove that the first non-zero Fourier-Jacobi coefficient of Phi_{12} in any of the 23 Niemeier cusps is equal to the Weyl-Kac denominator function of the affine Lie algebra of the root system of the corresponding Niemeier lattice. This is an automorphic answer (in the case of the fake monster Lie algebra) to the old question of I. Frenkel and A.
Feingold (1983) about possible relations between hyperbolic Kac-Moody algebras, Siegel modular forms and affine Lie algebras.

Conjecture on theta-blocks of the first order (Гипотеза о тэта-блоках первого порядка). Gritsenko V. A., Wang H. Uspekhi Matematicheskikh Nauk. 2017. Vol. 72. No. 5. P. 191-192. In this paper we prove the conjecture above in the last case of known theta-blocks of weight 2. This gives a new interesting series of Borcherds products of weight 2.

On M-functions associated with modular forms. Lebacque P., Zykin A. I. HAL: archives-ouvertes. HAL. Le Centre pour la Communication Scientifique Directe, 2017. Let $f$ be a primitive cusp form of weight $k$ and level $N$, let $\chi$ be a Dirichlet character of conductor coprime with $N$, and let $L(f \otimes \chi, s)$ denote either $\log L(f \otimes \chi, s)$ or $(L'/L)(f \otimes \chi, s)$. In this article we study the distribution of the values of $L$ when either $\chi$ or $f$ varies. First, for a quasi-character $\psi : \mathbb{C} \to \mathbb{C}^{\times}$ we find the limit of the average $\mathrm{Avg}_{\chi}\, \psi(L(f \otimes \chi, s))$, when $f$ is fixed and $\chi$ varies through the set of characters with prime conductor that tends to infinity. Second, we prove an equidistribution result for the values of $L(f \otimes \chi, s)$ by establishing analytic properties of the above limit function. Third, we study the limit of the harmonic average $\mathrm{Avg}^{h}_{f}\, \psi(L(f, s))$, when $f$ runs through the set of primitive cusp forms of given weight $k$ and level $N \to \infty$. Most of the results are obtained conditionally on the Generalized Riemann Hypothesis for $L(f \otimes \chi, s)$.

Sakharova N. arxiv.org. math. Cornell University, 2018. No. 1802.03299.
Model for organizing cargo transportation with an initial station of departure and a final station of cargo distribution Khachatryan N., Akopov A. S. Business Informatics. 2017. No. 1(39). P. 25-35. A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region which produce raw material for manufacturing industry located in another region, and there is another node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is followed by the set rule of control. For such a model, one must determine possible modes of cargo transportation and describe their properties. This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of the solution satisfying nonlocal linear restrictions is extremely narrow. It results in the need for the "correct" extension of solutions of a system of differential equations to a class of quasi-solutions having the distinctive feature of gaps in a countable number of points. It was possible numerically using the Runge–Kutta method of the fourth order to build these quasi-solutions and determine their rate of growth. Let us note that in the technical plan the main complexity consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of quasi-solutions and, in particular, sizes of gaps (jumps) of solutions on a number of parameters of the model characterizing a rule of control, technologies for transportation of cargo and intensity of giving of cargo on a node station. Nullstellensatz over quasi-fields Trushin D. Russian Mathematical Surveys. 2010. Vol. 65. No. 1. P. 186-187. Деловой климат в оптовой торговле во II квартале 2014 года и ожидания на III квартал Лола И. С., Остапкович Г. В. Современная торговля. 2014. № 10. Прикладные аспекты статистики и эконометрики: труды 8-ой Всероссийской научной конференции молодых ученых, аспирантов и студентов Вып. 8. МЭСИ, 2011. Laminations from the Main Cubioid Timorin V., Blokh A., Oversteegen L. et al. arxiv.org. math. Cornell University, 2013. No. 1305.5788. According to a recent paper \cite{bopt13}, polynomials from the closure $\ol{\phd}_3$ of the {\em Principal Hyperbolic Domain} ${\rm PHD}_3$ of the cubic connectedness locus have a few specific properties. The family $\cu$ of all polynomials with these properties is called the \emph{Main Cubioid}. In this paper we describe the set $\cu^c$ of laminations which can be associated to polynomials from $\cu$. Entropy and the Shannon-McMillan-Breiman theorem for beta random matrix ensembles Bufetov A. I., Mkrtchyan S., Scherbina M. et al. arxiv.org. math. Cornell University, 2013. No. 1301.0342. Bounded limit cycles of polynomial foliations of ℂP² Goncharuk N. B., Kudryashov Y. arxiv.org. math. Cornell University, 2015. No. 1504.03313. In this article we prove in a new way that a generic polynomial vector field in ℂ² possesses countably many homologically independent limit cycles. 
The new proof needs no estimates on integrals, provides a thinner exceptional set for quadratic vector fields, and provides limit cycles that stay in a bounded domain. The parametrix method for diffusions and Markov chains. Konakov V. D. STI. WP BRP. Publishing house of the board of trustees of the Faculty of Mechanics and Mathematics, Moscow State University, 2012. No. 2012. Is the function field of a reductive Lie algebra purely transcendental over the field of invariants for the adjoint action? Colliot-Thélène J., Kunyavskiĭ B., Vladimir L. Popov et al. Compositio Mathematica. 2011. Vol. 147. No. 2. P. 428-466. Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively k(g), be the field of k-rational functions on G, respectively g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question of whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself. Absolutely convergent Fourier series. An improvement of the Beurling-Helson theorem Vladimir Lebedev. arxiv.org. math. Cornell University, 2011. No. 1112.4892v1. We obtain a partial solution of the problem on the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one cannot achieve growth slower than the logarithm of the frequency. Though the conjecture is still not confirmed, the author obtained the first nontrivial results. Justification of the adiabatic limit for hyperbolic Ginzburg-Landau equations. Palvelev R., Sergeev A. G. Proceedings of the Steklov Institute of Mathematics. 2012. Vol. 277. P. 199-214. Hypercommutative operad as a homotopy quotient of BV Khoroshkin A., Markaryan N. S., Shadrin S. arxiv.org. math. Cornell University, 2012. No. 1206.3749. We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus 0 curves) and BV/Δ (the homotopy quotient of the Batalin-Vilkovisky operad by the BV-operator). In other words, we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras. Added: Aug 29, 2012 Cross-sections, quotients, and representation rings of semisimple algebraic groups V. L. Popov. Transformation Groups. 2011. Vol. 16. No. 3. P. 827-856.
Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map $T \dashrightarrow G/T$, where T is a maximal torus of G and W the Weyl group.
Total angular momentum operator $L^2$ Consider a system with a state of fixed total angular momentum $l = 2$. What are the eigenvalues of the following operators (a) $L_z$ (b) $3/5L_x − 4/5L_y$ (c) $2L_x − 6L_y + 3L_z$ My problem is more to do with the definition of the angular momentum operator: I think the angular momentum operator is $L^2=L_x^2+L_y^2+L_z^2$. I have seen many different eigenvalues this gets when applied to an eigenket: $L^2|\psi\rangle=\hbar^2 k^2|\psi\rangle$ $L^2|\psi\rangle=\hbar^2 j(j+1)|\psi\rangle$ along with a few others. I understand that these are sort of equivalent and we are just using numbers to represent the value. However, what is the $l=2$? Is it the $k$, the $j$? I know what to do from here on, $m$ (the quantum number for angular momentum along a given axis) varies from $-j$ to $+j$ quantum-mechanics homework-and-exercises angular-momentum operators Toby Peterken $\begingroup$ Where have you seen $L^2|\psi\rangle=\hbar^2k^2|\psi\rangle$? That result doesn't immediately make sense to me. Did you happen to confuse $L^2$ with $p^2$ in that instance with the $k^2$? $\endgroup$ – WAH Aug 20 '17 at 20:05 $\begingroup$ Your question is impossible to answer unless you give the state. Simply knowing $\ell=2$ is not enough to say anything about components. $\endgroup$ – ZeroTheHero Aug 20 '17 at 22:05 $\begingroup$ That was the question though, I have changed nothing $\endgroup$ – Toby Peterken Aug 21 '17 at 6:40 $\begingroup$ @ZeroTheHero I disagree on that: given the value of the angular momentum you can explicitly build the form of the operators $L_x$, $L_y$, $L_z$ in some representation (for example the usual $|l,m\rangle$): $L_z$ is easy as it's diagonal, and has eigenvalues $-2,-1,0,1,2$; the other 2 are way more involved but in principle they are 5x5 matrices, with their own eigenvalue problem. $\endgroup$ – Francesco Bernardini Jan 7 '19 at 1:44 $\begingroup$ @FrancescoBernardini you understand the question differently than I do. $\endgroup$ – ZeroTheHero Jan 7 '19 at 1:47 I think the trick here is to note that the operator in (b) measures the component of angular momentum along the axis $\hat{n} = (3/5, -4/5, 0)$. Its eigenvalues must be $\{2,1,0,-1,-2\}$ in units of $\hbar$, the same as those of $L_z$, because you could have chosen your z-axis to lie along $\hat{n}$. Similarly, the operator in (c) is 7 times the component of $\vec{L}$ along the normalized axis $\hat{n} = (2/7, -6/7, 3/7)$. Its eigenvalues must therefore be $\{14,7,0,-7,-14\}$ in units of $\hbar$, by the same reasoning. Paul G The information you are given, i.e.
$l=2$, tells you that the operators $L_x$, $L_y$, $L_z$ can be represented as 5x5 matrices, which operate on a vector space spanned by the five vectors $$\{|2,-2\rangle,|2,-1\rangle,|2,0\rangle,|2,1\rangle,|2,2\rangle\}$$ which can be represented, for example, by one of the most natural bases: $$|2,-2\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 1\\ 0\\ 0\\ 0\\ 0 \end{array}\right),|2,-1\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 0\\ 1\\ 0\\ 0\\ 0 \end{array}\right),|2,0\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 0\\ 0\\ 1\\ 0\\ 0 \end{array}\right),|2,1\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 0\\ 0\\ 0\\ 1\\ 0 \end{array}\right),|2,2\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 0\\ 0\\ 0\\ 0\\ 1 \end{array}\right)$$ $L_z$ is easy because in this basis it is diagonal by definition, and it would be represented by $$L_z\stackrel{\cdot}{=}\hbar\left(\begin{array}{ccccc} -2 & 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 2\\ \end{array}\right)$$ On the other hand, the two operators $L_+$ and $L_-$, defined as $$L_\pm=L_x\pm iL_y$$ are then represented by $$L_-\stackrel{\cdot}{=}\hbar\left(\begin{array}{ccccc} 0 & 2 & 0 & 0 & 0\\ 0 & 0 & \sqrt6 & 0 & 0\\ 0 & 0 & 0 & \sqrt6 & 0\\ 0 & 0 & 0 & 0 & 2\\ 0 & 0 & 0 & 0 & 0\\ \end{array}\right),\quad L_+\stackrel{\cdot}{=}\hbar\left(\begin{array}{ccccc} 0 & 0 & 0 & 0 & 0\\ 2 & 0 & 0 & 0 & 0\\ 0 & \sqrt6 & 0 & 0 & 0\\ 0 & 0 & \sqrt6 & 0 & 0\\ 0 & 0 & 0 & 2 & 0\\ \end{array}\right)$$ So, inverting the definition, $L_x=\frac12(L_++L_-)$ and $L_y=\frac1{2i}(L_+-L_-)$, one can build the matrices corresponding to $$\hat O_1 =\frac35 L_x-\frac45 L_y$$ $$\hat O_2 = 2L_x-6L_y + 3L_z$$ and calculate the eigenvalues by merely cranking the math, either: manually; using some software; using some trick that I'm not able to see now; Francesco Bernardini Short answer: the eigenvalue is: $\qquad l\cdot(l+1)\,\hbar^2 \qquad$ Consequently, you'll get $2\cdot3\,\hbar^2=6\hbar^2$. $J$ is (sometimes) angular momentum in general. If your concrete angular momentum is $L$, then replace $j$ by $l$. But this is only when $J$ is used to denote "angular momentum in general". $J$ is usually "total angular momentum" (sum of all angular momenta), which would not be the same anymore. FGSUZ $\begingroup$ Leave a comment if you're looking for more info. $\endgroup$ – FGSUZ Apr 8 '19 at 18:44
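As a quick numerical cross-check of the two answers above (a minimal sketch in Python/NumPy, working in units where $\hbar = 1$; the variable names are ours, not from the question), one can build the 5x5 matrices explicitly and diagonalize the three operators:

```python
import numpy as np

l = 2
m = np.arange(-l, l + 1)                        # m = -2, -1, 0, 1, 2, ordering the basis |2, m>

# Ladder-operator matrix elements: <l, m+1 | L_+ | l, m> = sqrt(l(l+1) - m(m+1)), with hbar = 1
cplus = np.sqrt(l * (l + 1) - m[:-1] * (m[:-1] + 1.0))
Lp = np.diag(cplus, k=-1).astype(complex)       # entries sit one row below the diagonal
Lm = Lp.conj().T                                # L_- is the adjoint of L_+
Lz = np.diag(m).astype(complex)

Lx = (Lp + Lm) / 2
Ly = (Lp - Lm) / 2j

ops = {
    "(a) L_z": Lz,
    "(b) 3/5 L_x - 4/5 L_y": 0.6 * Lx - 0.8 * Ly,
    "(c) 2 L_x - 6 L_y + 3 L_z": 2 * Lx - 6 * Ly + 3 * Lz,
}
for name, O in ops.items():
    # The operators are Hermitian, so eigvalsh returns real eigenvalues in ascending order.
    print(name, np.round(np.linalg.eigvalsh(O), 10))
```

The printout gives $\{-2,-1,0,1,2\}$ for (a) and (b) and $\{-14,-7,0,7,14\}$ for (c), in units of $\hbar$, consistent with the projection argument given above.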
Title: Adversarial Multi-agent Output Containment Graphical Game with Local and Global Objectives for UAVs Authors: Kartal, Y. Koru, A.T. Lewis, F.L. Wan, Y. Dogan, A. Keywords: $H_\infty$ optimal control; bounded $L_{2}$ gain; differential game; linear-quadratic game; Nash equilibrium; Sufficient conditions Publisher: Institute of Electrical and Electronics Engineers Inc. Abstract: Multiple leader & follower graphical games constitute challenging problems for aerospace and robotics applications. One of the challenges is to address the mutual interests among the followers with an optimal control point of view. In particular, the traditional approaches treat the output containment problem by introducing selfish followers where each follower only considers their own utility. In this paper, we propose a differential output containment game over directed graphs where the mutual interests among the followers are addressed with an objective functional that also considers the neighboring agents. The obtained output containment error system results in a formulation where outputs of all followers are proved to converge to the convex hull spanned by the outputs of leaders in a game optimal manner. The output containment problem is solved using the $\mathcal{H}_\infty$ output feedback method where the new necessary and sufficient conditions are presented. Another challenge is to design distributed Nash equilibrium control strategies for such games, which cannot be achieved with the traditional quadratic cost functional formulation. Therefore, a modified cost functional that provides both Nash and distributed control strategies in the sense that each follower uses the state information of its own and neighbors, is presented. Furthermore, an $\mathcal{L}_{2}$ gain bound of the output containment error system that experiences worst-case disturbances with respect to the $\mathcal{H}_\infty$ criterion is investigated. The proposed methods are validated by means of multi-agent quadrotor Unmanned Aerial Vehicles (UAVs) output containment game simulations. IEEE URI: https://doi.org/10.1109/TCNS.2022.3210861 Appears in Collections: Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Number of binary strings with 'at least two consecutives' constraints I'm trying to count the number of binary strings of length $n$ with the properties described below. Say we break the string into substrings (starting from left to right) of consecutive $0$'s or $1$'s. We must have: The first substring can be of any length (from $1$ to $n$). The last (ending at $n$) substring can be of any appropriate length (namely, a length between $1$ and whatever is left, is allowed). All the intermediate substrings must have a length of at least $2$. For example, if $n=10$, the following strings are viable: $$0111001111$$ This, however, is not allowed: $010...$ or $110001101..$ I've tried to furnish a recursive relation but since there are different constraints on the first and last substrings, the recursion seems to go all the way back to the start. I'm currently trying to think of this as a question of Stars and Bars and maybe get a (set of) Diophantine equation(s). No luck yet though. co.combinatorics Rodrigo de Azevedo AD1984 $\begingroup$ They're in bijective correspondence with strings of length $n + 2$ where all runs (including the first and last) have length $\geq 2$. $\endgroup$ – Adam P. Goucher Jul 30 '17 at 11:22 $\begingroup$ One of the descriptions in A000045 (Fibonacci numbers) explicitly says "F(n) = number of compositions of n+1 with no part equal to 1. [Cayley, Grimaldi]". I think it is easy to find a 1-1 correspondence with your strings. $\endgroup$ – მამუკა ჯიბლაძე Jul 30 '17 at 13:46 $\begingroup$ May I ask just what is the value of this investigation? $\endgroup$ – Michael Karas Jul 31 '17 at 4:04 We start by considering words with no consecutive equal characters at all. These words are called Smirnov words or Carlitz words. (See example III.24 Smirnov words from Analytic Combinatorics by Philippe Flajolet and Robert Sedgewick for more information.) A generating function for the number of Smirnov words over a binary alphabet is given by \begin{align*} \left(1-\frac{2z}{1+z}\right)^{-1}\tag{1} \end{align*} To only get words with runs of length at least $2$ we replace in (1) each occurrence of $z$ by \begin{align*} z&\longrightarrow z^2+z^3+\cdots=\frac{z^2}{1-z}\\ \end{align*} Doubling the first and last character of a word implies we can focus on words consisting solely of runs of length $\geq 2$, and the wanted number of words of length $n$ is \begin{align*} [z^{n+2}]&\left(1-\frac{2\frac{z^2}{1-z}}{1+\frac{z^2}{1-z}}\right)^{-1} =[z^{n+2}]\frac{1-z+z^2}{1-z-z^2}\tag{2} \end{align*} with $\frac{1}{1-z-z^2}$ the generating function of the Fibonacci numbers $F_n$. We obtain from (2) the sequence \begin{align*} (F_{n+3}-F_{n+2}+F_{n+1})_{n\geq 1} &=\left(2F_{n+1}\right)_{n\geq 1}\\ &=(2,4,6,10,16,26,42,\ldots) \end{align*} Markus Scheuer $\begingroup$ Thanks a lot for the answer. Something strikes me as a bit weird though: For $n=4$, don't we have: $(0011)$, $(1100)$, $(1001)$, $(0110)$, $(1000)$, $(0111)$, $(0001)$ and $(1110)$, all being valid? This doesn't correspond to the value 6 above. Am I missing something? $\endgroup$ – AD1984 Jul 30 '17 at 14:02 $\begingroup$ @AD1984: Typo corrected. Thanks. There are $10$ admissible words for $n=4$ which are $0000,0001,0011,0110,0111,1000,1001,1100,1110,1111$. $\endgroup$ – Markus Scheuer Jul 30 '17 at 14:18 $\begingroup$ Yes, I got it now. Perfect. $\endgroup$ – AD1984 Jul 30 '17 at 14:19 $\begingroup$ @AD1984: You're welcome!
:-) $\endgroup$ – Markus Scheuer Jul 30 '17 at 14:20 $\begingroup$ @AD1984: It does not account. We respect the special boundary conditions by looking for words of length $n+2$ instead of $n$ as indicated by the coefficient of operator $[z^{n+2}]$. We look at these words of length $n+2$ and skip the first and last character. This way we obtain all admissible words of length $n$. $\endgroup$ – Markus Scheuer Jul 30 '17 at 14:32 An answer has already been given, but let me additionally give you a meta-answer: the constraints you describe define a rational language on the alphabet $\{0,1\}$, namely a set of words (=finite strings) on this alphabet which can be described by a regular expression or finite automaton. In your case, a regular expression is easily described: $$ 00^*(111^*000^*)^*111^*00^* + 00^*(111^*000^*)^*11^* + 11^*(000^*111^*)^*00^* + 11^*(000^*111^*)^*000^*11^* + 0^* + 1^* $$ where "$^*$" means "any number (at least zero) of", and "$+$" (sometimes written "$|$") means "or" (the cases in the above sum are, moreover, disjoint: the first four describe the four cases where your string begins with $0$ and ends with $0$, begins with $0$ and ends with $1$, etc., and the last two make a special case of the sequences with just zeros or just ones — you might wish to add $\varepsilon$ to the expression if you consider the empty string to match your condition). Now there are well known algorithms which will take a regular expression such as above and turn it into a finite automaton recognizing the language, turn this finite automaton into a deterministic one, compute the generating function of the language generated by a deterministic finite automaton. Any book on rational languages or finite automata (e.g., the one by Sakarovitch¹) should discuss at least the first two and probably all three; the third is also discussed, e.g., here. Alternatively, you can go through the "unambiguous regular expression" path, as discussed here: I don't know which is algorithmically more efficient in general, but in the case of your particular question, this works very well, as the above regular expression is unambiguous, and the procedure described here immediately produces the rational function $2\frac{x+x^2}{1-x-x^2}$. My point is, this answers not only your particular question, but all questions about counting (or at least, producing a generating function for) the number of words described by any rational expression. Furthermore, these algorithms are actually implemented in the Vaucanson/Vaucanson2/Vaucanson-R/Wali/VCSN programs¹ (none of which are terribly usable at the present, unfortunately). Full disclosure: Sakarovitch is my office neighbour. Gro-TsenGro-Tsen $\begingroup$ Note that the generating function of the Catalan numbers -- which count words of balanced parentheses such as (()())()(()) -- is not a rational function. The contrapositive of your answer thus implies, correctly, that no regular expression can verify whether a word of parentheses is balanced. $\endgroup$ – Adam P. Goucher Jul 30 '17 at 18:34 $\begingroup$ @ Gro-Tsen: Interesting. This is a kind of `constraint coding' perspective on the problem. Define the constrained language via a finite-state machine and proceed from there. The (asymptotic) capacity of the system, i.e., the $\limsup_{n\to\infty}\frac{1}{n}\log|\mathcal{C}_n|$, where $\mathcal{C}_n$ is the set of viable words of length $n$, is extremely easy to get. 
Namely, it is the $\log$ of the largest eigenvalue of the adjacency matrix (through the Perron–Frobenius theorem). For my case (and in correspondence with the provided answers), we get the logarithm of the Golden Ratio as the answer. $\endgroup$ – AD1984 Jul 31 '17 at 10:26 $\begingroup$ @AD1984 Could you give a reference for this constraint coding method with finite-state machines? I suspect I could use it for some of my own questions here (namely mathoverflow.net/q/201205/41291, mathoverflow.net/q/200762/41291 and mathoverflow.net/q/146802/41291). $\endgroup$ – მამუკა ჯიბლაძე Aug 1 '17 at 9:06 $\begingroup$ @მამუკა ჯიბლაძე: A decent (online version of a) book on the topic of constrained coding would be math.ubc.ca/~marcus/Handbook. In particular, you should take a look at Chapter 4: Finite-State Encoders. The explanations on how to define the appropriate finite-state machine should be there. $\endgroup$ – AD1984 Aug 2 '17 at 11:32 $\begingroup$ I would like to add that Graham, Knuth, Patashnik, Concrete mathematics chapter 7 gives some practical examples for the third step in a popular way, although it mostly concentrates on the fourth step, computing an explicit formula of the terms from the generating function. $\endgroup$ – Zsbán Ambrus Aug 27 '17 at 19:52 Call strings in which all runs are of length at least 2 "duplicative strings". Note that a duplicative string of length $n + 2$ either ends in a run of length exactly 2 or a run of length greater than 2. In the former case, it can be any duplicative string of length $n$, followed by the two characters opposite from this one's last character (assume here $n$ is positive). In the latter case, it can be any duplicative string of length $n + 1$ with its last character duplicated. Thus, if we denote the number of duplicative strings of length $n + 2$ by $F(n + 2)$, we obtain the Fibonacci-type recurrence $F(n + 2) = F(n) + F(n + 1)$ for positive $n$. This along with the obvious initial values $F(1) = 0$, $F(2) = 2$, and the observation made by others that the strings you are interested in are (by duplicating initial and final characters for those of length at least 1) in correspondence with duplicative strings of length $n + 2$, makes quick work of the problem. We have that $F(n)$ is twice the $(n - 1)$-th Fibonacci number (on the indexing whose $0$-th and $1$-st Fibonacci numbers are $0$ and $1$, respectively), and that the quantity you are interested in is $F(n + 2)$ (for positive $n$), which is therefore twice the $(n + 1)$-th Fibonacci number. Sridhar Ramesh $\begingroup$ This is exactly how I ended up counting the strings with runs of length $\geq 2$. A neat and simple combinatoric argument. Thanks! $\endgroup$ – AD1984 Jul 31 '17 at 10:21
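For completeness, here is a small brute-force check (a sketch in Python; the helper names are ours) that the number of admissible strings of length $n$ equals $2F_{n+1}$, as both the generating-function and the recurrence arguments predict:

```python
from itertools import product

def admissible(s):
    # Run-length encode s and require every run except the first and last to have length >= 2.
    runs = []
    count = 1
    for prev, cur in zip(s, s[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return all(r >= 2 for r in runs[1:-1])

def fib(k):
    # Fibonacci numbers with fib(0) = 0, fib(1) = 1.
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

for n in range(1, 15):
    brute = sum(admissible(s) for s in product("01", repeat=n))
    print(n, brute, 2 * fib(n + 1))
```

For $n = 1, \dots, 14$ the two columns agree, reproducing the sequence $2, 4, 6, 10, 16, 26, 42, \ldots$ quoted in the accepted answer.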
Identification of outliers and positive deviants for healthcare improvement: looking for high performers in hypoglycemia safety in patients with diabetes Brigid Wilson1, Chin-Lin Tseng2, Orysya Soroka2, Leonard M. Pogach3 & David C. Aron1,4 The study objectives were to determine: (1) how statistical outliers exhibiting low rates of diabetes overtreatment performed on a reciprocal measure – rates of diabetes undertreatment; and (2) the impact of different criteria on high performing outlier status. The design was serial cross-sectional, using yearly Veterans Health Administration (VHA) administrative data (2009–2013). Our primary outcome measure was facility rate of HbA1c overtreatment of diabetes in patients at risk for hypoglycemia. Outlier status was assessed by using two approaches: calculating a facility outlier value within year, comparator group, and A1c threshold while incorporating at risk population sizes; and examining standardized model residuals across year and A1c threshold. Facilities with outlier values in the lowest decile for all years of data using more than one threshold and comparator or with time-averaged model residuals in the lowest decile for all A1c thresholds were considered high performing outliers. Using outlier values, three of the 27 high performers from 2009 were also identified in 2010–2013 and considered outliers. There was only modest overlap between facilities identified as top performers based on three thresholds: A1c < 6%, A1c < 6.5%, and A1c < 7%. There was little effect of facility complexity or regional Veterans Integrated Service Networks (VISNs) on outlier identification. Consistent high performing facilities for overtreatment had higher rates of undertreatment (A1c > 9%) than VA average in the population of patients at high risk for hypoglycemia. Statistical identification of positive deviants for diabetes overtreatment was dependent upon the specific measures and approaches used. Moreover, because two facilities may arrive at the same results via very different pathways, it is important to consider that a "best" practice may actually reflect a separate "worst" practice. Learning from high performing health care systems constitutes an important strategy for organizational improvement [1,2,3,4]. Among the methods used to identify such systems is the identification of statistical outliers based on specific performance measures [5,6,7]. Identification of outliers has also constituted the first step in the identification of "positive deviants," a strategy that has gained popularity in healthcare improvement [8,9,10,11,12,13,14]. This approach depends upon the choice of a robust measure that accurately represents performance. However, criteria for outlier status remain uncertain [5,6,7]. Moreover, in complex disease management individual measures reflect only one aspect of performance. For example, undertreatment and overtreatment are each associated with adverse outcomes. A focus on undertreatment of a particular condition to reduce one set of adverse outcomes can result in overtreatment of some patients resulting in increase in another set of adverse outcomes [15, 16]. Measurement of both undertreatment and overtreatment would better reflect organizational performance than measurement of only one. This is particularly relevant to diabetes. The National Committee for Quality Assurance (NCQA), National Quality Forum (NQF) and others have developed a variety of performance measures related to diabetes [17]. 
Central to such assessment have been measures of glycemic control, with a focus on rates of under-treatment [18, 19]. A1c targets are typically individualized at different levels within the range of <7% to <9%. However, undertreatment has typically been assessed relative to the high end of that range, i.e., by measures such as the percentage of patients with diabetes with A1c > 9%, and there have been efforts to address this undertreatment for many years. More recently, greater attention has been paid (in terms of performance measurement) to undertreatment at the low end of the range, i.e., the percentage of patients with A1c > 7%. The American Diabetes Association recommended A1c <7% for all patients 19–74 years of age. In May 2006 the NCQA included measures of optimal glycemic control (A1C <7%) for public reporting in 2008 [17]. Consequently, the potential for overtreatment became more evident [20,21,22]. This may occur by setting targets for glucose control that are inappropriately low based on patients' life expectancies and/or comorbid conditions, resulting in risk for serious hypoglycemia [23,24,25,26,27]. In fact, this issue became the focus in 2014–15 of national initiatives including the Choosing Wisely initiative, which recommends "moderate control" of A1c in most older adults, a recommendation from the American Geriatrics Society [28]. In addition, the FDA in collaboration with NIDDK, CDC, and VA included hypoglycemia safety as a major component of its 2014 Action Plan on Adverse Drug Events. Consequently, VA initiated a major effort to reduce overtreatment in 2015, the Choosing Wisely/Hypoglycemia Safety Initiative [29]. Therefore, we focused on overtreatment as an issue of patient safety. Patient safety is an area in which positive deviance has been applied and where the identification of high performing outliers is a critical first step [30]. The primary objective of our study was to determine how statistical outliers exhibiting low rates of diabetes overtreatment performed on a reciprocal measure – rates of diabetes undertreatment. Also, since different measure thresholds, different comparators, and consistency of performance over time may affect which facilities are identified as outliers, a secondary objective was to determine the extent to which high performing outlier status for diabetes overtreatment is impacted by different criteria. This was a serial cross-sectional study design, using yearly Veterans Health Administration (VHA) administrative data from fiscal year (FY) 2009–2013. This study was approved by the Department of Veterans Affairs (VA)–Louis Stokes Cleveland VA Medical Center and New Jersey Health Care System Institutional Review Boards. There was a waiver of informed consent. Study population – healthcare system This study was carried out using data from a very large healthcare system - the Veterans Health Administration (VHA). VHA provides comprehensive healthcare to eligible veterans of the Armed Services in >100 hospitals and their related clinics. In the years of the study, it was organized into 21 regional networks (Veterans Integrated Service Networks or VISNs), each consisting of 3–10 facilities. Facilities vary by the level of complexity depending upon size, scope of clinical activities, and other site characteristics. Study population – patients (at risk group) Patients who met our previously proposed criteria for a population with risk factors for hypoglycemia (hence overtreatment) were included in the study [22].
Specifically, this population included diabetic patients taking a diabetes drug known to have a relatively high frequency of hypoglycemia (insulin and/or sulfonylurea agents) plus having at least one of the following additional criteria: age 75 years or older, chronic kidney disease (defined as a last serum creatinine measurement in a year greater than 2.0 mg/dL (to convert to micromoles per liter, multiply by 88.4)), or an ICD-9-CM diagnosis of cognitive impairment or dementia in ambulatory care. Diabetes mellitus status for a given year was defined based on 2 or more occurrences of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes for diabetes mellitus (250.xx) associated with clinical face-to-face outpatient care on separate calendar days in the prior 2 years, or a diabetes mellitus–specific medication prescription (insulin or an oral agent: sulfonylurea, biguanide, α-glucosidase inhibitor, meglitinide, or thiazolidinedione) in the prior year. Among these patients, we retained only those that had at least one hemoglobin A1c (HbA1c) value documented in any fiscal year from FY2009-FY2013; these patients formed the final study population (at risk group) and the denominator for calculation of rates. In addition, we separately analyzed all other patients with diabetes (the group not at high risk). Data sources included the VHA National Patient Clinical Data Set (Austin, Texas; to obtain ICD-9-CM diagnostic codes) and the Decision Support System (to obtain laboratory data and medication information). Because veterans may obtain care from more than one facility, we determined a patient's parent facility based on where they received most of their ambulatory care. Outcome measure – over- and under-treatment rates Our primary outcome measure was the facility-level rate of overtreatment of diabetes in patients at high risk for hypoglycemia based on the criteria above, i.e., the proportion of such patients with A1c < 6.5% [22]. We defined overtreatment in these patients based on their last HbA1c value in a year, consistent with industry standards. However, because dichotomous measures of A1c can be sensitive to small changes in average A1c, we also examined thresholds of A1c < 6% and A1c < 7% [31]. The former represents extremely intensive glycemic control while the latter is still a commonly used quality measure applicable to patients < 65 years of age. A1c < 7% is also closer to the Choosing Wisely recommendation of the American Geriatrics Society [28]. Since overtreatment may be an unintended consequence of a focus on undertreatment, we also examined rates of undertreatment as our secondary outcome measure, i.e., the proportion of patients with an A1C > 9% [32]. Undertreatment rates were calculated in two populations: (i) the population at risk for hypoglycemia; and (ii) all patients with diabetes not in the at risk group. Outcome measure – outlier status Outlier status was assessed using two approaches: a facility outlier value measure weighted by at-risk population and standardized within year and comparator group; and model residuals. (For comparison, we also assessed a facility outlier measure unweighted by the at-risk population.)
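As a rough illustration of how the cohort and overtreatment definitions above translate into data processing (a minimal sketch in Python/pandas; every column name below is a hypothetical stand-in, not a field of the VHA data sets):

```python
import pandas as pd

# Hypothetical patient-year table; column names and values are illustrative only.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "on_insulin": [True, False, True, False],
    "on_sulfonylurea": [False, True, False, False],
    "age": [78, 62, 59, 80],
    "last_creatinine_mg_dl": [1.1, 2.4, 1.0, 1.2],
    "dementia_dx": [False, False, True, False],
    "last_a1c": [6.1, 6.8, 5.9, 7.4],
})

# Drug with a relatively high frequency of hypoglycemia: insulin and/or sulfonylurea.
hypoglycemia_prone_drug = patients["on_insulin"] | patients["on_sulfonylurea"]

# At least one additional criterion: age >= 75, creatinine > 2.0 mg/dL, or dementia diagnosis.
additional_risk = (
    (patients["age"] >= 75)
    | (patients["last_creatinine_mg_dl"] > 2.0)
    | patients["dementia_dx"]
)
patients["at_risk"] = hypoglycemia_prone_drug & additional_risk

# Overtreatment flag at the primary threshold: last A1c of the year < 6.5% among at-risk patients.
patients["overtreated"] = patients["at_risk"] & (patients["last_a1c"] < 6.5)
print(patients[["patient_id", "at_risk", "overtreated"]])
```

Note that the at-risk flag requires both a hypoglycemia-prone drug and at least one additional criterion, so, for example, the 80-year-old patient who is on neither insulin nor a sulfonylurea is not counted.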
For each year, A1C threshold, and comparator, a population overtreatment rate p was calculated, and the facility outlier value was calculated based on the difference between the observed number of overtreated patients, x, and the expected number under the normal approximation given the number of patients at risk, n, and the population overtreatment rate p: $(x - np)/\sqrt{np(1-p)}$. One facility was excluded from this analysis based on small sample size (an at-risk population for which either np or np(1-p) was <5). We utilized three comparators: (1) all VA hospitals; (2) hospitals within the same VISN; and (3) hospitals within the same complexity level. Facilities with outlier values in the lowest decile for all years of data based on more than one comparator–A1c threshold combination were considered high performing outliers and analyzed further. We also utilized standardized residuals derived from linear mixed effects models controlling for complexity and VISN to determine outliers. For each year and A1c threshold, we considered a facility j in VISN i in the following model, in which random intercepts were assumed for each VISN: $$ \text{Overtreatment rate}_{ij}=\beta_{0}+\alpha_{0i}+\beta_{1}\,\text{Complexity}_{ij}+\varepsilon_{ij} $$ By considering the model residuals ($\varepsilon_{ij}$) and standardizing them within year and threshold, we assessed the relative performance of facilities across years while controlling for complexity, VISN, and both the mean and variability of the VA-wide overtreatment levels in a given year. A negative residual for a facility indicates a lower than predicted rate, and a positive residual indicates a greater than predicted rate. To determine if a facility is a high performing outlier, we averaged the model residuals across time for the overtreatment rates at each of the three thresholds. Facilities in the lowest decile of residuals for all three thresholds were considered to be high performing outliers and were analyzed further. For each facility, we obtained its designated VHA regional service area, known as a Veterans Integrated Service Network (VISN). We also employed a measure of VHA facility complexity level from 2011. VHA classifies facilities into five complexity levels (1a, 1b, 1c, 2, and 3) based on their size, scope of clinical activities (e.g., range of specialties providing care), and other site characteristics; level 1a is most complex while level 3 is least complex. There are 18–32 facilities per complexity level. Overtreatment rates and at-risk populations for each of the 135 facilities were calculated for each year, FY 2009 through 2013. We utilized three A1c thresholds (6, 6.5, and 7%). Similarly, we calculated undertreatment rates in the same population using A1c > 9% as the threshold. We then assessed the VA-wide trends in these rates, calculating Spearman rank correlations of overtreatment and undertreatment rates with time. To determine performance outliers among facilities, we first used a non-modeling approach comparing each facility rate to one (of three) comparators to obtain a facility outlier value for each year. To assess consistency of performance over time, we calculated correlation coefficients (Pearson product-moment).
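To make the two outlier metrics concrete, the following is a minimal sketch in Python (pandas, NumPy, and statsmodels), not the SAS/R code used in the study; the synthetic data frame and its column names are illustrative assumptions, and only the all-VA comparator within year is shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic facility-year data; all names and values here are illustrative, not VHA data.
rng = np.random.default_rng(0)
n_fac = 120
facilities = pd.DataFrame({
    "facility": np.arange(n_fac),
    "visn": rng.integers(1, 22, n_fac),
    "complexity": rng.choice(["1a", "1b", "1c", "2", "3"], n_fac),
})
df = facilities.merge(pd.DataFrame({"year": [2009, 2010, 2011]}), how="cross")
df["n_at_risk"] = rng.integers(200, 2000, len(df))
df["n_overtreated"] = rng.binomial(df["n_at_risk"], 0.4)
df["rate"] = df["n_overtreated"] / df["n_at_risk"]

# Approach 1: facility outlier value (x - n p) / sqrt(n p (1 - p)), with p the pooled
# overtreatment rate of the comparator group (here, all facilities within a year).
def outlier_values(g):
    p = g["n_overtreated"].sum() / g["n_at_risk"].sum()
    n = g["n_at_risk"]
    return (g["n_overtreated"] - n * p) / np.sqrt(n * p * (1 - p))

df["outlier_value"] = df.groupby("year", group_keys=False).apply(outlier_values)
df["lowest_decile"] = df.groupby("year")["outlier_value"].transform(
    lambda s: s <= s.quantile(0.10)
)

# Approach 2: per-year linear mixed-effects model with a random intercept for VISN and
# facility complexity as a fixed effect; standardized residuals serve as the outlier metric.
pieces = []
for year, g in df.groupby("year"):
    fit = smf.mixedlm("rate ~ C(complexity)", data=g, groups=g["visn"]).fit()
    resid = np.asarray(fit.resid)
    pieces.append(g.assign(std_resid=(resid - resid.mean()) / resid.std()))
df = pd.concat(pieces)
```

Facilities flagged in the lowest decile on both metrics, consistently across years and thresholds, would correspond to the candidate high performers described above.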
Since facility-level rates may be different due to factors not related to facility efforts (hence, performance) in glycemic control, such factors may need to be considered in comparing facility performance; health care facilities differ in their levels of services provided (complexity level). We used facility complexity (see below) as a proxy for specialty care resources. In addition, VHA is organized into regional networks (VISN) where network level initiatives might result in geographic differences in performance. Thus, our second approach used linear mixed-effects models to adjust for the effects that were previously considered as comparators (facility complexity, VISN-specific trends, and VA-wide variability) simultaneously and examine the standardized residuals from such models as outlier metrics [33, 34]. For each year and A1c threshold, a linear mixed-effects model containing facility complexity as a fixed effect and a random intercept for each VISN was fit to the data of facility overtreatment rates. The distributions of residuals and their independence from predicted values were examined for all estimated models to check model assumptions. The effect of complexity was assessed in each estimated model. We chose to estimate separate models for each year so we would not need to parameterize the correlation structure for the repeated measures within facility or changes in variability across the VA over time. The use of standardized residuals allowed us to compare facilities across years and thresholds to identify those that consistently perform better or worse than expected/predicted. All analyses were performed using SAS 9.2 and R 3.1.1 statistical software. Identification of outliers with low rates of overtreatment and changes over time Overtreatment rates in general fell over time (Fig. 1). Facilities in the top performing decile (based on their 2009 performance using 6.5% threshold and all VA facilities as comparator) and their performance over time are shown in Fig. 2. Facility performance varied over time. Although half (7/14) of the high performers were in the top performing decile in at least 2 of the subsequent 4 years, the range of the performance increased over time; the top decile remained largely separate in 2010 and less so with each subsequent year. However, only one facility was in the top decile for all 5 years though three others were in the top two deciles. Outlier values for the same threshold in adjacent years were highly correlated and the year-to-year correlation decreased with increased separation in time (Table 1). The lowest correlations within comparator and threshold were generally seen between the 2009 and 2013 outlier values. Overtreatment and undertreatment rates measured at facilities by year and threshold. For all three overtreatment thresholds, a decrease over time was observed. The rates of overtreatment increase with the overtreatment threshold as more patients are captured. The undertreatment rates are of similar magnitude to the overtreatment rates using 6% as the threshold and are increasing over time. Outliers are more present at high rates than at low rates Overtreatment outlier values over time by 2009 deciles using 6.5% threshold and all VA facilities as comparator. Facilities were classified into deciles by their 2009 performance using outlier values into and then tracked using the same threshold and comparator across time to illustrate the correlations and consistency of performance over time. 
The solid line indicates the limit of the top decile year by year. The dotted line indicates the limit of the top two deciles. Facilities in the top decile in 2009 are labeled with letters. Over time, the performance of these facilities spread across the entire distribution Table 1 Correlation coefficients with p-values for overtreatment and undertreatment rates for each year and overtreatment threshold. For overtreatment thresholds 6.5% and 7%, significant negative correlation between overtreatment rates and undertreatment rates was observed across all years; using 6% as the overtreatment threshold, correlations were negative but of lower magnitude and significance Sensitivity of the overtreatment measure to changes in A1c threshold, facility complexity, and organizational region There was only modest overlap between facilities identified as top performers based on three thresholds. Using 2009 outlier values based on all VA facilities and the three thresholds (A1c < 6%, A1c < 6.5%, and A1c < 7%), we looked at overlap in facilities in the highest performing decile (Fig. 3). Only 4/14 (29%) were in the top decile for all three thresholds. Similar results were observed with other years and comparators (2010–2013, VISN and complexity, data not shown). In examining the estimated effects from the overtreatment linear mixed models, facility complexity was not a significant predictor of overtreatment and inter-VISN variability consistently exceeded intra-VISN variability (Table 2). Overlap in high performing facilities across overtreatment threshold. For each threshold, the highest performing decile of facilities was identified (n = 14 for each threshold). Seven facilities were identified by all three overtreatment thresholds. There were no facilities identified by both 6 and 7% that were not also identified by 6.5% Table 2 Correlations of outlier values between years for each comparator and threshold. To describe the year-to-year relationships in overtreatment performance (as measured by outlier values), correlation matrices were generated. The diagonal values within each 4-by-4 threshold/comparator combination show that outlier values in adjacent years are highly correlated and that the year-to-year correlation decreases with increased separation. The lowest correlations within each threshold and comparator are generally seen between 2009 and 2013. The correlations calculated using all VA facilities and facilities of the same complexity as the comparator are very consistent; the VISN comparator correlations are generally lower Undertreatment rates in outlier facilities with low rates of overtreatment Undertreatment rates as defined by A1c > 9% rose modestly over time (Fig. 4). This applied to both the population at high risk for hypoglycemia and the diabetic population not at high risk. We identified 8 consistent high performing outliers for overtreatment (n = 8): 7 identified using model residuals, 3 identified using outlier values, with 2 facilities identified through both approaches. These differences in outlier sets highlight the differences in the two approaches: one incorporated the size of the at-risk population and decreased variability in rates measured with larger samples while the other considered all facility rates of equal weight and validity; one required high performance in all 5 years while the other averaged performance across time. 
Undertreatment rates over time among facilities with lowest rates of overtreatment (dashed lines) and the average rate among all VA facilities (solid line); the shaded area represents the 95th percentile confidence interval.The upper panel shows results from the at risk population and the lower panel shows the response in the population not at high risk. Eight facilities (labeled with letters) were identified as high performers in 2009 using overtreatment metrics; but low overtreatment observed in a facility with high rates of undertreatment is of concern. Looking at the undertreatment rates over time of the flagged overtreatment facilities and comparing them to the undertreatment rates of all VA facilities, several high performers with regard to overtreatment have consistently high rates of undertreatment The outliers generally had higher rates of undertreatment than the VA average in the population of patients at high risk for hypoglycemia. However, one facility with low overtreatment rates in the at risk patients was a consistent high performer in terms of undertreatment rates both in the at risk population (average rate) and not at risk populations (below average rate). The differences were less dramatic in those patients not at high risk for hypoglycemia. Correlations between overtreatment rates and undertreatment rates were calculated for each year and threshold within the at risk population. For overtreatment thresholds 6.5% and 7%, significant negative correlations between overtreatment rates and undertreatment rates were observed across all years (ρ ranging from −0.5 to −0.3); using 6% as the overtreatment threshold, correlations were negative but of lower magnitude and not significant (ρ ranging from −0.2 to −0.1). Our results demonstrate two important findings and add to literature on the impact of using different measures or different statistical approaches on hospital rankings and outlier status [10, 22, 35,36,37]. First, our results indicate the importance of having a balancing measure. Without such a measure, one can be misled into thinking that a high-performing hospital, e.g., one with very good results with respect to overtreatment might be a good performer with respect to undertreatment. While overtreatment in general fell over time and undertreatment rose, we found that those high performing facilities in overtreatment tended to exhibit a trend of increasing undertreatment over time that often exceeded the VA-wide average. Such facilities should not be considered positive deviants. A low rate of overtreatment may be observed under both desirable and undesirable conditions. Ideally, low rates of overtreatment occur because of specific attention to patients at risk for hypoglycemia. Less ideally, a low rate of overtreatment may occur as an artifact of widespread under-treatment and a facility-wide tendency towards higher A1c levels. In fact, the reverse has been observed where overtreatment may be an unintended consequence of focus on undertreatment [15, 16]. Second, our findings indicate that the statistical identification of high performing outliers is very sensitive to the specific criteria chosen, i.e., the statistical approaches to identifying positive deviants may not be very robust. For example, the choice of three different A1c levels to define overtreatment resulted in three series of high performing outliers with only modest overlap. 
This variation has implications for the validity of conclusions drawn from league tables, particularly those based on a single measure, and highlights the need to understand the clinical differences of thresholds when interpreting differences in quality results seen across thresholds [7, 38]. Moreover, our findings indicate the importance of identifying consistent high performance. A positive deviant in 1 year may not merit that designation in other years. This issue is particularly true for dichotomous (threshold) measures applied to continuous outcomes where small changes in statistical distribution may have a large effect on the measure. Interestingly, there was little impact of facility complexity or VISN on the findings; limiting comparators to like facilities was not necessary in this particular circumstance. It may be that the lack of effects of facility characteristics reflects the fact that most patients with diabetes are managed by primary care providers even when a facility has a diabetes specialist. Primary care services are available at all facilities regardless of the overall scope of services provided by the facility. Although we used dichotomous thresholds in our analyses, it is important to recognize that quality measures vary depending upon the specific issue. For example, wrong site surgery is a "never event" and a lower rate is always better. We chose our at-risk population to make A1c < 6% essentially a never event (with A1c, this is more controversial). In these circumstances, it is important prove a balancing measure. But there are patients in this group for whom an A1c < 7% might be appropriate. The U/J-shaped curves for mortality and body mass index and blood pressure also suggest that lower is not always better [39, 40]. Finally, our results have implications for the increasingly popular "positive deviance" approach to improvement [41, 42]. The "positive deviance approach" to social/behavior change began in the area of childhood nutrition when it was noted that within communities with high levels of childhood malnutrition some families had well-nourished children [43, 44]. Families referred to as positive deviants evidenced uncommon but successful behaviors or strategies that enabled them to find better solutions to a problem than their peers, despite facing similar challenges and having no apparent extra resources or knowledge. In the original work on positive deviance, performance was assessed by simple observation rather than statistical analysis; the difference between malnourished and well-nourished children was obvious [43]. Reliance on simple observation may not be a feasible strategy for identifying organizational positive deviance. Moreover, when applied in healthcare, there has been variation in the criteria for deviance (both magnitude and comparator), and the level (individual, team, unit, or organization) at which it is applied [8, 10, 22, 43]. In the original work on positive deviance in public health, the comparators were required to be those with similar access to resources. In contrast, Krumholz et al. selected hospitals that were diverse in areas such as the volume of patients with acute myocardial infarction, teaching status, and socioeconomic status of patients [10]. Later studies examined what the positive deviants were doing that was different from other intensive care units and those actions were shared with others [9, 45]. Many hospitals have since adopted those practices with resulting improvement in outcomes. 
However, despite their good results, several questions are raised about extrapolating that approach to other issues [46, 47]. Although there are numerous methods for outlier detection and differences in both criteria and comparator, our study suggests that considerable thought needs to be given to this issue at the outset, before attempts are made to identify performance outliers. This study has several limitations. First, this study involved assessment of the management of a single issue, which is multidimensional and involves patients, providers, and organizations and all of their interactions. Nevertheless, the condition chosen is a common one and is currently the subject of a national initiative because of its importance. Second, there are many ways to identify outliers statistically. However, by necessity, we limited our statistical analyses so that we cannot infer that other types of analyses would exhibit the same findings. Nevertheless, we used methods that are commonly employed and thus familiar to those for whom assessing performance is important. The study was limited to a single healthcare system, albeit a very large one. It may be that factors unique to this have an impact on the findings. Finally, we focused on the first part of the "best practices" approaches – the identification of deviants or high performing sites and not on the best practices themselves, i.e., what the practices actually were. Similarly, we did not address the issue of implementation of the practices elsewhere, which is a critical part of the positive deviance approach. Notwithstanding these limitations, we believe we have illustrated some of the issues involved in using the positive deviance or best practices approaches. In summary, we have found that in the case of overtreatment of diabetes in the Veterans Healthcare System, statistical identification of high performing facilities was dependent upon the specific measures used. This variation has implications for the validity of conclusions drawn from league tables based upon single measures. Moreover, these results combined with the literature extant suggest that the choice of comparator is dependent upon the nature of the practice. Finally, because two facilities may arrive at the same results via very different pathways, it is important to consider that a "best" practice may actually reflect a separate "worst" practice. FY: HbA1c: ICD-9-CM: International Classification Of Diseases, Ninth Revision, Clinical Modification NQF: VISN: Veterans Integrated Service Network Bretschneider S, Marc-Aurele F Jr, Wu J. "best practices" research: a methodological guide for the perplexed. J Public Adm Res Theory. 2005;15(2):307–23. Guzman G, Fitzgerald JA, Fulop L, Hayes K, Poropat A, Avery M, et al. How best practices are copied, transferred, or translated between health care facilities: a conceptual framework. Health Care Manage Rev. 2015;40(3):193–202. Maggs-Rapport F. 'Best research practice': in pursuit of methodological rigour. J Adv Nurs. 2001;35(3):373–83. Mold J, Gregory M. Best practices research. Fam Med. 2003;35(3):131–4. Gaspar J, Catumbela E, Marques B, Freitas A. A systematic review of outliers detection techniques in medical data: preliminary study. Rome: HEALTHINF; 2011. p. 2011. Hodge V, Austin J. A survey of outlier detection methodologies. Artif Intell Rev. 2004;22(2):85–126. Shahian DM, Normand SL. What is a performance outlier? BMJ Qual Saf. 2015;24(2):95–9. Baxter R, Kellar I, Taylor N, Lawton R. 
The authors acknowledge and express gratitude to the reviewers who made a substantial contribution to the improvement of the manuscript.
Disclaimer/Declaration: The work was supported by grants from the Veterans Health Administration (VHA) Health Services Research & Development Service and its Quality Enhancement Research Initiative (QUERI) to Dr. Aron (SCE 12–181), to Dr. Pogach (RRP-12-492) and to Dr. Tseng (IIR 11–077). The opinions expressed are solely those of the authors and do not represent the views of the Department of Veterans Affairs. The authors state that there are no conflicts of interest. The datasets were generated from the VA's Corporate Data Warehouse. The analytic datasets include potentially identifiable data and will not be shared for that reason. The views expressed do not represent those of the Dept. of Veterans Affairs or any other agency.
Affiliations: IIRECC – EUL 5M677, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, 10701 East Blvd., Cleveland, OH, 44106, USA (Brigid Wilson & David C. Aron); Department of Veterans Affairs-New Jersey Healthcare System, East Orange, NJ, USA (Chin-Lin Tseng & Orysya Soroka); Department of Veterans Affairs - Office of Specialty Care Services, Washington DC, USA (Leonard M. Pogach); Case Western Reserve University School of Medicine, Cleveland, OH, USA (Brigid Wilson).
Each author fulfills each of the following requirements: (1) substantial contribution to conception and design, or acquisition of data, or analysis and interpretation of data; (2) drafting the article or revising it critically for important intellectual content; and (3) final approval of the version to be published. DA and LP formed the initial research questions. BW and CT planned and conducted the statistical analyses. DA wrote the first draft of the manuscript. OS created the dataset and made contributions to the statistical analyses. All authors read and approved the final manuscript. Correspondence to David C. Aron. This project was approved by the Cleveland VA Medical Center Institutional Review Board.
Wilson, B., Tseng, CL., Soroka, O. et al. Identification of outliers and positive deviants for healthcare improvement: looking for high performers in hypoglycemia safety in patients with diabetes. BMC Health Serv Res 17, 738 (2017). https://doi.org/10.1186/s12913-017-2692-3
Keywords: Overtreatment, Positive deviance
CommonCrawl
Modeling of Volatile Organic Compounds Condensation in a Vertical Tube
Kaoutar Zine-Dine* | Youness El Hammami | Rachid Mir | Sara Armou | Touria Mediouni
Laboratory of Mechanics, Processes, Energy and Environment, Ibn Zohr University, ENSA, B.P 1136, Agadir, Morocco
[email protected]
The aim of this numerical study is to investigate the heat and mass transfer during the condensation of volatile organic compounds (VOCs), particularly alcohols (n-butanol-propanol, ethanol-propanol and n-butanol-ethanol), in the presence of air along a vertical tube. The parabolic governing equations, coupled in both the liquid and gas phases with the appropriate boundary and interfacial conditions, are solved numerically: the systems of equations obtained with an implicit finite-difference method are solved by the Thomas algorithm. The numerical results indicate that the heat and mass transfer is more intense at the tube inlet for the three mixtures, thus favoring thermal and mass exchanges, and that the Nusselt number is higher for the ethanol-propanol mixture than for the other mixtures.
Keywords: condensation process, heat and mass transfer, numerical simulation, phase change, ternary mixture
VOCs are atmospheric pollutants that have a negative impact on health and ecosystems and are linked to the greenhouse effect and global warming. Hence the need to reduce their concentration in the atmosphere, and in particular in industrial discharges. There are several reduction techniques for these compounds, the most common of which are absorption, adsorption, condensation and membrane treatment. The technique of VOC capture by condensation is based on the simple principle of liquid-vapor equilibrium of a ternary mixture (VOC-air). For the condensation of multicomponent mixtures, studies have shown that the diffusive flux of a species does not depend only on its own concentration gradient, but on the concentration gradients of all species in the solution, Toor [1-2]. Mirkovich and Missen [3] studied mixtures formed from n-pentane, methanol and dichloromethylene as well as n-pentane and n-hexane. Their test geometry was a vertical tube 150 mm in diameter, which can be considered as a flat plate. Their first studies visualized the profile of the condensate film. With the two binary mixtures dichloromethylene-methanol and n-pentane-n-hexane, they observed a smooth laminar film for all concentrations tested. On the other hand, the n-pentane-methanol and n-pentane-dichloromethylene mixtures produce different condensation modes depending on the composition and the exchanged flux. For concentrations close to pure fluids and azeotropic mixtures, or for high fluxes, they still visualized a smooth laminar film. But at higher concentrations, as the exchanged flux was reduced, they saw waves appear which progressively covered the laminar film. Bandrowski and Bryczkowski [4] presented an experimental study on the condensation of binary and ternary mixtures based on methanol, n-propanol and water on a smooth tube and established appropriate correlations. A numerical study of the condensation of a multicomponent mixture (methanol-water-air and acetone-methanol-water) on a vertical plate with a constant wall condition was developed by Taitel and Tamir [5]. This theory is based only on conservation equations. The results are obtained by two different methods: the exact resolution of the conservation equations and the approximate integral method.
They have shown that the temperature at the liquid-vapor interface becomes lower than the temperature of the mixture. This is caused by the accumulation at the interface of the non-condensable gas and of the more volatile components, which reduces the condensation rate. Braun and Renz [6] presented a numerical study of the heat and mass transfer during ternary mixture condensation in laminar and turbulent flow inside a vertical tube. The results show that the effects of the multicomponent diffusion interactions on the mass fraction profiles are demonstrated at the osmotic diffusion point of a component during the condensation of two components. An experimental study developed by Fujii et al. [7] on mixtures obtained from water, ethanol and methanol brought interesting results. The condensation of the ethanol-methanol binary mixture forms a film, whereas the other two binary mixtures develop different modes of condensation. The methanol-water mixture condenses in the form of drops while the ethanol-water mixture takes successively the aspects of film, waves and drops. These observations and measurements show that the exchange coefficient of dropwise condensation can be up to 6 times higher than the theoretical coefficient of film condensation. However, this increase is less than that which can be expected for the dropwise condensation of a pure fluid. The resistance due to diffusion in the vapor phase must certainly attenuate this gain. El Hammami et al. [8] developed a numerical study of heat and mass transfer during the condensation of a water vapor and ethanol (and methanol) mixture in the presence of air. They showed that the transfers during the condensation of the ethanol and methanol vapor mixture are more influenced by the non-condensable gas than those of the water vapor. A numerical study of the effect of the non-condensable gas type in a water vapor-gas mixture (water vapor-neon, water vapor-air, water vapor-argon, and water vapor-krypton) during condensation along a vertical tube with a wall subjected to a non-uniform heat flux is investigated by Zine-Dine et al. [9]. They have shown that increasing the molar mass of the non-condensable gas influences the thermal and mass transfer. The results of the calculation model were validated against those in the literature to verify the accuracy of the numerical procedure developed [9]. The comparison is made with the numerical study of Hassaninejadfarahani et al. [10]. The numerical results are also compared with the experimental study of Lebedev et al. [11] during humid air vapor condensation. The purpose of this work is to reduce, by condensation, the concentration of volatile organic compounds (n-butanol-propanol-air, ethanol-propanol-air and n-butanol-ethanol-air), given their widespread use in the pharmaceutical and food industries, oil refining and transport. These VOCs are acutely toxic to humans and animals (they cause skin irritation, serious eye damage and respiratory tract irritation), and they also have impacts on the environment (they intervene in the formation of tropospheric ozone and contribute to the phenomenon of acid rain attacking plants and buildings).
2. Physical and Mathematical Modelling
Figure 1. Physical model
The physical model of the problem studied is a vertical tube of length L and radius R, with a liquid film of thickness δz that is very small compared to R (Figure 1). The tube wall is subjected to a non-uniform heat flux (i.e., it is cooled by an air flow at temperature Te).
The flow of the vapor-gas mixture is laminar and symmetrical about the centre line of the tube. At the tube inlet, a flow of the vapor mixture and a non-condensable gas arrives at a uniform temperature Tin, uniform pressure Pin, uniform velocity uin and vapor mass fraction win. The following assumptions were made for the mathematical formulation: The flow of the gas mixture is stationary, laminar, incompressible and two-dimensional. Boundary layer approximations are used for both phases. Radiative heat transfer and viscous dissipation are not taken into account. The vapor-liquid interface is mobile, without waves, and in thermodynamic equilibrium. The effect of the liquid surface tension is not taken into account. Considering the above assumptions, the set of governing equations corresponding to the continuity, momentum, energy and concentration, together with the boundary conditions, for both the gas and the liquid film is written in the following form:
2.1 Liquid film
Continuity equation: $\frac{\partial}{\partial z}\left(\rho_{l} u_{l}\right)+\frac{1}{r} \frac{\partial}{\partial r}\left(r \rho_{l} v_{l}\right)=0$ (1)
Momentum equation: $\frac{\partial}{\partial z}\left(\rho_{l} u_{l}^{2}\right)+\frac{1}{r} \frac{\partial}{\partial r}\left(r \rho_{l} v_{l} u_{l}\right)=-\frac{d p}{d z}+\frac{1}{r} \frac{\partial}{\partial r}\left(r \mu_{l} \frac{\partial u_{l}}{\partial r}\right)+\rho_{l} g$ (2)
Energy equation: $\frac{\partial}{\partial z}\left(\rho_{l} C_{p}^{l} u_{l} T_{l}\right)+\frac{1}{r} \frac{\partial}{\partial r}\left(\rho_{l} C_{p}^{l} r v_{l} T_{l}\right)=\frac{1}{r} \frac{\partial}{\partial r}\left(r \lambda_{l} \frac{\partial T_{l}}{\partial r}\right)$ (3)
2.2 Gas mixture
Continuity equation: $\frac{\partial}{\partial z}\left(\rho_{m} u_{m}\right)+\frac{1}{r} \frac{\partial}{\partial r}\left(r \rho_{m} v_{m}\right)=0$ (4)
Momentum equation: $\frac{\partial}{\partial z}\left(\rho_{m} u_{m}^{2}\right)+\frac{1}{r} \frac{\partial}{\partial r}\left(r \rho_{m} v_{m} u_{m}\right)=-\frac{d p}{d z}+\frac{1}{r} \frac{\partial}{\partial r}\left(r \mu_{m} \frac{\partial u_{m}}{\partial r}\right)+\rho_{m} g$ (5)
Energy equation: $\frac{\partial}{\partial z}\left(\rho_{m} C_{p}^{m} u_{m} T_{m}\right)+\frac{1}{r} \frac{\partial}{\partial r}\left(\rho_{m} C_{p}^{m} r v_{m} T_{m}\right)=\frac{1}{r} \frac{\partial}{\partial r}\left(r \lambda_{m} \frac{\partial T_{m}}{\partial r}\right)+\frac{1}{r} \frac{\partial}{\partial r}\left[r \rho_{m} D\left(C_{p v}-C_{p g}\right)\right] \frac{\partial W^{k}}{\partial r} T_{m}$ (6)
Diffusion equation: $\frac{\partial}{\partial z}\left(\rho_{m} u_{m} W^{k}\right)+\frac{1}{r} \frac{\partial}{\partial r}\left(\rho_{m} r v_{m} W^{k}\right)=\frac{1}{r} \frac{\partial}{\partial r}\left(r \rho_{m} D \frac{\partial W^{k}}{\partial r}\right)$, k=1, 2 (7)
It is necessary to add the mass conservation equation in both the liquid and gas phases to the previous system in order to complete the mathematical modeling of the problem. The global mass conservation can be expressed as follows: $\frac{\dot{m}_{in}}{2\pi}=\int_{0}^{R-\delta_{z}} r \rho_{m} u_{m}\,dr+\int_{R-\delta_{z}}^{R} r \rho_{l} u_{l}\,dr$ (8)
The pure component data are approximated by polynomials in terms of temperature and mass fraction. For further details, the thermophysical properties are available in Fuller [12], Perry Don [13], Bromley and Wilke [14].
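For readers who want to verify this constraint numerically, the short sketch below evaluates both sides of Eq. (8) with the trapezoidal rule on a discretized radial grid. The function and argument names are ours for illustration only (they are not taken from the authors' code), and the grid is assumed to place a node at the interface r = R - δz.

```python
import numpy as np

def mass_balance_residual(r_gas, rho_u_gas, r_liq, rho_u_liq, m_dot_in):
    """Relative residual of the global mass balance, Eq. (8):
    m_in / (2*pi) = int_0^{R-delta_z} r*rho_m*u_m dr + int_{R-delta_z}^R r*rho_l*u_l dr.

    r_gas, r_liq         : radial nodes in the gas core and in the liquid film
    rho_u_gas, rho_u_liq : the product rho*u sampled at those nodes
    m_dot_in             : inlet mass flow rate
    """
    gas_part = np.trapz(r_gas * rho_u_gas, r_gas)   # gas-core integral
    liq_part = np.trapz(r_liq * rho_u_liq, r_liq)   # liquid-film integral
    lhs = m_dot_in / (2.0 * np.pi)
    return (gas_part + liq_part - lhs) / lhs
```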
2.3 Boundary and interface conditions
The imposed boundary and interface conditions are the following:
Condition at the inlet of the tube, z=0: $u_{m}=u_{in} \qquad T_{m}=T_{in} \qquad W=W_{in} \qquad p_{m}=p_{in}$ (9)
Condition on the central axis of the tube, r=0: $\frac{\partial u_{m}}{\partial r}=\frac{\partial T_{m}}{\partial r}=\frac{\partial W}{\partial r}=0 \qquad v_{m}=0$ (10)
Condition on the tube wall, r=R: $q_{m}=-\lambda_{l}\left.\frac{\partial T_{l}}{\partial r}\right|_{w}=h_{e}\left(T_{w}-T_{e}\right) \qquad u_{l}=v_{l}=0$ (11)
Interfacial conditions, $r=R-\delta_{z}$:
Continuities of velocity and temperature: $u_{I}(z)=u_{m, I}=u_{l, I} \qquad T_{I}(z)=T_{m, I}=T_{l, I}$ (12)
Continuities of the shear stress and heat flux: $\tau_{I}=\left[\mu \frac{\partial u}{\partial r}\right]_{l, I}=\left[\mu \frac{\partial u}{\partial r}\right]_{m, I}$ (13)
The total convective heat flux from the film interface to the gas stream can be expressed in terms of a sensible mode Q_sen,I and a latent mode Q_Lat,I as follows: $Q_{I}=Q_{sen, I}+Q_{Lat, I}=-\lambda_{m} \frac{\partial T_{m}}{\partial r}+\dot{m}_{I} h_{fg}$ (14)
The interfacial mass flux exchanged between the two phases is determined by Fick's law as follows: $\dot{m}_{I}=-\frac{\rho_{m}\sum_{k=1}^{2}\left(D_{k}^{m}\,\frac{\partial W^{k}}{\partial r}\right)_{I}}{1-\sum_{k=1}^{2} W_{I}^{k}}$ (15)
The interfacial mass fraction can be calculated as follows: $W_{I}^{k}=\frac{P_{kI} M_{k}}{\sum_{i=1}^{3} P_{i} M_{i}}$ (16)
Along the interface, the local Nusselt and Sherwood numbers are given by the following expressions: $Nu_{z}=\frac{h_{t}(2 R)}{\lambda_{m}}=\frac{Q_{I}(2 R)}{\lambda_{m}\left(T_{bulk}-T_{I}\right)}$ (17)
$Sh_{z}=\frac{h_{M}(2R)}{D}=\frac{\dot{m}_{I}\left(1-W_{I}\right)(2R)}{\rho_{m} D\left(W_{bulk}-W_{I}\right)}$ (18)
Tbulk and Wbulk are respectively the bulk temperature and bulk mass fraction, which are defined as follows: $T_{bulk}=\frac{\int_{0}^{R-\delta} \rho_{m} C_{p}^{m} r u_{m} T_{m}\,dr}{\int_{0}^{R-\delta} \rho_{m} C_{p}^{m} r u_{m}\,dr} \qquad W_{bulk}=\frac{\int_{0}^{R-\delta} \rho_{m} r u_{m} W\,dr}{\int_{0}^{R-\delta} \rho_{m} r u_{m}\,dr}$ (19)
The total condensate rate is defined by the expression: $M_{r}=\frac{2\pi \int_{0}^{z}\left(R-\delta_{z}\right) \dot{m}_{I}\,dz}{\dot{m}_{in}}$ (20)
3. Numerical Resolution Method
3.1 Solution method
Given the impossibility of obtaining an analytical solution for the non-linear coupled differential equations, the conjugated problem leading to the parabolic system of equations (1)-(7) with the appropriate boundary conditions is solved by a finite-difference numerical scheme. The transversal convection and diffusion terms are approximated by central differences while the axial convection terms are approximated by backward differences. At the centerline (r=0) of the tube, the diffusional terms are singular. A correct representation can be found from an application of L'Hospital's rule. Each system of finite-difference equations forms a tridiagonal matrix equation, which can be solved by the TDMA method, Patankar [15].
3.2 Velocity and pressure coupling
The velocity-pressure coupling problem is manifested by the appearance of these variables in the momentum equation.
The pressure gradient $\frac{d P_{d}}{d z}$ which appears as the source term in this equation plays the role of the flow motor. Unfortunately, no pressure equation is available. Also, pressure is always an unknown to determine as well as velocity. A given velocity field can satisfy the continuity equation without checking the momentum transport equations. This peculiarity of equations necessitates the use of a velocity-pressure coupling algorithm. Several types of iterative procedures can be used. In this study we use the method of Raithby and Schneider [16] who proposed an appropriate arrangement for incompressible flows that requires one third less effort (three iterations) than the calculation of the secant. The iterative procedure of this method can be summarized as follows: Given $\left(\frac{d P_{d}}{d z}\right)^{*}$, taking account of the momentum quantity equation, a solution temporary u* is obtained. In this step, the mass flow rate of the flow is defined as follows: $\dot{m}^{*}=\int \rho u^{*} d r$. The arrangement of the momentum equation assumes that the coefficients in the discretized equations will remain constant, i.e., no form of the update is used while the pressure gradient is adjusted so that the overall stress of the mass flow is satisfied. A correction can be obtained using a form of the Newton method. With the "frozen" coefficients, the velocities vary linearly with the pressure gradient, so Newton's method should provide the pressure gradient correction. To illustrate this correction, we put: $Q=\left(\frac{d P_{d}}{d z}\right)^{*} \qquad$ and $\qquad \mathrm{F}_{\mathrm{p}}=\frac{\partial u_{p}}{\partial Q}$ (21) The integration of equation (21) leads to: $u_{p}=u_{p}^{*}+F_{p} \Delta Q$ with$\quad \Delta \mathrm{Q}=-\left[\frac{\partial \mathrm{P}_{\mathrm{d}}}{\partial \mathrm{z}}-\left(\frac{\partial P_{d}}{\partial z}\right)^{*}\right]$ (22) The integration of equation (21) makes it possible to obtain the pressure gradient difference as follows: $\Delta Q=\frac{\overset{.}{\mathop{m-\overset{.}{\mathop{{{m}^{*}}}}\,}}\,}{\int{\rho {{F}_{P}}dr}}$ (23) which give: $\left( -\frac{d{{P}_{d}}}{dz} \right)=\frac{\overset{\text{.}}{\mathop{\text{m}}}\,-{{\overset{.}{\mathop{m}}\,}^{*}}}{\int{\rho {{F}_{p}}dr}}\text{ }+{{\left( \text{-}\frac{\partial {{\text{P}}_{\text{d}}}}{\partial \text{z}} \right)}^{\text{*}}}$ (24) In equation (24), \(\dot{m}\) is a known value indicated by the initial conditions. This requires determination of $\left(-\frac{\partial P_{d}}{\partial z}\right)$. The corrected velocity up values can then be determined from equation (22). The continuity equation is then used to determine vp. 3.3 Calculation of liquid film thickness The film liquid thickness is variable along the flow. At each section z, it is calculated by the secant method Nougier J.P. [17] applied to the mass flow conservation equation of total condensate, according to the following iterative procedure: We impose two arbitrary distinct values from the film thickness and For each of them, iteratively resolves the momentum, continuity, energy and diffusion equations successively until convergence is verified according to criterion: $Max\left| \frac{\phi _{i,j}^{n}-\phi _{i,j}^{n-1}}{\phi _{i,j}^{n-1}} \right|\prec \text{1}{{\text{0}}^{\text{-6}}}\text{ }\phi \text{: u, T et W}$ (25) The relative error En is then calculated on the mass flow rate of condensate. 
From the 3rd iteration ( $n \geq 3$ ), if the relative error is greater than 10-6, then another value of is calculated by the secant method as follows: $\delta _{z}^{n}=\delta _{z}^{n-1}\text{-}{{\text{E}}^{\text{n-1}}}\left( \frac{\delta _{\text{z}}^{\text{n-1}}-\delta _{z}^{n-2}}{{{E}^{n-1}}-{{E}^{n-2}}} \right)$ (26) Otherwise, the value \(\delta _{z}^{n}\) obtained is adopted and we go to the next line. 3.4 Stability of the calculation scheme In order to choose the computational grid for the numerical simulation, a preliminary test had been conducted. To refine the numerical calculation, a non-uniform grid was been chosen, based on geometrical progression in the radial and axial directions and taking into account the irregular variation of u, T and W at the gas˗liquid interface and at the tube entrance. As a result, the density of the nodes is greater at the gas˗liquid interface and at entrance. The variation of the local Nusselt and Sherwood numbers are calculated for each grid size (axial direction (I) and the radial direction, respectively in the gas (J) and liquid (K)) as shown in Table 1. The results show that, the relative error does not exceed 3 % for the variations of the local Nusselt and Sherwood numbers, related to computations using grids ranging from 51*(81+21) to 201*(121+81). In view of these results all further calculations presented in this paper were performed with the 131*(81+31) grid. Table 1. Comparisons of local Nusselt and Sherwood numbers at the interface for various grids (pin=1.atm, Rein=2000, Hr=50 % and Tin=60 °C) I*(J+K) z/L 51*(81+21) 101*(61+31) 201*(121+81) A study was realized to analyze the ternary mixture condensation of volatile organic compounds in particular the alcohols (Ethanol, Propanol and n-Butanol) in the presence of air. The calculations were performed for a vertical tube of length L=2 m and d=2 cm in diameter, the tube wall is cooled by a flow air at temperature Te. In order to make a comparison of the three ternary mixtures condensation, the results of this section are presented around the following nominal values: Temperature difference between inlet and external is DT=30 °C, The Reynolds number is Rein=2000, The inlet pressure is Pin=1atm, The mass fraction of the constituents at the input: \(W_{in}^{n-bu tan ol}=0.05\) and \(W_{in}^{propanol}=W_{in}^{ethanol}=0.25\). Figure 2. Variation of bulk temperature (a) and interface temperature (b) along the tube Figure 2 shows variation of the bulk temperature Tbulk and the interface TI along the tube for three ternary mixtures. It is noted that for the three fluids, interface temperature TI decreases to tend towards the cooling temperature of the external fluid (Figure 2 (b)), and that beyond 0.3 (z/L≥0.3) the interface temperatures of the three mixtures are confounded and become constant because it reached the end of condensation. For the bulk temperature Tbulk, it decreases along the tube, tending towards the interface temperature (Figure 2 (a)). The length of the tube used does not allow in that case joining the interface temperature. It is noted on these results, that at the inlet of tube, the curve of ethanol-propanol interface temperature is above n-butanol-propanol which is itself above that of the n-butanol-ethanol. 
The explanation of this order come from the fact that the three mixtures contain at least one constituent having the same concentration at 25 %, the interface temperature then corresponds to the saturation temperature of these constituents (ethanol: 42.70 °C., propanol: 42.18 °C, n-butanol: 38.93 °C). To analyze the heat transfer during the volatile organic compounds condensation along the tube, it is necessary to determine the Nusselt number which globally reflects the evolution of the flow ratio convective and conductive along the tube. Figure 3 shows that the local Nusselt number is important at the inlet, this is due to the important thermal gradient in this zone, because of the temperature difference between the inlet vapor (42 °C) and the liquid film (27 °C) (see Figure 2). Transfers are therefore more intense at the tube inlet for the three mixtures thus promoting heat and mass exchange. The heat released by the liquid to vapor has a part by latent heat due to condensation of vapor, and a part by sensible heat due to liquid-vapor contact. The condensation heat decreases due to decreasing of condensed quantities because the steam becomes poorer as one advance in the tube. The exchange by sensible heat also decreases because of the approximation of the temperatures of the gas and the liquid. The local Nusselt number (Figure 3) therefore decreases as the flow progresses in the tube. At the exit the three curves meet at the end of condensation. Note that Nuz is more important for ethanol-propanol mixture compared to other mixtures, this is due to the fact that on one hand the amount of this mixture is greater (50 %) and on the other hand its heat latent which is larger (831.14 J.Kg-1) than that of n-butanol-ethanol and n-butanol-propanol (792.89 J.Kg-1); (728.02 J. Kg-1). Figure 3. Variation of Nusselt number along the tube for the three ternary mixtures Figures 4 (a), (b) show the variation of bulk mass fraction Wbulk and interface WI along the tube. The interface mass fraction WI follow the evolution of the interface temperature TI, they decrease along the tube to establish a fixed level. The bulk mass fraction Wbulk similarly decreases to reach the end of WI for the three mixtures (Figure 4 (a)). It is found that the mass fraction of ethanol vapor is higher compared with that of propanol and n-butanol. As for the comparison with n-butanol, the explanation is due to small quantity of n-butanol injected at the inlet. As regards the propanol, which is injected with the same concentration as ethanol, the cause is its saturation pressure and therefore its saturation temperature, which is lower than that of ethanol. Its liquefaction is therefore slightly behind with ethanol. Figure 5(a) illustrates the variation of the liquid film thickness along the tube for the three ternary mixtures (n-butanol-propanol-air, ethanol-propanol-air and n-butanol-ethanol-air). Figure 4. Variation of bulk mass fraction (a) and interface (b) along the tube It is noted that the liquid film thickness increases as the flow of vapor mixture progresses in the tube, and becomes constant for values zL≥0.3. This is explained by the bulk temperature Tbulk which becomes very close to the interface temperature TI therefore the end of condensation is reached. It is observed that the liquid film thickness is important for the n-butanol propanol mixture. 
This difference in behavior is explained by the viscosity of the n-butanol-propanol mixture (μLiq=2.053*10-3 kg.m-1.s-1), which is greater than that of ethanol-propanol (μLiq=1.31*10-3 kg.m-1.s-1); its velocity is therefore lower than that of ethanol-propanol, hence the film thickness is greater. Indeed, when the viscosity increases, the friction forces increase and consequently the effect of friction increases. Figure 5(b) shows that the condensation rate of the ethanol-propanol mixture is higher than that of the n-butanol-propanol mixture, and at the end of condensation they almost coincide. Indeed, these results confirm those of Figure 4(a).
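Looking back at the film-thickness procedure of Section 3.3, the secant update of Eq. (26) can be summarized in a few lines of code. This is only a sketch: `condensate_error` is a placeholder for the paper's inner iteration over the governing equations, and the function names, starting values and convergence guard are our assumptions rather than the authors' implementation.

```python
def solve_film_thickness(condensate_error, delta_a, delta_b, tol=1e-6, max_iter=50):
    """Secant iteration of Section 3.3 / Eq. (26) for the local film thickness.

    condensate_error(delta) is assumed to solve the momentum, continuity,
    energy and diffusion equations for a trial thickness and return the
    relative error E on the condensate mass flow rate; delta_a and delta_b
    are the two arbitrary distinct starting values.
    """
    e_a, e_b = condensate_error(delta_a), condensate_error(delta_b)
    for _ in range(max_iter):
        if abs(e_b) < tol or e_b == e_a:   # converged, or degenerate secant step
            break
        # Eq. (26): delta^n = delta^{n-1} - E^{n-1} (delta^{n-1}-delta^{n-2}) / (E^{n-1}-E^{n-2})
        delta_a, delta_b = delta_b, delta_b - e_b * (delta_b - delta_a) / (e_b - e_a)
        e_a, e_b = e_b, condensate_error(delta_b)
    return delta_b
```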
CommonCrawl
Mat. Sb., 2007, Volume 198, Number 5, Pages 33–44 (Mi msb1110)
Special factorization of a non-invertible integral Fredholm operator of the second kind with Hilbert–Schmidt kernel
G. A. Grigoryan, Institute of Mathematics, National Academy of Sciences of Armenia
Abstract: The problem of the special factorization of a non-invertible integral Fredholm operator $I-K$ of the second kind with Hilbert–Schmidt kernel is considered. Here $I$ is the identity operator and $K$ is an integral operator: $$ (Kf)(x)\equiv\int_0^1 \mathrm{K}(x,t)f(t)\,dt, \qquad f \in L_2[0,1]. $$ It is proved that $\lambda=1$ is an eigenvalue of $K$ of multiplicity $n\ge1$ if and only if $I-K=W_{+,1}\circ\dots\circ W_{+,n}\circ (I-K_n)\circ W_{-,1}\circ\dots\circ W_{-,n}$, where the $W_{+,j}$, $W_{-,j}$, $j=1,\dots,n$, are bounded operators in $L_2[0,1]$ of a special structure that are invertible from the left and the right, respectively. Bibliography: 7 titles.
DOI: https://doi.org/10.4213/sm1110
English version: Sbornik: Mathematics, 2007, 198:5, 627–637
UDC: 517.968; MSC: 47G10, 47A68; Received: 04.07.2005 and 02.08.2006
Citation: G. A. Grigoryan, "Special factorization of a non-invertible integral Fredholm operator of the second kind with Hilbert–Schmidt kernel", Mat. Sb., 198:5 (2007), 33–44; Sb. Math., 198:5 (2007), 627–637
This publication is cited in the following articles: G. A. Grigoryan, "On a Criterion for the Invertibility of Integral Operators of the Second Kind in the Space of Summable Functions on the Semiaxis", Math. Notes, 96:6 (2014), 914–920
CommonCrawl
Physically motivated global alignment method for electron tomography Toby Sanders1, Micah Prange2, Cem Akatay3 & Peter Binev1 Electron tomography is widely used for nanoscale determination of 3-D structures in many areas of science. Determining the 3-D structure of a sample from electron tomography involves three major steps: acquisition of sequence of 2-D projection images of the sample with the electron microscope, alignment of the images to a common coordinate system, and 3-D reconstruction and segmentation of the sample from the aligned image data. The resolution of the 3-D reconstruction is directly influenced by the accuracy of the alignment, and therefore, it is crucial to have a robust and dependable alignment method. In this paper, we develop a new alignment method which avoids the use of markers and instead traces the computed paths of many identifiable 'local' center-of-mass points as the sample is rotated. Compared with traditional correlation schemes, the alignment method presented here is resistant to cumulative error observed from correlation techniques, has very rigorous mathematical justification, and is very robust since many points and paths are used, all of which inevitably improves the quality of the reconstruction and confidence in the scientific results. Electron tomography has been a powerful tool in determining 3-D structures and characterization of nanoparticles in the biological, medical, and materials sciences [1-3]. The method is carried out by acquiring a series of 2-D projection images of an object and then using these 2-D projections to reconstruct the 3-D object. Using the transmission electron microscope, these projections are collected at a number of different orientations, typically by tilting the sample about a fixed tilt axis [4], while other dual axis tilting schemes also exist [5]. A demonstration of the projection scheme is shown for a 2-D object in Figure 1. We will focus only on the case of a single fixed tilt axis in this paper, although our methods can easily be translated to dual axis schemes. 1-D projections are taken of a 2-D object. The small ball along the edge is not projected in the 0° projection straight down due to the limited projection range. However, at the higher angles, this mass is now projected, which will affect an alignment based on the center of mass of these projections. Ideally, between two consecutive projections acquired at nearby tilts of the sample, one would observe only a small rotation of the projected image. However, due to unavoidable mechanical limitations, significant translation shifts are present. Therefore, the projections must be aligned into a common coordinate system to be properly interpreted. Once the projections are aligned, they can then be merged to approximate the 3-D structure of the sample. The alignment is a crucial part of the process, for the resolution of the reconstructed 3-D structures are limited to the accuracy in the alignment. In this paper, we demonstrate a new mathematically justified method for the alignment based on the apparent motion of the center of mass of many 2-D cross-sections of the sample. Over the years, many traditional alignment techniques have been developed by the biological sciences [6]. The most commonly practiced are correlation techniques, feature tracking, and fiducial marker tracking. 
Correlation techniques are performed by selecting one of the projections as a reference image and aligning each pair neighboring images by selecting the cross-correlation peak between the images for the shift [7]. This method has been proven useful but can yield poor results, as small cumulative errors may result in a serious drift of the sample [8]. As we will show, cross-correlation will not recover the correct alignment even for noise-free data subjected to random shifts. The current work finds a solution without this deficiency. Fiducial marker tracking is done by decorating the sample with small high-density particles that create high contrast in the projection images [9-12]. Individual markers are then identified in all projections. The alignment is determined based on tracking of the path of each marker through the projections. This method can be very accurate but requires a lot of manual interaction to properly locate and center the markers. The main drawback of marker tracking is that the markers will be present in the reconstruction and must be removed for accurate characterization of the sample. Since the markers are of such high density, the reconstruction of the markers will inevitably mix with the reconstruction, making the task of removal nontrivial and possibly inaccurate. Feature tracking uses regions of high contrast or intensity as fiducial markers [13,14]. It requires the identification of suitable regions of high contrast that remain visible throughout the tilt series. Others have begun to perform alignment techniques based on a refinement approach [6]. After a coarse alignment from cross-correlation, one proceeds in computing an initial 3-D reconstruction. This 3-D reconstruction is then reprojected and compared with the original projections. A new alignment arises from aligning the reprojected reconstruction with the original projections, and this process is iterated until convergence is met. In our experience with this method, the reconstruction always satisfies the projections, even if they're misaligned, so that insignificant refinement occurs from updating. Most recently, Scott et al. [15] introduced a technique based on the observation that as the sample is tilted about a fixed axis, the center of mass of the sample will spin in a circle, and if the center of mass is on the tilt axis, then it remains fixed. In this way, it was suggested to shift each projection so that the center of mass in each projection is fixed on a point and taking the line through this point parallel to the axis of rotation as the tilt axis. We believe this is not always applicable and can yield poor results in many settings. First, it requires a tilt series in which the total projected volume is fixed for each projection. However, in most practical settings, some mass will move in and out of the projection range as the sample is tilted, which will then significantly affect the location of the center of mass within the projection along both axes of the projection images. This transition of mass must be accounted for, as this transition will be along the edges of the projections, far from the center, and will thus weigh heavily on the calculated center of mass. Figure 1 demonstrates this transition of mass, with the small ball located on the left edge of the object that has only been projected at certain angles. An additional drawback is that using only the single center of mass point in each projection removes the use of any local structure of the projections as criteria for alignment. 
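To make the cumulative-error behaviour of the pairwise correlation scheme described above concrete, the short sketch below aligns each projection only to its immediate neighbour and adds up the per-pair shifts. The helper name and the FFT-based circular correlation are our own illustration, not the routine used in any particular alignment package.

```python
import numpy as np

def pairwise_xcorr_shifts(projections):
    """Pairwise cross-correlation alignment sketch.

    projections: list of 2-D arrays ordered by tilt angle.  Each image is
    aligned only to its neighbour, and the per-pair shifts are summed to
    refer everything back to the first image -- which is exactly where small
    per-pair errors accumulate into a drift of the sample.
    """
    shifts = [(0, 0)]
    for prev, cur in zip(projections[:-1], projections[1:]):
        # circular cross-correlation via the FFT; the peak location gives the shift
        xc = np.fft.ifft2(np.fft.fft2(prev) * np.conj(np.fft.fft2(cur))).real
        peak = np.unravel_index(np.argmax(xc), xc.shape)
        dx = peak[0] - xc.shape[0] if peak[0] > xc.shape[0] // 2 else peak[0]
        dy = peak[1] - xc.shape[1] if peak[1] > xc.shape[1] // 2 else peak[1]
        shifts.append((shifts[-1][0] + dx, shifts[-1][1] + dy))
    return shifts
```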
In this paper, we give an alignment method that makes more detailed use of the path of the projected center of mass along many cross-sections of the object, perpendicular to the axis of rotation. In an ideal experiment, points on the sample move in circular trajectories. We define a viable path as the projection of such a circular orbit. By simple calculation, we derive an equation which describes all such viable paths of the projected centers of mass, as opposed to the one trivial path of a single point. From here, we show how one can determine a shift for each projection so that the center of mass of all cross-sections perpendicular to the axis of rotation nearly follows a viable path. In this way, since all cross-sections are considered in our alignment method, we will be able to avoid problems involved with error in the calculated centers of mass due to transition of volume in and out of the projections, and we maintain local analysis of the projections as means for the alignment. Additionally, our model aligns the projections based on the rotation about a chosen axis, so that manual interaction for determining the positioning of the tilt axis is avoided. In general, our method can be considered more statistically accurate, and we will show that it provides very dependable alignment and definitively improves the resolution of the reconstruction. The 3-D density function for reconstruction will be denoted f(x,y,z)=f(x,(y z)), with (y z) a 2-D row vector. The data generated are the projections of f in the z-axis, about rotations around the x-axis. A rotation of f through θ about the x-axis can be written as: $$ f\left(x,\left(y\kern1em z\right){Q}_{\theta}\right),\kern1em \mathrm{where}\kern1em {Q}_{\theta }=\left(\begin{array}{cc} \cos \theta & - \sin \theta \\ {} \sin \theta & \cos \theta \end{array}\right). $$ A projection about the rotation θ is then defined as: $$ {P}_{\theta }(f)\left(x,y\right)=\underset{\mathbb{R}}{\int }f\left(x,\left(y\kern1em z\right){Q}_{\theta}\right)\kern1em dz. $$ We note that for each fixed x=x 0, P θ (f)(x 0,y) only contains information from f(x 0,y,z), and therefore, many of the alignment and reconstruction processes can be considered as 2-D rather than 3-D. Therefore, for convenience, we will sometimes denote: $$ {f}_x\left(y,z\right)=f\left(x,y,z\right)\kern1em \mathrm{and}\kern1em {P}_{\theta}\left({f}_x\right)(y)={P}_{\theta }(f)\left(x,y\right). $$ In practice, we are given the unaligned data; therefore, we will regularly refer to the misaligned projections, denoted by \( {\overset{\sim }{P}}_{\theta }(f) \). We define these projections as: $$ {\overset{\sim }{P}}_{\theta }(f)\left(x,y\right)={P}_{\theta }(f)\left(x-{x}_{\theta },y-{y}_{\theta}\right), $$ where the coordinates (x θ ,y θ ) are the shifts to be determined for the alignment. Similarly, we will denote: $$ {\overset{\sim }{P}}_{\theta}\left({f}_x\right)(y)={P}_{\theta}\left({f}_x\right)\left(y-{y}_{\theta}\right), $$ where in this instance the shift x θ is not included. We do not include it, for determining the shifts x θ is a much more trivial task, so that most of our work here focuses on determining y θ after the x-axis alignment is completed. We will denote the total mass about a cross-section x by \( {M}_x=\underset{{\mathbb{R}}^2}{\int }{f}_x\left(y,z\right)\kern1em dy\kern1em dz \). 
Then, the coordinates for the center of mass of a cross-section are denoted as: $$ {c}_x^y=\frac{1}{M_x}\underset{{\mathbb{R}}^2}{\int }{f}_x\left(y,z\right)y\kern1em dy\kern1em dz,\kern1em {c}_x^z=\frac{1}{M_x}\underset{{\mathbb{R}}^2}{\int }{f}_x\left(y,z\right)z\kern1em dy\kern1em dz $$ We will denote the center of mass of a projected cross-section of f by: $$ {t}_x^{\theta}\kern0.3em =\kern0.3em \frac{1}{M_x}\underset{\mathbb{R}}{\int }{P}_{\theta}\left({f}_x\right)(y)y\kern1em dy,\kern1.30em \mathrm{and}\kern1.30em {\overset{\sim }{t}}_x^{\theta}\kern0.3em =\kern0.3em \frac{1}{M_x}\underset{\mathbb{R}}{\int }{\overset{\sim }{P}}_{\theta}\left({f}_x\right)(y)y\kern1em dy $$ We take the conventional L p norm (denoted by ∥·∥ p ) of a function, say g, defined over ℝ n to be: $$ \parallel g{\parallel}_p^p=\underset{{\mathbb{R}}^n}{\int}\Big|g(x){\Big|}^p\kern1em dx. $$ Similarly, for a vector x∈ℝ n, we take the ℓ p norm (denoted ∥·∥ p ) to be: $$ \parallel x{\parallel}_p^p=\sum_{i=1}^n\Big|{x}_i{\Big|}^p. $$ Theoretical model In practice, we are given the set of misaligned angular projections: $$ {\overset{\sim }{P}}_{\theta_i}(f)\left(x,y\right),\kern1em \mathrm{f}\mathrm{o}\mathrm{r}\kern1em i=1,2,\dots, k. $$ Typically, the number of projections, k, can be from 50 to 200, with maximum tilts of ± 70°. The domain is of course limited, but for theoretical purposes, we will assume that the domain for y is all of ℝ. The problem is then to approximate the set of shifts \( \left({x}_{\theta_i},{y}_{\theta_i}\right) \) for alignment, so that \( {\left\{{\overset{\sim }{P}}_{\theta_i}(f)\left(x,y\right)\right\}}_{i=1}^k \) correspond to the aligned projections \( {\left\{{P}_{\theta_i}(f)\left(x,y\right)\right\}}_{i=1}^k \). Determining the shifts for the x-axis is much simpler, since the x-axis is the axis of rotation. We simply observe that the total mass in each cross-section should remain fixed, so that: $$ {M}_x=\underset{{\mathbb{R}}^2}{\int }{f}_x\left(y,z\right)\kern1em dy\kern1em dz=\underset{\mathbb{R}}{\int }{P}_{\theta_i}\left({f}_x\right)(y)\kern1em dy. $$ ((1)) Based on this simple observation, one should be able to approximate all shifts \( {x}_{\theta_i} \) based on a 'conservation of mass' approach. We design a 'global' alignment method for determining these shifts, by taking \( {x}_{\theta_i} \) to be the shift which minimizes the difference between the observed mass in each cross-section of \( {\overset{\sim }{P}}_{\theta_i}(f)\left(x-{x}_{\theta_i},y\right) \) and the average mass of all projections in each cross-section. More precisely, we let: $$ \begin{array}{c}{x}_{\theta_i}\kern0.3em = \arg \underset{x^{\ast }}{ \min}\kern0.3em {\parallel \underset{\mathbb{R}}{\int }{\overset{\sim }{P}}_{\theta_i}\left(\kern0.60em f\right)\left(x\kern0.3em -\kern0.3em {x}^{\ast },y\right)\kern1em dy\kern0.3em -\kern0.3em \frac{1}{k}\sum_{l=1}^k\kern0.3em \left(\underset{\mathbb{R}}{\int }{\overset{\sim }{P}}_{\theta_l}(f)\left(x,y\right) dy\right)\kern0.3em \parallel}_1 .\end{array} $$ Of course, the averaged term, \( \frac{1}{k}\sum_{l=1}^k\left(\underset{\mathbb{R}}{\int }{\overset{\sim }{P}}_{\theta_l}\right.\left.(f)\left(x,y\right)\kern1em dy\right) \), is subject to error since the projections are not yet aligned, so the determination of each \( {x}_{\theta_i} \) is iterated a few times until there is no change. 
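As a concrete illustration of the procedure just described, the following sketch performs the iterative x-axis alignment on a discretized tilt series. The array layout, the integer-shift search range, and the cyclic edge handling of np.roll are our simplifications for the example, not part of the authors' implementation.

```python
import numpy as np

def align_x(projections, max_shift=20, n_iter=5):
    """Conservation-of-mass x-alignment sketch (integer shifts only).

    projections: k x n x m array indexed as (tilt, x, y).  Each projection's
    1-D mass profile (sum over y) is shifted to best match, in the L1 sense,
    the average profile of all currently shifted projections; the average is
    then recomputed and the search repeated until the shifts stop changing.
    """
    k = projections.shape[0]
    profiles = projections.sum(axis=2)                  # k x n mass profiles M_x
    shifts = np.zeros(k, dtype=int)
    candidates = np.arange(-max_shift, max_shift + 1)
    for _ in range(n_iter):
        mean_profile = np.mean([np.roll(profiles[i], shifts[i]) for i in range(k)], axis=0)
        new_shifts = np.array([
            candidates[np.argmin([np.abs(np.roll(profiles[i], s) - mean_profile).sum()
                                  for s in candidates])]
            for i in range(k)])
        if np.array_equal(new_shifts, shifts):          # no change: alignment settled
            break
        shifts = new_shifts
    return shifts
```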
The number of iterations will depend on just how large the offset of the projections are, but we have typically observed no change in each \( {x}_{\theta_i} \) after just two iterations. A demonstration of this x-axis alignment is given in Figure 2. Images demonstrating the alignment along the x -axis. (a) 2-D projection image taken at a 30°tilt about the x-axis. (b) 1-D projection of (a) onto the x-axis. (c) 1-D projections onto the x-axis of all 2-D projections taken at different tilts about the x-axis. The misalignment is clearly shown in (c), as the 1-D projections should all be nearly the same. (d) Same 1-D projections in (c), shown after alignment is performed along the x-axis. One could also perform a similar 'local' method, by comparing the consecutive projections to each other instead of the average. This approach is subject to cumulative error in the alignment similar to cross-correlation; therefore, we avoid this approach. From here forth, we will now assume that the \( {x}_{\theta_i} \) have been accurately determined, and consider each cross-section. For alignment along the y-axis, we again want to make use of physical properties. It has been noted, as f x (y,z) is rotated about the origin, the center of mass \( \left({c}_x^y,{c}_x^z\right) \) will spin in a circle around the origin. It is not immediately clear, however, how this property can be observed within the projections and used for alignment. Computing the center of mass of a projected slice, we obtain: $$ \begin{array}{lll}{t}_x^{\theta_i}& =\frac{1}{M_x}\underset{\mathbb{R}}{\int }{P}_{\theta_i}\left({f}_x\right)(y)y\kern1em dy\kern2em & \kern2em \\ {}& =\frac{1}{M_x}\underset{\mathbb{R}}{\int}\left(\underset{\mathbb{R}}{\int }{f}_x\left(\left(y\kern1em z\right){Q}_{\theta_i}\right)y\kern1em dz\right)\kern1em dy\kern2em & \kern2em \\ {}& =\frac{1}{M_x}\underset{\mathbb{R}}{\int}\underset{\mathbb{R}}{\int }{f}_x\left(\alpha, \beta \right)\left(\alpha \cos {\theta}_i-\beta \sin {\theta}_i\right)\kern1em d\alpha \kern1em d\beta \kern2em & \kern2em \\ {}& =\kern0.60em \frac{ \cos {\theta}_i}{M_x}\kern0.3em \underset{\mathbb{R}}{\int}\kern0.3em \underset{\mathbb{R}}{\int }{f}_x\left(\alpha, \kern0.3em \beta \right)\alpha \kern1em d\alpha \kern1em d\beta \kern0.3em -\kern0.3em \frac{ \sin {\theta}_i}{M_x}\kern0.60em \underset{\mathbb{R}}{\int}\underset{\mathbb{R}}{\int }{f}_x\left(\alpha, \kern0.3em \beta \right)\beta \kern0.3em d\alpha \kern0.3em d\beta \kern2em & \kern2em \\ {}& ={c}_x^y \cos {\theta}_i-{c}_x^z \sin {\theta}_i,\kern2em & \kern2em \end{array} $$ where we applied the substitution \( \left(\alpha \kern1em \beta \right):=\left(y\kern1em z\right){Q}_{\theta_i} \). This tells us that the center of mass of each projected cross-section should follow the path given by: $$ {t}_x^{\theta_i}={c}_x^y \cos {\theta}_i-{c}_x^z \sin {\theta}_i,\kern1em \mathrm{f}\mathrm{o}\mathrm{r}\kern1em i=1,2,\dots, k. $$ This equation gives us a local relationship between the relative positioning of all of the projections to use for the alignment. As discussed earlier, in [15], it was simply noted that if the center of mass is located at the origin on the tilt axis, then it does not move under rotations about that axis. 
This observation can be made through similar computations where the integrand is first taken over x, and then, the center of mass is computed for the total sum of the cross-sections, that is: $$ {t}^{\theta_i}=\frac{1}{M}\underset{{\mathbb{R}}^2}{\int }{P}_{\theta_i}(f)\left(x,y\right)\kern0.3em dx\kern0.3em y\kern0.3em dy={c}^y \cos {\theta}_i-{c}^z \sin {\theta}_i, $$ where c y and c z here denote the center-of-mass coordinates along the y- and z-axes, respectively, independent of x, and M denotes the total mass of f. Therefore, it is suggested to shift each projection so that \( {t}^{\theta_i}=0 \) for all i, so that c y=c z=0. While this approach is theoretically sound in an ideal setting, summing over x immediately removes any consideration of local behavior of the projections of f. As we will show, in many settings, this simplification can be a major drawback. Therefore, our approach is to determine a sequence of shifts so that for each cross-section there exists some deterministic center of mass \( \left({c}_x^y,{c}_x^z\right) \) so that Equation 3 is nearly satisfied. With this in mind, let us denote: $$ \varTheta =\kern0.3em \left(\begin{array}{cc} \cos {\theta}_1& \kern0.3em - \sin {\theta}_1\\ {} \cos {\theta}_2& \kern0.3em - \sin {\theta}_2\\ {}\vdots & \vdots \\ {} \cos {\theta}_k& \kern0.3em - \sin {\theta}_k\end{array}\right),\kern1em {c}_x\kern0.3em =\kern0.3em \left(\begin{array}{c}{c}_x^y\\ {}{c}_x^z\\ {}\end{array}\right),\kern1em \mathrm{and}\kern1em {t}_x\kern0.3em =\kern0.3em \left(\begin{array}{c}{\overset{\sim }{t}}_x^{\theta_1}\\ {}{\overset{\sim }{t}}_x^{\theta_2}\\ {}\vdots \\ {}{\overset{\sim }{t}}_x^{\theta_k}\end{array}\right). $$ We note that from the acquired projection data we can compute both Θ and t x . Now from Equation 3, if our alignment is good, then for each cross-section x, there should exist some c x so that Θ c x ≈t x . Therefore, in order to yield a good alignment, we would like to determine: $$ {y}_{\varTheta }=\left(\begin{array}{c}{y}_{\theta_1}\\ {}{y}_{\theta_2}\\ {}\vdots \\ {}{y}_{\theta_k}\end{array}\right), $$ so that there exist some c x satisfying: $$ \varTheta {c}_x\approx {t}_x+{y}_{\varTheta },\kern1em \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{all}x, $$ or equivalently: $$ \underset{c_x}{ \min}\parallel \varTheta {c}_x-\left({t}_x+{y}_{\varTheta}\right)\underset{2}{\overset{2}{\parallel }}\approx 0\kern1em \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{all}x. $$ In practice, we will have some finite number of cross-sections, say x j , for j=1,2,…n. Then, we would like solve the minimization problem: $$ \underset{y_{\varTheta }}{ \min}\left(\sum_{j=1}^n\underset{c_{x_j}}{ \min}\parallel \varTheta {c}_{x_j}-\left({t}_{x_j}+{y}_{\varTheta}\right){\parallel}_2^2\right) $$ Now we can compute the minimization over c x directly. 
Given Θ and t x , the least square solution \( {c}_x^{\ast } \), to \( \parallel \varTheta {c}_x-\left({t}_x+{y}_{\varTheta}\right){\parallel}_2^2\kern0.3em \) : $$ {c}_x^{\ast }= \arg \underset{c_x}{ \min}\parallel \varTheta {c}_x-\left({t}_x+{y}_{\varTheta}\right)\underset{2}{\overset{2}{\parallel }}, $$ can simply be found by differentiation so that: $$ \begin{array}{lll}& \left(\frac{\partial }{\partial {c}_x^y}\parallel \varTheta {c}_x-\left({t}_x+{y}_{\varTheta}\right)\parallel {\kern1.60em }_2^2\right)\left|{\kern1.60em }_{c_x={c}_x^{\ast }}=0\kern1em \mathrm{and}\right.\kern2em & \kern2em \\ {}& \left.\left(\frac{\partial }{\partial {c}_x^z}\parallel \varTheta {c}_x-\left({t}_x+{y}_{\varTheta}\right)\parallel {\kern1.60em }_2^2\right)\right|{\kern1.60em }_{c_x={c}_x^{\ast }}=0.\kern2em & \kern2em \end{array} $$ Solving these equations, the solution can be found to be: $$ {\varTheta}^{+}\left({t}_x+{y}_{\varTheta}\right), $$ where Θ + denotes the pseudo-inverse of Θ, given by Θ +=(Θ T Θ)−1 Θ. It should be noted that Θ T Θ is a 2×2 matrix with entries: $$ \begin{array}{lll}{\left({\varTheta}^T\varTheta \right)}_{11}& =\sum_{i=1}^k\overset{2}{ \cos }{\theta}_i,\kern1em {\left({\varTheta}^T\varTheta \right)}_{21}={\left({\varTheta}^T\varTheta \right)}_{12}\kern2em & \kern2em \\ {}& =-\sum_{i=1}^k \cos {\theta}_i \sin {\theta}_i,\kern1em {\left({\varTheta}^T\varTheta \right)}_{22}=\sum \overset{2}{ \sin }{\theta}_i,\kern2em & \kern2em \end{array} $$ which is clearly invertible and without any notable computational cost. Then, our minimization in Equation 5 becomes: $$ \begin{array}{lll}\underset{y_{\varTheta }}{ \min }& \left(\sum_{j=1}^n\parallel \varTheta {\varTheta}^{+}\left({t}_{x_j}+{y}_{\varTheta}\right)-\left({t}_{x_j}+{y}_{\varTheta}\right){\parallel}_2^2\right)\kern2em & \kern2em \\ {}& =\underset{y_{\varTheta }}{ \min}\left(\sum_{j=1}^n\parallel \left(\varTheta {\varTheta}^{+}-I\right)\left({t}_{x_j}+{y}_{\varTheta}\right)\underset{2}{\overset{2}{\parallel }}\right).\kern2em \end{array} $$ If we let: $$ A=\left(\begin{array}{c}\varTheta {\varTheta}^{+}-I\\ {}\varTheta {\varTheta}^{+}-I\\ {}\vdots \\ {}\varTheta {\varTheta}^{+}-I\end{array}\right),\kern1em \mathrm{and}\kern1em b=\left(\begin{array}{c}\left(\varTheta {\varTheta}^{+}-I\right){t}_{x_1}\\ {}\left(\varTheta {\varTheta}^{+}-I\right){t}_{x_2}\\ {}\vdots \\ {}\left(\varTheta {\varTheta}^{+}-I\right){t}_{x_n}\end{array}\right), $$ then the minimization problem in Equation 6 is equivalent to solving a standard least squares problem: $$ \underset{y_{\varTheta }}{ \min}\parallel A{y}_{\varTheta }-b{\parallel}_2^2. $$ Practical implementation The major consideration that we have ignored so far in the theoretical development but will handle in this section is that certainly the domain for y for \( {\overset{\sim }{P}}_{\theta_i}\left({f}_x\right)(y) \) is finite, say [−m,m]. As before with x, for all practical purposes, we will now additionally consider the y-axis to be discrete, and for each projection \( {P}_{\theta_i}(f)\left(x,y\right) \), the domain is given as: $$ D=\left\{\left(x,y\right):\kern1em x=1,2,\dots, n,\kern1em y=-m,-m+1,\dots, m\right\}. $$ We chose the indexing for y symmetrically for convenience in the center-of-mass computations so that the center of the projections is along the modeled axis of rotation at y=0. Computing \( {t}_x^{\theta_i} \) now becomes: $$ {t}_x^{\theta_i}=\frac{1}{M_x}\sum_{y=-m}^m{\overset{\sim }{P}}_{\theta_i}\left(x,y\right)y. 
The first issue is that \( M_x \) may vary through the tilt series for each cross-section; in particular, since the domain in y is limited, some observable mass may move in and out of the field of view after rotation and projection, as we demonstrated in Figure 1. This is again why it is important that the alignment be determined over many projected cross-sections. To handle this issue, we multiply \( \tilde{P}_{\theta_i}(f)(x,y) \) by a window function, \( \omega_{\theta_i}(x,y) \), in the computation of \( t_x^{\theta_i} \) in order to alleviate some of this transition of mass in and out of the frame. The window function allows the total mass within each projection to be balanced. We choose our window functions to satisfy the following properties:
(i) \( 0 \le \omega_{\theta_i}(x,y) \le 1 \);
(ii) \( M = \sum_{x=1}^{n}\sum_{y=-m}^{m} P_{\theta_i}(f)(x,y)\, \omega_{\theta_i}(x,y) \), for i=1,2,…,k;
(iii) \( \omega_{\theta_i}(x,y) \le \omega_{\theta_i}(x,y+1) \) if y < 0, and \( \omega_{\theta_i}(x,y) \ge \omega_{\theta_i}(x,y+1) \) if y ≥ 0;
(iv) \( \omega_{\theta_i}(x,y) = \omega_{\theta_i}(x+1,y) \), for x=1,2,…,n−1.
The first property simply emphasizes that multiplication of \( \tilde{P}_{\theta_i}(f) \) by \( \omega_{\theta_i} \) reweights the projection values in order to dampen the introduction of new mass into the frames. The second property then tells us that this dampening of the values of \( P_{\theta_i}(f) \) by multiplication by \( \omega_{\theta_i} \) yields the same total mass in each projection. Finally, properties (iii) and (iv) describe how this dampening should be done. Property (iii) says that the window function decreases with increasing |y|, that is, moving away from the modeled tilt axis at y=0. This is because new mass would be introduced along the edges of the field of view, so we dampen these values more significantly. Property (iv) is an additional property that helps us characterize \( \omega_{\theta_i} \) in a simple manner; it simply says that we place the same weight on each cross-section x. One could remove property (iv) and change property (ii) so that instead the mass \( M_x \) is fixed for each cross-section of each projection. However, this could potentially bias the alignment of the cross-sections, especially ones with considerable noise, and it would require much greater computational time to determine a window for each cross-section of each projection. After the windowing function is determined, we then compute the center of mass for each projected cross-section \( t_{x_j} \), for j=1,2,…,n, as:
$$ \tilde{t}_{x_j}^{\theta_i} = \frac{1}{M_{x_j}^{\theta_i}} \sum_{y=-m}^{m} \tilde{P}_{\theta_i}(f_{x_j})(y)\, \omega_{\theta_i}(y)\, y \qquad \mathrm{and} \qquad t_{x_j} = \begin{pmatrix} \tilde{t}_{x_j}^{\theta_1} \\ \tilde{t}_{x_j}^{\theta_2} \\ \vdots \\ \tilde{t}_{x_j}^{\theta_k} \end{pmatrix}, $$
and solve a variant of Equation 6. The variation is that we minimize over only a subset of the cross-sections, say T ⊂ {1,2,…,n}. This subset is chosen so that the selected cross-sections have a significant quantity of mass in each projection, so that the introduction of new mass along the edges has considerably less effect on the center of mass of the projected cross-section.
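The paper specifies the window \( \omega_{\theta_i} \) only through properties (i)–(iv). As one possible construction (an assumption for illustration, not the authors' choice), the sketch below uses a flat-top window with a linear taper toward the edges and picks the taper width for each tilt by bisection so that the windowed mass roughly matches a common reference value, then computes the windowed centers of mass. The criteria for selecting the subset T of cross-sections continue below.

```python
import numpy as np

def taper_window(m, width):
    """Flat top near y = 0 with a linear decay to 0 over the last `width` pixels (properties i, iii)."""
    y = np.arange(-m, m + 1)
    return np.clip((m - np.abs(y)) / max(width, 1e-9), 0.0, 1.0)

def windowed_centers(proj, M_ref=None):
    """Windowed centers of mass; the taper width per tilt is chosen so the windowed mass ~= M_ref (property ii)."""
    k, n, ny = proj.shape
    m = (ny - 1) // 2
    y = np.arange(-m, m + 1)
    totals = proj.sum(axis=(1, 2))
    M_ref = totals.min() if M_ref is None else M_ref   # a mass every tilt can be damped down to
    t = np.empty((k, n))
    for i in range(k):
        lo, hi = 0.0, float(m)                         # bisection over the taper width
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            if (proj[i] * taper_window(m, mid)).sum() > M_ref:
                lo = mid                               # damp more
            else:
                hi = mid                               # damp less
        w = taper_window(m, 0.5 * (lo + hi))
        pw = proj[i] * w                               # same window for every cross-section x (property iv)
        t[i] = (pw * y).sum(axis=1) / pw.sum(axis=1)   # per-cross-section windowed centers of mass
    return t
```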
In addition, we only choose those cross-sections in which the observable total mass varies little throughout all projections, again to avoid the cross-sections with a large transition of mass. More precisely, we pick the cross-sections for which the ratio of the average observed mass through the projections to the variance of the mass in the projections is above some specified tolerance. This tolerance can be chosen based upon the quality of the data. Finally, the minimization for determining the shifts becomes:
$$ \min_{y_{\varTheta}} \left( \sum_{j \in T} \| (\varTheta\varTheta^{+} - I)(y_{\varTheta} + t_{x_j}) \|_2^2 \right), $$
which can again be converted into a standard least squares minimization problem, as done in Equation 7. We summarize the method with the simple schematic shown in Figure 3.
Figure 3: The general workflow of our alignment approach.
Reconstruction method
After the alignment, for the reconstruction, we use a compressed sensing approach by total variation (TV) minimization [16]. These methods have recently been gaining popularity for electron tomographic reconstructions [17-19]. In order to briefly describe the method, let us denote the 3-D reconstructed approximation of f by \( g = \{ g_{x,y,z} \}_{x,y,z=1}^N \), where for simplicity we now let our discrete 3-D domain be:
$$ D = \{ (x,y,z) : x,y,z \in \{1,2,\dots,N\} \}. $$
Most reconstruction methods are then designed so that numerical reprojection of g agrees with the experimental projections \( P_{\theta_i}(f) \), for i=1,2,…,k. In particular, reconstruction techniques typically minimize the distance between the projections of g and the experimental projections, sometimes called the projection error. This projection error can be expressed as:
$$ \sum_{i=1}^k \mathrm{dist}\big(P_{\theta_i}(f), P_{\theta_i}(g)\big)^2 = \sum_{i=1}^k \sum_{x,y=1}^N \big( P_{\theta_i}(f)(x,y) - P_{\theta_i}(g)(x,y) \big)^2. \qquad (9) $$
However, simple minimization of the projection error does not necessarily produce optimal results in the presence of noise. Therefore, methods such as TV minimization additionally apply regularization conditions on the reconstruction. In the case that our sample consists of homogeneous materials and relatively smooth surfaces, compressive-sensing theory allows us to assume that the reconstruction should have a small total variation norm, given by:
$$ \| g \|_{TV} = \sum_{x,y=1}^{N} \sum_{z=1}^{N-1} | g_{x,y,z+1} - g_{x,y,z} | + \sum_{x,z=1}^{N} \sum_{y=1}^{N-1} | g_{x,y+1,z} - g_{x,y,z} | + \sum_{y,z=1}^{N} \sum_{x=1}^{N-1} | g_{x+1,y,z} - g_{x,y,z} |. $$
With this in mind, we would like Equation 9 to be relatively small, while also applying a penalty on \( \| g \|_{TV} \) for noise reduction, so that our method solves:
$$ \min_{g} \left\{ \| g \|_{TV} + \lambda \sum_{i=1}^k \mathrm{dist}\big(P_{\theta_i}(f), P_{\theta_i}(g)\big)^2 \right\}. \qquad (10) $$
We will give the results for experimental and simulation data.
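Before turning to the results, the anisotropic TV norm above and the penalized objective in Equation 10 can be written down directly. The sketch below assumes a user-supplied `reproject(g, angles)` operator standing in for the numerical projection of the volume, so it only illustrates the objective being minimized, not the authors' solver.

```python
import numpy as np

def tv_norm(g):
    """Anisotropic total variation of a 3-D volume: sum of absolute voxel differences along each axis."""
    return (np.abs(np.diff(g, axis=0)).sum()
            + np.abs(np.diff(g, axis=1)).sum()
            + np.abs(np.diff(g, axis=2)).sum())

def tv_objective(g, measured, angles, reproject, lam):
    """Equation 10: ||g||_TV + lambda * sum_i dist(P_i(f), P_i(g))^2.

    measured  : experimental projections, one per tilt angle.
    reproject : assumed callable returning numerical projections of g at `angles`.
    """
    simulated = reproject(g, angles)
    data_term = ((measured - simulated) ** 2).sum()
    return tv_norm(g) + lam * data_term
```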
We compare the reconstructions obtained from alignment by cross-correlation and by our center-of-mass technique, while also demonstrating the advantage of using many slices for the center-of-mass alignment, as opposed to just one center-of-mass calculation. For the experimental data, we have an alumina particle sitting on a holey carbon grid. The sample was prepared by grinding the alumina spheres into powder. A suspension of the powder was prepared in ethanol and sonicated for 5 min. The suspension was then added drop-wise onto the lacey carbon film supported on 200 mesh Cu TEM grids (Structure Probe, Inc., West Chester, PA, USA) and dried at room temperature. The sample was analyzed using an FEI Titan 80-300 scanning transmission electron microscope equipped with a spherical-aberration probe corrector (CEOS GmbH, Heidelberg, Germany) operating at 200 kV. The images were collected using the high-angle annular detector with a camera length of 195 mm and at 80,000× magnification. The acquisition time was set to 15 s over an image area of 1024 × 1024 pixels, resulting in a pixel size of 0.2411 nm. The tilt series was collected using a continuous linear tilt scheme from −70° to +70° with tilt increments of 2°. The dynamic STEM focus function was used to compensate for the change in focus across the image. The projection of the sample at 30° is shown in Figure 2, and the aligned projections are shown in a video in Additional file 1. Total variation minimization is valid for this data set, as the alumina particle and the carbon grid are known to be uniform in density. In addition, regularization of the reconstruction with TV minimization is critical to the quality of the results because of the low-dose sampling conditions necessary for acquiring the projections, owing to the beam sensitivity of the material. The reconstructed images from cross-correlation and from our alignment method are shown in Figure 4. While the overall particle morphologies are similar, the reconstruction resulting from our alignment displays much more uniform densities and clearer particle structures. This allows more confident segmentation and characterization of the reconstructed particle, which is crucial to the interpretation of the experiment. In the 3-D images (visualized using the tomviz software [20]), the overall structures appear similar. However, a less rigid particle structure is recovered with the cross-correlation alignment, as the red glow around the particle demonstrates blurring from the main particle structure to a lower gray level, represented by red in the colormap. In Figure 5, we plot the centers of mass, \( t_x \), for two cross-sections. Plotted together with \( t_x \) are the least squares solutions for the center of mass, \( (c_x^y, c_x^z) \), based on Equation 3, given the computed \( t_x \). It is evident that our method finds a nearly viable path for the motion of the center of mass, as we set out to do. On the other hand, the alignment from cross-correlation clearly fails to do so, resulting in low-resolution reconstructions.
Figure 4: Reconstructions from cross-correlation and our alignment approach. (a-c) Cross-section images of the 3-D volume from cross-correlation alignment. (d-f) The same cross-sections as in (a-c), resulting from our alignment method. (g, h) 3-D volume renderings of the two reconstructions, from cross-correlation alignment (g) and from our alignment method (h). The scale bar in (a) applies to (a-f), and the scale bar in (g) applies to (g) and (h).
It is apparent from these images that more blurring is present with cross-correlation as a result of misalignment.
Figure 5: Location of the centers of mass of single cross-sections for each projection angle (blue) and the least squares solutions fitting the viable paths (red) given by Equation 3. The results from cross-correlation for two cross-sections are given in (a, c), and the results from our alignment method for the same cross-sections are shown in (b, d).
In Figure 6, additional results are given using the alignment method described in [15]. Again, the 3-D visual comparison of the reconstructions shows that our alignment has produced a more rigid structure, as there is less red glow from the main particle, although the difference is less pronounced than in the comparison with cross-correlation. Similarly, the images of the 2-D cross-sections in Figure 6c,d,e,f show a more rigid structure and fewer noisy artifacts due to misalignment. The plots in Figure 6 give a quantitative comparison of the alignment approaches. In Figure 6g,h, the location of the global projected center of mass along the y-axis is shown for the two methods. The plot in Figure 6g shows the only quantity considered by the originally proposed center-of-mass alignment, since there the center of mass of each projection along the y-axis is shifted to the tilt axis. With pixelation of the images, there remains a small, negligible distance (less than half a pixel) between the center of mass in each projection and the tilt axis. The location of this center of mass resulting from our approach is shown in Figure 6h and does not necessarily follow a viable path, because we choose a different minimization and allow our approach to avoid problematic cross-sections. In Figure 6i,j, the path of the projected center of mass is shown for a single cross-section for the two alignment methods; for this cross-section, our method demonstrates a viable path, while the approach based on the single global center of mass does not. Ultimately, our method produces better reconstruction results, demonstrating that a more sophisticated alignment approach, as we have taken here, should be used for dependable results, taking into account not one single data point but rather all cross-sections as unique data points. The resulting segmentation of the alumina particle is shown in 3-D in a video in Additional file 2.
Figure 6: Results from the alignment in [15] and our approach. (a, b) 3-D volume renderings of the reconstructions from [15] (a) and from our method (b). (c, e) 2-D cross-section images of the 3-D reconstruction shown in (a). (d, f) Corresponding 2-D cross-sections of the 3-D reconstruction shown in (b). (g, h) Plots of the path of the projected global center of mass along the y-axis for the two alignment methods. (i, j) Plots of the path of the center of mass along a single cross-section of the projections for the two alignment methods.
Simulation results
As a numerical test, we reconstructed simulated data by projecting a discrete 3-D volume with binary intensities at the same tilt angles as the experimental data: a maximum tilt range of ±70° in 2° increments. We aligned the projection images according to the various alignment methods, and each realigned set of projections was reconstructed using TV minimization. The results from the simulations are shown in Figure 7. The total projected volume shows little variation with tilt angle, with the exception of a small mass that appears in the projection range only at high tilt angles.
This is indicated in the projection images shown in Figure 7a,b: in Figure 7a, the bundle of mass is located towards the upper right of the projection image, while in Figure 7b this bundle of mass has nearly moved completely out of the projected range. In this particular example, this small transition of mass significantly affects the results of an alignment approach such as that in [15]. This is very clear from the resulting blurry reconstruction in Figure 7e, which does not resemble a binary reconstruction. In addition, it can be seen in Figure 7d that, even in this noise-free simulation, cross-correlation also produces very poor results, simply because the model is not appropriate. In Figure 7c, it is seen that our center-of-mass approach still yields optimal results, displaying a nearly binary reconstruction image that almost completely resembles the original phantom (not shown in the figure). The adaptability of our method in choosing only the appropriate cross-sections, those with little variability of mass, is clearly advantageous, as demonstrated in these simulations.
Figure 7: Tomographic simulations with a binary 3-D phantom. (a, b) Projection images of the phantom tilted about the axis at −50° and −32°, respectively. (c-e) 2-D cross-sections of the reconstructed phantom after registering the data with different alignment techniques. (c) Result from our center-of-mass alignment method. (d) Result from cross-correlation. (e) Result from the originally proposed center-of-mass technique.
Our method has a sound physical basis: the movement of the center of mass in each cross-section. By selecting shifts for individual tilt-series images that globally lead to physically plausible motions for the centers of mass of many cross-sections, our method effectively uses the assumption that the sample object is rigid to improve the alignment and the resolution of the final reconstruction. We have shown that conventional alignment procedures, which shift the global center of mass to the origin, may not produce physically plausible motions in other cross-sections. We have generalized these methods in a computationally feasible manner that can easily be incorporated into electron tomography workflows. We have demonstrated the significance of such consistency between cross-sections and the effectiveness of the presented method by improving the resolution of 3-D reconstructions of simulated and experimental data.
Lucic, V, Forster, F, Baumeister, W: Structural studies by electron tomography: from cells to molecules. Ann. Rev. Biochem. 74, 833–865 (2005).
Midgley, P, Weyland, M: 3D electron microscopy in the physical sciences: the development of Z-contrast and EFTEM tomography. Ultramicroscopy 96, 413–431 (2003).
Arslan, I, Yates, T, Browning, N, Midgley, P: Embedded nanostructures revealed in three dimensions. Science 309(5744), 2195–2198 (2005).
Crowther, RA, Amos, LA, Finch, JT, De Rosier, DJ, Klug, A: Three dimensional reconstructions of spherical viruses by Fourier synthesis from electron micrographs. Nature 226(5244), 421–425 (1970).
Arslan, I, Tong, JR, Midgley, PA: Reducing the missing wedge: high-resolution dual axis tomography of inorganic materials. Ultramicroscopy 106(11–12), 994–1000 (2006).
Houben, L, Sadan, MB: Refinement procedure for the image alignment in high-resolution electron tomography. Ultramicroscopy 111(9–10), 1512–1520 (2011).
Guckenberger, R: Determination of a common origin in the micrographs of tilt series in three-dimensional electron microscopy. Ultramicroscopy 9(1–2), 167–173 (1982).
Saxton, W, Baumeister, W, Hahn, M: Three-dimensional reconstruction of imperfect two-dimensional crystals. Ultramicroscopy 13(1–2), 57–70 (1984).
Brandt, S, Heikkonen, J, Engelhardt, P: Multiphase method for automatic alignment of transmission electron microscope images using markers. J. Struct. Biol. 133(1), 10–22 (2001).
Fung, JC, Liu, W, de Ruijter, W, Chen, H, Abbey, CK, Sedat, JW, Agard, DA: Toward fully automated high-resolution electron tomography. J. Struct. Biol. 116(1), 181–189 (1996).
Masich, S, Östberg, T, Norlén, L, Shupliakov, O, Daneholt, B: A procedure to deposit fiducial markers on vitreous cryo-sections for cellular tomography. J. Struct. Biol. 156(3), 461–468 (2006).
Ress, D, Harlow, M, Schwarz, M, Marshall, R, McMahan, U: Automatic acquisition of fiducial markers and alignment of images in tilt series for electron tomography. J. Electron Microsc. 48(3), 277–287 (1999).
Brandt, S, Heikkonen, J, Engelhardt, P: Automatic alignment of transmission electron microscope tilt series without fiducial markers. J. Struct. Biol. 136(3), 201–213 (2001).
Sanchez Sorzano, CO, Messaoudi, C, Eibauer, M, Bilbao-Castro, JR, Hegerl, R, Nickell, S, Marco, S, Carazo, JM: Marker-free image registration of electron tomography tilt-series. BMC Bioinformatics 10, 124 (2009). http://biocomp.cnb.csic.es/~coss/Articulos/Sorzano2009b.pdf.
Scott, MC, Chen, C-C, Mecklenburg, M, Zhu, C, Xu, R, Ercius, P, Dahmen, U, Regan, BC, Miao, J: Electron tomography at 2.4-angstrom resolution. Nature 483(7390), 444–U91 (2012).
Li, C: Compressive sensing for 3D data processing tasks: applications, models, and algorithms. Dissertation, Rice University (2011).
Leary, R, Saghi, Z, Midgley, PA, Holland, DJ: Compressed sensing electron tomography. Ultramicroscopy 131, 70–91 (2013).
Goris, B, den Broek, WV, Batenburg, K, Mezerji, HH, Bals, S: Electron tomography based on a total variation minimization reconstruction technique. Ultramicroscopy 113, 120–130 (2012).
Monsegue, N, Jin, X, Echigo, T, Wang, G, Murayama, M: Three-dimensional characterization of iron oxide (alpha-Fe2O3) nanoparticles: application of a compressed sensing inspired reconstruction algorithm to electron tomography. Microsc. Microanal. 18(6), 1362–1367 (2012).
Tomviz for tomographic visualization of 3D scientific data. http://www.tomviz.org (2014). Accessed 15 August 2014.
The authors would like to thank Dr. Ilke Arslan for her helpful discussions. This research was supported in part by NSF grant DMS 1222390. It was also funded by the Laboratory Directed Research and Development program at Pacific Northwest National Laboratory, under contract DE-AC05-76RL01830.
Department of Mathematics, University of South Carolina, 1523 Greene Street, Columbia, SC 29208, USA: Toby Sanders & Peter Binev
Pacific Northwest National Laboratory, 902 Battelle Blvd, Richland, WA 99354, USA: Micah Prange
UOP LLC, a Honeywell Company, 50 E. Algonquin Rd., Des Plaines, IL 60016, USA: Cem Akatay
Correspondence to Toby Sanders.
TS derived the alignment methods and algorithms. TS and MP analyzed the technical issues of the methods and algorithms. PB assisted in the analysis of the methods and supervised the research.
CA generated the tomography data and analyzed the quality of the reconstructions. TS created the simulated tomography data. TS implemented the alignment and reconstruction algorithms and performed the analysis. TS drafted the manuscript. TS and MP revised the manuscript, and all authors discussed it. All authors read and approved the final manuscript.
Additional file 1: Video showing the sequence of aligned projection images of the alumina particle using the method proposed in this paper.
Additional file 2: Video showing the reconstructed alumina particle in 3-D.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Sanders, T., Prange, M., Akatay, C. et al. Physically motivated global alignment method for electron tomography. Adv Struct Chem Imag 1, 4 (2015). https://doi.org/10.1186/s40679-015-0005-7
Electron tomography
Imaging and Modeling in Electron Microscopy - Recent Advances
Selected articles from the 6th Biennial International Nursing Conference
The association of diabetes literacy with self-management among older people with type 2 diabetes mellitus: a cross-sectional study
Utami Rachmawati1, Junaiti Sahar1 & Dwi Nurviyandari Kusuma Wati1
Diabetes remains a major public health problem. As the disease progresses, it can lead to increasing complications as well as related deaths. As a chronic disease, diabetes can leave older people in a more vulnerable condition if they fail to carry out proper diabetes self-management. Diabetes literacy is an internal factor affecting how older people go about their diabetes management routines. This study aimed to describe the diabetes literacy of older people and to identify the relationship of diabetes literacy to diabetes self-management among older people with T2DM in selected areas of Depok City, West Java, Indonesia. A cross-sectional observational design was used with a sample of 106 older people with T2DM, all of whom were chosen via cluster sampling. The research took place in five selected areas under the supervision of three public health centers in Depok City. The data were analyzed using the independent t-test and the Pearson product-moment correlation for bivariate analysis, and logistic regression for multivariate analysis, to determine the relationship between the independent and dependent variables. This research shows a significant correlation between diabetes literacy and diabetes self-management (p = 0.011). Diabetes self-management is associated with diabetes literacy in older people with type 2 diabetes mellitus. Diabetes literacy should be considered when assessing and addressing diabetes-specific health education needs.
Degenerative health problems that older people experience are closely related to the aging process as well as to biological and lifestyle risk factors. One of the health problems that often occurs in old age due to these various factors is Type 2 Diabetes Mellitus (T2DM). There are 387 million people all over the world with T2DM, and this number continues to grow [1]. The World Health Organization (WHO) report (2015) showed that diabetes is one of the main causes of mortality worldwide. The International Diabetes Federation records in 2015 also indicated that the prevalence of T2DM in Indonesia increases with age and reaches its peak, at nearly 15%, at ages 60–64. T2DM ranks fifth among the contributing diseases in old age [2]. The figure rose every year, from 1.1% in 2007 to 2.1% in 2013, and Depok, one of the cities in West Java, also shows a significant increase in the number of diabetics, from 4,834 in 2015 to 7,365 in 2016. Diabetes mellitus is characterized by increased levels of glucose in the blood, or hyperglycemia, due to abnormalities in insulin secretion or insulin action, or both [3]. Knowledge of the disease among older people remains limited, as can be seen from the reports of a study conducted in Cimanggis Sub-municipality, which indicate that 45% of older people (out of a total of 99 respondents) still had inadequate knowledge of diabetes, and that 25% of them did not know about diabetes and its complications. The results of a preliminary study conducted by the researcher in 2016 found that compliance with diabetes self-management is not yet optimal, with 60% of respondents still consuming restricted foods, such as those high in calories or sugar, and 26.7% not exercising regularly.
This shows that the increasing number of older people with T2DM reflects non-optimal knowledge and implementation of diabetes self-management. The results of regular health monitoring by Posbindu (the community healthcare center for older people) indicate a lack of participation by older people. This finding was shown in the Posbindu staff's report, which stated that the older people who come are always the same people every month. There are still older people with diabetes who do not come regularly to Posbindu for a check-up [4]. This is not only the case in one village, but also in other areas, as shown in a study by Rusdianingseh in 2014 reporting a continued lack of enthusiasm in the community, which has left Posbindu without a complete record of clients with T2DM in that region. The Health Program for Older People with Diabetes Mellitus (LANSET DM in Bahasa) developed by Ratnawati in 2013 has not been duplicated in other villages. Interventions related to the issue of diabetes in older people have also been implemented through health education and direct intervention through home visits [5]. Nevertheless, the secondary data from the local Community Health Center in 2016 still indicate persistently high rates of T2DM in older people. Health education is one of the multidisciplinary approaches to overcoming the problem of older people with T2DM, and it relies on cognitive ability and health literacy [6, 7]. Health literacy refers to the ability of a person to seek, process, understand, and apply the necessary information regarding their health [8]. The diabetes literacy components observed in this study include Basic/Functional Literacy, Communicative/Interactive Literacy, and Critical Literacy [9]. Health literacy is known to determine the successful achievement of health outcomes [10], as well as to improve patients' diabetes self-management [11]. Studies suggest that low health literacy leads to poor self-management knowledge and abilities [12], poorer glycaemic control [13], and higher HbA1c levels in people with diabetes [14]. Low health literacy can also indicate that health promotion techniques are not used appropriately [8]. Low health literacy is linked to declining health status in older people and results in low compliance with disease prevention programs [15]. A health literacy study of T2DM patients in Taiwan shows that health literacy has an indirect impact on patients' blood sugar control [16]. The study also recommends improving diabetes self-care through the betterment of patients' health literacy and self-efficacy [17]. As such, health literacy is closely related to a patient's health behavior. Promotional actions are necessary to minimize the burdens of diabetes on the individuals, families, and governments affected. Promotional programs can reduce the costs of inpatient hospital care, analogue insulin, and outpatient care [18]. The above explanation describes the factors in older people that can affect diabetes self-management in their treatment of diabetes. Diabetes literacy is a factor related to the diabetes self-management of older people with T2DM; although there are already some studies on this topic, there is a dearth of information regarding it, especially in Indonesia. This study aimed to describe the diabetes literacy of older people in Depok City and to identify the relationship of diabetes literacy to the diabetes self-management of older people with diabetes.
Study area and study period
The study was carried out in several areas of Depok City from February to June 2016. Three Puskesmas (the Regional Public Service Agency of Community Health Service) were used in this study, namely Puskesmas Tugu, Puskesmas Cimanggis, and Puskesmas Sukatani. For each selected Puskesmas, two municipalities were chosen for this study, except for Puskesmas Tugu, for which only one area was selected. This research used a descriptive correlational design with a cross-sectional observational approach.
Source of population
All of the older people with T2DM in Depok City.
Study population
All the sampled older people with T2DM who have been living in areas under the supervision of Puskesmas Tugu, Puskesmas Sukatani, and Puskesmas Cimanggis of Depok City, West Java, Indonesia.
Inclusion criteria
Older people diagnosed with T2DM who were able to communicate, read, and write in Bahasa Indonesia.
Exclusion criteria
Older people with T2DM who had difficulties in both speaking and hearing.
Sampling size determination
Samples were collected in 5 different areas according to the inclusion criteria. The sample size was determined using the following assumptions, and a single population proportion formula
$$ n = \frac{Z_{\alpha/2}^{2}\, P\, Q}{d^{2}} $$
was employed, where n is the desired sample size, \( Z_{\alpha/2} \) is the standard normal score for a 95% confidence interval = 1.96, d is the degree of accuracy = 0.05, and P = 50% is the estimated proportion of older people with diabetes in the five selected areas (Q = 1 − P). Since the total source population was less than ten thousand, the corrected sample size was 96; by adding 10% for the possible non-response rate, a total sample size of 106 was obtained. Proportional allocation was used to allocate the sample across the five selected areas. The areas were chosen purposively from those previously used as nursing work-study areas. A cluster sampling method was used, as the sample was taken from different areas. After determining the proportion for each area, 106 older people with T2DM were chosen randomly from data obtained from both public health centers and social workers.
Data collection procedure
Data were collected using a structured, self-administered questionnaire that was distributed and collected by enumerators from 15 to 31 June 2017.
Data collection tool
The three instruments used in this research were the socio-demographic questionnaire (which queried age, sex, educational background, household income, ethnicity, family history of diabetes, and information media used for daily and health education needs), the diabetes literacy questionnaire, adapted from the Functional, Communicative, and Critical Health Literacy Scale [19], and the diabetes self-management questionnaire used by Masi in 2016, which was adopted and translated into Bahasa Indonesia from The Diabetes Self-Management Questionnaire (DSMQ) [20]. The validity test results show that the item correlations for the diabetes literacy questionnaire ranged from 0.429 to 0.742 and for the DSMQ from 0.376 to 0.797. However, one item from the diabetes literacy questionnaire was not valid (0.197). The wording of this item was modified so that it could be used.
The reliability test results show that the Cronbach's alpha value for the diabetes literacy questionnaire is 0.743 and for the diabetes self-management questionnaire 0.667; based on these values, both were judged to have adequate internal consistency as research questionnaires. To assure data quality, an orientation was given to experienced enumerators. The enumerators, as data collectors, were responsible for checking for missing answers at each point. The data were also checked during entry and before analysis. The schematic sampling procedure is described in Fig. 1.
Fig. 1: Schematic sampling procedure (n = 106)
Study variable
Diabetes self-management
Data analysis procedure
The data were entered and analyzed using the statistical data analysis software SPSS version 17. The statistical analysis was made at the 95% confidence level. The bivariate analysis to determine the significance of the relationship between the diabetes self-management variable and the diabetes literacy variable was conducted using the Pearson product-moment correlation test. The multivariate analysis was conducted using logistic regression.
Ethical consideration
No physical or psychological harm was done to the participants. This research ensured that no force was applied to participants to join the study. The study was also approved by the Faculty of Nursing Universitas Indonesia Research Ethics Committee (reference: No. 100/UN2.F12.D/HKP.02.04/2017).
Socio-demographic characteristics of the respondents
In this study, 106 older people with diabetes were recruited. Among the participants, the median age of the respondents was 64 years (95% CI: 64.51–66.25) and the median income was 2,000,000 IDR (95% CI: 1,937,586.77–2,165,566.04). The detailed characteristics of the respondents are described in Table 1.
Table 1: Socio-demographic characteristics of respondents (n = 106)
The characteristics of the respondents show that their average age is 65 years, the majority of them are of the Javanese and Betawi ethnic groups, there is a balanced proportion of low and middle education, their average income is 2 million IDR, and most have a family history of diabetes. Respondents could choose more than one option regarding the information media they used, and the majority of them use television. A large share of their health information was obtained from family and friends. Furthermore, the mean diabetes literacy score of the respondents was 42.95 (95% CI: 41.55–44.36). Diabetes literacy and diabetes self-management are described in Table 2.
Table 2: Association of diabetes literacy with diabetes self-management (n = 106)
Diabetes literacy significantly affects diabetes self-management, with a p value of 0.011. However, the correlation coefficient of 0.246 indicates a weak relationship between these variables. This shows that the better someone's diabetes literacy, the better their diabetes self-management. Respondents with good diabetes literacy will have the ability to search for, process, and apply good health information. Thus, if the information needed to implement diabetes self-management has been well received and understood, diabetes self-management will also be well implemented. Moreover, the analysis of the diabetes literacy components is described in Table 3.
Table 3: Component analysis of diabetes literacy (n = 106)
The final model from the multivariate analysis, after going through the candidate, interaction, and confounder testing process, is described in Table 4.
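As an illustration of the analysis pipeline described above, and not a re-analysis of the study data, the sketch below applies the Pearson product-moment correlation and a logistic regression adjusted for age and income to a hypothetical data set; the column names, cut-offs, and simulated values are assumptions.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

# Hypothetical data standing in for the study variables (values are simulated)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "literacy_score": rng.normal(43, 7, 106),     # diabetes literacy (continuous)
    "self_mgmt_score": rng.normal(30, 5, 106),    # diabetes self-management (continuous)
    "age": rng.integers(60, 80, 106),             # years
    "income": rng.normal(2.0, 0.5, 106),          # monthly income, millions of IDR
})
df["good_literacy"] = (df.literacy_score >= df.literacy_score.median()).astype(int)
df["good_self_mgmt"] = (df.self_mgmt_score >= df.self_mgmt_score.median()).astype(int)

# Bivariate analysis: Pearson product-moment correlation
r, p = pearsonr(df.literacy_score, df.self_mgmt_score)
print(f"r = {r:.3f}, p = {p:.3f}")

# Multivariate analysis: logistic regression controlling for age and income
model = smf.logit("good_self_mgmt ~ good_literacy + age + income", data=df).fit(disp=0)
print(np.exp(model.params))  # Exp(B), i.e., adjusted odds ratios
```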
The results show that respondents with good diabetes literacy are two times more likely to perform good diabetes self-management than patients with poor diabetes literacy, after controlling for age and income. This means that good diabetes literacy is associated with good diabetes-related knowledge as well. Good knowledge of diabetes can change one's behavior, in this case by improving diabetes self-management.
Table 4: Final model resulting from the multivariate analysis related to diabetes self-management (n = 106)
The results of the statistical tests in this research indicate that there is a significant relationship between diabetes literacy and diabetes self-management. These results are supported by a study conducted by Bailey et al. in 2014, which suggests that low health literacy is associated with low knowledge of diabetes [21]. Another similar study also explains that the higher a person's health literacy, the higher their participation in self-management [12]. Low health literacy is associated with declining health status in older people and results in low compliance with disease prevention programs [15]. Ishikawa and Yano also state that diabetes patients with a good level of health literacy tend to demonstrate a good level of participation and diabetes self-care efficacy [22]. Another study that supports these results, conducted by Fransen et al., states that low health literacy is associated with low knowledge of diabetes and leads to poor self-management as well [23]. A health literacy study of type 2 diabetes patients in Taiwan shows that health literacy has an indirect impact on patients' blood sugar control, one component of patients' diabetes self-management [16]. This finding, however, contradicts that of Al Sayah, Majumdar, and Johnson (2015), who state that health literacy does not have a direct impact on the health status of diabetics [13]. The functional literacy component of the respondents received varying assessments. However, more respondents state that the counseling or health information that they receive today is delivered with ease, that they now rarely encounter difficult terms or writing that is too small, and that the information is easy to understand. Respondents who need help to understand health information amount to one-third of the total. White reveals that health literacy is related to one's experience with the structure of health information, such as brochures for patients [24]. This shows that the role of media is quite important in delivering health information to respondents. Older people with declining physical function will certainly experience a decline in eyesight and cognitive ability. Therefore, it is not uncommon to find older people who express having difficulties or needing help when receiving information. As many as a third of the respondents in this study still need help from someone else in reading or understanding a piece of health information. The respondents consider that their family should also know and read the information delivered to them to help them remember it more easily. This statement from the respondents is consistent with a study by Strizich et al. in 2016, which states that older people with declining cognitive abilities and minimal family support are more likely to suffer from uncontrolled diabetes [25].
During data collection, spontaneous interviews showed that the communicative literacy ability of the respondents is fairly good, because the respondents have begun to seek and gather information related to their health. This may be because the research took place in a practice area for nursing students, which is also a factor enabling the high rate of searching for and applying health information in this research. Even though the use of information media did not statistically affect self-management, it may be considered a contributing factor. The respondents of this research have often received information from the work-study nursing students as well as from the health workers who are regularly on duty at Posbindu (the community healthcare center for older people) and the community healthcare center. Having received this information, the respondents have made attempts to understand and apply it in their daily lives. The research conducted by von Wagner, Wolf, Steptoe, and Wardle in 2009 shows that self-management requires certain abilities, such as the ability to understand information and how lifestyle can affect diabetes [26]. The ability to seek and collect health information is also affected by the existing access to health information. Pawlak finds that information technology is one of the determinants of health literacy [27]. Ease of information collection will certainly further increase the search for and collection of information by respondents. The research conducted by Santosa showed that one factor that has an impact on health literacy is access to health information [28]. Ferawati, Hasibuan, and Wicaksono state that diabetes management is also affected by informational support [29]. The results of this study are supported by the results of a similar study on access to health information and diabetes care management conducted by Lai et al., who state that health decisions made and implemented by individuals are affected by comprehensive, accessible health information that is appropriate to their needs and socio-cultural backgrounds [30]. This means that an individual's diabetes self-management will be good if the necessary health information is easy to obtain and meets the individual's needs as well as characteristics such as education and the cultural values the individual holds. Informational support from the family is known to also affect diabetes self-management. The results of this research show that half of the respondents get health information from their friends and family. This finding is consistent with that of Netismar in 2017, who states that informational support from the family is one of the motivations for the implementation of diabetes self-management [31]. The majority of research respondents had received health-related information during the past month. Apart from their family, the health information that they received also came from the health workers at the public health center, Posbindu, and elsewhere. This shows that access to individual health information is functioning well and that health personnel play a role as health information providers during the diabetes treatment undergone by the respondents; both of these affect the communicative literacy aspect.
These findings are reinforced by research conducted by Santosa, who states that health information provided by health personnel can now be more easily digested, only that it is not given frequently enough [28]. For example, a respondent stated that during a doctor visit, brief counselling was given, with adjustments made so that it was easier for the patient to understand medical terms. The respondents do not visit the health care facility every day, so there is a need for other media through which health information can be easily accessed. Interviews with the respondents indicate that television is more interesting because they can see for themselves what is being delivered. This is in line with the findings of Newbold and Campos (2011), who assert that radio and television are more effective than print media in terms of audience reach and the repeatability of the message [32]. This suggests that print media are not very efficient for delivering health education to older people. The gap filled by this research, relating diabetes literacy to diabetes self-management, lies in the scarcity of research that examines diabetes literacy in more specific terms and its relationship with diabetes self-management. The findings of this research indicate that diabetes self-management will be good if accompanied by efforts to obtain, process, and apply good health information as well. Educational background and functional status will also correspond to different levels of diabetes literacy in older people and, as such, can render uniform health education efforts ineffective. The results of this research also show that older people access health information mostly through television, an audiovisual medium. Therefore, nurses ought to be able to design health education using audiovisual media, taking into consideration the age and education of older people.
There is a significant relationship between diabetes literacy and the diabetes self-management of older people with diabetes, such that the better their diabetes literacy, the better their self-management. Diabetes literacy in older people can improve their diabetes self-management, along with their ability to seek and apply available information on diabetes through the use of suitable information media. It is expected that the results of this research will provide input and material for consideration in overcoming the problems of diabetes in older people by considering the aspect of diabetes literacy when assessing and addressing diabetes-specific health education needs, one method being individual health education interventions conducted using audiovisual media.
CI: Confidence interval
DRPM: Directorate of Research and Community Engagement (Direktorat Riset dan Pengabdian Masyarakat in Bahasa)
DSMQ: The Diabetes Self-Management Questionnaire
Exp(B): Exponent of beta
LANSET DM: A term used in Bahasa for the Health Program for Older People with Diabetes Mellitus
PITTA: Internationally Indexed Publication for Final Assignment (Publikasi Terindeks Internasional untuk Tugas Akhir in Bahasa)
SD: Standard deviation
T2DM: Type 2 diabetes mellitus
International Diabetes Federation. IDF Diabetes Atlas, 6th edn update, poster. Brussels: International Diabetes Federation; 2014.
Pusat Data dan Informasi Kementrian Kesehatan RI. Situasi lanjut usia (lansia) di Indonesia. Available from: http://www.depkes.go.id/resources/download/pusdatin/infodatin/infodatin%20lansia%202016.pdf.
Accessed 16 Mar 2016.
Soelistijo SA, et al. Konsensus Pengelolaan dan Pencegahan diabetes melitus tipe 2 di Indonesia 2015. Available from: https://pbperkeni.or.id/wp-content/uploads/2019/01/4.-Konsensus-Pengelolaan-dan-Pencegahan-Diabetes-melitus-tipe-2-di-Indonesia-PERKENI-2015.pdf
Rusdianingseh. Pengalaman klien dalam pengendalian diabetes melitus tipe 2 di Kelurahan Sukatani Depok. Depok: Faculty of Nursing, University of Indonesia; 2015.
Ratnawati D. Program Lansia Sehat dengan diabetes mellitus (LANSET DM) sebagai Strategi Intervensi Keperawatan Komunitas dalam Pengendalian DM pada Kelompok Lansia di Kelurahan Cisalak Pasar, Cimanggis, Depok. Depok: Faculty of Nursing, University of Indonesia; 2014.
Nguyen HA, et al. Cognitive function is a risk for health literacy in older adults with diabetes. Diabetes Res Clin Pract. 2013;101:141–7. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.diabres.2013.05.012.
Andrade I, Silva C, Martins AC. Application of the health literacy INDEX on the development of a manual for prevention of falls for older adults. Patient Educ Couns. 2017;100(1):154–9. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.pec.2016.07.036.
Sørensen K, et al. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health. 2012;12:80.
Nutbeam D. Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century. Health Promot Int. 2000;15(3):259–67. Available from: https://0-doi-org.brum.beds.ac.uk/10.1093/heapro/15.3.259.
Van den Broucke S. Health literacy: a critical concept for public health. Arch Public Health. 2014;72(10):1-2. https://0-doi-org.brum.beds.ac.uk/10.1186/2049-3258-72-10.
Vandenbosch J, et al. The impact of health literacy on diabetes self-management education. Health Educ J. 2018;77(3):349–62. https://0-doi-org.brum.beds.ac.uk/10.1177/0017896917751554.
Van der Heide I, et al. Association among health literacy, diabetes knowledge, and self-management behavior in adults with diabetes: results of a Dutch cross-sectional study. J Health Commun. 2014;19(2):115–31. Available from: https://0-doi-org.brum.beds.ac.uk/10.1080/10810730.2014.936989.
Al Sayah F, Majumdar SR, Johnson JA. Association of inadequate health literacy with health outcomes in patients with type 2 diabetes and depression: secondary analysis of a controlled trial. Can J Diabetes. 2015;39:259–65. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.jcjd.2014.11.005.
Souza JG, Apolinaro D, Magaldi RM, Busse AL, Campora F, Jacob-Filho W. Functional health literacy and glycaemic control in older adults with type 2 diabetes outcomes. BMJ Open. 2014;4(2):e004180.
MacLeod S, Musich S, Gulyas S, Cheng Y, Tkatch R, Cempellin D, … Yeh CS. The impact of inadequate health literacy on patient satisfaction, healthcare utilization, and expenditures among older adults. 2016. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.gerinurse.2016.12.003.
Tseng H-M, Liao S-F, Wen Y-P, Chuang Y-J. Stages of change concept of the transtheoretical model for healthy eating links health literacy and diabetes knowledge to glycemic control in people with type 2 diabetes. Prim Care Diabetes. 2017;11:29–36. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.pcd.2016.08.005.
Lee E-H, Lee YW, Moon SH. A structural equation model linking health literacy to self-efficacy, self-care activities, and health-related quality of life in patients with type 2 diabetes.
Asian Nurs Res. 2016;10:82–7. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.anr.2016.01.005.
World Health Organization. Global report on diabetes: executive summary. 2016. Available from: http://apps.who.int/iris/bitstream/10665/204874/1/WHO_NMH_NVI_16.3_eng.pdf
Ishikawa H, Takeuchi T, Yano E. Measuring functional, communicative, and critical health literacy among diabetes patients. Diabetes Care. 2008;31:874–9.
Schmitt A, Gahr A, Hermanns N, Kulzer B, Huber J, Hak T. The Diabetes Self-Management Questionnaire (DSMQ): development and evaluation of an instrument to assess diabetes self-care activities associated with glycaemic control. Health Qual Life Outcomes. 2013;11(138). Available from: https://0-doi-org.brum.beds.ac.uk/10.1186/1477-7525-11-138.
Bailey SC, Brega AG, Crutchfield TM, Elasy T, Herr H, Kaphingst K, et al. Update on health literacy and diabetes. Diabetes Educ. 2014;40(5):581–604. Available from: https://0-doi-org.brum.beds.ac.uk/10.1177/0145721714540220.
Ishikawa H, Yano E. The relationship of patient participation and diabetes outcomes for patients with high vs low health literacy. Patient Educ Couns. 2011;84(3):393–397. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.pec.2011.01.029.
Fransen MP, von Wagner C, Essink-Bot M-L. Diabetes self-management in patients with low health literacy: ordering findings from literature in health literacy framework. Patient Educ Couns. 2011;88:44-53. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.pec.2011.11.015.
White S, Chen J, Atchison R. Relationship of preventive health practice and health literacy: a national study. Am J Health Behav. 2008;32(3):227–42.
Strizich G, et al. Glycemic control, cognitive function, and family support among middle-aged and older Hispanics with diabetes: the Hispanic community health study/study of Latinos. Diabetes Res Clin Pract. 2016;117:64–73. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.diabres.2016.04.052.
von Wagner C, Steptoe A, Wolf MS, Wardle J. Health literacy and health actions: a review and a framework from health psychology. Health Educ Behav. 2009;36(5):860-877. Available from: https://0-doi-org.brum.beds.ac.uk/10.1177/1090198108322819.
Pawlak R. Economic considerations of health literacy. J Nurse Econ. 2005;23(4):173–80.
Santosa KS. Faktor-faktor yang berhubungan dengan tingkat kemelekan kesehatan pasien di klinik dokter keluarga fakultas kedokteran Universitas Indonesia Kiara, DKI Jakarta. (Thesis). Depok: Faculty of Medicine, University of Indonesia; 2012.
Ferawati HPJ, Wicaksono A. Hubungan dukungan keluarga dengan perilaku pengelolaan penatalaksanaan DM tipe 2 di wilayah kerja Puskesmas Purnama Kecamatan Pontianak Selatan, Kota Pontianak. Kota Pontianak: Universitas Tanjungpura; 2014. Available from: https://media.neliti.com/media/publications/206333-hubungan-dukungan-keluarga-dan-perilaku.pdf
Lai A, Ishikawa T, Kiuchi T, Mooppil N, Griva K. Communicative and critical health literacy, and self-management behaviors in end-stage renal disease patients with diabetes on hemodialysis. Patient Educ Couns. 2013;91:221-227. Available from: https://0-doi-org.brum.beds.ac.uk/10.1016/j.pec.2012.12.018.
Netismar. Hubungan karakteristik, dukungan keluarga, dan motivasi diabetesi tipe 2 dengan pemanfaatan pelayanan kesehatan di Kecamatan Jagakarsa Jakarta Selatan. (Tesis). Depok: Faculty of Nursing University of Indonesia; 2017.
Newbold KB, Campos S. Media and social media in public health messages: a systematic review.
Hamilton: McMaster University, McMaster Institute of Environment & Health; 2011. Available from: http://www.mcmaster.ca/mieh/documents/publications/Social%20Media%20Report.pdf
The researchers would like to express their sincere thanks to all those who helped during the research, most importantly all research participants. Thank you to the Faculty of Nursing of Universitas Indonesia. The publication of the results of this study is funded by a PITTA Grant provided by the DRPM of Universitas Indonesia, as listed in 381/UN.2.R3.1/HKP.05.00/2017. Data and questionnaires will be available upon request. This article has been published as part of BMC Nursing Volume 18 Supplement 1, 2019: Selected articles from the 6th Biennial International Nursing Conference. The full contents of the supplement are available online at https://0-bmcnurs-biomedcentral-com.brum.beds.ac.uk/articles/supplements/volume-18-supplement-1.
Faculty of Nursing Universitas Indonesia, Jalan Prof. Dr. Bahder Djohan, Depok, West Java, 16424, Indonesia: Utami Rachmawati, Junaiti Sahar & Dwi Nurviyandari Kusuma Wati
All authors read and approved the final version of the manuscript. Correspondence to Junaiti Sahar. This study was approved by the Ethics Committee of the Faculty of Nursing of Universitas Indonesia (No. 100/UN2.F12.D/HKP.02.04/2017), and the participants provided informed consent to participate in this study. Patient consent for publication was received.
Rachmawati, U., Sahar, J. & Wati, D.N.K. The association of diabetes literacy with self-management among older people with type 2 diabetes mellitus: a cross-sectional study. BMC Nurs 18, 34 (2019). DOI: https://0-doi-org.brum.beds.ac.uk/10.1186/s12912-019-0354-y