5.7: Multiple Logistic Regression
[ "article:topic", "logistic regression", "authorname:mcdonaldj", "showtoc:no" ]
Book: Biological Statistics (McDonald)
5: Tests for Multiple Measurement Variables
Contributed by John H. McDonald
Associate Professor (Biological Sciences) at University of Delaware
When to use it
Null hypothesis
Using nominal variables in a multiple logistic regression
Selecting variables in multiple logistic regression
Graphing the results
Similar tests
How to do multiple logistic regression
Power analysis
Use multiple logistic regression when you have one nominal variable and two or more measurement variables, and you want to know how the measurement variables affect the nominal variable. You can use it to predict probabilities of the dependent nominal variable, or if you're careful, you can use it for suggestions about which independent variables have a major effect on the dependent variable.
Use multiple logistic regression when you have one nominal and two or more measurement variables. The nominal variable is the dependent (\(Y\)) variable; you are studying the effect that the independent (\(X\)) variables have on the probability of obtaining a particular value of the dependent variable. For example, you might want to know the effect that blood pressure, age, and weight have on the probability that a person will have a heart attack in the next year.
Heart attack vs. no heart attack is a binomial nominal variable; it only has two values. You can perform multinomial multiple logistic regression, where the nominal variable has more than two values, but I'm going to limit myself to binary multiple logistic regression, which is far more common.
The measurement variables are the independent (\(X\)) variables; you think they may have an effect on the dependent variable. While the examples I'll use here only have measurement variables as the independent variables, it is possible to use nominal variables as independent variables in a multiple logistic regression; see the explanation on the multiple linear regression page.
Epidemiologists use multiple logistic regression a lot, because they are concerned with dependent variables such as alive vs. dead or diseased vs. healthy, and they are studying people and can't do well-controlled experiments, so they have a lot of independent variables. If you are an epidemiologist, you're going to have to learn a lot more about multiple logistic regression than I can teach you here. If you're not an epidemiologist, you might occasionally need to understand the results of someone else's multiple logistic regression, and hopefully this handbook can help you with that. If you need to do multiple logistic regression for your own research, you should learn more than is on this page.
The goal of a multiple logistic regression is to find an equation that best predicts the probability of a value of the \(Y\) variable as a function of the \(X\) variables. You can then measure the independent variables on a new individual and estimate the probability of it having a particular value of the dependent variable. You can also use multiple logistic regression to understand the functional relationship between the independent variables and the dependent variable, to try to understand what might cause the probability of the dependent variable to change. However, you need to be very careful. Please read the multiple regression page for an introduction to the issues involved and the potential problems with trying to infer causes; almost all of the caveats there apply to multiple logistic regression, as well.
As an example of multiple logistic regression, in the 1800s, many people tried to bring their favorite bird species to New Zealand, release them, and hope that they become established in nature. (We now realize that this is very bad for the native species, so if you were thinking about trying this, please don't.) Veltman et al. (1996) wanted to know what determined the success or failure of these introduced species. They determined the presence or absence of \(79\) species of birds in New Zealand that had been artificially introduced (the dependent variable) and \(14\) independent variables, including number of releases, number of individuals released, migration (scored as \(1\) for sedentary, \(2\) for mixed, \(3\) for migratory), body length, etc. Multiple logistic regression suggested that number of releases, number of individuals released, and migration had the biggest influence on the probability of a species being successfully introduced to New Zealand, and the logistic regression equation could be used to predict the probability of success of a new introduction. While hopefully no one will deliberately introduce more exotic bird species to new territories, this logistic regression could help understand what will determine the success of accidental introductions or the introduction of endangered species to areas of their native range where they had been eliminated.
The main null hypothesis of a multiple logistic regression is that there is no relationship between the \(X\) variables and the \(Y\) variable; in other words, the \(Y\) values you predict from your multiple logistic regression equation are no closer to the actual \(Y\) values than you would expect by chance. As you are doing a multiple logistic regression, you'll also test a null hypothesis for each \(X\) variable, that adding that \(X\) variable to the multiple logistic regression does not improve the fit of the equation any more than expected by chance. While you will get \(P\) values for these null hypotheses, you should use them as a guide to building a multiple logistic regression equation; you should not use the \(P\) values as a test of biological null hypotheses about whether a particular \(X\) variable causes variation in \(Y\).
Multiple logistic regression finds the equation that best predicts the value of the \(Y\) variable for the values of the \(X\) variables. The \(Y\) variable is the probability of obtaining a particular value of the nominal variable. For the bird example, the values of the nominal variable are "species present" and "species absent." The \(Y\) variable used in logistic regression would then be the probability of an introduced species being present in New Zealand. This probability could take values from \(0\) to \(1\). The limited range of this probability would present problems if used directly in a regression, so the odds, \(Y/(1-Y)\), is used instead. (If the probability of a successful introduction is \(0.25\), the odds of having that species are \(0.25/(1-0.25)=1/3\). In gambling terms, this would be expressed as "\(3\) to \(1\) odds against having that species in New Zealand.") Taking the natural log of the odds makes the variable more suitable for a regression, so the result of a multiple logistic regression is an equation that looks like this:
\[\ln \left [ \frac{Y}{1-Y} \right ]=a+b_1X_1+b_2X_2+b_3X_3+...\]
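For instance, plugging the probability from the gambling example above into the left side of this equation:

\[Y=0.25:\qquad \frac{Y}{1-Y}=\frac{0.25}{1-0.25}=\frac{1}{3},\qquad \ln\left(\frac{1}{3}\right)\approx -1.10\]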
You find the slopes (\(b_1,\; b_2\), etc.) and intercept (\(a\)) of the best-fitting equation in a multiple logistic regression using the maximum-likelihood method, rather than the least-squares method used for multiple linear regression. Maximum likelihood is a computer-intensive technique; the basic idea is that it finds the values of the parameters under which you would be most likely to get the observed results.
You might want to have a measure of how well the equation fits the data, similar to the \(R^2\) of multiple linear regression. However, statisticians do not agree on the best measure of fit for multiple logistic regression. Some use deviance, \(D\), for which smaller numbers represent better fit, and some use one of several pseudo-\(R^2\) values, for which larger numbers represent better fit.
You can use nominal variables as independent variables in multiple logistic regression; for example, Veltman et al. (1996) included upland use (frequent vs. infrequent) as one of their independent variables in their study of birds introduced to New Zealand. See the discussion on the multiple linear regression page about how to do this.
Whether the purpose of a multiple logistic regression is prediction or understanding functional relationships, you'll usually want to decide which variables are important and which are unimportant. In the bird example, if your purpose was prediction it would be useful to know that your prediction would be almost as good if you measured only three variables and didn't have to measure more difficult variables such as range and weight. If your purpose was understanding possible causes, knowing that certain variables did not explain much of the variation in introduction success could suggest that they are probably not important causes of the variation in success.
The procedures for choosing variables are basically the same as for multiple linear regression: you can use an objective method (forward selection, backward elimination, or stepwise), or you can use a careful examination of the data and understanding of the biology to subjectively choose the best variables. The main difference is that instead of using the change of \(R^2\) to measure the difference in fit between an equation with or without a particular variable, you use the change in likelihood. Otherwise, everything about choosing variables for multiple linear regression applies to multiple logistic regression as well, including the warnings about how easy it is to get misleading results.
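To illustrate the change-in-likelihood comparison, here is a minimal R sketch (not from this handbook); it assumes the New Zealand bird data from the example below are in a data frame called birds, with status coded as 0 or 1:

full <- glm(status ~ release + upland + migr, family = binomial, data = birds)
reduced <- glm(status ~ release + upland, family = binomial, data = birds)
anova(reduced, full, test = "LRT")   # does adding migr improve the fit more than expected by chance?

The likelihood-ratio test compares the two nested models, which is the change-in-likelihood comparison described above.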
Multiple logistic regression assumes that the observations are independent. For example, if you were studying the presence or absence of an infectious disease and had subjects who were in close contact, the observations might not be independent; if one person had the disease, people near them (who might be similar in occupation, socioeconomic status, age, etc.) would be likely to have the disease. Careful sampling design can take care of this.
Multiple logistic regression also assumes that the natural log of the odds ratio and the measurement variables have a linear relationship. It can be hard to see whether this assumption is violated, but if you have biological or statistical reasons to expect a non-linear relationship between one of the measurement variables and the log of the odds ratio, you may want to try data transformations.
Multiple logistic regression does not assume that the measurement variables are normally distributed.
Some obese people get gastric bypass surgery to lose weight, and some of them die as a result of the surgery. Benotti et al. (2014) wanted to know whether they could predict who was at a higher risk of dying from one particular kind of surgery, Roux-en-Y gastric bypass surgery. They obtained records on \(81,751\) patients who had had Roux-en-Y surgery, of which \(123\) died within \(30\) days. They did multiple logistic regression, with alive vs. dead after \(30\) days as the dependent variable, and \(6\) demographic variables (gender, age, race, body mass index, insurance type, and employment status) and \(30\) health variables (blood pressure, diabetes, tobacco use, etc.) as the independent variables. Manually choosing the variables to add to their logistic model, they identified six that contribute to risk of dying from Roux-en-Y surgery: body mass index, age, gender, pulmonary hypertension, congestive heart failure, and liver disease.
Benotti et al. (2014) did not provide their multiple logistic equation, perhaps because they thought it would be too confusing for surgeons to understand. Instead, they developed a simplified version (one point for every decade over \(40\), \(1\) point for every \(10\) BMI units over \(40\), \(1\) point for male, \(1\) point for congestive heart failure, \(1\) point for liver disease, and \(2\) points for pulmonary hypertension). Using this RYGB Risk Score they could predict that a \(43\)-year-old woman with a BMI of \(46\) and no heart, lung, or liver problems would have a \(0.03\%\) chance of dying within \(30\) days, while a \(62\)-year-old man with a BMI of \(52\) and pulmonary hypertension would have a \(1.4\%\) chance.
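As a hypothetical sketch in R of the simplified scoring rule just described (the function name and the rounding down of partial decades and partial 10-unit BMI steps are assumptions for illustration, not details given by Benotti et al.):

rygb_score <- function(age, bmi, male, chf, liver, pulm_htn) {
  max(0, floor((age - 40) / 10)) +    # 1 point for every decade over 40 (rounding down assumed)
    max(0, floor((bmi - 40) / 10)) +  # 1 point for every 10 BMI units over 40 (rounding down assumed)
    male + chf + liver +              # 1 point each, coded 0 or 1
    2 * pulm_htn                      # 2 points for pulmonary hypertension
}
rygb_score(age = 62, bmi = 52, male = 1, chf = 0, liver = 0, pulm_htn = 1)   # 6 points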
Graphs aren't very useful for showing the results of multiple logistic regression; instead, people usually just show a table of the independent variables, with their \(P\) values and perhaps the regression coefficients.
If the dependent variable is a measurement variable, you should do multiple linear regression.
There are numerous other techniques you can use when you have one nominal and three or more measurement variables, but I don't know enough about them to list them, much less explain them.
I haven't written a spreadsheet to do multiple logistic regression.
There's a very nice web page for multiple logistic regression. It will not do automatic selection of variables; if you want to construct a logistic model with fewer independent variables, you'll have to pick the variables yourself.
Salvatore Mangiafico's \(R\) Companion has a sample R program for multiple logistic regression.
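If you want to try it yourself in R, here is a minimal sketch, assuming the bird data from the SAS example below have been read into a data frame called birds with status coded as 0 or 1; glm() with family=binomial fits the logistic regression by maximum likelihood:

fit <- glm(status ~ release + upland + migr, family = binomial, data = birds)
summary(fit)                       # coefficients, standard errors, and Wald tests
predict(fit, type = "response")    # fitted probabilities of successful introduction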
You use PROC LOGISTIC to do multiple logistic regression in SAS. Here is an example using the data on bird introductions to New Zealand.
DATA birds;
INPUT species $ status $ length mass range migr insect diet clutch
broods wood upland water release indiv;
DATALINES;
Cyg_olor 1 1520 9600 1.21 1 12 2 6 1 0 0 1 6 29
Cyg_atra 1 1250 5000 0.56 1 0 1 6 1 0 0 1 10 85
Cer_nova 1 870 3360 0.07 1 0 1 4 1 0 0 1 3 8
Ans_caer 0 720 2517 1.1 3 12 2 3.8 1 0 0 1 1 10
Ans_anse 0 820 3170 3.45 3 0 1 5.9 1 0 0 1 2 7
Bra_cana 1 770 4390 2.96 2 0 1 5.9 1 0 0 1 10 60
Bra_sand 0 50 1930 0.01 1 0 1 4 2 0 0 0 1 2
Alo_aegy 0 680 2040 2.71 1 . 2 8.5 1 0 0 1 1 8
Ana_plat 1 570 1020 9.01 2 6 2 12.6 1 0 0 1 17 1539
Ana_acut 0 580 910 7.9 3 6 2 8.3 1 0 0 1 3 102
Ana_pene 0 480 590 4.33 3 0 1 8.7 1 0 0 1 5 32
Aix_spon 0 470 539 1.04 3 12 2 13.5 2 1 0 1 5 10
Ayt_feri 0 450 940 2.17 3 12 2 9.5 1 0 0 1 3 9
Ayt_fuli 0 435 684 4.81 3 12 2 10.1 1 0 0 1 2 5
Ore_pict 0 275 230 0.31 1 3 1 9.5 1 1 1 0 9 398
Lop_cali 1 256 162 0.24 1 3 1 14.2 2 0 0 0 15 1420
Col_virg 1 230 170 0.77 1 3 1 13.7 1 0 0 0 17 1156
Ale_grae 1 330 501 2.23 1 3 1 15.5 1 0 1 0 15 362
Ale_rufa 0 330 439 0.22 1 3 2 11.2 2 0 0 0 2 20
Per_perd 0 300 386 2.4 1 3 1 14.6 1 0 1 0 24 676
Cot_pect 0 182 95 0.33 3 . 2 7.5 1 0 0 0 3 .
Cot_aust 1 180 95 0.69 2 12 2 11 1 0 0 1 11 601
Lop_nyct 0 800 1150 0.28 1 12 2 5 1 1 1 0 4 6
Pha_colc 1 710 850 1.25 1 12 2 11.8 1 1 0 0 27 244
Syr_reev 0 750 949 0.2 1 12 2 9.5 1 1 1 0 2 9
Tet_tetr 0 470 900 4.17 1 3 1 7.9 1 1 1 0 2 13
Lag_lago 0 390 517 7.29 1 0 1 7.5 1 1 1 0 2 4
Ped_phas 0 440 815 1.83 1 3 1 12.3 1 1 0 0 1 22
Tym_cupi 0 435 770 0.26 1 4 1 12 1 0 0 0 3 57
Van_vane 0 300 226 3.93 2 12 3 3.8 1 0 0 0 8 124
Plu_squa 0 285 318 1.67 3 12 3 4 1 0 0 1 2 3
Pte_alch 0 350 225 1.21 2 0 1 2.5 2 0 0 0 1 8
Pha_chal 0 320 350 0.6 1 12 2 2 2 1 0 0 8 42
Ocy_loph 0 330 205 0.76 1 0 1 2 7 1 0 1 4 23
Leu_mela 0 372 . 0.07 1 12 2 2 1 1 0 0 6 34
Ath_noct 1 220 176 4.84 1 12 3 3.6 1 1 0 0 7 221
Tyt_alba 0 340 298 8.9 2 0 3 5.7 2 1 0 0 1 7
Dac_nova 1 460 382 0.34 1 12 3 2 1 1 0 0 7 21
Lul_arbo 0 150 32.1 1.78 2 4 2 3.9 2 1 0 0 1 5
Ala_arve 1 185 38.9 5.19 2 12 2 3.7 3 0 0 0 11 391
Pru_modu 1 145 20.5 1.95 2 12 2 3.4 2 1 0 0 14 245
Eri_rebe 0 140 15.8 2.31 2 12 2 5 2 1 0 0 11 123
Lus_mega 0 161 19.4 1.88 3 12 2 4.7 2 1 0 0 4 7
Tur_meru 1 255 82.6 3.3 2 12 2 3.8 3 1 0 0 16 596
Tur_phil 1 230 67.3 4.84 2 12 2 4.7 2 1 0 0 12 343
Syl_comm 0 140 12.8 3.39 3 12 2 4.6 2 1 0 0 1 2
Syl_atri 0 142 17.5 2.43 2 5 2 4.6 1 1 0 0 1 5
Man_mela 0 180 . 0.04 1 12 3 1.9 5 1 0 0 1 2
Man_mela 0 265 59 0.25 1 12 2 2.6 . 1 0 0 1 80
Gra_cyan 0 275 128 0.83 1 12 3 3 2 1 0 1 1 .
Gym_tibi 1 400 380 0.82 1 12 3 4 1 1 0 0 15 448
Cor_mone 0 335 203 3.4 2 12 2 4.5 1 1 0 0 2 3
Cor_frug 1 400 425 3.73 1 12 2 3.6 1 1 0 0 10 182
Stu_vulg 1 222 79.8 3.33 2 6 2 4.8 2 1 0 0 14 653
Acr_tris 1 230 111.3 0.56 1 12 2 3.7 1 1 0 0 5 88
Pas_dome 1 149 28.8 6.5 1 6 2 3.9 3 1 0 0 12 416
Pas_mont 0 133 22 6.8 1 6 2 4.7 3 1 0 0 3 14
Aeg_temp 0 120 . 0.17 1 6 2 4.7 3 1 0 0 3 14
Emb_gutt 0 120 19 0.15 1 4 1 5 3 0 0 0 4 112
Poe_gutt 0 100 12.4 0.75 1 4 1 4.7 3 0 0 0 1 12
Lon_punc 0 110 13.5 1.06 1 0 1 5 3 0 0 0 1 8
Lon_cast 0 100 . 0.13 1 4 1 5 . 0 0 1 4 45
Pad_oryz 0 160 . 0.09 1 0 1 5 . 0 0 0 2 6
Fri_coel 1 160 23.5 2.61 2 12 2 4.9 2 1 0 0 17 449
Fri_mont 0 146 21.4 3.09 3 10 2 6 . 1 0 0 7 121
Car_chlo 1 147 29 2.09 2 7 2 4.8 2 1 0 0 6 65
Car_spin 0 117 12 2.09 3 3 1 4 2 1 0 0 3 54
Car_card 1 120 15.5 2.85 2 4 1 4.4 3 1 0 0 14 626
Aca_flam 1 115 11.5 5.54 2 6 1 5 2 1 0 0 10 607
Aca_flavi 0 133 17 1.67 2 0 1 5 3 0 1 0 3 61
Aca_cann 0 136 18.5 2.52 2 6 1 4.7 2 1 0 0 12 209
Pyr_pyrr 0 142 23.5 3.57 1 4 1 4 3 1 0 0 2 .
Emb_citr 1 160 28.2 4.11 2 8 2 3.3 3 1 0 0 14 656
Emb_hort 0 163 21.6 2.75 3 12 2 5 1 0 0 0 1 6
Emb_cirl 1 160 23.6 0.62 1 12 2 3.5 2 1 0 0 3 29
Emb_scho 0 150 20.7 5.42 1 12 2 5.1 2 0 0 1 2 9
Pir_rubr 0 170 31 0.55 3 12 2 4 . 1 0 0 1 2
Age_phoe 0 210 36.9 2 2 8 2 3.7 1 0 0 1 1 2
Stu_negl 0 225 106.5 1.2 2 12 2 4.8 2 0 0 0 1 2
;
PROC LOGISTIC DATA=birds DESCENDING;
   MODEL status=length mass range migr insect diet clutch broods wood upland
      water release indiv / SELECTION=STEPWISE SLENTRY=0.15 SLSTAY=0.15;
RUN;
In the MODEL statement, the dependent variable is to the left of the equals sign, and all the independent variables are to the right. SELECTION determines which variable selection method is used; choices include FORWARD, BACKWARD, STEPWISE, and several others. You can omit the SELECTION parameter if you want to see the logistic regression model that includes all the independent variables. SLENTRY is the significance level for entering a variable into the model, if you're using FORWARD or STEPWISE selection; in this example, a variable must have a \(P\) value less than \(0.15\) to be entered into the regression model. SLSTAY is the significance level for removing a variable in BACKWARD or STEPWISE selection; in this example, a variable with a \(P\) value greater than \(0.15\) will be removed from the model.
                  Summary of Stepwise Selection

          Effect                 Number    Score       Wald
  Step    Entered    Removed  DF   In    Chi-Square  Chi-Square  Pr > ChiSq
    1     release             1     1     28.4339                  <.0001
    2     upland              1     2      5.6871                  0.0171
    3     migr                1     3      5.3284                  0.0210
The summary shows that "release" was added to the model first, yielding a \(P\) value less than \(0.0001\). Next, "upland" was added, with a \(P\) value of \(0.0171\). Next, "migr" was added, with a \(P\) value of \(0.0210\). SLSTAY was set to \(0.15\), not \(0.05\), because you might want to include a variable in a predictive model even if it's not quite significant. However, none of the other variables have a \(P\) value less than \(0.15\), and removing any of the variables caused a decrease in fit big enough that \(P\) was less than \(0.15\), so the stepwise process is done.
          Analysis of Maximum Likelihood Estimates

                             Standard      Wald
  Parameter  DF  Estimate      Error    Chi-Square  Pr > ChiSq
  Intercept   1   -0.4653     1.1226      0.1718      0.6785
  migr        1   -1.6057     0.7982      4.0464      0.0443
  upland      1   -6.2721     2.5739      5.9380      0.0148
  release     1    0.4247     0.1040     16.6807      <.0001
The "parameter estimates" are the partial regression coefficients; they show that the model is:
\[\ln \left [ \frac{Y}{1-Y} \right ]=-0.4653-1.6057(migration)-6.2721(upland)+0.4247(release)\]
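To convert the log odds from this equation into a predicted probability, apply the inverse logit, \(Y=e^{L}/(1+e^{L})\). A sketch in R for a hypothetical sedentary species (migration \(=1\)) not found in upland habitat (upland \(=0\)) with \(10\) releases:

log_odds <- -0.4653 - 1.6057 * 1 - 6.2721 * 0 + 0.4247 * 10
plogis(log_odds)   # exp(log_odds) / (1 + exp(log_odds)), about 0.90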
You need to have several times as many observations as you have independent variables, otherwise you can get "overfitting"—it could look like every independent variable is important, even if they're not. A frequently seen rule of thumb is that you should have at least \(10\) to \(20\) times as many observations as you have independent variables. I don't know how to do a more detailed power analysis for multiple logistic regression.
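Applying this rule of thumb to the bird example, which has \(14\) independent variables:

\[10\times 14=140 \qquad \text{to} \qquad 20\times 14=280 \text{ observations}\]

The \(79\) species in that data set fall below this range, which is one more reason to treat the variable selection there as suggestive rather than definitive.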
Benotti, P., G.C. Wood, D.A. Winegar, A.T. Petrick, C.D. Still, G. Argyropoulos, and G.S. Gerhard. 2014. Risk factors associated with mortality after Roux-en-Y gastric bypass surgery. Annals of Surgery 259: 123-130.
Veltman, C.J., S. Nee, and M.J. Crawley. 1996. Correlates of introduction success in exotic New Zealand birds. American Naturalist 147: 542-557.
enVision Math Common Core Grade 3 Answer Key Topic 13 Fraction Equivalence and Comparison
Go through the enVision Math Common Core Grade 3 Answer Key Topic 13 Fraction Equivalence and Comparison regularly and improve your accuracy in solving questions.
enVision Math Common Core 3rd Grade Answers Key Topic 13 Fraction Equivalence and Comparison
What are different ways to compare fractions?
enVision STEM Project: Life Cycles
Do Research A frog egg hatches into a tadpole that lives in water. The tadpole will change and eventually become an adult frog. Use the Internet or another source to gather information about the life cycle of a frog and other animals.
Journal: Write a Report Include what you found. Also in your report:
Tell about what is in a frog's habitat to support changes the frog goes through in its life cycle.
Compare the life cycles of the different animals you studied.
For the animals you studied, make up and solve problems using fractions. Draw fraction strips to represent the fractions.
Review What You Know
Choose the best term from the box. Write it on the blank.
unit fraction
The symbol ___________ means is greater than.
The symbol > means greater than.
In the above-given question,
given that,
greater than symbol is used to compare numbers.
3 > 1.
so the symbol > is used for large numbers when compared with small numbers.
The symbol _________ means is less than.
The symbol < means less than.
less than a symbol is used to compare numbers.
1 < 3.
so the symbol < is used for small numbers when compared with large numbers.
A ________ represents one equal part of a whole.
A unit fraction represents one equal part of a whole.
In the above-given question,
given that,
a unit fraction, such as 1/4, names exactly one of the equal parts that make up a whole.
so the missing word is unit fraction.
Comparing Whole Numbers
Compare. Write <, >, or =.
48 > 30.
the two numbers are 48 and 30.
30 is less than 48.
48 is greater than 30.
6 = 6.
the two numbers are 6 and 6.
6 is equal to 6.
723 < 732.
the two numbers are 723 and 732.
723 is less than 732.
732 is greater than 723.
100 > 10.
the two numbers are 100 and 10.
100 is greater than 10.
10 is less than 100.
456 = 456.
456 is equal to 456.
421 > 399.
Identifying Fractions
For each shape, write the fraction that is shaded.
The fraction is 4/8.
the figure contains 8 boxes.
4 boxes are filled.
4/8 portion of the boxes are filled.
4/8 = 1/2.
so half portion of the boxes is filled.
there are 6 boxes in the figure.
1 box is filled.
so 1/6 portion is filled.
Divide.
30 ÷ 5
The answer is 6.
the two numbers are 30 and 5.
5 x 6 = 30.
30 / 5 = 6.
How can you check if the answer to 40 ÷ 5 is 8?
You can multiply: 8 x 5 = 40, so the answer 8 is correct.
Pick a Project
Do you want to ride a horse?
Project: Design a Racetrack for Horses
PROJECT 13B
How deep do you have to dig before you reach water?
Project: Create a Picture of a Well
PROJECT 13C
How many coffee beans does it take to fill up a container?
Project: Plot Fractions on a Number Line
3-ACT MATH PREVIEW
Math Modeling
What's the Beef?
Lesson 13.1 Equivalent Fractions: Use Models
Solve & Share
Gregor threw a softball of the length of the yard in front of his house. Find as many fractions as you can that name the same part of the length of the yard that Gregor threw the ball. Explain how you decided
I can … find equivalent fractions that name the same part of a whole.
1/9, 2/9, 3/9, 4/9.
Gregor threw a softball off the length of the yard in front of his house.
Gregor threw the 1st ball at 1 yard.
the length of the yard is 9.
Gregor threw the 2nd ball at 2 yards.
Gregor threw the 3rd ball at 3 yards.
Gregor threw the 4th ball at 4 yards.
so the fractions are 1/9, 2/9, 3/9, and 4/9.
Look Back! How can fraction strips help you tell if a fraction with a denominator of 2, 3, or 6 would name the same part of a whole as \(\frac{3}{4}\)?
No fraction with a denominator of 2, 3, or 6 names the same part of a whole as 3/4.
In the above-given question,
a fraction equivalent to 3/4 with denominator 2, 3, or 6 would need a numerator of 1.5, 2.25, or 4.5, which are not whole numbers.
so fraction strips for halves, thirds, and sixths never line up exactly with \(\frac{3}{4}\).
Essentials Question
How Can Different Fractions Name the Same Part of a Whole?
Visual Learning Bridge
The Chisholm Trail was used to drive cattle to market. Ross's herd has walked \(\frac{1}{2}\) the distance to market. What is another way to name \(\frac{1}{2}\)?
\(\frac{1}{2}\) = \(\frac{}{}\) You can use fraction strips.
The fractions \(\frac{1}{2}\) and \(\frac{2}{4}\) represent the same part of the whole.
Two \(\frac{1}{4}\) strips are equal to \(\frac{1}{2}\), so \(\frac{1}{2}\) = \(\frac{2}{4}\).
Another name for \(\frac{1}{2}\) is \(\frac{2}{4}\).
You can find other equivalent fractions. Think about fractions that name the same part of the whole.
Four \(\frac{1}{8}\) strips are equal to \(\frac{1}{2}\), so \(\frac{1}{2}\) = \(\frac{4}{8}\).
Another name for \(\frac{1}{2}\) is \(\frac{4}{8}\)
Convince Me! Look for Relationships In the examples above, what pattern do you see in the fractions that are equivalent to \(\frac{1}{2}\)? What is another name for \(\frac{1}{2}\) that is not shown above?
Another name for 1/2 that is not shown above is 3/6.
In the above-given question,
in each fraction equivalent to \(\frac{1}{2}\), the numerator is half the denominator.
\(\frac{2}{4}\) and \(\frac{4}{8}\) follow this pattern, and so does \(\frac{3}{6}\).
so another name for \(\frac{1}{2}\) that is not shown above is \(\frac{3}{6}\).
Another Example!
You can find an equivalent fraction for \(\frac{4}{6}\) using an area model.
Both area models have the same-sized whole. One is divided into sixths. The other shows thirds. The shaded parts show the same part of a whole. Because \(\frac{4}{6}\) = \(\frac{2}{3}\), another name for \(\frac{4}{6}\) is \(\frac{2}{3}\).
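The same equivalence can be written as arithmetic; dividing the numerator and the denominator by the same number does not change the value of a fraction:

\[\frac{4}{6}=\frac{4\div 2}{6\div 2}=\frac{2}{3}\]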
Guided Practice
Divide the second area model into sixths. Shade it to show a fraction equivalent to \(\frac{1}{3}\):
divide the second area model into sixths.
Use the fraction strips to help find an equivalent fraction.
1/4 = 2/8.
\(\frac{1}{4}\) = \(\frac{2}{8}\).
two \(\frac{1}{8}\) strips line up exactly with one \(\frac{1}{4}\) strip.
Independent Practice
1/4 + 1/4 = 1/2.
1/2 + 1/2 = 1.
Divide the second area model into eighths. Shade it to show a fraction equivalent to \(\frac{1}{2}\).
divide the second area model into eighths.
In 5-8, find each equivalent fraction. Use fraction strips or draw area models to help.
\(\frac{3}{4}\) = \(\frac{6}{8}\)
divide the 1st area model into fourths.
divide the 1st area model into sixths.
6/6 = 1.
divide the second area model into thirds.
divide the 1st area model into eighths.
divide the second area model into halves.
In 9 and 10, use the fraction strips at the right.
Marcy used fraction strips to show equivalent fractions. Complete the equation.
\(\frac{}{4}\) = ________
Rita says the fraction strips show fractions that are equivalent to \(\frac{1}{2}\). Explain what you could do to the diagram to see if she is correct.
divide the second area model into halves.
Reasoning A band learns 4 to 6 new songs every month. What is a good estimate for the number of songs the band will learn in 8 months? Explain.
A good estimate for the number of songs the band will learn in 8 months is about 40 songs.
In the above-given question,
given that,
A band learns 4 to 6 new songs every month.
4 x 8 = 32 and 6 x 8 = 48, so the total is between 32 and 48 songs.
using 5 songs per month, 5 x 8 = 40.
so a good estimate is about 40 songs.
Three-eighths of a playground is covered by grass. What fraction of the playground is NOT covered by grass?
The fraction of the playground is not covered by grass = 5/8.
three-eights of a playground is covered by grass.
so the fraction of the playground is not covered by grass = 5/8.
Higher Order Thinking Aiden folded 2 strips of paper into eighths. He shaded a fraction equal to \(\frac{1}{4}\) on the first strip and a fraction equal to \(\frac{3}{4}\) on the second strip. Use eighths to show the fractions Aiden shaded on the pictures to the right. Which fraction of each strip did he shade?
The fractions he shaded are 2/8 and 6/8.
In the above-given question,
Aiden folded 2 strips of paper into eighths.
He shaded a fraction equal to \(\frac{1}{4}\) on the first strip, and \(\frac{1}{4}\) = \(\frac{2}{8}\).
He shaded a fraction equal to \(\frac{3}{4}\) on the second strip, and \(\frac{3}{4}\) = \(\frac{6}{8}\).
so he shaded 2/8 of the first strip and 6/8 of the second strip.
Which fractions are equivalent? Select all that apply.
there are three equivalent fractions.
the fractions are:
Lesson 13.2 Equivalent Fractions: Use the Number Line
The top number line shows a point at \(\frac{1}{4}\). Write the fraction for each of the points labeled A, B, C, D, E, and F. Which of these fractions show the same distance from 0 as \(\frac{1}{4}\)?
I can … use number lines to represent equivalent fractions.
Look Back! How can number lines show that two fractions are equivalent?
The fractions are 1/2, 2/4, 3/4, 2/8, 4/8, and 4/6.
The number line A shows the fraction 1/2.
B shows the fraction 2/4.
C shows the fraction 3/4.
D shows the fraction 2/8.
E shows the fraction 4/8.
F shows the fraction 4/6.
How Can You Use Number Lines to Find Equivalent Fractions?
The Circle W Ranch 1-mile trail has water for cattle at each \(\frac{1}{4}\) mile mark. The Big T Ranch 1-mile trail has water for cattle at the \(\frac{1}{2}\)-mile mark. What fractions name the points on the trails where there is water for cattle at the same distance from the start of each trail?
You can use number lines to find the fractions.
The fractions \(\frac{2}{4}\) and \(\frac{1}{2}\) name the same points on the trails where there is water for cattle. They are at the same distance from the start of the trails.
Convince Me! Model with Math Ian paints \(\frac{6}{8}\) of a fence. Anna paints \(\frac{3}{4}\) of another fence of equal size and length. How can you show that Ian and Anna have painted the same amount of each fence?
Yes, both Anna and Ian have painted the same amount of each fence.
In the above-given question,
given that,
Ian paints \(\frac{6}{8}\) of a fence.
Anna paints \(\frac{3}{4}\) of another fence of equal size and length.
\(\frac{3}{4}\) = \(\frac{6}{8}\), because 3 x 2 = 6 and 4 x 2 = 8.
so both Anna and Ian have painted the same amount of each fence.
Complete the number line to show that \(\frac{2}{6}\) and \(\frac{1}{3}\) are equivalent fractions.
the fractions on the number line are:
1/6, 2/6, 3/6, 4/6, and 5/6.
Sheila compares \(\frac{4}{6}\) and \(\frac{4}{8}\) she discovers that the fractions are NOT equivalent. How does Sheila know?
The fractions are not equivalent because the parts are different sizes.
In the above-given question,
Sheila compares \(\frac{4}{6}\) and \(\frac{4}{8}\).
the numerators are the same, but eighths are smaller parts than sixths.
so 4/6 is not equal to 4/8.
In 3 and 4, find the missing equivalent fractions on the number line. Then write the equivalent fractions below.
The missing equivalent fractions on the number line is 3/6.
1/6, 2/6, 3/6, 4/6, 5/6, and 1.
so the 3/6 and 1/2 are the equivalent fractions.
1/8, 2/8, 3/8, 4/8, 5/8, 6/8, and 7/8.
In 5-8, find the missing equivalent fractions on the number line. Then write the equivalent fractions below.
The missing equivalent fractions on the number line are 2/8.
the fractions on the number line are 1/8, 2/8, 3/8, 4/8, 5/8, 6/8, and 7/8.
so the missing equivalent fraction is 2/8 = 1/4.
the fractions on the number line are 1/6, 2/6, 3/8, and 5/6.
the fractions on the number line are 1/8, 2/8, 3/8, 5/8, 6/8, and 7/8.
the fractions on the number line are 1/6, 2/6, 3/6, 4/6, and 5/6.
so the missing equivalent fraction is 6/6 = 1.
Number Sense Bradley had 40 slices of pizza to share. How many pizzas did he have? Explain how you solved the problem.
The number of pizzas did he have = 5.
Bradley had 40 slices of pizza to share.
each pizza was cut into 8 slices.
40/8 = 5.
so the number of pizzas did he have = 5.
Ms. Owen has 15 magazines to share among 5 students for an art project. How many magazines will each student get? Use the bar diagram to write an equation that helps solve the problem.
The number of magazines will each student get = 3.
Ms. Owen has 15 magazines to share among 5 students for an art project.
3 + 3 + 3 + 3 + 3 = 15.
so the number of magazines will each student get = 3.
Yonita has 28 different apps on her computer. Casey has 14 music apps and 20 game apps on his computer. How many more apps does Casey have than Yonita? Explain.
The number of apps does Casey has more than Yonita = 6.
Yonita has 28 different apps on her computer.
Casey has 14 music apps and 20 game apps on his computer.
14 + 20 = 34.
34 – 28 = 6.
so the number of apps does Casey has more than Yonita = 6.
Construct Arguments How can you tell, just by looking at the fractions, that \(\frac{2}{4}\) and \(\frac{3}{4}\) are NOT equivalent? Construct an argument to explain.
Yes, 2/4 and 3/4 are not equivalent fractions.
the fraction 2/4 = 1/2.
the fraction 3/4 is not an equivalent fraction.
so both the fractions are not equal.
Higher Order Thinking Fiona and Gabe each had the same length of rope. Fiona used \(\frac{2}{3}\) of her rope. Using sixths, what fraction of the length of rope will Gabe need to use to match the amount Fiona used? Draw a number line as part of your answer.
The fraction of the rope Gabe used is 4/6.
Fiona and Gabe each had the same length of rope.
Fiona used \(\frac{2}{3}\) of her rope.
so the fraction of rope Gabe used is 4/6.
Use the number line to find which fraction is equivalent to \(\frac{3}{6}\).
Option A is the correct answer.
so option A is correct.
Option C is the correct answer.
so option C is correct.
Lesson 13.3 Use Models to Compare Fractions: Same Denominator
Maria and Evan are both jogging a mile. Maria has jogged mile, and Evan has jogged mile. Show how far each has jogged. Use any model you choose. Who jogged farther? How do you know?
I can … compare fractions that refer to the same-sized whole and have the same denominator by comparing their numerators.
Look Back! Suppose Evan had jogged \(\frac{5}{8}\) mile instead of \(\frac{3}{8}\) mile. Now, who has jogged farther? Explain.
Evan jogged farther than Maria.
Evan had jogged \(\frac{5}{8}\) mile instead of \(\frac{3}{8}\).
5/8 – 3/8 = 2/8.
so I think maria jogged very little when compared to Evan.
so Evan jogged farther than Maria.
How Can You Compare Fractions with the Same Denominator?
Two banners with positive messages are the same size. One banner is \(\frac{4}{6}\) yellow, and the other banner is \(\frac{2}{6}\) yellow. Which is greater, \(\frac{4}{6}\) or \(\frac{2}{6}\)?
\(\frac{4}{6}\) is 4 of the unit fraction \(\frac{1}{6}\).
\(\frac{2}{6}\) is 2 of the unit fraction \(\frac{1}{6}\).
So, \(\frac{4}{6}\) is greater than \(\frac{2}{6}\).
Record the comparison using symbols or words.
\(\frac{4}{6}\) > \(\frac{2}{6}\)
Four sixths is greater than two sixths.
If two fractions have the same denominator, the fraction with the greater numerator is the greater fraction.
Convince Me! Reasoning Write a number for each numerator to make each comparison true. Use a picture and words to explain how you decided.
Explain how you can use fraction strips to show whether \(\frac{5}{6}\) or \(\frac{3}{6}\) of the same whole is greater.
5/6 > 3/6.
the fractions are 5/6 and 3/6.
so 5/6 is greater than 3/6.
3/6 < 5/6.
Which is greater, \(\frac{3}{4}\) or \(\frac{2}{4}\)? Draw \(\frac{1}{4}\)-strips to complete the diagram and answer the question.
3/4 is greater than 2/4.
In the above-given question,
the whole is divided into 4 equal parts.
3/4 is 3 of the \(\frac{1}{4}\) parts, and 2/4 is 2 of the \(\frac{1}{4}\) parts.
so 3/4 > 2/4.
In 3 and 4, compare. Write <, >, or =. Use the fraction strips to help.
5/6 is divided into 5 parts.
1/6 + 1/6 + 1/6 = 3/6.
so 3/6 < 5/6.
Leveled Practice In 5-14, compare. Write <, >, or =. Use or draw fraction strips to help. The fractions refer to the same whole.
the two fractions are 3/8 and 4/8.
the denominators are the same, and 3 < 4.
so 3/8 < 4/8.
\(\frac{6}{8}\) > \(\frac{3}{8}\)
1/8 + 1/8 + 1/8 + 1/8 + 1/8 = 5/8.
1/8 x 7 = 7/8.
1/2 x 1 =1/2.
1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 = 6/6.
1/8 + 1/8 +1/8 = 3/8.
1/4 +1/4 + 1/4 = 3/4.
In 15 and 16, use the pictures of the strips that have been partly shaded.
Compare. Write <, >, or =
The green strips show \(\frac{1}{6}\) \(\frac{2}{6}\)
1/6 +1/6 = 2/6.
Do the yellow strips show \(\frac{2}{4}\) > \(\frac{3}{4}\)? Explain.
No, 2/4 < 3/4.
Izzy and Henry have two different pizzas. Izzy ate \(\frac{3}{8}\) of her pizza. Henry ate \(\frac{3}{8}\) of his pizza. Izzy ate more pizza than Henry. How is this possible? Explain.
Yes, it is possible.
In the above-given question,
given that,
Izzy and Henry have two different pizzas.
Izzy ate \(\frac{3}{8}\) of her pizza, and Henry ate \(\frac{3}{8}\) of his pizza.
the fractions are the same, but the wholes are different sizes.
if Izzy's pizza is larger than Henry's, then \(\frac{3}{8}\) of her pizza is more pizza.
so Izzy could eat more pizza than Henry.
Generalize Two fractions are equal. They also have the same denominator. What must be true of the numerators of the fractions? Explain.
The numerators must also be equal, because equal fractions with the same denominator name the same number of same-sized parts.
Number Sense Mr. Domini had $814 in the bank on Wednesday. On Thursday, he withdrew $250, and on Friday, he withdrew $185. How much money did he have in the bank then?
The money he has in the bank = $379.
Mr. Domini had $814 in the bank on Wednesday.
On Thursday, he withdrew $250.
On Friday, he withdrew $185.
250 + 185 = 435.
814 – 435 = 379.
so he had $379 left in the bank.
Higher Order Thinking Tom's parents let him choose whether to play his favorite board game for \(\frac{7}{8}\) hour or for \(\frac{8}{8}\) hour. Explain which amount of time you think Tom should choose and why.
Tom should choose 8/8 hour.
Tom's parents let him choose whether to play his favorite board game for 7/8 hours.
8/8 hour = 1.
1/8 + 1/8 + 1/8 + 1/8 + 1/8 + 1/8 + 1/8 + 1/8 = 8/8.
so I think Tom should choose 8/8 hour, which is 1 whole hour, because 8/8 > 7/8.
Paul and Enrique each have equal-sized pizzas cut into 8 equal slices. Paul eats 3 slices. Enrique eats 2 slices. Select numbers and symbols from the box to write a comparison for the fraction of pizza Paul and Enrique have each eaten.
Paul ate more of his pizza than Enrique: \(\frac{3}{8}\) > \(\frac{2}{8}\).
In the above-given question,
given that,
Paul and Enrique each have equal-sized pizzas cut into 8 equal slices.
Paul eats 3 slices, which is \(\frac{3}{8}\); Enrique eats 2 slices, which is \(\frac{2}{8}\).
so the comparison is \(\frac{3}{8}\) > \(\frac{2}{8}\).
Lesson 13.4 Use Models to Compare Fractions: Same Numerator
Krista, Jamal, and Rafe each had 1 serving of vegetables. Krista ate \(\frac{2}{6}\), Jamal ate \(\frac{2}{3}\), and Rafe ate \(\frac{2}{8}\) of his serving. Arrange the fractions in order from least to greatest to show who ate the least and who ate the greatest amount of vegetables.
I can … compare fractions that refer to the same whole and have the same numerator by comparing their denominators.
Rafe, Krista, and Jamal.
Krista, Jamal, and Rafe each had 1 serving of vegetables.
Krista ate \(\frac{2}{6}\), Jamal ate \(\frac{2}{3}\).
Rafe ate \(\frac{2}{8}\) of his serving.
2/6 = 1/6 + 1/6.
so Rafe ate least when compared to Krista and Jamal.
so the order from least to greatest is 2/8, 2/6, 2/3: Rafe, Krista, Jamal.
Look Back! Tamika ate \(\frac{2}{2}\) of a serving of vegetables. In order from least to greatest, arrange the fractions of a serving Krista, Jamal, Rafe, and Tamika each ate. Explain your reasoning.
Rafe, Krista, Jamal, and Tamika.
Tamika ate \(\frac{2}{2}\) of a serving of vegetables.
so Tamika at more than Rafe, Krista, and Jamal.
How Can You Compare Fractions with the Same Numerator?
Claire bought 2 scarves as souvenirs from her visit to a Florida university. The scarves are the same size. One scarf is \(\frac{5}{6}\) orange, and the other scarf is \(\frac{5}{8}\) orange. Which is greater, \(\frac{5}{6}\) or \(\frac{5}{8}\)?
What You Show
Use fraction strips to reason about the size of \(\frac{5}{6}\) a compared to the size of \(\frac{5}{8}\).
There are 5 sixths. There are 5 eighths. The parts are different sizes.
The greater the denominator, the smaller each part will be.
What You Write
Describe the comparison using symbols or words.
Five sixths is greater than five eighths.
If two fractions have the same numerator, the fraction with the lesser denominator is the greater fraction.
Convince Me! Critique Reasoning Julia says \(\frac{1}{8}\) is greater than \(\frac{1}{4}\) because 8 is greater than 4. Critique Julia's reasoning. Is she correct? Explain.
No, Julia's reasoning is not correct.
In the above-given question,
Julia says \(\frac{1}{8}\) is greater than \(\frac{1}{4}\) because 8 is greater than 4.
a greater denominator means the whole is cut into more, smaller parts, so \(\frac{1}{8}\) < \(\frac{1}{4}\).
so Julia's reasoning is not correct.
How can fraction strips help you reason about whether \(\frac{4}{6}\) or \(\frac{4}{8}\) of the same whole is greater?
4/6 = 1/6 + 1/6 + 1/6 + 1/6, and sixths are larger parts than eighths, so 4/6 > 4/8.
Which is greater, \(\frac{1}{4}\) or \(\frac{1}{6}\)? Draw fraction strips to complete the diagram and answer the question.
the same whole is divided into fourths and into sixths; fourths are larger parts, so 1/4 > 1/6.
In 3 and 4, compare. Write <, >, or =. Use fraction strips to help.
the numerators are the same, and sixths are larger parts than eighths.
so 4/6 > 4/8.
5/6 = 1/6 + 1/6 + 1/6 + 1/6 + 1/6.
1/4 = 1/4 x 1.
James uses blue and white tiles to make the two designs shown here. James says that the total blue area in the top design is the same as the total blue area in the bottom design. Is he correct? Explain.
Yes, James was correct.
James uses blue and white tiles to make the two designs.
James says that the total blue area in the top design is the same as the total blue area in the bottom design.
so James was correct.
Amy sold 8 large quilts and 1 baby quilt. How much money did she make from selling quilts?
The money did she make from selling quilts = $520.
Amy sold 8 large quilts and 1 baby quilt.
60 x 8 = 480.
480 + 40 = 520.
so the money did she make from selling quilts = $520.
Be Precise Write two comparison statements about the fractions shown below.
Higher Order Thinking John says that when you compare two fractions with the same numerator, you look at the denominators because the fraction with the greater denominator is greater. Is he correct? Explain, and give an example.
No, he is not correct.
In the above-given question,
John says that when you compare two fractions with the same numerator, the fraction with the greater denominator is greater.
with the same numerator, the greater denominator gives smaller parts, so that fraction is the lesser one.
for example, 1/4 > 1/8 even though 8 > 4.
so he is not correct.
These fractions refer to the same whole. Which of these comparisons are correct? Select all that apply.
1/2 > 1/4, 5/6 = 5/6, and 3/4 > 3/6.
In the above-given question,
2/4 > 2/3 is not correct, because 2/4 = 1/2 is less than 2/3.
so the three correct comparisons are 1/2 > 1/4, 5/6 = 5/6, and 3/4 > 3/6.
Lesson 13.5 Compare Fractions: Use Benchmarks
Mr. Evans wrote \(\frac{2}{8}, \frac{4}{8}, \frac{6}{8}, \frac{1}{8}, \frac{3}{8}, \frac{5}{8}\) on and \(\frac{7}{8}\) on the board. Then he circled the fractions that are closer to 0 than to 1. Which fractions did he circle? Which fractions did he not circle? Explain how you decided.
I can … use what I know about the size of benchmark numbers to compare fractions.
Look Back! Eric says that \(\frac{3}{8}\) is closer to 1 than to 0 because \(\frac{3}{8}\) is greater than \(\frac{1}{8}\). Is he correct? Use benchmark numbers to evaluate Eric's reasoning and justify your answer.
No, Eric is not correct.
Eric compares 3/8 to 1/8, but the useful benchmark is 1/2.
3/8 is less than 4/8 = 1/2, so 3/8 is closer to 0 than to 1.
How Can Benchmark Numbers Be Used to Compare Fractions?
Keri wants to buy of a container of roasted peanuts. Alan wants to buy of a container of roasted peanuts. The containers are the same size. Who will buy more peanuts?
Compare each fraction to the benchmark number \(\frac{1}{2}\). Then see how they relate to each other in size.
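Written out with the benchmark, the comparison is:

\[\frac{2}{6}<\frac{3}{6}=\frac{1}{2} \qquad \text{and} \qquad \frac{2}{3}=\frac{4}{6}>\frac{3}{6}=\frac{1}{2}\]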
So, \(\frac{2}{6}\) is less than \(\frac{2}{3}\).
\(\frac{2}{6}\) < \(\frac{2}{3}\)
Alan will buy more peanuts than Keri.
Convince Me! Make Sense and Persevere Candice buys \(\frac{2}{8}\) of a container of roasted peanuts. The container is the same size as those used by Keri and Alan. She says \(\frac{2}{8}\) is between \(\frac{1}{2}\) and 1, so she buys more peanuts than Alan. Is Candice correct? Explain.
No, Candice is not correct.
In the above-given question,
Candice buys \(\frac{2}{8}\) of a container of roasted peanuts.
The container is the same size as those used by Keri and Alan.
\(\frac{2}{8}\) is less than \(\frac{4}{8}\) = \(\frac{1}{2}\), so it is between 0 and \(\frac{1}{2}\), not between \(\frac{1}{2}\) and 1.
so Candice buys fewer peanuts than Alan, not more.
Tina used benchmark numbers to decide that \(\frac{3}{8}\) is less than \(\frac{7}{8}\). Do you agree? Explain.
Yes, 3/8 is less than 7/8.
Tina used benchmark numbers to decide that 3/8 is less than 7/8.
7/8 = 1/8 + 1/8 + 1/8 + 1/8 + 1/8 + 1/8 + 1/8.
so 3/8 is less than 7/8.
Write two fractions with a denominator of 6 that are closer to 0 than to 1.
the two fractions are 1/6 and 2/6.
both 1/6 and 2/6 are less than 3/6 = 1/2, so they are closer to 0 than to 1.
the two fractions with a denominator of 8 that are closer to 1 than to 0 are 6/8 and 7/8.
both 6/8 and 7/8 are greater than 4/8 = 1/2.
so the two fractions 6/8 and 7/8 are closer to 1 than to 0.
In 4-6, choose from the fractions \(\frac{1}{8}, \frac{1}{4}, \frac{6}{8}\) and \(\frac{3}{4}\). Use fraction strips to help.
Which fractions are closer to 0 than to 1?
the fractions are 1/8, 1/4, 6/8, and 3/4.
1/8 and 1/4 are both less than 1/2.
so the fractions closer to 0 than to 1 are 1/8 and 1/4.
6/8 = 1/8 + 1/8 + 1/8 + 1/8 + 1/8 + 1/8.
so the fractions closer to 1 than to 0 are 6/8 and 3/4.
Use the two fractions with a denominator of 8 to write a true statement: \(\frac{1}{8}\) < \(\frac{6}{8}\).
the two fractions with a denominator of 8 are 1/8 and 6/8, and 1/8 < 6/8.
In 7 and 8, choose from the fractions, \(\frac{2}{3}, \frac{7}{8}, \frac{1}{4}\), and \(\frac{2}{6}\).
Which of the fractions are closer to 0 than to 1?
The fractions closer to 0 than to 1 are 1/4 and 2/6.
In the above-given question,
1/4 and 2/6 are both less than 1/2, while 2/3 and 7/8 are greater than 1/2.
so the fractions closer to 0 than to 1 are 1/4 and 2/6.
In 9-14, use a strategy to compare. Write <, >, or =.
In 15-17, use the table at the right.
Which people have walked closer to 1 mile than to 0 miles?
Mr. Nunez and Miss Lee have walked closer to 1 mile than to 0 miles.
there are 5 people in the chart.
they are 1/6, 5/6, 1/3, 4/8, and 4/6.
the people closer to 1 mile than to 0 miles are Mr. Nunez and Miss Lee.
their fractions, 5/6 and 4/6, are both greater than 1/2, so they are closer to 1 mile.
Which people have walked closer to 0 miles than to 1 mile?
Mrs. Avery and Miss Chang have walked closer to 0 miles than to 1 mile.
the people closer to 0 miles than to 1 mile are Mrs. Avery and Miss Chang.
their fractions, 1/6 and 1/3, are both less than 1/2, so they are closer to 0 miles.
Who has walked a fraction of a mile that is closer to neither 0 nor 1? Explain.
Mr. O'Leary has walked closer to neither 0 nor 1.
Mr. O'Leary's fraction is 4/8, which equals 1/2.
so his distance is exactly halfway between 0 and 1, closer to neither.
Rahul compares two wholes that are the same size. He says that \(\frac{2}{6}\) < \(\frac{2}{3}\) because \(\frac{2}{6}\) is less than \(\frac{1}{2}\), and \(\frac{2}{3}\) is greater than \(\frac{1}{2}\). Is he correct? Explain.
Yes, he is correct.
In the above-given question,
Rahul compares two wholes that are the same size.
\(\frac{2}{6}\) is less than \(\frac{1}{2}\), and \(\frac{2}{3}\) is greater than \(\frac{1}{2}\), so \(\frac{2}{6}\) < \(\frac{2}{3}\).
Make Sense and Persevere Manish drives 265 more miles than Janice. Manish drives 642 miles. How many miles does Janice drive?
The number of miles Janice drives = 377.
In the above-given question,
Manish drives 265 more miles than Janice.
Manish drives 642 miles.
642 - 265 = 377.
so the number of miles Janice drives = 377.
Algebra Nika has 90 pencils. Forty of them are yellow, 13 are green, 18 are red, and the rest are blue. How many blue pencils does Nika have?
The number of blue pencils Nika has = 19.
In the above-given question,
given that,
Nika has 90 pencils.
Forty of them are yellow, 13 are green, 18 are red, and the rest are blue.
40 + 13 + 18 = 71.
90 - 71 = 19.
so the number of blue pencils Nika has = 19.
Higher Order Thinking Omar says that \(\frac{2}{6}\) < \(\frac{4}{6}\) because \(\frac{2}{6}\) is between 0 and \(\frac{1}{2}\), and \(\frac{4}{6}\) is between \(\frac{1}{2}\) and 1. Is he correct? Explain.
2/6 is between 0 and 1/2.
4/6 is between 1/2 and 1.
0, 2/6, and 1/2.
1/2, 4/6, and 1.
so Omar was correct.
Each of the fractions in the comparisons at the right refer to the same whole. Use benchmark fractions to reason about the size of each fraction. Select all the correct comparisons.
2/4 < 2/3, 1/4 < 2/4, and 3/6 > 3/8.
In the above-given question,
the comparisons given are 2/3 < 2/4, 2/4 < 2/3, 3/8 > 5/8, 1/4 < 2/4, and 3/6 > 3/8.
2/4 = 1/2 is less than 2/3, so 2/3 < 2/4 is not correct, and 3/8 is less than 5/8, so 3/8 > 5/8 is not correct.
so the correct comparisons are 2/4 < 2/3, 1/4 < 2/4, and 3/6 > 3/8.
Lesson 13.6 Compare Fractions: Use the Number Line
Tanya, Riaz, and Ryan each used a bag of flour to make modeling clay. The bags were each labeled with a different fraction of a pound. Show these fractions on a number line. How can you use the number line to compare two of these fractions?
I can … compare two fractions by locating them on a number line.
Look Back! If the bags were labeled \(\frac{4}{8}\) lb, \(\frac{3}{8}\) lb, and \(\frac{6}{8}\) lb, how could a number line help you solve this problem?
3/8 < 4/8 < 6/8.
if the bags were labeled 4/8 lb, 3/8 lb, and 6/8 lb.
so the fractions from least to greatest are 3/8, 4/8, and 6/8.
3/8 is near to 0.
4/8 is in between 0 and 1.
How Can You Compare Fractions Using the Number Line?
Talia has two different lengths of blue and red ribbon. Does she have more blue ribbon or more red ribbon?
The fractions both refer to 1 yard of ribbon. This is the whole.
You can use a number line to compare \(\frac{1}{3}\) and \(\frac{2}{3}\).
The farther the distance of the fraction from zero on the number line, the greater the fraction.
On the number line, \(\frac{2}{3}\) is farther to the right than \(\frac{1}{3}\).
So, \(\frac{2}{3}\) > \(\frac{1}{3}\).
Talia has more blue ribbon than red ribbon.
Convince Me! Use Structure Talia has an additional length of green ribbon that measures \(\frac{2}{4}\) yard. How can you compare the length of the green ribbon to the lengths of the blue and red ribbons?
When two fractions refer to the same whole, what do you notice when the denominators you are comparing are the same?
When the denominators are the same, the parts are the same size, so you only need to compare the numerators.
the fraction with the greater numerator is the greater fraction.
Write a problem that compares two fractions with different numerators.
the two different fractions are 1/3 and 2/5.
In 3-5, compare fractions using <, >, or =. Use the number lines to help.
2/4 is the half portion in the number line.
2/6 is below the half portion in the number line.
In 6-9, use the number lines to compare the fractions. Write >, <, or =.
1/4 is nearest to 0.
1/4 = 1/4 x 1
Number Sense Randy wants to save $39. The table shows how much money he has saved. Explain how you can use estimation to decide if he has saved enough money.
Yes, he has saved enough money.
Randy wants to save $39.
in march month he saved $14.
in April he saved $11.
in May he saved $22.
14 + 11 + 22 = 47.
so he has saved enough money.
Scott ate \(\frac{2}{8}\) of a fruit bar. Anne ate \(\frac{4}{8}\) of a same-sized fruit bar. Can you tell who ate more of a fruit bar, Scott or Anne? Explain.
Anne ate more of a fruit bar.
Scott ate \(\frac{2}{8}\) of a fruit bar.
Anne ate \(\frac{4}{8}\) of a same-sized fruit bar.
so the whole is 8.
so Anne ate more of a fruit bar.
Be Precise Matt and Adara have identical pieces of cardboard for an art project. Matt uses \(\frac{2}{3}\) of his piece. Adara uses \(\frac{2}{6}\) of her piece. Who uses more, Matt or Adara? Draw two number lines to help explain your answer.
Matt uses more cardboard.
In the above-given question,
given that,
Matt and Adara have identical pieces of cardboard for an art project.
Matt uses 2/3 of his piece.
Adara uses 2/6 of her piece.
thirds are larger parts than sixths, so 2/3 > 2/6.
so Matt uses more cardboard.
Higher Order Thinking Some friends shared a pizza. Nicole ate \(\frac{2}{8}\) of the pizza. Chris ate \(\frac{1}{8}\) more than Johan. Mike ate \(\frac{1}{8}\) of the pizza. Johan ate more than Mike. Who ate the most pizza?
Chris ate the most pizza.
In the above-given question,
given that,
Some friends shared a pizza.
Nicole ate \(\frac{2}{8}\) of the pizza, and Mike ate \(\frac{1}{8}\) of the pizza.
Johan ate more than Mike, so Johan ate at least \(\frac{2}{8}\).
Chris ate \(\frac{1}{8}\) more than Johan, so Chris ate at least \(\frac{3}{8}\).
so Chris ate the most pizza.
Inez has 2 rows of plants. There are 8 plants in each row. Each plant has 3 flowers. How many flowers are there in all?
The number of flowers in all = 48.
In the above-given question,
Inez has 2 rows of plants.
there are 8 plants in each row.
each plant has 3 flowers.
8 x 2 = 16, and 16 x 3 = 48.
so the number of flowers in all = 48.
Daniel walked \(\frac{3}{4}\) of a mile. Theo walked \(\frac{3}{8}\) of a mile. Use the number lines to show 0 the fraction of a mile Daniel and Theo each walked. Then select all the correct statements that describe the fractions.
☐ \(\frac{3}{4}\) is equivalent to \(\frac{3}{8}\) because the fractions mark the same point.
☐ \(\frac{3}{4}\) is greater than \(\frac{3}{8}\) because it is farther from zero.
☐ \(\frac{3}{4}\) is less than \(\frac{3}{8}\) because it is farther from zero.
☐ \(\frac{3}{8}\) is less than \(\frac{3}{4}\) because it is closer to zero.
☐ \(\frac{3}{8}\) is greater than \(\frac{3}{4}\) because it is closer to zero.
Options B and D are the correct answers.
In the above-given question,
given that,
Daniel walked \(\frac{3}{4}\) of a mile.
Theo walked \(\frac{3}{8}\) of a mile.
\(\frac{3}{4}\) is farther from zero, and \(\frac{3}{8}\) is closer to zero.
so \(\frac{3}{4}\) > \(\frac{3}{8}\), and options B and D are the correct answers.
Lesson 13.7 Whole Numbers and Fractions
Jamie's family ate 12 pieces of apple pie during the week. Each piece was \(\frac{1}{6}\) of a whole pie. How many whole pies did Jamie's family eat? What fraction of a pie was left over? Explain how you decided.
I can … use representations to find fraction names for whole numbers.
Look Back! Jamie cuts another pie into smaller pieces. Each piece of pie is \(\frac{1}{8}\) of the whole. Jamie gives away 8 pieces. Does Jamie have any pie left over? Explain how you know.
Jamie does not have any pie left.
In the above-given question,
Jamie cuts another pie into smaller pieces.
Each piece of pie is \(\frac{1}{8}\) of the whole.
Jamie gives away 8 pieces, and 8 x \(\frac{1}{8}\) = \(\frac{8}{8}\) = 1 whole pie.
so Jamie does not have any pie left.
How Can You Use Fraction Names to Represent Whole Numbers?
What are some equivalent fraction names for 1, 2, and 3?
You can write a whole number as a fraction by writing the whole number as the numerator and 1 as the denominator.
The number line shows 3 wholes. Each whole is divided into 1 equal part.
1 whole divided into 1 equal part can be written as \(\frac{1}{1}\).
2 wholes each divided into 1 equal part can be written as \(\frac{2}{1}\).
3 wholes each divided into 1 equal part can be written as \(\frac{3}{1}\).
1 = \(\frac{1}{1}\)
You can find other equivalent fraction names for whole numbers.
Convince Me! Reasoning What equivalent fraction names can you write for 4 using denominators of 1, 2, or 4?
4 = \(\frac{4}{1}\) = \(\frac{8}{2}\) = \(\frac{16}{4}\).
You can use fractions to name whole numbers.
Twelve \(\frac{1}{3}\) fraction strips equal 4 whole fraction strips.
All whole numbers have fraction names. You can write 4 = \(\frac{12}{3}\).
You also know 4 = \(\frac{4}{1}\), so you can write 4 = \(\frac{4}{1}\) = \(\frac{12}{3}\).
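The same pattern works for any whole number: multiplying the numerator and the denominator of \(\frac{4}{1}\) by the same number gives another name for \(4\):

\[4=\frac{4}{1}=\frac{4\times 2}{1\times 2}=\frac{8}{2}=\frac{4\times 3}{1\times 3}=\frac{12}{3}\]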
Explain how you know that \(\frac{4}{1}\) = 4.
\(\frac{4}{1}\) means 4 wholes each divided into 1 equal part.
so 4/1 = 4.
Complete the number line.
The missing numbers in upside are 1/3, 3/3, 4/3, and 6/3.
the missing numbers on the downside are 1/6, 2/6, 4/6, 5/6, 6/6, 7/6, 9/6, 10/6, 11/6, and 12/6.
the number line is 1/3, 2/3, 3/3, 4/3, 5/3, 6/3.
so the missing numbers are 1/3, 3/3, 4/3, and 6/3.
1/6, 2/6, 4/6, 5/6, 6/6, 7/6, 9/6, 10/6, 11/6, and 12/6.
Look at the number line. Write two equivalent fractions for each whole number.
1 = 3/3 = 6/6.
2 = 6/3 = 12/6.
6 / 6 = 1.
In 4-7, write two equivalent fractions for each whole number. You can draw number lines to help.
4 = 8/2 = 4/1.
so the two equivalent fractions for 4 are 8/2 and 4/1.
5 = 10/2 = 5/1.
so the missing numbers are 10 and 5.
In 8-11, for each pair of fractions, write the equivalent whole number.
\(\frac{6}{2}\) = \(\frac{3}{1}\) =
\(\frac{6}{2}\) = \(\frac{3}{1}\) = 3.
for each pair of fractions, write the equivalent whole number.
so \(\frac{6}{2}\) = \(\frac{3}{1}\) = 3.
\(\frac{9}{3}\) = \(\frac{12}{4}\) =
\(\frac{9}{3}\) = \(\frac{12}{4}\) = 3.
so \(\frac{9}{3}\) = \(\frac{12}{4}\) = 3.
Henry needs to fix or replace his refrigerator. It will cost $376 to fix it. How much more will it cost to buy a new refrigerator than to fix the current one?
It will cost $593 more to buy a new refrigerator than to fix the current one.
In the above-given question,
Henry needs to fix or replace his refrigerator.
It will cost $376 to fix it, and a new refrigerator costs $969.
969 - 376 = 593.
so it costs $593 more to buy a new refrigerator.
Declan says, "To write an equivalent fraction name for 5, I can write 5 as the denominator and 1 as the numerator." Do you agree with Declan? Explain.
No, I do not agree with Declan.
In the above-given question,
writing 5 as the denominator and 1 as the numerator gives \(\frac{1}{5}\), which is not equal to 5.
to write an equivalent fraction name for 5, write 5 as the numerator and 1 as the denominator: 5 = \(\frac{5}{1}\).
so Declan is wrong.
Look for Relationships Describe a pattern in fractions equivalent to 1 whole.
In every fraction equivalent to 1 whole, the numerator is the same as the denominator, as in \(\frac{1}{1}\), \(\frac{2}{2}\), \(\frac{3}{3}\), and \(\frac{4}{4}\).
enVision® STEM There are four stages in a butterfly's life cycle: egg, caterpillar, chrysalis, and butterfly. Dan makes one whole poster for each stage. Use a fraction to show the number of whole posters Dan makes.
Dan makes 4 whole posters, which can be written as \(\frac{4}{1}\).
In the above-given question,
given that,
There are four stages in a butterfly's life cycle: egg, caterpillar, chrysalis, and butterfly.
Dan makes one whole poster for each stage, so he makes 4 posters.
so the number of whole posters Dan makes is 4 = \(\frac{4}{1}\).
Karen bought 4 movie tickets for $9 each. She has $12 left over. How much money did Karen have to start? Explain.
The money Karen had to start = $48.
In the above-given question,
Karen bought 4 movie tickets for $9 each. She has $12 left over.
4 x 9 = 36.
36 + 12 = 48.
so the money Karen had to start = $48.
Higher Order Thinking Peggy has 4 whole sandwiches. She cuts each whole into halves. Then Peggy gives away 1 whole sandwich. Show the number of sandwiches Peggy has left as a fraction.
The number of sandwiches Peggy has left as a fraction = \(\frac{6}{2}\).
In the above-given question,
given that,
Peggy has 4 whole sandwiches.
She cuts each whole into halves, so she has 8 halves, or \(\frac{8}{2}\).
Then Peggy gives away 1 whole sandwich, which is 2 halves.
8 - 2 = 6 halves are left.
so the number of sandwiches Peggy has left as a fraction = \(\frac{6}{2}\), which is 3 whole sandwiches.
Complete the equations. Match the fractions with their equivalent whole numbers.
6/1 = 12/2 = 6.
6/3 = 4/2 = 2.
the numbers are 1, 2, 4, and 6.
Lesson 13.8 Problem Solving
Construct Arguments
Lindsey and Matt are running in a 1-mile race. They have both run the same distance so far. Write a fraction that shows how far Lindsey could have run. Write a different fraction that shows how far Matt could have run. Construct a math argument to support your answer.
I can … construct math arguments using what I know about fractions.
Thinking Habits
Be a good thinker! These questions can help you.
How can I use numbers, objects, drawings, or actions to justify my argument?
Am I using numbers and symbols correctly?
Is my explanation clear and complete?
Look Back! Construct Arguments Are the two fractions you wrote equivalent? Construct a math argument using pictures, words, and numbers to support your answer.
How Can You Construct Arguments?
Clara and Ana are making rugs. The rugs will be the same size. Clara has finished \(\frac{3}{4}\) of her rug. Ana has finished \(\frac{3}{8}\) of her rug. Who has finished more of her rug? Conjecture: Clara has finished a greater portion of her rug than Ana.
A conjecture is a statement that you think is true. It needs to be proved.
How can I explain why my conjecture is correct?
I need to construct an argument to justify my conjecture.
How can I construct an argument?
Use numbers, objects, drawings, or actions correctly to explain my thinking.
Make sure my explanation is simple, complete, and easy to understand.
Here's my thinking…
I will use drawings and numbers to explain my thinking.
The number lines represent the same whole. One is divided into fourths. One is divided into eighths.
The number lines show that 3 of the fourths is greater than 3 of the eighths.
So, \(\frac{3}{4}\) > \(\frac{3}{8}\). The conjecture is correct.
Convince Me! Construct Arguments Use numbers to construct another math argument to justify the conjecture above. Think about how you can look at the numerator and the denominator.
Both fractions have the same numerator, 3. When numerators are the same, the fraction with the lesser denominator names larger parts, so \(\frac{3}{4}\) > \(\frac{3}{8}\).
Construct Arguments Paul and Anna were eating burritos. The burritos were the same size. Paul ate \(\frac{2}{6}\) of a burrito. Anna ate \(\frac{2}{3}\) of a burrito. Conjecture: Paul and Anna ate the same amount.
Draw a diagram to help justify the conjecture.
No, the conjecture is not correct.
Paul and Anna were eating burritos of the same size.
Paul ate \(\frac{2}{6}\) of a burrito and Anna ate \(\frac{2}{3}\) of a burrito.
The numerators are the same, but sixths are smaller than thirds, so 2 of the sixths is less than 2 of the thirds: \(\frac{2}{6}\) < \(\frac{2}{3}\).
So Paul and Anna did not eat the same amount.
Is the conjecture correct? Construct an argument to justify your answer.
No, the conjecture is not correct; the diagram shows \(\frac{2}{6}\) < \(\frac{2}{3}\).
Construct Arguments Reyna has a blue ribbon that is 1 yard long and a red ribbon that is 2 yards long. She uses \(\frac{2}{4}\) of the red ribbon and \(\frac{2}{4}\) of the blue ribbon.
Conjecture: Reyna uses the same amount of red and blue ribbon.
No, the conjecture is not correct.
The blue ribbon is 1 yard long and the red ribbon is 2 yards long, so the two wholes are different sizes.
\(\frac{2}{4}\) of the 2-yard red ribbon is 1 yard, but \(\frac{2}{4}\) of the 1-yard blue ribbon is only \(\frac{1}{2}\) yard.
So Reyna uses more red ribbon than blue ribbon.
Explain another way you could justify the conjecture.
Another way is to draw both ribbons to scale and shade \(\frac{2}{4}\) of each: the shaded part of the 2-yard red ribbon is twice as long as the shaded part of the 1-yard blue ribbon, so the conjecture does not hold.
Performance Task School Fair Twenty-one students worked at the school fair. Mrs. Gold's students worked at a class booth. The table shows the fraction of 1 hour that her students worked. Mrs. Gold wants to know the order of the work times for the students from least to greatest.
Make Sense and Persevere What comparisons do you need to make to find out who worked the least?
To find who worked the least, you need to compare all the fractions of an hour in the table; with the times shown below, the student who worked the least is Jose.
School Fair Twenty-one students worked at the school fair.
Mrs. Gold's students worked at a class booth.
The table shows the fraction of 1 hour that her students worked.
Tim worked \(\frac{4}{6}\) of an hour, Cathy \(\frac{2}{4}\), Jose \(\frac{2}{6}\), and Pedro \(\frac{3}{4}\).
\(\frac{2}{6}\) < \(\frac{2}{4}\) (same numerator, greater denominator), and both are less than \(\frac{4}{6}\) and \(\frac{3}{4}\).
So the student who worked the least is Jose.
Be Precise What is the whole for each student's time? Do all the fractions refer to the same whole?
The whole for each student's time is 1 hour, and all the fractions refer to that same whole.
Because the wholes are the same, the work times can be compared directly.
Use Appropriate Tools What tool could you use to solve this problem? Explain how you would use this tool.
You could use fraction strips: make a strip for each student's fraction of an hour and compare the lengths of the shaded parts.
Construct Arguments What is the order of the work times from least to greatest? Construct a math argument to justify your answer.
The order of the work times from least to greatest is \(\frac{2}{6}\) (Jose), \(\frac{2}{4}\) (Cathy), \(\frac{4}{6}\) (Tim), \(\frac{3}{4}\) (Pedro), since \(\frac{2}{6}\) < \(\frac{2}{4}\) < \(\frac{4}{6}\) < \(\frac{3}{4}\).
Topic 13 Fluency Practice Activity
Find a Match
Work with a partner. Point to a clue. Read the clue.
Look below the clues to find a match. Write the clue letter in the box next to the match.
Find a match for every clue.
I can … multiply and divide within 100.
A. is equal to 3 × 3
B. is equal to 4 × 4
C. is equal to 9 × 4
D. is equal to 0 ÷ 10
E. is equal to 35 ÷ 5
F. is equal to 12 ÷ 4
G. is equal to 5 × 4
H. is equal to 3 × 8
I. Is equal to 2 × 5
J. Is equal to 3 × 10
K. is equal to 9 × 2
L. is equal to 2 × 4
6 × 6 = 36, 40 ÷ 4 = 10, 0 × 9 = 0, 3 × 6 = 18, 32 ÷ 4 = 8, 10 × 2 = 20, 5 × 6 = 30, 21 ÷ 7 = 3, 8 × 2 = 16, and 6 × 4 = 24.
A is equal to 9.
B is equal to 16.
C is equal to 36.
D is equal to 0.
E is equal to 35 / 5 = 7.
F is equal to 3.
G is equal to 20.
H is equal to 24.
I is equal to 10.
J is equal to 30.
K is equal to 18.
L is equal to 8.
Topic 13 Vocabulary Review
Understand Vocabulary
Write T for true or F for false.
F: \(\frac{1}{6}\) and \(\frac{2}{6}\) have the same numerator.
False: the numerators are 1 and 2. The two fractions have the same denominator, 6.
T: \(\frac{1}{2}\) and \(\frac{4}{8}\) are equivalent fractions.
True: both fractions name the same part of a whole.
F: \(\frac{3}{8}\) is a unit fraction.
False: a unit fraction has a numerator of 1, and \(\frac{3}{8}\) has a numerator of 3.
T: A whole number can be written as a fraction.
True: any whole number \(n\) can be written as \(\frac{n}{1}\).
T: The denominators in \(\frac{1}{3}\) and \(\frac{2}{3}\) are the same.
True: both fractions have the denominator 3; only the numerators differ.
F: A number line always shows fractions.
False: a number line can show whole numbers or other numbers; it does not have to show fractions.
For each of these terms, give an example and a non-example.
Fraction: \(\frac{1}{2}\) is an example; the whole number 5 by itself is a non-example.
Unit fraction: \(\frac{1}{3}\) is an example; \(\frac{2}{3}\) is a non-example because its numerator is not 1.
Equivalent fractions: \(\frac{1}{2}\) and \(\frac{2}{4}\) are an example; \(\frac{1}{2}\) and \(\frac{1}{3}\) are a non-example.
Use at least 2 terms from the Word List to explain how to compare \(\frac{1}{2}\) and \(\frac{1}{3}\).
\(\frac{1}{2}\) and \(\frac{1}{3}\) are both unit fractions: each has a numerator of 1, but they have different denominators.
When the numerators are the same, the fraction with the lesser denominator is greater, so \(\frac{1}{2}\) > \(\frac{1}{3}\); the two fractions are not equivalent fractions.
Topic 13 Reteaching
Set A pages 485-488
Two fractions are equivalent if they name the same part of a whole.
What is one fraction that is equivalent to \(\frac{6}{8}\)?
You can use fraction strips to find equivalent fractions.
\(\frac{6}{8}\) = \(\frac{3}{4}\)
You also can use area models to see that \(\frac{6}{8}\) and \(\frac{3}{4}\) are equivalent fractions. The shaded fractions both show the same part of the whole.
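As a worked check of the equivalence, divide the numerator and denominator by the same number: \(\frac{6}{8} = \frac{6 \div 2}{8 \div 2} = \frac{3}{4}\).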
Remember to check that both sets of strips are the same length.
In 1 and 2, find an equivalent fraction. Use fraction strips and models to help.
An equivalent fraction is \(\frac{2}{3}\).
Set B pages 489-492
Riley says the library is \(\frac{2}{8}\) of a mile from their house. Sydney says it is \(\frac{1}{4}\) of a mile.
Use the number lines to find who is correct.
The fractions \(\frac{2}{8}\) and \(\frac{1}{4}\) are equivalent. They are the same distance from 0 on a number line. Riley and Sydney are both correct.
Remember that equivalent fractions have different names, but they represent the same point on a number line.
In 1 and 2, write two fractions that name the same location on the number line.
In Exercise 1, the two fractions name the same location on the number line, the whole number 1.
In Exercise 2, the two fractions that name the same location on the number line are \(\frac{3}{6}\) and \(\frac{1}{2}\).
Set C pages 493-496
You can use fraction strips to compare fractions with the same denominator.
Compare \(\frac{3}{4}\) to \(\frac{2}{4}\).
The denominator of each fraction is 4.
Three \(\frac{1}{4}\) fraction strips show \(\frac{3}{4}\).
Two \(\frac{1}{4}\) fraction strips show \(\frac{2}{4}\).
The fraction strips showing \(\frac{3}{4}\) have 1 more unit fraction than the strips showing \(\frac{2}{4}\).
So \(\frac{3}{4}\) > \(\frac{2}{4}\).
Remember that if fractions have the same denominator, the greater fraction has a greater numerator.
In 1-3, compare. Write <, >, or =. Use fraction strips to help.
Set D pages 497-500
You can use fraction strips to compare fractions with the same numerator.
The numerator of each fraction is 1.
The \(\frac{1}{6}\) fraction strip is less than the \(\frac{1}{2}\) strip.
So \(\frac{1}{6}\) < \(\frac{1}{2}\).
You can use reasoning to understand. Think about dividing a whole into 6 pieces and dividing it into 2 pieces. One of 6 pieces is less than 1 of 2 pieces.
Remember that if fractions have the same numerator, the greater fraction has a lesser denominator.
In 1-3, compare. Write <, >, or =. Use fraction strips to help.
Set E pages 501-504
You can compare fractions using benchmark numbers such as 0, \(\frac{1}{2}\), and 1.
Chris and Mary are painting pictures. The pictures are the same size. Chris painted \(\frac{3}{4}\) of his picture. Mary painted her picture. Who painted the greater amount?
Chris painted \(\frac{3}{4}\) of his picture, and \(\frac{3}{4}\) is greater than the benchmark \(\frac{1}{2}\), while Mary's fraction is less than \(\frac{1}{2}\).
So Chris painted the greater amount.
Remember that you can compare each fraction to a benchmark number to see how they relate to each other.
In 1 and 2, use benchmark numbers to help solve.
Mike had \(\frac{2}{6}\) of a candy bar. Sally had \(\frac{4}{6}\) of a candy bar. Whose fraction of a candy bar was closer to 1? Closer to 0?
Sally's fraction was closer to 1, and Mike's was closer to 0.
Mike had \(\frac{2}{6}\) of a candy bar and Sally had \(\frac{4}{6}\) of a candy bar.
\(\frac{2}{6}\) is less than \(\frac{1}{2}\), so it is closer to 0; \(\frac{4}{6}\) is greater than \(\frac{1}{2}\), so it is closer to 1.
Paul compared two bags of rice. One weighs \(\frac{4}{6}\) pound, and the other weighs \(\frac{4}{8}\) pound. Which bag is heavier?
The \(\frac{4}{6}\)-pound bag is heavier.
Paul compared two bags of rice: one weighs \(\frac{4}{6}\) pound and the other weighs \(\frac{4}{8}\) pound.
The numerators are the same, and sixths are larger than eighths, so \(\frac{4}{6}\) > \(\frac{4}{8}\).
Set F pages 505-508
You can use a number line to compare fractions.
Which is greater, \(\frac{3}{6}\) or \(\frac{4}{6}\)?
\(\frac{4}{6}\) is farther from zero than \(\frac{3}{6}\), so \(\frac{4}{6}\) is greater.
You also can compare two fractions with the same numerator by drawing two number lines.
Remember to draw two number lines that are equal in length when comparing fractions with different denominators.
In 1 and 2, compare. Write <, >, or =. Use number lines to help.
Set G pages 509-512
How many thirds are in 2 wholes?
You can use a number line or fraction strips to find a fraction name for 2 using thirds.
The whole number 2 can also be written as the fraction \(\frac{6}{3}\).
Remember that when you write whole numbers as fractions, the numerator can be greater than the denominator.
In 1-4, write an equivalent fraction for each whole number.
The whole number is 3, so the equivalent fraction is \(\frac{12}{4}\).
For the whole number 2, the equivalent fraction is \(\frac{4}{2}\).
In 5-8, write the equivalent whole number for each fraction.
\(\frac{6}{3}\)
The equivalent whole number is 2.
\(\frac{10}{2}\)
The equivalent whole number is 5, since \(\frac{10}{2}\) = 5.
Set H pages 513-516
Think about these questions to help construct arguments.
Remember that when you construct an argument, you explain why your work is correct.
Odell and Tamra paint two walls with the same dimensions. Odell paints \(\frac{1}{6}\) of a wall. Tamra paints \(\frac{1}{3}\) of the other wall. Conjecture: Odell paints less than Tamra.
Draw a diagram to justify the conjecture.
Yes, the conjecture is correct: Odell paints less than Tamra.
Odell and Tamra paint two walls with the same dimensions; Odell paints \(\frac{1}{6}\) of a wall and Tamra paints \(\frac{1}{3}\) of the other wall.
Dividing the same whole into 6 parts makes smaller parts than dividing it into 3 parts, so \(\frac{1}{6}\) < \(\frac{1}{3}\) and Odell paints less.
Use the diagram to justify the conjecture.
Topic 13 Assessment Practice
Two friends are working on a project. So far, Cindy has done \(\frac{4}{8}\) of the project, and Kim has done \(\frac{3}{8}\) of the project. Who has done more of the project? Explain.
Cindy has done more of the project.
Cindy has done \(\frac{4}{8}\) of the project and Kim has done \(\frac{3}{8}\).
The fractions have the same denominator, and 4 > 3, so \(\frac{4}{8}\) > \(\frac{3}{8}\).
Serena can compare \(\frac{3}{4}\) and \(\frac{3}{6}\) without using fraction strips. She says that a whole divided into 4 equal parts will have larger parts than the same whole divided into 6 equal parts. Three larger parts must be more than three smaller parts, so \(\frac{3}{4}\) is greater than \(\frac{3}{6}\). Is Serena correct? If not, explain Serena's error. Then, write the correct comparison using symbols.
Yes, Serena is correct.
A whole divided into 4 equal parts has larger parts than the same whole divided into 6 equal parts, so three larger parts are more than three smaller parts.
Using symbols: \(\frac{3}{4}\) > \(\frac{3}{6}\).
Jill finished reading \(\frac{2}{3}\) of a book for a summer reading project. Owen read \(\frac{2}{8}\) of the same book. Use the number lines to compare how much Jill and Owen each read. Who reads more of the book?
The missing fractions on the thirds number line are \(\frac{1}{3}\) and \(\frac{2}{3}\).
Jill finished reading \(\frac{2}{3}\) of the book and Owen read \(\frac{2}{8}\) of the same book.
On the number lines, \(\frac{2}{3}\) is farther from 0 than \(\frac{2}{8}\), so Jill read more of the book.
A small cake is cut into 4 equal pieces. What fraction represents the entire cake? Explain.
The fraction \(\frac{4}{4}\) represents the entire cake.
The cake is cut into 4 equal pieces, and all 4 of the 4 pieces make the whole cake, so the entire cake is \(\frac{4}{4}\) = 1.
Mark and Sidney each have a piece of wood that is the same size. Mark paints \(\frac{2}{8}\) of his piece of wood. Sidney paints \(\frac{5}{8}\) of her piece of wood. Who painted a fraction that is closer to 1 than to 0? Explain how you found your answer. Then tell who painted less of his or her piece of wood.
Sidney painted a fraction that is closer to 1 than to 0.
Mark painted \(\frac{2}{8}\) of his piece of wood and Sidney painted \(\frac{5}{8}\) of hers.
\(\frac{5}{8}\) is greater than \(\frac{4}{8}\) = \(\frac{1}{2}\), so it is closer to 1; \(\frac{2}{8}\) is less than \(\frac{1}{2}\), so it is closer to 0.
Mark painted less of his piece of wood, since \(\frac{2}{8}\) < \(\frac{5}{8}\).
Greg colored the fraction model below.
A. Which fractions name the purple part of the model? Select all that apply.
6 of the 8 equal boxes are shaded purple, so \(\frac{6}{8}\) names the purple part of the model.
Of the choices 1/2, 3/4, 2/3, 4/6, and 6/8, the fractions \(\frac{3}{4}\) and \(\frac{6}{8}\) name the purple part, because \(\frac{6}{8}\) = \(\frac{3}{4}\).
B. Does \(\frac{1}{4}\) name the unshaded part of the model? Explain.
Yes, \(\frac{1}{4}\) names the unshaded part of the model.
2 of the 8 boxes are not shaded, so the unshaded part is \(\frac{2}{8}\), and \(\frac{2}{8}\) = \(\frac{1}{4}\).
Carl, Fiona, and Jen each had a sandwich. The sandwiches were the same size and cut into eighths. Carl ate \(\frac{7}{8}\) of a sandwich, Fiona ate \(\frac{3}{8}\) of a sandwich, and Jen ate \(\frac{6}{8}\) of a sandwich. Who ate the most? Explain.
Carl ate the most.
Carl, Fiona, and Jen each had a sandwich.
The sandwiches were the same size and cut into eighths.
Carl ate \(\frac{7}{8}\) of a sandwich.
Fiona ate \(\frac{3}{8}\) of a sandwich.
Jen ate \(\frac{6}{8}\) of a sandwich.
All three fractions have the same denominator, so compare the numerators: \(\frac{3}{8}\) < \(\frac{6}{8}\) < \(\frac{7}{8}\).
So Carl, who ate \(\frac{7}{8}\) of a sandwich, ate the most.
George wants to know if two pieces of wire are the same length. One wire is \(\frac{6}{8}\) foot. The other is \(\frac{3}{4}\) foot. Are they the same length? Fill in the fractions on the number line to compare the lengths of the pieces of wire. Then explain your answer.
The missing fractions on the fourths number line are \(\frac{1}{4}\), \(\frac{2}{4}\), and \(\frac{3}{4}\); on the eighths line they are \(\frac{2}{8}\), \(\frac{4}{8}\), and \(\frac{6}{8}\).
One wire is \(\frac{6}{8}\) foot and the other is \(\frac{3}{4}\) foot.
\(\frac{6}{8}\) and \(\frac{3}{4}\) sit at the same point on the number lines, so the two pieces of wire are the same length.
Lezlie hiked \(\frac{3}{8}\) mile on Monday. On Wednesday she hiked \(\frac{3}{6}\) mile. She hiked a mile on Friday. Use benchmark fractions to arrange the lengths of the hikes in order from shortest to longest hike.
From shortest to longest, the hikes are \(\frac{3}{8}\) mile, \(\frac{3}{6}\) mile, and \(\frac{3}{4}\) mile.
Lezlie hiked \(\frac{3}{8}\) mile on Monday, \(\frac{3}{6}\) mile on Wednesday, and \(\frac{3}{4}\) mile on Friday.
Using benchmark fractions: \(\frac{3}{8}\) is less than \(\frac{1}{2}\), \(\frac{3}{6}\) is equal to \(\frac{1}{2}\), and \(\frac{3}{4}\) is greater than \(\frac{1}{2}\).
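As a worked check (not part of the printed answer), rename all three lengths in eighths: \(\frac{3}{6} = \frac{4}{8}\) and \(\frac{3}{4} = \frac{6}{8}\), so \(\frac{3}{8} < \frac{4}{8} < \frac{6}{8}\), which is the same order.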
A mural is divided into 3 equal parts. What fraction represents the entire mural? Explain.
The fraction \(\frac{3}{3}\) represents the entire mural.
The mural is divided into 3 equal parts, and all 3 of the 3 parts make the whole, so the entire mural is \(\frac{3}{3}\) = 1.
Meagan ate \(\frac{3}{4}\) of a cookie. Write an equivalent fraction for the amount of cookie Meagan did NOT eat. Then write a fraction that is equivalent to the amount of the cookie that Meagan did eat, and explain why your answer is correct.
Meagan did not eat \(\frac{1}{4}\) of the cookie; an equivalent fraction for that amount is \(\frac{2}{8}\).
An equivalent fraction for the \(\frac{3}{4}\) she did eat is \(\frac{6}{8}\).
This is correct because multiplying the numerator and denominator of \(\frac{3}{4}\) by 2 gives \(\frac{6}{8}\), which names the same part of the cookie.
Circle each fraction that is equivalent to 1. Explain your reasoning. Then give another fraction that is equal to 1.
Of the fractions 2/4, 3/3, 3/6, 4/6, and 6/6, the ones equivalent to 1 are \(\frac{3}{3}\) and \(\frac{6}{6}\).
A fraction equals 1 whenever its numerator and denominator are the same: \(\frac{3}{3}\) = 1 and \(\frac{6}{6}\) = 1.
Another fraction equal to 1 is \(\frac{4}{4}\).
Use the number line to help order the fractions from least to greatest. Then explain how you found your answer.
From least to greatest, the fractions are \(\frac{0}{4}\), \(\frac{1}{4}\), \(\frac{1}{2}\), \(\frac{6}{8}\), and \(\frac{4}{4}\).
I found the order by locating each fraction on the number line and reading from left to right: \(\frac{1}{4}\) is less than \(\frac{1}{2}\), and \(\frac{6}{8}\) lies between \(\frac{1}{2}\) and 1.
Eva and Landon had the same math homework. Eva finished the homework. Landon finished of the homework. Conjecture: Eva and Landon finished the same amount of their homework.
A. Complete the number lines to help think about the conjecture.
The completed eighths number line shows 1/8, 2/8, 3/8, 4/8, 5/8, 6/8, 7/8, and 8/8.
The missing fractions on the fourths number line are 1/4, 2/4, 3/4, and 4/4.
B. Use your diagram to decide if the conjecture is correct. Explain.
For each pair of fractions, write the equivalent whole number in the box.
16/4 = 8/2 = 4; 6/3 = 4/2 = 2; 8/8 = 6/6 = 1.
The pairs of fractions are 16/4 and 8/2, 6/3 and 4/2, and 8/8 and 6/6.
Topic 13 Performance Task
Clothing Store Devin, Jenna, Eli, and Gabby work at a clothing store. On Saturday they each worked the same number of hours.
The Time Spent at Cash Register table shows the fraction of time each person spent checking out customers. The Time Spent on Customer Calls table shows the fraction of an hour Jenna spent answering phone calls for the store.
Use the Time Spent at Cash Register table to answer Questions 1-3.
Draw fraction strips to show the fraction of time each person worked at the cash register.
The fractions of time at the cash register are \(\frac{3}{6}\) for Devin, \(\frac{2}{6}\) for Jenna, \(\frac{6}{6}\) for Eli, and \(\frac{5}{6}\) for Gabby.
For each person, draw a fraction strip divided into 6 equal parts and shade that many sixths.
Who spent the most time at the cash register?
Eli spent the most time at the cash register: \(\frac{6}{6}\) of an hour, the whole hour, which is greater than \(\frac{5}{6}\), \(\frac{3}{6}\), and \(\frac{2}{6}\).
Write a comparison to show the time Gabby spent at the cash register compared to the time Devin spent. Use >, <, or =.
Gabby spent more time at the cash register than Devin: \(\frac{5}{6}\) > \(\frac{3}{6}\).
Use the Time Spent on Customer Calls table to answer this question: On which day did Jenna spend closest to one hour on the phone? Explain how you know.
Jenna spent closest to one hour on the phone on Monday.
On Saturday she spent \(\frac{3}{6}\) of an hour, on Sunday \(\frac{3}{5}\), and on Monday \(\frac{3}{4}\).
The numerators are the same, so the fraction with the least denominator is greatest: \(\frac{3}{4}\) is nearest to 1.
So Jenna spent closest to one hour on Monday.
The store sells different colors of men's socks. The Socks table shows the fraction for each sock color in the store.
Use the Socks table to answer Questions 5 and 6.
Complete the fractions on the number line. Label the fraction that represents each sock color.
The completed number line shows the eighths \(\frac{1}{8}\), \(\frac{2}{8}\), \(\frac{3}{8}\), \(\frac{4}{8}\), \(\frac{5}{8}\), \(\frac{6}{8}\), \(\frac{7}{8}\), and 1.
Label white socks at \(\frac{1}{8}\), black socks at \(\frac{1}{4}\) (the same point as \(\frac{2}{8}\)), brown socks at \(\frac{3}{8}\), and gray socks at \(\frac{2}{8}\).
Does the store have more brown socks or more white socks?
The store has more brown socks: the denominators are the same and 3 > 1, so the \(\frac{3}{8}\) that are brown is greater than the \(\frac{1}{8}\) that are white.
Use the number line in Exercise 5 Part A to construct an argument to justify the following conjecture: The store has an equal number of gray socks and black socks.
Yes, the conjecture is correct: the store has an equal number of gray socks and black socks.
On the number line, gray (\(\frac{2}{8}\)) and black (\(\frac{1}{4}\)) are at the same point, because \(\frac{2}{8}\) = \(\frac{1}{4}\).
Use the Miguel's Socks table to answer the question.
Miguel bought some socks at the clothing store. After he washed them, he counted the number of individual socks he has. Each sock is \(\frac{1}{2}\) of a pair. How many pairs of black socks does he have? Write this number as a fraction.
Miguel has 3 pairs of black socks.
He counted 6 individual black socks (and 8 gray socks), and each sock is \(\frac{1}{2}\) of a pair.
6 halves make \(\frac{6}{2}\) = 3, so the number of pairs of black socks, written as a fraction, is \(\frac{6}{2}\).
Kemono Chen
A Chinese high school student, interested in evaluating integrals and other mathematical things.
Evaluating $\int_0^1\ln(1+x^2)\ln(x^2+x^3)\frac{dx}{1+x^2}$
calculus integration definite-integrals asked Oct 13 '18 at 13:23
Evaluate $\lim\limits_{x\to\infty}\frac1x\int_0^x\max\{\sin t,\sin(t\sqrt2)\}dt$
calculus integration definite-integrals asked Jul 22 '18 at 9:12
Evaluating $\int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2}dx$
calculus integration definite-integrals hypergeometric-function asked Feb 24 at 11:37
Evaluate $\int_0^{\pi/4}{(4\cot x\ln\sec x-x)\ln^2\tan xdx}$
calculus integration definite-integrals asked Jul 12 at 1:08
Evaluate $\int_0^\infty{\frac{\tan x}{x^n}dx}$
calculus integration definite-integrals asked Mar 17 '18 at 22:33
Finding $\max |A|$ with $a_{ij}=\pm 1$
linear-algebra determinant asked Sep 10 '18 at 8:31
Evaluating $\int_0^{\pi/2}\operatorname{arcsinh}(2\tan x)dx$
calculus integration definite-integrals polylogarithm asked Jan 22 at 8:15
Entire function $f(z)$ grows like $\exp(x^\pi)$ as $x\to+\infty$
complex-analysis limits entire-functions asked Jan 15 at 5:52
Integral $\int_{\sqrt{33}}^\infty\frac{dx}{\sqrt{x^3-11x^2+11x+121}}$
integration definite-integrals elliptic-integrals asked Jan 18 at 13:41
Evaluating $\sum_{n=1}^\infty\frac{(H_n)^2}{n}\frac{\binom{2n}n}{4^n}$
calculus sequences-and-series definite-integrals harmonic-numbers asked Apr 12 at 5:39
Evaluate $\int_0^1 \log \left( \frac{x^2+\sqrt{3}x+1}{x^2-\sqrt{3}x+1} \right) \frac{dx}{x} $
How to prove $\int_0^\infty \ln(1+\frac{z}{\cosh(x)})dx=\frac{\pi^2}{8}+\frac{(\cosh^{-1}(z))^2}{2},z\ge1$ and a closed form for $-1<z<1$?
Compute polylog of order $3$ at $\frac{1}{2}$
Solving $\int_{0}^{\infty} \frac{\sin(x)}{x^3}dx$
Evaluating $\int_0^1\frac{\ln(1+x-x^2)}xdx$ without using polylogarithms.
How to prove $\int_{-\infty}^{+\infty} \frac{x^2}{\cosh(x)^2} dx = \frac{\pi^2}{6}$?
How to compute the limit,$\lim_{x\rightarrow 0}\frac{3x^2-3x\sin x}{x^2+x\cos1/x}$
Is $\sqrt{x}$ an even function?
October 2017, 37(10): 5085-5104. doi: 10.3934/dcds.2017220
On parabolic external maps
Luna Lomonaco 1, Carsten Lunde Petersen 2, and Weixiao Shen 3
Departamento de Matemática Aplicada, Instituto de Matemática e Estatistica, Universidade de São Paulo, Rua do Matão 1010,05508-090 São Paulo -SP, Brazil
Department of Science, NSM, IMFUFA, Roskilde University, Universitetsvej 1, 4000 Roskilde, Denmark
Shanghai Center for Mathematical Sciences and School of Mathematical Sciences, Fudan University, Handan Road 220, Shanghai, China 200433
Received March 2016; Revised April 2017; Published June 2017.
Fund Project: The first author has been supported by FAPESP via the process 2013/20480-7. The second author has been supported by the Danish Council for Independent Research | Natural Sciences via the grant DFF-4181-00502.
We prove that any $C^{1+\text{BV}}$ degree $d \geq 2$ circle covering $h$ having all periodic orbits weakly expanding is conjugate by a $C^{1+\text{BV}}$ diffeomorphism to a metrically expanding map. We use this to connect the space of parabolic external maps (coming from the theory of parabolic-like maps) to metrically expanding circle coverings.
Keywords: Circle covering, metric expanding, smooth conjugacy, parabolic-like map, parabolic external map.
Mathematics Subject Classification: Primary: 37E10; Secondary: 37F15.
Citation: Luna Lomonaco, Carsten Lunde Petersen, Weixiao Shen. On parabolic external maps. Discrete & Continuous Dynamical Systems - A, 2017, 37 (10) : 5085-5104. doi: 10.3934/dcds.2017220
Figure 1. A map of the maps we consider. $\mathcal{F}_d^{1+\text{BV}}$ is the set of degree $d$ smooth coverings $h:\mathcal{S}^1\to\mathcal{S}^1$ with $h\in C^{1+\text{BV}}$; $\mathcal{O}_d^{1+\text{BV}}$ is the set of maps $h\in\mathcal{F}_d^{1+\text{BV}}$ for which, for every periodic point $p$, say of period $s$, there is a neighborhood $U(p)$ of $p$ such that for all $x\in U(p)\setminus \{p\}$ we have $Dh^s(x)>1$; while $\mathcal{M}_d^{1+\text{BV}}$ and $\mathcal{T}_d^{1+\text{BV}}$ are the classes of respectively metrically and topologically expanding $h\in \mathcal{F}_d^{1+\text{BV}}$. $\mathcal{F}_d$ is the class of real analytic degree $d$ circle coverings, $\mathcal{T}_d$ and $\mathcal{M}_d$ the set of respectively topologically and metrically expanding $h \in \mathcal{F}_d$, and $\mathcal{T}_{d,*}$ and $\mathcal{M}_{d,*}$ the set of respectively topologically and metrically expanding $h \in \mathcal{F}_d$ for which $\text{Par}(h) \neq \emptyset$. Also, $\mathcal{P}_d$ is the class of external maps and $\mathcal{P}_{d,*}$ the class of parabolic external maps. Finally, $\mathcal{H}_{d,1} =\{ h \in \mathcal{F}_d | \,\,h \sim_{qs} h_d (z)= \frac{z^d+(d-1)/(d+1)}{(d-1)z^d/(d+1)+1}\}$. By Corollary 2.2, $\mathcal{O}_d^{1+\text{BV}}=\mathcal{T}_d^{1+\text{BV}}$, and by Theorem 2.4, $\mathcal{M}_d\subset\mathcal{P}_d=\mathcal{T}_{d}$, $\mathcal{M}_{d,*}\subset\mathcal{P}_{d,*}=\mathcal{T}_{d,*}$ and $\mathcal{M}_{d,1}\subset\mathcal{P}_{d,1}=\mathcal{H}_{d,1}=\mathcal{T}_{d,1}$.
Figure 2. A parabolic external map in $\mathcal{P}_{d,1}$.
Figure 3. Construction
What is the most efficient way to generate a random permutation from probabilistic pairwise swaps?
The question I am interested in is related to generating random permutations. Given a probabilistic pairwise swap gate as the basic building block, what is the most efficient way to produce a uniformly random permutation of $n$ elements? Here I take "probabilistic pairwise swap gate" to be the operation which implements a swap gate between two chosen elements $i$ and $j$ with some probability $p$ which can be freely chosen for each gate, and the identity otherwise.
I realise this is not usually the way one generates random permutations, where usually one might use something like a Fisher-Yates shuffle, however, this will not work for the application I have in mind as the allowed operations are different.
Clearly this can be done, the question is how efficiently. What is the least number of probabilistic swaps necessary to achieve this goal?
Anthony Leverrier provides a method below which does indeed produce the correct distribution using $O(n^2)$ gates, with Tsuyoshi Ito providing another approach with the same scaling in the comments. However, the best lower bound I have so far seen is $\lceil \log_2(n!) \rceil$, which scales as $\Theta(n\log n)$. So, the question still remains open: is $O(n^2)$ the best that can be done (i.e. is there a better lower bound)? Or alternatively, is there a more efficient circuit family?
Several of the answers and comments have proposed circuits which are comprised entirely of probabilistic swaps where the probability is fixed at $\frac{1}{2}$. Such a circuit cannot solve this problem for the following reason (lifted from the comments):
Imagine a circuit which uses $m$ such gates. Then there are $2^m$ equiprobable computational paths, and so any permutation must occur with probability $k\,2^{-m}$ for some integer $k$. However, for a uniform distribution we require that $k\,2^{-m}=\frac{1}{n!}$, which can be rewritten as $k\, n! = 2^m$. Clearly this can't be satisfied for any integer value of $k$ when $n\geq3$, since $3 \mid n!$ for $n\geq 3$, but $3\nmid 2^m$.
UPDATE (from mjqxxxx who is offering the bounty):
The bounty being offered is for (1) a proof that $\omega(n \log n)$ gates are required, or (2) a working circuit, for any $n$, that uses fewer than $n(n-1)/2$ gates.
co.combinatorics randomness probabilistic-circuits
Joe Fitzsimons
$\begingroup$ @Anthony: Perhaps it's not obvious, but you can: Imagine that circuit $C$ creates a uniform distribution of permutations of the first $n-1$ elements. Then $C$ followed by a probabilistic swap (with probability 0.5) between position $n-1$ and position $n$ will produce a uniformly random choice for position $n$. If you follow this by applying $C$ again to the first $n-1$ elements, you should get a uniformly random distribution. $\endgroup$ – Joe Fitzsimons Mar 7 '11 at 14:13
$\begingroup$ ok, thanks for the explanation! Note that the probabilistic swap should have proba $(n-1)/n$ between position $n-1$ and position $n$. $\endgroup$ – Anthony Leverrier Mar 7 '11 at 14:23
$\begingroup$ In terms of entropy required, the algorithm needs $(n-1) h(1/2) + (n-2) h(1/3) + \cdots + (n-k) h(1/(k+1)) + \cdots + h(1/n)$ random bits where $h(.)$ is the binary entropy function. I cannot compute that sum exactly but it is $O(n \log_2(n)^2)$ according to mathematica ... while the optimum is at least $O(n \log_2(n))$. $\endgroup$ – Anthony Leverrier Mar 7 '11 at 16:45
$\begingroup$ This is different from what you want, but there is a family of circuits of size O(n log n) which generate every permutation with probability at least 1/p(n!) for some polynomial p: consider a sorting network with size O(n log n) and replace each comparator with a probability-1/2 swap gate. Because of the correctness of the sorting network, every permutation has to arise with nonzero probability, which is necessarily at least 1/2^{O(n log n)} = 1/poly(n!). $\endgroup$ – Tsuyoshi Ito Mar 7 '11 at 22:59
$\begingroup$ Back to the original problem. Note that the O(n^2) solution which Anthony described can be viewed as replacing each comparator in the sorting network representing the selection sort with a probabilistic swap gate with a suitable probability. (more) $\endgroup$ – Tsuyoshi Ito Mar 7 '11 at 23:04
A working algorithm that I described in a comment above is the following:
First bring a random element into position $n$, each element ending there with probability $1/n$: swap positions 1 and 2 with probability $1/2$, then 2 and 3 with probability $2/3$, ..., then $n-1$ and $n$ with probability $(n-1)/n$.
Apply the same procedure to bring a random element into position $n-1$: swap positions 1 and 2 with probability $1/2$, ..., then positions $n-2$ and $n-1$ with probability $(n-2)/(n-1)$.
The number of gates required by this algorithm is $(n-1)+(n-2)+ \cdots + 2+1 = n(n-1)/2 = O(n^2)$.
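For concreteness, here is a minimal Python sketch of this circuit (an illustration of the answer above, not the author's code); each probabilistic swap gate is simulated with one call to `random.random()`, and the gate probabilities are exactly those listed:

```python
import random

def random_permutation(n):
    """Uniformly random permutation of range(n), built only from
    probabilistic pairwise swaps; n(n-1)/2 gates in total."""
    a = list(range(n))
    # The pass for target position k makes a[k] uniform over the
    # k+1 elements originally in positions 0..k (0-indexed).
    for k in range(n - 1, 0, -1):
        for i in range(k):
            # Swap adjacent positions i and i+1 with probability (i+1)/(i+2):
            # 1/2, then 2/3, ..., up to k/(k+1).
            if random.random() < (i + 1) / (i + 2):
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

Tallying, say, `random_permutation(3)` over many trials gives each of the $3! = 6$ permutations with frequency close to $1/6$, as expected.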
Anthony Leverrier
$\begingroup$ This algorithm has a connection to bubble sort. In particular, consider the state space of all permutations of size n. The probability that the 1st element is greater than the 2nd is 1/2, so swap with that probability. Assuming the first two elements are sorted, the probability that the 2nd element is greater than the 3rd is 2/3, etc. Therefore, it seems possible to convert a sorting algorithm into a swap-gate circuit, where each following step should take into account the conditional probabilities arising from previous steps. This in a sense suggests an explicit, inefficient method to construct such circuits. $\endgroup$ – mkatkov Mar 10 '11 at 4:53
This is neither an answer nor new information. Here I will try to summarize the discussions which occurred in comments about relations between this problem and sorting networks. In this post, all times are in UTC and a "comment" means a comment on the question unless stated otherwise.
A circuit consisting of probabilistic swap gates (which swap two values randomly) naturally reminds us of a sorting network, which is nothing but a circuit consisting of comparators (which swap two values depending on the order between them). Indeed, circuits for the current problem and sorting networks are related to each other in the following ways:
The solution by Anthony Leverrier with n(n−1)/2 probabilistic swap gates can be understood as the sorting network for the bubble sort with the comparators replaced by probabilistic swap gates with suitable probabilities. See mkatkov's comment at March 10 4:53 on that answer for details. The sorting network for the selection sort can also be used in the same way. (In the comment at March 7 23:04, I described Anthony's circuit as the selection sort, but that was not correct.)
If we just want every permutation with nonzero probability and do not care about the distribution being uniform, then every sorting network does the job when all the comparators are replaced with probability-1/2 swap gates. If we use a sorting network with O(n log n) comparators, the resulting circuit generates every permutation with probability at least 1/2O(n log n) = 1/poly(n!), as observed in my comment at March 7 22:59.
In this problem, it is required that the probabilistic swap gates fire independently. If we remove this restriction, every sorting network can be converted to a circuit which generates the uniform distribution, as I mentioned in the comment at March 7 23:08 and user1749 described in greater details at March 8 14:07.
These facts apparently suggest that this problem is closely related to sorting networks. However, Peter Taylor found an evidence that the relation may not be very close. Namely, not every sorting network can be converted to a desired circuit by replacing the comparators with probabilistic swap gates with suitable probabilities. The five-comparator sorting network for n=4 is a counterexample. See his comments at March 10 11:08 and March 10 14:01.
Tsuyoshi Ito
$\begingroup$ @mkatkov: I have seen three or four deleted answers and I do not remember which was whose, sorry. If you have found a solution with less than n(n−1)/2 gates, I would like to know the whole construction (and it is not to steal mjqxxxx's bounty from you :) ). $\endgroup$ – Tsuyoshi Ito Mar 10 '11 at 16:29
$\begingroup$ @mkatkov: I am still skeptical. As I wrote in the last paragraph of this post, Peter Taylor found that the five-comparator sorting network for n=4 cannot be converted to a solution for the current problem by replacing the comparators with probabilistic swap gates. This implies that your logic cannot work for every sorting network, although it does not rule out the possibility that it somehow works for, say, the odd-even mergesort. $\endgroup$ – Tsuyoshi Ito Mar 10 '11 at 16:55
$\begingroup$ @mkatkov: The reason this type of solution doesn't seem to work (or at least no working example has been shown) is that the swap gates in a pairwise sorting network fire in a highly correlated fashion. In this problem, all gates fire independently, which leads to a very different space of possible circuits. $\endgroup$ – mjqxxxx Mar 10 '11 at 17:13
$\begingroup$ @mkatbov, each step in Anthony's network selects one of m inputs (where m ranges from n down to 2). You can't select one of m inputs with fewer than m-1 gates, so in particular you can't do it with log m gates. Beating $O(n^2)$ is probably going to require some kind of divide-and-conquer approach. $\endgroup$ – Peter Taylor Mar 10 '11 at 20:20
$\begingroup$ @Tsuyoshi, Yuval and I have analysed all possible 5-gate solutions for $n=4$ and eliminated them all, which strengthens the result that not all sorting networks can be converted into uniform permutation networks into a result that there exist problem sizes for which the optimal uniform permutation network requires more gates than the optimal sorting network. $\endgroup$ – Peter Taylor Mar 14 '11 at 15:23
This isn't a full answer by any means, but it includes a result which may be useful and applies it to get some constraints on the case $n=4$ which limit the possible 5-gate solutions to 2500 easily enumerable cases.
First the general result: in any solution which permutes $n$ objects, there must be at least $n-1$ swaps which have probability $\frac{1}{2}$.
Proof: consider the permutation representation of the permutations of order $n$. These are the $n\times n$ matrices $A_\pi$ satisfying $(A_\pi)_{i,j} = [i = \pi(j)]$. Consider a swap between $i$ and $j$ with probability $p$: this has representation $(1-p)I + pA_{(i j)}$ (using cycle notation to represent the permutation). You can think of multiplication by this matrix in terms of representation theory or in Markov terms as applying the permutation $(i j)$ with probability $p$ and leaving things unchanged with probability $1-p$.
The permutation network is therefore a chain of such matrix multiplications. We start with the identity matrix and the final result will be a matrix $U$ where $U_{i,j} = \frac{1}{n}$, so we are going from a matrix of rank $n$ to a matrix of rank $1$ by multiplications - i.e. the rank is decreasing by $n-1$.
Considering the rank of the matrices $(1-p)I + pA_{(i j)}$, then, we see that they're essentially identity matrices apart from a minor $\begin{pmatrix}1-p & p \\ p & 1-p\end{pmatrix}$, so they have full rank unless $p=\frac{1}{2}$, in which case they have rank $n-1$.
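To spell out the singularity claim (a one-line check, with the notation of the answer): the only nontrivial minor is the $2\times 2$ block, and
$$\det\begin{pmatrix}1-p & p \\ p & 1-p\end{pmatrix} = (1-p)^2 - p^2 = 1-2p,$$
which vanishes exactly when $p=\frac{1}{2}$. Equivalently, since $A_{(i j)}^2 = I$, the eigenvalues of $(1-p)I + pA_{(i j)}$ are $1$ and $1-2p$.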
Applying Sylvester's matrix inequality we therefore find that each swap decreases the rank only if $p=\frac{1}{2}$, and when this condition is met it decreases it by no more than 1. Therefore we require at least $n-1$ swaps of probability $\frac{1}{2}$.
Note that this bound can't be tightened because Anthony Leverrier's network achieves it.
Application to the case $n=4$. We already have solutions with 6 gates, so the question is whether solutions with 5 gates are possible. We now know that at least 3 of the gates must be 50/50 swaps, so we have two "free" probabilities, $p$ and $q$. There are 32 possible events (5 independent events each with two outcomes) and $4! = 24$ buckets each of which must contain at least one event. The events divide up as 8 with probability $\frac{pq}{8}$, 8 with probability $\frac{\overline{p}q}{8}$, 8 with probability $\frac{p\overline{q}}{8}$, and 8 with probability $\frac{\overline{p}\overline{q}}{8}$.
32 events into 24 buckets with no empty buckets implies that at least 16 buckets contain precisely one event, so at least two of the four probabilities given above are equal to $\frac{1}{24}$. Taking symmetries into account we have two cases: $pq = \overline{p}q = \frac{1}{3}$ or $pq = \overline{p}\overline{q} = \frac{1}{3}$.
The first case gives $p=\overline{p}=\frac{1}{2}$, $q=\frac{2}{3}$ (correction or $q=\frac{1}{3}$, unwinding the symmetry). The second case gives $pq=1-p-q+pq$, so $pq = p(1-p) = \frac{1}{3}$, which has no real solutions.
Therefore if there is a 5-gate solution we have four gates with probability $\frac{1}{2}$ and one gate with probability either $\frac{1}{3}$ or $\frac{2}{3}$. Wlog the first swap is $0\leftrightarrow 1$, and the second is either $0\leftrightarrow 2$ or $2\leftrightarrow 3$; the other three each have (no more than) five possibilities, because there's no point doing the same swap twice in a row. So we have $2\times 5^3$ swap sequences to consider and 10 ways of assigning the probabilities, leading to 2500 cases which could be enumerated and tested mechanically.
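The mechanical test is easy to reproduce. Here is a self-contained sketch of such an enumeration (my reconstruction, not the search actually run); it uses exact rational arithmetic and covers a superset of the 2500 cases:

```python
from fractions import Fraction
from itertools import product

PAIRS = [(i, j) for i in range(4) for j in range(i + 1, 4)]  # all 6 transpositions

def output_distribution(gates):
    """Exact output distribution of a circuit of probabilistic swap gates,
    as a dict from each reachable permutation (a tuple) to its probability."""
    dist = {tuple(range(4)): Fraction(1)}
    for (i, j), p in gates:
        new = {}
        for state, pr in dist.items():
            s = list(state)
            s[i], s[j] = s[j], s[i]
            s = tuple(s)
            new[s] = new.get(s, Fraction(0)) + pr * p
            new[state] = new.get(state, Fraction(0)) + pr * (1 - p)
        dist = new
    return dist

def is_uniform(dist):
    return len(dist) == 24 and all(pr == Fraction(1, 24) for pr in dist.values())

# Four gates at probability 1/2 and one at 1/3 or 2/3, as forced above.
found = False
for pairs in product(PAIRS, repeat=5):
    # Two consecutive swaps on the same pair compose to a single swap, and
    # no 4-gate circuit can work (only 16 < 24 computational paths), so
    # sequences with an immediate repeat can be skipped.
    if any(pairs[k] == pairs[k + 1] for k in range(4)):
        continue
    for pos in range(5):
        for q in (Fraction(1, 3), Fraction(2, 3)):
            probs = [Fraction(1, 2)] * 5
            probs[pos] = q
            if is_uniform(output_distribution(list(zip(pairs, probs)))):
                found = True
print("5-gate solution found:", found)  # False, matching the update below
```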
Update: Yuval Filmus and I have both enumerated and tested the cases and found no solutions, so the optimal solution for $n=4$ involves 6 gates, and examples of 6-gate solutions are found in other answers.
Peter Taylor
$\begingroup$ My case enumeration failed to produce any shorter example. $\endgroup$ – Yuval Filmus Mar 13 '11 at 19:57
$\begingroup$ ... even after the correction. $\endgroup$ – Yuval Filmus Mar 13 '11 at 23:29
$\begingroup$ Excellent, that's a very nice observation. $\endgroup$ – Joe Fitzsimons Mar 14 '11 at 1:36
$\begingroup$ @mjqxxxx, I calculate that in searching for a 9-gate solution to $n=5$ you would have to consider approximately 104 million cases (although this could be reduced a bit with cleverness), but for each case you would be computing 120 equations in 5 variables with cross-terms and then checking for a solution. It's probably doable with a standard desktop computer, but it requires a bit more effort because you can't so easily constrain the possible values of the probabilities. $\endgroup$ – Peter Taylor Mar 14 '11 at 13:21
$\begingroup$ I'm awarding the bounty here, although the answer provides neither an asymptotic improvement over the $\Omega(n \log n)$ lower bound nor any improvement on the $n(n-1)/2$ upper bound, because at least it proves that $n(n-1)/2$ is optimal in a single nontrivial case. $\endgroup$ – mjqxxxx Mar 16 '11 at 20:15
The following seems to be new and relevant information:
The paper [CKKL99] shows how to get 1/n close to a uniform permutation of n elements using a switching network of depth O(log n), and hence a total of O(n log n) comparators.
This construction is not explicit, but it can be made explicit if you increase the depth to polylog(n). See the pointers in the paper [CKKL01], which also contains more information.
A previous comment already pointed out a result saying that O(n log n) switches suffice, but the difference is that in switching networks the elements being compared are fixed.
[CKKL99] Artur Czumaj, Przemyslawa Kanarek, Miroslaw Kutylowski, and Krzysztof Lorys. Delayed path coupling and generating random permutations via distributed stochastic processes. In Symposium on Discrete Algorithms (SODA), pages 271-280, 1999.
[CKKL01] Artur Czumaj, Przemyslawa Kanarek, Miroslaw Kutylowski, and Krzysztof Lorys. Switching networks for generating random permutations, 2001.
$\begingroup$ Thanks, that is certainly useful to know. I am still interested to know about the gate number for generating the exact distribution, however. $\endgroup$ – Joe Fitzsimons Mar 11 '11 at 22:17
Here's a somewhat interesting solution for $n=4$. The same idea also works for $n=6$.
Start with the switches $(0,1),(2,3)$ with probability $1/2$. Reducing $0,1$ to $X$ and $2,3$ to $Y$, we are in the situation $XXYY$. Apply the switches $(0,3),(1,2)$ with probability $p$. The result is $$ \begin{align*} XXYY &\text{ w.p. } (1-p)^2, \\ YYXX &\text{ w.p. } p^2, \\ XYXY &\text{ w.p. } p(1-p), \\ YXYX &\text{ w.p. } p(1-p) \end{align*} $$ Our next move is going to be $(0,2),(1,3)$ with probability $1/2$. Thus we really only care if the result of the previous stage is of the form $XXYY/YYXX$ (case A) or of the form $XYXY/YXYX$ (case B). In case A these switches will result in a uniform probability over $XXYY/XYYX/YXXY/YYXX$. In case B they will be ineffective. Therefore $p$ must satisfy $$ p(1-p) = 1/6 \Longrightarrow p = \frac{3 \pm \sqrt{3}}{6}. $$ Given that, the result is uniform.
A similar idea works for $n=6$ - you first randomly sort each half, and then "merge" them. However, even for $n=8$ I can't see how to merge the halves properly.
The interesting point about this solution is the weird probability $p$.
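The six-gate circuit is easy to sanity-check numerically. A small sketch (my code; the gate order is taken from the construction above), using floating point since $p=(3-\sqrt 3)/6$ is irrational:

```python
import math
from itertools import product

p = (3 - math.sqrt(3)) / 6               # solves p(1 - p) = 1/6
gates = [((0, 1), 0.5), ((2, 3), 0.5),   # shuffle within the two halves
         ((0, 3), p), ((1, 2), p),       # the two probability-p swaps
         ((0, 2), 0.5), ((1, 3), 0.5)]   # final 50/50 swaps

# Enumerate all 2^6 firing patterns and accumulate the probability
# (up to float rounding) of each output permutation.
dist = {}
for fired in product([False, True], repeat=len(gates)):
    state, prob = list(range(4)), 1.0
    for ((i, j), q), f in zip(gates, fired):
        prob *= q if f else 1 - q
        if f:
            state[i], state[j] = state[j], state[i]
    key = tuple(state)
    dist[key] = dist.get(key, 0.0) + prob

assert len(dist) == 24 and all(abs(pr - 1 / 24) < 1e-12 for pr in dist.values())
print("uniform over all 24 permutations")
```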
As a side note, the set of probabilities $p$ which can conceivably help us is given by $1/(1-\lambda)$, where $\lambda \leq 0$ goes over all eigenvalues of all representations of $S_n$ at all transpositions.
Yuval Filmus
$\begingroup$ The weird values for $p$ are indeed encouraging, as I think there is a reasonably simple proof that if we restrict the probabilities to $1/k$ for integer $k$ then the best you can do is $O(n^2)$. $\endgroup$ – Joe Fitzsimons Mar 8 '11 at 23:11
$\begingroup$ A little different way for 2n elements, which is still weird in a similar sense, is to shuffle the first n elements, shuffle the last n elements, swap (i,i+n) with probability p_i for i=1,…,n, shuffle the first n elements, and shuffle the last n elements. The probabilities p_i must be chosen so that the probability that exactly k out of the n swap gates fire is equal to $\binom{n}{k}^2/\binom{2n}{n}$ and such probabilities p_i are given by (1+x_i)/2 where x_1,…,x_n are the roots of the Legendre polynomial P_n. (more) $\endgroup$ – Tsuyoshi Ito Mar 8 '11 at 23:18
$\begingroup$ (cont'd) A disappointing thing about the variation which I described is that it requires n(n−1)/2 probabilistic swaps when n is a power of two, that is, exactly the same number of gates as the bubble-sort solution by Anthony Leverrier. $\endgroup$ – Tsuyoshi Ito Mar 8 '11 at 23:20
$\begingroup$ @Tsuyoshi, your construction is clearly correct, but I wonder whether it's doing more than necessary. I don't have time to work through the analysis at the moment, but if you do then you might find it worth considering whether there exist $p_0, p_1$ such that $0\leftrightarrow 1, p=\frac{1}{2}$; $2\leftrightarrow 3, p=\frac{1}{2}$; $0\leftrightarrow 2, p=p_0$; $1\leftrightarrow 3, p=p_1$; then apply a suitable permutation of the Legendre roots (and fill in the other quarters) can work. $\endgroup$ – Peter Taylor Mar 15 '11 at 23:00
Consider the problem of randomly shuffling the string $XX..XY..YY$, where each block has length $n$, with a circuit consisting of probabilistic pairwise swaps. That is, all $(2n)!/(n!)^2$ strings with $n$ $X$s and $n$ $Y$s must be equally probable outputs of the circuit, given the specified input. Let $B_{2n}$ be an optimal circuit for this problem, and let $C_{2n}$ be an optimal circuit for the original problem (randomly shuffling $2n$ elements). Applying a random permutation is sufficient to randomly interleave the $X$s and $Y$s, so $\lvert{B_{2n}}\rvert \le \lvert{C_{2n}}\rvert$. On the other hand, we can shuffle $2n$ elements by shuffling the first $n$ elements, shuffling the last $n$ elements, and finally applying circuit $B_{2n}$. This implies that $\lvert{C_{2n}}\rvert \le 2\lvert{C_{n}}\rvert + \lvert{B_{2n}}\rvert$. Combining these two bounds, we can derive the following result:
$\lvert{B_{2n}}\rvert$ and $\lvert{C_{2n}}\rvert$ are both $o(n^2)$, or neither is.
We see that the two problems are equally difficult, at least in this sense. This result is somewhat surprising, because one might expect the $XY$-shuffle problem to be easier. In particular, the entropic argument shows that $\lvert{B_{2n}}\rvert$ is $\Omega(n)$, but gives the stronger result that $\lvert{C_{2n}}\rvert$ is $\Omega(n \log n)$.
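To see why the two bounds combine this way (a sketch of the omitted step, under the illustrative assumption $\lvert B_m \rvert \le c\,m^{\alpha}$ for some $1<\alpha<2$): unrolling $\lvert{C_{2n}}\rvert \le 2\lvert{C_{n}}\rvert + \lvert{B_{2n}}\rvert$ gives
$$\lvert{C_n}\rvert \;\le\; \sum_{k\ge 0} 2^k\,\bigl\lvert B_{n/2^k}\bigr\rvert \;\le\; c\,n^{\alpha}\sum_{k\ge 0}2^{k(1-\alpha)} \;=\; O(n^{\alpha}),$$
so a circuit family solving the $XY$-shuffle problem with $o(n^2)$ gates yields one for the full problem with $o(n^2)$ gates; the other direction is the containment $\lvert{B_{2n}}\rvert \le \lvert{C_{2n}}\rvert$.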
mjqxxxx
Diaconis and Shahshahani 1981, "Generating a Random Permutation with Random Transpositions" shows that 1/2 n log n random transpositions (note: there is no "O" here) result in a permutation close (in total variation distance) to uniform. I'm not sure if precisely what is allowed in your application lets you use this result, but it is quite fast, and tight in that it is an example of a cut-off phenomenon. See Random Walks on Finite Groups by Saloff-Coste for a survey of similar results.
Jason Morton
$\begingroup$ And presumably two nearly random permutations can be composed to produce a permutation that is even more nearly random. $\endgroup$ – mjqxxxx Mar 9 '11 at 21:36
$\begingroup$ ... However, it should be noted that this is not really the same problem (even allowing for an approximate rather than exact solution), because the authors consider transpositions of randomly chosen pairs of elements, not probabilistic transpositions of specified pairs of elements. $\endgroup$ – mjqxxxx Mar 9 '11 at 22:02
This is really a comment but too long to post as a comment. I suspect that the representation theory of the symmetric group might be useful to prove a better lower bound. Although I know almost nothing about representation theory and I may be off the mark, let me explain why it might be related to the current problem.
Note that the behavior of a circuit consisting of probabilistic swap gates can be fully specified as a probability distribution $p$ over $S_n$, the group of permutations on $n$ elements. A permutation $g\in S_n$ can be thought of as the event that the $i$th output is the $g(i)$th input for all $i\in\{1,\dots,n\}$. Now represent a probability distribution $p$ as a formal sum $\sum_{g\in S_n}p(g)g$. For example, the probabilistic swap between wires $i$ and $j$ with probability $p$ is represented as $(1-p)e+p\tau_{ij}$, where $e\in S_n$ is the identity element and $\tau_{ij}\in S_n$ is the transposition between $i$ and $j$.
An interesting fact about this formal sum is that the behavior of the concatenation of two independent circuits can be formally described as a product of these formal sums. Namely, if the behaviors of circuits $C_1$ and $C_2$ are represented as formal sums $a_1=\sum_{g\in S_n}p_1(g)g$ and $a_2=\sum_{g\in S_n}p_2(g)g$, respectively, then the behavior of the circuit $C_1$ followed by $C_2$ is represented as $\sum_{g_1,g_2\in S_n}p_1(g_1)p_2(g_2)g_1g_2 = a_1a_2$.
Therefore, a desired circuit with $m$ probabilistic swaps exactly corresponds to a way of writing the sum $(1/n!)\sum_{g\in S_n}g$ as a product of $m$ sums, each of which is of the form $(1-p)e+p\tau_{ij}$. We would like to know the minimum number $m$ of factors.
The formal sums $\sum_{g\in S_n}f(g)g$, where $f$ is a function from $S_n$ to $\mathbb{C}$, equipped with naturally defined addition and multiplication, form the ring called the group algebra $\mathbb{C}[S_n]$. The group algebra is closely related to the representation theory of groups, which is a deep theory as we all know and fear :). This makes me suspect that something in representation theory might be applicable to the current problem.
Or maybe this is just far-fetched.
$\begingroup$ Here's what this reduces to. There are a bunch of representations of the symmetric group, that can be calculated explicitly for transpositions, with some work (usually they're only calculated explicitly for transpositions $(k,k+1)$). The initial value of each representation is the appropriate identity matrix. Applying a probabilistic swap multiplies each representation with $(1-p)I + pA_{ij}$, where $A_{ij}$ is the value of the representation at the performed swap $(ij)$. (cont'd) $\endgroup$ – Yuval Filmus Mar 11 '11 at 1:55
$\begingroup$ In order for the output to be uniform, we need all the representations other than the identity representation to be zero. So the probabilities $p$ should be chosen so that at least some of the matrices $(1-p)I + pA_{ij}$ are singular. The matrices $A_{ij}$ for each representation have different eigenvectors, so it's not clear what condition would enforce a particular representation to be zero. (cont'd) $\endgroup$ – Yuval Filmus Mar 11 '11 at 1:57
$\begingroup$ If, however, we could prove that every transposition reduces the average rank of a representation by at most $1/n^2$, say, then we would get an $n^2$ lower bound. Such a bound can be proven if we know the eigenvectors corresponding to each representation and each transposition. This information can be worked out in principle, but there's no assurance that this approach would produce anything non-trivial. $\endgroup$ – Yuval Filmus Mar 11 '11 at 1:59
$\begingroup$ (cont'd) and this linear transformation is exactly the matrix arising in the representation of S_n by n×n permutation matrices. Although n−1 is trivial as a lower bound on the number of gates (the entropy argument already gives a better lower bound), my hope is that it might be possible to generalize your argument to other representations to yield a better lower bound on the total number of gates. $\endgroup$ – Tsuyoshi Ito Mar 11 '11 at 16:53
$\begingroup$ @Yuval, @Peter: I noticed that for every representation, $(1-p)I + pA_{ij}$ is nonsingular unless $p=1/2$ (because $A_{ij}^2 = I$ implies that the eigenvalues of $A_{ij}$ are $\pm 1$). Therefore, counting the rank is useful only for lower-bounding the number of probability-1/2 gates, which was already done optimally by Peter. In other words, if representation theory is useful in the way I suggested in this post, we need something other than counting the rank of matrices! I am not sure whether that is realistic. $\endgroup$ – Tsuyoshi Ito Mar 11 '11 at 22:51
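A quick editorial check of this observation for the defining $n\times n$ permutation-matrix representation (the same conclusion holds for any representation, since $A_{ij}^2 = I$ forces eigenvalues $\pm 1$):

```python
# Editorial check: (1 - p) I + p A_ij is singular only at p = 1/2,
# illustrated on the defining n x n permutation-matrix representation.
import numpy as np

def transposition_matrix(n, i, j):
    A = np.eye(n)
    A[[i, j]] = A[[j, i]]  # swap rows i and j of the identity matrix
    return A

n = 5
A = transposition_matrix(n, 1, 3)
for p in (0.25, 0.5, 0.75):
    M = (1 - p) * np.eye(n) + p * A
    # A has eigenvalues +1 (multiplicity n-1) and -1 (multiplicity 1),
    # so det M = 1 - 2p, vanishing exactly at p = 1/2.
    print(p, round(float(np.linalg.det(M)), 6))
```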
Anthony's $O(n^2)$ algorithm can be run in parallel by starting the next iteration of the procedure after the first two probabilistic swaps, resulting in $O(n)$ runtime.
$\begingroup$ I think the relevant complexity measure for this question is the number of gates and not the runtime. $\endgroup$ – Anthony Leverrier Mar 8 '11 at 22:04
$\begingroup$ @Anthony is correct that what I am interested in is simply the minimum number of gates necessary. $\endgroup$ – Joe Fitzsimons Mar 9 '11 at 1:40
If I understand correctly, if you want your circuit to be able to generate all permutations, you need at least $\lceil \log_2(n!) \rceil$ probabilistic gates, though I'm not sure how the minimal circuit can be constructed.
I think that if you take the Mergesort algorithm and replace all comparisons with random choices with appropriate probabilities you'll get the circuit you are looking for.
Antonio Valerio Miceli-Barone
$\begingroup$ I'm not entirely sure how you would translate this into the probabilistic swap gate model I gave above. I don't see how a probabilistic swap replaces the comparison and still achieves a random distribution. Hence, I'm also not sure why this would be optimal. $\endgroup$ – Joe Fitzsimons Mar 7 '11 at 14:19
$\begingroup$ And yes, $\lceil \log_2(n!) \rceil$ is the minimum, but this is only $O(n\log(n))$. $\endgroup$ – Joe Fitzsimons Mar 7 '11 at 14:36
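(Editorial note: Stirling's approximation makes the growth of this bound explicit,
$$\log_2(n!) = \sum_{k=2}^{n} \log_2 k = n\log_2 n - \frac{n}{\ln 2} + O(\log n) = \Theta(n\log n),$$
so the entropy argument alone cannot give more than an $n\log n$ lower bound on the number of gates.)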
$\begingroup$ Assume $n=2^k$ and proceed by induction on $k$. You have two random permutations of length $2^{k-1}$. If you merge these randomly (i.e. taking the next element from a randomly chosen subpermutation) then the merged results should certainly be random. The probability of position $i$ having an element from the "left" subpermutation is clearly 1/2 by symmetry. And conditioned on it having an element from the left subpermutation, it must have a uniformly random one from it. In this way you can see that the resulting permutation is indeed random. $\endgroup$ – Andrew D. King Mar 7 '11 at 15:06
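An editorial sketch of this merge argument (as the surrounding comments note, it is not implementable with the oblivious swap gates of the question). For exact uniformity, the side to take from is chosen with probability proportional to its remaining size, which makes every interleaving of the two shuffled halves equally likely:

```python
# Editorial sketch: a uniform merge shuffle (not realizable with the
# oblivious probabilistic swap gates of the question).
import random
from collections import Counter

def merge_shuffle(xs):
    if len(xs) <= 1:
        return list(xs)
    left = merge_shuffle(xs[:len(xs) // 2])
    right = merge_shuffle(xs[len(xs) // 2:])
    out = []
    while left and right:
        # Take from a side with probability proportional to its size:
        # this makes all C(a+b, a) interleavings equally likely.
        if random.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out + left + right

counts = Counter(tuple(merge_shuffle([0, 1, 2, 3])) for _ in range(120000))
print(len(counts), min(counts.values()), max(counts.values()))
# Expect all 24 permutations, each with a count near 5000.
```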
$\begingroup$ That was also my line of thinking when I proposed mergesort, however, on a second thought, it seems to me that it may not be possible to implement the merge operation using only the required type of gates, since they don't produce an output to tell whether they performed the permutation and they have no control input to condition them. $\endgroup$ – Antonio Valerio Miceli-Barone Mar 7 '11 at 15:18
$\begingroup$ @Andrew: I don't see how to "merge these randomly" using the gates outlined in the question. $\endgroup$ – Joe Fitzsimons Mar 7 '11 at 15:49
The following answer is wrong (see @Joe Fitzsimons's comment), but it might be useful as a starting point.
I have a sketch of a proposal in $O(n\log n)$. I have hand-checked that it works for $n=4$ (!), but I have no proof yet that the result is uniform beyond $n=4$.
Suppose you have a circuit $C_n$ which generates a uniform random permutation on $n$ bits. Let $S_{i,j}^{1/2}$ be the probabilistic swap gate which swaps bits $i$ and $j$ with probability $1/2$ and does nothing with probability $1/2$. Construct the following circuit $C_{2n}$ acting on $2n$ bits:
$\forall 1\le k\le n$, apply the gate $S_{k,k+n}^{1/2}$;
Apply $C_n$ on the first $n$ bits;
Apply $C_n$ on the last $n$ bits;
$\forall 1\le k\le n$, apply the gate $S_{k,k+n}^{1/2}$.
Step 1 is needed so that bits $1$ and $n+1$ can land in the same half of the permutation, and step 4 is needed by symmetry: if $C_{2n}$ is a solution, then so is $C_{2n}^{-1}$, obtained by applying the gates in reverse order.
The size of this family of circuits obeys the following recursion relation: $$ |C_{2n}| = 2|C_n|+2n $$ with, obviously, $|C_1|=0$. One then easily sees that $|C_n|=n\log_2 n$.
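(Editorial check: writing $n=2^k$ and $f(k) = |C_{2^k}|$, the recursion reads $f(k) = 2f(k-1) + 2^k$ with $f(0)=0$, hence
$$\frac{f(k)}{2^k} = \frac{f(k-1)}{2^{k-1}} + 1 = k, \qquad\text{i.e.}\qquad |C_n| = n\log_2 n,$$
in agreement with $|C_2|=2$, $|C_4|=8$, $|C_8|=24$.)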
There remains the obvious question: do these circuits produce uniform permutations? No; see the first comment below.
Frédéric Grosshans
$\begingroup$ I don't believe that these do perform uniform permutations. In fact, I think it is impossible to do exactly with such gates if you fix the probability to be 1/2. The reason for this is simple: imagine a circuit which uses $m$ such gates. Then there are $2^{m}$ equiprobable computational paths, and so any permutation must occur with probability $k 2^{-m}$ for some integer $k$. However, for a uniform distribution we require that $k 2^{-m} = \frac{1}{n!}$. Clearly this can't be satisfied for an integer value of $k$ for $n\geq 3$. $\endgroup$ – Joe Fitzsimons Mar 8 '11 at 16:35
$\begingroup$ Indeed. I forgot to check the uniformity for $n=4$... $\endgroup$ – Frédéric Grosshans Mar 8 '11 at 17:26
Comparing Methods for segmentation of Microcalcification Clusters in Digitized Mammograms [PDF]
Hajar Moradmand, Saeed Setayeshi, Hossein Khazaei Targhi
Computer Science, 2012
Abstract: The appearance of microcalcifications in mammograms is one of the early signs of breast cancer. So, early detection of microcalcification clusters (MCCs) in mammograms can be helpful for cancer diagnosis and better treatment of breast cancer. In this paper a computer method has been proposed to support radiologists in detecting MCCs in digital mammography. First, in order to facilitate and improve the detection step, mammogram images have been enhanced with wavelet transformation and morphology operations. Then, for segmentation of suspicious MCCs, two methods have been investigated: adaptive thresholding and watershed segmentation. Finally, the MCC areas detected by the different algorithms are compared to find out which segmentation method is more appropriate for extracting MCCs in mammograms.
Comparing Methods for segmentation of Microcalcification Clusters in Digitized Mammograms
International Journal of Computer Science Issues, 2011
Abstract: The appearance of microcalcifications in mammograms is one of the early signs of breast cancer. So, early detection of microcalcification clusters (MCCs) in mammograms can be helpful for cancer diagnosis and better treatment of breast cancer. In this paper a computer system devised to support a radiologist in detecting MCCs in digital mammography has been proposed. First, to facilitate and improve the detection step, the mammogram images have been enhanced with wavelet transformation and morphology operations. Then, for segmentation of suspicious MCCs, two methods have been investigated: adaptive thresholding and watershed segmentation. The purpose of this paper is to find out which segmentation method is more appropriate for extracting suspicious areas that contain MCCs in mammograms. Finally, the MCC detection areas of the different algorithms are compared.
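An editorial sketch (not the authors' code) contrasting the two segmentation routes compared in these papers, using scikit-image; `img` is a placeholder for a mammogram patch, and a white top-hat stands in for the wavelet/morphology enhancement described above:

```python
# Editorial sketch: adaptive thresholding vs. watershed for bright-spot
# (candidate MCC) segmentation. Assumes scikit-image >= 0.19 and scipy.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_local
from skimage.morphology import white_tophat, disk
from skimage.segmentation import watershed

img = np.random.rand(256, 256)  # placeholder for a real mammogram patch

# Enhancement: a white top-hat keeps small bright details (candidate MCCs).
enhanced = white_tophat(img, footprint=disk(5))

# Method 1: adaptive threshold against a local Gaussian-weighted mean.
mask_adaptive = enhanced > threshold_local(enhanced, block_size=35)

# Method 2: marker-based watershed on the inverted enhanced image.
markers, _ = ndi.label(enhanced > enhanced.mean() + 2 * enhanced.std())
labels = watershed(-enhanced, markers, mask=enhanced > enhanced.mean())

print(mask_adaptive.sum(), labels.max())  # pixel count vs. region count
```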
Deep Structured learning for mass segmentation from Mammograms [PDF]
Neeraj Dhungel, Gustavo Carneiro, Andrew P. Bradley
Abstract: In this paper, we present a novel method for the segmentation of breast masses from mammograms exploring structured and deep learning. Specifically, using a structured support vector machine (SSVM), we formulate a model that combines different types of potential functions, including one that classifies image regions using deep learning. Our main goal with this work is to show the accuracy and efficiency improvements that these relatively new techniques can provide for the segmentation of breast masses from mammograms. We also propose an easily reproducible quantitative analysis to assess the performance of breast mass segmentation methodologies based on widely accepted accuracy and running time measurements on public datasets, which will facilitate further comparisons for this segmentation problem. In particular, we use two publicly available datasets (DDSM-BCRP and INbreast) and propose the computation of the running time taken for the methodology to produce a mass segmentation given an input image and the use of the Dice index to quantitatively measure the segmentation accuracy. For both databases, we show that our proposed methodology produces competitive results in terms of accuracy and running time.
Computer aided system for segmentation and visualization of microcalcifications in digital mammograms.
Branimir Reljin, Zorica Milošević, Tomislav Stojić, Irini Reljin
Folia Histochemica et Cytobiologica, 2010, DOI: 10.5603/4320
Abstract: Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement, followed by significant suppression of background tissue, irrespective of its radiological density. By an iterative procedure, this method strongly emphasizes only small bright details, the possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram image, from which a radiologist has the freedom to change the level of segmentation. An appropriate user-friendly computer-aided visualization (CAV) system embedding the two methods has been realized. The interactive approach enables the physician to control the level and the quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard, and on images from clinical practice, using digitized films and digital images from a full-field digital mammograph.
Segmentation of the Breast Region in Digital Mammograms and Detection of Masses [PDF]
Armen Sahakyan, Hakop Sarukhanyan
International Journal of Advanced Computer Sciences and Applications, 2012
Abstract: Mammography is the most effective procedure for early diagnosis of breast cancer. Finding an accurate and efficient breast region segmentation technique still remains a challenging problem in digital mammography. In this paper we explore an automated technique for mammogram segmentation. The proposed algorithm uses a morphological preprocessing algorithm in order to remove digitization noise and separate the background region from the breast profile region for further edge detection and region segmentation.
Blocks of homogeneous effect algebras [PDF]
Gejza Jenča
Mathematics, 2015, DOI: 10.1017/S0004972700019705
Abstract: Effect algebras, introduced by Foulis and Bennett in 1994, are partial algebras which generalize some well known classes of algebraic structures (for example orthomodular lattices, MV algebras, orthoalgebras etc.). In the present paper, we introduce a new class of effect algebras, called homogeneous effect algebras. This class includes orthoalgebras, lattice ordered effect algebras and effect algebras satisfying the Riesz decomposition property. We prove that every homogeneous effect algebra is a union of its blocks, which we define as maximal sub-effect algebras satisfying the Riesz decomposition property. This generalizes a recent result by Riečanová, in which lattice ordered effect algebras were considered. Moreover, the notion of a block of a homogeneous effect algebra is a generalization of the notion of a block of an orthoalgebra. We prove that the set of all sharp elements in a homogeneous effect algebra $E$ forms an orthoalgebra $E_S$. Every block of $E_S$ is the center of a block of $E$. The set of all sharp elements in the compatibility center of $E$ coincides with the center of $E$. Finally, we present some examples of homogeneous effect algebras and we prove that for a Hilbert space $\mathbb H$ with $dim(\mathbb H)>1$, the standard effect algebra $\mathcal E(\mathbb H)$ of all effects in $\mathbb H$ is not homogeneous.
Filtering for More Accurate Dense Tissue Segmentation in Digitized Mammograms [PDF]
Mario Muštra, Mislav Grgić
Abstract: Breast tissue segmentation into dense and fat tissue is important for determining the breast density in mammograms. Knowing the breast density is important both in diagnostic and computer-aided detection applications. There are many different ways to express the density of a breast, and good quality segmentation should make accurate classification possible no matter which classification rule is being used. Knowing the right breast density, and tracking changes in the breast density, could give a hint of a process which has started to happen within a patient. Mammograms generally suffer from the problem of overlapping tissue, which can result in inaccurate detection of tissue types. Fibroglandular tissue presents rather high attenuation of X-rays and appears brighter in the resulting image, but overlapping fibrous tissue and blood vessels could easily be mistaken for fibroglandular tissue by automatic segmentation algorithms. Small blood vessels and microcalcifications are also shown as bright objects with intensities similar to dense tissue, but they do have some properties which make it possible to suppress them from the final results. In this paper we try to divide dense and fat tissue by suppressing the scattered structures which do not represent glandular or dense tissue, in order to divide mammograms more accurately into the two major tissue types. For suppressing blood vessels and microcalcifications we have used Gabor filters of different size and orientation, and a combination of morphological operations on the filtered image with enhanced contrast.
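An editorial sketch of this idea (assumed parameters, not the paper's implementation): a small Gabor filter bank highlights strongly oriented, vessel-like structures so they can be damped before separating dense from fatty tissue:

```python
# Editorial sketch: a Gabor filter bank to flag elongated, oriented
# structures (vessels) for suppression; parameters are assumptions.
import numpy as np
from skimage.filters import gabor

img = np.random.rand(256, 256)  # placeholder for a mammogram region

responses = []
for theta in np.linspace(0, np.pi, 6, endpoint=False):
    for frequency in (0.1, 0.2):
        real, _ = gabor(img, frequency=frequency, theta=theta)
        responses.append(np.abs(real))

# Vessel-like pixels respond strongly at some orientation; damp them
# before thresholding the remaining image into dense and fat tissue.
vesselness = np.max(np.stack(responses), axis=0)
suppressed = img - 0.5 * vesselness / (vesselness.max() + 1e-9)
print(float(suppressed.min()), float(suppressed.max()))
```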
Segmentation of Breast Masses in Digital Mammograms Using Adaptive Median Filtering and Texture Analysis [PDF]
Dr. Naseer M. Basheer, Mr. Mustafa H. Mohammed
International Journal of Recent Technology and Engineering, 2013
Abstract: Breast cancer continues to be one of the major causes of death among women. Early detection is a key factor in the success of the treatment process. X-ray mammography is one of the most common procedures for diagnosing breast cancer due to its simplicity, portability and cost effectiveness. Mass detection using Computer Aided Diagnosis (CAD) schemes has been an active field of research in the past few years, and some of these studies have shown a promising future. These CAD systems serve as a second decision tool for radiologists for discovering masses in mammograms. In this paper, a breast mass segmentation method is presented based on adaptive median filtering and texture analysis. The algorithm is implemented using the MATLAB environment. The program accepts a digital mammographic image (images taken from the Mammographic Image Analysis Society (MIAS) database). Adaptive median filtering is applied for contouring the image, then the best contour is chosen based on the texture properties of the resulting Region-of-Interest (ROI). The proposed CAD system produces 92.307% mass sensitivity at 2.75 False Positives per Image (FPI), which is considered a proper result in this field of research.
Automatic Image Segmentation Base on Human Color Perceptions
Yu Li-jie, Li De-sheng, Zhou Guan-ling
International Journal of Image, Graphics and Signal Processing, 2009
Abstract: In this paper we propose a color image segmentation algorithm based on a perceptual color vision model. First, the original image is divided into non-overlapping image blocks; then, the mean and variance of every image block are calculated in the CIE L*a*b* color space, and the image blocks are divided into homogeneous color blocks and texture blocks according to their variance. The initial seed regions are automatically selected by calculating the color difference of the homogeneous color blocks in the CIE L*a*b* color space together with spatial information. The color contrast gradient of the texture blocks is calculated and the edge information is stored for region growing. A fuzzy region-growing algorithm and color-edge detection are combined to obtain the final segmentation map. The experimental segmentation results hold favorable consistency in terms of human perception, and confirm the effectiveness of the algorithm.
Simulated Solar Microwave Radiation Blocks the Formation of Biofilms [PDF]
Yulia S. Shishkova, Stanislav N. Darovskih, Nadezhda V. Vdovina, Nadezhda L. Pozdnyakova, I. A. Komarova, Elena V. Shishkova, Evgenij V. Vodyanitskiy
Natural Science (NS), 2015, DOI: 10.4236/ns.2015.73014
Abstract: The article presents the results of an experimental study devoted to determining the blocking influence of solar microwave radiation on the process of biofilm formation in Gram-positive and Gram-negative microorganisms. A microwave generator that allows simulating microwave "splashes" of the Sun in the frequency range 4.0-4.3 GHz with controlled radiation intensity (from 50 μW/cm2 to 500 μW/cm2) was used for this research. It is found that the simulated solar radiation of the microwave range blocks the formation of the extracellular matrix by the opportunistic microorganisms. The results of this study support the hypothesis of the evolutionary nature of the leading role of the microwave radiation of the Sun in the life processes of organisms. The exposure technology used in the experiment opens up real prospects for reducing the persistent potential of microorganisms and improving the efficiency of treatment of bacterial infections.
Dr Alessia Gualandris
Head of Astrophysics Research Group, Senior Lecturer
[email protected]
https://www.surrey.ac.uk/astrophysics-research-group
Alessia Gualandris joined the Department as a lecturer in 2013. She started her research at the University of Milano-Bicocca in Italy, where she obtained her MPhys degree for her theoretical/numerical studies of dynamical encounters in the dense cores of globular clusters. She received her PhD in 2006 from the University of Amsterdam, the Netherlands, for her interdisciplinary work between astronomy and computer science, and in particular for the development of software for the simulation of dense stellar systems and her study of the ejection of high-velocity stars. In those years, she became one of the few experts in adopting special-purpose hardware for the efficient simulation of globular clusters and galactic nuclei. During her postdoctoral studies at the Rochester Institute of Technology (USA), at the Max Planck Institute for Astrophysics in Garching, Germany, and at the University of Leicester, she used state-of-the-art numerical simulations to study the dynamics of the Milky Way centre and other galactic nuclei hosting supermassive black holes. Through international collaborations and software development, she continues to push the limits of numerical methods to study the formation and dynamical evolution of the densest astrophysical systems.
Dynamics of dense stellar systems: star clusters, galactic nuclei.
Evolution of supermassive black hole binaries and gravitational wave sources.
Computational astrophysics.
Courses taught: Research Techniques in Astronomy
M Colpi, A Possenti, A Gualandris (2002) The case of PSR J1911-5958A in the outskirts of NGC 6752: signature of a black hole binary in the cluster core?
We have investigated different scenarios for the origin of the binary millisecond pulsar PSR J1911-5958A in NGC 6752, the most distant pulsar discovered from the core of a globular cluster to date. The hypothesis that it results from a truly primordial binary born in the halo calls for accretion-induced collapse and negligible recoil speed at the moment of neutron star formation. Scattering or exchange interactions off cluster stars are not consistent with both the observed orbital period and its offset position. We show that a binary system of two black holes with (unequal) masses in the range of 3-100 solar masses can live in NGC 6752 until present time and can have propelled PSR J1911-5958A into an eccentric peripheral orbit during the last ~1 Gyr.
T Kouwenhoven, A Brown, H Zinnecker, L Kaper, SP Zwart, A Gualandris (2003) A search for close companions in Sco OB2
Using adaptive optics we study the binary population in the nearby OB association Scorpius OB2. We present the first results of our near-infrared adaptive optics survey among 199 (mainly) A- and B-type stars in Sco OB2. In total 151 components other than the target stars are found, out of which 77 are probably background stars. Our findings are compared with data collected from the literature. Out of the remaining 74 candidate physical companions 42 are new, demonstrating that many A/B stars have faint, close companions.
A Gualandris, D Merritt (2007) Dynamics around supermassive black holes
The dynamics of galactic nuclei reflects the presence of supermassive black holes (SBHs) in many ways. Single SBHs act as sinks, destroying a mass in stars equal to their own mass in roughly one relaxation time and forcing nuclei to expand. Formation of binary SBHs displaces a mass in stars roughly equal to the binary mass, creating low-density cores and ejecting hyper-velocity stars. Gravitational radiation recoil can eject coalescing binary SBHs from nuclei, resulting in offset SBHs and lopsided cores. We review recent work on these mechanisms and discuss the observable consequences.
A Gualandris, S Harfst, D Merritt, S Mikkola (2008) Evolution of stellar orbits in the Galactic centre, In: Astronomische Nachrichten 329(9-10), pp. 1008-1011
DOI: 10.1002/asna.200811048
We describe a novel N-body code designed for simulations of the central regions of galaxies containing massive black holes. The code incorporates Mikkola's "algorithmic" chain regularization scheme including post-Newtonian terms up to PN2.5 order. Stars moving beyond the chain are advanced using a fourth-order integrator with forces computed on a GRAPE board. Performance tests confirm that the hybrid code achieves better energy conservation, in less elapsed time, than the standard scheme and that it reproduces the orbits of stars tightly bound to the black hole with high precision. The hybrid code is applied to two sample problems: the effect of finite-N gravitational fluctuations on the orbits of the S-stars; and inspiral of an intermediate-mass black hole into the galactic centre.
K Rycerz, A Tirado-Ramos, A Gualandris, SP Zwart, M Bubak, PMA Sloot (2007) Interactive N-Body Simulations On the Grid: HLA Versus MPI, In: IJHPCA 21(2), pp. 210-221
DOI: 10.1177/1094342007074874
HB Perets, A Gualandris, G Kupi, D Merritt, T Alexander (2009) Dynamical evolution of the young stars in the Galactic center: N-body simulations of the S-stars, In: Astrophys. J. 702, pp. 884-889, 2009
DOI: 10.1088/0004-637X/702/2/884
We use N-body simulations to study the evolution of the orbital eccentricities of stars deposited near (
A Pontzen, JI Read, R Teyssier, F Governato, A Gualandris, N Roth, J Devriendt (2015) Milking the spherical cow - on aspherical dynamics in spherical coordinates, In: Monthly Notices of the Royal Astronomical Society 451(2), pp. 1366-1379, Oxford University Press
DOI: 10.1093/mnras/stv1032
Miklos Peuten, A Zocchi, Mark Gieles, Alessia Gualandris, V Henault-Brunet (2016) A stellar-mass black hole population in the globular cluster NGC 6101?, In: Monthly Notices of the Royal Astronomical Society 462(3), pp. 2333-2342, Oxford University Press
DOI: 10.1093/mnras/stw1726
Dalessandro et al. observed a similar distribution for blue straggler stars and main-sequence turn-off stars in the Galactic globular cluster NGC 6101, and interpreted this feature as an indication that this cluster is not mass-segregated. Using direct N-body simulations, we find that a significant amount of mass segregation is expected for a cluster with the mass, radius and age of NGC 6101. Therefore, the absence of mass segregation cannot be explained by the argument that the cluster is not yet dynamically evolved. By varying the retention fraction of stellar-mass black holes, we show that segregation is not observable in clusters with a high black hole retention fraction (>50 per cent after supernova kicks and >50 per cent after dynamical evolution). Yet all model clusters have the same amount of mass segregation in terms of the decline of the mean mass of stars and remnants with distance to the centre. We also discuss how kinematics can be used to further constrain the presence of a stellar-mass black hole population and distinguish it from the effect of an intermediate-mass black hole. Our results imply that the kick velocities of black holes are lower than those of neutron stars. The large retention fraction during its dynamical evolution can be explained if NGC 6101 formed with a large initial radius in a Milky Way satellite.
SP Zwart, S McMillan, D Groen, A Gualandris, M Sipior, W Vermin (2007) A parallel gravitational N-body kernel
DOI: 10.1016/j.newast.2007.11.002
We describe source code level parallelization for the kira direct gravitational $N$-body integrator, the workhorse of the starlab production environment for simulating dense stellar systems. The parallelization strategy, called "j-parallelization", involves the partition of the computational domain by distributing all particles in the system among the available processors. Partial forces on the particles to be advanced are calculated in parallel by their parent processors, and are then summed in a final global operation. Once total forces are obtained, the computing elements proceed to the computation of their particle trajectories. We report the results of timing measurements on four different parallel computers, and compare them with theoretical predictions. The computers employ either a high-speed interconnect, a NUMA architecture to minimize the communication overhead, or are distributed in a grid. The code scales well in the domain tested, which ranges from 1024-65536 stars on 1-128 processors, providing satisfactory speedup. Running the production environment on a grid becomes inefficient for more than 60 processors distributed across three sites.
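An editorial sketch of the j-parallelization idea using mpi4py (illustrative only; kira/starlab are C++ codes with their own communication layer): each rank owns a share of the force sources, computes partial accelerations on all particles, and one global reduction yields the totals:

```python
# Editorial sketch of j-parallelization: distribute the force *sources*
# over ranks and sum partial accelerations in one global reduction.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N, G, eps2 = 1024, 1.0, 1e-4
rng = np.random.default_rng(42)      # same seed: every rank sees all positions
pos = rng.standard_normal((N, 3))
mass = np.full(N, 1.0 / N)

acc_partial = np.zeros((N, 3))
for j in range(rank, N, size):       # this rank's share of the sources
    d = pos[j] - pos                 # vectors from every particle to j
    r2 = (d * d).sum(axis=1) + eps2  # softened squared distances
    r2[j] = np.inf                   # exclude the self-interaction
    acc_partial += G * mass[j] * d / r2[:, None] ** 1.5

acc_total = np.empty_like(acc_partial)
comm.Allreduce(acc_partial, acc_total, op=MPI.SUM)  # final global sum
```

Run with, e.g., `mpiexec -n 4 python sketch.py`; the summed result is independent of the number of ranks.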
Denis Erkal, Douglas Boubert, Alessia Gualandris, N. Wyn Evans, Fabio Antonini (2018) A hypervelocity star with a Magellanic origin, In: Monthly Notices of the Royal Astronomical Society, Oxford University Press (OUP)
Using proper motion measurements from Gaia DR2, we probe the origin of 26 previously known hypervelocity stars (HVSs) around the Milky Way. We find that a significant fraction of these stars have a high probability of originating close to the Milky Way centre, but there is one obvious outlier. HVS3 is highly likely to be coming almost from the centre of the Large Magellanic Cloud (LMC). During its closest approach, $21.1^{+6.1}_{-4.6}$ Myr ago, it had a relative velocity of $870^{+69}_{-66}$ km s$^{-1}$ with respect to the LMC. This large kick velocity is only consistent with the Hills mechanism, requiring a massive black hole at the centre of the LMC. This provides strong direct evidence that the LMC itself harbours a massive black hole of at least $4\times 10^{3}$-$10^{4}\,\mathrm{M_\odot}$.
E Bortolas, Alessia Gualandris, M Dotti, M Spera, M Mapelli (2016) Brownian motion of massive black hole binaries and the final parsec problem, In: Monthly Notices of the Royal Astronomical Society 461(1), pp. 1023-1031, Oxford University Press
Massive black hole binaries (BHBs) are expected to be one of the most powerful sources of gravitational waves in the frequency range of the pulsar timing array and of forthcoming space-borne detectors. They are believed to form in the final stages of galaxy mergers, and then harden by slingshot ejections of passing stars. However, evolution via the slingshot mechanism may be ineffective if the reservoir of interacting stars is not readily replenished, and the binary shrinking may come to a halt at roughly a parsec separation. Recent simulations suggest that the departure from spherical symmetry, naturally produced in merger remnants, leads to efficient loss cone refilling, preventing the binary from stalling. However, current N-body simulations able to accurately follow the evolution of BHBs are limited to very modest particle numbers. Brownian motion may artificially enhance the loss cone refilling rate in low-N simulations, where the binary encounters a larger population of stars due to its random motion. Here we study the significance of Brownian motion of BHBs in merger remnants in the context of the final parsec problem. We simulate mergers with various particle numbers (from 8k to 1M) and with several density profiles. Moreover, we compare simulations where the BHB is fixed at the centre of the merger remnant with simulations where the BHB is free to random walk. We find that Brownian motion does not significantly affect the evolution of BHBs in simulations with particle numbers in excess of one million, and that the hardening measured in merger simulations is due to collisionless loss cone refilling.
A Gualandris, M Colpi, A Possenti (2002) Unveiling black holes ejected from globular clusters
Was the black hole in XTE J1118+480 ejected from a globular cluster or kicked away from the galactic disk?
VV Gvaramadze, A Gualandris, SP Zwart (2007) Hyperfast pulsars as the remnants of massive stars ejected from young star clusters, In: Mon. Not. Roy. Astron. Soc. 385, pp. 929-938
Recent proper motion and parallax measurements for the pulsar PSR B1508+55 indicate a transverse velocity of ~1100 km/s, which exceeds earlier measurements for any neutron star. The spin-down characteristics of PSR B1508+55 are typical for a non-recycled pulsar, which implies that the velocity of the pulsar cannot have originated from the second supernova disruption of a massive binary system. The high velocity of PSR B1508+55 can be accounted for by assuming that it received a kick at birth or that the neutron star was accelerated after its formation in the supernova explosion. We propose an explanation for the origin of hyperfast neutron stars based on the hypothesis that they could be the remnants of a symmetric supernova explosion of a high-velocity massive star which attained its peculiar velocity (similar to that of the pulsar) in the course of a strong dynamical three- or four-body encounter in the core of dense young star cluster. To check this hypothesis we investigated three dynamical processes involving close encounters between: (i) two hard massive binaries, (ii) a hard binary and an intermediate-mass black hole, and (iii) a single star and a hard binary intermediate-mass black hole. We find that main-sequence O-type stars cannot be ejected from young massive star clusters with peculiar velocities high enough to explain the origin of hyperfast neutron stars, but lower mass main-sequence stars or the stripped helium cores of massive stars could be accelerated to hypervelocities. Our explanation for the origin of hyperfast pulsars requires a very dense stellar environment of the order of 10^6 -10^7 stars pc^{-3}. Although such high densities may exist during the core collapse of young massive star clusters, we caution that they have never been observed.
E Gaburov, A Gualandris, SP Zwart (2007) On the onset of runaway stellar collisions in dense star clusters I. Dynamics of the first collision
We study the circumstances under which first collisions occur in young and dense star clusters. The initial conditions for our direct $N$-body simulations are chosen such that the clusters experience core collapse within a few million years, before the most massive stars have left the main-sequence. It turns out that the first collision is typically driven by the most massive stars in the cluster. Upon arrival in the cluster core, by dynamical friction, massive stars tend to form binaries. The enhanced cross section of the binary compared to a single star causes other stars to engage the binary. A collision between one of the binary components and the incoming third star is then mediated by the encounters between the binary and other cluster members. Due to the geometry of the binary-single star engagement the relative velocity at the moment of impact is substantially different than in a two-body encounter. This may have profound consequences for the further evolution of the collision product.
HB Perets, A Gualandris, D Merritt, T Alexander (2008) Dynamical evolution of the young stars in the Galactic center
Recent observations of the Galactic center revealed a nuclear disk of young OB stars near the massive black hole (MBH), in addition to many similar outlying stars with higher eccentricities and/or high inclinations relative to the disk (some of them possibly belonging to a second disk). In addition, observations show the existence of young B stars (the 'S-cluster') in an isotropic distribution in the close vicinity of the MBH ($
Imran Tariq Nasim, Alessia Gualandris, Justin I Read, Fabio Antonini, Walter Dehnen, Maxime Delorme (2021) Formation of the largest galactic cores through binary scouring and gravitational wave recoil, In: Monthly Notices of the Royal Astronomical Society 502(4), pp. 4794-4814
DOI: 10.1093/mnras/stab435
Massive elliptical galaxies are typically observed to have central cores in their projected radial light profiles. Such cores have long been thought to form through 'binary scouring' as supermassive black holes (SMBHs), brought in through mergers, form a hard binary and eject stars from the galactic centre. However, the most massive cores, like the $\sim 3{\, \mathrm{kpc}}$ core in A2261-BCG, remain challenging to explain in this way. In this paper, we run a suite of dry galaxy merger simulations to explore three different scenarios for central core formation in massive elliptical galaxies: 'binary scouring', 'tidal deposition', and 'gravitational wave (GW) induced recoil'. Using the griffin code, we self-consistently model the stars, dark matter, and SMBHs in our merging galaxies, following the SMBH dynamics through to the formation of a hard binary. We find that we can only explain the large surface brightness core of A2261-BCG with a combination of a major merger that produces a small $\sim 1{\, \mathrm{kpc}}$ core through binary scouring, followed by the subsequent GW recoil of its SMBH that acts to grow the core size. Key predictions of this scenario are an offset SMBH surrounded by a compact cluster of bound stars and a non-divergent central density profile. We show that the bright 'knots' observed in the core region of A2261-BCG are best explained as stalled perturbers resulting from minor mergers, though the brightest may also represent ejected SMBHs surrounded by a stellar cloak of bound stars.
S Harfst, A Gualandris, D Merritt, S Mikkola (2008) A Hybrid N-Body Code Incorporating Algorithmic Regularization and Post-Newtonian Forces, In: Monthly Notices of the Royal Astronomical Society
We describe a novel N-body code designed for simulations of the central regions of galaxies containing massive black holes. The code incorporates Mikkola's 'algorithmic' chain regularization scheme including post-Newtonian terms up to PN2.5 order. Stars moving beyond the chain are advanced using a fourth-order integrator with forces computed on a GRAPE board. Performance tests confirm that the hybrid code achieves better energy conservation, in less elapsed time, than the standard scheme and that it reproduces the orbits of stars tightly bound to the black hole with high precision. The hybrid code is applied to two sample problems: the effect of finite-N gravitational fluctuations on the orbits of the S-stars; and inspiral of an intermediate-mass black hole into the galactic center.
VV Gvaramadze, A Gualandris, SP Zwart (2009) High-velocity runaway stars from three-body encounters, In: IAU Symp. 266, pp. 413-416
DOI: 10.1017/S1743921309991554
We performed numerical simulations of dynamical encounters between hard massive binaries and a very massive star (VMS; formed through runaway mergers of ordinary stars in the dense core of a young massive star cluster), in order to explore the hypothesis that this dynamical process could be responsible for the origin of high-velocity ($\geq$ 200-400 km/s) early or late B-type stars. We estimated the typical velocities produced in encounters between very tight massive binaries and VMSs (of mass $\geq$ 200 Msun) and found that about 3-4 per cent of all encounters produce velocities of $\geq$ 400 km/s, while in about 2 per cent of encounters the escapers attain velocities exceeding the Milky Way's escape velocity. We therefore argue that the origin of high-velocity ($\geq$ 200-400 km/s) runaway stars and at least some so-called hypervelocity stars could be associated with dynamical encounters between the tightest massive binaries and VMSs formed in the cores of star clusters. We also simulated dynamical encounters between tight massive binaries and single ordinary 50-100 Msun stars. We found that from 1 to $\simeq$ 4 per cent of these encounters can produce runaway stars with velocities of $\geq$ 300-400 km/s (typical of the bound population of high-velocity halo B-type stars) and occasionally (in less than 1 per cent of encounters) produce hypervelocity ($\geq$ 700 km/s) late B-type escapers.
D Merritt, A Gualandris, S Mikkola (2008) Explaining the Orbits of the Galactic Center S-Stars, In: Astrophysical Journal Letters, 693, L35 (2009)
DOI: 10.1088/0004-637X/693/1/L35
The young stars near the supermassive black hole at the galactic center follow orbits that are nearly random in orientation and that have an approximately thermal distribution of eccentricities, N(e) ~ e. We show that both of these properties are a natural consequence of a few million years' interaction with an intermediate-mass black hole (IBH), if the latter's orbit is mildly eccentric and if its mass exceeds approximately 1500 solar masses. Producing the most tightly-bound S-stars requires an IBH orbit with periastron distance less than about 10 mpc. Our results provide support for a model in which the young stars are carried to the galactic center while bound to an IBH, and are consistent with the hypothesis that an IBH may still be orbiting within the nuclear star cluster.
A Gualandris, SP Zwart, PP Eggleton (2004) N-body simulations of stars escaping from the Orion nebula, In: Mon. Not. Roy. Astron. Soc. 350, pp. 615-?
We study the dynamical interaction in which the two single runaway stars AE Aurigae and mu Columbae and the binary iota Orionis acquired their unusually high space velocity. The two single runaways move in almost opposite directions with a velocity greater than 100 km/s away from the Trapezium cluster. The star iota Ori is an eccentric (e=0.8) binary moving with a velocity of about 10 km/s at almost right angles with respect to the two single stars. The kinematic properties of the system suggest that a strong dynamical encounter occurred in the Trapezium cluster about 2.5 Myr ago. Curiously enough, the two binary components have similar spectral type but very different masses, indicating that their ages must be quite different. This observation leads to the hypothesis that an exchange interaction occurred in which an older star was swapped into the original iota Orionis binary. We test this hypothesis by a combination of numerical and theoretical techniques, using N-body simulations to constrain the dynamical encounter, binary evolution calculations to constrain the high orbital eccentricity of iota Orionis and stellar evolution calculations to constrain the age discrepancy of the two binary components. We find that an encounter between two low eccentricity (0.4
A Gualandris, M Colpi, SP Zwart, A Possenti (2004) Has the black hole in XTE J1118+480 experienced an asymmetric natal kick?, In: Astrophys. J. 618, pp. 845-851
We explore the origin of the Galactic high latitude black hole X-ray binary XTE J1118+480, and in particular its birth location and the magnitude of the kick received by the black hole upon formation in the supernova explosion. We constrain the age of the companion to the black hole using stellar evolution calculations between 2 Gyr and 5 Gyr, making an origin in a globular cluster unlikely. We therefore argue that the system was born in the Galactic disk and the supernova propelled it in its current high latitude orbit. Given the current estimates on its distance, proper motion and radial velocity, we back-trace the orbit of XTE J1118+480 in the Galactic potential to infer the peculiar velocity of the system at different disk crossings over the last 5 Gyr. Taking into account the uncertainties on the velocity components, we infer an average peculiar velocity of 183 $\pm$ 31 km/s. The maximum velocity which the binary can acquire by symmetric supernova mass loss is about 100 km/s, which is 2.7$\sigma$ away from the mean of the peculiar velocity distribution. We therefore argue that an additional asymmetric kick velocity is required. By considering the orientation of the system relative to the plane of the sky, we derive a 95% probability for a non-null component of the kick perpendicular to the orbital plane of the binary. The distribution of perpendicular velocities is skewed to lower velocities with an average of $93^{+55}_{-60}$ km/s.
H Baumgardt, A Gualandris, SP Zwart (2006) Ejection of Hyper-Velocity Stars from the Galactic Centre by Intermediate-Mass Black Holes, In: Mon. Not. Roy. Astron. Soc. 372, pp. 174-182
We have performed N-body simulations of the formation of hyper-velocity stars (HVS) in the centre of the Milky Way due to inspiralling intermediate-mass black holes (IMBHs). We considered IMBHs of different masses, all starting from circular orbits at an initial distance of 0.1 pc. We find that the IMBHs sink to the centre of the Galaxy due to dynamical friction, where they deplete the central cusp of stars. Some of these stars become HVS and are ejected with velocities sufficiently high to escape the Galaxy. Since the HVS carry with them information about their origin, in particular in the moment of ejection, the velocity distribution and the direction in which they escape the Galaxy, detecting a population of HVS will provide insight in the ejection processes and could therefore provide indirect evidence for the existence of IMBHs. Our simulations show that HVS are generated in short bursts which last only a few Myrs until the IMBH is swallowed by the supermassive black hole (SMBH). HVS are ejected almost isotropically, which makes IMBH induced ejections hard to distinguish from ejections due to encounters of stellar binaries with a SMBH. After the HVS have reached the galactic halo, their escape velocities correlate with the distance from the Galactic centre in the sense that the fastest HVS can be found furthest away from the centre. The velocity distribution of HVS generated by inspiralling IMBHs is also nearly independent of the mass of the IMBH and can be quite distinct from one generated by binary encounters. Finally, our simulations show that the presence of an IMBH in the Galactic centre changes the stellar density distribution inside r
Filippo Contenta, Eduardo Balbinot, James Petts, Justin Read, Mark Gieles, Michelle Collins, Jorge Peñarrubia, Maxime Delorme, Alessia Gualandris (2018) Probing dark matter with star clusters: a dark matter core in the ultra-faint dwarf Eridanus II, In: Monthly Notices of the Royal Astronomical Society, Oxford University Press (OUP)
We present a new technique to probe the central dark matter (DM) density profile of galaxies that harnesses both the survival and observed properties of star clusters. As a first application, we apply our method to the `ultra-faint' dwarf Eridanus II (Eri II) that has a lone star cluster ~45 pc from its centre. Using a grid of collisional N-body simulations, incorporating the effects of stellar evolution, external tides and dynamical friction, we show that a DM core for Eri II naturally reproduces the size and the projected position of its star cluster. By contrast, a dense cusped galaxy requires the cluster to lie implausibly far from the centre of Eri II (>1 kpc), with a high inclination orbit that must be observed at a particular orbital phase. Our results imply that either a cold DM cusp was `heated up' at the centre of Eri II by bursty star formation, or we are seeing an evidence for physics beyond cold DM.
HB Perets, A Gualandris (2010) Dynamical constraints on the origin of the young B-stars in the Galactic center, In: Astrophysical Journal Letters
Regular star formation is thought to be inhibited close to the massive black hole (MBH) in the Galactic center. Nevertheless, tens of young main sequence B stars have been observed in an isotropic distribution close to it. Various models have been suggested for the formation of the B-stars closest to the MBH (
A Sesana, A Gualandris, M Dotti (2011) Massive black hole binary eccentricity in rotating stellar systems
In this letter we study the eccentricity evolution of a massive black hole (MBH) binary (MBHB) embedded in a rotating stellar cusp. Following the observation that stars on counter-rotating (with respect to the MBHB) orbits extract angular momentum from the binary more efficiently than their co-rotating counterparts, the eccentricity evolution of the MBHB must depend on the degree of co-rotation (counter-rotation) of the surrounding stellar distribution. Using a hybrid scheme that couples numerical three-body scatterings to an analytical formalism for the cusp-binary interaction, we verify this hypothesis by evolving the MBHB in spherically symmetric cusps with different fractions F of co-rotating stars. Consistently with previous works, binaries in isotropic cusps (F=0.5) tend to increase their eccentricity, and when F approaches zero (counter-rotating cusps) the eccentricity rapidly increases to almost unity. Conversely, binaries in cusps with a significant degree of co-rotation (F>0.7) tend to become less and less eccentric, circularising quite quickly for F approaching unity. Direct N-body integrations performed to test the theory corroborate the results of the hybrid scheme, at least at a qualitative level. We discuss quantitative differences, ascribing their origin to the oversimplified nature of the hybrid approach.
A Gualandris, SP Zwart (2006) A hypervelocity star from the Large Magellanic Cloud, In: Mon. Not. Roy. Astron. Soc. Lett. 376, pp. L29-L33
We study the acceleration of the star HE0437-5439 to hypervelocity and discuss its possible origin in the Large Magellanic Cloud (LMC). The star has a radial velocity of 723 km/s and is located at a distance of 61 kpc from the Sun. With a mass of about 8 Msun, the travel time from the Galactic centre is about 100 Myr, much longer than its main-sequence lifetime. Given the relatively small distance to the LMC (18 kpc), we consider it likely that HE0437-5439 originated in the cloud rather than in the Galactic centre, like the other hypervelocity stars. The minimum ejection velocity required to travel from the LMC to its current location within its lifetime is about 500 km/s. Such a high velocity can only be obtained in a dynamical encounter with a massive black hole. We perform 3-body scattering simulations in which a stellar binary encounters a massive black hole and find that a black hole more massive than 1000 Msun is necessary to explain the high velocity of HE0437-5439. We look for possible parent clusters for HE0437-5439 and find that NGC 2100 and NGC 2004 are young enough to host stars coeval to HE0437-5439 and dense enough to produce an intermediate mass black hole able to eject an 8 Msun star with hypervelocity.
Manuel Arca-Sedda, Alessia Gualandris (2018) Gravitational wave sources from inspiralling globular clusters in the Galactic Centre and similar environments, In: Monthly Notices of the Royal Astronomical Society 477(4), pp. 4423-4442, Oxford University Press (OUP)
DOI: 10.1093/mnras/sty922
We model the inspiral of globular clusters (GCs) towards a galactic nucleus harboring a supermassive black hole (SMBH), a leading scenario for the formation of nuclear star clusters. We consider the case of GCs containing either an intermediate-mass black hole (IMBH) or a population of stellar mass black holes (BHs), and study the formation of gravitational wave (GW) sources. We perform direct summation N-body simulations of the infall of GCs with different orbital eccentricities in the live background of a galaxy with either a shallow or steep density profile. We find that the GC acts as an efficient carrier for the IMBH, facilitating the formation of a bound pair. The hardening and evolution of the binary depends sensitively on the galaxy's density profile. If the host galaxy has a shallow profile the hardening is too slow to allow for coalescence within a Hubble time, unless the initial cluster orbit is highly eccentric. If the galaxy hosts a nuclear star cluster, the hardening leads to coalescence by emission of GWs within 3-4 Gyr. In this case, we find an IMBH-SMBH merger rate of $\Gamma_{\rm IMBH-SMBH} = 2.8\times 10^{-3}\,\mathrm{yr^{-1}\,Gpc^{-3}}$. If the GC hosts a population of stellar BHs, these are deposited close enough to the SMBH to form extreme-mass-ratio inspirals with a merger rate of $\Gamma_{\rm EMRI} = 0.25\,\mathrm{yr^{-1}\,Gpc^{-3}}$. Finally, the SMBH tidal field can boost the coalescence of stellar black hole binaries delivered from the infalling GCs. The merger rate for this merging channel is $\Gamma_{\rm BHB} = 0.4$-$4\,\mathrm{yr^{-1}\,Gpc^{-3}}$.
MBN Kouwenhoven, AGA Brown, A Gualandris, L Kaper, SFP Zwart, H Zinnecker (2003) The Primordial Binary Population in OB Associations
For understanding the process of star formation it is essential to know how many stars are formed as singles or in multiple systems, as a function of environment and binary parameters. This requires a characterization of the primordial binary population, which we define as the population of binaries that is present just after star formation has ceased, but before dynamical and stellar evolution have significantly altered its characteristics. In this article we present the first results of our adaptive optics survey of 200 (mainly) A-type stars in the nearby OB association Sco OB2. We report the discovery of 47 new candidate companions of Sco OB2 members. The next step will be to combine these observations with detailed simulations of young star clusters, in order to find the primordial binary population.
A Gualandris, JI Read, W Dehnen, E Bortolas (2017) Collisionless loss-cone refilling: there is no final parsec problem, In: Monthly Notices of the Royal Astronomical Society 464(2), pp. 2301-2310
Coalescing massive black hole binaries, formed during galaxy mergers, are expected to be a primary source of low-frequency gravitational waves. Yet in isolated gas-free spherical stellar systems, the hardening of the binary stalls at parsec-scale separations owing to the inefficiency of relaxation-driven loss-cone refilling. Repopulation via collisionless orbit diffusion in triaxial systems is more efficient, but published simulation results are contradictory. While sustained hardening has been reported in simulations of galaxy mergers with $N \sim 10^6$ stars and in early simulations of rotating models, in isolated non-rotating triaxial models the hardening rate continues to fall with increasing $N$, a signature of spurious two-body relaxation. We present a novel approach for studying loss-cone repopulation in galactic nuclei. Since loss-cone repopulation in triaxial systems owes to orbit diffusion, it is a purely collisionless phenomenon and can be studied with an approximated force calculation technique, provided the force errors are well behaved and sufficiently small. We achieve this using an accurate fast multipole method and define a proxy for the hardening rate that depends only on stellar angular momenta. We find that the loss cone is efficiently replenished even in very mildly triaxial models (with axis ratios 1:0.9:0.8). Such triaxiality is unavoidable following galactic mergers and can drive binaries into the gravitational wave regime. We conclude that there is no 'final parsec problem'.
VV Gvaramadze, A Gualandris (2010) Very massive runaway stars from three-body encounters, In: Monthly Notices of the Royal Astronomical Society: Letters
Very massive stars preferentially reside in the cores of their parent clusters and form binary or multiple systems. We study the role of tight very massive binaries in the origin of the field population of very massive stars. We performed numerical simulations of dynamical encounters between single (massive) stars and a very massive binary with parameters similar to those of the most massive known Galactic binaries, WR 20a and NGC 3603-A1. We found that these three-body encounters could be responsible for the origin of high peculiar velocities ($\geq$ 70 km/s) observed for some very massive ($\geq$ 60-70 Msun) runaway stars in the Milky Way and the Large Magellanic Cloud (e.g., $\lambda$ Cep, BD+43 3654, Sk-67 22, BI 237, 30 Dor 016), which can hardly be explained within the framework of the binary-supernova scenario. The production of high-velocity massive stars via three-body encounters is accompanied by the recoil of the binary in the opposite direction to the ejected star. We show that the relative position of the very massive binary R145 and the runaway early B-type star Sk-69 206 on the sky is consistent with the possibility that both objects were ejected from the central cluster, R136, of the star-forming region 30 Doradus via the same dynamical event -- a three-body encounter.
Imran Nasim, Alessia Gualandris, Justin Read, Walter Dehnen, Maxime Delorme, Fabio Antonini (2020) Defeating stochasticity: coalescence timescales of massive black holes in galaxy mergers, In: Monthly Notices of the Royal Astronomical Society, Oxford University Press
The coalescence of massive black hole binaries (BHBs) in galactic mergers is the primary source of gravitational waves (GWs) at low frequencies. Current estimates of GW detection rates for the Laser Interferometer Space Antenna and the Pulsar Timing Array vary by three orders of magnitude. To understand this variation, we simulate the merger of equal-mass, eccentric galaxy pairs with central massive black holes and shallow inner density cusps. We model the formation and hardening of a central BHB using the Fast Multipole Method as a force solver, which features an $O(N)$ scaling with the number $N$ of particles and obtains results equivalent to direct-summation simulations. At $N \sim 5\times 10^{5}$, typical for contemporary studies, the eccentricity of the BHBs can vary significantly for different random realisations of the same initial condition, resulting in a substantial variation of the merger timescale. This scatter owes to the stochasticity of stellar encounters with the BHB and decreases with increasing $N$. We estimate that $N \sim 10^{7}$ within the stellar half-light radius suffices to reduce the scatter in the merger timescale to 10%. Our results suggest that at least some of the uncertainty in low-frequency GW rates owes to insufficient numerical resolution.
M Colpi, A Gualandris, A Possenti (2002) Is NGC6752 hiding a double black hole binary in its core?
NGC6752 hosts in its halo PSR J1911-5958A, a newly discovered binary millisecond pulsar which is the most distant pulsar ever known from the core of a globular cluster. Interestingly, its recycling history seems in conflict with a scenario of ejection resulting from ordinary stellar dynamical encounters. A scattering event off a binary system of two black holes with masses in the range of 3-50 solar masses that propelled PSR J1911-5958A into its current peripheral orbit seems more likely. It is still an observational challenge to unveil the imprint(s) left by such a dark massive binary on the cluster's stars: PSR J1911-5958A may be the first case.
Fabio Antonini, Mark Gieles, Alessia Gualandris (2019)Black hole growth through hierarchical black hole mergers in dense star clusters: implications for gravitational wave detections, In: Monthly Notices of the Royal Astronomical Society486(4)pp. 5008-5021 Oxford University Press (OUP)
DOI: 10.1093/mnras/stz1149
In a star cluster with a sufficiently large escape velocity, black holes (BHs) that are produced by BH mergers can be retained, dynamically form new BH binaries, and merge again. This process can repeat several times and lead to significant mass growth. In this paper, we calculate the mass of the largest BH that can form through repeated BH mergers and determine how its value depends on the physical properties of the host cluster. We adopt an analytical model in which the energy generated by the black hole binaries in the cluster core is assumed to be regulated by the process of two-body relaxation in the bulk of the system. This principle is used to compute the hardening rate of the binaries and to relate this to the time-dependent global properties of the parent cluster. We demonstrate that in clusters with initial escape velocity ≳ 300 km/s in the core and density ≳ 10^5 M⊙ pc^-3, repeated mergers lead to the formation of BHs in the mass range 100−10^5 M⊙, populating any upper mass gap created by pair-instability supernovae. This result is independent of cluster metallicity and the initial BH spin distribution. We show that about 10 per cent of the present-day nuclear star clusters meet these extreme conditions, and estimate that BH binary mergers with total mass ≳ 100 M⊙ should be produced in these systems at a maximum rate ≈ 0.05 Gpc^-3 yr^-1, corresponding to one detectable event every few years with Advanced LIGO/Virgo at design sensitivity.
A Gualandris, S Gillessen, D Merritt (2010)The Galactic Centre star S2 as a dynamical probe for intermediate-mass black holes, In: Monthly Notices of the Royal Astronomical Society
We study the short-term effects of an intermediate mass black hole (IBH) on the orbit of star S2 (S02), the shortest-period star known to orbit the supermassive black hole (MBH) in the centre of the Milky Way. Near-infrared imaging and spectroscopic observations allow an accurate determination of the orbit of the star. Given S2's short orbital period and large eccentricity, general relativity (GR) needs to be taken into account, and its effects are potentially measurable with current technology. We show that perturbations due to an IBH in orbit around the MBH can produce a shift in the apoapsis of S2 that is as large or even larger than the GR shift. An IBH will also induce changes in the plane of S2's orbit at a level as large as one degree per period. We apply observational orbital fitting techniques to simulations of the S-cluster in the presence of an IBH and find that an IBH more massive than about 1000 solar masses at the distance of the S-stars will be detectable at the next periapse passage of S2, which will occur in 2018.
Mark Gieles, C Charbonnel, M Krause, V Hénault-Brunet, Oscar Agertz, H Lamers, N Bastian, Alessia Gualandris, A Zocchi, James Petts (2018)Concurrent formation of supermassive stars and globular clusters: implications for early self-enrichment, In: Monthly Notices of the Royal Astronomical Society478(2)sty1059pp. 2461-2479 Oxford University Press
DOI: 10.1093/mnras/sty1059
We present a model for the concurrent formation of globular clusters (GCs) and supermassive stars (SMSs, > 10^3 M⊙) to address the origin of the He-C-N-O-Na-Mg-Al abundance anomalies in GCs. GCs form in converging gas flows and accumulate low-angular momentum gas, which accretes onto protostars. This leads to an adiabatic contraction of the cluster and an increase of the stellar collision rate. A SMS can form via runaway collisions if the cluster reaches sufficiently high density before two-body relaxation halts the contraction. This condition is met if the number of stars is ≳ 10^6 and the gas accretion rate is ≳ 10^5 M⊙/Myr, reminiscent of GC formation in high gas-density environments, such as -- but not restricted to -- the early Universe. The strong SMS wind mixes with the inflowing pristine gas, such that the protostars accrete diluted hot-hydrogen burning yields of the SMS. Because of continuous rejuvenation, the amount of processed material liberated by the SMS can be an order of magnitude higher than its maximum mass. This `conveyor-belt' production of hot-hydrogen burning products provides a solution to the mass budget problem that plagues other scenarios. Additionally, the liberated material is mildly enriched in helium and relatively rich in other hot-hydrogen burning products, in agreement with abundances of GCs today. Finally, we find a super-linear scaling between the amount of processed material and cluster mass, providing an explanation for the observed increase of the fraction of processed material with GC mass. We discuss open questions of this new GC enrichment scenario and propose observational tests.
M Mapelli, A Gualandris (2015)Star Formation and Dynamics in the Galactic Centre
The centre of our Galaxy is one of the most studied and yet enigmatic places in the Universe. At a distance of about 8 kpc from our Sun, the Galactic centre (GC) is the ideal environment to study the extreme processes that take place in the vicinity of a supermassive black hole (SMBH). Despite the hostile environment, several tens of early-type stars populate the central parsec of our Galaxy. A fraction of them lie in a thin ring with mild eccentricity and inner radius ~0.04 pc, while the S-stars, i.e. the ~30 stars closest to the SMBH.
F Antonini, J Faber, A Gualandris, D Merritt (2009)Tidal break-up of binary stars at the Galactic center and its consequences, In: Astrophysical Journal Letters
The tidal breakup of binary star systems by the supermassive black hole (SMBH) in the center of the galaxy has been suggested as the source of both the observed sample of hypervelocity stars (HVSs) in the halo of the Galaxy and the S-stars that remain in tight orbits around Sgr A*. Here, we use a post-Newtonian N-body code to study the dynamics of main-sequence binaries on highly elliptical bound orbits whose periapses lie close to the SMBH, determining the properties of ejected and bound stars as well as collision products. Unlike previous studies, we follow binaries that remain bound for several revolutions around the SMBH, finding that in the case of relatively large periapses and highly inclined binaries the Kozai resonance can lead to large periodic oscillations in the internal binary eccentricity and inclination. Collisions and mergers of the binary elements are found to increase significantly for multiple orbits around the SMBH, while HVSs are primarily produced during a binary's first passage. This process can lead to stellar coalescence and eventually serve as an important source of young stars at the galactic center.
D. Boubert, D. Erkal, A. Gualandris (2020)Deflection of the hypervelocity stars by the pull of the Large Magellanic Cloud on the Milky Way, In: Monthly Notices of the Royal Astronomical Society Oxford University Press
Stars slingshotted by the supermassive black hole at the Galactic centre escape from the Milky Way so quickly that their trajectories are almost straight lines. Previous works have shown how these `hypervelocity stars' (stars moving faster than the local Galactic escape speed) are subsequently deflected by the gravitational field of the Milky Way and the Large Magellanic Cloud (LMC), but have neglected to account for the reflex motion of the Milky Way in response to the fly-by of the LMC. A consequence of this motion is that the hypervelocity stars we see in the outskirts of the Milky Way today were ejected from where the Milky Way centre was hundreds of millions of years ago. This change in perspective causes large apparent deflections of several degrees in the trajectories of the hypervelocity stars. We quantify these deflections by simulating the ejection of hypervelocity stars from an isolated Milky Way (with a spherical or flattened dark matter halo), from a fixed-in-place Milky Way with a passing LMC, and from a Milky Way which responds to the passage of the LMC, finding that LMC passage causes larger deflections than can be caused by a flattened Galactic dark matter halo in ΛCDM.
A Gualandris, D Merritt (2007)Ejection of Supermassive Black Holes from Galaxy Cores
[Abridged] Recent numerical relativity simulations have shown that the emission of gravitational waves during the merger of two supermassive black holes (SMBHs) delivers a kick to the final hole, with a magnitude as large as 4000 km/s. We study the motion of SMBHs ejected from galaxy cores by such kicks and the effects on the stellar distribution using high-accuracy direct N-body simulations. Following the kick, the motion of the SMBH exhibits three distinct phases. (1) The SMBH oscillates with decreasing amplitude, losing energy via dynamical friction each time it passes through the core. Chandrasekhar's theory accurately reproduces the motion of the SMBH in this regime if 2 < ln Lambda < 3 and if the changing core density is taken into account. (2) When the amplitude of the motion has fallen to roughly the core radius, the SMBH and core begin to exhibit oscillations about their common center of mass. These oscillations decay with a time constant that is at least 10 times longer than would be predicted by naive application of the dynamical friction formula. (3) Eventually, the SMBH reaches thermal equilibrium with the stars. We estimate the time for the SMBH's oscillations to damp to the Brownian level in real galaxies and infer times as long as 1 Gyr in the brightest galaxies. Ejection of SMBHs also results in a lowered density of stars near the galaxy center; mass deficits as large as five times the SMBH mass are produced for kick velocities near the escape velocity. We compare the N-body density profiles with luminosity profiles of early-type galaxies in Virgo and show that even the largest observed cores can be reproduced by the kicks, without the need to postulate hypermassive binary SMBHs. Implications for displaced AGNs and helical radio structures are discussed.
Giacomo Fragione, Alessia Gualandris (2018)Tidal breakup of triple stars in the Galactic Centre, In: Monthly Notices of the Royal Astronomical Society475(4)pp. 4986-4993 Oxford University Press
The last decade has seen the detection of fast moving stars in the Galactic halo, the so-called hypervelocity stars (HVSs). While the bulk of this population is likely the result of a close encounter between a stellar binary and the supermassive black hole (MBH) in the Galactic Centre (GC), other mechanisms may contribute fast stars to the sample. A few observed HVSs show apparent ages which are shorter than the flight time from the GC, thereby making the binary disruption scenario unlikely. These stars may be the result of the breakup of a stellar triple in the GC which led to the ejection of a hypervelocity binary (HVB). If such a binary evolves into a blue straggler star due to internal processes after ejection, a rejuvenation is possible that makes the star appear younger once detected in the halo. A triple disruption may also be responsible for the presence of HVBs, of which one candidate has now been observed. We present a numerical study of triple disruptions by the MBH in the GC and find that the most likely outcomes are the production of single HVSs and single/binary stars bound to the MBH, while the production of HVBs has a probability ≲ 1% regardless of the initial parameters. Assuming a triple fraction of ≈ 10% results in an ejection rate of ≲ 1 Gyr^-1, insufficient to explain the sample of HVSs with lifetimes shorter than their flight time. We conclude that alternative mechanisms are responsible for the origin of such objects and HVBs in general.
VV Gvaramadze, A Gualandris, SP Zwart (2007)On the origin of hyperfast neutron stars, In: IAU Symposium 246, pp. 365-366
We propose an explanation for the origin of hyperfast neutron stars (e.g. PSR B1508+55, PSR B2224+65, RX J0822-4300) based on the hypothesis that they could be the remnants of a symmetric supernova explosion of a high-velocity massive star (or its helium core) which attained its peculiar velocity (similar to that of the neutron star) in the course of a strong three- or four-body dynamical encounter in the core of a young massive star cluster. This hypothesis implies that the dense cores of star clusters (located either in the Galactic disk or near the Galactic centre) could also produce the so-called hypervelocity stars -- the ordinary stars moving with a speed of ~1000 km/s.
JA Petts, A Gualandris, JI Read (2015)A semi-analytic dynamical friction model that reproduces core stalling, In: MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY454(4)pp. 3778-3791 OXFORD UNIV PRESS
A Gualandris, SP Zwart, M Sipior (2005)Three-body encounters in the Galactic centre: the origin of the hypervelocity star SDSS J090745.0+024507, In: Monthly Notices of the Royal Astronomical Society 363, pp. 223-228
Hills (1988) predicted that runaway stars could be accelerated to velocities larger than 1000 km/s by dynamical encounters with the supermassive black hole (SMBH) in the Galactic center. The recently discovered hypervelocity star SDSS J090745.0+024507 (hereafter HVS) is escaping the Galaxy at high speed and could be the first object in this class. With the measured radial velocity and the estimated distance to the HVS, we trace back its trajectory in the Galactic potential. Assuming it was ejected from the center, we find that a $\sim$ 2 mas/yr proper motion is necessary for the star to have come within a few parsecs of the SMBH. We perform three-body scattering experiments to constrain the progenitor encounter which accelerated the HVS. As proposed by Yu & Tremaine (2003), we consider the tidal disruption of binary systems by the SMBH and the encounter between a star and a binary black hole, as well as an alternative scenario involving intermediate mass black holes. We find that the tidal disruption of a stellar binary ejects stars with a larger velocity compared to the encounter between a single star and a binary black hole, but has a somewhat smaller ejection rate due to the greater availability of single stars.
A Gualandris, SP Zwart, A Tirado-Ramos (2004)Performance analysis of direct N-body algorithms for astrophysical simulations on distributed systems, In: Parallel Computing 33, pp. 159-173
DOI: 10.1016/j.parco.2007.01.001
We discuss the performance of direct summation codes used in the simulation of astrophysical stellar systems on highly distributed architectures. These codes compute the gravitational interaction among stars in an exact way and have an O(N^2) scaling with the number of particles. They can be applied to a variety of astrophysical problems, like the evolution of star clusters, the dynamics of black holes, the formation of planetary systems, and cosmological simulations. The simulation of realistic star clusters with sufficiently high accuracy cannot be performed on a single workstation but may be possible on parallel computers or grids. We have implemented two parallel schemes for a direct N-body code and we study their performance on general purpose parallel computers and large computational grids. We present the results of timing analyses conducted on the different architectures and compare them with the predictions from theoretical models. We conclude that the simulation of star clusters with up to a million particles will be possible on large distributed computers in the next decade. Simulating entire galaxies however will in addition require new hybrid methods to speed up the calculation.
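As a concrete illustration of the O(N^2) kernel that such codes parallelize, here is a minimal direct-summation force routine in Python (a sketch only: the function name, softening length and unit system are illustrative and not taken from any of the codes discussed):

```python
import numpy as np

def pairwise_accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-summation gravitational accelerations, O(N^2) in particle number.

    pos  : (N, 3) array of positions
    mass : (N,) array of masses
    eps  : Plummer softening length, avoids singular forces at zero separation
    """
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        dr = pos - pos[i]                      # vectors from particle i to all others
        r2 = (dr ** 2).sum(axis=1) + eps ** 2  # softened squared separations
        r2[i] = np.inf                         # exclude the self-interaction term
        acc[i] = G * (mass[:, None] * dr / r2[:, None] ** 1.5).sum(axis=0)
    return acc
```

The outer loop over i is the part that special-purpose hardware and parallel schemes accelerate: each particle's force sum is independent of the others, so the work distributes naturally across processors.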
Manuel Arca Sedda, Alessia Gualandris, Tuan Do, Anja Feldmeier-Krause, Nadine Neumayer, Denis Erkal (2020)On the origin of a rotating metal-poor stellar population in the Milky Way Nuclear Cluster, In: Astrophysical Journal Letters IOP Publishing
We explore the origin of a population of stars recently detected in the inner parsec of the Milky Way Nuclear Cluster (NC), which exhibit sub-solar metallicity and a higher rotation compared to the dominant population. Using state-of-the-art N-body simulations, we model the infall of massive stellar systems into the Galactic center, both of Galactic and extra-galactic origin. We show that the newly discovered population can either be the remnant of a massive star cluster formed a few kpc away from the Galactic center (Galactic scenario) or be accreted from a dwarf galaxy originally located at 10-100 kpc (extragalactic scenario) and that reached the Galactic center.
James Petts, Justin Read, Alessia Gualandris (2016)A semi-analytic dynamical friction model for cored galaxies, In: Monthly Notices of the Royal Astronomical Society463(1)pp. 858-869 Oxford University Press
We present a dynamical friction model based on Chandrasekhar's formula that reproduces the fast inspiral and stalling experienced by satellites orbiting galaxies with a large constant density core. We show that the fast inspiral phase does not owe to resonance. Rather, it owes to the background velocity distribution function for the constant density core being dissimilar from the usually-assumed Maxwellian distribution. Using the correct background velocity distribution function and the semi-analytic model from Petts, Gualandris & Read (2015), we are able to correctly reproduce the infall rate in both cored and cusped potentials. However, in the case of large cores, our model is no longer able to correctly capture core-stalling. We show that this stalling owes to the tidal radius of the satellite approaching the size of the core. By switching off dynamical friction when rt(r) = r (where rt is the tidal radius at the satellite's position) we arrive at a model which reproduces the N-body results remarkably well. Since the tidal radius can be very large for constant density background distributions, our model recovers the result that stalling can occur for Ms/Menc ≪ 1, where Ms and Menc are the mass of the satellite and the enclosed galaxy mass, respectively. Finally, we include the contribution to dynamical friction that comes from stars moving faster than the satellite. This next-to-leading order effect becomes the dominant driver of inspiral near the core region, prior to stalling.
Giacomo Fragione, Alessia Gualandris (2019)Hypervelocity stars from star clusters hosting Intermediate-Mass Black Holes, In: Monthly Notices of the Royal Astronomical Society
Hypervelocity stars (HVSs) represent a unique population of stars in the Galaxy reflecting properties of the whole Galactic potential. Determining their origin is of fundamental importance to constrain the shape and mass of the dark halo. The leading scenario for the ejection of HVSs is an encounter with the supermassive black hole in the Galactic Centre. However, new proper motions from the Gaia mission indicate that only the fastest HVSs can be traced back to the Galactic centre and the remaining stars originate in the disc or halo. In this paper, we study HVSs generated by encounters of stellar binaries with an intermediate-mass black hole (IMBH) in the core of a star cluster. For the first time, we model the effect of the cluster orbit in the Galactic potential on the observable properties of the ejected population. HVSs generated by this mechanism do not travel on radial orbits consistent with a Galactic centre origin, but rather point back to their parent cluster, thus providing observational evidence for the presence of an IMBH. We also model the ejection of high-velocity stars from the Galactic population of globular clusters, assuming that they all contain an IMBH, including the effects of the cluster's orbit and propagation of the star in the Galactic potential up to detection. We find that high-velocity stars ejected by IMBHs have distinctive distributions in velocity, Galactocentric distance and Galactic latitude, which can be used to distinguish them from runaway stars and stars ejected from the Galactic Centre.
James Petts, Alessia Gualandris (2017)Infalling Young Clusters in the Galactic Centre: implications for IMBHs and young stellar populations, In: Monthly Notices of the Royal Astronomical Society467(4)pp. 3775-3787 Oxford University Press
DOI: 10.1093/mnras/stx296
The central parsec of the Milky Way hosts two puzzlingly young stellar populations, a tight isotropic distribution of B stars around Sgr A* (the S-stars) and a disk of OB stars extending to 0.5 pc. Using a modified version of Sverre Aarseth's direct summation code NBODY6 we explore the scenario in which a young star cluster migrates to the Galactic Centre within the lifetime of the OB disk population via dynamical friction. We find that star clusters massive and dense enough to reach the central parsec form a very massive star via physical collisions on a mass segregation timescale. We follow the evolution of the merger product using the most up-to-date, yet conservative, mass loss recipes for very massive stars. Over a large range of initial conditions, we find that the very massive star expels most of its mass via a strong stellar wind, eventually collapsing to form a black hole of mass 20−400 M⊙, incapable of bringing massive stars to the Galactic Centre. No massive intermediate mass black hole can form in this scenario. The presence of a star cluster in the central 10 pc within the last 15 Myr would also leave a 2 pc ring of massive stars, which is not currently observed. Thus, we conclude that the star cluster migration model is highly unlikely to be the origin of either young population, and in-situ formation models or binary disruptions are favoured.
Elisa Bortolas, Alessia Gualandris, Massimo Dotti, Justin I. Read (2018)The influence of Massive Black Hole Binaries on the Morphology of Merger Remnants, In: Monthly Notices of the Royal Astronomical Society477(2)pp. 2310-2325 Oxford University Press (OUP)
Massive black hole (MBH) binaries, formed as a result of galaxy mergers, are expected to harden by dynamical friction and three-body stellar scatterings, until emission of gravitational waves (GWs) leads to their final coalescence. According to recent simulations, MBH binaries can efficiently harden via stellar encounters only when the host geometry is triaxial, even if only modestly, as angular momentum diffusion allows an efficient repopulation of the binary loss cone. In this paper, we carry out a suite of N-body simulations of equal-mass galaxy collisions, varying the initial orbits and density profiles for the merging galaxies and running simulations both with and without central MBHs. We find that the presence of an MBH binary in the remnant makes the system nearly oblate, aligned with the galaxy merger plane, within a radius enclosing 100 MBH masses. We never find binary hosts to be prolate on any scale. The decaying MBHs slightly enhance the tangential anisotropy in the centre of the remnant due to angular momentum injection and the slingshot ejection of stars on nearly radial orbits. This latter effect results in about 1% of the remnant stars being expelled from the galactic nucleus. Finally, we do not find any strong connection between the remnant morphology and the binary hardening rate, which depends only on the inner density slope of the remnant galaxy. Our results suggest that MBH binaries are able to coalesce within a few Gyr, even if the binary is found to partially erase the merger-induced triaxiality from the remnant.
Alessandra Mastrobuono-Battisti, Hagai B. Perets, Alessia Gualandris, Nadine Neumayer, Anna C. Sippel (2019)Star formation at the Galactic Centre: coevolution of multiple young stellar discs, In: Monthly Notices of the Royal Astronomical Society Oxford University Press
Studies of the Galactic Centre suggest that in-situ star formation may have given rise to the observed stellar population near the central supermassive black hole (SMBH). Direct evidence for a recent starburst is provided by the currently observed young stellar disc (2-7 Myr) in the central 0.5 pc of the Galaxy. This result suggests that star formation in galactic nuclei may occur close to the SMBH and produce initially flattened stellar discs. Here we explore the possible build-up and evolution of nuclear stellar clusters near SMBHs through in-situ star formation producing stellar discs similar to those observed in the Galactic Centre and other nuclei. We make use of N-body simulations to model the evolution of multiple young stellar discs, and explore the potential observable signatures imprinted by such processes. Each of the five simulated discs is evolved for 100 Myr before the next one is introduced in the system. We find that populations born at different epochs show different morphologies and kinematics. Older and presumably more metal poor populations are more relaxed and extended, while younger populations show a larger amount of rotation and flattening. We conclude that star formation in central discs can reproduce the observed properties of multiple stellar populations in galactic nuclei differing in age, metallicity and kinematic properties.
Tuan Do, Gregory David Martinez, Wolfgang Kerzendorf, Anja Feldmeier-Krause, Manuel Arca Sedda, Nadine Neumayer, Alessia Gualandris (2020)Revealing the Formation of the Milky Way Nuclear Star Cluster via Chemo-Dynamical Modeling, In: Astrophysical Journal Letters American Astronomical Society
The Milky Way nuclear star cluster (MW NSC) has been used as a template to understand the origin and evolution of galactic nuclei and the interaction of nuclear star clusters with supermassive black holes. It is the only nuclear star cluster with a supermassive black hole where we can resolve individual stars to measure their kinematics and metal abundance to reconstruct its formation history. Here, we present results of the first chemo-dynamical model of the inner 1 pc of the MW NSC using metallicity and radial velocity data from the KMOS spectrograph on the Very Large Telescope. We found evidence for two kinematically and chemically distinct components in this region. The majority of the stars belong to a previously-known super-solar metallicity component with a rotation axis perpendicular to the Galactic plane. However, we identify a new kinematically distinct sub-solar metallicity component which contains about 7% of the stars and appears to be rotating faster than the main component with a rotation axis that may be misaligned. This second component may be evidence for an infalling star cluster or remnants of a dwarf galaxy, merging with the MW NSC. These measurements show that the combination of chemical abundances with kinematics is a promising method to directly study the MW NSC's origin and evolution.
S Harfst, A Gualandris, D Merritt, R Spurzem, SP Zwart, P Berczik (2006)Performance Analysis of Direct N-Body Algorithms on Special-Purpose Supercomputers, In: New Astronomy 12, pp. 357-377
Direct-summation N-body algorithms compute the gravitational interaction between stars in an exact way and have a computational complexity of O(N^2). Performance can be greatly enhanced via the use of special-purpose accelerator boards like the GRAPE-6A. However the memory of the GRAPE boards is limited. Here, we present a performance analysis of direct N-body codes on two parallel supercomputers that incorporate special-purpose boards, allowing as many as four million particles to be integrated. Both computers employ high-speed, Infiniband interconnects to minimize communication overhead, which can otherwise become significant due to the small number of "active" particles at each time step. We find that the computation time scales well with processor number; for 2*10^6 particles, efficiencies greater than 50% and speeds in excess of 2 TFlops are reached.
A Gualandris, D Merritt (2009)Perturbations of Intermediate-mass Black Holes on Stellar Orbits in the Galactic Center, In: Astrophysical Journal 705, pp. 361-371
We study the short- and long-term effects of an intermediate mass black hole (IMBH) on the orbits of stars bound to the supermassive black hole (SMBH) at the center of the Milky Way. A regularized N-body code including post-Newtonian terms is used to carry out direct integrations of 19 stars in the S-star cluster for 10 Myr. The mass of the IMBH is assigned one of four values from 400 Msun to 4000 Msun, and its initial semi-major axis with respect to the SMBH is varied from 0.3-30 mpc, bracketing the radii at which inspiral of the IMBH is expected to stall. We consider two values for the eccentricity of the IMBH/SMBH binary, e=(0,0.7), and 12 values for the orientation of the binary's plane. Changes at the level of 1% in the orbital elements of the S-stars could occur in just a few years if the IMBH is sufficiently massive. On time scales of 1 Myr or longer, the IMBH efficiently randomizes the eccentricities and orbital inclinations of the S-stars. Kozai oscillations are observed when the IMBH lies well outside the orbits of the stars. Perturbations from the IMBH can eject stars from the cluster, producing hypervelocity stars, and can also scatter stars into the SMBH; stars with high initial eccentricities are most likely to be affected in both cases. The distribution of S-star orbital elements is significantly altered from its currently-observed form by IMBHs with masses greater than 1000 Msun if the IMBH/SMBH semi-major axis lies in the range 3-10 mpc. We use these results to further constrain the allowed parameters of an IMBH/SMBH binary at the Galactic center.
R Spurzem, I Berentzen, P Berczik, D Merritt, P Amaro-Seoane, S Harfst, A Gualandris (2008)Parallelization, special hardware and post-newtonian dynamics in direct N-Body simulations, In: Lecture Notes in Physics 760, pp. 377-389
VV Gvaramadze, A Gualandris, SP Zwart (2009)On the origin of high-velocity runaway stars, In: Monthly Notices of the Royal Astronomical Society 396, pp. 570-578
We explore the hypothesis that some high-velocity runaway stars attain their peculiar velocities in the course of exchange encounters between hard massive binaries and a very massive star (either an ordinary 50-100 Msun star or a more massive one, formed through runaway mergers of ordinary stars in the core of a young massive star cluster). In this process, one of the binary components becomes gravitationally bound to the very massive star, while the second one is ejected, sometimes with a high speed. We performed three-body scattering experiments and found that early B-type stars (the progenitors of the majority of neutron stars) can be ejected with velocities of $\gtrsim$ 200-400 km/s (typical of pulsars), while 3-4 Msun stars can attain velocities of $\gtrsim$ 300-400 km/s (typical of the bound population of halo late B-type stars). We also found that the ejected stars can occasionally attain velocities exceeding the Milky Way's escape velocity.
Why isn't temperature frame dependent?
In (non-relativistic) classical physics, if the temperature of an object is proportional to the average kinetic energy ${1 \over 2} m\overline {v^{2}}$ of its particles (or molecules), then shouldn't that temperature depend on the frame of reference - since $\overline {v^{2}}$ will be different in different frames?
(I.e. In the lab frame $K_l = {1 \over 2} m\overline {v^{2}} $, but in a frame moving with velocity $u$ relative to the lab frame, $K_u = {1 \over 2} m \overline {(v+u)^{2}}$).
$\begingroup$ Your question seems related to some of the ideas behind the Unruh effect. $\endgroup$ – Brandon Enright Dec 16 '13 at 5:39
$\begingroup$ There is an earlier instance of the question floating around. Or at least, I'd have sworn it was, but so far I can't run it down. $\endgroup$ – dmckee♦ Dec 16 '13 at 5:41
$\begingroup$ @dmckee this one? physics.stackexchange.com/q/83488 $\endgroup$ – user10851 Dec 16 '13 at 5:58
$\begingroup$ @ChrisWhite : That one asks about relativity. I'm just asking about plain pre-relativistic classical physics. $\endgroup$ – user114806 Dec 16 '13 at 6:35
$\begingroup$ If not making use of relativity then $$K_u = {1 \over 2} m \overline {(v+u)^{2}} = {1 \over 2} m (\overline{v^2} + 2 \overline {vu} + u^2 ).$$ Since $u$ is a constant speed we have $\overline{vu} = \overline{v}\,u = 0$, since the gas as a whole is stationary. This gives $$K_u = {1 \over 2} m \overline {(v+u)^{2}} = {1 \over 2} m (\overline{v^2} + u^2)$$ showing that the energy is made up of a "random bit" and a "translation bit". The temperature is only due to the random bit. Energy is always undefined to within a constant. $\endgroup$ – jim Sep 24 '16 at 10:01
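jim's decomposition in the comment above is easy to verify numerically. A minimal sketch (assuming 1-D Maxwellian velocities; the thermal width and boost speed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, u = 1.0, 100_000, 5.0
v = rng.normal(0.0, 1.0, size=N)           # random thermal velocities, zero mean

K_lab   = 0.5 * m * np.mean(v ** 2)        # ~0.5, the "random bit"
K_boost = 0.5 * m * np.mean((v + u) ** 2)  # ~0.5 + 0.5*u**2, random bit plus bulk bit
print(K_lab, K_boost - 0.5 * m * u ** 2)   # subtracting the bulk term recovers K_lab
```

Only the random part survives once the bulk translation is subtracted, and that is exactly the piece the temperature measures.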
The definition of temperature in the kinetic theory of gases emerges from the notion of pressure. Fundamentally, the temperature of a gas comes from the frequency and strength of the collisions between the molecules or atoms of a gas.
The first step considers an (elastic) impact between two particles, and writes $\Delta p = p_{i,x} - p_{f,x} = p_{i,x} - ( - p_{i,x}) = 2\,m\,v_x $ where the direction $x$ denotes the direction of the collision. This, of course, is considering that the two particles have opposing velocities before impact, which is equivalent to viewing the impact in the simplest frame possible.
This calculation is independent of frame translation, as it will add the same velocity component to both velocities, and the previous equation relies only on the difference in velocities.
The second step uses the ideal gas law to get to $T \propto \frac{1}{2}m\overline{v^{2}}$.
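Spelled out, that second step is the standard kinetic-theory argument (a textbook derivation, not specific to this answer): for an ideal gas of number density $n = N/V$,

$$P = \tfrac{1}{3}\, n\, m\, \overline{v^{2}}, \qquad PV = NkT \quad\Rightarrow\quad \tfrac{1}{2} m \overline{v^{2}} = \tfrac{3}{2} kT,$$

where $\overline{v^{2}}$ is the mean square of the random velocities measured in the gas's rest frame -- which is precisely why no frame dependence enters.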
For more detail you can check this Wikipedia article.
So the invariance with frame translation of the temperature is due to the invariance of pressure, which only considers relative velocities.
How can an infinite universe expand?
I understand the expansion of the universe as actually an increase in the ratio of space to matter. Is this a correct understanding? Otherwise, I don't understand how an infinite structure can expand.
$\begingroup$ Infinities come in different sizes: en.wikipedia.org/wiki/Aleph_number $\endgroup$ – Wayfaring Stranger Nov 21 '15 at 18:06
$\begingroup$ @WayfaringStranger, that's true, but irrelevant in this context $\endgroup$ – James K Nov 21 '15 at 22:45
$\begingroup$ Possible duplicate of How can the universe be infinite? $\endgroup$ – FJC Nov 23 '15 at 14:26
$\begingroup$ the universe is not infinite $\endgroup$ – RBoschini Dec 29 '15 at 18:47
$\begingroup$ Observable universe is not infinite. Hubble's law gets things receding faster than light, unobservable, at 40 some billion light years right now. We tend to assume that an observer on that last planet receding at just under c sees a universe that looks just like ours, but with us receding at just under c. That assumption will get you close to an infinite universe. $\endgroup$ – Wayfaring Stranger Jul 5 '18 at 18:02
Expansion means that distances are increasing as a function of time. Say the distance between two galactic clusters is $D$; then in an expanding Universe the distance is governed by a strictly increasing function of time $a(t)$, called the scale factor, where
$$D=a(t)D_0$$
where $D_0$ is the distance at the present time and by definition $a(t_{0})=1$.
Cosmology assumes that the Universe is on large scales the same everywhere (homogeneous) and the same in all directions (isotropic) so the above applies to all distances above a certain scale. The scale factor $a(t)$ can be found from the Friedmann equations and initial conditions.
Expansion is possible in Universes of both finite and infinite spatial extent.
As the volume of a (large enough) region of space increases in proportion to $[a(t)]^3$, but the amount of matter remains constant, the matter density changes in proportion to $[a(t)]^{-3}$. Expansion however also decreases the kinetic energy of its contents, so the energy density decreases by a greater factor if the contents have kinetic energy.
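To make that last sentence concrete (these are the standard FLRW scalings, not something derived in this answer): the peculiar momentum of a freely moving particle decays as $p \propto a(t)^{-1}$, so pressureless matter and radiation dilute at different rates,

$$\rho_{\text{matter}} \propto [a(t)]^{-3}, \qquad \rho_{\text{radiation}} \propto [a(t)]^{-4},$$

radiation picking up the extra factor of $a(t)^{-1}$ because each quantum's energy redshifts away on top of the dilution of the number density.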
$\begingroup$ But if the universe is infinite, how can it expand? Saying it just can does not answer the question. $\endgroup$ – John Duffield Nov 22 '15 at 11:09
$\begingroup$ Before I made that statement I explained how expansion is the increase of distances between (comoving) objects. Clearly there is no dependence on the space being finite for distances to increase. $\endgroup$ – John Davis Nov 22 '15 at 15:06
$\begingroup$ @JohnDuffield mentalfloss.com/article/78583/infinite-hotel-paradox $\endgroup$ – Florin Andrei Jul 5 '18 at 17:43
$\begingroup$ @Florin Andrei : IMHO it says nothing useful about the real universe. See my answer below for my own thoughts on the matter. $\endgroup$ – John Duffield Jul 5 '18 at 19:40
$\begingroup$ "Expansion is possible in Universes of both finite and infinite spatial extent". This implies the universe to be held within a space. So, space is not part of the universe, but the universe is just one part of space? Seems quite debatable. $\endgroup$ – RodolfoAP Dec 7 '19 at 7:44
There is absolutely no contradiction between being infinite and being able to expand (in contrast to what your question seems to suggest). This simple fact is not confined to the actual universe we are living in.
As an illustration, take the infinite 'universe' of the natural numbers $i=0\dots\infty$. Now consider the sets $2i$ and $2i+1$, each just as infinite as the natural numbers, but stretched. Now combine those two sets to get an expanded 'universe' and you obtain the natural numbers again.
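A finite prefix of this construction can be checked mechanically; the following sketch is purely schematic, since no program can enumerate an infinite set:

```python
naturals = range(10)
evens = [2 * i for i in naturals]       # the naturals "stretched" by a factor of 2
odds  = [2 * i + 1 for i in naturals]   # a second stretched copy, offset by 1
combined = sorted(evens + odds)         # combining the two stretched copies...
print(combined == list(range(20)))      # True: ...recovers the naturals again
```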
$\begingroup$ a) If we make an analogy, this is equivalent to a Ponzi scheme. It works in theory. But considering nature's limitations, seems quite dubious. See Kant's first antinomy. The infinite attribute of the universe would just be a fallacy of perception. b) Extrapolating rules at different scales seems naive. What you are stating here is that galaxies are expanding, not the universe. c) AFAIK universe expansion implied "creating new space". Quite far from this response. d) This kind of universe expansion is equivalent to measurements contraction. $\endgroup$ – RodolfoAP Dec 7 '19 at 7:53
$\begingroup$ I accept this explanation, I'd like to add an intuitive example. How many numbers divide by four? inf. How many numbers divide by 2? Also inf, but this inf is two times larger. So I guess it is true, the universe started infinitely large, and it keeps growing. $\endgroup$ – Yuval Harpaz Feb 13 '20 at 17:50
How do you describe how far away two points are? You have to have some way of describing the concept of distance.
When we say that the universe is expanding, what we really mean is that distances inside it are increasing.
The idea that the expanding universe is some sort of 3D bubble or balloon that can be seen to expand from outside isn't meaningful, as there is no outside.
Perhaps a more helpful way to think of it is to say that the concept of distance is a property of the universe, and that property is changing over time.
$\begingroup$ "The concept of distance is a property of the universe": such was the view before the XVII century. Modern philosophy considers space and time to be subjective features. See plato.stanford.edu/entries/kant-spacetime $\endgroup$ – RodolfoAP Dec 7 '19 at 8:00
If the universe is finite in size, something (matter) would have to be present to fill in the empty space surrounding the entire universe, or else it would be just more empty space. Let me explain what I mean. A room is finite in size because at a certain point the walls, floor, and ceiling stop you. So if a universe is finite, it would have to have some equivalent to walls, to keep it the same size and finite and/or to constitute an end. If space ends at some point because nothing can go any further - as if we were inside an incomprehensibly large hollow sphere, traveled to the end of space, and found the end of the universe - something would still be on the other side of whatever that end was. Infinite is not a hard concept at all and is really the only possibility if you think about it. Infinite can't expand. It's like a child saying "infinity plus one". Only now it's college-educated grown men, and all the logic in the world won't stop them from trying to make it make sense. The universe is definitely infinite; anyone can observe that by looking up at the night sky. We don't live inside of a cosmic domicile with limited space. We live, in infinite empty space, on a finite-size spherical mass of a finite amount of matter: an infinite universe of empty space that has stars and planets, moons and whatever else. But the universe itself is already infinite and therefore can't get any bigger. It's empty space, in every direction, forever (with stars and planets here and there). Why is that so hard for science to accept?
$\begingroup$ "If you he universe is finite in size, something(matter) would have to be present to to fill in the empty space surrounding the entire entire universe or else it would be just more empty space." -- this statement is completely mistaken. So is your reasoning. There is absolutely no problem with a spatially finite universe that has no spatial boundary. $\endgroup$ – Stan Liou Dec 20 '15 at 19:03
$\begingroup$ A hackneyed analogy perhaps, but you can compare the 3D volume of the Universe to the 2D surface of some arbitrarily-shaped body, e.g. a ball. The surface area of this object is finite, yet it doesn't have an edge (you're not allowed to fly up / dig yourself down, that would be cheating since you'd leave the 2D world). An infinite universe would be comparable to a 2D surface of an infinite object like an infinite table top, or an infinite saddle. $\endgroup$ – pela Dec 21 '15 at 14:13
Don't Forget This Part!! The other answers simply explain that an infinite universe can continuously expand because the increase in size is filled with space, not new matter. They confirm your explanation. However, the answer to "how can an infinite universe expand?" requires an explanation of "empty space" and mention of the fact that the universe's expansion is also accelerating.
Empty Space gives physicists headaches. Is there even such a thing as empty space? Or just space filled with matter we cannot yet detect? More on this below.
Accelerating Expansion: The universe is not only expanding, it's speeding up. That keeps some very smart people awake at night. "How can an infinite universe expand?" Well the big bang would explain it... if it were slowing down. It's not. So... it has to be pushed or pulled, right?
Why Does That Happen?
Einstein's theories first predicted that the gravity of the universe would cause the universe to collapse in on itself, so he introduced something called the "cosmological constant" in his theory of relativity. That constant enabled the equations to predict a static universe.
Wiki excerpt:
[The cosmological constant] was originally introduced by Albert Einstein in 1917 as an addition to his theory of general relativity to "hold back gravity" and achieve a static universe, which was the accepted view at the time. Einstein abandoned the concept after Hubble's 1929 discovery that all galaxies... are moving away from each other, implying an overall expanding universe.
Modern Understanding
Although Einstein would later call that "cosmological constant" his "greatest blunder", it turned out to be correct. However, he also thought the value would be used to explain why things are static. It turned out the value would explain why things are continually expanding. But what is that constant, besides a number in an equation? Today most physicists seem to think it's dark matter or dark energy. It's a very new science but it is highly researched.
Wiki excerpt from 'Accelerating Universe':
Different theories of dark energy suggest different values of w, with w < -1/3 for cosmic acceleration... The simplest explanation for dark energy is that it is a cosmological constant... this leads to the Lambda-CDM model, which has generally been known as the Standard Model of Cosmology from 2003 through the present...
Wiki excerpt from Dark Energy:
around 70% of the mass–energy density of the universe can be attributed to dark energy. While dark energy is poorly understood at a fundamental level, the main required properties of dark energy are that it functions as a type of anti-gravity, it dilutes much more slowly than matter as the universe expands... The cosmological constant is the simplest possible form of dark energy since it is constant in both space and time
Answering Your Question
So I think the real answer to your question isn't that an infinite universe is possible simply because the amount of empty space increases. Actually, that might not even be possible. There may be no such thing as empty space. Empty space might just be a type of matter we can't see or detect yet.
Dark matter is a hypothetical kind of matter that cannot be seen with telescopes but accounts for most of the matter in the universe. The existence and properties of dark matter are inferred from its gravitational effects on visible matter, on radiation, and on the large-scale structure of the universe. Dark matter has not been detected directly, making it one of the greatest mysteries in modern astrophysics.
To understand the infinite expansion of the universe, ask first how such a thing could be possible. Ask why it expands infinitely instead of the mutual gravity of all the galaxies slowing each other down and causing a collapse.
$\begingroup$ "Answering your question: the answer to your question isn't...". Synthesized answer: the universe is expanding, although we don't understand space expansion because we don't understand space. $\endgroup$ – RodolfoAP Dec 7 '19 at 8:15
Think of the universe as a birthday cake. If you had a cake that was a million light years across, you could fit at least a hundred thousand candles on it, right? Probably many more. But just think how many candles you could fit on a cake that was infinite in size - millions, if not billions. But if you put this cake back in the oven and bake a smaller cake onto the side of it, then you could say you've "expanded the cake", despite it being infinite in size. In this way, you could keep adding to your cake until it truly was infinite.
$\begingroup$ How would you put an infinite cake in an oven? $\endgroup$ – pela Jul 5 '18 at 17:38
$\begingroup$ Have to use a Big oven. $\endgroup$ – Wayfaring Stranger Jul 5 '18 at 17:57
$\begingroup$ Actually, re-baking a cake extracts its humidity, and drastically reduces its size. So, the universe is a growing cake reducing its size. $\endgroup$ – RodolfoAP Dec 7 '19 at 8:17
I understand the expansion of the universe as actually an increase in the ratio of space to matter. Is this a correct understanding?
It isn't wrong. The ratio is increasing. But it isn't a "correct understanding". It's merely an observation of one of the results of the expansion of space.
If the universe is infinite how can it expand?
I don't know. Nor do I know how big bang cosmology can be reconciled to an infinite universe. If you look around on the internet, you can find articles like this which say this:
"The linear dimensions of the early universe increases during this period of a tiny fraction of a second by a factor of at least 10$^{26}$ to around 10 centimetres (about the size of a grapefruit)".
However in 2013 results from the WMAP mission appeared to confirm that space is flat. Then a non-sequitur crept in. See this article and pay careful attention to this:
"We now know (as of 2013) that the universe is flat with only a 0.4% margin of error. This suggests that the Universe is infinite in extent; however, since the Universe has a finite age, we can only observe a finite volume of the Universe. All we can truly conclude is that the Universe is much larger than the volume we can directly observe."
That's a massive error. It absolutely doesn't suggest that the universe is infinite in extent. Or that the Universe is much larger than the volume we can directly observe. But this myth has legs, and people repeat it ad-infinitum, even though they can't explain how it fits in with Big Bang cosmology. What you tend to hear is that the observable universe was the size of a grapefruit, but it absolutely doesn't satisfy. Moreover there's a dreadful flaw lurking in the shadows. Take a look at the stress-energy-momentum tensor, and note the energy-pressure diagonal. A gravitational field is something like a spatial pressure gradient, and you can think of space as having an innate "pressure". So you can reason that the universe must expand. As to why Einstein didn't, I just don't know. But anyway, for an analogy, squeeze a stress-ball down in your fist, and let go. It expands because of the pressure. However if that material was infinite in extent, the pressure is counter-balanced at all locations. So it can't expand. In similar vein, in my opinion, an infinite universe can't expand.
People claim the universe must be infinite because of the cosmological principle. But this is merely an assumption. There's an assumption that the universe is homogeneous and isotropic, but this isn't fact. You cannot use it to make sweeping claims about an infinite universe that was always infinite. For all we know some observer 50 billion light years away might be looking up at the night sky wondering why half of it is black. Or a mirror-image of the other. Or some kind of edge.
It is said that in days gone by, people could not conceive of a world that was curved. They could only conceive of a world with an edge. Nowadays I rather fancy that there are some people who cannot conceive of a world that is not curved. They cannot conceive of a world with an edge.
See The Foundation of the General Theory of Relativity: "the energy of the gravitational field shall act gravitatively in the same way as any other kind of energy". Energy is the source of the stress-energy tensor. Matter is only a source because of the energy-content. Also see Inhomogeneous and interacting vacuum energy which refers to spatial energy. An interesting read is the article Universe 156 billion light-years wide featuring Neil Cornish. This isn't entirely accurate, but the compound interest and the hall of mirrors concepts are of interest. As for the non-sequitur, see this interview with Joseph Silk:
"We do not know whether the Universe is finite or not."
I hope nobody will argue with that. Reading on:
"To give you an example, imagine the geometry of the Universe in two dimensions as a plane. It is flat, and a plane is normally infinite. But you can take a sheet of paper [an 'infinite' sheet of paper] and you can roll it up and make a cylinder, and you can roll the cylinder again and make a torus [like the shape of a doughnut]. The surface of the torus is also spatially flat, but it is finite".
This is akin to the old Asteroids game. But the Planck mission found no evidence of any torus. Reading on further:
"So you have two possibilities for a flat Universe: one infinite, like a plane, and one finite, like a torus, which is also flat."
I dispute that. There is a third possibility. A flat finite universe with no intrinsic curvature. If anybody can cite some reliable sources that support the assertion that a flat universe must be infinite, I'd like to see them.
$\begingroup$ Pressure in the stress-energy tensor of matter actually decelerates the expansion. If you like, this is because the pressure is itself a source of gravity. General relativity does not deal with space, matter, pressure etc in the exactly same way as Newtonian physics, which is why your assumptions fail. The expansion of the Universe should be seen to be the result of initial conditions, though in later times negative-pressure dark energy is thought to have played a role in accelerating the expansion. $\endgroup$ – John Davis Nov 21 '15 at 18:21
$\begingroup$ @John Davis : the stress-energy-momentum tensor "describes the density and flux of energy and momentum in spacetime". It's a mistake to say it's the stress-energy tensor of matter. Einstein described a gravitational field as space that was "neither homogeneous nor isotropic". Given that the FLRW assumption is correct and the universe is homogeneous on the large scale, he should have known there's no overall gravitational field. Why he didn't "ditch the dust", I shall never know. $\endgroup$ – John Duffield Nov 22 '15 at 11:07
$\begingroup$ Matter is the source of the stress-energy tensor in this instance, so it is correct and common usage. That is not quite what Einstein said and it is abundantly clear that general relativity predicts expansion/contraction in isotropic and homogeneous spaces, except when either space is empty (Minkowski space) or, unstably, in the presence of a finely-tuned cosmological constant (Einstein static Universe). So there's no point quibbling about what Einstein said or what he meant, what is important is what GR says and what the observational evidence says. $\endgroup$ – John Davis Nov 22 '15 at 15:15
$\begingroup$ @John Davis : see my notes above. Energy causes gravity. Also note the dark matter pie. Only circa 4% of the energy in the universe is thought to be (ordinary) matter. That doesn't justify your claim that matter is the source of the stress-energy tensor in this instance. $\endgroup$ – John Duffield Nov 23 '15 at 22:17
$\begingroup$ I did not say the Universe was contracting, only that given the conditions above it must be contracting or expanding. The physical evidence shows it is expanding. The reason you gave for an infinite Universe to contradict expansion is from trying to apply Newtonian physics to general relativity and therefore is deeply flawed. $\endgroup$ – John Davis Nov 24 '15 at 1:53
Shapes for an infinite animal?
This creature exists in an infinite world, a flat landscape that extends ad infinitum, where light rains from the sky from infinity during recurrent day-night cycles, and, similarly, the ground goes down continuously.
All kinds of critters populate the cosmos including the skies and the underground realm. This bizarre world has existed for an infinite time and for this reason I want (at least) one creature to have inhabited it for an infinite time. And that's not all: this particular being is also physically infinite (its body has no end); obviously I don't want it to take up all the space, but only a part of it.
Specifically, I was thinking of giving it a shape similar to something like Gabriel's Horn, so it would leave a large area available; however, this would basically cut most of the surface in half, and on top of that most of its body would be vulnerable to attack since it's so thin. For this reason I would like to avoid "infinitely small" bodies.
What infinite shape can I give my organism so that it wouldn't be too vulnerable but at the same time leave room for people to move about relatively easily, without cutting off parts of the world that are close by from each other? Also, while its body is infinite it should have only one head.
I am going to be a bit lax with the physics of this world to allow for some basic functionalities, but I want to put special attention to this aspect.
EDIT: Thanks a lot to everyone for the inputs, I went with the answer that better fits the requirements, but I will probably incorporate elements from other interesting answers
creature-design mythical-creatures alien-geometry
DevelopingDeveloper
SilverCookies
$\begingroup$ A creature that is physically infinite in an infinite universe...but isn't actually infinite? I love it! $\endgroup$ – EveryBitHelps Apr 24 '18 at 14:55
$\begingroup$ Reminder to close-voters: The problem cannot be fixed if the OP is not made aware of it. $\endgroup$ – Frostfyre Apr 24 '18 at 17:17
$\begingroup$ That being said, I fail to see how one answer can be judged as "better," "more complete," or "more applicable" than another. A creature with long legs is just as viable as a creature that floats, for example. So either I fail to understand the question in its entirety, or it's primarily opinion-based. $\endgroup$ – Frostfyre Apr 24 '18 at 17:20
$\begingroup$ While I see @Frostfyre's point, I disagree that this question deserved to be closed. The OP provided criteria (not too vulnerable, room for others, can't cut off parts of the world, one head) and the answers generally reflect this criteria rather than being all over the map. If an OP provides all the criteria to guarantee only one right answer, then 99% of the time the OP has the answer already. I'm voting to reopen. $\endgroup$ – JBH Apr 24 '18 at 17:37
$\begingroup$ @JBH I've opened a meta thread on this question. $\endgroup$ – Frostfyre Apr 24 '18 at 21:12
The problem can be restated as follows: find an infinite plane non-intersecting curve which does not partition the plane.
That's easy. Two immediate examples, described in polar coordinates:
$r = \exp(\frac{1}{0.01\theta + 1})$ (the red curve in the illustration), and
$r = 1+\left(1-\exp(-0.05\theta)\right)$ (the green curve in the illustration).
Two infinite yet bounded curves which do not partition the plane: r = exp(1/(0.01t + 1)) (shown in red) and r = 1 + (1 - exp(-0.05t)) (shown in green). Made with the graphing calculator at Desmos. Own work, available on Flickr under the CC-BY license.
Converting the infinitely thin lines into lines with finite widths is left as an exercise. (Hint: make the width inversely proportional with $\theta$.)
Note that the red curve grows inwards, and r will always be greater than 1, while the green curve grows outwards and r will always be less than 2. No matter how long the curves get they will never partition the plane in two disjunct regions.
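For anyone who wants to sanity-check those bounds numerically, here is a minimal Python sketch (assuming NumPy is available; the curve definitions are copied straight from the formulas above):

```python
import numpy as np

theta = np.linspace(0, 50 * np.pi, 100_000)  # 25 full turns of each spiral

r_red = np.exp(1 / (0.01 * theta + 1))       # grows inwards, approaches r = 1
r_green = 1 + (1 - np.exp(-0.05 * theta))    # grows outwards, approaches r = 2

# The red curve stays strictly above r = 1 and the green strictly below r = 2,
# so neither spiral can ever close a loop and cut the plane in two.
assert np.all(r_red > 1) and np.all(r_green < 2)
print(r_red.min(), r_green.max())            # e.g. ~1.476 and ~1.9996
```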
Also note that I'm taking "an infinite time" to mean "a really long time". Things become truly weird when infinite values are actually allowed. In particular, any event which is not impossible becomes necessary...
AlexP
$\begingroup$ This is really cool, a spiral doesn't part the plane and it might have a "head" at the center $\endgroup$ – SilverCookies Apr 24 '18 at 16:28
$\begingroup$ " In particular, any event which is not impossible becomes necessary...". I don't think it is true. Imagine that every second, you are picking one real number at random (from range 0 to 1 for simplicity). Picking 0.5 exactly is possible, but it is far from being sure, even taking infinite amount of time. I think that it has zero chances of happening in infinite amount of time, to be exact, despite being possible. Continuum >> aleph zero. $\endgroup$ – Artur Biesiadowski Apr 25 '18 at 11:58
$\begingroup$ @ArturBiesiadowski: The probability of picking exactly 0.5 is zero, so the event is almost impossible. But in an infinite amount of time, any event which has finite non-zero probability, no matter how small, becomes certain. For example, the probability of a species evolving which looks just like humans, and for an individual to be born who looks just like Shakespeare, and who speaks early modern English, and for that individual to write a play which is word for word identical to Hamlet is vanishingly small, yet finite: so in an infinite amount of time it will certainly happen. $\endgroup$ – AlexP Apr 25 '18 at 12:25
$\begingroup$ @ArturBiesiadowski I think you may be under appreciating the true scale of infinity. $\endgroup$ – Smeato Apr 25 '18 at 12:42
$\begingroup$ @ArturBiesiadowski regarding 2: this isn't relevant in an infinite-time scenario, because the external factors are also (eventually) permutated such that the vanishingly improbable is allowed to becomes certain. If that's impossible, the dependent improbable occurrence is actually impossible, and we've just calculated the probabilities wrong. $\endgroup$ – Morgen Apr 25 '18 at 14:09
You specify "creature", but would a fungus fit your requirements? Taking inspiration from Armillaria ostoyae, one example of which is the largest living organism, covering 3.4 square miles of the Malheur National Forest in Oregon (aka the Humongous Fungus). Despite its size, as it is a soil organism consisting largely of microscopic filaments interacting with plant roots it is mostly invisible, so even if it spread out over the whole world it would not get in the way. Its anatomy of largely self-sufficient parts would seem amenable to infinite scaling both in time and space. While this fungi is considered parasitic, it would be easy to imagine this being commensal or symbiotic with the plants of your world.
$\begingroup$ Brilliant. A fungus is certainly closer to an animal than it is to a plant. This is certainly the most suitable answer thus far. Welcome to Worldbuilding! $\endgroup$ – Samuel Apr 24 '18 at 15:57
$\begingroup$ That's a nice idea, but I was thinking more like an animal $\endgroup$ – SilverCookies Apr 24 '18 at 16:27
$\begingroup$ Corals are animals. It could be a coral or something of that sort with some sort of feeding / digestive structure that expands infinitely in all directions through the ground. Posting that as a suggestion here rather than my own answer since I don't feel that my idea is significantly different from this answer, the only difference is the kingdom that the organism belongs to. $\endgroup$ – Tophandour Apr 24 '18 at 16:39
$\begingroup$ Although not an animal, Pando is similar. $\endgroup$ – PyRulez Apr 25 '18 at 0:26
$\begingroup$ a fungus is about the only thing that will work since an infinite animal would need infinite food, which it could never eat with a finite mouth. Fungus can eat with its entire body so it does not have this issue. $\endgroup$ – John Apr 25 '18 at 2:01
It might be infinitely tall, just like a tree (or giraffe) that simply doesn't end. Ever.
A snake-like being would work also, provided that it either flies/floats/burrows or is flat enough that it is not a barrier to other terrestrial creatures.
Surpriser
$\begingroup$ +1 For the snake-like creature. I picture a flying dragon-like snake that has a head but maybe no tail. $\endgroup$ – David K Apr 24 '18 at 19:39
$\begingroup$ I instantly thought it must be Jörmungandr $\endgroup$ – MParm Apr 25 '18 at 3:52
$\begingroup$ I love the idea of a snake maybe a meter thick that has a head, but the other side goes forever. It's been around for ever and legend says it can answer any question. People go on pilgrimages to find the head. They follow the body, never knowing how close they are to the head, or if they are going endlessly in the wrong direction. $\endgroup$ – Cuagau Apr 25 '18 at 8:37
$\begingroup$ The creature can indeed answer any question but unfortunately it only speaks a language where any answer takes infinitely long to pronounce. $\endgroup$ – Daron Apr 25 '18 at 13:23
$\begingroup$ There are legends that it grows constantly and the world will end if it ever encounters its tail… Snake, the mythos. $\endgroup$ – StarWeaver Apr 25 '18 at 18:25
There are some real-life species that do a decent impression of this already, namely fungal mycelium and aspen forests.
Your criteria, as I understand them, are as follows:
1. Infinite spatial extent
2. Infinite age
3. Does not blanket the entire surface of the infinite world
4. Does not impede movement of humans over/under/through the space occupied by it too much
5. Is not particularly vulnerable to being chopped into pieces
6. Has exactly one head
#3 and #4 can be addressed by having your creature exist mostly underground (or mostly in the air, but underground is easier), and #5 by having it take the form of a vast network. Like the aforementioned fungi and aspens. These organisms can survive large chunks of themselves being destroyed by virtue of having many more redundant parts all around. And in your case, if #1 is fulfilled, there is an infinite area of land infested with this organism. Any finite amount of destruction would barely be noticeable to the organism as a whole.
#6 is probably the hardest criterion to fulfill. Any of the functions typically attributed to animal heads would be better served by a collection of nodes spread throughout the network, rather than a central node in one location. A singular mouth could never take in the infinitude of nutrients the organism would need to sustain itself. A singular brain could never respond to stimuli infinitely far away. If cutting off the head could kill the entire organism, it would never live to be infinitely old and fulfill #2. And if the world is flat and uniform, why should any one location have the head and not another?
Perhaps the organism has a fractal network of nodes, each with its own brain. The smallest nodes directly control the organism's behaviors in a certain area. Larger nodes govern larger areas, delegating micromanaging those areas to the smaller nodes therein. Each node communicates with its neighbors and reports to the nearest node of the next size up. At each higher tier in the hierarchy, the number of nodes decreases, while the distance between them increases. In the limit, the infinity-th tier will contain one single node, infinitely far away from everything, which could be considered the "head". But for all practical purposes, it may as well not exist.
Also, I should point out that an infinite space can contain multiple infinite, non-intersecting volumes. In fact, your first paragraph contains two: the sky, and the ground. Each is infinite in extent, but only takes up half of the space in your world. In fact, an infinitely long pipe running in a straight line across the ground has infinite volume, but only covers an infinitesimal fraction of the surface of an infinite world. Just because your creature has an infinite volume doesn't mean it has to take up the entire volume of the world. There can be plenty of space around it.
Update: If you want something a bit more mobile than a plant or a fungus, you could have it grow various sensory organs (e.g. eyes) and prehensile limbs (e.g. tentacles) that respectively inform and are controlled by the nearest node. Which... could give it a seriously Cthulu-like appearance. If that's not quite the aesthetic you're going for, and you want it to be able to communicate with humans more easily, you could have it grow humanoid "avatar bodies" that are tethered to the main network for nutrition and communication with nearby nodes, but have their own brains and are able to act at least somewhat independently.
Someone Else 37
$\begingroup$ +1 This answer addresses all of my concerns with the question. One added point about heads: if the "main head" is lost, one of the "lesser heads" could be promoted to be the new "main head." (This might not happen very quickly, of course, due to signal propagation time.) $\endgroup$ – DLosc Apr 24 '18 at 17:32
$\begingroup$ @TylerSigi Fungal mycelium and aspen forests, which are linked a couple paragraphs down. I guess I could have made that more obvious. Fixing. $\endgroup$ – Someone Else 37 Apr 25 '18 at 2:54
$\begingroup$ @DLosc But, the beauty of the infinite-fractal solution is that there's no need for the "main head" or any other nodes in the infinitely-high tiers to actually exist. No matter how high you go up the chain of command, you'll never get past the finite tiers. And if you happened to find yourself a finite distance away from an infinite-tier node, it will be entirely irrelevant to the story- assuming it doesn't deign to communicate with nodes infinitely many tiers below it, it will only interact with nodes infinitely far away, and thus any message will take infinite time to reach its destination. $\endgroup$ – Someone Else 37 Apr 25 '18 at 3:08
$\begingroup$ @DLosc "The Head" thus may as well be considered religion or folklore, rather than something that can actually have a direct impact on the story. $\endgroup$ – Someone Else 37 Apr 25 '18 at 3:09
$\begingroup$ What if the head is infinitely large, and therefore takes in infinite nutrients? $\endgroup$ – PyRulez Apr 25 '18 at 18:54
Could have a fractal outline and that would be an infinite surface within a finite volume. A Mandelbrot fractal looks as if it had a head (and a posterior as well) so that may serve your purpose. https://en.wikipedia.org/wiki/Mandelbrot_set
Maybe this is not what you mean by infinite.
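As an aside, membership in the Mandelbrot set is easy to probe numerically with the standard escape-time test; here is a minimal Python sketch (the point c = -1 sits inside the set's "head" bulb, while c = 0.5 escapes):

```python
def in_mandelbrot(c: complex, max_iter: int = 200) -> bool:
    """Escape-time test: c is (probably) in the set if the orbit stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit is guaranteed to escape
            return False
    return True

print(in_mandelbrot(-1 + 0j))   # True  -- the period-2 "head" bulb
print(in_mandelbrot(0.5 + 0j))  # False -- escapes after a few iterations
```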
Real Subtle
$\begingroup$ The Mandelbrot set is definitely infinite and it has a "head" however since it goes infinitely small, it would look as a de facto finite thing right? $\endgroup$ – SilverCookies Apr 24 '18 at 14:06
$\begingroup$ Could be an issue if this world has quantised space. In that scenario the fractal can only occupy finite space. $\endgroup$ – Joe Bloggs Apr 24 '18 at 14:33
$\begingroup$ In Terry Pratchett's Discworld series, there's a butterfly that has ragged edges. Due to the fractal nature of the universe, this has become the "Quantum Weather Butterfly" (for the "a butterfly flaps its wings and creates a tornado." ) This is actually true of this tiny little butterfly. Perhaps ragged edges? $\endgroup$ – FoxElemental Apr 24 '18 at 16:41
$\begingroup$ As you're saying: You can only fit an infinite object of N-1 dimensions in a finite space of N dimensions. Hence, the creature may be an infinitely long line (zero width) in a limited area of the 2D world, but it cannot be 2D. Although... there are fractal dimensions... $\endgroup$ – JimmyB Apr 24 '18 at 19:07
$\begingroup$ @SilverCookies The outline of a 2 dimensional fractal is infinitely long. One of the consequences of this is that if you treat coastlines as actual fractals, instead of close approximations, every coastline has the same length. Your creature could reproduce smaller infinite creatures with less area but the same infinite boundary (just like the Mandlebrot set has miniature Mandlebrot sets embedded in it). The skin can expand to encompass any area you wish as it eats and grows. $\endgroup$ – CJ Dennis Apr 25 '18 at 5:53
The first thing that came to my mind was an infinitely long giant centipede. It can be tall enough to let other things pass underneath it, but they would have to be wary of its constantly moving legs.
$\begingroup$ You mean an infinipede? $\endgroup$ – PyRulez Apr 25 '18 at 1:09
$\begingroup$ Or a googolpede? A kajilliopede? $\endgroup$ – DSKekaha Apr 26 '18 at 16:14
Your creature is infinite and yet doesn't take up all the space in an infinite universe. Because you yourself specified that you want your creature to have a head, and therefore an end, I suggest that your particular creature is actually finite.
Your creature is a Möbius strip with some mathematical knots added for good measure. Being a Möbius strip design, it goes on infinitely in a finite space. Over the lifespan of the universe, your finite creature can grow infinitely. You can have any number of these ancient creatures in your universe.
Complexity can be added to the creature design with a combination of multiple knots. A mathematical knot is joined at both ends and cannot be undone. The simplest knot is a circle, or unknot. There are many different complex knots and new ones are still being calculated (similar to values of pi). I like to think of a Möbius strip as essentially a twisted unknot, although I do not know if this would hold up to mathematical scrutiny. Some fractals can actually be considered "wild knots". A particularly complex knotted area can act as your creature's head, brain and any other limbs you may wish.
Your MobiusBody doesn't even have to be solid plane but can be holey, allowing other creatures to travel through without breaking the creature up into separate pieces. To allow the creature to defend itself, it can move the complex KnotHead around the MobiusBody. I've attached an image with the first number of knots, I leave the rest up to your imagination.
source: https://www.shapeways.com/product/8A7NG95NH/mobius-strip-voronoi-5-frac12-in
source: http://irma.math.unistra.fr/~loday/Noeuds_table.jpg
EveryBitHelps
$\begingroup$ The creature doesn't have to be finite; a line on a plane is infinite, and the plane itself is infinite, but the line doesn't take up the entire plane. $\endgroup$ – Hearth Apr 24 '18 at 15:42
$\begingroup$ @Felthry, true in theory. However, the creature doesn't have to be finite...but should have a single head on one end? To me, that implies the OP does indeed intend to find or describe the finite end/head of this infinite creature. Hence why they are asking for shapes other than a 'line' that doesn't involve us playing an infinite game of 'snake' :) $\endgroup$ – EveryBitHelps Apr 24 '18 at 16:15
$\begingroup$ How is this "infinite" while for example a torus isn't? $\endgroup$ – pipe Apr 24 '18 at 17:34
$\begingroup$ @pipe. as far as I am aware, a torus is defined as a tubular circle and has to maintain those circular dimensions. A Mobius strip and knots are circular but are not restricted to circle equations. They can be as squiggly as you wish. So while I specified that the Mobius strip was actually 'finite' it has the ability to have 'infinitely' more surface area than the torus. I think my edit explained my thought process.... $\endgroup$ – EveryBitHelps Apr 24 '18 at 18:11
$\begingroup$ In mathematics, there are three things that match the everyday description of "line". An interval has a beginning and an end. A ray has a beginning but no end. It extends to infinity in one direction. A line has no beginning or end. It extends to infinity in both directions. The OP is asking for a ray-like creature with a head but no tail or an infinitely long tail. $\endgroup$ – CJ Dennis Apr 27 '18 at 5:41
Immanent being.
https://en.wikipedia.org/wiki/Immanence
Immanence refers to those philosophical and metaphysical theories of divine presence in which the divine encompasses or is manifested in the material world.
This is a trippy concept. If you want a big snake that is infinitely long, or has infinite number of teeth this will not work. But your world already deals with the infinite. An infinite being would occupy the entirety of it. Immanence is a good companion concept for what is shaping up to be high concept fantasy.
Usually immanence is considered in the context of God - a conception of God is that God exists throughout the entirety of creation; not transcending creation but occupying creation. There is no reason your creature could not be like this. It exists throughout the entirety of what is. It is not the same as the world but it is inexorably intertwined with it.
You might or might not be able to sneak up on such a being. It is everywhere at once, but a cat can sneak up on my foot and I might not notice. You might or might not be able to attack such a being. An attack cannot drive it out of a place that it occupies because that is impossible; infinite-1 = infinite. But it might change the character of the being that occupies that place.
You might be able to kill such a being. The death of it will probably change the world it occupies. Remember: he's a god; it will take more than one shot.
Willk
$\begingroup$ An infinite being does not necessarily take up the entirety of an infinite world... the two infinities are not necessarily equivalent. Otherwise, a really cool idea. $\endgroup$ – Bemisawa Apr 25 '18 at 16:57
This world could be filled with fractal critters. They can be as large or as small as you want, as they just have infinite detail: the closer you get to the critter, the more of it you can see.
Perhaps Menger sponges inhabit the oceans? They can be made into some cool carpets. Or perhaps mollusks with Apollonian gasket shells? An Ikeda map jellyfish? Mandelbrot manta rays?
I know you only asked for critters, but why stop there? An infinite world could also have fields of Mandelbulb flowers, crops of true Romanesco broccoli, Dragon island chains, caves with Sierpinski stalagmites, Pythagoras trees, and storms of Lichtenberg lightning and Koch snowflakes.
The options are, by definition, endless!
Giter
$\begingroup$ Infinite detail is an issue if space is quantised. It might not be in the OP's world, but it's worth considering. $\endgroup$ – Joe Bloggs Apr 24 '18 at 14:34
$\begingroup$ @JoeBloggs: I assumed by the OP's Gabriel's Horn example an 'infinite surface, finite volume' critter was the goal, so infinite detail should be possible in this world. If not, then the Gabriel's Horn critter could not be infinite as the tail could only be as thin as the world's smallest quantifiable unit. $\endgroup$ – Giter Apr 24 '18 at 14:51
$\begingroup$ Good point! I didn't think of that. Worth the OP realising though. $\endgroup$ – Joe Bloggs Apr 24 '18 at 15:48
A mist or vapor
Asking for the shape of an infinite body that isn't infinite is somewhat... definite. So we're trying to get as close to infinite as we can. The problem, of course, is that the body gets in the way of every other body in the world.
What would be the difference between such a creature and a ring of impassible mountains restricting access to the rest of the infinite world, making the world definite? After all, you're going to encounter this creature quite literally 99.99% of the time, so one hopes it has legs so you can walk beneath it... but then there'd be no sun...
Unless the creature is something more ethereal, something that can penetrate into every nook and cranny, extend from beneath the ground to the heights of the sky, something that people can walk through and still experience the entirety of this voluminous world.
Something that smacks just a bit of atmosphere...
Conclusion: the creature is a mist or vapor
Which also means you don't need to worry about where its mouth is, so that people can hear it roar from anywhere and everywhere.
JBH
$\begingroup$ "you're going to encounter this creature quite literally 99.99% of the time" - Why? The world itself is infinite, it can therefore contain an infinite number of infinite beings and statistically you would never encounter a single one of them. $\endgroup$ – Samuel Apr 24 '18 at 15:54
$\begingroup$ @Samuel, the problem with playing infinity games is that it generally comes down to philosophy. If two objects of infinite size exist in infinite space would they ever meet? The philosophy of mathematics says it's possible they never do. The philosophy of physics says they must be coincident and therefore always meet. You're welcome to choose your favorite philosophy, but in the end, it's the philosophy chosen by the OP that matters, which suggests he's looking for a being that simultaneously fills the world yet allows for the movement of others. My answer supported that apparent belief. $\endgroup$ – JBH Apr 24 '18 at 16:19
$\begingroup$ I don't think that's correct. In both mathematics and physics, infinity is boundless. You may be thinking of countable vs uncountable infinities. But that difference doesn't apply here. An infinitely large creature can occupy an infinitely large universe and still have room for infinite friends, but never meet one after wandering for infinite time. $\endgroup$ – Samuel Apr 24 '18 at 16:45
$\begingroup$ @Samuel, If you don't like my answer, downvote it. If you have a better answer, post it. I'm not going to argue philosophy with you. $\endgroup$ – JBH Apr 24 '18 at 16:47
$\begingroup$ Your answer is fine, your assumption that the creature must fill all of space because all infinities are equal is what's wrong. It's possible the creature fills all of space, but it's far from required (or even likely). $\endgroup$ – Samuel Apr 24 '18 at 16:51
Higher Dimensions
(read Flatland)
If you're amenable to more dimensions, then you could have something that exists infinitely, but takes up only a finite amount of our three dimensions.
It could be as simple as the 4th dimension being time, or perhaps something further; some other Nth dimension that I can't imagine, being bounded by my own existence in three.
goodguy5
$\begingroup$ You could have it be a unit sphere in infinite dimensions. Infinite volume but everything is "close", so one head is feasible. $\endgroup$ – PyRulez Apr 25 '18 at 1:09
An infinitely long serpent-like fish that flies through the air. The fish is inter-dimensional/inter-planar, so doesn't have to physically fit on whatever world it happens to be currently inhabiting.
I've seen this type of creature used in a campaign in the past (Party was in an airship swallowed by the fish, then the inside of the fish became the campaign setting and the characters had to escape).
Schrodinger'sStat
$\begingroup$ This does not provide an answer to the question. To critique or request clarification from an author, leave a comment below their post. - From Review $\endgroup$ – Renan Apr 24 '18 at 17:28
$\begingroup$ @Renan, while the answer could have been fleshed out more, it does technically answer the question. See the OP's 2nd to last paragraph. $\endgroup$ – JBH Apr 24 '18 at 17:34
Do Superorganisms count? If so, you could have an infinitely large ant colony.
Since ant colonies can already grow arbitrarily large, this isn't too far fetched. The fun part is the ant colonies could pull some Hilbert Hotel style stunts, if needed.
Since you specify only one head, we will say it has only one Queen, which is the "head" of an ant colony (head of state, that is).
PyRulez
This is very similar to a fun math book that was circulating on the web a couple of years ago: "Life on the Infinite Farm" https://www.math.brown.edu/~res/farm.pdf
It contains several thought experiments about what infinite animals might look like (as exercises in thinking about infinity).
curracq
$\begingroup$ It probably does not answer the OP's question but +1 for the link to the book! :) $\endgroup$ – Honza Zidek Apr 25 '18 at 7:16
$\begingroup$ for more resources see: richardevanschwartz.com/farm.html $\endgroup$ – Charles May 8 '18 at 20:56
According to myth, they were created when great trees displeased the Gods, and were cursed to lose everything above ground. And indeed, they do seem a little like plants - each one an immense tap root, wider than a house, drilling deep into the rock.
They branch. Branches snake off horizontally through the stone, beloved by miners because the bloody wood is easier to tunnel through than rock. The branch-roots always end somewhere interesting - porous stone saturated in oil, an underground river, or even a seam of precious metals that perhaps, eons ago, used to be a hot mineral spring.
But the main tap roots go only down. Perhaps they are nourished by magma. Perhaps they have found whatever lies beneath that. One thing we do know, is that they eat meat.
Near the surface, the root swells to the size of a sports ground, and ends in a flat top in the soil layer. Once every decade or so, a seemingly innocuous patch of grassland will collapse when a group of animals is walking across. Whatever triggered the attack - a herd of cattle, a pack of wolves, a legion of troops - all fall into an open maw. Huge tentacles sweep across the nearby ground, blindly grasping anything that escaped and hurling them into the corrosive mucus in the throat. Eventually they calm, and weave themselves again into a lid over the pit, to eventually be hidden beneath wind-blown soil.
Why do they do this? The amount they eat must be insignificant compared to the infinite body below. Perhaps it is indeed a curse. Perhaps they just like the taste.
For bonus weirdness, you could have a similar creature living vertically in the sky - like an infinitely tall sequoia whose trunk and leaves are held aloft by vast hydrogen balloons. Tentacles swoop down to pluck up their prey.
Male and female of one species?
Tektotherriggen
What about a creature that has a limited height and width that is infinitely long and constantly traverses the landscape.
I'm imagining something like a giant centipede or worm where there are huge arches between each supporting foot of the creature. Large enough that if it happened to come across a building (or maybe even something larger, like a city) it could just step over it. The legs could follow the exact same route through the air as they move into position. And each 'foot' would land in the exact same spot that the previous 'foot' occupied. Naturally the land underneath would become super-compressed over time. Perhaps people could find a way to divert the path of a section of this creature to use the land crushed by its feet as a really flat and solid foundation for building?
Anyways the creature would be infinitely long and each section would be supported by its own independent legs and feet.
I'm not sure about nourishment though. Maybe a symbiotic relationship with creatures that live on it. Fungi, animals and even plants could live on something this large that is always predictable. It follows the same path with the same timings. Perhaps it moves at the same rate as day and night? This really depends on the exact workings of the day/night system. But if some sections would be in day while other sections are in night, then you could maybe have portions of this creature that are constantly in daylight and portions that are constantly in darkness. This could lead to variations in different sections of the creature and the symbiotic creatures that live with it.
There are a lot of things you could do with this sort of creature, as its path could be very random, meaning there'd be huge sections of land without any worm and huge sections of land that might be governed by the nearby worm paths. It'd be large enough in height that it's not insignificant and is difficult to damage. How exactly would it recover from damage, though? Perhaps some sort of merge with other pieces of the worm. That's up to you.
I just like that in an infinite world there's a creature equally unending that affects almost every part of this infinity, helping form the civilizations and creatures that live in all areas of that infinite world.
TheEvilMetalTheEvilMetal
Yes, a snail
Consider a snail: an initially tiny snail continuously grows new shell at the mouth, which grows wider and wider. The tiny central coils of the spiral are just the snail's shell from when it was younger. If you zoomed out, it would look the same as a younger snail.
As long as we redefine matter
Snails in our universe had finite origins, and thus an initial finite size. Any real snail would have a minimum size as a result of the atomic structure of matter.
But in your abstract universe, given the other infinities, we can change the structure of matter to have no atoms, no indivisible pieces. Turtles all the way down. In that reality, the snail can have infinite age, whilst remaining a finite size; the spiral could go inward forever.
Infinite animals cannot be made of atoms and live
One key issue with any infinitely large creature (in any dimension) is that any signal or biological process would be propagating forever from the head. So while it could start to move in some direction, it could never finish moving. And any nutrients consumed at the head would have to be only asymptotically absorbed in an ever diminishing tail (or the tail would starve). So I think any infinite animal would have to be asymptotically small in the older direction and thus require non-atomic matter.
Snails live in the outer chambers
A snail neatly resolves the issue of moving an infinite animal; though infinitely long in the spiral, the finite size and mass of the animal permits it to still move, to still procreate, etc. You have a choice whether to have parts of the animal still living all the way down the spiral, or whether to have an infinite body containing a finite lifeform living in the outer chambers.
Gabriel's shortened horn
We can make Gabriel's horn shorter too. Instead of the animal growing linearly along the x axis, we make it grow exponentially along the x axis. So regressing backward in time, every year it grew half as much as the previous year. This is the frog jumping half as much each time (albeit backwards) and has finite size.
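To make the "half as much each year" idea concrete, the body length is just a geometric series; for example, if the most recent year's growth is $a$, then

$$\text{total length} = \sum_{n=0}^{\infty} a\left(\frac{1}{2}\right)^{n} = \frac{a}{1-\frac{1}{2}} = 2a,$$

so infinitely many growth episodes still fit inside a finite body.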
One issue with an infinitely old animal is reproduction. If I am infinitely old, then I wasn't born, so I cannot have parents in that sense. Nor can I have children that are born and are also infinitely old. However, if I was a division from another infinite animal, then I too could divide to yield offspring.
This would mean that as long as we match our infinitely-divisible matter with some mirroring phenomenon by which any animal could divide or replicate (down to all its infinite detail), then we have a way for a whole family of infinite snails to exist independently from each other yet still related. I can imagine some mirror or portal thing which duplicates an animal while halving its physical scale; the total finite amount of mass & volume is conserved, just the snail comes out half the size (as it is fractally self-similar).
To understand the process you need number theory and the infinite hotel. But the effect is similar to cell division.
On the flip side two animals could merge to become one twice as big; by a process reversing that of the division, the combined animal would be an infinite merge of the original two.
This gives us most of the components of reproduction as we know it, except one: a mechanism for variation.
A finite change
Our snails grow in time, and as with real snails, turtle shells, tree rings, you can look back in time by looking at the historic growth of the organism. If a snail underwent some change at random intervals (voluntarily or not), then it would be visible somewhere down the spiral, even though the spiral goes on infinitely long. This would then make one snail distinct from another snail, even where they were divisions of the same animal in some earlier time.
This mechanism gives us a way to have individuals, and a reason for the merging process; the snails have a way to be different, and a reason to choose one snail to merge with over another. The differences may be random (analogous to mutation) but the selection need not be.
One common ancestor
All snails can have derived from a common ancestor snail; in effect a division creates twins, so whilst they can ultimately yield lots of different individuals it would suggest that looking backward down any given animal's shell you could find an original snail where all snails are the same beyond this point.
The longer a snail has gone without dividing (compared to other snails), the larger it would be (given the division mechanism), so one half of the original snail could still exist alongside a larger number of smaller snails who had gone on to divide and sum separately.
Voila a snail
An infinite animal, complete with reproduction, ancestry and family.
Phil H
An infinite animal can be a snake, that starts somewhere, and then just extends, e.g., east, infinitely.
There are infinite problems with that animal. Not least of which, that (assuming a random placement of other creatures) infinitely many creatures won't encounter it, ever. If it has huge girth, there is the weird problem that infinitely many people will be hindered in their mobility (because the snake lies around like a huge wall), while at the same time infinitely many people will never see the snake in their life (because they inhabit the infinitely big part of the world that the snake is not in life-time-wandering-distance of).
If it is coiled, or similarly distributed so that its (basically) 1-D body is still all over the 2D landscape (or even the 3D heavens), it may be encountered by everyone and, given the interstices between coils are large enough, also avoided.
Fun fact: if the scales on the snake are randomly colored, there will exist a part of the snake that has a red arrow pointing exactly into the direction of the snake head, with an exact distance to the head in inches given, in orange, and small bold black print under it "[protagonists childhood nickname that just her siblings knew]: Go talk to the head".
bukwyrm
$\begingroup$ That last part could be an amazing plot device $\endgroup$ – Yuriy S May 28 '18 at 10:09
My own point of view of your problem:
Your creature should have infinite size. Either of surface area or volume, but it doesn't really matter to you. You just want it big. So immediately, Peano curves come to mind. However, I don't think this quite fits with the spirit of your question. So first thing to consider: what does life need to live?
Life can reproduce itself,
Life can respond to stimuli, and
Life can grow.
There are other qualifications, but they are debatable, and it is easy to imagine a type of life without them. So tackling each one by one:
Life needs to reproduce. How it does this is up to you, but it can't be a stand alone organism. Your organism probably needs to have various stages of life, early stages in the creature's life cycle involve moving as far away from the mother organism as possible. Middle stages involve growth and reproduction. Finally (this part is optional), towards the end of its life, it shrinks and prepares to die.
This one's probably harder. An infinitely large organism would certainly have issues with responding, especially with a centralized nervous system. It would either have to react by reflex, or have separate "brains" at different spots. Each "brain" could in turn connect to a larger "brain", which connects to larger "brains" ad nauseum. How you tackle this problem could have huge effects on the sentience of the creature. i.e. the fractal network of brains could make the creature incredibly intelligent (certainly smarter than most humans), whereas reflex would make it incredibly stupid (about on scale of an insect).
This is the most difficult of your concerns. If I had to take a stab at suggesting a growth pattern, I'd suggest exponential growth where each step occurs in half the time of the last step so that it grows in a finite period of time, however how it gets the nutrients for this I don't know. Perhaps through sketchy, hand-wavy, use of light splitting into antiparticle pairs and then doing something with this, but I have no clue how you plan on discussing this if you even will. This part will take some serious ignoring of laws of physics to work.
How I'd do it
Disclaimer: I don't know what your goal with this is, so this is just my own ideas put into an example.
So, when the creature is first born, it is a small single-celled organism; it whips a little flagella, attempting to get as far away from its mother's main body as possible, as it travels along its mother's body, eating anything it can endophagize (I'll pretend that's a word). It keeps swimming until it reaches its next phase in life, during which it undergoes rapid mitosis and begins to form a tendril system. These tendrils grasp at anything they can and secrete digestive fluids onto what they have, slurping the resulting fluids to the main body, which is beginning to form a central brain. The creature is about the size of a cat.
The creature grows at a super fast rate as the tendrils begin to stiffen at the older spots. At stiff spots, chloroplast-like organelles form to collect sunlight. The tendrils at the end begin to branch, forming more mobile tendrils at each branch; smaller brains are formed, each connecting to the last. The creature grows like this for almost a century, at which point it is infinitely large, its tendrils wrapping around others of its kind, forming a sort of large octopus-y forest-like formation. The creature is definitely sentient, but lacks much movement capability.
At some critical point, the main brain decides to die, and shuts down, killing its immediate surrounding parts. This releases new baby versions of itself which race down the tendrils at the same speed as it dies. Each brain of the original creature activates its area's death, and releases food and nutrients for the young.
Rick M.
tox123
It's infinite in higher dimensions. Any shape in higher dimensions would do. Perhaps infinite in all higher dimensions and not always in our lowly ones.
At any point it can project itself down into lower dimensions and take up that space in its entirety.
This answer is inspired by Eldrazi, and although you don't need to follow this pattern this excerpt might be helpful:
Each Titan lives outside of the planes. When one wants to feed, it extends a part of its "body" into the plane, to create a physical manifestation of itself there, as well as an army of drones that are extensions of its body and will.
The Spirit Dragon Ugin compared this to a man sticking his hand into a pool of water; the man is the Eldrazi Titan, and the water is a plane. The fish--those who dwell on the plane--see only a part of the man--his hand. Likewise, the inhabitants of a plane can see only a part of each Titan. Even if the Titans appear to be independent beings, their physical forms are just part of a greater entity outside the plane. The same is true for every drone that the Eldrazi had created; they are all just part of the Titan that made them.
Pureferret
$\begingroup$ This doesn't appear to answer the OP's question, "What infinite shape can I give my organism...?" Please do not assume people will follow links or that links will exist forever. Provide enough information in your answer to be self-contained. Thanks. $\endgroup$ – JBH Apr 24 '18 at 16:50
$\begingroup$ @JBH I could improve this by suggesting some higher dimensional shapes, but the answer is only inspired by the linked article, and isn't needed to fully understand the answer. I'll try to fish something out anyway $\endgroup$ – Pureferret Apr 25 '18 at 9:49
Interesting/Horrific Idea:
Your creature is not one being but a branching collection of infinitely long snakes/dragons/centipedes. Each creature has finite width and is infinitely long but at any given time has only a finite length exposed. The remaining length is still coiled up inside the creature's 'mother' as it can never finish dragging its entire infinite length out. So travel long enough from the head you hit the mother creature. Travel long enough from the mother's head you eventually reach the grandmother and so ad infinitum. No member is infinitely old but the age of the members is unbounded and the entire family can be said to be infinitely old.
Further Idea: Travel deep enough down the generations and the last mother disappears into the ground. We are living on some 'layer' of this creature. The day/night cycles are caused by it moving.
Daron
Three ideas:
1) A dome creature that messes up spacetime. Imagine this: if you are a mile away, it looks like a fairly ordinary dome, in the distance, or what have you. But the closer you get, the bigger it appears. It occupies infinite space in every direction, but is contained in compactified spacetime. It's just like how 1/x approaches infinity at 0. Right up against it, if you put your eye on this beast, it would look as though it never ends. For this to make sense, as you got close, the appearance of the rest of the world would have to bend backwards, so even what was in front of you would now be behind in some direction. This beast could easily have a well defined head at one particular location. There could be as many of these creatures as you like, so that they're relatively popular on the land. (it doesn't have to be a dome. Any creature shape could theoretically do this)
2) An infinite collection of somehow-connected finite things. I imagine an infinity of spheres. It could be that they all are managed like limbs from one head sphere. They, together, are 1 entity. Or maybe they all have a head in their centers, that share consciousness, but if N of them are destroyed concurrently, they all "die". This would necessitate they should be able to regenerate over time, if one is destroyed in some way.
3) An ethereal being that is omnipresent, but is physically accessible by some imaginative way.
Caleb
If we borrow from mythology...https://en.wikipedia.org/wiki/J%C3%B6rmungandr
...then your creature could be a large serpent. It has a head but is still infinite because it has swallowed its own tail.
But is that satisfactory as "infinite" by definition? Sure, if inside its mouth is a wormhole that extends the creature's length indefinitely.
Or, what if your creature does NOT have a head, at least not in the traditional sense. Yours is a circular cylindrical creature that has eyes and mouths and ears all over its body. At any given point of its infinite length it can see, smell, hear, and bite.
How was a creature like that born? Which end came out first? Well, you could leave that up to myth and the reader's imagination, OR the whole creature came out of its parent all at once, already circular in its shape. That's how its cells formed when it was in the womb of whatever birthed it.
Or if this is fantasy setting then just magic it.
As for it not being vulnerable... the creature could be so large that inhabitants of your world think it's a mountain range. Any "earthquake" would destroy even an army of attackers.
Len
Well, since your world is also infinite, the shape of the creature doesn't really matter. There is enough room in your world for infinitely many of said creatures. So the answer should be: It looks however you want it to look.
It could be shaped like a cat with an infinitely long tail, infinite hind legs on an infinitely long body, infinite front legs to support its infinitely wide shoulders, or infinite teeth in its infinitely huge mouth. Whatever the case, it just doesn't end in at least one dimension, same as your world. Since it has to be a world where our physics don't apply, our understanding of biology doesn't really matter. It could look as if made by Picasso or imagined by H. P. Lovecraft.
Kakturus
You can imagine all sorts of creatures, as long as they live underground. People and everyone else will be able to walk all over it without even noticing what lies beneath their feet. Give the body enough holes (or even make it web-like) and the underground denizens won't be bothered by it either. Since the ground extends down infinitely, the creature can also grow very big in all 3 dimensions. You can hide the head wherever you like. :)
Bonus story opportunities:
The head could be yearning to experience the sky and flight, thus trying to move its body out (with all sorts of disastrous story-fuelling side-effects).
Since it's impossible to see any sizable portion of the creature at once, nobody (including the reader) could realize that it's the same creature until a big "revelation" later. Until then - well, you do come across these "big burrowed snake" creatures every now and then, but they're few and far between so nobody has made the connection that they're not actually distinct entities.
Vilx-
Infinitely Large Head
If the animal is infinitely large, it will need an infinitely large head with which to gain infinite nutrients. To do this, simply take a regular animal and make it infinitely wide (and also tall enough to allow things to move underneath it when needed). So it has infinite teeth, infinite stomachs, infinite eyes, an infinitely large brain (with many redundant parts), etc...
Part of the problem is that infinities are somewhat counterintuitive. The infinite hotel is a classic example. (*) You want an infinite being that doesn't take up everywhere and lets people wander round, and doesn't get too thin? Nothing could be simpler!
Your being is a conical being, which slithers along the ground, and is obligingly flexible so it can slither over other infinite beings. Its head is at the bluntly rounded apex of the cone, and cones of course have infinite volume, if they aren't truncated. There is no "thin" point, and it takes up an infinitesimal part of the world.
(* a hotel has an infinite number of rooms, and is 100% booked. If one guest turns up, move everyone into the next higher room, and the new guest can stay in room 1. If a (countable) infinite number of guests turn up, move every guest into a room of double their present room number, which frees an infinity of odd numbered rooms for all the new guests. If an uncountable number of guests turn up, buy stocks in Cantor's AirBNB business and quit the hotel world!)
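The two tricks in the footnote are just the room maps n → n + 1 and n → 2n; a toy Python sketch over a finite prefix of the infinite hotel (all names hypothetical):

```python
# Toy model of Hilbert's hotel, using only the first ten rooms for display.
guests = {n: f"guest-{n}" for n in range(1, 11)}   # rooms 1..10, all occupied

# One new arrival: every guest moves from room n to room n + 1, freeing room 1.
one_more = {n + 1: g for n, g in guests.items()}
one_more[1] = "newcomer"

# Countably many arrivals: every guest moves from room n to room 2n,
# which frees every odd-numbered room for the newcomers.
doubled = {2 * n: g for n, g in guests.items()}

print(sorted(one_more))   # rooms 1..11 occupied
print(sorted(doubled))    # only even rooms occupied; all odd rooms are free
```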
Stilez
The universe is filled with a mist like substance extending into infinity, this mist is in fact the diffuse body of a super-organism, able to form denser pockets to serve its various biological purposes, filled with flora and fauna which fuel its biological processes in a similar fashion to the gut-flora of more conventional creatures.
The Head is a single location in the universe where The Haze started, a literal Genius-Loci. You can find the location of the Head by following an infinitesimal gradient of density in the Haze, or more conveniently by following a faint flow of energy and material which fuels The Head.
Think Stephen King rather than Lovecraft: The Mist, but the mist itself is as much a monster as the creatures within it.
Ruadhan
The answer is coral. It is an animal that exists today and, if uninhibited, could grow to an indefinite size while remaining an animal. Unlike a tree, it can be cut to make more separate animals, or reproduce to have larvae.
Muze
A Hive Mind
As hive minds work, it can expand infinitely, just spreading from one single mind to another.
To really be infinite, it should be able to assimilate any kind of living species.
Let's call it: The Watcher.
It is actually built from every living species on your infinite world and is actually looking forward to assimilating samples of every living thing.
Since the power of an infinite mind is monstrous, it wants to acquire the most knowledge from the whole universe.
It has no real reason to assimilate every single mind since their evolution is a source of infinite knowledge for it.
A hive mind actually has only one "head", aka mind, but that mind cannot be cut out, so it's really infinite in space and time.
It is seen by other living organisms not as a predator but more like a god of knowledge, one that gives them the greatest gift by assimilating them from time to time (they gift their minds to live forever in the hive mind).
The hive mind is not blocking any dimension on earth or in the sky since it's almost everywhere and nowhere at the same time.
There are no real animal instincts like on Earth, where little turtles know from the beginning that they should run to the sea. The Watcher actually teaches the young, using the single beings it assimilated from those species, to let them know the basics of their species.
When a species grows communal, as humanity did, it lets them teach each other (as we do) and gives them great lectures from those that have been assimilated (even eons after their physical death). This can even give the communal species a boost in innovation and actually secure all knowledge that has been assimilated until the end of time.
Hankrecords
Calaom
Association of inflammatory biomarker abnormalities with mortality in COVID-19: a meta-analysis
Arpita Suri¹, Naveen Kumar Singh¹ & Vanamail Perumal² (ORCID: orcid.org/0000-0002-5014-4665)
The COVID-19 outbreak has engulfed different parts of the world, affecting more than 163 million people and causing more than 3 million deaths worldwide through human-to-human transmission. Thus, it has become critical to identify risk factors and laboratory parameters that flag patients at high risk of worsening clinical symptoms or poor clinical outcomes. The study therefore aims to identify inflammatory markers that can help recognise patients at increased risk of progression to critical illness, thus decreasing the risk of mortality. Our study focussed on the predictive utility of C-reactive protein, Interleukin-6, D-dimer and Procalcitonin in assisting the management of COVID-19 patients with adverse clinical effects. Through a literature search of electronic databases, we included retrospective studies that evaluated these biomarkers among confirmed COVID-19 patients before initiation of treatment and that reported a definite outcome (dead or discharged). Biomarkers were expressed as the standardized difference in mean values; this statistic, calculated from study sizes and the mean values of survivors and non-survivors, was considered the effect size. We carried out a meta-regression analysis to identify the causes of heterogeneity between the studies.
The number of studies eligible for the C-reactive protein, D-dimer and Interleukin-6 markers was eight, seven and four, respectively. A random-effects model revealed that the overall effect sizes with 95% confidence intervals (CI) for C-reactive protein, D-dimer and Interleukin-6 were 1.45 (0.79–2.12) milligrams/litre, 1.12 (0.64–1.59) micrograms/millilitre Fibrinogen Equivalent Units and 1.34 (0.43–2.24) picograms/millilitre, respectively; all were statistically significant (P < 0.05), indicating that the mean scores of these markers were significantly higher among non-survivors than survivors. Two studies were eligible for the Procalcitonin marker, and there was no heterogeneity (I²-statistic = 0) between them. Therefore, a fixed-effect model revealed that the overall effect size (95% CI) for Procalcitonin, 0.75 (0.30–1.21) nanograms/millilitre, was also higher among non-survivors.
The study found that serum levels of C-reactive protein, Interleukin-6 and D-dimer showed significant elevation in non-survivors compared to survivors. Raised inflammatory markers aid in the risk stratification of COVID-19 patients and their proper management.
The COVID-19 outbreak originated in Wuhan, Hubei province, China, presenting as pneumonia of unknown aetiology in December 2019. The Coronavirus Study Group of the International Committee on Taxonomy of Viruses named the virus SARS-CoV-2, which belongs to the family Coronaviridae and the order Nidovirales (Gorbalenya et al. 2020). It is a zoonotic pathogen thought to have been transmitted from bats to humans (Li et al. 2020). WHO declared COVID-19 a public health emergency of international concern on 30th January 2020 (Adhikari et al. 2020). The COVID-19 outbreak has engulfed different parts of the world, affecting more than 163 million people and causing more than 3 million deaths worldwide through human-to-human transmission (https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports). Thus, the COVID-19 pandemic has become a principal concern to nations worldwide, and it has become critical to identify the risk factors and laboratory parameters that flag patients at high risk of worsening clinical symptoms or poor clinical outcomes. Studies have suggested that the cytokine storm has emerged as an essential factor in the etiopathogenesis of the fatal effects of COVID-19, predisposing COVID-19 patients to heightened lung damage called acute respiratory distress, leading to higher morbidity and mortality (Bhaskar et al. 2020). The systemic hyperinflammatory syndrome involves the excessive release of pro-inflammatory cytokines, advancing multi-organ failure (Fajgenbaum and June 2020) and promoting a prothrombotic milieu (Kaushik et al. 2021). Thus, the study aims to identify inflammatory markers that can help identify patients at increased risk of progression to critical illness, thus decreasing the risk of mortality. These markers could further help in the development of a serum-based risk stratification algorithm that can assess the severity of the disease and help clinicians recognise patients at risk of a poor clinical outcome.
We included studies if (1) they were retrospective analyses of the laboratory investigations of rRT-PCR-confirmed COVID-19 patients with a definite outcome (dead or discharged); (2) they investigated serum C-reactive protein (CRP), D-dimer, Interleukin-6 (IL-6) or Procalcitonin (PCT); and (3) blood samples were collected before initiation of treatment. We excluded studies if (1) the language of the abstract or full paper was other than English; (2) the median and interquartile range of the laboratory investigations in survivors and non-survivors were not reported; or (3) they were case series, case reports, meta-analyses, systematic reviews or editorials.
Search strategy and selection of articles
We identified articles by searching electronic databases: MEDLINE (through the PubMed interface), EMBASE, Google Scholar, Science Direct and the Cochrane Library. We included articles published from December 2019 to May 2020, using the search keys "C-reactive protein", "Interleukin-6", "D-dimer", "Procalcitonin", "COVID-19", and combinations of these keys. We reviewed the full text of the articles to decide their inclusion for meta-analysis.
We extracted data from the selected studies, including author, publication year, country, study design, outcome and laboratory values. The PRISMA flow diagram describes the number of studies screened and included for meta-analysis.
The primary outcome was to assess the levels of various biomarkers, namely CRP, D-dimer, IL-6 and PCT. The majority of studies presented these biomarkers as median and interquartile range (IQR) values. We therefore derived mean values and standard deviations (SD) for the present analysis, which are prerequisites for calculating the effect size of continuous variables in a meta-analysis. We derived the mean and SD values using the formulae recommended in an earlier study (Wan et al. 2014):
$$\begin{aligned} \text{Mean} &= \left( \text{Median} + q_1 + q_3 \right)/3 \\ \text{SD} &= \left( q_3 - q_1 \right)/1.35 \end{aligned}$$
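To make the conversion concrete, a minimal Python sketch of this Wan et al. (2014) approximation is given below; the quartile values in the example are hypothetical, not taken from any included study.

```python
# A minimal sketch (not the authors' code) of the Wan et al. (2014)
# conversion from a reported median and quartiles to an approximate mean and SD.
def mean_sd_from_quartiles(q1, median, q3):
    """Approximate mean and SD from the median and the quartiles q1, q3."""
    mean = (median + q1 + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

# Hypothetical example: CRP reported as median 60 (IQR 40-95) mg/L.
mean, sd = mean_sd_from_quartiles(40.0, 60.0, 95.0)
print(f"mean ~ {mean:.1f} mg/L, SD ~ {sd:.1f} mg/L")  # mean ~ 65.0, SD ~ 40.7
```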
Further, the biomarkers were reported in different units of measurement. Therefore, the standardized difference (std. diff) in mean values, calculated from the study sizes and the mean values of survivors and non-survivors, was taken as the effect size.
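As an illustration of this effect-size calculation, the sketch below computes the standardized difference in means with a pooled SD, together with its approximate variance (used later as the inverse-variance weight). The group summaries in the example are placeholders, not data from the included studies.

```python
import math

def std_diff_in_means(m1, sd1, n1, m2, sd2, n2):
    """Standardized difference in means between non-survivors (group 1)
    and survivors (group 2), with its approximate variance."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Approximate variance of d, used as the inverse-variance weight
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var_d

# Placeholder group summaries for one study (mean, SD, n).
d, var_d = std_diff_in_means(85.0, 40.0, 30, 45.0, 35.0, 80)
```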
We performed the meta-analysis in two stages using Comprehensive Meta-Analysis (CMA) software version 3.0 (evaluation version). In the first stage, we calculated the study-specific effect size with its 95% confidence interval (CI). In the second stage, we obtained an overall effect size as a weighted average (inverse of the effect-size variance) of the individual summary statistics. Since each study had a different sample, sampling error contributes variability in a meta-analysis; other sources of heterogeneity may include patient characteristics, variations in treatment, design quality and so on. Assessing heterogeneity is therefore crucial, because the presence or absence of true (between-study) heterogeneity determines the statistical model. We tested for true heterogeneity using the Q test, which follows a chi-square distribution with k − 1 degrees of freedom, k being the number of studies. When the homogeneity hypothesis was not rejected, we adopted a fixed-effects model. However, the power of the Q statistic depends on the number of studies included in the meta-analysis; therefore, we also used the I² statistic (in percentage) to measure the degree of heterogeneity. When I² ≥ 50%, we used a random-effects model.
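A minimal sketch of this two-stage procedure is shown below, assuming per-study standardized differences and their variances as inputs; it computes Cochran's Q and I² and switches to a DerSimonian-Laird random-effects pool when I² ≥ 50%, mirroring the decision rule described above (CMA implements this internally).

```python
import numpy as np

def pool_effects(effects, variances):
    """Fixed-effect pool with Cochran's Q and I^2; DerSimonian-Laird
    random-effects pool when I^2 >= 50%."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                           # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)        # Q ~ chi^2 with k-1 df
    k = len(effects)
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    if i2 >= 50.0:                                # random-effects model
        tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w = 1.0 / (variances + tau2)
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, q, i2
```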
We depicted the effect size with its 95% confidence interval (CI) for each study, together with the overall effect size, in forest plots. We tested the consistency of the effect size using a sensitivity analysis with a leave-one-study-out approach. Using the funnel plot and the Egger regression test, we assessed publication bias between the studies. Further, to identify potential sources of heterogeneity, we carried out a meta-regression analysis of the effect size on covariates such as the patients' age, fever rate and cough rate. For statistical significance, we considered P < 0.05.
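For the publication-bias step, a sketch of the Egger regression test is given below: the standardized effect is regressed on precision, and an intercept significantly different from zero suggests small-study effects. The inputs are per-study effects and standard errors; this is an illustrative re-implementation, not the software actually used here.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger regression: standardized effect vs precision; returns the
    intercept and its two-sided p-value."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    precision = 1.0 / ses
    standardized = effects / ses
    res = stats.linregress(precision, standardized)
    t = res.intercept / res.intercept_stderr
    p = 2.0 * stats.t.sf(abs(t), df=len(effects) - 2)
    return res.intercept, p
```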
The PRISMA flow diagram (Fig. 1) shows the stages of study screening and inclusion for the analysis. The numbers of studies selected were seven (Fogarty et al. 2020; Yan et al. 2020; Fan et al. 2020; Deng et al. 2020; Zeng et al. 2020; Chen et al. 2020; Wang et al. 2020), six (Fogarty et al. 2020; Tang et al. 2020; Zhou et al. 2020; Yan et al. 2020; Zhang et al. 2020; Fan et al. 2020), four (Zhou et al. 2020; Yan et al. 2020; Fan et al. 2020; Chen et al. 2020) and two (Yan et al. 2020; Chen et al. 2020) for the CRP, D-dimer, IL-6 and PCT markers, respectively. The mean age of these patients varied between 47 and 77 years. Symptoms such as fever (85%), cough (65%), fatigue (46.2%), headache (7%) and diarrhoea (14%) were predominant.
PRISMA flow diagram for the systematic review, which included searches of databases
Effect of CRP markers
A total of eight studies involving 245 non-survivors and 545 survivors with CRP marker measurement were identified. Study-specific analysis indicated that of the eight studies included, six (75%) demonstrated a statistically significant effect size (P < 0.050), inferring that the mean CRP score was significantly higher among the non-survivors than the survivors (Fig. 2A). The measure of heterogeneity (I²) was about 90%, and the random-effects model therefore revealed that the overall effect size was 1.45 (95% CI: 0.79–2.12) milligrams/litre. To ensure the consistency of the effect size, we carried out a sensitivity analysis with a leave-one-study-out approach. The analysis (Fig. 2B) showed that the effect sizes ranged between 1.12 and 1.60 and fell within the 95% CI of the overall effect size.
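The leave-one-study-out sensitivity analysis can be sketched as below, re-pooling the effect after omitting each study in turn (reusing the pool_effects helper sketched in the methods section); the per-omission pooled effects are what Fig. 2B reports.

```python
def leave_one_out(effects, variances):
    """Re-pool the effect size after dropping each study in turn."""
    results = []
    for i in range(len(effects)):
        sub_e = [e for j, e in enumerate(effects) if j != i]
        sub_v = [v for j, v in enumerate(variances) if j != i]
        pooled, ci, _, _ = pool_effects(sub_e, sub_v)
        results.append((i, pooled, ci))
    return results
```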
Forest plot (A) and sensitivity analysis (B) of effect size among survivors and non-survivors
We plotted a funnel plot (std. diff versus standard error) to assess publication bias; it showed (Fig. 3A) no indication of publication bias. A subsequent Egger regression analysis also showed that the intercept was not statistically significant (P = 0.944), confirming the absence of publication bias. Since there was high heterogeneity between the studies, we carried out a meta-regression to identify possible significant factors among the reported covariates, namely age, fever rate and cough rate. The effect size (Fig. 3B) tended to decrease with increasing age (R² = 0.75; P < 0.001) and fever rate (Fig. 4A; R² = 0.83; P < 0.001). Cough rate did not emerge as a significant (P = 0.967) factor (Fig. 4B). When carrying out the meta-regression with all three covariates, we observed a trend similar to the univariate analysis (R² = 0.95; P < 0.050).
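The univariate meta-regressions of effect size on age and fever rate can be sketched with an inverse-variance weighted least-squares fit, as below; statsmodels is assumed to be available, and the inputs are illustrative per-study values rather than the extracted data.

```python
import numpy as np
import statsmodels.api as sm

def meta_regression(effects, variances, covariate):
    """Weighted least-squares meta-regression of effect size on one covariate."""
    X = sm.add_constant(np.asarray(covariate, dtype=float))
    fit = sm.WLS(np.asarray(effects, dtype=float), X,
                 weights=1.0 / np.asarray(variances, dtype=float)).fit()
    return fit.params, fit.pvalues, fit.rsquared
```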
Publication bias (A) and regression analysis (B) of effect size on age covariate
Regression analysis of effect size on covariates fever rate (A) and cough rate (B)
Effect of D-dimer markers
We identified a total of seven studies involving 188 non-survivors and 676 survivors with D-dimer marker measurement. The study-specific effect sizes indicated that of the seven studies included, six (86%) demonstrated a statistically significant effect size (P < 0.050), inferring that the mean D-dimer score was significantly higher among the non-survivors than the survivors (Fig. 5A). The measure of heterogeneity (I²) was 82%, and the random-effects model therefore revealed that the overall effect size was 1.12 (95% CI: 0.64–1.59) micrograms/millilitre (Fibrinogen Equivalent Units). Sensitivity analysis (Fig. 5B) showed that the effect sizes ranged between 0.93 and 1.20 and fell within the 95% CI of the overall effect size.
The funnel plot showed (Fig. 6A) no indication of publication bias, and a subsequent Egger regression analysis also showed that the intercept was not statistically significant (P = 0.851), confirming the absence of publication bias. Meta-regression of the effect size on covariates showed that the effect size (Fig. 6B) tended to decrease with increasing age (R² = 0.76; P < 0.008) and fever rate (Fig. 7A; R² = 0.70; P = 0.012). Cough rate did not emerge as a significant (P = 0.957) factor (Fig. 7B). Multivariable meta-regression also showed a trend similar to the univariate analysis (R² = 0.93; P < 0.050).
Effect of IL-6 markers
We identified a total of five studies involving 155 non-survivors and 358 survivors with IL-6 marker measurement; IL-6 marker levels were available for only four of them. Study-specific analysis indicated that of the four studies included, two demonstrated a statistically significant effect size (P < 0.050), inferring that the mean IL-6 score was significantly higher among the non-survivors than the survivors (Fig. 8A). Heterogeneity (I²) was 87%, and the random-effects model therefore revealed that the overall effect size was 1.34 (95% CI: 0.43–2.24) picograms/millilitre. Sensitivity analysis (Fig. 8B) showed that the effect sizes ranged between 0.85 and 1.75 and fell within the 95% CI of the overall effect size.
The funnel plot showed (Fig. 9A) no indication of publication bias, and a subsequent Egger regression analysis also showed that the intercept was not statistically significant (P = 0.743), confirming the absence of publication bias. In addition, meta-regression of the effect size on covariates showed that the effect size was not significantly influenced by age (Fig. 9B) or fever rate (Fig. 10A). However, the effect size tended to increase with increasing cough rate (R² = 0.91; P < 0.001), indicating that the IL-6 marker was significantly higher among non-survivors with higher cough rates (Fig. 10B).
Effect of PCT markers
Only two studies involving 58 non-survivors and 45 survivors were found with PCT marker measurement. Of these, only one study demonstrated a statistically significant effect size (P < 0.050), inferring that the mean PCT score was significantly higher among the non-survivors than the survivors (Fig. 11A). Heterogeneity (I²) was found to be 0%, and the fixed-effect model therefore revealed that the overall effect size was 0.75 (95% CI: 0.30–1.21) nanograms/millilitre. Sensitivity analysis (Fig. 11B) showed that the effect sizes were 0.72 and 0.78 and fell within the 95% CI of the overall effect size.
Two studies were too few to construct a funnel plot.
Summary of diagnostic measures
The study found that serum levels of CRP, IL-6, D-dimer and Procalcitonin were significantly elevated in non-survivors compared to survivors. Raised inflammatory markers aid in the risk stratification of COVID-19 patients and their proper management. Our study focused on the predictive utility of these laboratory biomarkers in the management of COVID-19 patients with poor clinical outcomes.
To our knowledge, this is the first meta-analysis to examine nine studies in the mortality cohort, which strengthens the validity and soundness of our results. An exhaustive search strategy and robust statistical analysis promoted the reliability of our study. However, there were a few limitations. First, we included all types of studies, which might influence the effect size owing to comorbid conditions such as hypertension, diabetes, cardiovascular disease, malignancy, pulmonary diseases, chronic kidney disease, chronic liver disease and chronic bronchitis; because of reporting bias for comorbidities, we did not add them as exclusion criteria. Second, owing to the limited number of studies for serum PCT, we could not assess publication bias. Finally, studies published in languages other than English were not included in our meta-analysis.
In March 2020, Henry et al. studied laboratory parameters in severity and mortality cohorts of COVID-19 patients. They analysed 21 studies (2984 patients) to assess the association of severity with laboratory parameters and concluded that markers such as D-dimer, CRP, ferritin and PCT were significantly elevated in patients with severe COVID-19 (Henry et al. 2020). In the mortality cohort, they included three studies and reported that these markers were significantly elevated in non-survivors compared to survivors. In May 2020, Aziz et al. analysed nine studies and proposed that estimation of interleukin-6 would aid the clinician in prognosticating COVID-19, as it is significantly higher in severe cases than in controls (Aziz et al. 2020).
Additionally, they reported that IL-6 levels were associated with an increased risk of mortality. However, in October 2020, Leisman et al. suggested that the role of cytokine release syndrome in the etiopathogenesis of severe or critical COVID-19 is questionable, as mean IL-6 in these conditions was significantly lower than in other inflammatory syndromes such as sepsis, ARDS and CAR T-cell-induced cytokine release syndrome (Leisman et al. 2020). Zeng et al. published a meta-analysis of 16 studies involving 3962 patients (Zeng et al. 2020). They suggested that inflammatory markers such as CRP, IL-6, PCT and ferritin were significantly higher in the severe group than in the non-severe group using the random-effects model. Similar to our findings, they postulated that the pro-inflammatory cytokine IL-6 was elevated in non-survivors compared to survivors. However, the studies analysed in that meta-analysis were from China, and the IL-6 data for survivor and non-survivor groups involved only two studies. The importance of IL-6 elevation can be gauged from the use of Tocilizumab, a humanised monoclonal antibody against the IL-6 receptor, which has been shown to improve survival and clinical outcome (RECOVERY Collaborative Group 2021). Thus, close monitoring of inflammatory markers can help.
Our study suggests incorporating the CRP, IL-6 and D-dimer markers into discriminatory and risk stratification tools to adequately identify COVID-19 patients with poor clinical outcomes.
All the data used in the meta-analyses are available in the listed articles.
RT-PCR:
Reverse transcription polymerase chain reaction
CRP:
C-reactive protein
IL-6:
Interleukin-6
PCT:
Procalcitonin
ARDS:
Acute respiratory distress syndrome
CAR:
Chimeric antigen receptor
IQR:
Interquartile range
ES:
Effect size
std.diff:
Standardized difference
PRISMA:
Preferred reporting items for systematic review and meta-analyses
CMA:
Comprehensive meta-analyses
CI:
Confidence interval
Adhikari SP, Meng S, Wu Y-J et al (2020) Epidemiology, causes, clinical manifestation and diagnosis, prevention and control of coronavirus disease (COVID-19) during the early outbreak period: a scoping review. Infect Dis Poverty 9:29. https://doi.org/10.1186/s40249-020-00646-x
Aziz M, Fatima R, Assaly R (2020) Elevated interleukin-6 and severe COVID-19: a meta-analysis. J Med Virol 92:2283–2285. https://doi.org/10.1002/jmv.25948
Bhaskar S, Sinha A, Banach M et al (2020) Cytokine storm in COVID-19-immunopathological mechanisms, clinical considerations, and therapeutic approaches: the REPROGRAM consortium position paper. Front Immunol 11:1648. https://doi.org/10.3389/fimmu.2020.01648
Chen T, Dai Z, Mo P et al (2020) Clinical characteristics and outcomes of older patients with coronavirus disease 2019 (COVID-19) in Wuhan, China: a single-centered, retrospective study. J Gerontol A Biol Sci Med Sci 75:1788–1795. https://doi.org/10.1093/gerona/glaa089
Deng Y, Liu W, Liu K et al (2020) Clinical characteristics of fatal and recovered cases of coronavirus disease 2019 in Wuhan, China: a retrospective study. Chin Med J (engl) 133:1261–1267. https://doi.org/10.1097/CM9.0000000000000824
Coronavirus Disease (COVID-19) Situation Reports. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports. Accessed 21 June 2021
Fajgenbaum DC, June CH (2020) Cytokine storm. N Engl J Med 383:2255–2273. https://doi.org/10.1056/NEJMra2026131
Fan J, Wang H, Ye G et al (2020) Letter to the editor: low-density lipoprotein is a potential predictor of poor prognosis in patients with coronavirus disease 2019. Metabolism 107:154243. https://doi.org/10.1016/j.metabol.2020.154243
Fogarty H, Townsend L, Cheallaigh CN et al (2020) COVID19 coagulopathy in Caucasian patients. Br J Haematol 189:1044–1049. https://doi.org/10.1111/bjh.16749
Gorbalenya AE, Baker SC, Baric RS et al (2020) The species Severe acute respiratory syndrome-related coronavirus: classifying 2019-nCoV and naming it SARS-CoV-2. Nat Microbiol. https://doi.org/10.1038/s41564-020-0695-z
Henry BM, de Oliveira MHS, Benoit S et al (2020) Hematologic, biochemical and immune biomarker abnormalities associated with severe illness and mortality in coronavirus disease 2019 (COVID-19): a meta-analysis. Clin Chem Lab Med 58:1021–1028. https://doi.org/10.1515/cclm-2020-0369
Kaushik P, Kumari M, Bansal SK, Singh NK, Dawar R, Sharma M, Suri A (2021) Clash of the two titans—COVID-19 and type 2 diabetes mellitus. Curr Med Res Pract 11:39–46
Leisman DE, Ronner L, Pinotti R et al (2020) Cytokine elevation in severe and critical COVID-19: a rapid systematic review, meta-analysis, and comparison with other inflammatory syndromes. Lancet Respir Med 8:1233–1244. https://doi.org/10.1016/S2213-2600(20)30404-5
Li Q, Guan X, Wu P et al (2020) Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. N Engl J Med 382:1199–1207. https://doi.org/10.1056/NEJMoa2001316
RECOVERY Collaborative Group (2021) Tocilizumab in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial. Lancet Lond Engl 397:1637–1645. https://doi.org/10.1016/S0140-6736(21)00676-0
Tang N, Li D, Wang X, Sun Z (2020) Abnormal coagulation parameters are associated with poor prognosis in patients with novel coronavirus pneumonia. J Thromb Haemost 18:844–847. https://doi.org/10.1111/jth.14768
Wan X, Wang W, Liu J, Tong T (2014) Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med Res Methodol 14:135. https://doi.org/10.1186/1471-2288-14-135
Wang K, Zuo P, Liu Y et al (2020) Clinical and laboratory predictors of in-hospital mortality in patients with coronavirus disease-2019: a cohort study in Wuhan, China. Clin Infect Dis off Publ Infect Dis Soc Am 71:2079–2088. https://doi.org/10.1093/cid/ciaa538
Yan Y, Yang Y, Wang F et al (2020) Clinical characteristics and outcomes of patients with severe covid-19 with diabetes. BMJ Open Diabetes Res Care 8:e001343. https://doi.org/10.1136/bmjdrc-2020-001343
Zeng F, Huang Y, Guo Y et al (2020) Association of inflammatory markers with the severity of COVID-19: a meta-analysis. Int J Infect Dis 96:467–474. https://doi.org/10.1016/j.ijid.2020.05.055
Zhang J, Liu P, Wang M et al (2020) The clinical data from 19 critically ill patients with coronavirus disease 2019: a single-centered, retrospective, observational study. Z Gesundheitswissenschaften. https://doi.org/10.1007/s10389-020-01291-2
Zhou F, Yu T, Du R et al (2020) Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet Lond Engl 395:1054–1062. https://doi.org/10.1016/S0140-6736(20)30566-3
This work did not involve any funding agency.
Department of Biochemistry, SGT Medical College Hospital and Research Institute, Gurugram, Haryana, 122505, India
Arpita Suri & Naveen Kumar Singh
Department of Obstetrics and Gynaecology, All India Institute of Medical Sciences, New Delhi, India
Vanamail Perumal
AS—involved in framing hypotheses, screening and selection of the studies, quality assessment and manuscript writing. NKS—involved in screening, selection of the studies and data extraction. VP—involved in quality assessment, preparation of the PRISMA flow diagram, database management, statistical analysis and editing of the final version of the manuscript. All authors have read and approved the final version of the manuscript.
Correspondence to Vanamail Perumal.
Present work was a meta-analysis from published articles and did not have direct contact with the human subjects. Therefore, ethical clearance was not applicable.
Suri, A., Singh, N.K. & Perumal, V. Association of inflammatory biomarker abnormalities with mortality in COVID-19: a meta-analysis. Bull Natl Res Cent 46, 54 (2022). https://doi.org/10.1186/s42269-022-00733-z
Different aspects in manipulating overlapped spectra used for the analysis of trimebutine maleate and structure elucidation of its degradation products
Hayam M. Lotfy1, Eman M. Morgan (ORCID: orcid.org/0000-0002-9310-003X)1, Yasmin Mohammed Fayez2 & M. Abdelkawy1
Future Journal of Pharmaceutical Sciences volume 5, Article number: 7 (2019)
Four rapid, accurate and validated stability-indicating spectrophotometric methods are described in the present work for the analysis of trimebutine maleate (TM) in the presence of its degradation products, in its authentic form and in pharmaceutical formulations, without any separation steps.
These methods were: a dual-wavelength (DW) method, which allows the determination of TM in the presence of its degradation products at 243 nm and 269 nm; a second derivative (D2) method, measured as the peak amplitude at 268 nm; a ratio difference (RD) method at 242 nm and 278 nm; and a constant center coupled with spectrum subtraction (CC-SS) method at 242 nm and 278 nm versus 278 nm. By applying the suggested methods, TM could be quantified in the range of 5.0–60.0 μg/mL with percentage recoveries of 99.97 ± 0.40, 100.36 ± 0.58, 99.90 ± 0.42 and 100.15 ± 0.45 for the DW, D2, RD and CC-SS methods, respectively. International Conference on Harmonization guidelines were followed for validation of the described methods, and application to laboratory-prepared mixtures and to different pharmaceutical products containing the target drug gave favorable results without any contribution from additives.
The proposed and official methods were compared statistically, and satisfactory results for both accuracy and precision were obtained. The results confirm the applicability of the suggested methods for the determination of TM in quality control laboratories.
Chemically, trimebutine maleate (TM), identified as (2RS)-2-(dimethylamino)-2-phenylbutyl 3,4,5-trimethoxybenzoate (Z)-butenedioate (Fig. 1), is an antispasmodic drug that is effective in the treatment of irritable bowel syndrome, its best-known use over many years [1]. It is a noncompetitive spasmolytic agent with mild opiate receptor affinity and remarkable anti-serotonin activity. It reduces abnormal intestinal activity without altering normal GI motility and is indicated for the treatment and relief of spastic colon symptoms. Moreover, it is also used to treat patients with postoperative paralytic ileus following abdominal surgery [2, 3].
Chemical structure of trimebutine maleate (TM)
Trimebutine maleate (TM) is an official drug presented in the British Pharmacopoeia [1]. A literature review revealed that several analytical methods have been published for the quantitative analysis of TM in pharmaceutical products and in physiological fluids. These include a few simple and direct UV-spectrophotometric methods for the determination of trimebutine maleate in the presence of its degradation products using first derivative and first derivative of ratio spectra spectrophotometry [4]; several visible spectrophotometric methods for the determination of TM in mixtures with other pharmaceutical drugs using different coloring agents under different reaction conditions [5,6,7,8,9]; high-performance liquid chromatographic methods [4, 10,11,12,13,14,15,16,17,18,19]; and electrochemical methods [20,21,22,23].
Stability is considered one of the most important criteria in pharmaceutical quality control: only stable pharmaceutical drugs guarantee accurate delivery of the drug to the patient. The expiration date of any drug formulation depends on scientific studies under ordinary and stressed conditions. The International Conference on Harmonization (ICH) [24] stability guideline suggests that it is important to investigate the inherent stability properties of the drug product through stress studies, for instance hydrolysis; the identification and determination of the resulting degradation products then support the developed stability-indicating analytical method.
Trimebutine maleate (TM), being an ester-type antispasmodic drug, degrades readily. It has been reported to undergo hydrolysis under both alkaline and acidic conditions to yield the same degradation products, namely 2-(dimethylamino)-2-phenylbutanol and 3,4,5-trimethoxybenzoic acid [4]. The degradation products used in this work were prepared by alkaline hydrolysis as described in [4]: 100 mL of 0.1 M NaOH at 100 °C for only 30 min was enough for complete degradation, whereas acid hydrolysis required 100 mL of 1 M HCl at 100 °C for 12 h, even though it produces the same degradation products. It is therefore time-saving to adopt the alkaline hydrolysis. Consequently, the proposed methods can be applied to the analysis of TM in the presence of either alkali- or acid-induced degradation products.
The ultimate target of this study is to develop and validate stability-indicating UV-spectrophotometric methods that are simple, rapid, selective, and low in cost and time for the analysis of trimebutine maleate in the presence of its degradation products, in its authentic form and in market samples, without any separation steps, through various manipulating pathways, so as to obtain results characterized by high accuracy, precision and sensitivity for the substance of interest.
Apparatus and software
The following are the apparatus and software used:
A double beam UV/VIS spectrophotometer (Shimadzu, Japan) UV/VIS model UV-1800 PC with 1-cm path length quartz cell. The width of the spectral band was 2 nm, and the scanning speed was 2800 nm/min.
An IR spectrophotometer (Shimadzu 435, Kyoto, Japan).
A mass spectrometer: MS-QB 1000 EX, Finnigan MAT (USA).
Thin-layer chromatography (TLC) plates (20 cm × 20 cm) coated with silica gel 60 F254 (Merck, Germany).
Sonicator (Elmasonic – S30H), Germany.
UV lamp with short wavelength 254 nm (USA).
Magnetic stirrer, Bandelin Sonorox, Rx5l0S (Budapest, Hungary).
Materials and solvents
Pure sample
Trimebutine maleate (TM) pure samples were kindly provided by the National Organization for Drug Control and Research (NODCAR), Egypt. Their purity was found to be 99.73 ± 0.86% with reference to the official HPLC method [1].
Pharmaceutical formulations

Gast-reg® 100 mg tablets (Batch No. 144695), 200 mg tablets (Batch No. 162912), ampoules 50 mg/5 mL (Batch No. 154534) and suspension 24 mg/5 mL (Batch No. 163650) were manufactured by Amoun Pharmaceuticals Co., Egypt.
The following are the chemicals of analytical grade used in this study:
Concentrated hydrochloric acid solution (Adwic, Egypt) used to prepare 1 M HCl
Sodium hydroxide pellets (Adwic, Egypt) used to prepare 0.1 M NaOH
Isopropanol (SDFCL, India)
Concentrated ammonia solution (Teba, Egypt)
De-ionized water (Egypt Otsuka Pharmaceutical Co., SAE, Egypt)
Methanol (Adwic, Egypt)
Chloroform (Adwic, Egypt)
Standard solutions
Standard stock solutions
TM standard stock solutions (1.0 mg/mL) and the stock solution of its degradation products (derived from 1.0 mg/mL) were prepared in 100-mL measuring flasks by dissolving 100 mg of each in methanol. The volumes were then completed to the mark with the same solvent, and the solutions were kept in the refrigerator.
Standard working solutions
The working solutions were prepared freshly from the stock solutions by dilution using methanol to achieve a concentration of 100.0 μg/mL for each component.
Preparation of degradation products of TM (derived from 1 mg/mL)
TM degradation products were prepared by refluxing 0.1 g of TM with 10 mL of 0.1 M sodium hydroxide at 100 °C following the reported method [4]. Complete degradation was achieved by refluxing for exactly 30 min, as confirmed by TLC through the disappearance of the drug spot under a UV lamp at 254 nm, using isopropanol-water-ammonia (14:4:2, by volume) as the developing system. A potentiometer was used to adjust the pH of the solution to 4.5 ± 0.02 with portions of 1.0 M hydrochloric acid; this pH was found adequate for precipitating the degradation products. The precipitate was then filtered and dried under vacuum with the aid of a separating funnel. Analysis of the dried precipitate by UV spectroscopy, IR and mass spectrometry confirmed that it was DEG 1. The filtrate was washed several times, each time with 10.0 mL of chloroform. The washed aqueous extract was evaporated and dried under vacuum; the residue was analyzed by IR and mass spectrometry and confirmed to be DEG 2. A 1 mg/mL stock solution of the degradation products (derived from the intact drug) was then prepared by dissolving the whole content of the solution, after neutralization, in 100 mL of methanol.
Spectrophotometric methods
Spectral characteristics
The absorption spectra of 30.0 μg/mL of both TM and its degradation products in methanol were recorded over the UV range 200–400 nm using methanol as a blank.
Construction of calibration graphs
Aliquots equivalent to 50.0–600.0 μg of TM were accurately transferred from the corresponding working solution (100.0 μg/mL) into a series of 10-mL measuring flasks, and the volumes were completed to the mark with methanol to give final concentrations of 5.0–60.0 μg/mL. The absorption spectra of the resulting solutions were scanned from 200 to 400 nm and stored in the computer. The averages of three experiments were used to construct the calibration graph for each method, and the regression equations were computed.
Manipulating steps on zero-order absorption spectra
Dual-wavelength (DW) method
The absorbances of the stored spectra were recorded at 243 nm and 269 nm. The calibration curve was constructed by plotting the differences between the recorded absorbances against the corresponding concentrations, using the average of three experiments, and the regression equation was computed.
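A minimal sketch of this dual-wavelength calibration is given below, assuming the stored spectra are available as a NumPy array with one row per standard; the wavelength indices are located by nearest match, and all inputs are assumed measured data rather than values from the paper.

```python
import numpy as np

def dual_wavelength_calibration(wavelengths, spectra, concentrations):
    """Regress Delta-A = A(243 nm) - A(269 nm) on TM concentration."""
    i243 = int(np.argmin(np.abs(wavelengths - 243.0)))
    i269 = int(np.argmin(np.abs(wavelengths - 269.0)))
    delta_a = spectra[:, i243] - spectra[:, i269]
    slope, intercept = np.polyfit(concentrations, delta_a, 1)
    return slope, intercept  # for an unknown: c = (delta_a - intercept) / slope
```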
Second derivative method
The stored second derivative (D2) spectra of TM were obtained over the range 240–300 nm against a solvent blank using Δλ = 8 nm and a scaling factor of 100. The peak amplitude at 268 nm (a zero-crossing point of the degradation products) was plotted against the corresponding concentrations of TM to construct the calibration curve, and the regression equation was computed.
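The D2 spectra here are generated by the instrument software; a comparable numerical route, sketched below, is a Savitzky-Golay second derivative from which the amplitude at 268 nm is read off. The window length and polynomial order are assumptions, not the instrument's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

def d2_amplitude_at(wavelengths, absorbance, target_nm=268.0):
    """Savitzky-Golay second derivative, read at the target wavelength."""
    step = float(wavelengths[1] - wavelengths[0])  # assumes a uniform grid
    d2 = savgol_filter(absorbance, window_length=9, polyorder=3,
                       deriv=2, delta=step)
    return d2[int(np.argmin(np.abs(wavelengths - target_nm)))]
```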
Manipulating steps on ratio spectra
Ratio difference (RD) method
In this method, ratio spectra were obtained by dividing the zero-order absorption spectra of TM (5.0–60.0 μg/mL) by the absorption spectrum of 30.0 μg/mL of degradation products used as a divisor. The differences between the amplitudes of the stored ratio spectra at 242 nm and 278 nm were plotted against the corresponding concentrations of TM to construct the calibration curve, and the regression equation was computed.
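The ratio-difference measurement itself reduces to a few lines, sketched below: divide each spectrum by the stored divisor spectrum (30.0 μg/mL of degradation products) and take the amplitude difference at 242 nm and 278 nm. The spectra and wavelength grid are assumed inputs.

```python
import numpy as np

def ratio_difference(wavelengths, sample_spectrum, divisor_spectrum):
    """Amplitude difference of the ratio spectrum at 242 nm and 278 nm."""
    ratio = sample_spectrum / divisor_spectrum
    i242 = int(np.argmin(np.abs(wavelengths - 242.0)))
    i278 = int(np.argmin(np.abs(wavelengths - 278.0)))
    return ratio[i242] - ratio[i278]
```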
Constant center coupled with spectrum subtraction (CC-SS) method
This method resembles the ratio difference method only at the beginning of the manipulating steps, as the same ratio spectra were used with the same divisor (30.0 μg/mL of the degradation products). Subsequently, two calibration curves were obtained: one relating the zero-order absorbance of TM at 266 nm to its corresponding concentrations, and the other relating the difference between the amplitudes of the ratio spectra at 242 nm and 278 nm to the amplitudes at 278 nm. Two regression equations were thus computed.
Analysis of laboratory-prepared mixtures
Different laboratory-prepared mixtures containing different ratios of the target drug and its degradation products (5.0–85.0%) were prepared using different aliquots of the TM working solution (100.0 μg/mL) and its degradation products (derived from 100.0 μg/mL). These aliquots were added to a series of 10-mL measuring flasks, the volumes were made up to the mark with methanol with thorough mixing, and the mixtures were measured throughout the UV range. The instructions described under each spectrophotometric method were then followed, and the concentration of TM in each mixture was obtained from the corresponding regression equation.
Application on pharmaceutical formulations
For Gast-reg® (100 mg and 200 mg) tablets
Twenty tablets each of Gast-reg® 100 mg and 200 mg tablets were weighed separately with high accuracy and then finely powdered. An amount exactly equivalent to 100.0 mg of powder was transferred to a 100-mL beaker with 50 mL of methanol. The solution was sonicated in an ultrasonic bath for about 5 min, then filtered into a 100-mL measuring flask, and the volume was made up to the mark with water to give a concentration of 1.0 mg/mL.
For Gast-reg® ampoules (50 mg/5 mL)
The contents of five Gast-reg ampoules were mixed; 5.0 mL of the pooled content was transferred accurately to a 50-mL measuring flask and made up to the mark with methanol to attain a final concentration of 1.0 mg/mL. Further dilution was made by transferring 2.5 mL into a 25-mL measuring flask to attain a working solution of 100.0 μg/mL.
For Gast-reg® suspension (24 mg/5 mL)
Gast-reg suspension was constituted in 70 mL of distilled water and mixed well to attain a final concentration of 24.0 mg of drug/5 mL. Five milliliters of the resulting suspension was transferred quantitatively into a 50-mL measuring flask, some methanol was added, and the mixture was filtered through a filter paper. After complete filtration, the solution was made up to the mark with methanol to attain a final concentration claimed to be 0.48 mg/mL.
Appropriate dilutions were made for the analyzed formulations to obtain solutions with final concentrations claimed to be 20.0 μg/mL of TM for Gast-reg® tablets and ampoules, and 24.0 μg/mL of TM for Gast-reg® suspension. The solutions were then analyzed using the procedures described under the proposed methods, and the regression equations were used to calculate the concentrations of TM.
According to ICH, the accuracy of the proposed spectrophotometric methods for the analysis of the different pharmaceutical formulations was determined using the standard addition technique for Gast-reg® 200 mg tablets, ampoules and suspension, and using a comparative study with a reported method [11] for Gast-reg® 100 mg tablets.
The International Conference on Harmonization (ICH) guideline "Stability testing of new drug substances and products" demands that stress testing be carried out to illustrate the stability characteristics of the active compound [24]. A typical stability-indicating method is one that determines the standard drug alone and analyzes its degradation products. Trimebutine maleate (TM) undergoes hydrolysis under alkaline conditions according to a previously reported study [4]. This work was extended to confirm that investigation: complete degradation was achieved by refluxing pure TM with 0.1 M NaOH for 30 min at 100 °C (Fig. 2). TLC of the intact drug and degradation products using isopropanol-water-ammonia (14:4:2, by volume) as the developing system showed that the potential degradation products were isolated, giving two new spots at Rf = 0.02 and Rf = 0.46, distinct from that of the intact drug (Rf = 0.81). The IR spectrum of intact TM shows a stretching band at 1750 cm−1 that disappears in the IR spectra of both degradation products, confirming cleavage at the ester linkage. In addition, the appearance of an acid carbonyl band at 1600 cm−1 implies the structure of DEG 1, which contains a carbonyl group, while a broad band at 3000 cm−1 indicates the alcoholic OH found in the structure of DEG 2 (Fig. 3). In the MS chart, the parent peak of the main drug TM was identified at m/z 503 (mol. wt of TM), and those of the isolated degradation products at m/z 212 (mol. wt of 3,4,5-trimethoxybenzoic acid, "DEG 1") and m/z 193 (mol. wt of 2-(dimethylamino)-2-phenylbutanol, "DEG 2") (Fig. 4).
Proposed scheme for preparing the alkali-induced degradation products of trimebutine maleate
IR spectrum of trimebutine maleate; 3,4,5-trimethoxy benzoic acid (DEG 1); and 2-(dimethylamino)-2-phenylbutanol (DEG 2)
MS chart of trimebutine maleate; 3,4,5-trimethoxy benzoic acid (DEG 1); and 2-(dimethylamino)-2-phenylbutanol (DEG 2)
The objective of this work is to develop different smart stability-indicating spectrophotometric methods for the specific estimation of TM in the presence of its degradation products in its authentic form, in laboratory-prepared mixtures and in different pharmaceutical products, using several manipulating steps for either direct measurement on zero-order absorption spectra or manipulation using ratio or derivative spectra.
UV spectra of 30.0 μg/mL each of TM, DEG 1 and DEG 2 were scanned (Fig. 5), revealing that trimebutine maleate has a λmax at 266 nm and DEG 1 at 256 nm, while DEG 2 has a λmax at 200 nm with a poor absorbance spectrum owing to the weak conjugation in its structure. This poor absorbance explains why DEG 2 is difficult to determine by different analytical methods.
Zero-order absorption spectra of 30 μg/mL trimebutine maleate, 30 μg/mL DEG 1, and 30 μg/mL DEG 2 in methanol
Dual-wavelength method
The core of the dual-wavelength method is that the difference in absorbance between two selected points on the mixture spectra is directly proportional to the concentration of component X only, being zero for component Y. To determine TM, two wavelengths (243 nm and 269 nm) were chosen so that the absorbance difference between them is directly proportional only to the concentration of TM and equals zero for the degradation products (Fig. 6). Under optimum spectrophotometric conditions, a linear relationship was achieved between the absorbance difference ΔA (243 nm and 269 nm) and the corresponding concentration of TM over the range 5.0–60.0 μg/mL, and the regression equation was computed, as listed in Table 1.
Zero-order absorption spectra of 30.00 μg/mL of trimebutine maleate and 30.00 μg/mL of its alkali-induced degradation products showing DW method
Table 1 Regression parameters and validation sheet for the determination of pure samples of trimebutine maleate by the proposed UV-spectrophotometric methods
This method has the advantages of minimal error and high accuracy, and it saves effort and time. Its main drawback is the restriction on the selected wavelengths, which must show constant absorbance of the interfering compound; the measurement of the absorbance of the target drug is therefore critical, and any minute change in the selected wavelengths will produce unsatisfactory results with poor reproducibility.
Derivative spectrophotometry is an unpretentious, low-cost analytical technique of high utility for producing both qualitative and quantitative information from unresolved band spectra. It increases sensitivity and selectivity, but its disadvantages are the critical measurement at a certain wavelength and its reliance on instrumental parameters such as scan speed and slit width, since the parameters used to obtain the zero-order absorption spectrum strongly affect the shape and intensity of its derivative generations. Selectivity is the key in this method: without any contribution from the degradation products, it depends on measuring the peak amplitude of the D2 spectrum of TM, using a smoothing factor with Δλ = 8 nm and a scaling factor of 100, at 268 nm, a zero-crossing point of the degradation products' spectrum (Fig. 7). Under optimum spectrophotometric conditions, a linear relationship was achieved between the peak amplitude of TM at 268 nm and its corresponding concentration over the range 5.0–60.0 μg/mL, and the regression equation was computed, as listed in Table 1.
The second derivative spectrum of 30.00 μg/mL of trimebutine maleate and 30.00 μg/mL of its alkali-induced degradation products, measured at 268 nm at the zero-crossing point
Ratio difference method
In the RD method, the absorption spectrum of the main drug is divided by the absorption spectrum of the substance to be canceled, and the obtained ratio spectrum represents TM/DEG + constant. The method was applied to resolve the overlapping spectra of TM and its degradation products, where the interfering substance is canceled and therefore shows no contribution. The choice of a suitable divisor and of the two selected wavelengths are the main steps that influence the ratio difference method. A suitable divisor should balance lower noise against higher sensitivity, and the chosen wavelengths should show a high amplitude difference for the drug in the ratio spectrum, while the contribution of the degradation products remains constant. Consequently, the amplitudes at 242 nm and 278 nm were chosen for the determination of TM using 30.0 μg/mL of degradation products as the divisor (Fig. 8).
Ratio spectra of 30.00 μg/mL of trimebutine maleate and its laboratory-prepared mixture using 30.00 μg/mL of its alkali-induced degradation products as a divisor
Under optimum spectrophotometric conditions, a linear relationship was attained between the amplitude differences of the ratio spectra at 242 nm and 278 nm and the concentrations of TM in the range 5.0–60.0 μg/mL, and the regression equation was computed. Substituting in the regression equation gives the concentration of TM, as listed in Table 1. Without the need for derivatization, the obtained constant is removed, together with any instrumental error, upon subtraction of the recorded amplitudes at the selected wavelengths; the signal-to-noise ratio is thereby reinforced. The ratio difference method is appropriate for the determination of TM in the presence of its degradation products because of its ease of application, the minimal data manipulation required, and its high reproducibility and robustness, which allow the analysis of the drug and its degradation products despite their overlapped spectra; it can therefore be applied in quality control testing with exceedingly pleasing results. Complete elimination of the degradation products was obtained in the form of a constant.
Constant center coupled with spectrum subtraction method
The constant center method has the advantage of recovering the original spectrum of TM. In this method, ratio spectra were obtained by dividing the absorption spectra of TM by that of a certain concentration of its degradation products. A suitable divisor should balance lower noise against higher sensitivity; a divisor concentration of 30.0 μg/mL of the degradation products produced preferable results in terms of average recovery percent when used for the analysis of TM concentrations. The interfering substance was removed by applying the ratio difference at the two chosen wavelengths of the ratio spectra of the target drug, and it therefore showed no contribution. The two selected wavelengths were 242 nm and 278 nm, chosen on the basis of the contributions of both the target drug and the interfering substance (Fig. 8).
For the determination of TM in the laboratory-prepared mixtures, the zero-order absorption spectra of the mixtures were scanned and their ratio spectra were obtained using 30.0 μg/mL of degradation products as a divisor; the recorded amplitude at 278 nm was measured for each mixture. The postulated amplitude at 278 nm can be computed mathematically from the equation describing the linear relationship between the ratio difference of the ratio spectra at 242 nm and 278 nm (where the degradation products are canceled) and the corresponding ratio amplitudes at 278 nm (Fig. 9):
Linear relationship between ∆P (278 nm and 242 nm) on the y-axis and P (278 nm) on the x-axis
$$P_1 - P_2 = 0.7255\,P_2 + 0.0079 \qquad r = 1.0000$$
where P1 and P2 are the postulated amplitudes at 242 nm and 278 nm, respectively, and r is the correlation coefficient. The constant value was determined by observing its influence on the amplitude of the ratio spectrum of TM at 278 nm (ΔP = recorded − postulated); that is, the constant value was computed mathematically as the difference between the recorded and postulated amplitudes at this wavelength:
$$\begin{aligned} \mathrm{C.V} &= P_{\text{recorded}} - P_{\text{postulated}} \\ P(\mathrm{DEG}/\mathrm{DEG}') &= P(\mathrm{TM}/\mathrm{DEG}' + \mathrm{DEG}/\mathrm{DEG}') - P(\mathrm{TM}/\mathrm{DEG}') \end{aligned}$$
where C.V is the constant value.
Precorded is the recorded amplitude of the ratio spectrum of the laboratory-prepared mixture after being divided by 30.0 μg/mL of degradation products at 278 nm.
Ppostulated is the amplitude calculated from the above regression equation.
The obtained constant values were multiplied by the absorption spectrum of the divisor (30 μg/mL of degradation products) to obtain the zero-order absorption spectrum of the degradation products, which was then subtracted from the zero-order absorption spectrum of the mixture to give the zero-order absorption spectrum of TM. Under optimum spectrophotometric conditions, the concentration of TM in the mixture could be calculated by substitution in the regression equation obtained by plotting the absorbance at its λmax (266 nm) against the corresponding concentration in the range 5.0–60.0 μg/mL, as listed in Table 1.
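The whole CC-SS recovery for one mixture spectrum can be sketched as below. The regression coefficients are the ones reported above; inverting that line to obtain the postulated amplitude at 278 nm is an interpretation of the described procedure, so treat this as an illustrative reconstruction rather than the authors' exact computation.

```python
import numpy as np

def cc_ss_recover_tm(wavelengths, mixture_spectrum, divisor_spectrum):
    """Recover the zero-order TM spectrum from a mixture spectrum."""
    ratio = mixture_spectrum / divisor_spectrum
    i242 = int(np.argmin(np.abs(wavelengths - 242.0)))
    i278 = int(np.argmin(np.abs(wavelengths - 278.0)))
    delta_p = ratio[i242] - ratio[i278]            # degradation part cancels
    p278_postulated = (delta_p - 0.0079) / 0.7255  # invert the reported line
    constant = ratio[i278] - p278_postulated       # C.V = recorded - postulated
    deg_zero_order = constant * divisor_spectrum   # zero-order degradation part
    return mixture_spectrum - deg_zero_order       # zero-order TM spectrum
```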
The constant center method provides many alluring features, such as maximum resolution, since the zero-order absorption spectrum of the target drug is recovered, which can act as a defined fingerprint, along with enhanced sensitivity and specificity in mixture analysis.
Method validation
Method validation was accomplished according to ICH guidelines [25] for all the proposed methods as follows.
Linearity range
The linearity of the methods was examined by constructing different calibration curves on three consecutive days. The calibration curves were set up within concentration ranges chosen on the basis of the expected drug concentration during the assay of the dosage form. Each concentration was repeated three times. All the validation parameters are listed in Table 1.
The accuracy of the presented methods was examined by analyzing three samples of TM standard solutions. The recovery percentages are listed in Table 1, and the results showed the high accuracy of the suggested methods.
Repeatability and intermediate precision (intra-day and inter-day precision), expressed as RSD, were obtained by analyzing three different concentrations of the target drug within the linearity range on a single day and on three following days, as shown in Table 1.
Specificity was checked by analyzing different mixtures of the drug and its degradation products in several ratios within the linearity range. Favorable percentage recoveries with minimal standard deviations were obtained, as presented in Table 2. The results of the analysis of the different pharmaceutical formulations are illustrated in Table 3. According to ICH, the accuracy of the proposed methods for pharmaceutical formulations was checked for Gast-reg® 200 mg tablets, suspension and ampoules through the standard addition technique, i.e., the recovery attained when spiking the sample solution of the pharmaceutical formulation with known concentrations of the intact drug, as illustrated in Table 4. A comparative study with a reported method [11] was made for Gast-reg® 100 mg tablets, as illustrated in Table 5.
Table 2 Determination of the studied drug trimebutine maleate in laboratory-prepared mixtures with its alkali-induced degradation product by the proposed UV-spectrophotometric methods
Table 3 Determination of trimebutine maleate in pharmaceutical formulations by the proposed UV-spectrophotometric methods
Table 4 Standard addition technique for Gast-reg® 200 mg tablets, ampoules, and suspension by the proposed UV-spectrophotometric methods
Table 5 Statistical comparison between the obtained results of the analysis of Gast-reg® 100 mg tablets (Batch No. 144695) by the proposed UV-spectrophotometric method and the reported method
The suggested methods and the official HPLC method for TM were compared statistically using Student's t test and F values; the comparison revealed no considerable difference between the official HPLC method and the experimental values obtained in the pure sample analysis by the proposed methods, indicating comparable accuracy and precision, as illustrated in Table 6. The developed UV-spectrophotometric methods and the official HPLC method were also compared using one-way ANOVA, where the p value (0.315) is higher than 0.05 and the calculated F (1.247) is lower than the tabulated F (2.728) at p = 0.05, indicating no critical difference between the suggested methods and the official method, as illustrated in Table 7.
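A sketch of these comparisons with SciPy is given below; the recovery arrays are placeholders for the per-method results, not the paper's data.

```python
import numpy as np
from scipy import stats

def compare_with_official(official, proposed):
    """Student's t test and variance-ratio F test against the official method."""
    t_stat, t_p = stats.ttest_ind(official, proposed)
    f_stat = np.var(proposed, ddof=1) / np.var(official, ddof=1)
    f_p = stats.f.sf(f_stat, len(proposed) - 1, len(official) - 1)  # one-sided
    return (t_stat, t_p), (f_stat, f_p)

# One-way ANOVA across all methods and the official method, e.g.:
# stats.f_oneway(recoveries_dw, recoveries_d2, recoveries_rd,
#                recoveries_ccss, recoveries_official)
```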
Table 6 Statistical comparison between the obtained results of the proposed UV-spectrophotometric methods and the official method of trimebutine maleate in its pure powdered form
Table 7 One-way ANOVA testing for the different proposed UV-spectrophotometric methods and the official method of trimebutine maleate in its pure powdered form
This work introduced different smart and unpretentious stability-indicating UV-spectrophotometric methods for the concurrent determination of TM in the presence of its degradation products in its authentic form, in laboratory-prepared mixtures and in different pharmaceutical products, using several manipulating pathways for either direct measurement on zero-order absorption spectra or manipulation using ratio or derivative spectra. The proposed methods are time-saving and economical, since no toxic organic solvents are used, in contrast to the chromatographic methods. The developed methods do not require exhaustive treatment, additional sophisticated software or special computer programs, which enables their application to resolving complex mixtures in quality control laboratories. In addition, they are direct and simple, and need no separation, specific detector or pretreatment steps. Validation of the proposed methods was performed with reference to the ICH guidelines regarding linearity, range, accuracy, precision and specificity, and all the obtained results were within the acceptable range. The results were satisfactory compared with those of the official method [1] for the pure powdered form of TM using Student's t test, the variance-ratio F test at the 95% confidence level and one-way ANOVA, where the calculated values were lower than the tabulated ones, indicating no influential difference in accuracy or precision. The results confirm the applicability of the suggested methods for the routine determination of the drug of interest, TM, in quality control laboratories.
ANOVA:
Analysis of variance
CC-SS:
Constant center coupled with spectrum subtraction
C.V:
Constant value
D2:
Second derivative
DEG:
Degradation product
DW:
Dual-wavelength
HPLC:
High-performance liquid chromatography
ICH:
International Conference on Harmonization
NODCAR:
National Organization for Drug Control and Research
RD:
Ratio difference
RSD:
Relative standard deviation
TLC:
Thin-layer chromatography
TM:
Trimebutine maleate
British Pharmacopoeia (2015) H.M. Stationery Office, London, Volume II, pp. 1104–1106
Delvaux M, Wingate D (1997) Trimebutine: mechanism of action, effects on gastrointestinal function and clinical results. J Int Med Res 25(5):225–246
Lacy CF, Armstrong LL, Goldman MP, Lance LL (2005) Drug information handbook: with Canadian and international drug monographs, 13th edn. Lexi-Comp Inc, pp 1595–1596
El-Gindy A, Emara S, Hadad GM (2003) Spectrophotometric and liquid chromatographic determination of trimebutine maleate in the presence of its degradation products. J Pharm Biomed Anal. 33(2):231–241
Ayad M et al (2016) Spectrophotometric determination of tiemonium methyl sulfate, itopride hydrochloride and trimebutine maleate via ion pair complex formation and oxidation reaction. Indian J Adv Chem Sci 4(1):85–97
Elqudaby HM, Mohamed GG, El-Din GM (2014) Analytical studies on the charge transfer complexes of loperamide hydrochloride and trimebutine drugs. Spectroscopic and thermal characterization of CT complexes. Spectrochim Acta A Mol Biomol Spectrosc 129:84–95
Shaban M (2002) Spectrophotometric determination of trimebutine through ion-pair and charge-transfer complexation reactions. Sci Pharm 70(4):341–351
El-Shiekh R, Zahran F, Gouda AAE-F (2007) Spectrophotometric determination of some anti-tussive and anti-spasmodic drugs through ion-pair complex formation with thiocyanate and cobalt (II) or molybdenum (V). Spectrochim Acta A Mol Biomol Spectrosc 66(4-5):1279–1287
Abdel-Gawad FM (1998) Ion-pair formation of Bi (III)–iodide with some nitrogenous drugs and its application to pharmaceutical preparations. J Pharmaceut Biomed 16(5):793–799
Joo E-H et al (1999) High-performance liquid chromatographic determination of trimebutine and its major metabolite, N-monodesmethyl trimebutine, in rat and human plasma. J Chromatogr B Biomed Sci Appl 723(1-2):239–246
Appala Raju V, ABM, Pathi PJ, Raju NA (2013) Estimation of trimebutine maleate in tablet dosage form by RP-HPLC. J Pharm Sci Innov 3:99–101
Wang W (2003) RP-ion pair HPLC determination of trimebutine maleate tablets and its related compounds. Chin J Pharm Anal 23(2):111–113
Wang H et al (2002) Quantitative determination of trimebutine maleate and its three metabolites in human plasma by liquid chromatography-tandem mass spectrometry. J Chromatogr B. 779(2):173–187
Astier A, Deutsch A (1981) Quantitative high-performance liquid chromatographic determination of antispasmodic trimebutine in human plasma: pharmacokinetic studies after intravenous administration in humans. J Chromatogr 224(1):149–155
Lin Z-h, Song M (2001) RP-HPLC determination of related impurities in trimebutine maleate and assay of components in its preparation. Chin J Pharm Anal 21(1):25–27
Sanjiu J et al (2002) Determination of trimebutine maleate tables by HPLC. Chin J Modern Appl Pharm 5:21
Lavit M et al (2000) Determination of trimebutine and desmethyl-trimebutine in human plasma by HPLC. Arzneimittelforschung. 50(07):640–644
Yong-Yu P et al (2007) Enantiomeric separation of trimebutine maleate and ondansetron on amylose chiral stationary phase. Chin J Anal Chem 35(6):880
Zekai H et al (2011) Study on detection method of impurity 3, 4, 5-trimethoxy benzoic acid in trimebutine maleate sustained-release tablets. China Pharmaceuticals. 12:37–38
Elqudaby HM, Mohamed GG, El Din GM (2013) Utilization of phosphotungestic acid in the conductometric determination of loperamide hydrochloride and trimebutine antidiarrhea drugs. J Pharm Res 7(8):686–691
Ayad M et al (2016) Conductometric determination of tiemonium methylsulfate, alizapride hydrochloride, trimebutine maleate using rose bengal, ammonium reineckate and phosphotungstic acid. Indian J Adv Chem Sci 4(2):149–159
Adhoum N, Monser L (2005) Determination of trimebutine in pharmaceuticals by differential pulse voltammetry at a glassy carbon electrode. J Pharmaceut Biomed 38(4):619–623
Elqudaby H, Mohamed GG, El Din GM (2014) Electrochemical behaviour of trimebutine at activated glassy carbon electrode and its direct determination in urine and pharmaceutics by square wave and differential pulse voltammetry. Int J Electrochemical Sci 9:856–869
ICH (1993) Stability testing of new drug substances and products. Geneva.
ICH (1997) Q2B, Note for guidance on validation of analytical methods methodology, International Conference on Harmonization. IFPMA
How do we decide how many representatives there are for each state?
by David Lowry-Duda Posted on April 3, 2019
The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have.
But what does this really mean?
If we looked at the US Constitution to make this clear, we would find little help. The third clause of Article I, Section II of the Constitution says
Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.
This doesn't give clarity.[1] In fact, uncertainty surrounding proper apportionment of representatives led to the first presidential veto.
The Apportionment Act of 1792
According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.[2]
When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 = 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000, so they divided the total population by 30000 and rounded down, getting 3615983/30000 ≈ 120.5.
Thus there were to be 120 representatives. If one takes each state and divides its population by 30000, one sees that the states should get the following numbers of representatives:[3]
State ideal rounded_down
Vermont 2.851 2
NewHampshire 4.727 4
Maine 3.218 3
Massachusetts 12.62 12
RhodeIsland 2.281 2
Connecticut 7.894 7
NewYork 11.05 11
NewJersey 5.985 5
Pennsylvania 14.42 14
Delaware 1.851 1
Maryland 9.283 9
Virginia 21.01 21
Kentucky 2.290 2
NorthCarolina 11.78 11
SouthCarolina 6.874 6
Georgia 2.361 2
But here is a problem: the total number of rounded down representatives is only 112. So there are 8 more representatives to give out. How did they decide which to assign these representatives to? They chose the 8 states with the largest fractional "ideal" parts:
New Jersey (0.985)
Connecticut (0.894)
South Carolina (0.874)
Vermont (0.851)
Delaware (0.851)
Massachusetts+Maine (0.838)
North Carolina (0.78)
New Hampshire (0.727)
(Maine was part of Massachusetts at the time, which is why I combine their fractional parts). Thus the originally proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion?
Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)?
There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, Is it not unfair that the fractional apportionment favours the North?[4]
Regardless of the exact reasoning, the Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the Presidential veto.
Afterwards, Congress got together and decided on starting with 33000 people per representative and ignoring fractional parts entirely. The exact method became known as the Jefferson Method of Apportionment, and was used in the US until 1830. The subtle part of the method involves deciding on the number 33000. In the US, the exact number of representatives sometimes changed from election to election. This number is closely related to the population-per-representative, but these were often chosen through political maneuvering as opposed to exact decision.
As an aside, it's interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.[5] In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.
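For concreteness, here is a minimal sketch of the two schemes described above (my own illustration; the function names and the dict-of-populations interface are assumptions, and Hamilton's quota is written as the standard population share of the house size, whereas the 1792 bill effectively used a fixed ratio of one per 30000 before distributing remainders):

```python
from math import floor

def hamilton(populations, seats):
    """Largest-remainder scheme: round quotas down, then hand the leftover
    seats to the states with the largest fractional parts."""
    total = sum(populations.values())
    quotas = {s: p * seats / total for s, p in populations.items()}
    reps = {s: floor(q) for s, q in quotas.items()}
    leftover = seats - sum(reps.values())
    for s in sorted(quotas, key=lambda s: quotas[s] - floor(quotas[s]),
                    reverse=True)[:leftover]:
        reps[s] += 1
    return reps

def jefferson(populations, divisor):
    """Jefferson's scheme: fix a divisor (33000 in 1792) and ignore fractions,
    subject to the constitutional minimum of one seat per state."""
    return {s: max(1, floor(p / divisor)) for s, p in populations.items()}
```

In Jefferson's scheme all the subtlety hides in the choice of divisor, which is exactly where the political maneuvering described above entered.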
Measuring the fairness of an apportionment method
At the core of different ideas for apportionment is fairness. How can we decide if an apportionment is fair?
We'll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.[6]
So given that there are 435 seats to apportion, how might we decide if an apportionment is fair? Fundamentally, this should relate to the number of people each representative actually represents.
For example, in the 1792 apportionment, the single Delawaran representative was there to represent all 55000 of its population, while each of the two Rhode Island representatives corresponded to 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawaran only counted 61 percent as much as the voice of each Rhode Islander.[7]
The number of people each representative actually represents is at the core of the notion of fairness — but even then, it's not obvious.
Suppose we enumerate the states, so that $S_i$ refers to state i. We'll also denote by $P_i$ the population of state i, and we'll let $R_i$ denote the number of representatives allotted to state i.
In the ideal scenario, every representative would represent the exact same number of people. That is, we would have
$$\text{pop. per rep. in state i}
= \frac{P_i}{R_i}
= \frac{P_j}{R_j}
= \text{pop. per rep. in state j}$$
for every pair of states i and j. But this won't ever happen in practice.
Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If
$$\frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1}$$
then we can say that each representative in state i represents more people, and thus those people have a diluted vote.
Amounts of Inequality
There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.
A few natural ideas emerge:
We might try to minimize the differences of constituency size: $\left\lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right\rvert$.
We might try to minimize the differences in per capita representation: $\left\lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right\rvert$.
We might take overall size into account, and try to minimize both the relative constituency size and relative difference in per capita representation.
This last one needs a bit of explanation. Define the relative difference between two numbers x and y to be
$$\frac{\lvert x - y \rvert}{\min(x, y)}.$$
Suppose that for a pair of states, we have that $(1)$ holds, i.e. that representatives in state j have smaller constituencies than in state i (and therefore people in state j have more powerful votes). Then the relative difference in constituency size is
$$\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1.$$
The relative difference in per capita representation is
$$\frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 = \frac{P_i/R_i}{P_j/R_j} - 1.$$
Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.
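As a quick sanity check of this identity, here is the computation with the rounded Delaware and Rhode Island figures quoted earlier (numbers illustrative only):

```python
# Approximate 1792 figures (weighted population, seats).
P_i, R_i = 55_000, 1   # Delaware
P_j, R_j = 68_000, 2   # Rhode Island (34000 people per representative)

rel_constituency = (P_i / R_i - P_j / R_j) / (P_j / R_j)
rel_per_capita = (R_j / P_j - R_i / P_i) / (R_i / P_i)
print(rel_constituency, rel_per_capita)  # both ~0.6176: the identity holds
```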
All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson's scheme — though to be fair, Jefferson's scheme doesn't seek to minimize inequality and there is no reason to think it should behave the same).
Each of these ideas leads to a different apportionment scheme, and in fact each has a name.
Minimizing differences in constituency size is the Dean method.
Minimizing differences in per capita representation is the Webster method.
Minimizing relative differences between both constituency size and per capita representation is the Hill (or sometimes Huntington-Hill) method.
Further, each of these schemes has been used at some time in US history. Webster's method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.[8] The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.[9]
In 1929 an automatic apportionment act was passed.[10] In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:
The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
The apportionment that would come from the Webster method.
The apportionment that would come from the newly introduced Hill method.
When reading congressional discussion from the time, it is good to know that Webster's method is sometimes called the method of major fractions and Hill's method is sometimes called the method of equal proportions. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill's method was declared to be the recommendation of the Academy.[11] From 1930 on, Hill's method has been used.
Why use the Hill method?
The Hamilton method led to a few paradoxes and highly counterintuitive behavior that many representatives found disagreeable. In 1880, a paradox now called the Alabama paradox was discovered. When deciding on the number of representatives that should be in the House, it was observed that if the House had 299 members, Alabama would have 8 representatives. But if the House had 300 members, Alabama would have 7 representatives — that is, making one more seat available led to Alabama receiving one fewer seat.
The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section The Apportionment Act of 1792).
Another paradox was discovered in 1900, known as the Population paradox. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia's population was larger and growing much more rapidly.
In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population of Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the population of Maine. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901.
Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing faster still, and proportionally Virginia lost more because it was the larger state. But it's still paradoxical for a state to lose a representative to a second state that is both smaller in population and is growing less rapidly each census.[12]
The Hill method can be shown to not suffer from either the Alabama paradox or the Population paradox. That it doesn't suffer from these paradoxical behaviours and that it seeks to minimize a meaningful measure of inequality led to its adoption in the US.[13]
Understanding the modern Hill method in practice
Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to understand how to actually apply the Hill method. Recall that $P_i$ is the population of state i, and $R_i$ is the number of representatives allocated to state i. The Hill method seeks to minimize
$$\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1$$
whenever $P_i/R_i > P_j/R_j$. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.
We can work out a different way of understanding this apportionment that is easier to implement in practice.
Suppose that we have allocated all of the representatives to each state and state j has $R_j$ representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states i and j with $P_i/R_i > P_j/R_j$. (If this isn't possible then the allocation is perfect.)
We can ask if it would be a good idea to move one representative from state j to state i, since state j's constituency sizes are smaller. This can be thought of as working with $R_i' = R_i + 1$ and $R_j' = R_j - 1$. If this transfer lessens the inequality then it should be made — but since we are supposing that the allocation successfully minimizes relative difference in constituency size, we must have that the inequality is at least as large. This necessarily means that $P_j/R_j' > P_i/R_i'$ (since otherwise the relative difference is strictly smaller) and
$$\frac{P_j R_i'}{P_i R_j'} - 1 \geq \frac{P_i R_j}{P_j R_i} - 1$$
(since the relative difference must be at least as large). This is equivalent to
$$\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_i R_j}{P_j R_i}
\iff
\frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.$$
As every variable is positive, we can rewrite this as
$$\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}$$
We've shown that $(2)$ must hold whenever $P_i/R_i > P_j/R_j$ in a system that minimizes relative difference in constituency size. But in fact it must hold for all pairs of states i and j.
Clearly it holds if i = j as the denominator on the left is strictly smaller.
If we are in the case when $P_j/R_j > P_i/R_i$, then we necessarily have the chain $P_j/(R_j - 1) > P_j/R_j > P_i/R_i > P_i/(R_i + 1)$. Multiplying the inner and outer inequalities shows that $(2)$ holds trivially in this case.
This inequality shows that the greatest obstruction to being perfectly apportioned as per Hill's method is the largest fraction
$$ \frac{P_i}{\sqrt{R_i(R_i+1)}} $$
being too large. (Some call this term the Hill rank-index.)
An iterative Hill apportionment
This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for n seats, we can get an apportionment for n + 1 seats by assigning the additional seat to the state i which maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$.
Further, it can be shown that the apportionment produced by Hill's method is unique, except for ties in the Hill rank-index, which are exceedingly rare in practice.
This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
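In the meantime, here is one minimal sketch of the iteration (my own illustration, not the promised code; the function name and dict-based interface are assumptions). It keeps the states in a heap ordered by the Hill rank-index:

```python
import heapq

def hill_apportion(populations, seats=435):
    """Huntington-Hill apportionment: start every state at one seat, then
    repeatedly give the next seat to the state maximizing P / sqrt(n(n+1))."""
    reps = {s: 1 for s in populations}
    # heapq is a min-heap, so store negated rank-indices; with n = 1 the
    # rank-index is P / sqrt(1 * 2).
    heap = [(-p / (1 * 2) ** 0.5, s) for s, p in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, s = heapq.heappop(heap)   # state with the largest rank-index
        reps[s] += 1
        n = reps[s]
        heapq.heappush(heap, (-populations[s] / (n * (n + 1)) ** 0.5, s))
    return reps
```

The heap makes each seat assignment a logarithmic-time operation, though with only 50 states a plain linear scan for the maximum would do just as well.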
Additional notes: Consequences of the 1870 and 1990 Apportionments
The 1870 Apportionment
Officially, Dean's method of apportionment has never been used. But it was perhaps used in 1870 without being described. Officially, Hamilton's method was in place and the size of the House was agreed to be 292. But the actual apportionment that occurred agreed with Dean's method, not Hamilton's method. Specifically, New York and Illinois were each given one fewer seat than Hamilton's method would have given, while New Hampshire and Florida were given one additional seat each.
There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, where millions of people died and millions others moved or were displaced. Animosity and reconstruction were both in full swing. Secondly, the US passed the 14th amendment in 1868, so that suddenly the populations of Southern states grew as former slaves were finally allowed to be counted fully.
One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference — using Dean's method instead of the agreed on Hamilton method, changed the result of the 1876 Presidential election. In this election, Samuel Tilden won New York while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes received one additional electoral vote — and the total electoral voting in the end had Hayes win with 185 votes to Tilden's 184.
There is still one further mitigating factor, however, that causes this to be yet more convoluted. The 1876 election is perhaps the most disputed presidential election. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it's widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes would gain all disputed votes and remove federal troops (which had been propping up reconstructive efforts) from the South. This marked the end of the "Reconstruction" period, and allowed the rise of the Democratic Redeemers (and their rampant black voter disenfranchisement) in the South.
Similar in consequence though not in controversy, the apportionment of 1990 influenced the results of the 2000 presidential election between George W. Bush and Al Gore (as the 2000 census is not complete before the election takes place, the election occurs with the 1990 electoral college sizes). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 had been used, the electoral college would have been tied at 269.[14] If Jefferson's method had been used, then Gore would have won with 271 votes to Bush's 266.
These decisions have far-reaching consequences!
Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
Balinski, Michel L., and H. Peyton Young. "The quota method of apportionment." The American Mathematical Monthly 82.7 (1975): 701-730.
Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
Huntington, E. V. "The Apportionment of Representatives in Congress." Transactions of the American Mathematical Society 30 (1928), 85–110.
Peskin, Allan. "Was there a Compromise of 1877." The Journal of American History 60.1 (1973): 63-75.
US Census Results
US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
George Washington's collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg
[1] I omit that in fact it says that the "respective Number" shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons. This was amended by the Fourteenth Amendment.
[2] As an aside, I did not know that 1 in every 6 people in the US was a slave at the time. I don't know what to make of this fact, other than that it's remarkably high.
[3] In a later note, I will share my data and code I used to compute the statistics in this note.
[4] There is a further point of contention. Much of the Southern population (approximately one third) consisted of slaves. The weighting agreed on in the Constitution counted against the Southern states' representative count — or so these states claimed.
[5] And it is essentially equivalent to the D'Hondt method, proposed by Belgian mathematician D'Hondt in 1878. The presentations of the method are different — D'Hondt's method is presented as an algorithm. We'll return to this later in this note.
[6] Actually, in 1959, both Alaska and Hawaii were temporarily given one House seat as they joined the US. Thus in 1959, the House had 437 members. But in 1960, the total seat count returned to 435. So they weren't totally left out.
[7] Being counted as only 3/5 of a person isn't fair, is it?
[8] There is an anomaly in 1870 that we return to at the very end of this note.
[9] Non-coincidentally, the size of the House was fixed in 1913 at 435, so the political games leading to slightly larger House sizes every apportionment could not occur. Further, there was large-scale immigration and concentration within urban cities between 1910 and 1920. If one used the census results, then many representatives (especially Republican representatives in what was a newly elected Republican government) would have simply had their seats disappear. So they refused to reapportion.
[10] Largely thanks to the efforts of Republican Senator Arthur Vandenberg of Michigan, who repeatedly addressed the nation by radio on the necessity of reapportionment based on the census. He also helped set up the UN and was a major voice in the Republican Party who helped guide the party away from isolationism to internationalism. He was the chair of the Senate Foreign Relations Committee from 1947–1949, during which he supported (Democratic) President Truman's Cold War policies, the Truman Doctrine, the Marshall Plan, and NATO.
[11] Report to the President of the National Academy, 1929. This can be read in the collections of reports of the National Academy of Sciences.
[12] This analysis was taken from Balinski and Young, Fair representation: meeting the ideal of one man, one vote.
[13] But it isn't perfect! Balinski and Young (the same two mathematicians from the previous footnote) also proved that there doesn't exist an apportionment system that both gives results very near the ideal results and which is immune to both the Alabama paradox and the population paradox. Hard choices must be made.
[14] Note there was one Gore abstention in 2000.
Wiener measure
The probability measure $ \mu _ {W} $ on the space $ C[ 0, 1] $ of continuous real-valued functions $ x $ on the interval $ [ 0, 1] $, defined as follows. Let $ 0 < t _ {1} < \dots < t _ {n} \leq 1 $ be an arbitrary sample of points from $ [ 0, 1] $ and let $ A _ {1} \dots A _ {n} $ be Borel sets on the real line. Let $ C( t _ {1} \dots t _ {n} ; A _ {1} \dots A _ {n} ) $ denote the set of functions $ x \in C[ 0, 1] $ for which $ x( t _ {k} ) \in A _ {k} $, $ k = 1 \dots n $. Then
$$ \tag{*} \mu _ {W} ( C ( t _ {1} \dots t _ {n} ; A _ {1} \dots A _ {n} )) = \int\limits _ {A _ {1}} p ( t _ {1}, x _ {1})\, dx _ {1} \int\limits _ {A _ {2}} p ( t _ {2} - t _ {1}, x _ {2} - x _ {1})\, dx _ {2} \dots \int\limits _ {A _ {n}} p ( t _ {n} - t _ {n-1}, x _ {n} - x _ {n-1})\, dx _ {n}, $$
where
$$ p ( t, x) = \frac{1}{\sqrt{2 \pi t}} e ^ {-x ^ {2} /2t}. $$
Using the theorem on the extension of a measure it is possible to define the value of the measure $ \mu _ {W} $ on all Borel sets of $ C[ 0, 1] $ on the basis of equation (*).
The Wiener measure was introduced by N. Wiener [a1] in 1923; it was the first major extension of integration theory beyond a finite-dimensional setting. The construction outlined above extends easily to define Wiener measure $ \mu _ {W} $ on $ C[ 0, \infty ) $. The coordinate process $ x( t) $ is then known as Brownian motion or the Wiener process. Its formal derivative $ dx( t)/dt $ is known as Gaussian white noise.
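As an illustration of how the finite-dimensional distributions (*) are used in practice, the following sketch (my own; the function name and parameters are illustrative) samples a discretized Brownian path by summing independent Gaussian increments:

```python
import numpy as np

def brownian_path(n_steps=1_000, t_max=1.0, seed=0):
    """Sample x(t) at n_steps + 1 equispaced times in [0, t_max]:
    x(0) = 0 and x(t_k) - x(t_{k-1}) ~ N(0, t_k - t_{k-1}), independently,
    exactly as prescribed by the density p(t, x) above."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, t_max, n_steps + 1)
    increments = rng.normal(0.0, np.sqrt(np.diff(t)))
    return t, np.concatenate(([0.0], np.cumsum(increments)))
```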
[a1] N. Wiener, "Differential space" J. Math. & Phys. , 2 (1923) pp. 132–174
[a2] T. Hida, "Brownian motion" , Springer (1980)
[a3] I. Karatzas, S.E. Shreve, "Brownian motion and stochastic calculus" , Springer (1988)
[a4] L. Partzsch, "Vorlesungen zum eindimensionalen Wienerschen Prozess" , Teubner (1984)
[a5] J. Yeh, "Stochastic processes and the Wiener integral" , M. Dekker (1973)
[a6] S. Albeverio, J.E. Fenstad, R. Høegh-Krohn, T. Lindstrøm, "Nonstandard methods in stochastic analysis and mathematical physics" , Acad. Press (1986)
Wiener measure. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Wiener_measure&oldid=49220
This article was adapted from an original article by A.V. Skorokhod (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Springer Graduate Texts in Philosophy
On Numbers, Sets, Structures, and Symmetry
Authors: Kossak, Roman
Presents an introduction to formal mathematical logic and set theory
Presents simple yet nontrivial results in modern model theory
Provides introductory remarks to all results, including a historical background
This book, presented in two parts, offers a slow introduction to mathematical logic, and several basic concepts of model theory, such as first-order definability, types, symmetries, and elementary extensions.
Its first part, Logic, Sets, and Numbers, shows how mathematical logic is used to develop the number structures of classical mathematics. The exposition does not assume any prerequisites; it is rigorous, but as informal as possible. All necessary concepts are introduced exactly as they would be in a course in mathematical logic, but are accompanied by more extensive introductory remarks and examples to motivate formal developments.
The second part, Relations, Structures, Geometry, introduces several basic concepts of model theory, such as first-order definability, types, symmetries, and elementary extensions, and shows how they are used to study and classify mathematical structures. Although more advanced, this second part is accessible to the reader who is either already familiar with basic mathematical logic, or has carefully read the first part of the book. Classical developments in model theory, including the Compactness Theorem and its uses, are discussed. Other topics include tameness, minimality, and order minimality of structures.
The book can be used as an introduction to model theory, but unlike standard texts, it does not require familiarity with abstract algebra. This book will also be of interest to mathematicians who know the technical aspects of the subject, but are not familiar with its history and philosophical background.
Roman Kossak is a Professor of Mathematics at the City University of New York. He does research in model theory of formal arithmetic. He has published 36 research papers and co-authored a monograph on the subject for the Oxford Logic Guides series. His other interests include philosophy of mathematics, phenomenology of perception, and interactions between mathematics, philosophy, and the arts.
"The author has made a significant effort to present the (not so easy) material in an understandable way … . I am sure that readers of this well-written book will experience many such satisfying moments." (Temur Kutsia, Computing Reviews, September 11, 2019)
Table of contents (16 chapters)
First-Order Logic
Logical Seeing
What Is a Number?
Seeing the Number Structures
Points, Lines, and the Structure of $\mathbb{R}$
Definable Elements and Constants
Minimal and Order-Minimal Structures
Geometry of Definable Sets
Where Do Structures Come From?
Elementary Extensions and Symmetries
Tame vs. Wild
First-Order Properties
Symmetries and Logical Visibility One More Time
A comprehensive analysis of the animal carcinogenicity data for glyphosate from chronic exposure rodent carcinogenicity studies
Christopher J. Portier (ORCID: orcid.org/0000-0002-0954-0279)
Since the introduction of glyphosate-tolerant genetically-modified plants, the global use of glyphosate has increased dramatically, making it the most widely used pesticide on the planet. There is considerable controversy concerning the carcinogenicity of glyphosate with scientists and regulatory authorities involved in the review of glyphosate having markedly different opinions. One key aspect of these opinions is the degree to which glyphosate causes cancer in laboratory animals after lifetime exposure. In this review, twenty-one chronic exposure animal carcinogenicity studies of glyphosate are identified from regulatory documents and reviews; 13 studies are of sufficient quality and detail to be reanalyzed in this review using trend tests, historical control tests and pooled analyses. The analyses identify 37 significant tumor findings in these studies and demonstrate consistency across studies in the same sex/species/strain for many of these tumors. Considering analyses of the individual studies, the consistency of the data across studies, the pooled analyses, the historical control data, non-neoplastic lesions, mechanistic evidence and the associated scientific literature, the tumor increases seen in this review are categorized as to the strength of the evidence that glyphosate causes these cancers. The strongest evidence shows that glyphosate causes hemangiosarcomas, kidney tumors and malignant lymphomas in male CD-1 mice, hemangiomas and malignant lymphomas in female CD-1 mice, hemangiomas in female Swiss albino mice, kidney adenomas, liver adenomas, skin keratoacanthomas and skin basal cell tumors in male Sprague-Dawley rats, adrenal cortical carcinomas in female Sprague-Dawley rats and hepatocellular adenomas and skin keratoacanthomas in male Wistar rats.
Glyphosate acid (CAS # 1071-81-6) is a colorless, odorless, crystalline solid. Glyphosate is the term used to describe the salt that is formulated by combining the deprotonated glyphosate acid and a cation (isopropylamine, ammonium, or sodium). Glyphosate was first synthesized in 1950 as a pharmaceutical compound but no pharmaceutical applications were identified. Glyphosate was reformulated in 1970 and tested for its herbicidal activity and was patented for use by Monsanto. The patent has since expired and now glyphosate is produced worldwide by numerous manufacturers [1]. According to the International Agency for Research on Cancer [2], glyphosate is registered in over 130 countries as of 2010. Since the introduction of genetically engineered glyphosate-tolerant crops in 1996, the global use of glyphosate has increased 15-fold making it the most widely used pesticide worldwide [3].
Most countries require a two-year rodent carcinogenicity study (cancer bioassay) be completed and the results reported to the proper authority in order to register a pesticide for use. There have been multiple cancer bioassays conducted to determine if glyphosate is potentially carcinogenic in humans. These have been reviewed by numerous regulatory agencies including the European Food Safety Authority (EFSA) [4], the European Chemicals Agency (EChA) [5], and the US Environmental Protection Agency (EPA) [6]. All of these agencies have concluded that the animal carcinogenicity data do not support a link between glyphosate and cancer. The carcinogenicity of glyphosate was also reviewed by the International Agency for Research on Cancer (IARC) [2] who found that the animal carcinogenicity data was sufficient to establish a causal link between exposure to glyphosate and cancer incidence in animals. The data have also been reviewed by the Joint Meeting of Pesticide Residues (JMPR) [7] concluding "that glyphosate is not carcinogenic in rats but could not exclude the possibility that it is carcinogenic in mice at very high doses."
There is considerable controversy over the interpretation of these cancer bioassays. Numerous reasons have been put forth to explain the differences between IARC and the regulatory agencies on the carcinogenicity of glyphosate in rodents. These differences will be discussed at the end of this report.
This report considers the adequacy of the studies for addressing the carcinogenicity of glyphosate and, where data is available, reanalyzes these data to identify significant increases in tumors in these data sets and compares the results across studies.
Animal carcinogenicity data
The animal carcinogenicity data derives from multiple sources including the published literature, the EPA review [6], the Addendum to the EFSA review prepared by the German Institute for Risk Analysis [8], the JMPR review [7], Additional file 1 from a review of the carcinogenicity of glyphosate by a panel of scientists on behalf of industry [9], and the full laboratory reports (with redactions) for some of these studies following a recent court decision [10] (usually these full laboratory reports are not available to the public). In some cases, only limited data is reported for a given study making comparisons to other studies difficult. Only data from the core lifetime studies are included in the evaluation; data from interim sacrifices are not included.
In total, there are 13 chronic exposure animal toxicology and carcinogenicity studies of glyphosate in rats and 8 in mice (Tables 1 and 2). The full descriptions of most studies are available in either the published document in the literature, the regulatory reports, or, where available, the full laboratory reports. Table 1 lists the 13 chronic exposure toxicity and carcinogenicity studies considered acceptable for this evaluation and provides a brief description of the species, strain, exposure levels, group sizes, chemical purity and comments on survival and weight changes seen in the study. Twelve of these studies were conducted under the appropriate regulatory guidelines at the time they were conducted. A more complete description for each of these studies including the laboratory conducting the study, the substrain of the animal used (if given), a description of pathology protocols used, a list of tissues evaluated and a complete list of all tumors analyzed in this reanalysis is provided in the Additional file 1. Table 2 identifies 8 chronic exposure toxicity and carcinogenicity studies that are not included in this evaluation and the reasons for their exclusion such as falsified data, lack of tumor data, or chemical purity.
Table 1 Long-term chronic dietary exposure toxicity and carcinogenicity studies of glyphosate analyzed in this evaluation. Additional information on these studies is available in the Additional file 1
Table 2 Long-term chronic dietary exposure toxicity and carcinogenicity studies of glyphosate excluded from this evaluation
For 12 of these studies, the full study report is available. For study E (Takahashi [15]), a full study report is not available. JMPR [7] provided the only review of this study and only reported on kidney tumors in males and malignant lymphomas in females. This study is included in this review for only kidney tumors in males and malignant lymphomas in females.
Two additional chronic exposure studies of glyphosate formulations are included in this review as additional support for the carcinogenicity of glyphosate. These studies are not reanalyzed for this evaluation; the evaluations of the original authors are described in the Results section.
George et al. [35] exposed groups of 20 male Swiss Albino mice to a glyphosate formulation (Roundup Original, 360 g/L glyphosate) at a dose of 25 mg/kg (glyphosate equivalent dose) topically three times per week, topically once followed one week later by 12-o-tetradecanoylphorbol-13-acetate (TPA) three times per week, topically three times per week for three weeks followed one week later by TPA three times per week, or a single topical application of 7,12-dimethyl-benz[a]anthracene (DMBA) followed one week later by topical application of glyphosate three times per week for a total period of 32 weeks. Appropriate untreated, DMBA-treated, and TPA-treated controls were included.
Seralini et al. [36] conducted a 24-month chronic toxicity study of Roundup (GT Plus, 450 g glyphosate/L, EU approval 2,020,448) in groups of 10 male and female Sprague-Dawley rats with drinking-water exposures of 0, 1.11 × 10^-8, 0.09, and 0.5% Roundup (males and females). This study noted an increase in mammary tumors. However, given the small sample sizes employed and the availability of more detailed studies, this study will be included in this review only as supporting information.
Individual tumor counts for the individual studies are reanalyzed using the exact form of the Cochran-Armitage (C-A) linear trend test in proportions [37]. Reanalyses are conducted on all primary tumors where there are at least 3 tumors in all of the animals in a sex/species/strain combination (regardless of dosing). In addition, any tumor where a positive finding (p ≤ 0.05, one-sided C-A trend test) is seen in at least one study is also evaluated, regardless of number of animals with the tumor, in all studies of the same sex/species/strain. When adenomas and carcinomas are seen in the same tissue, a combined analysis of adenomas and carcinomas is also conducted. The minimum of three tumors is used since the exact version of the C-A test cannot detect tumors in studies of this size with less than at least 3 tumors. Additional file 2: Tables S1–S13 provide the tumor count data for all tumors with a significant trend test (p ≤ 0.05) in at least one study of the same sex/species/strain along with the doses used (mg/kg/day) and the number of animals examined microscopically in each group. Pairwise comparisons between individual exposed groups and control are conducted using Fisher's exact test [37] and are provided for comparison with other reviews.
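For readers who want to reproduce this kind of analysis, the exact conditional trend test can be approximated by permutation; the sketch below is my own illustration (not the MATLAB code used for this paper), with hypothetical dose groups in the docstring. It Monte-Carlo samples the conditional null distribution of the dose-weighted tumor count:

```python
import numpy as np

def ca_trend_pvalue(doses, tumors, examined, n_perm=100_000, seed=1):
    """One-sided Cochran-Armitage-type trend test via permutation.

    doses:    dose score per group, e.g. [0, 100, 300, 1000] (mg/kg/day)
    tumors:   tumor-bearing animals per group
    examined: animals examined microscopically per group
    """
    rng = np.random.default_rng(seed)
    # One dose score and one 0/1 tumor outcome per animal.
    scores = np.repeat(np.asarray(doses, float), examined)
    outcomes = np.concatenate(
        [np.r_[np.ones(t), np.zeros(n - t)] for t, n in zip(tumors, examined)])
    observed = scores @ outcomes  # dose-weighted tumor count
    hits = sum((scores @ rng.permutation(outcomes)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)  # small-sample-corrected p-value
```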
The C-A trend test belongs to the general class of logistic regression models [37]. To evaluate the consistency of a tumor finding across multiple studies using the same sex-species-strain combinations, logistic regression models with individual background responses and dose trends are fit to the pooled data using maximum likelihood estimation. In mathematical terms, the regression model being used is:
$$ p=\frac{e^{\alpha_i+\beta \cdot dose}}{1+{e}^{\alpha_i+\beta \cdot dose}} $$
where p is the probability of having a tumor, αi is a parameter associated with the background tumor response (dose = 0) for study i and β is a parameter associated with a change in the tumor response per unit dose (slope). A common positive trend is seen in the pooled analysis when the null hypothesis that the slope is 0 (H0: β =0) is rejected (statistical p-value ≤0.05 using a likelihood-ratio test) in favor of the alternative that the slope is greater than 0 (HA: β > 0). The heterogeneity of slopes (all studies have different slopes vs all studies have a common slope) is tested using the model:
$$ p=\frac{e^{\alpha_i+{\beta}_i\cdot dose}}{1+{e}^{\alpha_i+{\beta}_i\cdot dose}} $$
where p and αi are as in equation (1) and βi is a parameter associated with the slope for study i. Heterogeneity is seen in the pooled analysis when the null hypothesis that the slopes are equal (H0: β1 = β2 = β3 =…) is rejected (statistical p-value ≤0.05 using a likelihood-ratio test) in favor of the alternative that at least one of the slopes is different.
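The pooled fit and its likelihood-ratio test can be sketched as follows (again my own illustration, not the MATLAB code used here; the grouped-data layout and function name are assumptions):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def pooled_trend_test(study, dose, tumors, examined):
    """Fit model (1): per-study intercepts alpha_i, one common slope beta.
    Inputs are equal-length arrays with one entry per dose group.
    Returns the one-sided likelihood-ratio p-value for beta > 0."""
    study, dose = np.asarray(study), np.asarray(dose, float)
    tumors, examined = np.asarray(tumors), np.asarray(examined)
    endog = np.column_stack([tumors, examined - tumors])  # (successes, failures)
    dummies = (study[:, None] == np.unique(study)[None, :]).astype(float)
    full = sm.GLM(endog, np.column_stack([dummies, dose]),
                  family=sm.families.Binomial()).fit()
    null = sm.GLM(endog, dummies, family=sm.families.Binomial()).fit()
    stat = 2 * (full.llf - null.llf)
    p_two_sided = chi2.sf(stat, df=1)
    return p_two_sided / 2 if full.params[-1] > 0 else 1 - p_two_sided / 2
```

The heterogeneity test of model (2) is analogous: replace the single dose column with one dose column per study and compare that fit to the common-slope fit, with degrees of freedom equal to the number of studies minus one.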
For CD-1 mice, there are studies of 18 months (3) and 24 months (2) so analyses are conducted separately for 18 month studies and 24 month studies and then a combined analysis is performed. In SD rats, one study had 26 months of exposure and the remaining 3 had 24 months of exposure so similar grouped analyses are conducted. Only the combined analysis over all study durations is provided in Tables 3, 4 and 5; the sub-analyses by study duration are discussed in the text.
Table 3 P-values for the Cochran-Armitage trend test and pooled logistic regression analysis for tumors with at least one significant trend test (p ≤ 0.05) or Fisher's exact test (p ≤ 0.05) in male and female CD-1 mice
Table 4 P-values for the Cochran-Armitage trend test and pooled logistic regression analysis for tumors with at least one significant trend test or Fisher's exact test (p ≤ 0.05) in male and female Sprague-Dawley rats
Table 5 P-values for the Cochran-Armitage trend test and pooled logistic regression analysis for tumors with at least one significant trend test or Fisher's exact test (p ≤ 0.05) in male and female Wistar rats
The same methods of analysis are used to evaluate the incidence of non-cancerous toxicity in tissues where positive cancer findings are seen. These findings are discussed in the text but not shown in the tables.
In some cases, tumors that rarely (< 1% in untreated animals) appear in laboratory animals can be increased but do not show statistical significance. Most guidelines call for the use of historical control data to evaluate these cases to assess the significance of the findings [38,39,40]. For these evaluations, the test proposed by Tarone [41] is used with an appropriate historical control group as discussed in the text.
All analyses were done using MATLAB, version R2017b.
Thirteen chronic exposure animal carcinogenicity studies are reviewed and reanalyzed for this evaluation. The summary of all tumor findings with a Cochran-Armitage (C-A) trend test (one-sided) of p ≤ 0.05 in at least one study (by sex/species/strain) from the reanalysis of these studies are provided in Tables 3, 4 and 5 (columns under the heading "Individual study p-values for trend"). In addition, the p-values for trend (under the heading "Common Trend") and heterogeneity (under the heading "Heterogeneity Test") from the analysis of the pooled data are also provided in Tables 3, 4 and 5. The individual tumor counts for each individual study are shown in Additional file 2: Tables S1–S13. In addition, a few tumors where there is a significant (p ≤ 0.05) pairwise comparison by Fishers exact test in at least one study but no significant trend tests are also summarized in Tables 3, 4 and 5; this is for comparison with regulatory reviews that generally used only pairwise comparisons.
The purpose of this analysis is to understand the tumorogenicity of glyphosate across all studies and not one study at a time. Thus, rather than presenting the results of each study separately, this review focuses on the tumors that are seen as positive in any one study and compares the findings across all studies of the same tumor in the same sex/species/strain combination.
Reanalysis of the data from CD-1 Mice
Table 3 summarizes the significant results seen from five studies conducted in CD-1 mice [11,12,13,14,15]. For a complete list of all the tumors evaluated, see the Additional file 1. For simplicity, these studies will be referred to as studies A-E as noted in Table 1. Studies A and B are 24-month studies and studies C, D and E are 18-month studies. There are a total of 12 statistically significant tumor findings (p ≤ 0.05) against the concurrent controls in these studies. In addition, there are 5 significant increases in tumors seen for rare tumors using historical controls.
Significant trends for kidney adenomas (p = 0.019) and adenomas and carcinomas combined (p = 0.005) are seen in male mice in study E, marginal trends are seen in study A (p = 0.065) and study C (0.062) for combined adenomas and carcinomas with no increase in the remaining two studies. Kidney tumors are rare in CD-1 mice and it would be appropriate to compare the marginal responses against historical controls. Using historical control data for kidney tumors from the EPA archives [42] on study A results in no significant association with adenomas (p = 0.138) but significant increases in carcinomas (p < 0.001) and adenomas and carcinomas combined (p = 0.008) by Tarone's test. Using historical controls from 1990 to 1995 from the literature [43] results in a significant trend (p = 0.009) for kidney adenomas in Study C. The pooled analysis of the data shows a significant common trend for adenomas, carcinomas and the combined tumors with no indication of heterogeneity. Because of toxicity in the highest dose of study E, a second pooled analysis is done dropping this dose and yields a significant increase for adenomas (p = 0.038) and carcinomas and adenomas combined (p = 0.011) and a marginal increase for carcinomas (p = 0.077) with no heterogeneity (not shown). Data on the incidence of kidney toxicity in these studies is also reanalyzed. Study A has a significant increase in chronic interstitial nephritis (p = 0.004) and a non-significant increase in thickening of the glomerular and/or tubular basal membranes (p = 0.148) with a significant pairwise increase at the mid-dose (p = 0.036). Study B has an increase in tubular dilatation (p = 0.026) but no change in tubular hypertrophy (p = 0.642) or focal tubular atrophy (p = 0.248). Study C has no change in tubular dilatation (p = 0.913) but does show an increase in tubular atrophy (p = 0.017) and tubular vacuolation (p = 0.015). Study D has no changes in vacuolation (p = 0.830), dilatation (p = 0.831), or chronic nephropathy (p = 0.494). Study E has an increase in kidney tubular dilation (p < 0.001), tubular epithelial cell hypertrophy (p < 0.001), basophilic tubules (p = 0.009) and tubular degeneration and/or necrosis (p = 0.008).
Malignant lymphomas are significant in studies C (p = 0.016) and D (p = 0.007) and marginally significant in study B (p = 0.087) in male mice. Malignant lymphomas are not rare in these mice so no historical control analysis is conducted. The pooled analysis for a common trend is marginally significant (p = 0.093) and the studies are heterogeneous in slope because of the markedly different response in study A. The pooled analysis of the 18 month studies is highly significant (p = 0.005) but not significant for the 24 month studies (p = 0.686). Toxicity in tissues relating to the lymphatic system is reanalyzed. Study B shows a significant increase in thymus weight in the two highest exposure groups (p < 0.01 and p < 0.05, reported in [12]) in males and a non-significant (p not reported) increase in females. Studies B and C show a significant increase (trend test) in the number of males with enlarged mesenteric lymph nodes (p = 0.024 and p = 0.002 respectively). Study B shows enlarged spleens (p = 0.031) in males whereas C did not. Study C also has an increase in enlarged cervical lymph nodes (p = 0.046) and other lymph nodes (p = 0.047). Study A did not report macroscopic findings, study D has no enlarged lymphoreticular tissues and the data are not available from study E.
Hemangiosarcomas are statistically significant in study B (p = 0.004) and marginally significant in study C (p = 0.062) in male mice. Hemangiosarcomas are very rare in 18-month animals with no tumors appearing in 26 historical control data sets and moderately rare (2.1%) in 24-month studies [43]. Using the 18-month historical control data [43] results in a significant finding for study C (p < 0.001). The pooled analysis for a common trend is significant (p = 0.03) but the studies are heterogeneous in slope.
Although there is a single positive finding in the lung in male mice with a significant increase in carcinomas in study D (p = 0.028), all of the other analyses in the lung are not statistically significant including the pooled analyses. There are no dose-related non-neoplastic findings in the lungs of these animals.
In female mice, hemangiomas are significantly increased in study C (p = 0.002) and the pooled analyses is also significant (p = 0.031) with no evidence of heterogeneity. Study C has a 10% response at the highest dose whereas the other studies have much lower response resulting in the positive pooled association.
Harderian gland adenomas are significantly increased in study C (p = 0.04) but are not significant for studies A and D for adenomas, carcinomas and their combination. The pooled analyses fails to demonstrate a consistent increase. There are no non-neoplastic findings in the Harderian glands.
There is a significant increase in adenomas and carcinomas combined in the lung for female mice in study B (p = 0.048). None of the pooled analyses or any analyses in the remaining studies are significantly increased in the lung. There are no non-neoplastic findings in the lungs of these animals.
Finally, malignant lymphomas are significantly increased in study E (p = 0.050) and marginally increased in study A (p = 0.070) for females. The remaining studies show trends toward increasing risk with increasing exposure and when combined, the five mice studies show a significant increase in malignant lymphomas in female mice (p = 0.012) and no heterogeneity. The pooled analysis remains significant (p = 0.050) if the high dose group from study E is removed due to high toxicity. There are no increases in enlargement of lymphoreticular tissues in female mice in studies B, C and D and no data available for studies A and E.
Reanalysis of the data from Swiss albino mice
There is a single study in Swiss albino mice (study F). This study shows a significant increase in hemangiomas in female mice (p = 0.004) and marginal increases for malignant lymphomas in males (p = 0.064) and females (p = 0.070) and kidney adenomas in males (p = 0.090) (Additional file 2: Table S6). There are no kidney carcinomas in the males. There are no non-neoplastic changes in the kidney. Study F shows a significant increase in the incidence of thymus enlargement in males (p = 0.034) and a marginal increase in enlargement of mesenteric lymph nodes in females (p = 0.053) but not in males. For a complete list of all the tumors evaluated, see the Additional file 1.
Reanalysis of the data from SD rats
Table 4 summarizes the significant results seen from four studies conducted in SD rats [17,18,19,20]. For a complete list of all the tumors evaluated, see the Additional file 1. Study G is a 26-month study and studies H, I and J are 24-month studies. There are a total of 11 statistically significant tumor findings (p ≤ 0.05) against the concurrent controls in these studies and three significant finding against historical controls.
Study G showed a significant increase in testes interstitial-cell tumors (p = 0.009) but no increases in any other study and the pooled analysis for a common trend is also non-significant. There are no non-neoplastic lesions seen in the testis in studies G, H and J. Study I saw a marginal increase (p = 0.092) in interstitial cell hyperplasia of the testis.
Pancreas islet-cell tumors, thyroid c-cell tumors and thyroid follicular-cell adenomas and carcinomas in males are presented in Table 4. None of these studies demonstrate a significant trend in any of these tumors nor do they show a significant trend in the pooled analyses. These tumors are included here for completeness because they have been mentioned in some of the regulatory reviews of these data due to increases in at least one dose group over controls using Fisher's exact test. Study G shows an increase in pancreatic islet cell adenomas in males at the low dose and study H shows increases in males at both the low dose and the high dose. Historical control data on pancreas islet-cell tumors in study H are provided in an EPA memo [44] and Tarone's historical control test yields a highly significant response for this study (p = 0.007) with all of the treated groups showing greater tumor response than any of the controls. There are no dose-related increases in islet cell non-neoplastic findings in any of the four studies in male Sprague-Dawley rats.
Study H saw an increase in males of thyroid C-cell adenomas at the mid and high doses and an increase in adenomas and carcinomas combined at all three doses tested. However, the control response in study H for these tumors is quite low with no tumors in 50 animals whereas the historical rate of tumors in this strain of rats is 11.3% in males [45]. Reanalyzing data on non-neoplastic toxicity, Study I has a significant increase in focal C-cell hyperplasia (p = 0.048) and no other studies have significant increases in C-cell hyperplasia.
Study I shows a marginally significant trend in males of thyroid follicular cell adenomas (p = 0.067) and adenomas and carcinomas combined (p = 0.099). No non-neoplastic endpoints show dose-related changes for thyroid follicular cells in any study.
Hepatocellular adenomas (p = 0.015) and adenomas and carcinomas combined (p = 0.050) are increased in males in study I but not in any of the other studies. The increases in adenomas remained significant (p = 0.029) in the pooled analysis since most studies showed a very slight increase in these tumors, but the pooled analysis for a common trend in adenomas and carcinomas is not significant (p = 0.144). After reanalysis of these studies for non-neoplastic toxicity, study G shows a significant increase in basophilic foci (p = 0.029), study H did not report on these and studies I and J show non-significant trends with the pooled analysis for a common trend not significant (p = 0.358). Study G has an increase in clear-cell foci (p = 0.033), study I has a marginal increase in clear-cell foci (p = 0.057) and study J is non-significant with the pooled analysis showing a marginally significant trend (p = 0.073).
Kidney adenomas are increased in males (p = 0.004) in study J but not in any other study. The pooled analysis for a common trend is significant (p = 0.039) with significant heterogeneity because of the high response in study J and the generally low response in the remaining three studies. The only non-neoplastic pathology in the kidney is an increase in lymphocytic infiltration (p = 0.037) in study G.
No skin keratoacanthomas are seen in males in study G, but these tumors are significantly increased in the other three studies (p = 0.042, 0.047 and 0.029) and are highly significant in the pooled analysis for a common trend (p < 0.001), with no apparent heterogeneity. After reanalysis of non-neoplastic toxicity, focal hyperkeratosis is increased in both sexes in study J (males: p ≤ 0.001; females: p = 0.015) and shows a significant decrease in males in study I (p = 0.004).
Skin basal cell tumors in males are significantly increased in study J (p = 0.004) and in the pooled analysis for a common trend (p < 0.001) but not in any of the other three studies. The pooled analysis demonstrates significant heterogeneity (p = 0.009), driven by the responses at lower doses in studies G and H.
In females, thyroid C-cell adenomas are significantly increased in study H (p = 0.049), carcinomas are significantly increased in study G (p = 0.003) and adenomas and carcinomas combined are marginally significantly increased in studies G (p = 0.072) and H (p = 0.052). The authors of study G provided historical control data from 9 control groups for carcinomas and for adenomas and carcinomas combined; Tarone's test yields p < 0.001 for the carcinomas and p = 0.037 for the combined tumors. None of the pooled analyses are statistically significant. There are no non-neoplastic changes in thyroid C-cells in females in these studies.
Adrenal cortical carcinomas are increased in females in study H (p = 0.015) and adenomas and carcinomas combined are marginally increased (p = 0.090) in that same study. The pooled analysis for a common trend in the cortical carcinomas is significantly increased (p = 0.031) with little indication of heterogeneity, but the pooled analysis of the combined adenomas and carcinomas is not significantly increased. After reanalysis of non-neoplastic toxicity, focal cortical hypertrophy shows a dose-related significant increase in studies G (p = 0.048) and I (p = 0.027); study H did not report hypertrophy independent of hyperplasia (the combined counts show no dose-response increase) and study J did not report hypertrophy. There are no other dose-related increases in injury to adrenal cortical tissue in any of the studies.
Reanalysis of the data from Wistar rats
Table 5 summarizes the significant results seen from three studies conducted in Wistar rats [21,22,23]. For a complete list of all the tumors evaluated, see the Additional file 1. All three studies are 24-month studies. There are a total of 9 statistically significant tumor findings (p ≤ 0.05) against the concurrent controls in these studies.
Hepatocellular adenomas (p = 0.008) and combined adenomas and carcinomas (p = 0.008) in males are increased in study L but not in any other study (note, there are no carcinomas seen in this study, so these analyses are identical). The pooled analyses for a common trend show an increase for adenomas (p = 0.048), no increase for carcinomas (p = 0.492) and an increase for combined adenomas and carcinomas (p = 0.029), with no indication of heterogeneity across the studies. Reanalysis of the non-neoplastic toxicity data shows a significant decrease in basophilic-cell foci in study K (p = 0.023), no foci at all in study L and no trend in study M. Clear-cell foci are not affected by glyphosate in male Wistar rats.
Pituitary adenomas are increased in both males (p = 0.045) and females (p = 0.014) in study M but not in the remaining studies. Carcinomas show no increase in any study; the combined adenomas and carcinomas are marginally significant in males (p = 0.059) and significant in females (p = 0.017) in study M but not in the others. None of the pooled analyses for a common trend are statistically significant, although the pooled trend in males is marginally significant for both adenomas (p = 0.057) and combined adenomas and carcinomas (p = 0.073). There are no dose-dependent increases in any non-neoplastic lesion in male or female Wistar rats in any of the three studies.
Skin keratoacanthomas are significantly increased in males in study M (p = 0.030) and in the pooled analysis for a common trend (p = 0.032) with no heterogeneity. There are no keratoacanthomas in study K and a slight increase with dose in study L. No non-neoplastic pathologies are significantly linked to dose in the skin.
Adrenal pheochromocytomas are increased in study K (p = 0.048) but not in the other studies or in the pooled analysis. There are no significant trends in non-neoplastic findings in any of the three studies.
Mammary gland adenomas (p = 0.062), adenocarcinomas (p = 0.042) and their combination (p = 0.007) are all increased in study M, but not in the remaining studies. There is a marginal increase in adenocarcinomas in the pooled analysis for a common trend (p = 0.071) but not for the combined tumors (p = 0.110). The data suggest heterogeneity for all three endpoints. Studies L and M also have fibroadenomas as well as adenomas and adenocarcinomas. Combining fibroadenomas, adenomas and adenocarcinomas results in no significant findings in any study or in the pooled analysis for this combination. Hyperplasia in mammary tissue is examined in all three studies with no significant findings in any study.
Related findings from the peer-reviewed literature
There are numerous studies in the literature that relate to the cancer findings shown in Tables 3, 4 and 5. Some of the studies are done using pure glyphosate, but many use a GBH and present the results in glyphosate-equivalent doses. GBHs contain adjuvants, some of which are also likely to be highly toxic. In what follows, these related studies are discussed and care is taken to note whether the exposure is to glyphosate or a GBH. Caution should be used in interpreting the results using the GBHs since, in most cases, it is not clear whether the resulting toxicity is due to the glyphosate in the GBH or to the adjuvant(s).
Increases in kidney adenomas and carcinomas (combined) are seen in male CD-1 mice and increases in adenomas are seen in Swiss albino mice and SD rats in the reanalysis in this review. A number of short-term toxicity studies have demonstrated damage to the kidneys in laboratory animals from exposure to glyphosate or GBHs. Turkmen et al. [46] saw significant (p < 0.05) increases in malondialdehyde (MDA) levels and decreases in glutathione (GSH) levels in male Wistar albino rats exposed to the GBH Knockdown 48SL. They also saw degeneration in the tubular epithelial cells and expansion and vacuolar degeneration in the Bowman's capsule of the glomerulus (p < 0.05 for both). Dedeke et al. [47] also saw significant changes in MDA, GSH and several other kidney biomarkers from exposure to the GBH Roundup in male albino rats. They also studied glyphosate alone at doses equal to those of the GBH and saw smaller, but still significant, increases in MDA and GSH, but not in the other biomarkers. In addition, they found that the amount of glyphosate in kidney tissue was substantially higher from exposure to the GBH than from exposure to glyphosate alone. Tang et al. [48] saw proximal and distal tubular necrosis (p < 0.01), glomerular toxicity (p < 0.01) and a reduction in kidney weight (p < 0.05) in male SD rats exposed to glyphosate. They used a histopathological score and saw significant changes (p < 0.01) even down to a dose of 5 mg/kg body weight. Hamdaoui et al. [49] saw numerous histological changes and changes in urine and plasma associated with renal dysfunction in female Wistar rats exposed to the GBH Kalach 360 SL. Kidney damage included fragmented glomeruli, necrotic epithelial cells, tubular dilatation, inflammation, and proximal and distal tubular necrosis. Tizhe et al. [50] also saw glomerular degeneration, mononuclear cell infiltration and tubular necrosis in male and female Wistar rats exposed to the GBH Bushfire. Cavusoglu et al. [51] saw similar changes in blood chemistry and kidney pathology in male albino mice exposed to the GBH Roundup Ultra-Max. Wang et al. [52] saw kidney damage to tubular cells in Vk*MYC mice exposed to glyphosate in water.
In humans, GBHs are suspected to be involved in chronic kidney disease of unknown etiology (CKDu) in Sri Lanka, Mexico, Nicaragua, El Salvador and India [53,54,55]. Finally, the English abstract of a Chinese article by Zhang et al. [56] describes significant increases (p < 0.05) in abnormal hepatorenal function in workers occupationally exposed to glyphosate at 5 glyphosate-producing factories.
Dose-related increases in malignant lymphomas are seen in male and female CD-1 mice and marginal increases are seen in male and female Swiss albino mice in the reanalysis presented here. Wang et al. [52] exposed male and female Vk*MYC mice from the C57Bl/6 genetic background to glyphosate (purity not provided) at an exposure of 1 g/L in drinking water for 72 weeks (approximately 18 months) with an appropriate control. In addition, using the same mice, 7-day exposures were given at doses of 0, 1, 5, 10 and 30 g/L of glyphosate (n = 5 per group). Glyphosate induced splenomegaly in both wild type (WT) and Vk*MYC mice. Both WT and Vk*MYC mice demonstrated a significant increase (p < 0.05) in IgG levels when compared to controls. Treated Vk*MYC mice had a clear M-spike (an indicator of multiple myeloma - MM), WT mice had a weaker M-spike and no M-spike was detected in untreated animals regardless of genetics. In addition, there were multiple hematological abnormalities in treated versus untreated mice that were consistent with MM. Activation-induced cytidine deaminase (AID, a marker of monoclonal gammopathy of undetermined significance induction, a precursor of MM) was upregulated in both bone marrow and spleen of both Vk*MYC and WT mice in the 72-week study. The same upregulation in the spleen and bone marrow was seen in the 7-day exposure animals in a dose-dependent fashion. A smaller dose-dependent increase was seen in lymph nodes. This upregulation of AID supports an AID-mediated mutational mechanism for the induction of MM and malignant lymphoma in these mice.
In humans, GBHs have been shown to increase the risk ratios for non-Hodgkin lymphoma (NHL) in several meta-analyses [2, 57,58,59]. For over 30 years, mouse models have been studied and evaluated as surrogates for NHL [60,61,62,63,64]. Classification systems for humans and mice indicate a strong similarity between malignant lymphomas in mice and NHL in humans.
Skin keratoacanthomas are increased by glyphosate in male SD rats and male Wistar rats. Skin basal-cell tumors are also increased in male SD rats in the reanalysis in this review. George et al. [35] exposed Swiss albino mice to a glyphosate formulation (Roundup Original, 36 g/L glyphosate) in a typical skin-painting initiation-promotion study using 12-o-tetradecanoylphorbol-13-acetate (TPA) as a promoter and 7,12-dimethyl-benz[a]anthracene (DMBA) as an initiator. The group exposed to DMBA followed by glyphosate demonstrated a significant increase (p < 0.05) in the number of animals with tumors (40% of the treated animals versus no tumors in the controls), indicating the GBH has a promotional effect on carcinogenesis in the two-stage model in skin. Several in-vitro studies using human skin cells [65,66,67] have shown an increase in oxidative stress following exposure to glyphosate.
This review shows hepatocellular adenomas are increased by exposure to glyphosate in male SD rats and Wistar rats. Glyphosate has been shown to affect energy metabolism of mitochondria [68,69,70,71] and AST, ALT, and LDH [72], but not peroxisome proliferation or hypolipidemia [73], in the livers of Wistar rats. Transcriptome analyses of liver tissue in Sprague-Dawley rats chronically exposed to the GBH Roundup Grand Travaux Plus suggest liver tissue damage is occurring [74]. Glyphosate and GBHs also seem to induce oxidative stress in the livers of several rat strains [48, 75, 76].
Adrenal cortical carcinomas are increased in female Sprague-Dawley rats in the reanalysis in this review. There is also a suggestion of an increase in adrenal pheochromocytomas in male Wistar rats and of pituitary adenomas in male and female Wistar rats. Owagboriaye et al. [77] saw a significant, dose-dependent increase in the adrenal hormones aldosterone and corticosterone following exposure to a GBH (Roundup Original) in male albino rats, but not following exposure to equivalent doses of glyphosate (purity not given). Significant changes in adrenocorticotropic hormone were also seen for the GBH but not for glyphosate. In contrast, Pandey and Rudraiah [78] saw a significant reduction in adrenocorticotropic hormone levels at similar doses in Wistar rats. Romano et al. (2010) saw a reduction in adrenal weights from exposure to the GBH Roundup Transorb in newly-weaned male Wistar rats but saw no differences in corticosterone levels except a rather large, non-significant increase in the lowest exposure group. Changes in these and other hormones in these three papers suggest GBHs could have an impact on the hypothalamic-pituitary-adrenal axis that, after lifetime exposure, could induce cancers in the adrenal cortex and/or pituitary.
This reanalysis shows an inconsistent effect of glyphosate on the rates of mammary gland adenomas, carcinomas and combined adenomas and carcinomas in female Wistar rats but not in SD rats. Seralini et al. (2014) [36] saw an increase in mammary tumors in female SD rats exposed to the GBH GT Plus with associated hypertrophies and hyperplasia. Glyphosate and GBHs have also been shown to disrupt estrogen receptor alpha in rats [79] and to alter cellular replication and genotoxicity in estrogen-sensitive cell lines [80,81,82,83,84,85,86].
The longest study in male Sprague-Dawley rats showed an increase in testicular interstitial cell tumors after reanalysis. Several studies have seen changes in aromatase, testosterone and/or estrogen levels in male rats exposed to glyphosate or GBHs [84, 87,88,89,90,91,92,93].
The reanalysis in this review shows an inconsistent increase in thyroid C-cell adenomas and/or carcinomas in male and female SD rats and in thyroid follicular-cell adenomas in male SD rats. De Souza et al. [94] exposed male Wistar rats to the GBH Roundup Transorb from gestational day 18 to postnatal day 5 and examined the animals for thyroid hormone effects at postnatal day 90. They saw dose-dependent decreases in thyroid stimulating hormone but no changes in circulating triiodothyronine or thyroxine. Genomic analysis suggested that genes involved in thyroid hormone metabolism and transport were probably involved in these alterations. In humans, Samsel and Seneff [95] hypothesized that glyphosate intake could interfere with selenium uptake, impacting thyroid hormone synthesis and increasing thyroid cancer risks. Using data from the Agricultural Health Study, Shrestha et al. [96] saw an association between ever/never use of GBHs by farmworkers and hypothyroidism (OR = 1.28, 95% CI 1.07–1.52), and for the two lowest categories of intensity of use, but not for the highest category.
False positive errors
The evaluation of any one animal cancer study involves a large number of statistical tests that could lead to false positives. To evaluate this issue, the probability that all of the results in any sex/species/strain could be due to false positives is calculated. Overall, a total of 496 evaluations are done for these 13 studies, including the few evaluations done against historical controls. There are 41 evaluations at 37 tumor/site combinations with a trend-test p ≤ 0.05; the probability that all of these are due to false positives is 0.001. Similarly, looking at the evaluations resulting in p ≤ 0.01, the probability that all of the findings are due to false positives is < 0.001. The strongest evidence is for male CD-1 mice: the probabilities of seeing 11 positive findings at p ≤ 0.05 and 8 at p ≤ 0.01 are both below 0.001 (see Additional file 2: Table S14).
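The arithmetic behind this false-positive argument is a simple binomial tail calculation. The sketch below (Python) reproduces the order of magnitude of the figure quoted above for the 496 evaluations; it assumes the tests are independent, which tumor endpoints within a study are not, so it illustrates the idea rather than the paper's exact computation.

    from math import comb

    def binom_sf(k: int, n: int, p: float) -> float:
        """P(X >= k) for X ~ Binomial(n, p): the chance of at least k
        significant results among n independent tests at level p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # 496 evaluations, 41 with trend-test p <= 0.05 (figures from the text above).
    # Expected by chance alone: 496 * 0.05 = 24.8 significant results.
    print(f"P(>=41 of 496 at alpha 0.05) = {binom_sf(41, 496, 0.05):.4f}")  # ~0.001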
Comparison to regulatory reviews
In their final report on the carcinogenicity of glyphosate, the EPA concluded that "Based on the weight-of-evidence evaluations, the agency has concluded that none of the tumors evaluated in individual rat and mouse carcinogenicity studies are treatment-related due to lack of pairwise statistical significance, lack of a monotonic dose response, absence of preneoplastic or related non-neoplastic lesions, no evidence of tumor progression, and/or historical control information (when available). Tumors seen in individual rat and mouse studies were also not reproduced in other studies, including those conducted in the same animal species and strain at similar or higher doses." EFSA concluded "No evidence of carcinogenicity was confirmed by the large majority of the experts (with the exception of one minority view) in either rats or mice due to a lack of statistical significance in pair-wise comparison tests, lack of consistency in multiple animal studies and slightly increased incidences only at dose levels at or above the limit dose/MTD, lack of pre-neoplastic lesions and/or being within historical control range. The statistical significance found in trend analysis (but not in pair-wise comparison) per se was balanced against the former considerations." Other regulatory agencies used similar wording to describe their findings. Each of the issues cited in these summaries is discussed below.
Both EPA and EFSA describe a lack of significant pairwise comparisons as one reason for discarding positive findings from positive trend analyses. This is in direct conflict with their guidelines [38, 39], which make it clear that a positive finding in either pairwise comparisons or trend tests should be sufficient to rule out chance. The net effect of requiring both tests to be positive is an increase in the probability of a false negative finding.
EPA notes that a lack of monotonic dose-response was a factor in their evaluation and, even though not mentioned in EFSA's final conclusions, was also used by EFSA to eliminate positive findings. This restriction suggests a serious lack of understanding of statistical variation in tumor responses and the way in which trend tests treat this variation, especially when the lowest doses are close to the control response and the increased tumor response is low. The net effect of requiring monotonic dose-response is a severe reduction in the ability to detect a positive trend and a large increase in the probability of a false negative finding.
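To see why requiring monotonicity conflicts with how trend tests work, consider the standard Cochran-Armitage test for trend in proportions, sketched below in Python with entirely hypothetical counts (not data from any study reviewed here). The middle dose group dips below the one preceding it, yet the overall trend is clearly significant; demanding a strictly monotonic response would discard exactly this kind of finding.

    from math import sqrt
    from statistics import NormalDist

    def cochran_armitage(tumors, animals, doses):
        """One-sided Cochran-Armitage test for an increasing trend in proportions."""
        n_total = sum(animals)
        p_bar = sum(tumors) / n_total
        # Score statistic: dose-weighted deviation of counts from expectation.
        t = sum(d * (x - n * p_bar) for x, n, d in zip(tumors, animals, doses))
        s_d = sum(n * d for n, d in zip(animals, doses))
        s_d2 = sum(n * d * d for n, d in zip(animals, doses))
        var_t = p_bar * (1 - p_bar) * (s_d2 - s_d**2 / n_total)
        z = t / sqrt(var_t)
        return z, 1 - NormalDist().cdf(z)

    # Hypothetical example: non-monotonic counts, but a clear overall trend.
    z, p = cochran_armitage(tumors=[1, 4, 3, 8], animals=[50, 50, 50, 50],
                            doses=[0, 1, 2, 3])
    print(f"z = {z:.2f}, one-sided p = {p:.4f}")  # z = 2.33, p = 0.0099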
Both agencies note that a lack of preneoplastic or related non-neoplastic lesions led to the exclusion of some tumors. For some of the tumors mentioned above, this is the case, but certainly not for all of them, as noted in the analyses shown in Tables 3, 4 and 5. In addition, both agencies failed to evaluate support in the scientific literature for any of the tumors and relied entirely on the cancer bioassay results to draw their conclusions. In this evaluation, changes in preneoplastic and non-neoplastic conditions are analyzed for all tissues showing positive tumor findings, and in all studies with the same sex/species/strain, using an appropriate trend test, and many tissue changes that could relate to these tumors are identified.
Both EPA and EFSA noted that historical controls are used in their evaluations. However, in both cases, the agencies only cite the range of the historical controls as a factor when determining if a given positive cancer finding is caused by glyphosate. As noted by the IARC [40], "It is generally not appropriate to discount a tumour response that is significantly increased compared with concurrent controls by arguing that it falls within the range of historical controls." In general, the concurrent control group is the most appropriate comparison for any statistical analysis of the data [38,39,40]. However, historical controls can play an important role in evaluating changes in rare tumors, and in cases where the control response appears unreasonably low while the treated groups appear unchanged from each other and fall in the central area of the historical control data. In this evaluation, a formal statistical test [41] is used to evaluate the cancer data when it is appropriate to use historical controls, rather than inappropriately using only the historical control range. In addition, in every case where EPA and EFSA noted that a significant tumor response was in the range of the historical control data, the reanalysis in this paper using Tarone's test demonstrates greater statistical significance in the trend and in no case invalidates a positive trend (not shown for all cases).
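Tarone's test [41] works by modeling the extra-binomial variability among historical control groups rather than merely noting their range. The sketch below is not Tarone's exact statistic; it is a simplified illustration of the underlying idea, fitting a beta distribution to hypothetical historical control incidences by the method of moments and asking how surprising a concurrent group's tumor count would be under the resulting beta-binomial model.

    from math import comb, lgamma, exp

    def beta_moments(props):
        """Method-of-moments fit of a Beta(a, b) to observed control proportions."""
        m = sum(props) / len(props)
        v = sum((p - m) ** 2 for p in props) / (len(props) - 1)
        common = m * (1 - m) / v - 1
        return m * common, (1 - m) * common

    def betabinom_sf(k, n, a, b):
        """P(X >= k) under a beta-binomial(n, a, b), via log-Beta functions."""
        def log_beta(x, y):
            return lgamma(x) + lgamma(y) - lgamma(x + y)
        return sum(comb(n, i) * exp(log_beta(i + a, n - i + b) - log_beta(a, b))
                   for i in range(k, n + 1))

    # Hypothetical historical control incidences from nine studies.
    historical = [0.02, 0.04, 0.10, 0.06, 0.08, 0.12, 0.05, 0.07, 0.09]
    a, b = beta_moments(historical)
    # How unusual is 9 tumors in a concurrent group of 50 animals?
    print(f"P(>=9/50 | historical controls) = {betabinom_sf(9, 50, a, b):.4f}")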
EPA cites no evidence of tumor progression as a reason to exclude some of the cancer findings. For some tumors, such as malignant lymphomas, tumor progression is not an issue. In cases where there is clearly tumor progression such as for mammary gland adenomas and adenocarcinomas in study M, the agency did not consider this progression to be compelling. In addition, in cases where there is a clear increase in carcinomas and a slight decrease in adenomas, as might occur if the chemical impacts a later stage in the carcinogenic process or is a promoter, the agency did not consider this possibility. Similar comments apply to EFSA's evaluation.
EFSA notes that many studies had positive findings at or above the limit dose/MTD as a reason for excluding many study findings. There is clear guidance in the literature and regulatory guidelines on what constitutes exceedance of the MTD and how to exclude these data [39, 40, 97]. In no case did EFSA or EPA conclude that the highest dose used in any study they reviewed exceeded the MTD. The limit dose derives from the OECD guidelines for combined chronic toxicity/carcinogenicity studies [98] which states that "For the chronic toxicity phase of the study, a full study using three dose levels may not be considered necessary, if it can be anticipated that a test at one dose level, equivalent to at least 1000 mg/kg body weight/day, is unlikely to produce adverse effects." It is difficult to understand how a finding of carcinogenicity at a dose above 1000 mg/kg/day can be excluded based upon this guidance if that dose does not exceed the MTD.
Both EFSA and EPA found inconsistency in tumor response between studies and used this reasoning to exclude several tumors. Part of this relates to findings appearing in only one sex or strain but not others; this happens quite often (see [99], for example, for animal carcinogenicity findings for 111 known human carcinogens). The other part relates to the magnitude of the response in a specific sex/species/strain; neither agency used a formal statistical method to evaluate this consistency. It is naive to assume that the raw tumor counts from studies done in different laboratories at different times using different diets, different exposure lengths and different sub-strains of animals would yield perfect agreement in response. EPA's FIFRA Science Advisory Panel, in its review of EPA's draft risk assessment [100], recommended EPA do a pooled analysis to determine an overall effect, as does the IARC [40]. The pooled analyses presented in this evaluation properly adjust for study differences and demonstrate consistency for many of the tumors showing significant evidence of carcinogenicity in one or more studies and suggestive increases in carcinogenicity in other studies using the same sex/species/strain.
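One standard way to pool trend evidence across studies while allowing for study differences is to stratify the Cochran-Armitage score statistic by study: the per-study score statistics and variances are summed for the common-trend test, and the leftover variation among the per-study trends gives a heterogeneity statistic on S - 1 degrees of freedom. The paper's exact pooling method may differ; the sketch below uses entirely hypothetical counts and within-study dose scores.

    from math import sqrt
    from statistics import NormalDist

    def trend_components(tumors, animals, doses):
        """Per-study Cochran-Armitage score statistic and its variance."""
        n = sum(animals)
        p_bar = sum(tumors) / n
        t = sum(d * (x - m * p_bar) for x, m, d in zip(tumors, animals, doses))
        s_d = sum(m * d for m, d in zip(animals, doses))
        s_d2 = sum(m * d * d for m, d in zip(animals, doses))
        return t, p_bar * (1 - p_bar) * (s_d2 - s_d**2 / n)

    # Hypothetical counts from three studies, each with its own dose scores.
    studies = [
        ([0, 2, 3, 5], [50, 50, 50, 50], [0, 0.3, 1.0, 3.0]),
        ([1, 1, 4, 4], [60, 60, 60, 60], [0, 0.5, 1.5, 4.5]),
        ([2, 2, 3, 6], [50, 50, 50, 50], [0, 1.0, 2.0, 6.0]),
    ]
    parts = [trend_components(*s) for s in studies]
    t_sum = sum(t for t, _ in parts)
    v_sum = sum(v for _, v in parts)
    z_pooled = t_sum / sqrt(v_sum)
    # Heterogeneity: total chi-square minus the pooled component, df = S - 1
    # (an approximate partition; valid when the per-study scores are comparable).
    het = sum(t * t / v for t, v in parts) - t_sum**2 / v_sum
    print(f"pooled z = {z_pooled:.2f}, "
          f"one-sided p = {1 - NormalDist().cdf(z_pooled):.4f}")
    print(f"heterogeneity chi-square = {het:.2f} on {len(parts) - 1} df")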
Finally, both agencies missed many of the tumors identified in this evaluation due to a failure to analyze all of the data using a trend test like the Cochran-Armitage (C-A) test. EPA states that in 4 of the 8 rat carcinogenicity studies no tumors were identified for evaluation. For one of these studies [30], the data are unavailable for review and the doses are far below the MTD. For the remaining three studies [19,20,21], there are 5 positive findings not identified by the EPA. In the remaining 4 studies [17, 18, 22, 23] where they saw some tumors increased, they failed to identify 6 tumors identified in this reanalysis. EPA states that in 2 of the 6 mouse carcinogenicity studies no tumors were identified for evaluation. As noted in the Materials and methods section, one of these studies [24] was determined by EPA to have falsified data [25] and should not have been included in their evaluation. For the second study [26], the data are unavailable and could not be evaluated in this review. In the remaining four studies discussed by EPA [11,12,13,14], they missed 5 tumors identified in this evaluation (two identified through historical controls). In addition, they excluded one study [16] due to the presence of a viral infection within the colony; EPA gives no documentation of this viral infection, and there is no indication within the study report of a viral infection nor any indication that these animals were unhealthy. This study has one significant finding not discussed by EPA and three marginally significant findings similar to those seen in CD-1 mice. EPA also failed to evaluate one study [15] considered in this evaluation, which had two positive tumor findings. Thus, EPA discussed only 7 of the 21 statistically significant tumor increases in rats and 5 of the 16 significant tumor increases in mice. Similar comments apply to the EFSA review and all of the other regulatory reviews. To be fair to the regulatory agencies, it should be noted that the original study reports from the laboratories that did these studies also failed to identify many of the significant trends discussed in this review because they relied predominantly on pairwise evaluations like Fisher's exact test and failed to do any trend analyses. This suggests that the regulatory agencies relied upon the results of the analyses presented in the study reports rather than conducting their own thorough reanalysis of the data using trend tests.
The mechanisms through which glyphosate causes these tumors in laboratory animals are as controversial as the cancer findings themselves. The IARC Working Group [2] concluded there was strong evidence that glyphosate induces genotoxicity and oxidative stress. All of the regulatory reviews have concluded glyphosate is not genotoxic and most have concluded it does not cause oxidative stress. A complete review of this literature is beyond the scope of this manuscript, but as noted above, genotoxicity and oxidative stress are plausible mechanisms for many of these cancers. Also, as noted in the earlier discussion of related findings from the peer-reviewed literature, some of the cancers may be due to glyphosate altering hormonal balance in the adrenal, pituitary and thyroid glands.
Strength-of-evidence conclusions
In summary, exposure of rats and mice to glyphosate in 13 separate carcinogenicity studies demonstrates that glyphosate causes a variety of tumors that differ by sex, species, strain and length of exposure. To summarize the strength of evidence for each tumor, four categories are used. Clear evidence (CE) is indicated when the data demonstrate a causal linkage between glyphosate and the tumor based upon the reanalysis in this review and the available peer-reviewed literature. Some evidence (SE) is indicated when the data demonstrate a linkage between glyphosate and the tumor based upon the reanalysis in this review and the available peer-reviewed literature, but chance, although unlikely, cannot be ruled out. Equivocal evidence (EE) also indicates the data demonstrate a linkage between glyphosate and the tumor based upon the reanalysis in this review and the available peer-reviewed literature, but chance is as likely an explanation for the association as is glyphosate. No evidence (NE) indicates that any linkage between glyphosate and the tumor seen in the reanalysis in this review is almost certainly due to chance. The factors used to put tumors into these categories include the analyses of the individual studies, the consistency of the data across studies (the pooled analyses), the analyses using historical control data, the analyses of the non-neoplastic lesions, the mechanistic evidence and the associated scientific literature. These categorizations are presented in Table 6.
Table 6 Summary of level of evidence for tumors observed to have a significant trend in 13 rodent carcinogenicity studies in male and female mice and rats
There is clear evidence that glyphosate causes hemangiosarcomas, kidney tumors and malignant lymphomas in male CD-1 mice and hemangiomas and malignant lymphomas in female CD-1 mice. There is clear evidence that glyphosate causes hemangiomas in female Swiss albino mice. There is clear evidence that glyphosate causes kidney adenomas, liver adenomas, skin keratoacanthomas and skin basal-cell tumors in male Sprague-Dawley rats and adrenal cortical carcinomas in female Sprague-Dawley rats. There is clear evidence that glyphosate causes hepatocellular adenomas and skin keratoacanthomas in male Wistar rats.
There is some evidence that glyphosate causes malignant lymphomas in male and female Swiss albino mice and kidney tumors in male Swiss albino mice. There is some evidence that glyphosate causes testicular interstitial cell tumors in male Sprague-Dawley rats. There is some evidence that glyphosate causes pituitary adenomas in male and female Wistar rats and mammary gland adenomas and carcinomas in female Wistar rats.
There is equivocal evidence that glyphosate causes thyroid C-cell adenomas and carcinomas in male and female Sprague-Dawley rats, and thyroid follicular-cell adenomas and carcinomas and pancreas islet-cell adenomas in male Sprague-Dawley rats. There is equivocal evidence that glyphosate causes adrenal pheochromocytomas in male Wistar rats.
There is no evidence that glyphosate causes lung tumors in male and female CD-1 mice or Harderian gland tumors in female CD-1 mice.
The analyses conducted for this review clearly support the IARC's conclusion that there is sufficient evidence to say that glyphosate causes cancer in experimental animals. In contrast, the regulatory authorities reviewing these data appear to have relied on analyses conducted by the registrant and not their own analyses of the data. As such, they uniformly concluded that the subset of tumor increases they identified as showing an association with glyphosate was due to chance. Had regulatory authorities conducted a full reanalysis of all of the available evidence from the 13 animal carcinogenicity studies, as was done here, it is difficult to see how they could reach any conclusion other than that glyphosate can cause cancer in experimental animals.
The original reports for 12 of the animal carcinogenicity studies that support the findings of this study are available from EFSA, but restrictions apply to the availability of these data. All tumor data cited in this study are included in this published article [and its supplementary information files]. Additional data (historical control data, non-significant cancer sites, non-neoplastic endpoints, etc.) are available from the author upon reasonable request.
Abbreviations
AID: Activation-induced cytidine deaminase
ALT: Alanine aminotransferase
AST: Aspartate aminotransferase
DMBA: 7,12-dimethyl-benz[a]anthracene
EChA: European Chemicals Agency
EFSA: European Food Safety Authority
EPA: US Environmental Protection Agency
GBH: Glyphosate-based herbicide
GSH: Glutathione
IARC: International Agency for Research on Cancer
JMPR: Joint Meeting of the FAO Panel of Experts on Pesticide Residues in Food and the Environment and the WHO Core Assessment Group on Pesticide Residues
LDH: Lactic acid dehydrogenase
MDA: Malondialdehyde
mg/kg/d: Milligrams per kilogram body weight per day
MM: Multiple myeloma
MTD: Maximum tolerated dose
OECD: Organization for Economic Cooperation and Development
SD rat: Sprague-Dawley rat
TPA: 12-o-tetradecanoylphorbol-13-acetate
WT: Wild type
Székács A, Darvas B. Forty years with glyphosate. In: Hasaneen M-G, editor. Herbicides - properties, synthesis and control of weeds. Croatia: InTech; 2012. p. 38.
IARC Working Group. Glyphosate. In: Some Organophosphate Insecticides and Herbicides: Diazinon, Glyphosate, Malathion, Parathion, and Tetrachlorvinphos, vol. 112. Lyon: IARC Monogr Prog; 2015. p. 1–92.
Benbrook CM. Trends in glyphosate herbicide use in the United States and globally. Environ Sci Eur. 2016;28(1):3.
European Food Safety Authority. Conclusion on the peer review of the pesticide risk assessment of the active substance glyphosate. EFSA J. 2015;13(11):4302.
Committee for Risk Assessment. Opinion proposing harmonised classification and labelling at EU level of glyphosate (ISO); N-(phosphonomethyl)glycine. Helsinki: European Chemical Agency; 2017.
EPA. Revised Glyphosate Issue Paper: Evaluation of Carcinogenic Potential. Washington: US Environmental Protection Agency; 2017.
JMPR. Report of the special session of the Joint Meeting of the FAO Panel of Experts on Pesticide Residues in Food and the Environment and the WHO Core Assessment Group on Pesticide Residues, Geneva, Switzerland, 9–13 May 2016, vol. 227. Geneva: Food and Agriculture Agency and World Health Organization; 2017.
BfR (German Federal Institute for Risk Assessment). Final Addendum to the Renewal Assessment Report: Glyphosate. Parma: European Food Safety Authority; 2015.
Greim H, Saltmiras D, Mostert V, Strupp C. Evaluation of carcinogenic potential of the herbicide glyphosate, drawing on tumor incidence data from fourteen chronic/carcinogenicity rodent studies. Crit Rev Toxicol. 2015;45(3):185–208.
European Court of Justice. Judgment of the General Court of 7 March 2019 –Hautala and Others v EFSA. Luxembourg: European Court of Justice; 2019.
Knezevich AL, Hogan GK. A chronic feeding study of glyphosate in mice. Monsanto. East Millstone: Bio/Dynamics Inc.; 1983. Report No. 77–2011.
Atkinson C, Martin T, Hudson P, Robb D. Glyphosate: 104 week dietary carcinogenicity study in mice. In: Inveresk Research International. Tranent: IRI Project No. 438618; 1993.
Sugimoto K. 18-Month Oral Oncogenicity Study in Mice, Vol. 1 and 2. Kodaira-shi: The Institute of Environmental Toxicology; 1997. Study No.:IET 94–0151.
Wood E, Dunster J, Watson P, Brooks P. Glyphosate Technical: Dietary Carcinogenicity Study in the Mouse. Derbyshire: Harlan Laboratories Limited; 2009. Study No. 2060–011.
Takahashi M. Oral feeding carcinogenicity study in mice with AK-01. Agatsuma: Nippon Experimental Medical Research Institute Co. Ltd.; 1999.
Kumar DPS. Carcinogenicity Study with Glyphosate Technical in Swiss Albino Mice. In: Toxicology Department Rallis Research Centre, Rallis India Limited; 2001. Study No. TOXI: 1559.CARCI-M.
Lankas GP. A Lifetime Study of Glyphosate in Rats. Monsanto: Report No. 77–2062 prepared by Bio Dynamics, Inc.; 1981.
Stout LD, Ruecker PA. Chronic study of glyphosate administered in feed to albino rats. Monsanto: Monsanto Chemical Company; 1990.
Atkinson C, Strutt A, Henderson W, et al. 104-week chronic feeding/oncogenicity study in rats with 52-week interim kill; 1993.
Enemoto K. 24-Month Oral Chronic Toxicity and Oncogenicity Study in Rats, Vol. 1. Kodaira-shi: The Institute of Environmental Toxicology; 1997.
Suresh TP. Combined chronic toxicity and carcinogenicity study with glyphosate technical in Wistar rats. Syngenta: Toxicology Department Rallis Research Centre, Rallis India Limited; 1996.
Brammer. Glyphosate Acid: Two Year Dietary Toxicity and Oncogenicity Study in Wistar Rats. Cheshire: Central Toxicology Laboratory, Alderley Park Macclesfield; 2001.
Wood E, Dunster J, Watson P, Brooks P. Glyphosate Technical: Dietary Combined Chronic Toxicity/Carcinogenicity Study in the Rat. Derbyshire: Harlan Laboratories Limited; 2009. Study No. 2060–012.
Reyna MS, Gordon DE. 18-month carcinogenicity study with CP67573 in Swiss white mice: Monsanto. Northbrook: Industrial Bio-Test Laboratories, Inc.; 1973.
USEPA. In: Office of Pesticide Programs, editor. Summary of the IBT Review Program, vol. 46. Washington: US Environmental Protection Agency; 1983.
Pavkov K, Turnier JC. Two-year chronic toxicity and oncogenicity dietary study with SC-0224 in mice. Farmington: Stauffer Chemical Company; 1987.
Reyna MS, Gordon DE. Two- Year Chronic Oral Toxicity Study with CP67573 in Albino Rats: Monsanto; 1974.
Burnett P, Borders J, Kush J. Report to Monsanto Company: Two-year chronic oral toxicity study with CP-76100 in albino rats. Northbrook: Industrial Bio-Test Laboratories, Inc.; 1979.
EPA. Glyphosate Issue Paper: Evaluation of Carcinogenic Potential. Washington: US Environmental Protection Agency; 2016.
Pavkov K, Wyand S. Two-year chronic toxicity and oncogenicity dietary study with SC-0224 in rats. Farmington: Stauffer Chemical Company; 1987.
USEPA. Data evaluation report (accession number 4021 40–06). Toxicology Branch. Washington: USEPA; 1987.
Excel. Combined chronic toxicity/carcinogenicity study of glyphosate technical in Sprague Dawley rats. Pune: Indian Institute of Toxicology; 1997.
Takahashi M. A combined chronic toxicity/carcinogenicity study of AK-01 bulk substance by dietary administration in rats. Agatsuma: Nippon Experimental Medical Research Institute Co. Ltd.; 1999.
Chruscielska K, Brzezinski J, Kita K, Kalhorn D, Kita I, Graffstein B, Korzeniowski P. Glyphosate: evaluation of chronic activity and possible far - reaching effects - part 1. Studies on chronic toxicity. Pestycydy. 2000;3-4:10.
George J, Prasad S, Mahmood Z, Shukla Y. Studies on glyphosate-induced carcinogenicity in mouse skin: a proteomic approach. J Proteome. 2010;73(5):951–64.
Seralini GE, Clair E, Mesnage R, Gress S, Defarge NS, Malatesta M, Hennequin D, de Vendomois J. Republished study: long-term toxicity of a roundup herbicide and a roundup-tolerant genetically modified maize. Environ Sci Eur. 2014;26(1):14.
Gart JJ, Chu KC, Tarone RE. Statistical issues in interpretation of chronic bioassay tests for carcinogenicity. J Natl Cancer Inst. 1979;62(4):957–74.
OECD. Guidance Document 116 on the Conduct and Design of Chronic Toxicity and Carcinogenicity Studies. Paris: OECD; 2012.
USEPA. Guidelines for Carcinogen Risk Assessment. Washington: US Environmental Protection Agency; 2005. p. 166.
Preamble to the IARC Monographs [https://monographs.iarc.fr/wp-content/uploads/2019/01/Preamble-2019.pdf].
Tarone RE. The use of historical control information in testing for a trend in proportions. Biometrics. 1982;38(1):215–20.
Dykstra W. In: branch T, editor. Glyphosate - EPA registration Nos. 524–318 and 524–333 - historical control data for mouse kidney tumors. Washington, DC: US EPA; 1989.
Giknis M, Clifford C. Spontaneous neoplastic lesions in the CrI:CD-1(ICR)BR mouse. Wilmington: Charles River Laboratories; 2000.
EPA. EPA Memo Stout and Ruecker. In: Dykstra W, editor. Toxicology Branch I; 1991. MRID 416438–01 Tox review 008897.
Isobe K, Mukaratirwa S, Petterino C, Bradley A. Historical control background incidence of spontaneous thyroid and parathyroid glands lesions of rats and CD-1 mice used in 104-week carcinogenicity studies. J Toxicol Pathol. 2016;29(3):201–6.
Turkmen R, Birdane YO, Demirel HH, Yavuz H, Kabu M, Ince S. Antioxidant and cytoprotective effects of N-acetylcysteine against subchronic oral glyphosate-based herbicide-induced oxidative stress in rats. Environ Sci Pollut Res Int. 2019;26(11):11427–37.
Dedeke GA, Owagboriaye FO, Ademolu KO, Olujimi OO, Aladesida AA. Comparative assessment on mechanism underlying renal toxicity of commercial formulation of roundup herbicide and glyphosate alone in male albino rat. Int J Toxicol. 2018;37(4):285–95.
Tang J, Hu P, Li Y, Win-Shwe TT, Li C. Ion imbalance is involved in the mechanisms of liver oxidative damage in rats exposed to glyphosate. Front Physiol. 2017;8:1083.
Hamdaoui L, Naifar M, Mzid M, Ben Salem M, Chtourou A, Makni-Ayadi F, Sahnoun Z, Rebai T. Nephrotoxicity of Kalach 360 SL: biochemical and histopathological findings. Toxicol Mech Methods. 2016;26(9):685–91.
Tizhe EV, Ibrahim ND, Fatihu MY, Onyebuchi II, George BD, Ambali SF, Shallangwa JM. Influence of zinc supplementation on histopathological changes in the stomach, liver, kidney, brain, pancreas and spleen during subchronic exposure of Wistar rats to glyphosate. Comp Clin Pathol. 2014;23(5):1535–43.
Cavusoglu K, Yapar K, Oruc E, Yalcin E. Protective effect of Ginkgo biloba L. leaf extract against glyphosate toxicity in Swiss albino mice. J Med Food. 2011;14(10):1263–72.
Wang L, Deng Q, Hu H, Liu M, Gong Z, Zhang S, Xu-Monette ZY, Lu Z, Young KH, Ma X, et al. Glyphosate induces benign monoclonal gammopathy and promotes multiple myeloma progression in mice. J Hematol Oncol. 2019;12(1):70.
Gunatilake S, Seneff S, Orlando L. Glyphosate's Synergistic Toxicity in Combination with Other Factors as a Cause of Chronic Kidney Disease of Unknown Origin. Int J Environ Res Public Health. 2019;16(15):2734.
Jayasumana C, Gunatilake S, Siribaddana S. Simultaneous exposure to multiple heavy metals and glyphosate may contribute to Sri Lankan agricultural nephropathy. BMC Nephrol. 2015;16:103.
Jayasumana C, Paranagama P, Agampodi S, Wijewardane C, Gunatilake S, Siribaddana S. Drinking well water and occupational exposure to herbicides is associated with chronic kidney disease, in Padavi-Sripura, Sri Lanka. Environ Health. 2015;14:6.
Zhang F, Pan LP, Ding EM, Ge QJ, Zhang ZH, Xu JN, Zhang L, Zhu BL. Study of the effect of occupational exposure to glyphosate on hepatorenal function. Zhonghua Yu Fang Yi Xue Za Zhi. 2017;51(7):615–20.
Zhang L, Rana I, Shaffer RM, Taioli E, Sheppard L. Exposure to glyphosate-based herbicides and risk for non-Hodgkin lymphoma: a meta-analysis and supporting evidence. Mutat Res. 2019;781:186–206.
Chang ET, Delzell E. Systematic review and meta-analysis of glyphosate exposure and risk of lymphohematopoietic cancers. J Environ Sci Health B. 2016;51:1–27.
Schinasi L, Leon ME. Non-Hodgkin lymphoma and occupational exposure to agricultural pesticide chemical groups and active ingredients: a systematic review and meta-analysis. Int J Environ Res Public Health. 2014;11(4):4449–527.
Begley DA, Sundberg JP, Krupke DM, Neuhauser SB, Bult CJ, Eppig JT, Morse HC 3rd, Ward JM. Finding mouse models of human lymphomas and leukemias using the Jackson Laboratory mouse tumor biology database. Exp Mol Pathol. 2015;99(3):533–6.
Hori M, Xiang S, Qi CF, Chattopadhyay SK, Fredrickson TN, Hartley JW, Kovalchuk AL, Bornkamm GW, Janz S, Copeland NG, et al. Non-Hodgkin lymphomas of mice. Blood Cells Mol Dis. 2001;27(1):217–22.
Morse HC 3rd, Ward JM, Teitell MA. Mouse models of human B lymphoid neoplasms. In: Magrath IT, editor. The Lymphoid Neoplasms. 3rd ed. Boca Raton: CRC Press; 2010.
Pattengale PK, Taylor CR. Experimental models of lymphoproliferative disease. The mouse as a model for human non-Hodgkin's lymphomas and related leukemias. Am J Pathol. 1983;113(2):237–65.
Ward JM. Lymphomas and leukemias in mice. Exp Toxicol Pathol. 2006;57(5–6):377–81.
Elie-Caille C, Heu C, Guyon C, Nicod L. Morphological damages of a glyphosate-treated human keratinocyte cell line revealed by a micro- to nanoscale microscopic investigation. Cell Biol Toxicol. 2010;26(4):331–9.
Heu C, Berquand A, Elie-Caille C, Nicod L. Glyphosate-induced stiffening of HaCaT keratinocytes, a peak force tapping study on living cells. J Struct Biol. 2012;178(1):1–7.
Heu C, Elie-Caille C, Mougey V, Launay S, Nicod L. A step further toward glyphosate-induced epidermal cell death: involvement of mitochondrial and oxidative mechanisms. Environ Toxicol Pharmacol. 2012;34(2):144–53.
Olorunsogo OO, Bababunmi EA, Bassir O. Effect of glyphosate on rat liver mitochondria in vivo. Bull Environ Contam Toxicol. 1979;22(3):357–64.
Olorunsogo OO, Bababunmi EA. Inhibition of succinate-linking reduction of pyridine nucleotide in rat liver mitochondria 'in vivo' by N-(phosphonomethyl)glycine. Toxicol Lett. 1980;7(2):149–52.
Olorunsogo OO. Defective nicotinamide nucleotide transhydrogenase reaction in hepatic mitochondria of N-(phosphonomethyl)-glycine treated rats. Biochem Pharmacol. 1982;31(12):2191–2.
Olorunsogo OO. Inhibition of energy-dependent transhydrogenase reaction by N-(phosphonomethyl) glycine in isolated rat liver mitochondria. Toxicol Lett. 1982;10(1):91–5.
Haskovic E, Pekic M, Focak M, Sulejevic D, Mesalic L. Effects of glyphosate on enzyme activity and serum glucose in rats Rattus norvegicus. Acta Vet-Beogr. 2016;66(2):214–21.
Vainio H, Linnainmaa K, Kahonen M, Nickels J, Hietanen E, Marniemi J, Peltonen P. Hypolipidemia and peroxisome proliferation induced by phenoxyacetic acid herbicides in rats. Biochem Pharmacol. 1983;32(18):2775–9.
Mesnage R, Arno M, Costanzo M, Malatesta M, Seralini GE, Antoniou MN. Transcriptome profile analysis reflects rat liver and kidney damage following chronic ultra-low dose roundup exposure. Environ Health. 2015;14:70.
Turkmen R, Birdane YO, Demirel HH, Kabu M, Ince S. Protective effects of resveratrol on biomarkers of oxidative stress, biochemical and histopathological changes induced by sub-chronic oral glyphosate-based herbicide in rats. Toxicol Res (Camb). 2019;8(2):238–45.
Astiz M, de Alaniz MJ, Marra CA. The oxidative damage and inflammation caused by pesticides are reverted by lipoic acid in rat brain. Neurochem Int. 2012;61(7):1231–41.
Owagboriaye F, Dedeke G, Ademolu K, Olujimi O, Aladesida A, Adeleke M. Comparative studies on endogenic stress hormones, antioxidant, biochemical and hematological status of metabolic disturbance in albino rat exposed to roundup herbicide and its active ingredient glyphosate. Environ Sci Pollut Res Int. 2019;26(14):14502–12.
Pandey A, Rudraiah M. Analysis of endocrine disruption effect of roundup((R)) in adrenal gland of male rats. Toxicol Rep. 2015;2:1075–85.
Lorenz V, Milesi MM, Schimpf MG, Luque EH, Varayoud J. Epigenetic disruption of estrogen receptor alpha is induced by a glyphosate-based herbicide in the preimplantation uterus of rats. Mol Cell Endocrinol. 2019;480:133–41.
LKS DA, Pletschke BI, Frost CL. Moderate levels of glyphosate and its formulations vary in their cytotoxicity and genotoxicity in a whole blood model and in human cell lines with different estrogen receptor status. 3 Biotech. 2018;8(10):438.
Gasnier C, Dumont C, Benachour N, Clair E, Chagnon MC, Seralini GE. Glyphosate-based herbicides are toxic and endocrine disruptors in human cell lines. Toxicology. 2009;262(3):184–91.
Hokanson R, Fudge R, Chowdhary R, Busbee D. Alteration of estrogen-regulated gene expression in human cells induced by the agricultural and horticultural herbicide glyphosate. Hum Exp Toxicol. 2007;26(9):747–52.
Mesnage R, Phedonos A, Biserni M, Arno M, Balu S, Corton JC, Ugarte R, Antoniou MN. Evaluation of estrogen receptor alpha activation by glyphosate-based herbicide constituents. Food Chem Toxicol. 2017;108(Pt A):30–42.
Nardi J, Moras PB, Koeppe C, Dallegrave E, Leal MB, Rossato-Grando LG. Prepubertal subchronic exposure to soy milk and glyphosate leads to endocrine disruption. Food Chem Toxicol. 2017;100:247–52.
Sritana N, Suriyo T, Kanitwithayanun J, Songvasin BH, Thiantanawat A, Satayavivad J. Glyphosate induces growth of estrogen receptor alpha positive cholangiocarcinoma cells via non-genomic estrogen receptor/ERK1/2 signaling pathway. Food Chem Toxicol. 2018;118:595–607.
Thongprakaisang S, Thiantanawat A, Rangkadilok N, Suriyo T, Satayavivad J. Glyphosate induces human breast cancer cells growth via estrogen receptors. Food Chem Toxicol. 2013;59:129–36.
Cassault-Meyer E, Gress S, Seralini GE, Galeraud-Denis I. An acute exposure to glyphosate-based herbicide alters aromatase levels in testis and sperm nuclear quality. Environ Toxicol Pharmacol. 2014;38(1):131–40.
Clair E, Mesnage R, Travert C, Seralini GE. A glyphosate-based herbicide induces necrosis and apoptosis in mature rat testicular cells in vitro, and testosterone decrease at lower levels. Toxicol in Vitro. 2012;26(2):269–79.
Astiz M, Hurtado de Catalfo GE, Garcia MN, Galletti SM, Errecalde AL, de Alaniz MJ, Marra CA. Pesticide-induced decrease in rat testicular steroidogenesis is differentially prevented by lipoate and tocopherol. Ecotoxicol Environ Saf. 2013;91:129–38.
Dallegrave E, Mantese FD, Oliveira RT, Andrade AJ, Dalsenter PR, Langeloh A. Pre- and postnatal toxicity of the commercial glyphosate formulation in Wistar rats. Arch Toxicol. 2007;81(9):665–73.
Owagboriaye FO, Dedeke GA, Ademolu KO, Olujimi OO, Ashidi JS, Adeyinka AA. Reproductive toxicity of roundup herbicide exposure in male albino rat. Exp Toxicol Pathol. 2017;69(7):461–8.
Romano MA, Romano RM, Santos LD, Wisniewski P, Campos DA, de Souza PB, Viau P, Bernardi MM, Nunes MT, de Oliveira CA. Glyphosate impairs male offspring reproductive development by disrupting gonadotropin expression. Arch Toxicol. 2012;86(4):663–73.
Romano RM, Romano MA, Bernardi MM, Furtado PV, Oliveira CA. Prepubertal exposure to commercial formulation of the herbicide glyphosate alters testosterone levels and testicular morphology. Arch Toxicol. 2010;84(4):309–17.
de Souza JS, Kizys MM, da Conceicao RR, Glebocki G, Romano RM, Ortiga-Carvalho TM, Giannocco G, da Silva ID, Dias da Silva MR, Romano MA, et al. Perinatal exposure to glyphosate-based herbicide alters the thyrotrophic axis and causes thyroid hormone homeostasis imbalance in male rats. Toxicology. 2017;377:25–37.
Samsel A, Seneff S. Glyphosate, pathways to modern diseases II: celiac sprue and gluten intolerance. Interdiscip Toxicol. 2013;6(4):159–84.
Shrestha S, Parks CG, Goldner WS, Kamel F, Umbach DM, Ward MH, Lerro CC, Koutros S, Hofmann JN, Beane Freeman LE, et al. Pesticide use and incident hypothyroidism in pesticide applicators in the agricultural health study. Environ Health Perspect. 2018;126(9):97008.
OECD. Carcinogenicity Studies, OECD Guideline for the Testing of Chemicals, No. 451. Paris: Organization for Economic Co-operation and Development; 2009.
OECD. Combined Chronic Toxicity/Carcinogenicity Studies, OECD Guidelines for Testing of Chemicals, No. 453. Paris: Organization for Economic Co-operation and Development; 2009.
Krewski D, Rice JM, Bird M, Milton B, Collins B, Lajoie P, Billard M, Grosse Y, Cogliano VJ, Caldwell JC, et al. Concordance between sites of tumor development in humans and in experimental animals for 111 agents that are carcinogenic to humans. J Toxicol Environ Health B Crit Rev. 2019;22(7–8):203–36.
FIFRA Scientific Advisory Panel Meeting Minutes. In: Office of Pesticide Programs, editor. Meeting Minutes and Final Report of the December 13–16, 2016 FIFRA SAP Meeting Held to Consider and Review Scientific Issues Associated with EPA's Evaluation of the Carcinogenic Potential of Glyphosate, vol. 101. Washington, DC: US Environmental Protection Agency; 2017.
Some of the analyses were conducted to develop expert opinions for court cases and were supported by funding from attorneys involved in these litigations. Some of the text and tables in this manuscript are duplicative of written expert testimony by CJP for these court cases. These funders had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.
Rollins School of Public Health, Emory University, Atlanta, GA, USA
Christopher J. Portier
Department of Toxicogenomics, Maastricht University, Maastricht, Netherlands
CJP Consulting, Seattle, Washington, USA
All work on this manuscript was done by CJP. No other persons have contributed to or reviewed this manuscript prior to submission. The author read and approved the final manuscript.
Correspondence to Christopher J. Portier.
All animal carcinogenicity studies used in this evaluation underwent ethics approval by the original study laboratory.
CJP has been paid to provide expert testimony for litigation on the carcinogenicity of glyphosate.
Details on individual animal chronic exposure toxicity and carcinogenicity studies.
Table S1. Tumors of interest in male and female CD-1 mice from the 24-month feeding study of Knezevich and Hogan (1983) [11] – Study A.
Table S2. Tumors of interest in male and female CD-1 mice from the 24-month feeding study of Atkinson et al. (1993) [12] – Study B.
Table S3. Tumors of interest in male and female CD-1 mice from the 18-month feeding study of Sugimoto (1997) [13] – Study C.
Table S4. Tumors of interest in male and female CD-1 mice from the 18-month feeding study of Wood et al. (2009) [14] – Study D.
Table S5. Tumors of interest in male and female CD-1 mice from the 18-month feeding study of Takahashi (1999) [15]; data extracted from JMPR [7] – Study E.
Table S6. Tumors of interest in male and female Swiss albino mice from the 18-month feeding study of Kumar (2001) [16] – Study F.
Table S7. Tumors of interest in male and female Sprague-Dawley rats from the 26-month feeding study of Lankas (1981) [17] – Study G.
Table S8. Tumors of interest in male and female Sprague-Dawley rats from the 24-month feeding study of Stout and Ruecker (1990) [18] – Study H.
Table S9. Tumors of interest in male and female Sprague-Dawley rats from the 24-month feeding study of Atkinson et al. (1993) [19] – Study I.
Table S10. Tumors of interest in male and female Sprague-Dawley rats from the 24-month feeding study of Enemoto (1997) [20] – Study J.
Table S11. Tumors of interest in male and female Wistar rats from the 24-month feeding study of Suresh (1996) [21] – Study K.
Table S12. Tumors of interest in male and female Wistar rats from the 24-month feeding study of Brammer (2001) [22] – Study L.
Table S13. Tumors of interest in male and female Wistar rats from the 24-month feeding study of Wood et al. (2009) [23] – Study M.
Table S14. Observed (Obs.) versus expected (Exp.) tumor sites with significant trends in the 13 acceptable rodent carcinogenicity studies using glyphosate.
Portier, C.J. A comprehensive analysis of the animal carcinogenicity data for glyphosate from chronic exposure rodent carcinogenicity studies. Environ Health 19, 18 (2020). https://doi.org/10.1186/s12940-020-00574-1
Received: 10 December 2019
Accepted: 06 February 2020
Animal carcinogenicity studies
Trend test
Historical controls | CommonCrawl |
Micro-scale interactions between Arabidopsis root hairs and soil particles influence soil erosion
Sarah De Baets, Thomas D. G. Denbigh, Kevin M. Smyth, Bethany M. Eldridge, Laura Weldon, Benjamin Higgins, Antoni Matyjaszkiewicz, Jeroen Meersmans, Emily R. Larson, Isaac V. Chenchiah, Tanniemola B. Liverpool, Timothy A. Quine & Claire S. Grierson
Communications Biology volume 3, Article number: 164 (2020) Cite this article
Soil is essential for sustaining life on land. Plant roots play a crucial role in stabilising soil and minimising erosion, although the underlying mechanisms are still not completely understood. Consequently, identifying and breeding for plant traits that enhance erosion resistance is challenging. Root hair mutants in Arabidopsis thaliana were studied using three different quantitative methods to isolate their effect on root-soil cohesion. We present compelling evidence that micro-scale interactions of root hairs with the surrounding soil increase soil cohesion and reduce erosion. Arabidopsis seedlings with root hairs were more difficult to detach from soil, compost and sterile gel media than those with hairless roots, and it was 10 times harder to erode soil from roots with hairs than from roots without them. We also developed a model that can consistently predict the impact root hairs make on soil erosion resistance. Our study thus provides new insight into the mechanisms by which roots maintain soil stability.
Soil erosion rates associated with agricultural intensification and expansion are 100–1000 times higher than natural background erosion and far exceed rates of soil formation, posing a serious threat to sustainable agriculture, food and environmental security1,2. Soil erosion is likely to worsen as the global population grows, average calorific intake increases and climate changes. Plants limit erosion by sheltering soil from erosive forces with their aerial parts and binding soils with their roots, both of which help to retain soil on slopes and anchor plants in the ground3,4,5. Plant root system architecture develops in response to local nutrient concentrations, and precision nutrient placement has been mooted as a means of controlling soil erosion6. The selection of cultivars best suited to resisting erosion could be part of future sustainable soil management; however, this requires the identification of erosion-resistant traits. While the importance of meso-scale root properties of plant species (e.g. length, diameter, surface area and tensile strength) that support soil erosion resistance has been well studied experimentally and through modelling3,4,7,8, the understanding of the potential role of root micro-scale properties (e.g. root hairs, which are typically up to 1 mm long and tens of microns across) in controlling emergent soil properties like soil erosion resistance is limited.
There is a growing awareness that micro-scale processes at the root–soil interface (rhizosphere) are important for determining the properties and functions of soils and ecosystems that support sustainable agricultural land use and management. Symbiosis between roots and mycorrhizal fungi, for example, positively affects water balance, energy balance, nutrient/element cycling and soil hydrophobicity9,10. Likewise, root hairs have been linked to phosphate uptake, rhizosphere soil structure formation11, root penetration12, water uptake13 and rhizosheath (i.e. the weight of soil adhering strongly to roots upon excavation) formation in crop plants14. Plant roots also secrete compounds (exudates) that have been shown to promote soil aggregation15, supporting a composite-like medium consisting of soil particles, plant roots, and plant- and microbe-derived compounds that all contribute to mutual cohesive interactions. Nevertheless, there is currently no convincing evidence that micro-scale root properties such as root hairs contribute to soil cohesion. Indeed, the presence of root hairs is required for rhizosheath formation, but no effect of root hair length on rhizosheath strength and size has been detected14.
Previous studies have explored the mechanisms of adhesion of roots to soil and cohesion of the root-soil composite by comparing species with different root architecture to evaluate how thick, deep roots; thin roots; and dense, fine roots change soil erosion resistance3,4,5. The technical term 'cohesion' refers to the tendency of the 'root–soil matrix' (which is a composite material of soil particles, plant roots, plant-derived compounds and microbes) to maintain mechanical integrity16. Thus, root–soil cohesion includes both roots adhering to soil as well as soil particles sticking to one another as an effect of root exudation or plant root–microbe interactions.
The explanatory power of prior studies is limited because root–soil cohesion may be influenced by inter-species differences other than those selected, especially differences in root micro-architecture. We overcome these limitations by using mutants and transgenic lines of the model plant Arabidopsis thaliana and novel root–gel attachment and uprooting resistance assays, as well as an established soil erosion assay in conjunction with a mathematical model to quantify the soil cohesion effects of root hairs. Our work advances the quantitative understanding of how root hairs affect root–soil cohesion and shows that they have a measurable effect on soil erosion resistance.
To characterise the role of root hairs in plant–substrate cohesion, our assays included the use of Arabidopsis wild type (Col-0) and root hairless or root hair overproducing mutant lines17,18,19. Transgenic 35S::RSL4 plants have longer root hairs18 and wer myb23 seedlings produce more root hairs than wild-type seedlings20, while rsl4-1 mutant seedlings produce fewer, shorter root hairs18 and cpc try mutant seedlings do not produce root hairs21. In soil, cpc try roots had a 97% less dense network of root hairs than wild type, whereas root hair density was 1.6 times higher for wer myb23 than wild type (Table 1, Fig. 1). We confirmed that the root architecture of 10- to 11-day-old wild type, cpc try and wer myb23 plants had no observable difference in lateral root length, lateral root count, rooting depth and vertical angle from the root system, which indicates that the only significant difference between these lines is root hair growth (Table 1, Fig. 1).
Table 1 Main root hair phenotypic differences between wild type (Col-0), cpc try and wer myb23 grown in gel or clay soil.
Fig. 1: Root hair phenotypes that affect plant–soil cohesion in Arabidopsis.
Root hair phenotypes of wild type, cpc try and wer myb23 Arabidopsis thaliana grown in a a clay–loam soil or on b gel medium. Black boxes in the upper panels in a indicate the regions magnified in the lower panels. Images were produced as described in the Methods using a a Nikon XT H 225 ST CT scanner (settings: energy 90 kV, current 60 μA, exposure 1 s, 5 frames averaged per projection, voxel size = 0.00278056) and b bright field, high contrast lighting on a Leica MZ FLIII microscope. Scale bar = 1 mm.
Root hairs contribute to root–substrate cohesion
We developed a centrifugal assay that measures the strength of root–gel adhesion in Arabidopsis seedlings with and without root hairs (Fig. 2). Seedlings were grown vertically on the surface of a sterile, solidified growth medium in Petri plates. After 5 days of growth, the mutant phenotypes were visualised (Fig. 2a) and seedlings were subjected to incremental increases in centrifugal force to determine the proportion of seedlings that peeled away from the gel surface between each force interval. Even at the slowest rotation, seedlings experienced a centrifugal force at least 40 times gravity, and because the aerial tissue mass of each seedling contributes to the force its roots experience, we incorporated aerial tissue mass when calculating the force (Fc) applied to each seedling (Eq. (2) in Methods; Fig. 2b). To compare the attachment of root hair-defective and root hair-overproducing lines relative to wild-type plants, we used a Cox hazard function regression model22 and report the P value of the Wald statistic (z), the hazard ratio and the lower and upper bound confidence intervals of the hazard ratio. Using this assay, we observed that the 35S::RSL4 and wer myb23 lines were more resistant to detachment from the gel medium than wild-type plants (Fig. 2c), with a risk of detachment that was 0.44 and 0.56 times that of the control, respectively (35S::RSL4 – z = −5.029, P < 0.001, HR = 0.444, 95% CI = 0.324–0.610; wer myb23 – z = −3.705, P < 0.001, HR = 0.553, 95% CI = 0.404–0.757). Conversely, the risk of detachment for rsl4-1 and cpc try mutants was 5 and 5.4 times more relative to wild-type plants (rsl4-1 – z = 10.732, P < 0.001, HR = 6.002, 95% CI = 4.327–8.325; cpc try – z = 10.823, P < 0.001, HR = 6.369, 95% CI = 4.554–8.906), respectively (Fig. 2c). These results indicate that Arabidopsis seedlings with root hairs (wild type, 35S::RSL4, wer myb23) are more difficult to detach from sterile gel than seedlings that have no (cpc try) or fewer, shorter root hairs (rsl4-1). Therefore, root hairs directly contribute to plant adhesion during plant growth on solid gel medium.
Fig. 2: Root hairs increase root adherence to a gel substrate.
a Roots of 5-day-old wild type, long haired 35S::RSL4, root hair overproducing wer myb23, sporadic and short haired rsl4-1, and hairless cpc try seedlings. Scale bar = 1 mm. b A schematic of the Arabidopsis centrifugation root–gel adhesion assay to illustrate the centrifuge rotor and swinging bucket containing an inverted Petri plate. Ten seedlings/plate were grown on the surface of the gel medium and as the centrifuge rotates, the bucket swings out so that the plate is perpendicular to the rotor. Over a period of ~10 min, the plates are exposed to 1-min pulses of increasing centrifugal forces and the proportion of detached seedlings is scored between each speed setting. Illustration not to scale. c Survival curves showing the proportion of seedlings that remained adhered to the gel at increasing centrifugal force for 87 wild type (black); 88 cpc try (red); 94 rsl4-1 (pink); 87 wer myb23 (light blue); and 91 35S::RSL4 (dark blue) 5-day-old seedlings. The angular velocity (ω) and radius of the centrifuge, together with the aerial tissue weight of each seedling, are used to calculate the centrifugal force (mass × radius × ω2 = Fc, kg m s−2) resisted by each seedling. The seedlings with more and longer root hairs were able to remain attached to the medium over the course of the experiment compared to wild type, while the seedlings with fewer or no root hairs did not. Black crosses represent plants that remained adhered to the gel medium after the maximum centrifugal speed (1611 RPM). Results are from one representative experiment of at least two independent batches, which each included over 70 biological replicates for each genotype.
Root hairs contribute to plant anchoring in soil
We performed uprooting assays to investigate whether Arabidopsis root hairs contribute to root–soil cohesion. Plants from each genotype were grown in soil for 3–4 weeks and then uprooted from either a compost–sand mixture or clay soil using a tensile testing machine to record the uprooting resistance of the different genotypes (Fig. 3a–c). After uprooting, the plant material was recovered and the root length density (RLD, km m−3) of each plant was calculated. Since the root–soil system responds to the uprooting force by a combination of deformation and damage, the maximum force and total energy expended to dislodge the plant from its substrate are macroscopic measures of the strength of root–soil cohesion (Fig. 3c). We identified differences in the total energy (kg m2 s−2 m−1 root) and maximum pulling resistance (kg m s−2 m−1 root) required to uproot wild type and mutant plants (Supplementary Table 1).
Fig. 3: Root hairs increase uprooting resistance from compost and soil.
a A wild-type Arabidopsis seedling being uprooted by a tensile testing machine illustrates how the cables are anchored to a washer that the mature plant has grown through. As the tensile machine uproots the plant by retracting the cables attached to the washer, the force required to remove the plant from the soil is recorded. b Schematic diagram showing a soil-grown Arabidopsis plant grown through an aluminium washer for tensile machine wire attachment. The rosette of the mature plant stabilised the washer so that the force required to uproot the plant could be measured. c Representative plot of load (kg m s−2) against displacement during the uprooting of the Arabidopsis plant from a clay soil shown in a. The adjacent panel shows the portion of the trace enclosed by the red box. d Plots of the total work done (i.e. area under curve in c), peak force and magnitude of force drops during the uprooting of wild type (black squares), hairless cpc try (red circles) and hairy wer myb23 (blue triangles) mutant plants from compost and a clay soil. Thirteen wild-type plants, 16 wer myb23 plants and 13 cpc try plants were grown in compost, and 17 plants of each genotype were grown in clay soil. In both soil conditions, the presence of root hairs increased the amount of force needed to uproot plants compared to when root hairs were absent.
The root hair overproducing wer myb23 plants grown in clay soil required a greater maximum uprooting force per RLD and greater total energy per RLD than wild-type plants (t = 2.605, P < 0.05 and t = 3.807, P < 0.001, respectively; Fig. 3d). However, when wild type and wer myb23 were grown in compost, no detectable difference in uprooting force was observed (Table S2). In contrast, the hairless cpc try plants required a lower maximum uprooting force and less total energy to uproot from clay soil (t = −3.034, P < 0.05 and t = −2.814, P < 0.001, respectively) and from compost (maximum uprooting force, t = −2.394, P < 0.05; total energy, t = −3.618, P < 0.001) compared to wild-type plants (Fig. 3d). The magnitudes of incremental vertical drops in force during uprooting were also measured: smaller force drops occurred when root hairless cpc try plants were uprooted from clay soil (t = −4.300, P < 0.001) than wild type (Fig. 3d), consistent with root hairs providing resistance in soil until root tissues snap, which is reflected in larger force drops. While root hair overproduction in wer myb23 increased root–soil cohesion in clay soil compared to wild type, it had no detectable benefit in compost (Fig. 3d), suggesting that the composition and structure of the anchoring medium can affect root–soil cohesion behaviour.
Root hairs affect soil erosion rates
We tested whether root hairs contribute to soil water erosion resistance by comparing the erosion rates of clay–loam soil sown with wild type, hairless cpc try or hair overproducing wer myb23 plants. Plants were grown in 250 × 250 × 150-mm soil boxes over a range of densities (144–1600 m−2) for 4–6 weeks. After removing the aerial plant tissue, 150 L of water was flowed over the soil–root blocks for a maximum of 110 s to simulate an overland flow event (Fig. 4a). RLD ranged between 3–56, 8–48 and 5–34 km m−3 for wild type, cpc try and wer myb23, respectively, which corresponds with topsoil RLD ranges (1–45 km m−3) of six common cover crop species measured in field conditions23. We observed that soil–root blocks that contained root length densities >19 km m−3 of either wild type or wer myb23 roots reduced erosion rates to almost zero or 0.27 times that of the bare soil controls, respectively (Fig. 4b; Supplementary Movie 1). For root length densities <19 km m−3, which corresponded to <850 plants m−2 planting densities, soil erosion decreased exponentially with increased RLD for all mutants (Fig. 4c). The exponents of the empirical regression lines and goodness of fit for wild type, cpc try and wer myb23 were −0.095 ± 0.007 (R2 = 0.96), −0.069 ± 0.007 (R2 = 0.57) and −0.066 ± 0.008 (R2 = 0.62), respectively. At RLD > 19 km m−3, which corresponded with plant densities >850 plants m−2, hairless cpc try was best modelled using a linear regression with the constant term 0.268 ± 0.033, indicating that there was no further erosion reduction with RLD > 19 km m−3 for plants without root hairs (Fig. 4c). In contrast, plots planted with wer myb23 lines reduced soil erosion to 0.19, 0.10 and 0.05 times that of bare soil rates at 25, 35 and 45 km m−3, respectively (Fig. 4c, d). Erosion rates in wild-type plots were reduced to 0.09, 0.04 and 0.01 times that of the bare soil rates at 25, 35 and 45 km m−3, respectively (Fig. 4c, d). These results show that at high root densities, roots with root hairs reduced erosion rates to near zero, while erosion rates of soils containing hairless roots were only reduced to 0.25 times that of bare soil.
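The fitted exponential models can be evaluated directly. A minimal R sketch, using only the exponents reported above (parameter uncertainties are ignored here), reproduces the quoted reduction factors:

```r
# Empirical erosion-reduction models: erosion relative to bare soil as a
# function of root length density (RLD, km m^-3), using the fitted exponents.
reduction <- function(rld, b) exp(b * rld)

rld <- c(25, 35, 45)
round(reduction(rld, -0.095), 2)  # wild type: 0.09 0.04 0.01
round(reduction(rld, -0.066), 2)  # wer myb23: 0.19 0.10 0.05
# cpc try plateaus for RLD > 19 km m^-3 at the fitted constant 0.268
```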
Fig. 4: Root hairs reduce soil erosion.
a Schematic diagram of water flume and soil box used to test the erodibility of bare soil and soil through which roots of wild type, hairless cpc try and hair overproducing wer myb23 plants were grown. Green shapes indicate positions of plants at a density of nine plants per soil box; the green shaded box shows in 2D the space occupied by soil containing Arabidopsis roots. Model dimensions define the root system shape parameters used in the mechanistic model (see Methods for parameter definitions). b Representative images of bare soil and soil–root blocks containing 22, 22 and 21 km m−3 of wild type, cpc try and wer myb23 roots, respectively (stills taken from Supplementary Movie 1). Scale bar = 5 cm. Upper panels show how approximately 150 L of water flowed over these blocks eroded sections of soil, highlighted by red shading in lower panels. c Empirical model describing erosion reduction as a function of root length density (RLD) for wild type (black), cpc try (red) and wer myb23 (blue) mutants grown in clay–loam soil. See Results for exponents of the empirical regression lines and goodness of fit for each plant genotype. For RLD > 19, cpc try data were modelled using a linear regression with constant term 0.268 ± 0.033. Dashed lines represent the 95% model error bounds computed by Monte Carlo simulation. Markers represent measured erosion reduction rates and corresponding root length densities (RLD, km m−3). d Output of mechanistic model illustrates either exponential or exponential crossing over to linear dependence of erosion reduction as a function of RLD for plant type. e Modelled root reinforcement (kPa) of clay–loam root-reinforced soils as a function of root length density (RLD, km m−3) for wild type, cpc try and wer myb23 plants. Regression models, represented as lines, for root reinforcement are 1.23 × LN(RLD + 1) (R2 = 0.70), 0.50 × LN(RLD + 1) (R2 = 0.40) and 0.86 × LN(RLD + 1) (R2 = 0.51), for wild type, cpc try and wer myb23, respectively. Dashed lines represent 95% model error bounds (Monte Carlo simulation). For c, d and e, n = 18 soil boxes containing wild-type roots, 17 (cpc try) and 27 (wer myb23).
The regression models fitted through the experimental erosion data were robust with relatively narrow 95% error bounds (simulated with Monte Carlo), especially for wild-type plants (Fig. 4c). Despite evidence that hairless roots provide limited erosion resistance, the overproduction of hairs by wer myb23 does not offer an erosion resistance advantage over wild-type root hair production. Wild-type roots confer erosion resistance at the upper end of the range observed and with lower variance.
Erosion rates from a modelled intensive summer rainstorm (peak rainfall intensity of 60 mm h−1) indicated that a 3 kPa soil cohesion increase due to the presence of Lolium perenne roots could reduce soil loss to 0.015 that of bare soil7. Therefore, we determined soil reinforcement values in the presence of roots with different root hair densities/phenotypes using the same method employed previously7 for each soil–root block (Fig. 4e). Using the regression parameters, we calculated that wild type and wer myb23 roots increased soil cohesion by approximately 3.7 and 2.6 kPa at 19 km m−3 RLD, respectively, whereas soil reinforced with hairless roots at a similar RLD only improved soil cohesion by 1.5 kPa (Fig. 4e). The modelling and experimental observations suggest that the soil reinforcement effect of root hairs has a quantitative impact on soil cohesion values that serve as input for erosion models and, hence, has relevance to soil erosion predictions at field and landscape scales.
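The reinforcement values quoted above follow from the logarithmic regressions reported for Fig. 4e; a one-line R check:

```r
# Root reinforcement (additional cohesion, kPa) as a logarithmic function of
# RLD (km m^-3), using the regression coefficients reported for Fig. 4e.
reinforcement <- function(rld, a) a * log(rld + 1)

round(reinforcement(19, c(1.23, 0.86, 0.50)), 1)
# wild type: 3.7, wer myb23: 2.6, cpc try: 1.5 (kPa at RLD = 19 km m^-3)
```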
Mechanistic model describing erosion response to root hairs
We developed a mechanistic model that simulates erosion response to changes in RLD and root hair expression (Fig. 4d). This model enabled us to explore previously unaddressed variations in soil cohesion at the scale of roots and root hairs. In the model, we used information on root system architecture24,25 to account for the heterogeneous cohesion of root-reinforced soil, where resistance to detachment is strongest along the primary root, decreases radially outwards, and depends on rooting depth (Fig. 4a). The effectiveness of the root hairs in enhancing soil cohesion is represented by the function γ, described by parameters M1 and M2 (Eq. (11) in Methods), where the maximum root hair enhancement (Mmax) is represented by M1/M2. Using the model to simulate our experimental observations, we deduced that the maximum effectiveness of root hair enhancement in the lines tested would be wild type > wer myb23 > cpc try (M1/M2 = 139, 134, 88, respectively). While cohesion enhancement in wer myb23 and wild type were comparable, the value for cpc try plants was clearly much lower (Table 2). These M1 and M2 parameter values reproduced the experimental observations with an accuracy of 3%. These results further support the hypothesis that root hairs enhance soil cohesion and thereby contribute to soil erosion resistance.
Table 2 Model parameters of the function describing the amount of reinforcement (i.e. additional cohesion) provided by root hairs in different Arabidopsis thaliana mutants.
This study is the first to our knowledge to show that root hairs improve the erosion reduction potential of plant roots. We show that root hairs on Arabidopsis plants contribute to increased plant attachment to a gel medium and uprooting resistance from soil and compost, and reduce water erosion rates to almost zero in our experimental system, whereas hairless roots showed no such effect even at high planting densities. While the lack of root hairs consistently reduced plant–soil cohesion compared to wild-type roots, root hair overproduction increased resistance to detachment from sterile gel and to uprooting from clay soil relative to wild type, and performed similarly to wild type in compost (Figs. 2c and 3d). Moreover, wer myb23 and wild-type roots reduced erosion at similar rates compared to bare soil (Fig. 4c, d). These variations in the measured effects of root hairs on soil erosion between genotypes with and without root hairs suggest that there are additional components of the root–soil interface that contribute to or have limited effects on soil cohesion.
There are contradictory reports in the literature of whether root hairs do or do not assist in substrate adhesion or soil penetration in different plant species26,27,28,29. However, by using Arabidopsis mutants specific for single root traits, we were able to determine the relative contributions the presence of root hairs make to root-substrate cohesion without confounding species-specific contributions. Further research will be required to characterise how aspects such as plant species and age, soil type, total root hair surface area, root hair density and root hair length specifically affect plant–substrate interactions.
A predominant view in the literature is that plant carbon (C) is converted by soil microorganisms into compounds that increase soil cohesion30 and that soil structure is important for soil C storage31. While mycorrhizal fungi release glomalin-related soil proteins and other exopolymers that affect soil aggregate stability32, Arabidopsis is not known to form mycorrhizal associations; however, there is evidence that root hairs can alter soil pore space and connectivity between these pores in the rhizosphere11. Indeed, we found that root hairs increase the adhesive strength of seedlings in a sterile root–gel system in the absence of microorganisms (Fig. 2), suggesting that root hairs alone account for substrate-adhesive properties.
Future work will explore the physical and biochemical aspects unique to root hairs that contribute to their soil–root binding abilities. In this respect, it is interesting to note that Akhtar et al.33 used a novel assay to identify polysaccharides important for soil cohesion, including chitosan, β-1,3-glucan, gum tragacanth, xanthan and xyloglucan. Similarly, Galloway et al.15 found that xyloglucan, a component secreted by a wide range of angiosperm roots, can increase soil particle aggregation. Building on these recent studies and our current results, we postulate three potential mechanisms by which plant root hairs might reinforce soil: (i) substrate components such as gel molecules or soil aggregates bind directly to root hair surfaces; (ii) root hairs release exudates that reinforce soil; and (iii) root hairs release exudates that are processed into material that reinforces soil by microbes. The approaches applied in this study can quantify relative contributions of other root traits to soil cohesion/erosion. The application of our findings and experimental approach will inform the selection or modification of root properties that could reduce soil erosion in agricultural, recreational and civil engineering contexts. Understanding how root hairs specifically affect plant–soil interactions improves our investigation of plant biology and has applications in soil and land preservation and maintenance under changing climate conditions.
Plants, soil and growth conditions
All Arabidopsis thaliana (L.) Heynh. mutants were in the Columbia (Col-0) background except cpc try, which was produced from a Col-0 × Wassilewskija cross and repeatedly backcrossed to Col-0. Plants were grown in controlled environment rooms at 20–22 °C, 60% humidity and a 16 h light cycle (light intensity, 120–145 μmol m−2 s−1). Plants for the centrifugal root–gel attachment assay were sown onto the surface of gel medium. Plants for uprooting assays were grown on sieved (7 mm) clay soil (42.1% clay, 38% silt, 19.9% sand), or a sieved (7 mm) compost/sand mix (3:1 Levington UK F2 compost and J Arthur Bowers UK horticultural silver sand). Plants for erosion assays were grown on sieved (7 mm) clay–loam soil (27% clay, 36% silt, 37% sand). Gravimetric soil moisture content prior to the erosion tests ranged from 26% to 29%. Soil composition was determined using a sedigraph with a hexametaphosphate pre-treatment and ultrasonic bath. The United States Department of Agriculture standard was used to define soil textural description34. All soil was frozen at −50 °C to limit microbes and insects before use.
Centrifugal gel-adhesion assay
Seeds were surface sterilised in 10% bleach, 0.05% Triton X-100 for 15 min, washed five times with sterile water and stratified at 4 °C in the dark for 48 h35. Ten sterile seeds were sown in two horizontal rows onto 90 mm Petri plates (Thermo Scientific RC2260) containing 1/2 Murashige and Skoog basal medium (Sigma M5519) with 1% (w/v) sucrose and 1% (w/v) agar (Sigma A1296), pH 5.7, and grown vertically for 5 days. Seedlings were spaced 1 cm apart and any seedlings touching each other were excluded during experimental reporting. Plates were placed inverted into a hanging basket centrifuge (Beckman Coulter Allegra X-30R Centrifuge) and subjected to 1-min incremental increases in centrifugal force of 720, 1018, 1247, 1440 and 1611 RPM (100, 200, 300, 400 and 500 × g). The proportion of seedlings that detached from the gel surface was determined between each speed. Data were collected for 87 wild type (Col-0), 87 wer myb23, 91 35S::RSL4, 94 rsl4-1 and 88 cpc try seedlings. We report the results of a single experiment, which are representative of at least two independent experiments.
Calculations and statistics for the gel adhesion assay
We determined the perpendicular plane of rotation of the hanging buckets within the enclosed centrifuge mathematically. The bucket is attached 70 mm from the axis of rotation, which also corresponds approximately to the surface of the gel in the plate, and is free to swing about the axis. The centre of the bucket and plate mass lies on the axis of the bucket at a distance l from its attachment point. We assigned m as the mass of the bucket and plate, ω as the angular velocity (in radians per second) and θ as the inclination of the bucket to the vertical. The centrifugal force acting on the bucket is at least 0.07mω2 N, and the gravitational force is mg N. Balancing moments about the attachment point gives:
$$0.07\,m\omega^2 l\,\sin\theta \, < \, mgl\,\cos\theta.$$
Thus, \(\tan \theta \, < \, g/(0.07\omega ^2)\). From the centrifuge documentation, ω is 720 √n rpm where n (i.e. the speed setting) is 1, 2, …, 9. We calculated that θ at the slowest rotation setting is less than 1.41° and, therefore, assumed that the bucket quickly swings out during centrifugation so that the Petri plates are orientated perpendicular to the plane of rotation. Hence, the seedlings experience a centrifugal force that can peel them away from the gel.
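This bound can be checked numerically; a minimal R sketch using only the values stated above (720 rpm at setting n = 1, 70 mm attachment radius):

```r
# Verify the bucket-inclination bound at the slowest speed setting (n = 1).
g     <- 9.81                            # gravitational acceleration (m s^-2)
omega <- 720 * sqrt(1) * 2 * pi / 60     # 720 rpm converted to rad s^-1
atan(g / (0.07 * omega^2)) * 180 / pi    # ~1.41 degrees, as stated
0.07 * omega^2 / g                       # centrifugal acceleration ~40.6 g at the gel surface
```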
The maximum force resisted by the seedlings was used as a measure of root–gel adhesion. The angular velocity (ω), the radius of rotation (i.e. the distance between the seedling and the axis of rotation, 70 mm) and the aerial tissue weight of each seedling were used to calculate the maximal centrifugal force (Fc (kg m s−2)) experienced by each seedling (mass, Ms (kg)) at each centrifugal speed:
$${\mathrm{Fc}} = Ms \times {\mathrm{radius}} \times \omega ^2.$$
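A minimal R sketch of Eq. (2); the 2 mg aerial mass used below is a hypothetical value for illustration only:

```r
# Centrifugal force resisted by a seedling (Eq. (2)): Fc = Ms * radius * omega^2.
fc <- function(mass_kg, rpm, radius = 0.07) {
  omega <- rpm * 2 * pi / 60        # rad s^-1
  mass_kg * radius * omega^2        # kg m s^-2 (Newtons)
}

speeds <- c(720, 1018, 1247, 1440, 1611)   # rpm settings used in the assay
round(fc(2e-6, speeds), 5)                 # forces for an assumed 2 mg aerial mass
```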
We applied a Cox hazard function regression model22 to statistically test for differences between the risk of detachment for each root hair mutant relative to wild type. We set up a priori contrasts and used the coxph function with exact treatment of ties within the survival package in R36. We censored seedlings that remained attached to the gel medium after the maximum centrifugal speed setting because we did not determine at what speed these seedlings would have detached from the gel.
For each regression model run, we report the P value of the Wald test (z) and the hazard ratio with the upper and lower bound confidence intervals. Since the hazard ratio is an exponential coefficient that compares the risk of seedling detachment between root hair lines relative to wild-type plants, the hazard ratio has been used as a measure of effect size37. Wild-type plants have a hazard ratio of one; root hair lines with a higher risk of detachment will have a hazard ratio above one, while lines with a lower risk of detachment will have a hazard ratio below one.
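A minimal sketch of this analysis in R is given below. The data frame is hypothetical, and the study's actual model also included covariates (plate, angular rotation, spin number) omitted here for brevity:

```r
library(survival)

# Hypothetical data: one row per seedling, with the force at which it detached
# (or the maximum force reached, for censored seedlings that never detached),
# an event indicator and the genotype, with wild type as the reference level.
d <- data.frame(
  force    = c(0.9, 1.4, 2.1, 2.3, 0.4, 0.6, 0.8, 1.1),  # kg m s^-2, illustrative
  detached = c(1, 1, 0, 0, 1, 1, 1, 1),                  # 0 = still attached (censored)
  genotype = factor(c(rep("Col-0", 4), rep("cpc try", 4)),
                    levels = c("Col-0", "cpc try"))
)

# Cox regression with exact treatment of ties, as described in the Methods.
fit <- coxph(Surv(force, detached) ~ genotype, data = d, ties = "exact")
summary(fit)  # reports the Wald z, P value and hazard ratio exp(coef) with 95% CI
```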
Plant uprooting from soil and compost
Individual plants from multiple transgenic lines were grown and uprooted at the same time as a control for run effects. Single, centrally placed plants were grown in 375 cm3 pots containing the same amount of soil. A polytetrafluoroethylene-coated (hydrophobic Tectane) aluminium washer was placed over the seedling within 3–4 days of germination. The plants were grown through the centre of the washer for 3–4 weeks until the plants had mature rosettes but were not yet reproductive. The rosette thus anchored the washer around the plant so that the cables of the tensile machine could be attached to the washer for uprooting force measurement. Pots were saturated overnight in 3 cm water and plants were uprooted from either a compost–sand mixture (n = 13 wild type, n = 16 wer myb23, n = 13 cpc try) or a clay soil (n = 17 all lines). Plants were pulled vertically from the soil using a tensile testing machine (Instron 3343 with a 10 Newton load cell 2519-201) at a constant speed of 5 mm min−1 (refs. 38,39,40,41). Force traces were analysed to obtain total energy expended (area under the curve), peak force (maximum force reached) and the magnitude of the incremental force drops. The Instron 10 Newton load cell is accurate to 0.25% above 0.05 kg m s−2, so mean force was calculated only above this threshold. Significance of the differences was also tested with thresholds of 0.1 and 0.035 kg m s−2, and the differences remained significant at P < 0.05.
After uprooting, plant material was recovered and RLD (km m−3) recorded. RLD is a root trait frequently used to estimate the erosion-reducing potential of plant species and select the most suitable species for controlling soil erosion processes42,43,44,45. To determine RLD, the soil and root complex was washed thoroughly over a 0.7 mm sieve before manually separating the roots from the soil. The dry weights of these roots were used to determine plant RLDs by converting the root masses into root lengths using specific root length (i.e. root length per unit mass) values for each genotype. To obtain root specific length values, at least 10 m of root per genotype from at least three representative plants were separated into single strands on a high contrast background, photographed, measured in ImageJ46, dried and weighed. The root specific length values (m mg−1 root ± 1 standard deviation) for wild type, cpc try and wer myb23 were 0.63 ± 0.04, 0.43 ± 0.07 and 1.02 ± 0.31 m mg−1 in clay soil and 0.51 ± 0.03, 0.39 ± 0.02 and 0.53 ± 0.02 m mg−1 in compost, respectively.
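The mass-to-RLD conversion described above can be written as a short R helper. The 20 mg root mass below is a hypothetical value; 375 cm3 is the pot volume and 0.63 m mg−1 the wild-type specific root length in clay reported above:

```r
# Convert recovered dry root mass to root length density (RLD, km m^-3) using
# the genotype-specific root length (SRL, m mg^-1).
rld_from_mass <- function(root_mass_mg, srl_m_per_mg, soil_volume_m3) {
  root_length_km <- root_mass_mg * srl_m_per_mg / 1000  # m -> km
  root_length_km / soil_volume_m3
}

rld_from_mass(20, 0.63, 375e-6)  # e.g. 20 mg of wild-type roots in clay: ~33.6 km m^-3
```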
Uprooting data were analysed with a linear modelling framework that used the lm() function in 'R' (3.0.3) to investigate the variable-under-investigation/RLD relationship and how it is affected by the mutant background (Supplementary Table 1). Residuals were normal.
Calculations and statistics for plant uprooting
Since the root–soil system responds to the uprooting force by a combination of deformation and damage, the peak force and total energy expended are macroscopic measures of root–soil cohesion. Let f (kg m s−2) be the uprooting force corresponding to a deformation x (m). Allowing for damage we may write generally that:
$$f\left( x \right) = k\left( x \right)x$$
for 0 < x < xp, where the function k(x) > 0 is a macroscopic elastic modulus and xp is the deformation corresponding to the peak force.
At the peak force, \(f( {x_p}) = k(x_p)x_p\), the system has sustained critical damage and the force decreases with subsequent deformation:
$$f\left( x \right) = k(x_p)x_p + h(x - x_p)(x - x_p)$$
for xp < x < xu where h(x) < 0 and xu is the deformation corresponding to uprooting.
At uprooting, the force decreases to 0 and
$$k( {x_p} )x_p + h( {x_u - x_p} )( {x_u - x_p} ) = 0$$
and the total energy expended in uprooting is given by \(E = {\int}_0^{x_u} f(x)\,{\mathrm{d}}x\).
The (possibly non-differentiable) functions k(x) and h(x) can vary from plant to plant and determine the mechanical properties of the system. We chose the peak force f(xp) and total energy expended, E (kg m2 s−2), as measurements of mechanical resistance, as they are both functions of k and h and allow for the comparison of the mechanical resistance of the different mutants.
From our measurements, we found statistically significant differences between the mutants for f(xp) and E, which imply statistically significant differences for the functions k and h between mutants.
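A minimal R sketch of how these macroscopic measures can be extracted from a load–displacement trace (the trace here is synthetic, for illustration; the study's force-drop detection was implemented in Python, see Code availability):

```r
# Macroscopic uprooting metrics from a load-displacement trace.
x <- seq(0, 0.02, by = 1e-4)      # displacement (m)
f <- 200 * x * (x < 0.015)        # synthetic load trace (kg m s^-2): ramp, then failure

peak_force   <- max(f)                                           # f(x_p)
total_energy <- sum(diff(x) * (head(f, -1) + tail(f, -1)) / 2)   # trapezoidal integral of f, 0..x_u (J)

d           <- diff(f)
force_drops <- -d[d < -0.05]      # vertical drops above the 0.05 kg m s^-2 noise floor
```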
Root reinforced soil resistance against concentrated flow erosion
Plants were grown in sieved clay–loam soil (dry soil bulk density 1.07–1.27 g cm−3) in boxes with inner dimensions 250 × 250 × 150 mm fitted with a weed suppression mat. Different plant densities of 9, 16, 32, 49, 81 and 100 plants per box (i.e. a plant density range of 144–1600 plants m−2) were established, which corresponded to shoot densities between 0.15 and 2.37, 0.37 and 1.72, and 0.57 and 2.66 kg m−2 for wild type, cpc try and wer myb23, respectively. All boxes were tested for erosion resistance at the same developmental stage after about 5 weeks of growth (i.e. shortly after bolting). Data were collected from 18 boxes with wild-type roots, 17 with cpc try roots and 27 with wer myb23 roots.
Immediately prior to erosion, the boxes were saturated by capillary rise, photographed, and the aerial tissue and weed mats were removed. Gravimetric soil moisture content before erosion tests was between 0.26 and 0.29 g g−1. Erosion assays were conducted in a water flume with a 28° slope similar to that used in previous studies47,48. The soil surface was exposed to 1 l s−1 of running water at corresponding mean bottom flow shear stresses between 13 and 24 Pa. The run-off water and eroded soil were captured for 5 s at 10 s intervals for 2 min. Flow velocity was measured using the dye tracing technique49. A bare soil control experiment was prepared at the same time and in the same way as the planted samples. Bare soil boxes were placed in growth rooms for the same period of time with a weed control sheet on the surface to prevent algae and moss growth and watered along with the planted boxes. Sediment concentration was used to calculate sediment detachment rates (kg m−2 s−1) for each collection interval. Soil detachment rates were averaged per sample and normalised using the extrapolated soil detachment value for a root density of zero.
Immediately after each experimental run, roots were separated from the soil by hand washing using an adapted version of the method previously described50. The recovered roots were washed and weighed so that RLD (km m−3) could be calculated using specific root length (root length/unit mass) measured from 10 wild type, cpc try and wer myb23 root samples, which were 0.44 ± 0.05, 0.52 ± 0.10 and 0.58 ± 0.15 m mg−1, respectively (Fig. 4d).
Calculations and statistics for resistance to erosion
Nonlinear regression models with functional forms that corresponded to exponential decay to a constant value were fitted through the experimental data describing the erosion-reducing potential of root-permeated soils as a function of the root variable RLD. In order to calculate the error on the modelled curves due to parameter uncertainty, Monte Carlo simulations were performed by perturbing the parameter estimates 10,000 times from a set of parameter values randomly chosen from a normal probability distribution with mean and standard deviation equal to the estimated value and its standard error, respectively. Hence, the uncertainty bounds on the modelled curves indicate the 95% confidence interval of the fitted functions. Where the modelled curves do not fall within another curve's uncertainty bound, they are significantly different at P < 0.05.
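A minimal R sketch of the Monte Carlo error-bound procedure, applied to the wild-type exponent reported in the Results (−0.095 ± 0.007); the nonlinear fitting step itself is assumed done:

```r
# Monte Carlo error bounds on a fitted erosion-reduction curve: perturb the
# fitted exponent 10,000 times using its standard error, then take the 2.5%
# and 97.5% quantiles of the predicted curves.
b_hat <- -0.095   # fitted exponent (wild type)
b_se  <- 0.007    # its standard error

rld   <- seq(0, 19, by = 0.5)
b_sim <- rnorm(10000, mean = b_hat, sd = b_se)
pred  <- sapply(b_sim, function(b) exp(b * rld))   # rows: RLD grid, cols: simulations

bounds <- apply(pred, 1, quantile, probs = c(0.025, 0.975))
bounds[, ncol(bounds)]   # 95% bounds on the modelled erosion reduction at RLD = 19
```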
Derivation of root reinforcement
Root reinforcement was defined as the difference between bare soil cohesion and the cohesion of soil containing roots. Soil cohesion values for bare and root-containing soils were established by back calculation of transport capacity efficiencies and corresponding soil cohesion values. Measured soil detachment rates were set equal to modelled soil detachment rates using the EUROSEM51 equation for modelling detachment by runoff. Therefore, the only unknown parameter is the flow detachment efficiency coefficient, β, derived from the measured soil detachment rate (ASD, g cm−2 s−1), flow and sediment properties:
$$\beta = \frac{\mathrm{ASD}}{\sqrt{\frac{4d_{50}\left( \rho_{\mathrm{s}} - \rho_{\mathrm{w}} \right)g}{3C_{\mathrm{D}}\rho_{\mathrm{w}}}}\;C_{\mathrm{TC}}},$$
where ρs is the density of the detached sediments (g cm−3), taken as 2.65 g cm−3 52, ρw (g cm−3) is the density of water, g is gravitational acceleration, d50 (μm) is the median grain-size diameter (16 ± 1.14 μm for our clay–loam soil), CD is the drag coefficient calculated from a formula using the grain Reynolds number, which is calculated from the flow characteristics of the experimental runs and the average grain size of the soil, and CTC is the sediment concentration at the flow's transport capacity. The square-root term is the particle settling velocity.
The value of soil cohesion that corresponds to this β value was calculated using the following empirically derived equation53,54:
$$C = ( - 1/0.85)\ln (\beta /0.79).$$
The full method for back-calculating soil cohesion from measured soil detachment and flow characteristics was as described previously7.
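A minimal R sketch of the back-calculation in cgs units. The detachment rate, drag coefficient and transport-capacity concentration below are hypothetical values for illustration only; in the study they were derived from the measured flow and sediment properties:

```r
# Back-calculate soil cohesion from a measured detachment rate (Eqs. (7)-(8)).
rho_s <- 2.65; rho_w <- 1.0   # sediment and water density (g cm^-3)
g     <- 981                  # gravitational acceleration (cm s^-2)
d50   <- 16e-4                # median grain diameter (cm); 16 um
c_d   <- 5e4                  # drag coefficient at very low grain Reynolds number, assumed
asd   <- 2e-4                 # measured soil detachment rate (g cm^-2 s^-1), assumed
c_tc  <- 0.15                 # transport-capacity sediment concentration (g cm^-3), assumed

v_s      <- sqrt(4 * d50 * (rho_s - rho_w) * g / (3 * c_d * rho_w))  # settling velocity (cm s^-1)
beta     <- asd / (v_s * c_tc)                # flow detachment efficiency (dimensionless)
cohesion <- (-1 / 0.85) * log(beta / 0.79)    # soil cohesion (kPa), Eq. (8); ~1.9 kPa here
```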
Mechanistic modelling
For each mutant, the enhancement of soil cohesion with increasing RLD was quantified. We first determined the volume occupied by the Arabidopsis root system. From the flow rate and velocity, we deduced the shear force acting on the soil surface. This force is resisted by the cohesion of root-reinforced soil, but not in a homogeneous manner because resistance is strongest along the primary root, decreases radially outwards, and depends on depth; we use knowledge of root architecture to model this (Fig. 4a). Erosion occurs at regions where the shear force exceeds the local soil cohesion. The model integrates fluid flow, root architecture, soil mechanics and debris entrainment as described below and in the literature16,55,56,57. The erosion depth increases over the course of the experiment and reaches a maximum value R.
The water volume flow rate (Q, m3 s−1) and surface velocity (V, m s−1) were experimentally measured. Since the flow profile is assumed to be parabolic, the shear stress acting on the soil surface can be calculated as
$$\tau = (3gQd_{50}^2\sin \left( {28^\circ } \right))/(2VkW),$$
where W = 360 mm is the width of the flume and k is the bare soil permeability, for which we used the representative value of 0.2273 μm2 (ref. 58).
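A minimal R sketch of this calculation using the experimental values quoted above. The surface velocity V is an assumed mid-range value, since measured velocities varied between runs:

```r
# Bottom shear stress on the soil surface from the parabolic-flow formula.
g   <- 9.81          # gravitational acceleration (m s^-2)
Q   <- 1e-3          # flow rate: 1 l s^-1 in m^3 s^-1
d50 <- 16e-6         # median grain diameter (m)
V   <- 1.0           # surface flow velocity (m s^-1), assumed
k   <- 0.2273e-12    # bare soil permeability (m^2), i.e. 0.2273 um^2
W   <- 0.36          # flume width (m)

tau <- (3 * g * Q * d50^2 * sin(28 * pi / 180)) / (2 * V * k * W)
tau                  # ~22 Pa, consistent with the reported 13-24 Pa range
```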
Soil-volume occupied by roots
The typical volume occupied by an Arabidopsis root system after 5 weeks of growth is described by a kite-shaped structure with D, W and d as parameters describing the size of the root system (Fig. 4a). D and W describe the extent of the root volume in the vertical and horizontal directions, respectively, and d indicates the depth at which the root system has maximum lateral spread. For our experimental design, D is 100 mm, d is 20–30 mm and W is 100 mm. R and r are erosion parameters. The maximum erosion depth is given by R, and the depth of the root system within R of the surface diagonally between plant stems is r. Hence, r is no larger than R. Here R = 50 mm and r varies from 23 to 42 mm depending on the number of plants. r was derived using eroded mass, bulk density and box dimensions. R was a set value: each experiment was stopped when erosion depth reached 50 mm59.
Soil mechanics
Soil is modelled as an isotropic nonlinear elastic material whose limiting behaviour tends to that of a brittle material described by Mohr–Coulomb theory. We call this an isotropic 'elastic-Coulomb' material; it is eroded when the shear stress (τ) reaches a critical value determined by the Coulomb criterion
$$\tau = - \mu N + c,$$
where µ is a friction coefficient, c is the cohesion of soil containing plant roots and N is the normal stress. The cohesion of soil containing plant roots depends on RLD. The maximum cohesion, cMax, that occurs at the tap root is
$$c_{{\mathrm{Max}}} = c_{{\mathrm{Bare}}}(1 + \gamma \left( {RLD_{\rm{T}}} \right)R),$$
(10a)
where cBare is the cohesion of bare soil and γ(RLDT), in mm−1, is the increase in depth-integrated soil cohesion due to roots, expressed as a function of the true root length density RLDT, i.e. the total root length divided by the volume of the region occupied by the roots. Similarly, the minimum soil cohesion occurs furthermost from the tap root and is
$$c_{{\mathrm{Min}}} = c_{{\mathrm{Bare}}}(1 + \gamma \left( {{\mathrm{{RLD}}}_{\rm{T}}} \right)r).$$
(10b)
In Eq. (10) r and R represent the erosion parameters defined in the previous paragraph. The function γ(x) has two properties: it is approximately linear for small values of x and it saturates to a constant value at large x (i.e., there is a limit to the enhancement root hairs have on soil cohesion). We chose a simple function with these properties:
$$\gamma\left( x \right) = M_{\mathrm{max}}\tanh(M_1x/M_{\mathrm{max}}).$$
The maximum amount of root hair enhancement is given by Mmax, since tanh takes values no larger than 1. For very low root length densities x, \(\gamma (x) \approx M_1x\); hence the initial slope (i.e. the rate of enhancement at low RLDT) is given by M1, because tanh(x) ≈ x for small x. For analysis it is useful to define the new parameter M2 = M1/Mmax, which is the ratio of the initial enhancement rate to the maximum enhancement. M1 and M2 are thus parameters describing the reinforcement plant roots provide and depend on the micro-scale properties of plant roots. To model the regular periodic array of plants, the cohesion c in Eq. (9) is allowed to vary spatially in a sinusoidal manner, taking values between cMin and cMax. Therefore, we obtain a mechanical model for erosion as a function of RLDT (Fig. 4c, d). Since we have controlled for the root architecture (factor RLDT in Eq. (10) and function γ in Eq. (11)), M1 and M2 quantify the amount of cohesion enhancement by micro-scale root traits and allow us to compare the effectiveness of different mutants in controlling erosion.
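A minimal R sketch of the enhancement function and its two limiting behaviours. The M1 and M2 values below are placeholders, not fitted values; the fitted ratios M1/M2 were 139 (wild type), 134 (wer myb23) and 88 (cpc try):

```r
# Root-hair cohesion enhancement: gamma(x) = Mmax * tanh(M1 * x / Mmax),
# rewritten with Mmax = M1 / M2 (Eq. (11)).
gamma_fun <- function(x, M1, M2) (M1 / M2) * tanh(M2 * x)

M1 <- 2; M2 <- 0.05               # placeholder parameters for illustration
gamma_fun(1e-4, M1, M2) / 1e-4    # ~M1: initial slope at low RLD_T
gamma_fun(1e4, M1, M2)            # ~M1/M2 = 40: saturation at high RLD_T
```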
Analysis of root morphology for root phenotyping
Surface sterilised seeds were stratified at 4 °C for 48 h then grown on square 12 cm plates containing nutrient medium60 with 1% sucrose, 1% Phytagel (Sigma Aldrich), pH 5.7, and sealed with Parafilm (Bemis, NA). Plates were incubated vertically for 10–11 days, when the root tips of wild-type plants reached within 1 cm of the bottom of the plate. For each genotype, 20 single-seeded plates and 5 plates with 5 seeds were used to measure and compare the growth of single and grouped plants. Each set was compared with a wild-type control. 'Root depth' was the vertical distance the root tip had progressed down the plate. Root hair counts were taken using dark field lighting on a Leica MZ FLIII microscope. A Nikon D50 camera with a polarising filter and SPOT image capture software (SPOTIMAGING) was used to capture microscope images. Image analysis was conducted with a combination of ImageJ42 and RootNav software61. Plants for X-raying were grown in 200 μl pipette tips filled with sieved clay soil for 5–7 days. Tips were scanned with a Nikon XT H 225 ST CT scanner (settings: energy 90 kV, current 60 μA, exposure 1 s, 5 frames averaged per projection, voxel size = 0.00278056).
Statistics and reproducibility
For all experiments, biological replicates for each root hair line were randomly selected from pools of seed containing genetically identical individuals for the trait of interest.
Ten seedlings of each line were sown onto a single Petri plate containing 30 ml gel medium. For an individual experiment, there was a replicate size of over n = 70 for each line. To account for potential heterogeneity in gel thickness and composition between Petri plates, angular rotation that each seedling experienced and spin number were incorporated as covariates in our analysis. The Petri plates were oriented vertically at approximately 80°, in stacks of five in a controlled growth room using a Latin Square design. Statistical analysis was performed in R with a Cox hazard function regression that included all covariates listed above in our statistical model and were removed only if they had no significant effect. The reported effect size is the hazard ratio, which includes lower and upper bound confidence intervals. P values were calculated from the Wald Statistic (z) with a significance level of 0.05. This study was conducted blind. The results for each line presented in this paper are representative of at least two independent experiments, although we have observed similar results in at least five independent trials, each run by different lab members.
Pots containing single plants were grown in trays containing six pots, organised in a Latin square design in a temperature and light-controlled growth room, and were rotated every 2 days to prevent edge effects. The tensile testing machine used to uproot plants was tested prior to conducting an experiment. The night before an experiment, pots were placed in 3 cm water to allow saturation to ensure a consistent soil moisture level. Uprooting experiments were conducted blind to genotype. Between 13 and 17 individual plants were uprooted for each genotype. Pairwise comparisons of peak force, work done and force drop magnitude were conducted on wild-type plants relative to root hair mutants using the lm() function in R. The number of samples tested per line was sufficient for linear regression and pairwise comparisons of regression parameters for all genotypes. A significance level of 0.05 was used for all regression parameters.
We grew 9, 16, 32, 49, 81 or 100 plants per box in a sieved clay-loam soil medium. For each experimental replicate, plants from all three lines were grown simultaneously in a controlled environment growth room to keep growing conditions as homogeneous as possible. On each experimental day at least five boxes were tested from at least two different lines. Different planting densities were used to obtain variation in root density. In total, 18 (wild type), 17 (cpc try) and 27 (wer myb23) soil boxes were tested. The number of soil boxes tested per line was sufficient to conduct nonlinear regression models and the comparison of regression parameters for different lines. Statistical analysis was performed in IBM SPSS Statistics 25 using the nonlinear regression function and in MATLAB R2014a (MathWorks, Natick, Massachusetts, USA) for computing the error bounds on the modelled regressions. To compute the regression error bounds, we perturbed the parameter estimates 10,000 times from a set of parameter values randomly chosen from a normal probability distribution with mean and standard deviation equal to the estimated value and its standard error, respectively. Therefore, when the modelled curves do not fall within another curve's uncertainty bound, this indicates a significant difference between lines at P < 0.05.
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
All figures have associated raw data. The data that support the findings of this study are available from the University of Bristol's research data repository, data.bris, at https://doi.org/10.5523/bris.1vca1omqff8bj2a7rpkbcgxc7y. Data collection and data analysis codes are available upon request from the authors. There are no restrictions on data availability.
Code availability
The following codes are available from the authors upon request: To detect vertical force drops during uprooting in Python 2.7.9 (Python Software Foundation), to analyse uprooting data using 'R' 3.0.3 (R Foundation), for Monte Carlo analysis and the mechanistic erosion model in MATLAB R2014a (MathWorks, Natick, Massachusetts, USA).
FAO. Status of the World's Soil Resources: Main Report. 650 (2015).
Pimentel, D. Soil erosion: a food and environmental threat. Environ. Dev. Sustain. 8, 119–137 (2006).
De Baets, S., Poesen, J., Knapen, A. & Galindo, P. Impact of root architecture on the erosion-reducing potential of roots during concentrated flow. Earth Surf. Process. Landf. 32, 1323–1345 (2007).
Stokes, A. et al. Desirable plant root traits for protecting natural and engineered slopes against landslides. Plant Soil 324, 1–30 (2009). http://publications.cirad.fr/une_notice.php?dk=551905 (Accessed 12th Sep. 2019).
Ghestem, M. et al. A framework for identifying plant species to be used as 'ecological engineers' for fixing soil on unstable slopes. PLoS ONE 9, e95876 (2014).
Ola, A., Dodd, I. C. & Quinton, J. N. Can we manipulate root system architecture to control soil erosion? Soil 1, 603–612 (2015).
De Baets, S., Torri, D., Poesen, J., Salvador, M. P. & Meersmans, J. Modelling increased soil cohesion due to roots with EUROSEM. Earth Surf. Process. Landf. 33, 1948–1963 (2008).
Gould, I. J., Quinton, J. N., Weigelt, A., Deyn, G. B. D. & Bardgett, R. D. Plant diversity and root traits benefit physical properties key to soil function in grasslands. Ecol. Lett. 19, 1140–1149 (2016).
Rillig, M. C. Arbuscular mycorrhizae, glomalin, and soil aggregation. Can. J. Soil. Sci. 84, 355–363 (2004).
Rillig, M. C. et al. Material derived from hydrothermal carbonization: effects on plant growth and arbuscular mycorrhiza. Appl. Soil Ecol. 45, 238–242 (2010).
Koebernick, N. et al. High-resolution synchrotron imaging shows that root hairs influence rhizosphere soil structure formation. N. Phytol. 216, 124–135 (2017).
Haling, R. E. et al. Root hairs improve root penetration, root-soil contact, and phosphorus acquisition in soils of different strength. J. Exp. Bot. 64, 3711–3721 (2013).
Carminati, A. et al. Root hairs enable high transpiration rates in drying soils. N. Phytol. 216, 771–781 (2017).
Brown, L. K., George, T. S., Neugebauer, K. & White, P. J. The rhizosheath—a potential trait for future agricultural sustainability occurs in orders throughout the angiosperms. Plant Soil 418, 115–128 (2017).
Galloway, A. F. et al. Xyloglucan is released by plants and promotes soil particle aggregation. N. Phytol. 217, 1128–1136 (2018).
Nedderman, R. Statics and Kinematics Granular Materials (Cambridge University Press, 2005).
Schiefelbein, J. W. & Somerville, C. Genetic control of root hair development in Arabidopsis thaliana. Plant Cell 2, 235–243 (1990).
Yi, K., Menand, B., Bell, E. & Dolan, L. A basic helix-loop-helix transcription factor controls cell growth and size in root hairs. Nat. Genet. 42, 264–267 (2010).
Bruex, A. et al. A gene regulatory network for root epidermis cell differentiation in Arabidopsis. PLoS Genet. 8, e1002446 (2012).
Jones, A. R. et al. Auxin transport through non-hair cells sustains root-hair development. Nat. Cell Biol. 11, 78–84 (2009).
Schellmann, S. et al. TRIPTYCHON and CAPRICE mediate lateral inhibition during trichome and root hair patterning in Arabidopsis. EMBO J. 21, 5036–5046 (2002).
Prentice, R. L. & Kalbfleisch, J. D. Mixed discrete and continuous Cox regression model. Lifetime Data Anal. 9, 195–210 (2003).
De Baets, S., Poesen, J., Meersmans, J. & Serlet, L. Cover crops and their erosion-reducing effects during concentrated flow erosion. CATENA 85, 237–244 (2011).
Satbhai, S. B., Ristova, D. & Busch, W. Underground tuning: quantitative regulation of root growth. J. Exp. Bot. 66, 1099–1112 (2015).
Rellán-Álvarez, R. et al. GLO-Roots: an imaging platform enabling multidimensional characterization of soil-grown root systems. eLife 4, https://doi.org/10.7554/eLife.07597 (2015).
Bailey, P. H. J., Currey, J. D. & Fitter, A. H. The role of root system architecture and root hairs in promoting anchorage against uprooting forces in Allium cepa and root mutants of Arabidopsis thaliana. J. Exp. Bot. 53, 333–340 (2002).
Bengough, A. G., Loades, K. & McKenzie, B. M. Root hairs aid soil penetration by anchoring the root surface to pore walls. J. Exp. Bot. 67, 1071–1078 (2016).
Moreno-Espíndola, I. P., Rivera-Becerril, F., de Jesús Ferrara-Guerrero, M. & De León-González, F. Role of root-hairs and hyphae in adhesion of sand particles. Soil Biol. Biochem. 39, 2520–2526 (2007).
Melzer, B. et al. The attachment strategy of English ivy: a complex mechanism acting on several hierarchical levels. J. R. Soc. Interface 7, 1383–1389 (2010).
Naveed, M. et al. Plant exudates may stabilize or weaken soil depending on species, origin and time. Eur. J. Soil Sci. 68, 806–816 (2017).
Six, J. & Paustian, K. Aggregate-associated soil organic matter as an ecosystem property and a measurement tool. Soil Biol. Biochem. 68, A4–A9 (2014).
Rillig, M. C. & Mummey, D. L. Mycorrhizas and soil structure. N. Phytol. 171, 41–53 (2006).
Akhtar, J., Galloway, A. F., Nikolopoulos, G., Field, K. J. & Knox, P. A quantitative method for the high throughput screening for the soil adhesion properties of plant and microbial polysaccharides and exudates. Plant Soil 428, 57–65 (2018).
Eswaran, H., Ahrens, R. & Rice, T. J. Soil Classification: A Global Desk Reference (Taylor and Francis Group, 2002).
Jones, M. A. et al. The Arabidopsis Rop2 GTPase is a positive regulator of both root hair initiation and tip growth. Plant Cell 14, 763–776 (2002).
R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, 2014).
Cox, D. R. & Oaks, D. Analysis of Survival Data. CRC Press. https://www.crcpress.com/Analysis-of-Survival-Data/Cox-Oakes/p/book/9780412244902 (Accessed 12th Sep. 2019).
Matyjaszkiewicz, A. Uprooting Plants (University of Bristol, 2011).
Ennos, A. R. The mechanics of anchorage in seedlings of sunflower, Helianthus annuus L. N. Phytologist 113, 185–192 (1989).
Fogelberg & Gustavsson. Resistance against uprooting in carrots (Daucus carota) and annual weeds: a basis for selective mechanical weed control. Weed Res. 38, 183–190 (1998).
Toukura, Y., Devee, E. & Hongo, A. Uprooting and shearing resistances in the seedlings of four weedy species. Weed Biol. Manag. 6, 35–43 (2006).
De Baets, S. et al. Methodological framework to select plant species for controlling rill and gully erosion: application to a Mediterranean ecosystem. Earth Surf. Process. Landf. 34, 1374–1392 (2009).
Pohl, M., Alig, D., Körner, C. & Rixen, C. Higher plant diversity enhances soil stability in disturbed alpine ecosystems. Plant Soil 324, 91–102 (2009).
Burylo, M., Rey, F., Mathys, N. & Dutoit, T. Plant root traits affecting the resistance of soils to concentrated flow erosion. Earth Surf. Process. Landf. 37, 1463–1470 (2012).
Vannoppen, W., Poesen, J., Peeters, P., De Baets, S. & Vandevoorde, B. Root properties of vegetation communities and their impact on the erosion resistance of river dikes. Earth Surf. Process. Landf. 41, 2038–2046 (2016).
Knapen, A., Posen, J., Govers, G. & De Baets, S. The effect of conservation tillage on runoff erosivity and soil erodibility during concentrated flow. Hydrol. Process. 22, 1497–1508 (2008).
Smets, T., Poesen, J., Langhans, C., Knapen, A. & Fullen, M. A. Concentrated flow erosion rates reduced through biological geotextiles. Earth Surf. Process. Landf. 34, 493–502 (2009).
Giménez, R. & Govers, G. Flow detachment by concentrated flow on smooth and irregular beds. Soil Sci. Soc. Am. J. 66, 1475–1483 (2002).
Schuurmann, J. & Goedewaagen, M. Methods for the Examination of Root Systems and Roots (Centre for Agricultural Publication and Documentation, 1971).
Morgan, R. P. C. et al. The European Soil Erosion Model (EUROSEM): a dynamic approach for predicting sediment transport from fields and small catchments. Earth Surf. Process. Landf. 23, 527–544 (1998).
Blake, G. in Encyclopedia of Soil Science (ed. Chesworth, W.) (Springer, 2008).
Govers, G. et al. A long flume study of the dynamic factors affecting the resistance of a loamy soil to concentrated flow erosion. Earth Surf. Process. Landf. 15, 313–328 (1990).
Govers, G. Time-dependency of runoff velocity and erosion the effect of the initial soil moisture profile. Earth Surf. Process. Landf. 16, 713–729 (1991).
Wittmer, J. P., Claudin, P., Cates, M. E. & Bouchaud, J.-P. An explanation for the central stress minimum in sand piles. Nature 382, 336–338 (1996).
Iverson, R. M. The physics of debris flows. Rev. Geophys. 35, 245–296 (1997).
Liu, A. J. & Nagel, S. R. The jamming transition and the marginally jammed solid. Annu. Rev. Condens. Matter Phys. 1, 347–369 (2010).
Baer, J. Dynamics of Fluids in Porous Media. 1 (American Elsevier Publishing Company, 1988).
Higgins, B. Root-Soil Reinforcement and Its Effect on Erosion: A Theoretical and Computational Investigation (University of Bristol, 2017).
Wymer, C. L., Bibikova, T. N. & Gilroy, S. Cytoplasmic free calcium distributions during the development of root hairs of Arabidopsis thaliana. Plant J. 12, 427–439 (1997).
Pound, M. P. et al. RootNav: navigating images of complex root architectures. Plant Physiol. 162, 1802–1814 (2013).
April 2019, 39(4): 2203-2232. doi: 10.3934/dcds.2019093
NLS bifurcations on the bowtie combinatorial graph and the dumbbell metric graph
Roy H. Goodman
Department of Mathematical Sciences, New Jersey Institute of Technology, University Heights, Newark, NJ 07102, USA
Received: June 2018. Revised: September 2018. Published: January 2019.
We consider the bifurcations of standing wave solutions to the nonlinear Schrödinger equation (NLS) posed on a quantum graph consisting of two loops connected by a single edge, the so-called dumbbell, recently studied in [27]. The authors of that study found that the ground state undergoes two bifurcations: first a symmetry-breaking bifurcation, and then a second one that they call a symmetry-preserving bifurcation. We clarify the type of the symmetry-preserving bifurcation, showing it to be transcritical. We then reduce the question, and show that the phenomena described in that paper can be reproduced in a simple discrete self-trapping equation on a combinatorial graph of bowtie shape. This allows for a complete analysis by parameterizing the full solution space. We then expand the question, and describe the bifurcations of all the standing waves of this system, which can be classified into three families, and of which there exists a countably infinite set.
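For orientation, and with signs and scalings in one common convention (the paper's own normalization may differ), the cubic NLS on each edge of the metric graph and its standing-wave reduction read

$$ i\,\partial_t \Psi = -\partial_x^2 \Psi - 2|\Psi|^2\Psi, \qquad \Psi(x,t) = e^{-i\Lambda t}\,\Phi(x) \;\Longrightarrow\; -\Phi'' - 2|\Phi|^2\Phi = \Lambda\,\Phi, $$

together with continuity and Kirchhoff (zero net flux) matching conditions at the vertices; the bifurcation diagrams described below track branches of solutions $ \Phi $ as the frequency parameter $ \Lambda $ varies.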
Keywords: Quantum graphs, bifurcation, nonlinear Schrödinger equation, standing wave, reduced models.
Mathematics Subject Classification: Primary: 35R02; Secondary: 35B32.
Citation: Roy H. Goodman. NLS bifurcations on the bowtie combinatorial graph and the dumbbell metric graph. Discrete & Continuous Dynamical Systems, 2019, 39 (4) : 2203-2232. doi: 10.3934/dcds.2019093
[1] R. Adami, C. Cacciapuoti, D. Finco and D. Noja, Stationary states of NLS on star graphs, EPL–Europhys. Lett., 100 (2012). http://iopscience.iop.org/article/10.1209/0295-5075/100/10003/meta. doi: 10.1209/0295-5075/100/10003.
[2] R. Adami, E. Serra and P. Tilli, NLS ground states on graphs, Calc. Var., 54 (2014), 743-761. doi: 10.1007/s00526-014-0804-z.
[3] R. Adami, E. Serra and P. Tilli, Lack of ground state for NLSE on bridge-type graphs, in Mathematical Technology of Networks (ed. D. Mugnolo), vol. 128 of Springer Proc. in Math. and Stat., Springer, 2015, 1–11. doi: 10.1007/978-3-319-16619-3_1.
[4] R. Adami, E. Serra and P. Tilli, Negative Energy Ground States for the $L^2$-Critical NLSE on Metric Graphs, Commun. Math. Phys., 352 (2017), 387-406. doi: 10.1007/s00220-016-2797-2.
[5] R. Adami, E. Serra and P. Tilli, Threshold phenomena and existence results for NLS ground states on metric graphs, J. Funct. Anal., 271 (2016), 201-223. doi: 10.1016/j.jfa.2016.04.004.
[6] R. Adami, E. Serra and P. Tilli, Nonlinear dynamics on branched structures and networks, Riv. Math. Univ. Parma (N.S.), 8 (2017), 109-159.
[7] G. Berkolaiko and P. Kuchment, Introduction to Quantum Graphs, Mathematical Surveys and Monographs, Amer. Math. Soc., 2013.
[8] G. Berkolaiko, An elementary introduction to quantum graphs, in Geometric and Computational Spectral Theory, vol. 700 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2017, 41–72. doi: 10.1090/conm/700/14182.
[9] G. Berkolaiko, Y. Latushkin and S. Sukhtaiev, Limits of quantum graph operators with shrinking edges, 2018. https://arXiv.org/abs/1806.00561.
[10] J. Bolte and J. Kerner, Many-particle quantum graphs and Bose-Einstein condensation, J. Math. Phys., 55 (2014), 061901, 16pp. doi: 10.1063/1.4879497.
[11] C. Cacciapuoti, D. Finco and D. Noja, Topology-induced bifurcations for the nonlinear Schrödinger equation on the tadpole graph, Phys. Rev. E, 91 (2015), 013206, 8pp. doi: 10.1103/PhysRevE.91.013206.
[12] C. Cacciapuoti, D. Finco and D. Noja, Ground state and orbital stability for the NLS equation on a general starlike graph with potentials, Nonlinearity, 30 (2017), 3271-3303. doi: 10.1088/1361-6544/aa7cc3.
[13] B. Delourme, S. Fliss, P. Joly and E. Vasilevskaya, Trapped modes in thin and infinite ladder like domains. Part 1: Existence results, Asymptotic Anal., 103 (2017), 103-134. doi: 10.3233/ASY-171422.
[14] A. Dhooge, W. Govaerts, Y. A. Kuznetsov, H. G. E. Meijer and B. Sautois, New features of the software MatCont for bifurcation analysis of dynamical systems, Math. Comp. Model. Dyn., 14 (2008), 147-175. doi: 10.1080/13873950701742754.
[15] A. Dhooge, W. Govaerts and Y. A. Kuznetsov, MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs, ACM T. Math. Software, 29 (2003), 141-164. doi: 10.1145/779359.779362.
[16] NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.0.15 of 2017-06-01, F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller and B. V. Saunders, eds.
[17] J. C. Eilbeck and M. Johansson, The discrete nonlinear Schrödinger equation–20 years on, in Proceedings Of The Third Conference On Localization And Energy Transfer In Nonlinear Systems (eds. R. S. MacKay, L. Vázquez and M. P. Zorzano), World Scientific, Madrid, 2003, 44–67. https://www.worldscientific.com/doi/abs/10.1142/9789812704627_0003. doi: 10.1142/9789812704627_0003.
[18] J. C. Eilbeck, P. S. Lomdahl and A. C. Scott, The discrete self-trapping equation, Phys. D, 16 (1985), 318-338. doi: 10.1016/0167-2789(85)90012-0.
[19] P. Glendinning, Stability, Instability and Chaos: An Introduction to the Theory of Nonlinear Differential Equations, Cambridge University Press, 1994. doi: 10.1017/CBO9780511626296.
[20] S. Gnutzmann and D. Waltner, Stationary waves on nonlinear quantum graphs: General framework and canonical perturbation theory, Phys. Rev. E, 93 (2016), 032204, 19pp. doi: 10.1103/physreve.93.032204.
[21] S. Gnutzmann and D. Waltner, Stationary waves on nonlinear quantum graphs. II. Application of canonical perturbation theory in basic graph structures, Phys. Rev. E, 94 (2016), 062216. https://journals.aps.org/pre/abstract/10.1103/PhysRevE.94.062216. doi: 10.1103/PhysRevE.94.062216.
[22] M. Golubitsky and D. Schaeffer, Singularities and Groups in Bifurcation Theory, vol. I, Springer, New York, 1985. doi: 10.1007/978-1-4612-5034-0.
[23] W. J. F. Govaerts, Numerical Methods for Bifurcations of Dynamical Equilibria, SIAM, 2000. doi: 10.1137/1.9780898719543.
[24] P. G. Kevrekidis, The Discrete Nonlinear Schrödinger Equation: Mathematical Analysis, Numerical Computations and Physical Perspectives, vol. 232 of Springer Tr. Mod. Phys., Springer, Berlin Heidelberg, 2009. doi: 10.1007/978-3-540-89199-4.
[25] E.-W. Kirr, Long time dynamics and coherent states in nonlinear wave equations, in Recent Progress and Modern Challenges in Applied Mathematics, Modeling and Computational Science (eds. R. Melnik, R. Makarov and J. Belair), vol. 79 of Fields Inst. Commun., Springer, 2017, 59–88.
[26] P. Kuchment and O. Post, On the spectra of carbon nano-structures, Commun. Math. Phys., 275 (2007), 805-826. doi: 10.1007/s00220-007-0316-1.
[27] J. L. Marzuola and D. E. Pelinovsky, Ground state on the dumbbell graph, Appl. Math. Res. Express, 2016 (2016), 98-145. doi: 10.1093/amrx/abv011.
[28] J. L. Marzuola and D. E. Pelinovsky, Ground state on the dumbbell graph (v4), 2017. https://arXiv.org/abs/1509.04721.
[29] J. L. Marzuola and M. I. Weinstein, Long time dynamics near the symmetry breaking bifurcation for nonlinear Schrödinger/Gross-Pitaevskii equations, Discrete Contin. Dyn. Syst., 28 (2010), 1505-1554. doi: 10.3934/dcds.2010.28.1505.
[30] A. Nayfeh and B. Balachandran, Applied Nonlinear Dynamics: Analytical, Computational and Experimental Methods, Wiley Series in Nonlinear Science, Wiley, New York, 1995. doi: 10.1002/9783527617548.
[31] H. Niikuni, Schrödinger operators on a periodically broken zigzag carbon nanotube, P. Indian Acad. Sci.–Math. Sci., 127 (2017), 471-516. doi: 10.1007/s12044-017-0342-7.
[32] D. Noja, S. Rolando and S. Secchi, Standing waves for the NLS on the double-bridge graph and a rational-irrational dichotomy, J. Differ. Equations, 266 (2019), 147-178. doi: 10.1016/j.jde.2018.07.038.
[33] D. Noja, D. E. Pelinovsky and G. Shaikhova, Bifurcations and stability of standing waves in the nonlinear Schrödinger equation on the tadpole graph, Nonlinearity, 28 (2015), 2343-2378. doi: 10.1088/0951-7715/28/7/2343.
[34] D. E. Pelinovsky and T. V. Phan, Normal form for the symmetry-breaking bifurcation in the nonlinear Schrödinger equation, J. Diff. Eq., 253 (2012), 2796-2824. doi: 10.1016/j.jde.2012.07.007.
[35] D. E. Pelinovsky and G. Schneider, Bifurcations of standing localized waves on periodic graphs, Ann. Henri Poincaré, 18 (2017), 1185–1211. doi: 10.1007/s00023-016-0536-z.
[36] The MathWorks, Inc., MATLAB Release 2018a, Natick, Massachusetts, United States.
[37] Wolfram Research, Inc., Mathematica, Version 11.3, Champaign, IL, 2018.
[38] J. Yang, Newton-conjugate-gradient methods for solitary wave computations, J. Comput. Phys., 228 (2009), 7007-7024. doi: 10.1016/j.jcp.2009.06.012.
[39] J. Yang, Classification of solitary wave bifurcations in generalized nonlinear Schrödinger equations, Stud. Appl. Math., 129 (2012), 133-162. doi: 10.1111/j.1467-9590.2012.00549.x.
[40] J. Yang, Personal communication, 2018.
Figure 1.1. The three most common bifurcations, after [39]. (a) Saddle-node, (b) Transcritical, (c) Pitchfork. Top row: coordinate $ a $ vs. parameter $ \Lambda $. Bottom row: power $ Q $ vs. $ \Lambda $
Figure 1.2. The dumbbell graph with its vertices and edges labeled
Figure 1.3. A numerically computed bifurcation diagram from Ref. [27]. The red $ \times $ symbols, added by this author, mark the bifurcation locations predicted by equation (3.3)
Figure 2.1. The bowtie combinatorial graph
Figure 2.2. Branches of stationary solutions to the bowtie-shaped DST system on the subspace $ \mathcal{S} _2 $
Figure 3.1. The first two members of the even family of eigenfunctions (a-b), odd family (c-d), and loop-localized family (e-f) of the linear eigenvalue problem (3.1) on the dumbbell graph, computed numerically, along with the associated eigenvalues. In subfigure (f) the analytical value is obviously $ \lambda = 4 $, giving an indication of the accuracy of this computation
Figure 3.2. A pitchfork bifurcation may split into either (a) one branch with no bifurcations and one branch with a saddle node, or (b) a saddle-node and a transcritical bifurcation
Figure 3.3. Numerical continuation of the PDE on the quantum graph. Comparison with Fig. 1.1 indicates that the loop-centered and constant solutions meet in a transcritical bifurcation. The computation indicates that the centered solution also undergoes saddle-node and pitchfork bifurcations
Figure 3.4. (a) Large-amplitude centered solution on the half-branch discovered in Ref. [27]. (b) Large-amplitude two-soliton solution. (c) Solution arising from symmetry-breaking of centered state. (d) Solution arising from symmetry-breaking of constant state. Subplot labels correspond to marked points in Figure 3.3
Figure 4.1. A graph that supports similar bifurcations
Figure 4.2. The analog of Fig. 3.3 with $ L = 15 $ and $ L = 50 $. As $ L $ is increased, the angle with which the two branches of solution approach the transcritical bifurcation decreases, making it appear, locally, more like a pitchfork
Figure 5.1. The phase plane of Equation (1.9), whose trajectories are level sets of the energy given by Equation (5.2)
Figure 5.2. The shooting function described in the text whose zeros correspond to nonlinear standing waves on the graph $ \Gamma $
Figure 5.3. Two views of a partial bifurcation diagram with $ L = 2 $. (a) Plotting $ Q $, the squared $ L^2 $ norm of the standing wave solutions. (b) Plotting the value $ q $ used in the shooting function. Colors of branches are consistent between the two panels and with Fig. 3.3
Figure 5.4. Three views of a typical solution with two complete loops
Figure 5.5. Bifurcation diagram for solutions with two complete loops. Plotted are solutions with $\left| {n_j} \right| \le 2$ and $ \left| m \right| \le2$. Color indicates type of solution on the edge $ \mathtt{e} _2 $. The dashed line shows the nonzero constant solution $ \Phi = \sqrt{-\Lambda/2} $
Figure 5.6. The standing waves at the six marked points in the bifurcation diagram of Fig. 5.5. (a) $ (0, 0, 2) $, (b) $ (1, 0, 2) $, (c) $ (1, 1, 2) $, (d) $ (1, \Lambda, 2) $, (e) $ (1, -1, 2) $, (f) $ (2, -1, 1) $. Note from (e) and (f) that reversing $ n_1 $ and $ n_3 $ is not equivalent to a symmetry operation since a half-period of the $ dn $-function has no symmetries. As $ \Lambda $ decreases, (b) bifurcates from (a), and then (c), (d), and (e-f) bifurcate from (b) in that order
Figure 5.7. (a) Solid curves: Partial bifurcation diagram on the lollipop subgraph. Dashed curves (red) indicate the maximum values of the quantized cnoidal solutions and the dash-dot curves (green) the maximum and minimum values of the quantized dnoidal solutions on edge $ \mathtt{e} _3 $, with the regions between them shaded, alternately, for clarity. The marked points at intersections between the two families of curves indicate saddle-node bifurcations of solutions with cnoidal or dnoidal solutions on the edge $ \mathtt{e} _3 $. (b) Partial bifurcation diagram on the dumbbell graph
Longevity, Aging and Cancer: Thermodynamics and Complexity
J.M. Nieto-Villar, R. Mansilla
Subject: Life Sciences, Cell & Developmental Biology Keywords: longevity; aging; cancer; complex systems; non-equilibrium thermodynamics; biological phase transition; ferroptosis
Using the formalism of the thermodynamics of irreversible processes and the theory of complex systems, we characterize longevity and aging and their relationship with the emergence and evolution of cancer. We find that: 1. The rate of entropy production can be used as an index of robustness, plasticity, and the aggressiveness of cancer, and as a measure of biological age; 2. The aging process, as well as the evolution of cancer, goes through what we have called a "biological phase transition"; 3. The process of metastasis, which occurs through the epithelial-mesenchymal transition (EMT), appears as a phase transition far from thermodynamic equilibrium and exhibits Shilnikov-chaos-like dynamic behavior. This dynamic guarantees the robustness of the process and, in turn, its unpredictability; 4. As the ferroptosis process is strengthened, the complexity of the dynamics associated with the emergence and evolution of cancer decreases. The theoretical framework developed here contributes to a better understanding of the biophysical-chemical phenomena of longevity and aging and their relationship with cancer.
Simple, Accurate and User-Friendly Differential Constitutive Model for the Rheology of Entangled Polymer Melts and Solutions from Non-Equilibrium Thermodynamics
Pavlos Stephanou, Ioanna Tsimouri, Vlasis Mavrantzas
Subject: Materials Science, Polymers & Plastics Keywords: entangled polymer melts; concentrated polymer solutions; non-equilibrium thermodynamics; polymer tumbling; transient shear viscosity undershoot
In a recent reformulation of the Marrucci-Ianniruberto constitutive equation for the rheology of entangled polymer melts in the context of non-equilibrium thermodynamics, rather large values of the convective constraint release parameter $\beta_{ccr}$ had to be used in order not to violate the second law of thermodynamics. In this work, we present an appropriate modification of the model which avoids splitting the evolution equation for the conformation tensor into an orientation part and a stretching part. Thermodynamic admissibility then dictates simply that $\beta_{ccr} \geq 0$, allowing more realistic values of $\beta_{ccr}$ to be chosen. Moreover, in view of recent experimental evidence for a transient stress undershoot (following the overshoot) at high shear rates, whose origin may be traced back to molecular tumbling, we have incorporated in the model additional terms accounting, at least in an approximate way, for non-affine deformation through a slip parameter $\xi$. Use of the new model to describe available experimental data for the transient and steady-state shear and elongational rheology of entangled polystyrene melts and solutions shows close agreement. Overall, the modified model combines simplicity with accuracy, which renders it an excellent choice for managing complex viscoelastic fluid flows in large-scale numerical calculations.
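For orientation, single-mode conformation-tensor models of this family have the schematic structure below; this is a generic textbook template with a deliberately simplified relaxation term, not the specific model of this work, and transpose conventions for $ \nabla\mathbf{v} $ vary between authors:

$$ \frac{D\mathbf{C}}{Dt} - \left( \nabla\mathbf{v} - \xi\,\mathbf{D} \right)^{\mathsf{T}} \cdot \mathbf{C} - \mathbf{C} \cdot \left( \nabla\mathbf{v} - \xi\,\mathbf{D} \right) = -\frac{1}{\tau} \left( \mathbf{C} - \mathbf{I} \right), $$

where $ \mathbf{C} $ is the conformation tensor, $ \mathbf{D} $ the rate-of-deformation tensor, and $ \tau $ a relaxation time; the slip parameter $ \xi $ introduces the non-affine (Gordon-Schowalter) deformation mentioned in the abstract, with $ \xi = 0 $ recovering the affine, upper-convected limit.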
General Equilibrium Theory in Economics And Beyond
Mohamad Rilwan, Agra T. Wijeratne
Subject: Social Sciences, Accounting Keywords: Energy; Equilibrium; Gradients; Non-Equilibrium Thermodynamics; Entropy
The general equilibrium theory tries to show how and why all free markets tend toward equilibrium in the long run. However, equilibrium in this paper is understood more from a thermodynamic point of view. To understand the actual situation, it is necessary to study open systems, which are complex. In physics, such behavior in a complex system can be explained using non-equilibrium thermodynamics. A system is able to self-organize and sustain itself away from equilibrium. Economic systems may fluctuate around a particular point; to sustain themselves far from the equilibrium state, they need to degrade more energy and materials. In this study, the energy consumption patterns of Sri Lanka and the USA are discussed. The pattern concerning Sri Lanka is close to the model proposed here, whereas the energy consumption pattern of the USA is more complicated due to external factors.
Non-Equilibrium Phase Behavior of Hydrocarbons in Compositional Simulations and Upscaling
Ilya M. Indrupskiy, Olga A. Lobanova, Vadim R. Zubov
Subject: Engineering, Energy & Fuel Technology Keywords: non-equilibrium phase behavior; compositional flow simulations; phase transitions; upscaling; hydrocarbon mixtures; non-equilibrium constant volume depletion
Numerical models widely used for hydrocarbon phase behavior and compositional flow simulations are based on the assumption of thermodynamic equilibrium. However, it is not uncommon for oil and gas-condensate reservoirs to exhibit essentially non-equilibrium phase behavior, e.g., in processes of secondary recovery after pressure depletion below the saturation pressure, during gas injection, or during condensate evaporation at low pressures. In many cases the ability to match field data with an equilibrium model depends on the simulation scale. The only method to account for non-equilibrium phase behavior adopted by the majority of flow simulators is the option of a limited rate of gas dissolution (condensate evaporation) in black-oil models. For compositional simulations, no practical yet thermodynamically consistent method has been presented so far, except for some upscaling techniques in gas injection problems. Previously reported academic non-equilibrium formulations share the drawback of doubling the number of flow equations and unknowns compared to the equilibrium formulation. In this paper a unified thermodynamically consistent formulation for compositional flow simulations with a non-equilibrium phase behavior model is presented. The same formulation and a special scale-up technique can be used for upscaling an equilibrium or non-equilibrium model to a coarse-scale non-equilibrium model. A number of test cases for real oil and gas-condensate mixtures are given. Specifics of the model implementation in a flow simulator are discussed and illustrated with test simulations. A non-equilibrium constant volume depletion algorithm is presented to simulate condensate recovery at low pressures in gas-condensate reservoirs. Results of satisfactory model matching to field data are reported and discussed.
Towards a Physically Consistent Phase-Field Model for Alloy Solidification
Peter Bollada, Peter K Jimack, Andrew M Mullis
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: phase-field; solidification; non-equilibrium thermodynamics; crystal formation
We summarise contributions made to the computational phase-field modelling of alloy solidification from the University of Leeds spoke of the LiME project. We begin with a general introduction to phase-field methods, then touch on the numerical issues that arise in solving the model, before detailing each contribution to the modelling itself. These contributions range from controlling and developing interface-width-independent modelling; controlling morphology in both single- and multi-phase settings; generalising from single- to multi-phase models; and creating a thermodynamically consistent framework for modelling entropy flow, thereby postulating a temperature field consistent with the concepts of, and applicable in, multiphase and density-dependent settings.
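As a pointer for non-specialists (a generic textbook form, not the specific model of this work): a non-conserved phase field $ \phi $ typically evolves by gradient descent on a free-energy functional,

$$ F[\phi] = \int_V \left( \frac{\varepsilon^2}{2}\,|\nabla \phi|^2 + f(\phi; c, T) \right) dV, \qquad \frac{\partial \phi}{\partial t} = -M\,\frac{\delta F}{\delta \phi} = M \left( \varepsilon^2 \nabla^2 \phi - \frac{\partial f}{\partial \phi} \right), $$

where $ M $ is a mobility, $ \varepsilon $ sets the interface width, and $ f $ is a bulk free-energy density; multi-phase and thermodynamically consistent variants of the kind summarised here generalize this structure.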
Matrix Product State Simulations of Non-Equilibrium Steady States and Transient Heat Flows in the Two-Bath Spin-Boson Model at Finite Temperatures
Angus Dunnett, Alex W. Chin
Subject: Physical Sciences, Acoustics Keywords: Open quantum systems, Tensor networks, non-equilibrium dynamics
Simulating the non-perturbative and non-Markovian dynamics of open quantum systems is a very challenging many-body problem, due to the need to evolve both the system and its environments on an equal footing. Tensor networks and matrix product states (MPS) have emerged as powerful tools for open-system models, but the numerical resources required to treat finite-temperature environments grow extremely rapidly and limit their applications. In this study we use time-dependent variational evolution of MPS to explore the striking theory of Tamascelli et al. that shows how finite-temperature open dynamics can be obtained from zero-temperature, i.e. pure wave function, simulations. Using this approach, we produce a benchmark data set for the dynamics of the Ohmic spin-boson model across a wide range of couplings and temperatures, and also present a detailed analysis of the numerical costs of simulating non-equilibrium steady states, such as those emerging from the non-perturbative coupling of a qubit to baths at different temperatures. Despite ever-growing resource requirements, we find that converged non-perturbative results can be obtained, and we discuss a number of recent ideas and numerical techniques that should allow wide application of MPS to complex open quantum systems.
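For orientation, a standard two-bath spin-boson Hamiltonian (my notation; factor-of-two conventions vary between papers) is

$$ H = \frac{\epsilon}{2}\sigma_z + \frac{\Delta}{2}\sigma_x + \sum_{\alpha \in \{h,c\}} \sum_k \omega_{\alpha k}\, a_{\alpha k}^{\dagger} a_{\alpha k} + \frac{\sigma_z}{2} \sum_{\alpha \in \{h,c\}} \sum_k g_{\alpha k} \left( a_{\alpha k} + a_{\alpha k}^{\dagger} \right), $$

where the Ohmic case corresponds to a spectral density $ J(\omega) \propto \omega\, e^{-\omega/\omega_c} $, and holding the two baths $ \alpha = h, c $ at different temperatures drives a steady-state heat flow through the qubit.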
Information Length as a Useful Index to Understand Variability in the Global Circulation
Eun-jin Kim, James Heseltine, Hanli Liu
Subject: Physical Sciences, Fluids & Plasmas Keywords: variabilities; modeling; non-equilibrium; turbulence; gravity waves; PDFs
With improved measurement and modelling technology, variability has emerged as an essential feature of non-equilibrium processes. While mean values and variance have traditionally been heavily used, they are not appropriate for describing extreme events, where significant deviations from mean values often occur. Furthermore, stationary Probability Density Functions (PDFs) miss crucial information about the dynamics associated with variability. It is thus critical to go beyond the traditional approach and deal with time-dependent PDFs. Here, we consider atmospheric data from the Whole Atmosphere Community Climate Model (WACCM) and calculate time-dependent PDFs and the information length computed from these PDFs, which is the total number of statistically different states that a system passes through in time. The time-dependent PDFs are shown to be non-Gaussian in general, and the information length calculated from them offers a new perspective on variability and on the correlations among different variables and regions. Specifically, we show that the information length tends to increase with altitude, albeit in a complex form. This tendency is more robust for flows/shears than for temperature. Also, much more similarity in the information length is found among flows and shears than for temperature, indicating a stronger correlation among flows/shears because of strong coupling through gravity waves in this particular WACCM model. We also find an increase of the information length with latitude and an interesting hemispheric asymmetry for flows/shears/temperature, with a stronger anti-correlation (correlation) between flows/shears and temperature at higher (lower) latitudes. These results suggest the importance of high latitudes/altitudes in the information budget of the Earth's atmosphere, and of the spatial gradient of the information length as a useful proxy for the transport of physical quantities.
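For reference, the information length used by this group is defined (as I recall the convention from the authors' earlier papers; treat the exact normalization as an assumption) through

$$ \mathcal{E}(t) = \int dx\, \frac{1}{p(x,t)} \left[ \frac{\partial p(x,t)}{\partial t} \right]^2, \qquad \mathcal{L}(t) = \int_0^t \sqrt{\mathcal{E}(t_1)}\; dt_1, $$

so that $ \mathcal{L}(t) $ counts, in units of a characteristic correlation time, the number of statistically distinguishable states the time-dependent PDF passes through up to time $ t $.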
Thermodynamic, non-extensive, or turbulent quasi equilibrium for space plasma environment
Peter Yoon
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: non-extensive entropic principle; plasma turbulence; quasi equilibrium
The Boltzmann-Gibbs (BG) entropy has been used in a wide variety of problems for more than a century. It is well known that BG entropy is extensive, but for certain systems, such as those dictated by long-range interactions, the entropy must be non-extensive. Tsallis entropy possesses non-extensive characteristics and is parametrized by a variable q (q = 1 being the classic BG limit), but unless q is determined from microscopic dynamics, the model remains but a phenomenological tool. To date, very few examples have emerged in which q can be computed from first principles. This paper shows that the space plasma environment, which is governed by long-range collective electromagnetic interactions, represents a perfect example for which the q parameter can be computed from microphysics. By taking into account the electron velocity distribution functions measured in the heliospheric environment, and considering them to be in a quasi-equilibrium state with electrostatic turbulence known as the quasi-thermal noise, it is shown that the value q = 9/13 = 0.6923 may be deduced. This prediction is verified against observations made by spacecraft and is shown to be in excellent agreement.
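For readers unfamiliar with the formalism, the Tsallis entropy of a discrete distribution $ \{p_i\} $ is the standard expression

$$ S_q = k\, \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad \lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i, $$

which recovers the extensive Boltzmann-Gibbs entropy in the limit $ q \to 1 $.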
Flow and Convection in Metal Foams: A Survey and New CFD Results
Beatrice Pulvirenti, Michele Celli, Antonio Barletta
Subject: Physical Sciences, Fluids & Plasmas Keywords: Metal Foam; Porous Medium; Convection; Local Thermal Non-Equilibrium
Metal foams are widely studied as possible tools for enhancing heat transfer from hot bodies. The basic idea is that a metal foam significantly increases the heat exchange area between the hot solid body and the external cooling fluid. For this reason, this class of porous materials is considered a good candidate as an alternative to finned surfaces, with different pros and cons. Among the pros is the generally wider area of contact between solid and fluid. Among the cons is the difficulty of producing different specimens with the same inner structure, with the consequence that their performance may vary significantly. This paper offers a survey of the literature with a focus on the main heat transfer characteristics of metal foams. Then, a numerical simulation of the heat transfer at the pore-scale level for an artificial foam with a spatially periodic structure is discussed. Finally, these numerical results are employed to assess the macroscopic modelling of the flow and heat transfer in a metal foam.
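The macroscopic modelling assessed here is typically of the two-temperature local thermal non-equilibrium (LTNE) type; a common textbook form of the energy equations (generic coefficients, not taken from this abstract) is

$$ \varepsilon\, (\rho c_p)_f \left( \frac{\partial T_f}{\partial t} + \mathbf{u} \cdot \nabla T_f \right) = \varepsilon\, k_f \nabla^2 T_f + h\,a\,(T_s - T_f), \qquad (1-\varepsilon)\, (\rho c)_s\, \frac{\partial T_s}{\partial t} = (1-\varepsilon)\, k_s \nabla^2 T_s - h\,a\,(T_s - T_f), $$

where $ \varepsilon $ is the porosity, $ h $ the interfacial heat transfer coefficient, and $ a $ the solid-fluid interfacial area per unit volume; pore-scale simulations of the kind described can be used to estimate the product $ h\,a $.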
Design of Multiplex Lateral Flow Tests: A Case Study for Simultaneous Detection of Three Antibiotics
Anastasiya V. Bartosh, Dmitriy V. Sotnikov, Olga D. Hendrickson, Anatoly V. Zherdev, Boris B. Dzantiev
Subject: Chemistry, Analytical Chemistry Keywords: multiparametric assay; rapid tests; immunochromatography; antibiotics; non-equilibrium interactions
This study focuses on the impact of the locations of binding zones on immunochromatographic test strips on the analytical parameters of a multiplex lateral flow assay. Because such assays run under non-equilibrium conditions, the duration of the immune reactions significantly influences the analytical parameters, and integrating several analytes into one multiplex strip may cause an essential decrease in sensitivity. To choose the best locations of the binding zones, we tested reactants for immunochromatographic assays of lincomycin, chloramphenicol, and tetracycline. The influence of the distance to the binding zones on the intensity of coloration and the limit of detection (LOD) differed considerably among the analytes. Based on the obtained data, the best order of binding zones was chosen. In comparison with a non-optimal location, the LODs were improved 5-10 fold. The final assay provides LODs of 0.4, 0.4, and 1.0 ng/mL for lincomycin, chloramphenicol, and tetracycline, respectively. The proposed approach can be applied to multiplex assays of other analytes.
Solving Fuzzy Bi-matrix Games Through an Interval Value Function Approach
Kaisheng Liu, Yumei Xing
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: fuzzy bi-matrix game; equilibrium solution; non-linear optimization problem
This article puts forward bi-matrix games with crisp parametric payoffs based on an interval value function approach. We conclude that the equilibrium solution of the game model can be converted into optimal solutions of a pair of non-linear optimization problems. Finally, experimental results show the efficiency of the model.
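To make the underlying crisp object concrete, the sketch below computes the fully mixed equilibrium of an ordinary (non-fuzzy) 2x2 bi-matrix game from the players' indifference conditions; the payoff values are invented for illustration, and this is not the paper's interval-valued optimization method.

```python
import numpy as np

# Hypothetical payoff matrices for the row player (A) and column player (B);
# the numbers are invented for illustration.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# In a fully mixed equilibrium of a 2x2 bi-matrix game, each player mixes so
# that the *other* player is indifferent between their two pure strategies.
# p = probability the row player puts on row 1 (makes the column player indifferent):
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
# q = probability the column player puts on column 1 (makes the row player indifferent):
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])

x = np.array([p, 1 - p])   # row player's mixed strategy
y = np.array([q, 1 - q])   # column player's mixed strategy
print("equilibrium strategies:", x, y)
print("expected payoffs:", x @ A @ y, x @ B @ y)
```

Note that p is computed from B and q from A: each player's mixing is chosen to neutralize the opponent's incentives, which is the defining property of a mixed Nash equilibrium.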
Non- and Quasi-Equilibrium Multi-Phase Field Methods Coupled with CALPHAD Database for Rapid-Solidification Microstructural Evolution in Laser Powder Bed Additive Manufacturing Condition
Sukeharu Nomoto, Masahito Segawa, Makoto Watanabe
Subject: Materials Science, Biomaterials Keywords: additive manufacturing; rapid solidification; microstructural evolution; non-equilibrium; quasi-equilibrium; multi-phase field method; CALPHAD database; nickel alloy
Solidification microstructure is formed under the high cooling rates and temperature gradients of powder-based additive manufacturing. In this study, a non-equilibrium multi-phase field method (MPFM), based on the finite interface dissipation model proposed by Steinbach et al., coupled with a CALPHAD database was developed for a multicomponent Ni alloy. A quasi-equilibrium MPFM was also developed for comparison. Two-dimensional equiaxed microstructural evolution for the Ni (Bal.)–Al–Co–Cr–Mo–Ta–Ti–W–C alloy was simulated at various cooling rates. The temperature–γ-fraction profiles obtained at 10^5 K/s using the non- and quasi-equilibrium MPFMs were in good agreement with each other. Above 10^6 K/s, the differences between the non- and quasi-equilibrium methods grew as the cooling rate increased, and non-equilibrium solidification was strengthened. Columnar-solidification microstructural evolution was simulated at cooling rates from 5×10^5 K/s to 1×10^7 K/s for various temperature gradient values at a constant interface velocity (0.1 m/s). The results showed that as the cooling rate increased, the cell spacing decreased in both methods, and the non-equilibrium MPFM agreed well with experimental measurements. Our results show that the non-equilibrium MPFM can simulate solidification microstructure in powder bed fusion additive manufacturing.
The Post Shock Nonequilibrium Relaxation in a Hypersonic Plasma Flow Involving Reflection off A Thermal Discontinuity
Anna Markhotok
Subject: Physical Sciences, Fluids & Plasmas Keywords: Hypersonic plasma dynamics; Optical Discharge; Shock wave structure; Non-equilibrium state
The evolution of the post-shock non-equilibrium relaxation in a hypersonic plasma flow was investigated during a shock's reflection off a thermal discontinuity. Within a transitional period, the relaxation-zone parameters behind both the reflected and transmitted waves evolve differently from those in the incident wave. In a numerical example for non-dissociating N2 gas heated to 5000 K/10,000 K across the interface and M = 3.5, the relaxation time for the transmitted wave is up to 50% shorter and the relaxation depth for both waves is significantly reduced, resulting in a weakened wave structure. The extension of the results to larger heating strengths and shock Mach numbers is discussed. The findings can be useful in areas of research involving strong shocks interacting with optical discharges or other heated media on scales where the shock structure becomes important.
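The relaxation zone behind such a shock is commonly modelled with a Landau-Teller-type rate equation for the vibrational energy (a standard form, included for orientation rather than taken from this abstract):

$$ \frac{d e_v}{dt} = \frac{e_v^{\mathrm{eq}}(T) - e_v}{\tau_v(p, T)}, $$

where $ e_v $ is the vibrational energy, $ e_v^{\mathrm{eq}}(T) $ its local-equilibrium value at the translational temperature $ T $, and $ \tau_v $ a pressure- and temperature-dependent relaxation time; the relaxation time and depth quoted above characterize solutions of equations of this kind behind the reflected and transmitted waves.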
Emergence of Lagrangian Field Theory from Self-Organized Criticality
Subject: Physical Sciences, General & Theoretical Physics Keywords: Self-organized criticality, multifractals, Lagrangian field theory, non-equilibrium dynamics, complexity theory.
Self-organized criticality (SOC) is a universal mechanism for self-sustained critical behavior in large-scale systems evolving outside equilibrium. Our report explores a tentative link between SOC and Lagrangian field theory, with the long-term goal of bridging the gap between complex dynamics and the non-perturbative behavior of quantum fields.
An Overview of Emergent Order in Far-from-Equilibrium Driven Systems: From Kuramoto Oscillators to Rayleigh-Bénard Convection
Atanu Chatterjee, Nicholas Mears, Yash Yadati, Germano Iannacchione
Subject: Physical Sciences, Condensed Matter Physics Keywords: non-equilibrium thermodynamics; Ising model; Kuramoto model; Rayleigh-Bénard convection; pattern formation
Soft-matter systems, when driven out of equilibrium, often give rise to structures that lie in between the macroscopic scale of the material and the microscopic scale of its constituents. In this paper we review three such systems, the two-dimensional square-lattice Ising model, the Kuramoto model, and the Rayleigh-Bénard convection system, which, when driven out of equilibrium, give rise to emergent spatio-temporal order through self-organization. A common feature of these systems is that the entities that self-assemble are coupled to one another in some way, either through local interactions or through a continuous medium. Therefore, the general nature of the non-equilibrium fluctuations of the intrinsic variables in these systems is found to follow similar trends as order emerges. Through this paper, we attempt to draw connections among these systems, and with systems in general that give rise to emergent order when driven out of equilibrium.
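As a concrete instance of the coupling discussed, the Kuramoto model of $ N $ phase oscillators and its usual synchronization order parameter are (standard definitions)

$$ \dot{\theta}_i = \omega_i + \frac{K}{N} \sum_{j=1}^{N} \sin(\theta_j - \theta_i), \qquad r\, e^{i\psi} = \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j}, $$

where $ K $ is the coupling strength and $ r \in [0, 1] $ quantifies the degree of emergent phase order.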
An Introduction to the Non-Equilibrium Steady States of Maximum Entropy Spike Trains
Rodrigo Cofre, Leonardo Videla, Fernando Rosas
Subject: Keywords: non-equilibrium steady states; maximum entropy principle; spike train statistics; entropy production
Although most biological processes are characterized by a strong temporal asymmetry, several popular mathematical models neglect this issue. Maximum entropy methods provide a principled way of addressing time irreversibility, which leverages powerful results and ideas from the literature of non-equilibrium statistical mechanics. This article provides a comprehensive overview of these issues, with a focus on the case of spike train statistics. We provide a detailed account of the mathematical foundations and work out examples to illustrate the key concepts and results from non-equilibrium statistical mechanics.
Analytical Calculation of Superconducting Transition Temperatures Including a Complete Consideration of Many-Body Interactions and Non-equilibrium States
Shinichi Ishiguri
Subject: Physical Sciences, Condensed Matter Physics Keywords: non-equilibrium superconductivity; EPR-pair superconductivity; Many-body interaction; transition temperature; Lorentz conservations
In this work, we analytically describe a superconducting transition in a non-equilibrium state, taking into account many-body interactions; the obtained transition temperatures indicate the presence of superconductivity at room temperature. First, we consider many-body interactions and discuss the case of local thermal equilibrium; in this section, we derive statistical equations that describe many-body interactions in a locally thermal-equilibrium state. The same theory is then used to derive a many-body statistical equation expanded to cover non-equilibrium states, from which a transition temperature is derived. Moreover, a wave function of an Einstein–Podolsky–Rosen pair (EPR pair) is calculated according to the Lorentz conservation laws; a specific condensation is observed and the Meissner effect is found to be present. Furthermore, considering the Lorentz conservation laws, relativistic energy, and Boltzmann statistics, algorithms are presented to calculate the charge density, current density, and internal local energy. We note that these calculations do not require a specific code but instead utilize the software Microsoft Excel. We present plots showing the charge density and current density vs. the applied electric potential, which demonstrate the practical applicability of the theory. Moreover, the internal local energy was found to be close to zero for sufficiently large electric potentials at room temperature. In the discussion section, the universally induced superconducting current is derived, which can be employed as renewable energy. This paper describes non-equilibrium and EPR-pair-type superconductivity, with complete consideration of many-body interactions.
Non-equilibrium Thermodynamic Foundations of the Origin of Life
Karo Michaelian
Subject: Life Sciences, Biophysics Keywords: origin of life; dissipative structuring; prebiotic chemistry; abiogenesis; non-equilibrium thermodynamics; thermodynamic dissipation theory
There is little doubt that life's origin followed from the known physical and chemical laws of Nature. The most general scientific framework incorporating the laws of Nature, and applicable to most known processes to good approximation, is that of thermodynamics and its extensions to treat out-of-equilibrium phenomena. The event of the origin of life should therefore also be amenable to such an analysis. In this paper, I describe the non-equilibrium thermodynamic foundations of the origin of life for the non-expert. This "Thermodynamic Dissipation Theory for the Origin of Life" is founded on the Classical Irreversible Thermodynamic theory developed by Lars Onsager, Ilya Prigogine, and coworkers.
Non-Equilibrium Quantum Brain Dynamics: Super-radiance and Equilibration in 2+1 Dimensions
Akihiro Nishiyama, Shigenori Tanaka, Jack A. Tuszynski
Subject: Physical Sciences, Other Keywords: non-equilibrium quantum field theory; quantum brain dynamics; Kadanoff–Baym equation; entropy; super-radiance
We derive time evolution equations, namely Schrödinger-like equations and Klein-Gordon equations for coherent fields, and the Kadanoff-Baym (KB) equations for quantum fluctuations, in Quantum Electrodynamics (QED) with electric dipoles in 2+1 dimensions. Next we introduce a kinetic entropy current based on the KB equations at first order in the gradient expansion. We show the H-theorem for the leading-order self-energy in the coupling expansion (the Hartree-Fock approximation). We show that an energy is conserved in the time evolution of spatially homogeneous systems. We derive aspects of super-radiance and equilibration from our single Lagrangian. Our analysis can be applied to Quantum Brain Dynamics, that is, QED with water electric dipoles. The total energy consumption to maintain super-radiant states in microtubules seems to be within the energy consumption required to maintain ordered systems in a brain.
Fundamental Clock of Biological Aging: Convergence of Molecular, Neurodegenerative, Cognitive, and Psychiatric Pathways: Non-Equilibrium Thermodynamics Meet Psychology
Victor Vasilyevich Dyakin, Nika Victorovna Dyakina-Fagnano, Laura Beth McIntire, Vladimir Nikolaevich Uversky
Subject: Biology, Physiology Keywords: spontaneous; non-enzymatic; post-translational modifications; racemization; biological clock; natural selection; allostatic load; psychological aging; psychological stress; stress response system; phase transitions
In humans, age-associated degrading changes are observed in the molecular and cellular processes underlying the time-dependent decline in spatial navigation, time perception, cognitive and psychological abilities, and memory. Cross-talk among biological, cognitive, and psychological clocks provides an integrative contribution to healthy and advanced aging. At the molecular level, genome, proteome, and lipidome instability are widely recognized as the primary causal factors in aging. We narrow attention to the roles of protein aging linked to the prevalent amino acid chirality, enzymatic and spontaneous (non-enzymatic) post-translational modifications (PTMs SP), and non-equilibrium phase transitions. The homochirality of protein synthesis, resulting in the steady-state non-equilibrium condition of protein structure, makes proteins prone to multiple types of enzymatic and spontaneous PTMs, including racemization and isomerization. Spontaneous racemization leads to the loss of the balanced prevalent chirality. Advanced biological aging related to irreversible PTMs SP has been associated with the nontrivial interplay between poor somatic and mental health conditions. Through stress response systems (SRS), environmental and psychological stressors contribute to the age-associated "collapse" of protein homochirality. The role of prevalent protein chirality and the entropy of protein folding in biological aging is largely overlooked. In a more generalized context, the time-dependent shift from enzymatic to non-enzymatic transformation of biochirality might represent an important and yet under-appreciated hallmark of aging.
The Dissipative Photochemical Origin of Life: UVC Abiogenisis of the Purines
Claudeth Clarissa Hernández, Karo Michaelian
Subject: Life Sciences, Biochemistry Keywords: origin of life; dissipative structuring; non-equilibrium thermodynamics; prebiotic chemistry; abiogenesis; adenine; guanine; hypoxanthine; xanthine; purines
We have suggested that the abiogenesis of life around the beginning of the Archean may have been an example of microscopic dissipative structuring of UVC pigments (the fundamental molecules of life) under the prevailing surface UV solar spectrum. In a previous article in this series, we described the non-equilibrium thermodynamics and the photochemical mechanisms which may have been involved in the dissipative structuring of the purines adenine and hypoxanthine from the common precursor molecules HCN and water under UVC light. In this article we extend our analysis to include the production of the other two important purines, guanine and xanthine, from these same precursors. The photochemical reactions are presumed to occur within a fatty acid vesicle floating on a hot ocean surface exposed to the prevailing UV light. Reaction-diffusion equations are resolved under different environmental conditions. Significant amounts of adenine ($\sim 10^{-5}$ M) and guanine ($\sim 10^{-6}$ M) are obtained within only a few months at 80 °C under plausible initial concentrations of HCN and cyanogen (a photochemical product of HCN).
Self-Organized Criticality of Traffic Flow: There is Nothing Sweet about the Sweet Spot
Jorge Laval
Subject: Engineering, Civil Engineering Keywords: traffic flow; kinematic wave model; self-organized criticality; fractals; complexity; catastrophe theory; non-equilibrium critical phenomena
This paper shows that the kinematic wave model exhibits self-organized criticality when initialized with random initial conditions around the critical density. A direct consequence is that conventional traffic management strategies seeking to maximize the flow may be detrimental, as they make the system more unpredictable and more prone to collapse. Other implications for traffic flow in the capacity state are discussed: (i) jam sizes obey a power-law distribution with exponent 1/2, implying that both their mean and variance diverge to infinity, so traditional statistical methods fail for prediction and control; (ii) the tendency to be at the critical state is an intrinsic property of traffic flow driven by our desire to travel at the maximum possible speed; (iii) traffic flow in the critical region is chaotic in that it is highly sensitive to initial conditions; (iv) aggregate measures of performance are proportional to the area under a Brownian excursion, and therefore are given by different scalings of the Airy distribution; (v) traffic in the time-space diagram forms self-affine fractals where the basic unit is a triangle, in the shape of the fundamental diagram, containing three traffic states: voids, capacity and jams. This fractal nature of traffic flow calls for analysis methods currently not used in our field.
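A quick numerical sketch (not taken from the paper) of why an exponent-1/2 power law defeats ordinary statistics: drawing "jam sizes" from the hypothetical tail law P(S > s) = s^(-1/2) shows the sample mean growing without bound as more data arrive.

```python
import numpy as np

# Illustration only: sample jam sizes from the heavy-tailed law
# P(S > s) = s**(-1/2), s >= 1, via the inverse-CDF method, and watch
# the sample mean fail to settle as the sample grows.
rng = np.random.default_rng(0)

for n in (10**3, 10**5, 10**7):
    u = 1.0 - rng.random(n)   # uniform on (0, 1], avoids division by zero
    s = u ** -2.0             # inverse CDF of the exponent-1/2 tail law
    print(f"n = {n:>8}: sample mean = {s.mean():,.1f}")
# The running mean keeps growing with n: mean and variance are infinite,
# so classical estimators carry no predictive power, as the abstract notes.
```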
Bouncing oil droplets, de Broglie's quantum thermostat and convergence to equilibrium
Mohamed Hatifi, Ralph Willox, Samuel Colin, Thomas Durt
Subject: Physical Sciences, General & Theoretical Physics Keywords: Bouncing oil droplets; Stochastic quantum dynamics; de Broglie–Bohm theory; Quantum non-equilibrium; H-theorem; Ergodicity
Recently, the properties of bouncing oil droplets, also known as 'walkers', have attracted much attention because they are thought to offer a gateway to a better understanding of quantum behaviour. They indeed constitute a macroscopic realization of wave-particle duality, in the sense that their trajectories are guided by a self-generated surrounding wave. The aim of this paper is to try to describe walker phenomenology in terms of de Broglie-Bohm dynamics and of a stochastic version thereof. In particular, we first study how a stochastic modification of the de Broglie pilot-wave theory, à la Nelson, affects the process of relaxation to quantum equilibrium, and we prove an H-theorem for the relaxation to quantum equilibrium under Nelson-type dynamics. We then compare the onset of equilibrium in the stochastic and the de Broglie-Bohm approaches and we propose some simple experiments by which one can test the applicability of our theory to the context of bouncing oil droplets. Finally, we compare our theory to actual observations of walker behaviour in a 2D harmonic potential well.
A Computational Model to Inform Effective Control Interventions against Yersinia enterocolitica Coinfection
Reihaneh Mostolizadeh, Andreas Dräger
Subject: Life Sciences, Biochemistry Keywords: reproduction number; disease-free equilibrium; co-existence equilibrium; Yersinia; gastroenteritis
The complex interplay among pathogens, host factors, and the integrity and composition of the endogenous microbiome determines the course and outcome of gastrointestinal infections. The model organism Yersinia enterocolitica (Ye) is one of the five most frequent causes of bacterial gastroenteritis according to the Epidemiological Bulletin of the Robert Koch Institute (RKI) published on September 10, 2020. A fundamental challenge in predicting the course of an infection is to understand whether co-infection with two Yersinia strains differing only in their capacity to resist killing by the host immune system may decrease the overall virulence by competitive exclusion or increase it by acting cooperatively. Herein, we study the primary interactions among Ye, the host immune system and the microbiota, and their influence on Yersinia population dynamics. The employed model considers two host compartments, the intestinal mucosa and lumen, commensal bacteria, the co-existence of wild-type and mutant Yersinia strains, as well as the host immune responses. We determine four possible equilibria: the disease-free, wild-type-free, mutant-free, and wild-type/mutant co-existence equilibrium. We also calculate the reproduction number for each strain as a threshold parameter to determine whether the population may either be eradicated or persist within the host. We conclude that the infection should disappear if the reproduction numbers for each strain fall below one and the commensal bacteria's growth rate exceeds the pathogens' growth rates. These findings will help inform public health control strategies. The supplement includes a MATLAB source script, a Maple workbook, and figures.
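The threshold logic in this abstract can be illustrated with a deliberately simplified two-strain competition sketch; the model form and all parameter names (r_w, r_m, r_c, c_w, c_m, K) are assumptions for illustration, not the authors' equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy version of a two-strain/commensal competition model.
# Each strain persists only if its "reproduction number" R_i = r_i / c_i
# exceeds one and it is not outcompeted by the commensals.
r_w, r_m, r_c = 0.8, 0.6, 1.2   # growth rates: wild type, mutant, commensals
c_w, c_m = 1.0, 0.9             # immune clearance rates of the two strains
K = 1.0                         # shared carrying capacity in the gut lumen

def rhs(t, y):
    W, M, C = y
    crowd = 1.0 - (W + M + C) / K
    return [r_w * W * crowd - c_w * W,
            r_m * M * crowd - c_m * M,
            r_c * C * crowd]

sol = solve_ivp(rhs, (0, 200), [0.05, 0.05, 0.1], rtol=1e-8)
print("R_wild =", r_w / c_w, " R_mutant =", r_m / c_m)
print("final state (W, M, C):", sol.y[:, -1].round(4))
# Both reproduction numbers are below one and the commensal growth rate
# exceeds both pathogen growth rates, so W and M decay toward zero: the
# disease-free equilibrium, consistent with the abstract's threshold claim.
```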
A Photon Force and Flow for Dissipative Structuring: Application to Pigments, Plants and Ecosystems
Karo Michaelian, Ramon Eduardo Cano Mateo
Subject: Physical Sciences, Other Keywords: dissipative structuring; non-equilibrium thermodynamics; entropy production; origin of life; organic pigments; plants; ecosystems; evolution; chlorophyll; biosignatures
Through a modern derivation of Planck's formula for the entropy of an arbitrary beam of photons, we derive a general expression for the entropy production due to the irreversible process of the absorption of an arbitrary incident photon spectrum in a material and its dissipation into an infrared-shifted grey-body emitted spectrum, the rest being reflected or transmitted. Employing the framework of Classical Irreversible Thermodynamic theory, we define the generalized thermodynamic flow as the flow of photons from the incident beam into the material; the generalized thermodynamic force is then just the entropy production divided by the photon flow, i.e., the entropy production per unit photon at a given wavelength. We compare the entropy production under sunlight of different inorganic and organic materials (water, desert, leaves and forests) and show that organic materials are the greater entropy-producing materials. Intriguingly, plant and phytoplankton pigments (including chlorophyll) have peak absorption exactly where entropy production through photon dissipation is maximal for our solar spectrum ($430<\lambda<550$ nm), while photosynthetic efficiency is maximal between 600 and 700 nm. These results suggest that the evolution of pigments, plants and ecosystems has been towards optimizing entropy production rather than photosynthesis. We propose using the wavelength dependence of global entropy production as a biosignature for discovering life on planets of other stars.
Multi-Relaxation Time Lattice Boltzmann Simulations of Oscillatory Instability in Lid-Driven Flows of 2D Semi-Elliptical Cavity
Zhe Feng, HeeChang Lim
Subject: Engineering, Mechanical Engineering Keywords: lattice Boltzmann method; mass-conserved wall treatment; non-equilibrium extrapolation boundary condition; mass leakage; parallel computation; CFD
In this study, the multi-relaxation-time lattice Boltzmann method is applied to investigate the oscillatory instability of lid-driven flows in two-dimensional semi-elliptical cavities with vertical-to-horizontal aspect ratios K in the range of 1.0–3.0. The program implemented in this study is parallelized using CUDA (compute unified device architecture), a parallel computing platform, and computations are carried out on an NVIDIA Tesla K40c GPU. To carry out precise calculations, the CUDA algorithm is extensively investigated, and its parallel efficiency indicates a maximum speedup of 47.6. Furthermore, the steady–oscillatory transition Reynolds numbers are predicted by the CUDA-based programs. The amplitude coefficient is defined to quantify the time-dependent oscillation of the velocity magnitude at the monitoring point. The simulation results indicate that the transition Reynolds numbers correlate negatively with the aspect ratio of the semi-elliptical cavity, and are smaller than those of the rectangular cavity at the same aspect ratio. In addition, the detailed vortex structures of the semi-elliptical cavity within a single period are investigated when the Reynolds number is larger than the steady–oscillatory value, to determine the effects of the periodic oscillation of the velocity magnitude.
Reciprocally-coupled Gating: Strange Loops in Bioenergetics, Genetics, and Catalysis
Charles W. Carter, Jr, Peter R Wills
Subject: Biology, Anatomy & Morphology Keywords: Genetic coding; free energy transduction; non-equilibrium thermodynamics; transition-state stabilization; conformational change; aminoacyl-tRNA synthetases; emergent phenomena
Bioenergetics, genetic coding, and catalysis are all difficult to imagine emerging without pre-existing historical context. That context is often posed as a "Chicken and Egg" problem; its resolution is concisely described by deGrasse Tyson: "the egg was laid by a bird that was not a chicken". The concision and generality of that answer furnish no details—only an appropriate framework from which to examine detailed paradigms that might illuminate paradoxes underlying these three life-defining biomolecular processes. We examine experimental aspects here of five examples that all conform to the same paradigm. The paradox in each example is resolved by "if, and only if" coupling conditions for two related transitions between levels: one drives, and each restricts fluxes through, or "gates", the other. That reciprocally-coupled gating, in which two gated processes constrain one another, maps onto the formal structure of "strange loops". That mapping may help unite the axiomatic foundations of genetics, bioenergetics, and catalysis. As a physical analog for Gödel's logic, biomolecular strange loops provide a natural metaphor around which to organize these data, linking biology to the physics of information, free energy, and the second law of thermodynamics.
Fluctuation Theorem of Information Exchange within an Ensemble of Paths Conditioned at Coupled Microstates
Lee Jinwoo
Subject: Physical Sciences, Condensed Matter Physics Keywords: local non-equilibrium thermodynamics; fluctuation theorem; mutual information; entropy production; local mutual information; thermodynamics of information; stochastic thermodynamics
Fluctuation theorems are a class of equalities each of which links a thermodynamic path functional such as heat and work to a state function such as entropy and free energy. Jinwoo and Tanaka [L. Jinwoo and H. Tanaka, Sci. Rep. 5, 7832 (2015)] have shown that each microstate of a fluctuating system can be regarded as an ensemble (or a 'macrostate') if we consider trajectories that reach each microstate. They have revealed that local forms of entropy and free energy are true thermodynamic potentials of each microstate, encoding heat and work, respectively, within an ensemble of paths that reach each state. Here we show that information that is characterized by the local form of mutual information between two subsystems in a heat bath is also a true thermodynamic potential of each coupled state and encodes the entropy production of the subsystems and heat bath during a coupling process. To this end, we extend the fluctuation theorem of information exchange [T. Sagawa and M. Ueda, Phys. Rev. Lett. 109, 180602 (2012)] by showing that the fluctuation theorem holds even within an ensemble of paths that reach a coupled state during dynamic co-evolution of two subsystems.
Towards the Understanding of Ice Crystal-Graupel Collision Charging in Thunderstorm Electrification
Yuanping He, Boyan Gu, Daizhou Zhang, Weizhen Lu, Chuck Wah Yu, Zhaolin Gu
Subject: Earth Sciences, Atmospheric Science Keywords: Thunderstorm electrification; Ice crystal-graupel collision; Relative growth rate theory; Temperature gradient; Non-thermal equilibrium; Tripole charge structure; Thunderclouds hydrometeors
Thunderstorm electrification has been studied for hundreds of years. Several mechanisms have been proposed to elucidate the electrification, including convective charging, inductive precipitation charging, and ice crystal-graupel collision charging. Field observations and model studies have demonstrated the vital roles that graupel and ice crystals play in the electrification, but the mechanism of the collision charging is still unclear. The fundamental essence of the relative growth rate theory used for explaining the tripole charge structure in thunderclouds also needs further exploration. We analyze the processes of ice crystal-graupel collision charging, from charge migration inside hydrometeors to charge separation between two hydrometeors. The driving effects of the temperature gradient and chemical potential gradient in charge migration are clarified, as well as the applicability of the relative growth rate theory, the thermoelectric effect and the surface tension gradient in different humidities. Based on the understanding of these electrification mechanisms, we propose that charge separation is essentially driven by non-thermal equilibrium, and that future studies on thunderstorm electrification should focus on the dynamical non-thermal equilibrium of cloud particles.
Complex Network Formation as Antagonistic Game: Numerical Modeling
Pavel Bocharov, Alexander Goryashko
Subject: Mathematics & Computer Science, General Mathematics Keywords: Graph complexity; antagonistic game theory; partition networks; neural networks; numerical modelling; Nash equilibrium; Neumann equilibrium
The basic challenges of this work are twofold: demonstrating the dependence between the functional and topological qualities of partition networks and finding the simplest—with respect to algorithmic complexity—network elements. The study of these problems is based on finding the solution to an appropriate antagonistic vertex game. The results of the numerical simulations of antagonistic partition games demonstrate that the winner's graphs are "almost always" dense and hyperenergetic compared to the loser's graphs. These observations reveal that successful evolutionary mechanisms can be realized, in principle, by the simplest objects (such as viruses).
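For readers unfamiliar with the "hyperenergetic" terminology, the following sketch computes graph energy (the sum of the absolute values of the adjacency eigenvalues) and applies the standard test E(G) > 2n − 2, the energy of the complete graph K_n; the random graphs and the networkx/numpy tooling are illustrative choices, not the authors' setup.

```python
import numpy as np
import networkx as nx

def graph_energy(G: nx.Graph) -> float:
    """Sum of absolute adjacency eigenvalues (the graph's 'energy')."""
    eigenvalues = np.linalg.eigvalsh(nx.to_numpy_array(G))
    return float(np.abs(eigenvalues).sum())

n = 20
dense = nx.gnp_random_graph(n, 0.8, seed=1)    # a dense "winner-like" graph
sparse = nx.gnp_random_graph(n, 0.1, seed=1)   # a sparse "loser-like" graph

for name, G in [("dense ", dense), ("sparse", sparse)]:
    E = graph_energy(G)
    # A graph on n vertices is called hyperenergetic when E > 2n - 2.
    print(f"{name}: energy = {E:.2f}, hyperenergetic = {E > 2 * n - 2}")
```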
Epidemic Analysis and Mathematical Modelling of H1N1 (A) with Vaccination
Jagan Mohan Jonnalagadda, Kartheek Gaddam
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: basic reproduction number; disease free equilibrium; endemic equilibrium; local asymptotic stability; global asymptotic stability; influenza
This article investigates a proposed new mathematical model that describes infected individuals using various rate coefficients such as transmission, progression, recovery and vaccination. We establish that the dynamics are completely determined by the basic reproduction number. More specifically, local and global stability of the disease-free equilibrium and the endemic equilibrium are proved under certain parameter conditions when the basic reproduction number is below or above unity. A realistic computer simulation is performed for a better understanding of the variations in trends of the different compartments after the outbreak of the disease.
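A minimal sketch of the stated threshold behavior, assuming a textbook SIR model with demography and continuous vaccination of susceptibles; the paper's exact model and parameter values may differ.

```python
from scipy.integrate import solve_ivp

# Standard construction: at the disease-free equilibrium S* = mu/(mu + v),
# and R0 = beta * S* / (gamma + mu). Parameters are illustrative only.
def rhs(t, y, beta, gamma, mu, v):
    S, I, R = y
    return [mu - beta * S * I - (mu + v) * S,
            beta * S * I - (gamma + mu) * I,
            gamma * I + v * S - mu * R]

gamma, mu, v = 0.2, 0.02, 0.05
for beta in (0.3, 1.5):                       # below / above the threshold
    S_star = mu / (mu + v)
    R0 = beta * S_star / (gamma + mu)
    sol = solve_ivp(rhs, (0, 2000), [S_star, 1e-3, 0.0],
                    args=(beta, gamma, mu, v), rtol=1e-8)
    print(f"beta = {beta}: R0 = {R0:.2f}, I(t_end) = {sol.y[1, -1]:.2e}")
# Expected: I -> 0 when R0 < 1 (disease-free equilibrium attracts) and I
# settles at a positive endemic level when R0 > 1, the stated dichotomy.
```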
Unsteady Analytical Solution of the Influence of a Thermal Radiation Force Generated from a Heated Rigid Flat Plate on a Non-homogeneous Gas Mixture
Taha Abdel Wahid, Taha Abdel-Karim
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Unsteady exact analytical solutions; Partial differential equations system; Travelling wave method; Moment method; Boltzmann kinetic equation; Neutral non-homogeneous gas; Thermal radiation force; Non-equilibrium irreversible thermodynamics; Internal energy.
In the present paper, the effect of non-linear thermal radiation on a neutral gas mixture in the unsteady state is investigated for the first time. The unsteady BGK model of the Boltzmann kinetic equations for a neutral non-homogeneous gas is solved. Solving the unsteady case gives the problem more general significance than the stationary one. For this purpose, the moment method, together with the travelling wave method, is applied. The temperature and concentration are calculated for each gas component and for the mixture for the first time. Furthermore, the study covers a broad range of the temperature ratio parameter and a wide range of the molar fraction. The distribution functions are calculated for each gas component and for the gas mixture. The significant non-equilibrium irreversible thermodynamic characteristics of the entire system are obtained analytically. This technique allows us to investigate the consistency of Boltzmann's H-theorem, the Le Chatelier principle, and the laws of thermodynamics. Moreover, the ratios among the different contributions to the internal energy change are evaluated via the Gibbs formula for total energy. The results are applied to the argon-helium non-homogeneous gas mixture at different magnitudes of the radiation force strength and molar fraction parameters. 3D graphics are presented to predict the behavior of the calculated variables, and the obtained results are discussed theoretically.
Effect of Sulfur Content on Copper Recovery in the Reduction Smelting Process
Long Wu, Hongyang Wang, Kai Dong
Subject: Materials Science, Metallurgy Keywords: Copper recovery; Thermodynamic equilibrium; Reduction experiment
This work discusses the advantages of reducing copper in molten copper slag with low S content. FactSage was used to calculate the distribution of copper at equilibrium under different sulfur contents, and the effect of sulfur content on copper recovery under different oxygen partial pressures was determined. The effect of sulfur content on copper recovery in the actual reduction process was explored experimentally. Under low-sulfur conditions, both the copper recovery and the stability of the experiment gave ideal results.
Finite Rate Reaction Mechanism Adapted for Modeling Pseudo-Equilibrium Pyrolysis of Cellulose
Tomas Mora
Subject: Engineering, Energy & Fuel Technology Keywords: cellulose; pyrolysis; chemical equilibrium; chemical kinetics
This work concerns the modeling of cellulose pyrolysis with a pseudo-equilibrium approach. The objective is to model the kinetics of cellulose pyrolysis with a semi-global mechanism obtained from the literature, in order to obtain the yield and the rate of formation of char. The pseudo-equilibrium approach assumes that the solid-phase devolatilization can be described kinetically, at finite rate, preserving the competition between the production of char and tar, while the gas phase can be described by means of chemical equilibrium. A set of ordinary differential equations, both linear and nonlinear, was obtained and solved numerically with a simple but consistent scheme using the totally implicit Euler method. Chemical equilibrium was solved using CANTERA coupled with a code written in Matlab. Results showed that the scheme preserves the tar-gas competitive characteristic of cellulose pyrolysis. The gas phase, defined as a mixture of CO2, CO, H2O, CH4, H2 and N2, showed a composition similar to models from the literature. Finally, the extension of the model to biomass in general, including hemicellulose and lignin, is straightforward.
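The finite-rate, implicit-Euler step the abstract describes can be sketched for a hypothetical two-channel (tar/char) competition; the rate constants below are placeholders, not the semi-global mechanism actually used.

```python
# Backward (fully implicit) Euler for the competitive consumption of
# cellulose, dC/dt = -(k_tar + k_char) * C; the linear implicit update
# has the closed form C_new = C / (1 + dt * (k_tar + k_char)).
k_tar, k_char = 2.0, 0.5        # 1/s, competing channels (illustrative)
dt, t_end = 0.01, 5.0

C, tar, char = 1.0, 0.0, 0.0    # initial cellulose mass fraction
for _ in range(int(t_end / dt)):
    C_new = C / (1.0 + dt * (k_tar + k_char))
    tar += k_tar * C_new * dt   # implicit source terms use C_new
    char += k_char * C_new * dt
    C = C_new

print(f"char yield = {char:.4f} (analytic {k_char/(k_tar+k_char):.4f})")
# The competitive split is preserved: yields converge to
# k_i / (k_tar + k_char), with only first-order error in the step size.
```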
Investigation of Propane and n-Butane Hydrate Formation Conditions and Determination of Equilibrium Pressure
Somayeh Salehfekr, Sajjad Porgar, Nejat Rahmanian
Subject: Engineering, Biomedical & Chemical Engineering Keywords: hydrate; propane; normal butane; equilibrium pressure
The purpose of this study is to determine the equilibrium conditions, including temperature, pressure and mole fraction, for the formation of hydrates of a mixture of propane and normal butane. In order to prevent the formation of hydrates in the cooling path, it is necessary to examine the conditions of hydrate formation and provide solutions. Modeling of the hydrate formation conditions was performed using the Hydoff software and compared with experimental results in this field, yielding an acceptable error percentage. The temperature range is 267-276 K, the molar percentages of propane are 0.7, 0.8 and 0.9, and a mathematical equation was presented to predict hydrate formation. Analysis of the results showed that increasing the concentration of ethane in the presence of other compounds increased hydrate growth and produced more stable hydrates, and that increasing the concentrations of propane and normal butane decreases the equilibrium pressure.
Fatty Acid Vesicles as Hard UV-C Shields for Early Life
Iván Lechuga, Karo Michaelian
Subject: Chemistry, Organic Chemistry Keywords: ultraviolet shield; protocell; fatty acid vesicles; origin of life; dissipative structuring; prebiotic chemistry; abiogenesis; non-equilibrium thermodynamics; thermodynamic dissipation theory; Mie scattering.
Theories on life's origin generally acknowledge the advantage of a semi-permeable vesicle (protocell) for enhancing the chemical reaction-diffusion processes involved in abiogenesis. However, more and more evidence indicates that the origin of life concerned the photochemical dissipative structuring of the fundamental molecules under UV-C light. In this paper, we analyze the Mie UV scattering properties of such a vesicle made from long-chain fatty acids. We find that the vesicle could have provided early life with a shield from the faint, but dangerous, hard UV-C ionizing light (180-210 nm) that probably bathed Earth's surface from before the origin of life until perhaps 1,500 million years afterwards, when a protective ozone layer formed as a result of the evolution of oxygenic photosynthesis.
Second Law and Non-Equilibrium Entropy of Schottky Systems: Doubts and Verification
Wolfgang Muschik
Subject: Physical Sciences, General & Theoretical Physics Keywords: Non-equilibrium entropy; Schottky systems; Inert partition; Second law; Contact temperature; Entropy-free thermodynamics; Defining inequalities; Adiabatical uniqueness; Clausius inequality of open systems
Meixner's historical remark in 1969, "... it can be shown that the concept of entropy in the absence of equilibrium is in fact not only questionable but that it cannot even be defined....", is investigated from today's insight. Several statements, such as the three laws of phenomenological thermodynamics, the embedding theorem and the adiabatical uniqueness, are used to get rid of non-equilibrium entropy as a primitive concept. In this framework, the Clausius inequality of open systems can be derived by use of the defining inequalities which establish the non-equilibrium quantities of contact temperature and non-equilibrium molar entropy, allowing the interaction between a Schottky system and its controlling equilibrium environment to be described.
Stability and Boundedness Properties of a Rational Exponential Difference Equation
J. Leo Amalraj, M. Maria Susai Manuel, Adem Kılıçman, D. S. Dilip
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: boundedness; equilibrium; global asymptotic stability; Rational Equation
This article aims to discuss the stability and boundedness character of the solutions of the rational equation of the form $y_{t+1}=\frac{\nu\epsilon^{-y_t}+\delta\epsilon^{-y_{t-1}}}{\mu+\nu y_t+\delta y_{t-1}}$, $t\in N(0)$. Here, $\epsilon>1$, $\nu,\delta,\mu\in (0,\infty)$, $y_0, y_1$ are taken as arbitrary non-negative reals, and $N(a)=\{a,a+1,a+2,\cdots\}$. Relevant examples are provided to validate our results. The exactness is tested using MATLAB.
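Since the recurrence is stated explicitly, its boundedness and convergence behavior can be eyeballed by direct iteration; the parameter values below are arbitrary choices satisfying the stated constraints (ε > 1 and positive ν, δ, μ).

```python
import numpy as np

# Direct iteration of y_{t+1} = (nu*eps^{-y_t} + delta*eps^{-y_{t-1}})
#                               / (mu + nu*y_t + delta*y_{t-1}).
eps, nu, delta, mu = 2.0, 1.0, 1.5, 0.5
y = [0.3, 2.0]                          # arbitrary non-negative y0, y1

for t in range(1, 60):
    y_next = (nu * eps**(-y[t]) + delta * eps**(-y[t - 1])) / \
             (mu + nu * y[t] + delta * y[t - 1])
    y.append(y_next)

print("first terms:", np.round(y[:6], 4))
print("late terms :", np.round(y[-3:], 6))
# The iterates stay non-negative and bounded, and the late terms hover
# near the positive equilibrium y* solving
# y* = (nu + delta) * eps**(-y*) / (mu + (nu + delta) * y*).
```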
Stability and Periodic Nature of a System of Difference Equations
Erkan Taşdemir
Subject: Physical Sciences, Mathematical Physics Keywords: difference equations; equilibrium points; stability; periodicity; invariant
In this paper, we investigate the equilibrium points of the following system of difference equations: $x_{n+1} = x_n^2 y_{n-1}$, $y_{n+1} = y_n^2 x_{n-1}$. We also study the asymptotic stability of the related system of difference equations. Further, we examine the period-two solutions of the related system. Additionally, we find the invariant interval and periodic cycles of the related system of difference equations.
Strut-and-Tie Models of Masonry Shear Walls
Radosław Jasiński
Subject: Engineering, Civil Engineering Keywords: masonry shear walls; ST models; equilibrium models
This paper presents the theoretical fundamentals of strut-and-tie models used in unreinforced horizontal shear walls. Depending on the support conditions and wall loading, we can distinguish models with discrete bars, when a point load is applied to the wall (type I model), or with continuous bars (type II model), when the load is uniformly distributed at the wall boundary. The main part of this paper compares calculated results with the authors' own tests on horizontal shear walls made of solid brick, silicate elements and autoclaved aerated concrete. The tests were performed in Poland. The model required some modifications due to the specific load and static scheme.
Dynamic Complexity Measures: Definition and Calculation
José Roberto C. Piqueira
Subject: Physical Sciences, Other Keywords: complexity; disequilibrium; equilibrium; individual information; informational entropy
This work is a generalization of the Lopez-Ruiz, Mancini and Calbet (LMC) and Shiner, Davison and Landsberg (SDL) complexity measures, considering that the state of a system or process is represented by a dynamical variable during a certain time interval. As the two complexity measures are based on the calculation of informational entropy, an equivalent information source is defined and, as time passes, the individual information associated with the measured parameter is used to calculate instantaneous LMC and SDL measures. To show how the methodology works, an example with economic data is presented.
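A compact sketch of the two underlying measures for a single snapshot distribution, using the standard definitions (normalized Shannon entropy H, disequilibrium D, LMC = H·D, SDL of order (1,1) = H·(1−H)); the time-windowed, dynamical version proposed in the paper is not reproduced here.

```python
import numpy as np

def lmc_sdl(p):
    """LMC and SDL complexities of a discrete distribution p over N states."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    N = len(p)
    nonzero = p[p > 0]
    H = -(nonzero * np.log(nonzero)).sum() / np.log(N)  # normalized entropy
    D = ((p - 1.0 / N) ** 2).sum()                      # disequilibrium
    return H * D, H * (1.0 - H)

for label, p in [("uniform", [0.25] * 4),
                 ("peaked ", [0.97, 0.01, 0.01, 0.01]),
                 ("mixed  ", [0.6, 0.2, 0.15, 0.05])]:
    C, G = lmc_sdl(p)
    print(f"{label}: LMC = {C:.4f}, SDL = {G:.4f}")
# Both measures vanish at perfect order (delta distribution) and at
# equilibrium (uniform distribution) and peak in between, which is the
# property the generalization to dynamical variables exploits.
```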
Entropy Production as the Origin of Information Encoding in RNA and DNA
Julián Mejía, Karo Michaelian
Subject: Life Sciences, Biophysics Keywords: entropy; entropy production; non-equilibrium thermodynamics; information encoding; nucleic acids; DNA; RNA; origin of life; origin of codons; amino acids; stereochemical era; photon potential
Ultraviolet light incident on organic material can initiate its spontaneous dissipative structuring into chromophores which can catalyze their own replication. This may have been the case for one of the most ancient of all chromophores dissipating the Archean UVC photon flux, the nucleic acids. Oligos of nucleic acids with affinity to particular amino acids which foment UVC photon dissipation would have been selected through non-equilibrium thermodynamic imperatives which favor entropy production. Indeed, we show here that those amino acids with characteristics most relevant to fomenting UVC photon dissipation are precisely those with greatest chemical affinity to their codons or anticodons. Entropy production could thus provide an explanation for the accumulation of information in nucleic acids relevant to the dissipation of the externally imposed thermodynamic potentials. The accumulation of information in this manner provides a link between evolution and entropy production.
Characteristic Length Scale during the Time Evolution of a Turbulent Bose-Einstein Condensate
Lucas Madeira, Arnol D. García-Orozco, Michelle A. Moreno-Armijos, Francisco Ednilson Alves dos Santos, Vanderlei S. Bagnato
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: quantum turbulence; Bose-Einstein condensate; out-of-equilibrium
Quantum turbulence is characterized by many degrees of freedom interacting non-linearly to produce disordered states, both in space and time. The advances in trapping, cooling, and tuning the interparticle interactions in atomic Bose-Einstein condensates (BECs) make them excellent candidates for studying quantum turbulence. In this work, we investigate the decaying regime of quantum turbulence in a trapped BEC. Although much progress has been made in understanding quantum turbulence, other strategies are needed to overcome some intrinsic difficulties. We present an alternative way of investigating this phenomenon by defining and computing a characteristic length scale, which possesses relevant characteristics to study the establishment of the quantum turbulent regime. One intrinsic difficulty related to these systems is that absorption images of BECs are projected to a plane, thus eliminating some of the information present in the original momentum distribution. We overcome this difficulty by exploring the symmetry of the cloud, which allows us to reconstruct the three-dimensional momentum distributions with the inverse Abel transform. We present our analysis with both the two- and three-dimensional momentum distributions, discussing their similarities and differences. We argue that the characteristic length allows us to visualize the time evolution of the turbulent state intuitively.
The Generalized OTOC from Supersymmetric Quantum Mechanics: Study of Random Fluctuations from Eigenstate Representation of Correlation Functions
Sayantan Choudhury
Subject: Physical Sciences, General & Theoretical Physics Keywords: OTOC; Supersymmetry; Out-of-equilibrium quantum statistical mechanics
The concept of the out-of-time-ordered correlation (OTOC) function is treated as a very strong theoretical probe of quantum randomness, using which one can study both chaotic and non-chaotic phenomena in the context of quantum statistical mechanics. In this paper, we define a general class of OTOCs which can capture quantum randomness phenomena more completely. Further, we demonstrate an equivalent formalism of computation using a general time-independent Hamiltonian having a well-defined eigenstate representation for integrable supersymmetric quantum systems. We find that one needs to consider two new correlators, apart from the usual one, to have a complete quantum description. To visualize the impact of the given formalism we consider two well-known models, viz. the harmonic oscillator and the one-dimensional potential well, within the framework of supersymmetry. For the harmonic oscillator we obtain a similar periodic time dependence but dissimilar parameter dependences compared to the results obtained from both microcanonical and canonical ensembles in quantum mechanics without supersymmetry. On the other hand, for the one-dimensional potential well we find a significantly different time scale and parameter dependence compared to the results obtained from non-supersymmetric quantum mechanics. Finally, to establish the consistency of the prescribed formalism in the classical limit, we demonstrate the phase-space-averaged version of the classical OTOCs from a model-independent Hamiltonian, along with the two previously mentioned models.
Tools for a Circular Economy: Assessing Waste Taxation in a CGE Multi-Pollutant Framework
Jaume Freire-González, Veronica Martinez-Sanchez, Ignasi Puig-Ventosa
Subject: Social Sciences, Accounting Keywords: Environmental taxes; computable general equilibrium; environmental impacts; waste
Economic theory states that incineration and landfill taxation can effectively diminish the environmental impacts of pollution and resource use by reducing their associated pollutants while stimulating the reuse and recycling of materials, and therefore, fostering a circular economy. The aim of this research is to assess the economic and environmental effects of these taxes in Spain under different scenarios with a detailed dynamic computable general equilibrium (CGE) model, as there are no studies analyzing this in detail. We focus on the economic impact on GDP and sectorial production and the environmental impact on different categories: global warming potential, marine eutrophication potential, photochemical ozone formation potential, particulate matter, human toxicity (cancer and noncancer), ecotoxicity, and depletion of fossil resources. We find in all scenarios that these taxes have a limited economic impact while reducing all of the environmental impact categories analyzed. The study reinforces the theory that policy makers need to impose taxes on landfill and incineration to reinforce the circularity of the economy and reduce environmental burdens, but also demonstrates that they can improve their design without additional costs.
Equilibrium Model of Movable Elements of Micromechanical Devices with Internal Suspensions
Olga Ezhova, Igor Lysenko, Boris Konoplev, Filipp Bondarev
Subject: Engineering, Other Keywords: micromechanical mirrors; equilibrium model; electrostatic actuators; criteria; coefficient.
In this work, an equilibrium model of the mirror elements of micromechanical components is developed, the behavior of the mirror element of micromechanical mirrors under changing control voltages of the electrostatic actuators is analyzed, and an expression is obtained for the maximum deflection voltage at which the snap-down effect occurs, taking into account the electrostatic rigidity coefficient of the electrostatic actuators. The developed equilibrium model of the mirror elements and the obtained modeling results can be used in the design of micromechanical mirrors with internal suspensions.
Consensus towards Partially Cooperative Strategies in Self-regulated Evolutionary Games on Networks
Dario Madeo, Chiara Mocenni
Subject: Keywords: Evolutionary Games; Cooperation; Consensus; Dynamics on Networks; Stag-Hunt Game; Chicken Game; Mixed Nash Equilibrium; Self-regulation; Stable Equilibrium; Complex Systems
Cooperation is widely recognized to be challenging for the well-balanced development of human societies. The emergence of cooperation in populations has been largely studied in the context of the Prisoner's Dilemma game, where the temptation to defect and the fear of being betrayed by others often activate defective strategies. In this paper we analyze the decision-making mechanisms fostering cooperation in the two-strategy Stag-Hunt and Chicken games, which include the mixed-strategy Nash equilibrium describing partially cooperative behavior. We find the conditions for which cooperation is asymptotically stable in both the full and partial cases, and we show that the partially cooperative steady state is also globally stable in the simplex. Furthermore, we show that the latter can be more rewarding than the former, making the mixed strategy effective, although people cooperate at a lower level than the maximum allowed, as is reasonably expected in real situations. Our findings highlight the importance of the Stag-Hunt and Chicken games in understanding the emergence of cooperation in social networks.
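To see where the mixed-strategy equilibrium sits, one can run plain replicator dynamics for a Stag-Hunt game with an illustrative payoff matrix. Note that in this classical, non-networked sketch the interior equilibrium is unstable (the game is bistable); the paper's self-regulation mechanism is what renders partial cooperation stable.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative Stag-Hunt payoffs; x is the cooperator (stag) share. The
# mixed Nash equilibrium x* sits where both strategies earn equal payoff.
A = np.array([[4.0, 1.0],    # payoffs: (C,C), (C,D)
              [3.0, 2.0]])   #          (D,C), (D,D)

x_star = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] + A[1, 1] - A[0, 1])
print("mixed NE x* =", x_star)

def replicator(t, y):
    x = y[0]
    f_C = A[0, 0] * x + A[0, 1] * (1 - x)   # payoff to cooperators
    f_D = A[1, 0] * x + A[1, 1] * (1 - x)   # payoff to defectors
    return [x * (1 - x) * (f_C - f_D)]

for x0 in (x_star - 0.05, x_star + 0.05):
    sol = solve_ivp(replicator, (0, 50), [x0], rtol=1e-9)
    print(f"x0 = {x0:.2f} -> x(50) = {sol.y[0, -1]:.4f}")
# Trajectories starting just below x* collapse to full defection (x = 0)
# and just above to full cooperation (x = 1), bracketing the mixed NE.
```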
Economic Evaluation of Large-Scale Biorefinery Deployment: A Framework Integrating Dynamic Biomass Market and Techno-Economic Models
Jonas Zetterholm, Elina Bryngemark, Johan Ahlström, Patrik Söderholm, Simon Harvey, Elisabeth Wetterlund
Subject: Engineering, Energy & Fuel Technology Keywords: supply chain; partial equilibrium; biofuel; soft-linking; dynamic prices
Biofuels and biochemicals play significant roles in the transition towards a fossil-free society. However, large-scale biorefineries are not yet cost-competitive with their fossil-fuel counterparts, and it is important to identify biorefinery concepts with high economic performance. For evaluating early-stage biorefinery concepts, one needs to consider not only the technical performance and process costs but also the economic performance of the full supply chain and the impacts on feedstock and product markets. This article presents and demonstrates a conceptual interdisciplinary framework that can constitute the basis for evaluations of the full supply-chain performance of biorefinery concepts. This framework considers the competition for biomass across sectors, assumes exogenous end-use product demand, and incorporates various geographical and technical constraints. The framework is demonstrated empirically through a case study of a sawmill-integrated biorefinery producing liquefied biomethane from forestry and forest industry residues. The case study results illustrate that acknowledging biomass market effects in the supply chain evaluation implies changes in both biomass prices and the allocation of biomass across sectors. The proposed framework should facilitate the identification of biorefinery concepts with a high economic performance which are robust to feedstock price changes caused by the increase in biomass demand.
Liquid-Liquid Equilibrium Behavior and In Vitro Digestion Simulation of Medium Chain Fatty Acids
Ericsem Pereira, Antonio J. A. Meirelles, Guilherme J. Maximo
Subject: Chemistry, Food Chemistry Keywords: phase equilibrium; in vitro lipid digestion; fats and oils
The absorption of medium-chain fatty acids (MCFA) depends on the solubility of these components in the gastric fluid. Parameters such as the total MCFA concentration, carboxyl ionization level, and carbon chain length affect the solubility of these molecules. Moreover, the enzymatic lipolysis of solubilized triacylglycerol (TAG) molecules may depend on the carbon chain length of the fatty acid (FA) components and their positions on the glycerol backbone. The present study aimed at investigating the effect of electrolytes usually formed during the gastric digestion phase on the solubility of MCFA, and at evaluating the influence of the FA carbon chain length on the lipolysis rate during in vitro digestion simulation. The results showed that increasing electrolyte concentrations tends to decrease the mutual solubility of systems composed of caproic and caprylic fatty acids + sodium chloride, sodium bicarbonate, and potassium chloride solutions. We also observed that a conventional version of the thermodynamic UNIQUAC model was able to correlate the liquid-liquid phase behavior of the electrolyte solutions. Regarding the in vitro digestion simulation, the experimental data indicated that the action of the pancreatic enzyme occurred preferentially on TAG molecules comprised of short- and medium-chain fatty acids.
Adapted or Adaptable: How to Manage Entropy Production?
Christophe Goupil, Eric Herbert
Subject: Physical Sciences, Other Keywords: out of Equilibrium Thermodynamics; Finite Time Thermodynamics; Living Systems
Adaptable or adapted? Whether it is a question of physical, biological or even economic systems, this problem arises when all these systems are the site of matter and energy conversion. To this interdisciplinary question we propose a theoretical framework based on the two principles of thermodynamics. Considering a finite-time linear thermodynamic approach, we show that non-equilibrium systems operating in a quasi-static regime are quite deterministic as long as boundary conditions are correctly defined. The Novikov-Curzon-Ahlborn approach [1,2] applied to non-endoreversible systems then makes it possible to precisely determine the conditions for obtaining characteristic operating points. As a result, the power maximization principle (MPP), the entropy minimization principle (mEP), efficiency maximization, and waste minimization states are only specific modalities of system operation. We show that boundary conditions play a major role in defining operating points because they define the intensity of the feedback that ultimately characterizes the operation. Armed with these thermodynamic foundations, we show that the intrinsically most efficient systems are also the most constrained in terms of controlling the entropy and dissipation production. In particular, we show that the best figure of merit necessarily leads to a vanishing production of power. On the other hand, a class of systems emerges which, although they do not offer extreme efficiency or power, have a wide range of use and therefore marked robustness. It therefore appears that the number of degrees of freedom of the system leads to an optimization of the allocation of entropy production.
Investigation of the Effects of Steam Injection on Equilibrium Products and Thermodynamic Properties of Diesel and Biodiesel Fuels
Jean Paul Gram Shou, Marcel Obounou, Timoléon Crépin Kofané, Mahamat Hassane Babikir
Subject: Engineering, Energy & Fuel Technology Keywords: chemical equilibrium products; combustion; biodiesel; diesel; steam injection method
The use of biodiesel fuels in compression ignition engines decreases CO, PM, HC and smoke opacity. However, NOx emissions increase significantly. Various methods are used to reduce NOx, namely EGR, water injection, and steam injection. In this study, the steam injection method is used instead of the other methods because of its benefits. This study examines the effects of steam injection on the combustion products of diesel and biodiesel fuels by considering chemical equilibrium, determining the equilibrium composition over 10 combustion products. A simulation code was developed that determines the equilibrium mole fractions and thermodynamic properties of the combustion products of diesel and biodiesel fuels; it can be used for any fuel consisting of carbon and hydrogen, or any oxygenated fuel. The results show that the mole fractions of CO2 and CO decrease with the steam injection ratio. NO mole fractions decrease with steam injection for lean mixtures but increase slightly for rich mixtures. The specific heat of the combustion products increases with the steam injection ratio; thus, engine performance can be improved using this method. The model was validated by comparing its results with those of the NASA CEA and GASEQ software using methane as the fuel. The relative errors of the equilibrium mole fractions and thermodynamic properties of the combustion products are less than 0.98%.
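The methane validation case lends itself to a short, hedged sketch using Cantera's GRI-Mech 3.0 in place of the authors' own code; the steam-ratio definition here (kg steam per kg fuel) is an assumption for illustration.

```python
import cantera as ct

# Equilibrium combustion products of methane with added steam, computed
# with Cantera (not the authors' simulation code).
gas = ct.Solution("gri30.yaml")
phi, steam_ratio = 1.0, 0.2        # equivalence ratio; kg steam / kg fuel

gas.TP = 298.15, ct.one_atm
gas.set_equivalence_ratio(phi, "CH4", "O2:1, N2:3.76")
# Add steam in proportion to the fuel mass fraction (assumed definition):
y = gas.mass_fraction_dict()
y["H2O"] = y.get("H2O", 0.0) + steam_ratio * y["CH4"]
gas.TPY = 298.15, ct.one_atm, y

gas.equilibrate("HP")              # constant-enthalpy, constant-pressure
print(f"adiabatic flame T = {gas.T:.0f} K")
for sp in ("CO2", "CO", "NO", "H2O"):
    print(f"x_{sp} = {gas[sp].X[0]:.4e}")
# Raising steam_ratio lowers the flame temperature and, for lean mixtures,
# the NO mole fraction, the qualitative trend reported in the abstract.
```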
Stability Analysis and Semi-analytic Solution to a SEIR-SEI Malaria Transmission Model Using He's Variational Iteration Method
Kingsley Timilehin Akinfe, Adedapo Chris Loyinmi
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: SEIR-SEI; Basic Reproduction number; Disease-Free equilibrium point (DFE); Endemic equilibrium point; Stability; Variational iteration method (VIM); Runge-Kutta-Fehlberg (RKF-45)
We consider a SEIR-SEI vector-host mathematical model which captures malaria transmission dynamics, described and built on a 7-dimensional system of nonlinear ordinary differential equations. We compute the basic reproduction number of the model and examine the positivity and boundedness of the model compartments in a region, using well-established methods, viz. Cauchy's differential theorem and Birkhoff & Rota's theorem, which verify the well-posedness and carrying capacity of the model, respectively. The existence of the disease-free (DFE) and endemic (EDE) equilibrium points is determined and examined. Using the Gaussian elimination method and the Routh-Hurwitz criterion, we carry out stability analyses at the DFE and EDE points, which indicate that the DFE (malaria-free) and EDE (epidemic outbreak) states occur when the basic reproduction number is less than unity and greater than unity, respectively. We obtain a solution to the model using the variational iteration method (VIM) for each population compartment, and verify the efficacy, reliability and validity of the proposed method by comparing the respective solutions, via tables and combined plots, with the built-in Runge-Kutta-Fehlberg method of fourth-fifth order (RKF-45). We illustrate the combined plot profiles of each compartment in the model, showing the dynamic behavior of these compartments, and conclude that VIM is efficient and capable of conducting analysis on malaria models and other epidemiological models.
Nestedness-Based Measurement of Evolutionarily Stable Equilibrium of Global Production System
Jiaqi Ren, Yu Han, Lizhi Xing, Xianlei Dong
Subject: Social Sciences, Econometrics & Statistics Keywords: global value chain; global economic integration; nestedness; evolutionarily stable equilibrium
Nestedness is a structural feature, formed by the co-evolution of species in mutualistic ecosystems, that is conducive to system stability, and it reflects an ecosystem's ability to return to a stable state after being disturbed. The co-opetition relationships and value flows between industrial sectors in the global value chain are similar to a mutualistic ecosystem, and the pattern of the global economic system is always changing in dynamic equilibrium. In this article, nestedness theory is used to define the generalist and specialist sectors in the global value chain and to analyze changes in the global supply pattern. We then study the mechanism by which the global economic system reaches a stable equilibrium and the role of different sectors in the steadiness of the economic system, so as to provide countermeasures for enhancing the stability of the global economic system. At the end of the article, the domestic, export and import trade networks of each country are extracted, and an econometric model is designed to analyze how the microstructure of the production system affects a country's macroeconomic performance, concluding that the stability of the international trade network is crucial to a country's economic development.
Experimental Phase Equilibria and Isopleth Section of 8Nb-TiAl Alloys
Yong Xu, Yongfeng Liang, Lin Song, Guojian Hao, Bin Tian, Rongfu Xu, Junpin Lin
Subject: Materials Science, Biomaterials Keywords: Titanium-Aluminum-Niobium; Phase Diagram; Vertical Section; Equilibrium Relation; CALPHAD
The 8Nb isopleth section of a Ti-Al-Nb system is experimentally determined based on thermal analysis and thermodynamic calculation methods to obtain the phase transformation and equilibrium relations required for material design and fabrication. The phase transus and relations for the 8Nb-TiAl system show some deviations from the calculated thermodynamic results. The ordered βo phase transforms from the disordered β/α phases at 1200–1400 °C over a large Al concentration range, and this transformation is considered to be an intermediate type between the first- and second-order phase transitions. Moreover, the βo phases are retained at the ambient temperature in the 8Nb-TiAl microstructures. The ωo phase transforms from the highly ordered βo phase, rather than from α2 or βo with low degree of atom ordering B2 (LOB2) structure, with Al concentration of 32–43 at.% at approximately 850 °C. From the experimental detection, the transition of the ωo phase from the βo phase is considered to be a further ordering process.
Pareto Efficiency of Mixed Quantum Strategy Equilibria
Marek Szopa
Subject: Social Sciences, Accounting Keywords: game theory; quantum games; Nash equilibrium; Pareto-efficiency; correlated equilibria
The aim of the paper is to investigate Nash equilibria and correlated equilibria of classical and quantum games in the context of their Pareto optimality. We study four games: the prisoner's dilemma, the battle of the sexes and two versions of the game of chicken. The correlated equilibria usually improve on the Nash equilibria of games but require a trusted correlation device. We analyze the quantum extension of these games in the Eisert-Wilkens-Lewenstein formalism with the full SU(2) space of players' strategy parameters. It is shown that the Nash equilibria of these games in quantum mixed Pauli strategies are closer to Pareto-optimal results than their classical counterparts. The relationship between mixed Pauli strategy equilibria and correlated equilibria is also analyzed.
The Hydromechanical Interplay in the Three-dimensional Limit Equilibrium Analyses of Unsaturated Slope Stability
Panagiotis Sitarenios, Francesca Casini
Subject: Engineering, Automotive Engineering Keywords: unsaturated slope; Ruedlingen field experiment; lateral resistance; limit equilibrium solution
The paper presents a three-dimensional slope stability limit equilibrium solution for translational, planar failure modes. The proposed solution uses Bishop's average skeleton stress combined with the Mohr-Coulomb failure criterion to describe soil strength evolution under unsaturated conditions, while its formulation ensures a natural and smooth transition from the unsaturated to the saturated regime and vice versa. The proposed analytical solution is evaluated by comparing its predictions with the results of the Ruedlingen slope failure experiment [1]. The comparison suggests that, despite its relative simplicity, the analytical solution can capture the experimentally observed behaviour well, and it highlights the importance of considering lateral resistance together with a realistic interplay between mechanical (cohesion) parameters and hydraulic (pore water pressure) conditions.
Entropy of a Turbulent Bose-Einstein Condensate
Lucas Madeira, Arnol Daniel García-Orozco, Francisco Ednilson Alves dos Santos, Vanderlei Salvador Bagnato
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: quantum turbulence; Bose-Einstein condensate; out-of-equilibrium; particle cascade
Quantum turbulence deals with the phenomenon of turbulence in quantum fluids, such as superfluid helium and trapped Bose-Einstein condensates (BECs). Although much progress has been made in understanding quantum turbulence, several fundamental questions remain to be answered. In this work, we investigated the entropy of a trapped BEC in several regimes, including equilibrium, small excitations, the onset of turbulence, and a turbulent state. We considered the time evolution when the system is perturbed and left to evolve after the external excitation is turned off. We derived an expression for the entropy consistent with the accessible experimental data, that is, assuming that the momentum distribution is well known. We related the excitation amplitude to the different stages of the perturbed system, and we found distinct features of the entropy in each of them. In particular, we observed a sudden increase in the entropy following the establishment of a particle cascade. We argue that entropy and related quantities can be used to investigate and characterize quantum turbulence.
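A generic illustration (not the paper's exact functional) of how a Shannon-type entropy computed from a normalized momentum distribution rises when a power-law "particle cascade" tail appears.

```python
import numpy as np

# Shannon-type entropy of a 1D momentum distribution n(k); the cascade
# tail added to the thermal-like profile and its parameters (cutoff at
# k = 1, exponent -2.6) are illustrative assumptions.
k = np.linspace(0.1, 10, 500)
dk = k[1] - k[0]

def entropy(n_k):
    p = n_k / (n_k.sum() * dk)          # normalize to a probability density
    p = np.clip(p, 1e-300, None)        # guard the logarithm at zero
    return float(-(p * np.log(p)).sum() * dk)

n_thermal = np.exp(-k**2)
n_turbulent = n_thermal + np.where(k > 1, 0.05 * k**-2.6, 0.0)

print("S_thermal   =", round(entropy(n_thermal), 4))
print("S_turbulent =", round(entropy(n_turbulent), 4))
# The cascade tail spreads probability over more momentum states and the
# entropy increases, mirroring the sudden rise the abstract associates
# with the establishment of the particle cascade.
```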
On the Solutions of Four Rational Difference Equations Associated to Tribonacci Numbers
İnci Okumuş, Yüksel Soykan
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equations, solution, equilibrium point, tribonacci number, global asymptotic stability
In this study, we investigate the form of the solutions, the stability character and the asymptotic behavior of the following four rational difference equations x_{n+1} = 1/(x_n(x_{n-1} ± 1) ± 1) and x_{n+1} = −1/(x_n(x_{n-1} ± 1) ∓ 1), whose solutions are associated with Tribonacci numbers.
Mathematical Analysis of Transfusion—Transmitted Malaria Model with Optimal Control
Michael Olaniyi Adeniyi, Oluwaseun Raphael Aderele
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: malaria; transfusion–transmitted; basic reproduction number; stability; equilibrium; optimal control
An SIRS (Susceptible-Infected-Removed-Susceptible) mathematical model for the transmission dynamics of Transfusion-Transmitted Malaria (TTM) with the optimal control pair u1(t) and u2(t) was developed and studied in this research work. The model exhibits two equilibria: the disease-free and the endemic equilibrium point. It is shown that the disease-free equilibrium is locally asymptotically stable if the associated basic reproduction number R0 is less than unity, while the disease persists if R0 is greater than unity. The global stability of the Transfusion-Transmitted Malaria model at the disease-free equilibrium was established using the comparison method. The optimality system was derived, and an optimal control model of blood screening and drug treatment for the Transfusion-Transmitted Malaria model was investigated. Conditions for the optimal control were considered using Pontryagin's Maximum Principle and solved numerically using the Forward and Backward Finite Difference Method (FBDM). The numerical results obtained are in perfect agreement with our analytical results.
Ex Post Nash Equilibrium in Games for Decision Making in Multi-environments
Abbas Edalat, Samira Hossein Ghorban, Ali Ghoroghi
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: Bayesian game, Ex post Nash equilibrium, Prisoner's Dilemma, Trust Game
We employ the solution concept of ex post Nash equilibrium to predict the interaction of a finite number of agents competing in a finite number of basic games simultaneously. The competition is called a multi-game. For each agent, a specific weight, considered as private information, is allocated to each basic game representing its investment in that game and the utility of each agent for any strategy profile is the weighted sum, i.e., convex combination, of its utilities in the basic games. Multi-games can model decision making in multi-environments in a variety of circumstances, including decision making in multi-markets and decision making when there are both material and social utilities for agents as, we propose, in the Prisoner's Dilemma and the Trust Game. Given a set of pure Nash equilibria, one for each basic game in a multi-game, we construct a pure Bayesian Nash equilibrium for the multi-game. We then focus on the class of so-called uniform multi-games in which each agent is constrained to play in all games the same strategy from an action set consisting of a best response per game. Uniform multi-games are equivalent to multi-dimensional Bayesian games where the type of each agent is a finite dimensional vector with non-negative components. A notion of pure type-regularity for uniform multi-games is developed and it is shown that a multi-game that is pure type-regular on the boundary of its type space has a pure ex post Nash equilibrium which is computed in constant time with respect to the number of the types and is independent of prior probability distributions. We then develop an algorithm, linear in the number of types of the agents, which tests if a multi-game is pure type-regular on the boundary of its type space in which case it returns a pure ex post Nash equilibrium for the multi-game.
Is Equilibrium Modelling Outdated for Recent Challenges in River Management?
Arianna Varrani, Michael Nones
Subject: Earth Sciences, Other Keywords: 1D modelling; large rivers; morphodynamic equilibrium; river concavity; bottom fining
To date, several different approaches, based on very different hypotheses, are available to study sediment dynamics at the reach or watershed scale. One such assumption, the so-called "morphodynamic equilibrium hypothesis", is becoming somewhat unpopular for its embedded simplifications. The aim of this work is to demonstrate that this approach remains effective in modelling landscape morphodynamics at the watershed scale, as concerns the longitudinal profile of a river and its sedimentary aspects. A 1-D model based on the equilibrium hypothesis has been applied to several large rivers worldwide. Geomorphological parameters describing the evolution of the longitudinal profile (concavity) and sediment characteristics (aggradation and fining) have been analysed, and the results show a reasonably good correspondence with qualitative estimations of the same parameters. At the scale of analysis and for the chosen systems, which show high inertia to geomorphological changes likely owing to their longitudinal extension, the model can detect where the present conditions reflect a large disturbance to the "natural equilibrium", thus allowing water managers to identify present issues to be addressed.
Linking of Financial Data with Non-Financial Information on CSR of Companies Listed on the Stock Exchange in Poland – Polish Case Study
Małgorzata Anna Węgrzyńska
Subject: Social Sciences, Accounting Keywords: CSR; non-financial reporting; non-financial disclosures
Reporting on CSR activities has become the essence of reporting for modern business entities, and particular attention is paid to public interest companies in this regard. The following paper therefore aims to answer the question of whether there are differences in the linguistic structure of the studied CSR reports across three selected industry indices on the Warsaw Stock Exchange (WSE) in Poland, i.e. the WIG-energy, WIG-fuel and WIG-mining indices, and how these relate to the performance of the selected companies. The study was conducted on a purposively selected sample of companies between 2013 and 2018. A total of 138 CSR reports and 138 annual separate financial statements prepared in accordance with international balance sheet law were collected. The study was carried out based on a panel regression model. It was found that the CSR reports contained similar average percentages of parts of speech such as nouns and adjectives. When linking the economic performance of companies, expressed with selected indices, to the information on the implementation of CSR concepts, it was revealed that the reports are more likely to describe business performance when it is satisfactory.
Stability and Stabilization of Ecosystem for Epidemic Virus Transmission Under Neumann Boundary Value Via Impulse Control
Ruofeng Rao
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Neumann boundary value; positive equilibrium point; Poincare inequality lemma; impulse control
In this paper, by using the variational method, a sufficient condition for the unique existence of the stationary solution of the reaction-diffusion ecosystem is obtained, which directly leads to the global asymptotic stability of the unique equilibrium point. Besides, employing the impulse control technique yields a globally exponential stability criterion for the delayed feedback ecosystem. Numerical examples illuminate the effectiveness of impulse control, which has a certain enlightening effect on actual epidemic prevention work. That is, in the face of an epidemic situation, taking positive and effective epidemic prevention measures at a certain frequency is conducive to the stability and control of the epidemic. In particular, the newly obtained theorems quantify this feasible step.
Deleterious Behaviorally Transmitted Traits in Equilibrium
Robert Shuler
Subject: Social Sciences, Accounting Keywords: culture; co-evolution; meme; altruism; natural selection; competitive equilibrium; Fermi Paradox; memetics; genetics
Abstract. Background: This paper investigates the propagation of behaviorally transmitted traits with negative effect on host fitness. Methods: We analyze equilibrium between genetically transmitted and behaviorally transmitted competing propagators and consider whether a behavioral propagator is linked to reproduction (e.g. vertical culture transmission) or not. We employ combined genetic and behavior-induced fitness components for hosts, while behavioral propagators have replication factors to distinguish them from what is good for the host (fitness). Results: A trait which spreads faster than the rate at which its negative marginal host fitness contribution reduces the population will establish itself. The often transient nature of laterally transmitted traits may be a defense against accumulation of deleterious traits. Laterally transmitted traits with high spreading rates often do not equilibrate with genetic traits, spreading outside natural selection of the hosts. Vertical transmission reduces replication rate and allows group selection against deleterious behaviorally transmitted traits. Competing mutually exclusive propagators contribute to inequality and altruism, but compete through adverse fitness since exclusivity assumes low conversion. Conclusion: Behaviorally transmitted traits, in some cases a tremendous advantage, may also be a significant problem in the development of societies.
Nonlinear Problems of Equilibrium Charge State Transport in Hot Plasmas
Vladimir A. Shurygin
Subject: Physical Sciences, Fluids & Plasmas Keywords: magnetically confined plasma; impurity; charge state; transport; coronal equilibrium; diffusion coefficient
The general coupling between particle transport and ionization-recombination processes in hot plasma is considered on the basis of the key concept of equilibrium charge state (CS) transport. A theoretical interpretation of particle and CS transport is gained in terms of two-dimensional (2D) Markovian stochastic (random) processes, a discrete 2D Fokker-Planck-Kolmogorov equation (in charge and space variables) and a generalized 2D coronal equilibrium between atomic processes and particle transport. The basic tool for analysis of CS equilibrium and transport is the equilibrium cell (EC) (two states in charge and two in space), which presents (i) a unit phase volume, (ii) the characteristic scale of local equilibrium, (iii) a comprehensive solution for the simplest nonlinear relations between transport and atomic processes. The approach opens up new perspectives on transport studies: (i) direct modelling of equilibrium and transport of impurities using the atomic database, (ii) recovery of the complete recombination rate profile based on knowledge of density profiles and ionization rate profiles, (iii) local transport analysis, based on the reduction of the equilibrium set to a single EC (in particular, central or edge), (iv) analysis of the reduced transport coefficients (diffusion and convection) based on the density profile measurements.
Predicted Electronic Commerce Helps China's Economy Remain Resilient – A Simulation-Based Analysis of the COVID-19 Pandemic Outbreak
Dong Yang, Hongxin Li
Subject: Social Sciences, Business And Administrative Sciences Keywords: Electronic Commerce Industry; Economic Impact; Computable General Equilibrium Model; COVID-19
(1) Background: To perform a simulation-based analysis of the 2019 COVID-19 outbreak and of how the electronic commerce industry would help China's economy remain resilient. (2) Methods: As the epidemic continues, it is possible to use a Computable General Equilibrium model to simulate the economic consequences and analyse the role of the electronic commerce industry in the COVID-19 outbreak. (3) Results: Estimates and models produced at the time of the outbreak suggested that COVID-19 could have a catastrophic effect on China's economy. National statistics were examined for anomalies that corresponded to the timing of the COVID-19 outbreak and, where possible, the size of any gain or loss found was estimated. Our analysis suggests that the electronic commerce industry could help China's economy to recover by stimulating consumption, improving the technological level and expanding investment. (4) Conclusions: This exercise holds important lessons for estimating electronic commerce's role in similar events – such as pandemic influenza – and measures to recover the economy. We suggest that the electronic commerce industry should be paid more attention in economic development, especially in epidemic times. The implications of our findings are discussed in the light of a prospective epidemic.
Trade Effects Based on Trade Equilibrium
Baoping Guo
Subject: Social Sciences, Economics Keywords: E; factor price equalization; Heckscher-Ohlin; equilibrium price; equalized factor price
The Rybczynski theorem describes the trade effect within production analyses between factor endowments and outputs. The Stolper-Samuelson theorem focuses on cost analyses between factor rewards and commodity prices. This paper examines the trade effect of changes of factor endowments on prices, based on general equilibrium. The study shows that changes of factor endowments cause domestic output changes (the Rybczynski effect), which affect output prices and factor prices (the Stolper-Samuelson effect). It is like a chain of effects in which the Rybczynski trade effect triggers the Stolper-Samuelson trade effect. The analysis of this paper shows that a small increase in a factor endowment of any country rewards another factor and the commodity using the latter factor intensively. It displays a virtuous circle. Trade brings a well-balanced development to the world.
The Monty Hall Problem as a Bayesian Game
Mark Whitmeyer
Subject: Social Sciences, Economics Keywords: Monty Hall; Equiprobability Bias; games of incomplete information; Bayes Nash Equilibrium
This paper formulates the classic Monty Hall problem as a Bayesian game. Allowing Monty a small amount of freedom in his decisions facilitates a variety of solutions. The solution concept used is the Bayes Nash Equilibrium (BNE), and the set of BNE relies on Monty's motives and incentives. We endow Monty and the contestant with common prior probabilities (p) about the motives of Monty, and show that under certain conditions on p, the unique equilibrium is one where the contestant is indifferent between switching and not switching. This coincides with the typical responses and explanations given by experimental subjects. Finally, we show that our formulation can explain the experimental results in Page (1998) [12]: that more people gradually choose to switch as the number of doors in the problem increases.
To Freeze or Not to Freeze? Epidemic Prevention and Control in the DSGE Model Using an Agent-Based Epidemic Component
Jagoda Kaszowska, Przemysław Włodarczyk
Subject: Social Sciences, Accounting Keywords: COVID-19; agent-based modelling; dynamic stochastic general equilibrium models; scenario analyses
The ongoing COVID-19 pandemic has raised numerous questions concerning the shape and range of state interventions whose goals are to reduce the number of infections and deaths. The lockdowns, which have become the most popular response worldwide, are assessed as being an outdated and economically inefficient way to fight the disease. However, in the absence of efficient cures and vaccines, there is a lack of viable alternatives. In this paper we assess the economic consequences of the epidemic prevention and control schemes that were introduced in order to respond to the COVID-19 pandemic. The analyses report the results of epidemic simulations obtained using agent-based modelling methods under the different response schemes, and their use to provide conditional forecasts of the standard economic variables. The forecasts were obtained using a DSGE model with a labour market component.
On the Solutions of Four Second-Order Nonlinear Difference Equations
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equations, form of solutions, equilibrium point, tribonacci number, global asymptotic stability.
This paper deals with the form, the stability character, the periodicity and the global behavior of solutions of the following four rational difference equations:
$$x_{n+1} = \frac{\pm 1}{x_{n}(x_{n-1}\pm 1)-1}, \qquad x_{n+1} = \frac{\pm 1}{x_{n}(x_{n-1}\mp 1)+1}.$$
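As a quick, illustrative complement (not from the paper), one branch of the recurrence can be iterated numerically; with the arbitrary initial values below, the plus-sign branch settles into a period-two cycle, the kind of behavior the abstract's periodicity results describe. The initial values and step count are assumptions chosen only to avoid the singular denominator.

```python
# Minimal numerical sketch (not from the paper) of one branch:
#   x_{n+1} = 1 / (x_n * (x_{n-1} + 1) - 1)
def iterate(x0: float, x1: float, steps: int = 12) -> list:
    xs = [x0, x1]
    for _ in range(steps):
        denom = xs[-1] * (xs[-2] + 1.0) - 1.0
        if abs(denom) < 1e-12:  # the recurrence is undefined here
            break
        xs.append(1.0 / denom)
    return xs

print(iterate(0.5, 2.0))  # alternates 0.5, 2.0, 0.5, ... : a period-two cycle
```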
Quantum Foundations of Classical Reversible Computing
Michael Frank, Karpur Shukla
Subject: Physical Sciences, Applied Physics Keywords: non-equilibrium quantum thermodynamics; thermodynamics of computing; Landauer's principle; Landauer limit; reversible computing; resource theory of quantum thermodynamics; Gorini-Kossakowski-Sudarshan-Lindblad dynamics; von Neumann entropy; Rényi entropy; open quantum systems
The reversible computation paradigm aims to provide a new foundation for general classical digital computing that is capable of circumventing the thermodynamic limits to the energy efficiency of the conventional, non-reversible paradigm. However, to date, the essential rationale for and analysis of classical reversible computing (RC) has not yet been expressed in terms that leverage the modern formal methods of non-equilibrium quantum thermodynamics (NEQT). In this paper, we begin developing an NEQT-based foundation for the physics of reversible computing. We use the framework of Gorini-Kossakowski-Sudarshan-Lindblad dynamics (a.k.a. Lindbladians) with multiple asymptotic states, incorporating recent results from resource theory, full counting statistics, and stochastic thermodynamics. Important conclusions include that, as expected: (1) Landauer's Principle indeed sets a strict lower bound on entropy generation in traditional non-reversible architectures for deterministic computing machines when we account for the loss of correlations; and (2) implementations of the alternative reversible computation paradigm can potentially avoid such losses, and thereby circumvent the Landauer limit, potentially allowing the efficiency of future digital computing technologies to continue improving indefinitely. We also outline a research plan for identifying the fundamental minimum energy dissipation of reversible computing machines as a function of speed.
α-Synuclein Amyloid Fibrils Investigation with the Use of Fluorescent Probe Thioflavin T
Anna I. Sulatskaya, Natalia P. Rodina, Maksim I. Sulatsky, Olga I. Povarova, Iuliia A. Antifeeva, Irina M. Kuznetsova, Konstantin K. Turoverov
Subject: Life Sciences, Molecular Biology Keywords: α-synuclein; amyloid fibrils; fibrillogenesis; thioflavin T; equilibrium microdialysis; binding parameters; structural polymorphism
In this work, α-synuclein amyloid fibrils, the formation of which is a biomarker of Parkinson's disease, were investigated with the use of the fluorescent probe thioflavin T (ThT). Experimental conditions of the protein fibrillogenesis were chosen so that a sufficient number of continuous measurements could be performed to characterize and analyze all stages of this process. The reproducibility of fibrillogenesis and the structure of the obtained aggregates (which is a critical point for their further investigation) were proved using a wide range of physical-chemical methods. For the determination of the binding parameters of ThT to α-synuclein amyloid fibrils, sample and reference solutions were prepared with the use of equilibrium microdialysis. By absorption spectroscopy of these solutions, a ThT–fibril binding mode with a binding constant of about 10⁴ M⁻¹ and a stoichiometry of ThT per protein molecule of about 1:8 was observed. Fluorescence spectroscopy of the same solutions, with subsequent correction of the recorded fluorescence intensity for the primary inner filter effect, allowed us to determine another mode of ThT binding to fibrils, with a binding constant of about 10⁶ M⁻¹ and a stoichiometry of about 1:2500. Analysis of the photophysical characteristics of the dye molecules bound to the sites of the different binding modes allowed us to suggest the possible localization of these sites. The observed differences in the parameters of ThT binding to amyloid fibrils formed from α-synuclein and from other amyloidogenic proteins, as well as in the photophysical characteristics of the bound dye, confirmed the hypothesis of amyloid fibril polymorphism.
Game of Thrones: Accommodating Monetary Policies in a Monetary Union
Dmitri Blueschke, Reinhard Neck
Subject: Social Sciences, Economics Keywords: dynamic game; feedback Nash equilibrium; Pareto solution; monetary union; macroeconomics; public debt; coalitions
In this paper we present an application of the dynamic tracking games framework to a monetary union. We use a small stylized nonlinear three-country macroeconomic model of a monetary union to analyse the interactions between fiscal (governments) and monetary (common central bank) policy makers, assuming different objective functions of these decision makers. Using the OPTGAME algorithm we calculate solutions for several games: a noncooperative solution where each government and the central bank play against each other (a feedback Nash Equilibrium solution), a fully cooperative solution with all players following a joint course of action (a Pareto optimal solution), and three solutions where various coalitions (subsets of the players) play against coalitions of the other players in a noncooperative way. It turns out that the fully cooperative solution yields the best results, the noncooperative solution fares worst, and the coalition games lie in between, with a broad coalition of the fiscally more responsible countries and the central bank against the less thrifty country coming closest to the Pareto optimum.
Motor of Mutual Retention
Marco Aurélio Nunes da Silva
Subject: Engineering, Energy & Fuel Technology Keywords: equilibrium of mutual retention; self-contained energy of mutual retention; motor of mutual retention
This work proposes a new theoretical approach for the next generation of extremely efficient motors, whose impact will be substantial for future sustainable technological development. Building on well-known physical concepts, this approach introduces the concept of equilibrium of mutual retention, upon which a device remains in an equilibrium state between movement and attraction-repulsion. This new theoretical approach showed promising results in computer simulation experiments, indicating that the equilibrium of mutual retention can allow for a new way of modeling an extremely efficient motor. Finally, a theoretical efficiency analysis showed the need to expand the limits of some physical concepts already established.
Hoard or Exploit? Intergenerational Allocation of Exhaustible Natural Resources
Hala Abo-Kalla, Ruslana Rachel Palatnik, Ofira Ayalon, Mordechai Shechter
Subject: Social Sciences, Accounting Keywords: Economic welfare; Energy; Exhaustible resource; General equilibrium model; Sovereign Wealth Fund (SWF); Natural Gas
In this paper we develop a "general equilibrium" (GE) model for the allocation of exhaustible natural resources to examine the impact of different extraction scenarios on intergenerational economic welfare. We apply a stylized GE model to Israel's natural gas (NG) market to evaluate economic indicators resulting from NG-extraction scenarios: a baseline scenario based on current policy in the NG sector, a conservative scenario based on a lower extraction rate, and an intensive scenario based on faster extraction. We also examine the impact of various resource income-allocation strategies on intergenerational economic welfare through the mechanism of a "sovereign wealth fund" (SWF). The results indicate that a higher NG-extraction rate combined with an appropriate investment strategy for NG profits is preferable from an economic perspective to a conservative rate. Investment of the government take from the NG market in research and development (R&D) of renewable electricity productivity can sustainably increase economic welfare.
Nernst Voltage Loss in Oxyhydrogen Fuel Cells
Jinzhe Lyu
Subject: Chemistry, Electrochemistry Keywords: Nernst voltage; activation overvoltage; concentration loss; equilibrium potential; exchange current density; net current density
Normally, the Nernst voltage calculated from the concentration of the reaction gas in the flow channel is considered to be the ideal voltage (reversible voltage) of the oxyhydrogen fuel cell; in reality, however, a concentration gradient arises when the reaction gas flows from the flow channel through the gas diffusion layer to the catalyst layer. In most of the current literature, the Nernst voltage loss in fuel cells is thought to be due to the difference between the concentration of the reaction gas in the flow channel and its concentration on the catalyst layer at the time when a high net current density is generated. Based on the Butler-Volmer equation for the oxyhydrogen fuel cell, this paper demonstrates that the Nernst voltage loss is caused by the difference between the concentration of the reaction gas in the flow channel and that on the catalytic layer at the time when the equilibrium potential (Galvani potential) of each electrode is established.
An Investigation of Experimental Reports on the Relativistic Relation for Doppler Shift
James McKelvie
Subject: Physical Sciences, Other Keywords: Doppler; relativistic; non-relativistic
An exhaustive list of thirteen instances of reports confirming experimentally the relativistic Doppler relation is examined. For those involving the longitudinal Doppler effect, the non-relativistic relation is seen to be confirmed, within the reported experimental accuracies, to the same degree as the standard relativistic relation. Higher values of the speed of the emitter would be required to examine the claimed confirmations further. For those reports involving saturation spectroscopy, there is much confusion over the appropriate Doppler relation to be used, together with some serious analytical flaws. For the two cases that involve the transverse Doppler effect, there are either serious faults in the theoretical part or intrusion from the first-order effect. Therefore, the reported conclusions - that the results of the experiments confirm the relativistic SR relation - cannot be justified by any of the experimental works.
Non-Commutative Key Exchange Protocol
Luis Adrián Lizama-Pérez, José Mauricio López Romero
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Non-commutative; matrix; cryptography
We introduce a novel key exchange protocol based on non-commutative matrix multiplication defined in $\mathbb{F}_p^{n \times n}$. The security of our method does not rely on computational problems, such as integer factorization or the discrete logarithm, whose difficulty is conjectured. We show that the public, secret and channel keys become indistinguishable to the eavesdropper under matrix multiplication. Remarkably, for achieving a 512-bit security level, the public key is 1024 bits and the private key is 768 bits, making them the smallest keys among post-quantum key exchange algorithms. Also, we discuss how to achieve key authentication, interdomain certification and Perfect Forward Secrecy (PFS). Therefore, Lizama's algorithm becomes a promising candidate to establish shared keys and secret communication between (IoT) devices in the quantum era.
We introduce a novel key exchange protocol based on non-commutative matrix multiplication defined in $\mathbb{Z}_p^{n \times n}$. The security of our method does not rely on computational problems, such as integer factorization or the discrete logarithm, whose difficulty is conjectured. We claim that the eavesdropper's only opportunity to get the secret/private key is by means of an exhaustive search, which is equivalent to the unsorted database search problem. Furthermore, we show that the secret/private keys become indistinguishable to the eavesdropper. Remarkably, to achieve a 512-bit security level, the keys (public/private) are of the same size when matrix multiplication is done over a reduced 8-bit modulus. Also, we discuss how to achieve key certification and Perfect Forward Secrecy (PFS). Therefore, Lizama's algorithm becomes a promising candidate to establish shared keys and secret communication between (IoT) devices in the quantum era.
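The two abstracts above describe the authors' protocol; its internals are not reproduced here. For orientation only, the following is a minimal sketch of a classic Stickel-style key exchange over non-commuting matrices modulo a prime – an illustration of the matrix-based key exchange genre, not Lizama's scheme, and not secure for real use. The modulus, dimensions, and exponents are toy values.

```python
import numpy as np

rng = np.random.default_rng(7)
p, n = 1009, 3  # toy prime modulus and matrix size (illustrative only)

def mat_pow(M, e, p):
    """Square-and-multiply matrix exponentiation modulo p."""
    R = np.eye(M.shape[0], dtype=np.int64)
    M = M % p
    while e:
        if e & 1:
            R = R @ M % p
        M = M @ M % p
        e >>= 1
    return R

# Public, generically non-commuting matrices
A = rng.integers(0, p, (n, n), dtype=np.int64)
B = rng.integers(0, p, (n, n), dtype=np.int64)

ma, na = 57, 23  # Alice's secret exponents
mb, nb = 41, 88  # Bob's secret exponents

U = mat_pow(A, ma, p) @ mat_pow(B, na, p) % p  # Alice sends U to Bob
V = mat_pow(A, mb, p) @ mat_pow(B, nb, p) % p  # Bob sends V to Alice

# Both sides compute A^(ma+mb) B^(na+nb) without revealing exponents
K_alice = mat_pow(A, ma, p) @ V @ mat_pow(B, na, p) % p
K_bob   = mat_pow(A, mb, p) @ U @ mat_pow(B, nb, p) % p
assert (K_alice == K_bob).all()
```

The shared key works because powers of the same matrix commute with each other, even though A and B themselves do not; published attacks on this particular construction are one reason schemes in this genre add further structure.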
Non-Archimedean Welch Bounds and Non-Archimedean Zauner Conjecture
K. Mahesh Krishna
Subject: Mathematics & Computer Science, Analysis Keywords: Non-Archimedean valued field; non-Archimedean Hilbert space; Welch bound; Zauner conjecture
Let $\mathbb{K}$ be a non-Archimedean (complete) valued field satisfying \begin{align*} \left|\sum_{j=1}^{n}\lambda_j^2\right|=\max_{1\leq j \leq n}|\lambda_j|^2, \quad \forall \lambda_j \in \mathbb{K}, 1\leq j \leq n, \forall n \in \mathbb{N}. \end{align*} For $d\in \mathbb{N}$, let $\mathbb{K}^d$ be the standard $d$-dimensional non-Archimedean Hilbert space. Let $m \in \mathbb{N}$ and let $\text{Sym}^m(\mathbb{K}^d)$ be the non-Archimedean Hilbert space of symmetric m-tensors. We prove the following result. If $\{\tau_j\}_{j=1}^n$ is a collection in $\mathbb{K}^d$ satisfying $\langle \tau_j, \tau_j\rangle =1$ for all $1\leq j \leq n$ and the operator $\text{Sym}^m(\mathbb{K}^d)\ni x \mapsto \sum_{j=1}^n\langle x, \tau_j^{\otimes m}\rangle \tau_j^{\otimes m} \in \text{Sym}^m(\mathbb{K}^d)$ is diagonalizable, then \begin{align}\label{WELCHNONABSTRACT} \max_{1\leq j,k \leq n, j \neq k}\{|n|, |\langle \tau_j, \tau_k\rangle|^{2m} \}\geq \frac{|n|^2}{\left|{d+m-1 \choose m}\right| }. \end{align} We call Inequality (\ref{WELCHNONABSTRACT}) the non-Archimedean version of the Welch bounds obtained by Welch [\textit{IEEE Transactions on Information Theory, 1974}]. We also formulate a non-Archimedean Zauner conjecture.
Non-Ionizing Millimeter Waves Non-Thermal Radiation of Saccharomyces cerevisiae – Insights and Interactions
Ayan Barbora, Sailendra Rajput, Konstantin Komoshvili, Jacob Levitan, Asher Yahalom, Stella Liberman- Aronov
Subject: Life Sciences, Biochemistry Keywords: Non-ionizing Radiation; Millimeter waves; Novel biomedical applications; Yeast; Non-invasive devices
Nonionizing millimeter waves (MMW) interact with cells in a variety of ways. Here, the inhibited cell division effect was investigated using 85-105 GHz MMW irradiation within the ICNIRP (International Commission on Non-Ionizing Radiation Protection) non-thermal 20 mW/cm2 safety standards. Irradiation using a power density of about 1.0 mW/cm2 (SAR) over 5-6 hours on 50 cells/μl samples of the Saccharomyces cerevisiae model organism resulted in a 62% growth rate reduction compared to the control (sham). The effect was specific to the 85-105 GHz range, and was energy- and cell-density-dependent. Irradiation of wild type and Δrad52 (DNA damage repair gene) deleted cells presented no differences in colony growth profiles, indicating that non-thermal MMW treatment does not cause permanent genetic alterations. Dose-versus-response relations studied using a standard horn antenna (~1.0 mW/cm2), compared to those of a compact waveguide (17.17 mW/cm2) for increased power delivery, showed complete termination of cell division via non-thermal processes, supported by temperature rise measurements. We have shown that non-thermal MMW radiation has potential for future use in the treatment of yeast-related diseases and other targeted biomedical outcomes.
Mechanisms of the Non-Thermal Exposure Effects of Non-Ionizing Millimeter Waves Radiation on Eukaryotic Cells for Improving Technological Precision Enabling Novel Biomedical Applications
Ayan Barbara, Shailendra Rajput, Konstantin Komoshvili, Jacob Levitan, Asher Yahalom, Stella Liberman- Aronov
Subject: Life Sciences, Biophysics Keywords: non-ionizing radiation; millimeter waves; novel biomedical applications; yeast; non-invasive devices
Nonionizing millimeter waves (MMW) are reported to interact with cells in a variety of ways. Possible mechanisms of the inhibited cell division effect were investigated using 85-105 GHz MMW irradiation within the ICNIRP (International Commission on Non-Ionizing Radiation Protection) non-thermal 20 mW/cm2 safety standards. Exposure at ~1.0 mW/cm2 over 5-6 hours of treatment on 50 cells/μl samples of the Saccharomyces cerevisiae model organism resulted in a 62% growth rate reduction compared to the control (sham). The effect was specific to the 85-105 GHz range and was energy-dose- and cell-density-dependent. Irradiation of wild type and Δrad52 (DNA damage repair gene) deletion cells presented no differences in colony growth profiles, indicating that non-thermal MMW treatment does not cause genetic alterations. Dose-versus-response relations studied using a standard horn antenna (~1.0 mW/cm2), compared to those of a compact waveguide (17.17 mW/cm2) for increased power delivery, showed complete termination of cell division via non-thermal processes, supported by temperature rise measurements. Combinations of MMW-mediated Structure Resonant Energy Transfer (SRET), membrane modulations eliciting signaling effects, and energetic resonance with biomolecules were indicated to be responsible for the observations reported. Our results provide novel mechanistic insights enabling innovative applications of nonionizing radiation procedures for eliciting targeted biomedical outcomes.
Some Results for a Time Interval Approach to Field Theory and Gravitation
Harmen Henricus Hollestelle
Subject: Physical Sciences, General & Theoretical Physics Keywords: Time interval; equilibrium; field theory; complementarity; field operator; gravitation; wave propagation surface; emission source; cosmology
This paper consists of two parts. In part I, some new relations for a field theory with time intervals are derived. One concept of field theory evaluated is complementarity; another is field operators, both defined within a time interval description. Part II includes specific results and commentary. Discussed are time-interval-dependent wave propagation surfaces for star source emission waves, and a metric propagation surface area requirement is derived. The results allow us to consider one and the same field that, like gravitation within General Relativity, applies to both non-zero and zero mass. The associated field energy is space-time dependent for non-zero mass, and is related to a space-time dependent metric tensor for zero-mass wave particles. Internal energy transfer is defined, in which wave particle numbers increase linearly while mass and momentum decrease inversely with the distance from the wave emission source. The commentary covers applications related to cosmological overall volume and temperature dependence.
Service-Value and Nash-Equilibrium Pricing: An Axiomatic Methodology
Victor Tang
Subject: Social Sciences, Accounting Keywords: pricing services; value-pricing services; services Nash Equilibrium; value co-creation; service science; services metrology
We present a normative methodology for value-pricing B2B services using a Nash Equilibrium mechanism. Value is fundamental to any service, yet it has defied definition. The literature is conspicuously silent on how to define value, but abounds in richly descriptive characterizations. To overcome this deficit, we focus on the financial economics of service-value. We formulate an axiomatic definition and a differential equation that embodies the idea of service-value. We specify a set of value axioms and multidisciplinary postulates that coherently form our service-value constructs. Value-pricing is challenging. It is natural that providers desire a high price and customers want a low price. Realistically, both will agree on a win-win price. To uncover this price, we specify an algorithm to reveal the Nash Equilibrium. Once agreed, it validates providers' and customers' commitments and payment obligations. Service value is co-created by both provider and customer. For a reciprocal process, its treatment is remarkably asymmetric, favoring the customer. We argue that one-sided descriptions of value-in-use and value-proposition are limiting mental models. We propose the additional ideas of value-from-use and value-supposition to strengthen the conceptual symmetry of co-creation. Our work reveals a critical gap in service science: metrology. Metrology, the science of measurement, is absent from service science. Service-value is silent on the questions of quantities, units, scales, measurement principles and instruments. We argue for a call to action for Service Metrology. We sketch a roadmap of actionable suggestions to get started.
Time, Equilibrium, and General Relativity
Subject: Physical Sciences, General & Theoretical Physics Keywords: time interval; equilibrium; graphs; derivative; metric; general relativity; starlight radiation; qm wave packet collapse; cosmology
Considered is "time as an interval", including time from the past and from the future, in contrast to time as a moment. Equilibrium as the basis for a description of changing properties in physics is understood to depend on the "mean velocity theorem", while a "time" of equilibrium resembles a center of weight. This turns out to be a good method to derive properties for any function of time t, including space coordinates q(t) and expressions for the time-dependent Hamiltonian. Introduced are derivatives depending on time intervals instead of time moments, and with these a new relation between the Lagrangian L and the Hamiltonian H. As an application, a step-by-step method is introduced to integrate stationary-state "local" time interval measurements to beyond "locality" in General Relativity. Because of limits on the measures of the resulting time intervals and their asymmetry, this allows for a probabilistic interpretation of quantities that have these intervals as time domain in QM. Their asymmetry also questions the time reversal symmetry of GR. Another application of time intervals is the discussion of the measurement of starlight radiation energy and QM wave packet collapse as an example of a time-dependent Hamiltonian. Finally, a relation between starlight frequency, metric and space and time intervals is found. Discussed is how finite and asymmetric time intervals correspond to a time-dependent H, and symmetric infinite time intervals to a time-independent H. From there, in a cosmological perspective, finite time intervals can help to describe how entropy change could relate to dark energy.
Dynamic Evolution Hypothesis of Organisms
Yonghua Wu
Subject: Biology, Ecology Keywords: Diminished fitness return; mutation rate tuning; Darwinian evolution; neutral evolution; punctuated equilibrium; unified evolutionary theory
I propose a dynamic evolution hypothesis regarding the evolution of organisms by incorporating both diminished fitness returns and mutation rate tuning during adaptation to a constant environment. Basically, accumulating evidence from life history studies conducted over the past 70 years suggests that the evolution of individual fitness is subject to ecological constraints, leading to the evolutionary existence of an upper limit of individual fitness (ULIF). Given the existence of the ULIF, organismal evolution, which might initially have relatively great fitness returns through primarily Darwinian evolution, will eventually be subject to fitness returns diminishing towards zero. With the diminished fitness return, Darwinian selection strength may eventually become smaller than the power of random genetic drift, leading to the occurrence of neutral evolution at both the phenotypic and molecular levels. Meanwhile, mutation rates may change from an initial increase, due to the relatively strong fitness return, to subsequent decreases, due to both the diminished fitness return of beneficial mutations and the cost of deleterious mutations. The diminished fitness returns together with subsequently reduced mutation rates are two potential evolutionary barriers leading to eventual evolutionary stasis. These findings provide important insights for understanding the conditions for the occurrence of different evolutionary patterns. Darwinian evolution theory, neutral evolution theory and punctuated equilibrium theory can be unified in the context of the dynamic evolution hypothesis formulated in this study.
A Coalition Formation Game Approach for Efficient Cooperative Multi-UAV Deployment
Lang Ruan, Jin Chen, Qiuju Guo, Han Jiang, Yuli Zhang, Dianxiong Liu
Subject: Engineering, Electrical & Electronic Engineering Keywords: UAV-assisted sensor network; UAV cooperative coverage; coalition formation game; stable coalition partition; Nash equilibrium
UAV cooperative control has been an important issue in UAV-assisted sensor networks, thanks to the considerable benefit obtained from the cooperative mechanism of UAVs applied as flying base stations. In coverage scenarios, the tradeoff between coverage performance and transmission performance often puts the deployment of UAVs in a dilemma, since both indexes are related to the distance between UAVs. To address this issue, the UAV coverage and data transmission mechanism is analyzed in this paper, and an efficient multi-UAV cooperative deployment model is proposed. The problem is also modeled as a coalition formation game (CFG). The CFG with Pareto order is proved to have a stable partition. Then, an effective approach consisting of coverage deployment and coalition selection is designed, wherein UAVs can decide strategies cooperatively to achieve better coverage performance. Combining the analysis of the game approach, a coalition selection and position deployment algorithm based on Pareto order (CSPDA-PO) is designed to execute coverage deployment and coalition selection. Finally, simulation results are shown to validate the proposed approach based on the efficient multi-UAV cooperative deployment model.
Non-invertible Public Key Certificates
Luis Lizama-Pérez, J. Mauricio López
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Non-invertible; cryptography; certificate; PKI
Post-quantum public cryptosystems introduced so far do not define a scalable public key infrastructure for the quantum era. We demonstrate here a public certification system based on Lizama's non-invertible Key Exchange Protocol which can be used to implement a public key infrastructure (PKI) that is secure, scalable, interoperable and efficient. We show the functionality of certificates across different certification domains. Finally, we discuss how non-invertible certificates can exhibit Perfect Forward Secrecy (PFS).
Public Debt and Economic Growth Nexus: Evidence from South Asia
Saira Saeed, Tanweer Islam
Subject: Social Sciences, Economics Keywords: endogeneity; non-linearity; threshold; FMOLS
It is well established in the literature that public debt and economic growth bear a positive and non-linear relationship. However, in recent literature, evidence of no causal relationship is found when endogeneity is accounted for in the case of advanced economies (Panizza & Presbitero, 2014). Chudik, Mohaddes, Pesaran, & Raissi (2017) analyse data on forty countries and find no evidence of a universally applicable threshold effect in the relationship between debt and growth. These advancements in the debt-growth literature provide the motivation to re-explore the relationship between public debt and economic growth under non-linearity and endogeneity in the context of the developing economies of South Asia, including Pakistan, India, Bangladesh and Sri Lanka, for the period 1980-2014. There exists a significant, positive but nonlinear relationship between public debt and economic growth for the selected set of developing countries when endogeneity and non-linearity are accounted for. A negative association between public debt and economic growth for the SAARC region is found when the debt level is higher than 61% of GDP, which is considerably lower than for developed economies (90% of GDP). Individual threshold levels for the debt-to-GDP ratio reveal that Sri Lanka, Pakistan and India need to control their public borrowing, as their current debt levels are higher than and/or around the respective threshold levels.
A Duality Principle and a Related Convex Dual Formulation Suitable for Non-convex Variational Optimization
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Convex dual variational formulation; duality principle for non-convex optimization; model in non-linear elasticity
This article develops a duality principle and a related convex dual formulation suitable for a large class of models in physics and engineering. The results are based on standard tools of functional analysis, calculus of variations and duality theory. In particular, we develop applications to a model in non-linear elasticity.
Influence of Material-Dependent Damping on Brake Squeal in the Specific Disc Brake System
Juraj Úradníček, Miloš Musil, Ľuboš Gašparovič, Michal Bachratý
Subject: Engineering, Automotive Engineering Keywords: brake squeal; dissipation-induced instability; non-proportional damping; non-conservative system; complex eigenvalue analysis
The connection of two phenomena - non-conservative friction forces and dissipation-induced instability - can lead to many interesting engineering problems. The paper studies the influence of general material-dependent damping on the dynamical instability of disc brake systems leading to brake squeal. The effect of general damping is demonstrated on a minimal and a complex model of a disc brake. A complex system including material-dependent damping is defined in commercial finite element software. The finite element model, validated by experimental data on the brake-disc test bench, is used to compute the influence of pad and disc damping variations on system stability by complex eigenvalue analysis. Analyses show a significant sensitivity of the experimentally verified unstable mode of the system to the ratio of the damping between the disc and the friction material components.
Application of the Incremental Modal Analysis for Bridges (IMPAb) Subjected to Near-Fault Ground Motions
Alessandro Vittorio Bergami, Gabriele Fiorentino, Davide Lavorato, Bruno Briseghella, Camillo Nuti
Subject: Engineering, Civil Engineering Keywords: near field; pulse-like ground motions; bridge; non-linear static analysis; non-linear dynamic analysis
Near-fault ground motions can cause severe damage to civil structures, including bridges. Safety assessment of these structures for near-fault ground motion is usually performed through Non-Linear Dynamic Analyses, although faster methods are often used. IMPAb (Incremental Modal Pushover Analysis for Bridges) makes it possible to investigate the seismic response of a bridge by considering the effects of higher modes, which are often relevant for bridges. In this work, IMPAb is applied to a bridge case study considering near-fault pulse-like ground motion records. The records were analyzed and selected from the European Strong Motion Database, and the pulse parameters were evaluated. In the paper, results from standard pushover procedures and IMPAb are compared with nonlinear Response-History Analysis (NRHA), considering also the vertical component of the motion, as benchmark solutions, and with incremental dynamic analysis (IDA). Results from the case study demonstrate that the vertical seismic action has a minor influence on the structural response of the bridge. Therefore IMPAb, which can be applied considering vertical motion, remains very effective while conserving the original formulation of the procedure, and can be considered a well-performing procedure for near-fault events as well.
A Deep Dive into Genome Assemblies of Non-vertebrate Animals
Nadège Guiglielmoni, Ramón Rivera-Vicéns, Romain Koszul, Jean-François Flot
Subject: Life Sciences, Genetics Keywords: genome assembly; sequencing; non-vertebrate animals
Non-vertebrate species represent about 95% of known metazoan (animal) diversity. They remain to this day relatively unexplored genetically, but understanding their genome structure and function is pivotal for expanding our current knowledge of evolution, ecology and biodiversity. Following the continuous improvements and decreasing costs of sequencing technologies, many genome assembly tools have been released, leading to a significant number of genome projects being completed in recent years. In this review, we examine the current state of genome projects of non-vertebrate animal species. We present an overview of available sequencing technologies, assembly approaches, as well as pre- and post-processing steps, genome assembly evaluation methods, and their application to non-vertebrate animal genomes.
Transcriptome-wide Association Study of Blood Cell Traits in African Ancestry and Hispanic/Latino Populations
Jia Wen, Munan Xie, Bryce Rowland, Jonathan D. Rosen, Quan Sun, Amanda L. Tapia, Huijun Qian, Madeline H. Kowalski, Yue Shan, Kristin L. Young, Marielisa Graff, Maria Argos, Christy L. Avery, Stephanie A. Bien, Steve Buyske, Jie Yin, Hélène Choquet, Myriam Fornage, Chani J. Hodonsky, Eric Jorgenson, Charles Kooperberg, Ruth J.F. Loos, Yongmei Liu, Jee-Young Moon, Kari E. North, Stephen S. Rich, Jerome I. Rotter, Jennifer A. Smith, Wei Zhao, Lulu Shang, Tao Wang, Xiang Zhou, Alexander P. Reiner, Laura M. Raffield, Yun Li
Subject: Life Sciences, Biochemistry Keywords: TWAS; non-European; blood cell traits
Background: Thousands of genetic variants have been associated with hematological traits, though target genes remain unknown at most loci. Also, limited analyses have been conducted in African ancestry and Hispanic/Latino populations; hematological trait associated variants more common in these populations have likely been missed. Methods: To derive gene expression prediction models, we used ancestry-stratified datasets from the Multi-Ethnic Study of Atherosclerosis (MESA, including N=229 African American and N=381 Hispanic/Latino participants, monocytes) and the Depression Genes and Networks study (DGN, N = 922 European ancestry participants, whole blood). We then performed a transcriptome-wide association study (TWAS) for platelet count, hemoglobin, hematocrit, and white blood cell count in African (N = 27,955) and Hispanic/Latino (N = 28,324) ancestry participants. Results: Our results revealed 24 suggestive signals (p < 1×10^(-4)) that were conditionally distinct from known GWAS identified variants and successfully replicated these signals in European ancestry subjects from UK Biobank. We found modestly improved correlation of predicted and measured gene expression in an independent African American cohort (the Genetic Epidemiology Network of Arteriopathy (GENOA) study (N=802), lymphoblastoid cell lines) using the larger DGN reference panel; however, some genes were well predicted using MESA but not DGN. Conclusions: These analyses demonstrate the importance of performing TWAS and other genetic analyses across diverse populations and of balancing sample size and ancestry background matching when selecting a TWAS reference panel.
Design of Tunnel Drier for the Non-centrifugal Sugar Industry
S.P. Raj, B. Sravya, Morapakala Srinivas, K.S. Reddy
Subject: Engineering, Mechanical Engineering Keywords: Non-centrifugal sugar; drying; tunnel dryer
The quality and shelf-life of NCS (non-centrifugal sugar) mainly depend on the moisture content present in it. NCS formed by the current practice of open sun drying contains moisture substantially greater than the acceptable level of 3%. This paper presents the work taken up to design a tunnel dryer to attain the required moisture content in granular NCS for various load conditions. Initially, an experimental investigation was carried out on a laboratory-scale dryer to achieve the required moisture content (< 3%) for various load conditions. These experimental data were used to validate two drying models, and one of the models was found to be best suited for designing an industrial-scale dryer. For various load conditions on each tray and dryer exit temperatures, nine different cases were arrived at. The number of trucks, trays, drying time and energy requirements were computed using the suitable theoretical model. A tunnel dryer with a length of 18 m, a height of 1.2 m, a width of 1 m, 18 trucks and 24 trays on each truck was found to be suitable to dry 1 tonne of NCS, based on the minimum energy requirement of 176.49 MJ and a minimum drying time of 68 minutes.
Functional RNA Structures in the 3'UTR of Tick-Borne, Insect-Specific and No-Known-Vector Flaviviruses
Roman Ochsenreiter, Ivo L. Hofacker, Michael T. Wolfinger
Subject: Life Sciences, Virology Keywords: Flavivirus; non-coding RNA; secondary structure
Untranslated regions (UTRs) of flaviviruses contain a large number of RNA structural elements involved in mediating the viral life cycle, including cyclisation, replication, and encapsidation. Here we report on a comparative genomics approach to characterize evolutionarily conserved RNAs in the 3'UTR of tick-borne, insect-specific and no-known-vector flaviviruses in silico. Our data support the wide distribution of previously experimentally characterized exoribonuclease-resistant RNAs (xrRNAs) within tick-borne and no-known-vector flaviviruses and provide evidence for the existence of a cascade of duplicated RNA structures within insect-specific flaviviruses. On a broader scale, our findings indicate that viral 3'UTRs represent a flexible scaffold for evolution to come up with novel xrRNAs.
Measurement and application of patient similarity in personalized predictive modeling based on electronic medical records
Ni Wang, Yanqun Huang, Honglei Liu, Xiaolu Fei, Lan Wei, Xiangkun Zhao & Hui Chen
Conventional risk prediction techniques may not be the most suitable approach for personalized prediction for individual patients. Therefore, individualized predictive modeling based on similar patients has emerged. This study aimed to propose a comprehensive measurement of patient similarity using real-world electronic medical records data, and evaluate the effectiveness of the individualized prediction of a patient's diabetes status based on the patient similarity.
When using no more than 30% of the whole training sample, the personalized predictive models outperformed corresponding traditional models built on randomly selected training samples of the same size as the personalized models (P < 0.001 for all). With only the top 1000 (10%), 700 (7%) and 1400 (14%) similar samples, personalized random forest, k-nearest neighbor and logistic regression models reached the globally optimal performance with the area under the receiver-operating characteristic (ROC) curve of 0.90, 0.82 and 0.89, respectively.
The proposed patient similarity measurement was effective when developing personalized predictive models. The successful application of patient similarity in predicting a patient's diabetes status provided useful references for diagnostic decision-making support by investigating the evidence on similar patients.
In personalized medicine, clinicians and health policy makers must choose the most appropriate clinical trial and make predictions for the right patient during decision-making [1, 2]. This approach is used to individualize medical practice.
At present, clinicians can predict diseases by many methods, such as diagnostic imaging techniques [3,4,5,6,7], but with fewer predictive models. In recent years, predictive modeling has been successfully applied in medical scenarios, including the identification of risk factors [8, 9] and the early detection of disease onset [10, 11]. In addition, advances have been made in using predictive modeling to predict patient outcomes [2]. The traditional predictive modeling approach involves building a global predictive model using all available training data. However, this may not be the most suitable approach for personalized prediction for individual patients. Furthermore, there is generally a variety of noisy data in electronic medical records (EMR), which were primarily designed for administration and improving healthcare efficiency; many studies have found secondary uses for them, such as patient trajectory modeling, disease inference and clinical decision support systems [12]. It is recommended to de-noise data before building a global predictive model, which is time consuming, and such data remain challenging to represent and model. In this context, individualized predictive modeling based on patient similarity emerged and was shown to be adjustable to individual patients. Employing patient similarity helps to identify a precision cohort for an index patient, which is then used to train a personalized model [2]. Accordingly, when building a predictive model for an index patient, training samples are determined as "patients like me," instead of using all available training samples in the conventional way. "Patients like me" are selected from the training sample set on the basis of the similarity between the index patient and each training sample. Of note, based on patient similarity, patients with noisy data are less likely to be selected as similar patients of an index patient because of their lower similarity to the index patient. Patient similarity is usually measured by considering information on demographics, disease history, comorbidities, laboratory tests, hospitalizations, treatment, and pharmacotherapy. Such data are easily extracted from the EMR for tens of millions of patients [13].
In this study, we defined a patient as a vector in a d-dimensional feature space. Then, a multi-dimensional approach to estimate patient similarity was proposed. To demonstrate the effectiveness of the proposed similarity measure, the most similar patients were retrieved to build personalized models to predict the diabetes status of a given patient.
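To make this concrete, the sketch below (illustrative only, with hypothetical features and data; not the paper's implementation) represents patients as d-dimensional vectors and retrieves the training patients most similar to an index patient by cosine similarity.

```python
import numpy as np

def top_k_similar(index_patient, patients, k=5):
    """Return indices of the k training patients most similar
    to the index patient under cosine similarity."""
    P = np.asarray(patients, dtype=float)
    q = np.asarray(index_patient, dtype=float)
    sims = P @ q / (np.linalg.norm(P, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(sims)[::-1][:k]

# Hypothetical feature vectors: [age, BMI, fasting glucose, HbA1c, ...]
train = np.random.default_rng(0).random((1000, 8))
index = train[0] * 0.9 + 0.05  # a made-up index patient
print(top_k_similar(index, train, k=5))
```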
To assist physicians with the selection of the most appropriate recommendations and the prediction for a given patient, several methodologies have been applied in personalized medicine, such as clustering, principal component analysis and patient similarity computation.
Clustering is the most popular method used in personalized medicine. It aims to create groups of patients with similar disease evolution [14], with the prediction for a new patient identified with the label of their most similar cluster. To determine the subtype of a breast cancer patient and provide the most effective treatment, Wang et al. [15] defined a novel consensus clustering method to automatically cluster numerical and categorical data using Euclidean distance and a categorical distance, respectively. The proposed method demonstrated great superiority and robustness in clustering and in differentiating patient outcomes. Li et al. [16] presented an unsupervised clustering framework based on topological analysis to identify type 2 diabetes subgroups. The topology-based patient–patient network successfully identified three distinct subgroups of type 2 diabetes. Panahiazar et al. [17] designed two different approaches for medication recommendation for heart-failure patients, using both unsupervised clustering (hierarchical clustering and K-means clustering) and supervised clustering (using the medication plan as the class variable). Their results showed that supervised clustering outperformed unsupervised clustering.
Another frequently used technique for predicting patient outcomes is based on patient similarity. Patient similarity evaluation has been investigated as a tool to enable precision medicine [14], and has been identified as a fundamental problem in many data mining algorithms and practical information processing systems [18]. Most commonly, through exhaustive comparisons between a given patient and a cohort of existing patients, an assessment specific to the given patient can help in identifying similar patients. Lee et al. [19] used a cosine-based patient similarity metric to identify the patients who agreed the most with each patient. The results suggested that using fewer but more similar data could yield higher predictive performance than using all available data. David et al. [20] proposed an algorithm for anomaly detection and characterization on the basis of the Euclidean distance between medical laboratory data. With the selected neighbors around them, an index patient could be assigned to one of seven disease groups with higher accuracy. For early screening and assessment of suicidal risks, researchers used the sum of absolute distances over all predictors to retrieve a cohort of similar patients and determine the most likely risk level for a new patient [21]. Among these studies, one [19] compared the performance of patient similarity-based personalized predictive models with that of whole population-based global predictive models. The results demonstrated that personalized predictive models showed higher performance.
Many previous studies calculated patient similarity using a single similarity measure (e.g., Euclidean distance, cosine distance, or Mahalanobis distance), and most of them did not take the importance of patient features into consideration when calculating the similarity. In this study, we aimed to investigate patient similarity in depth in the following two respects. One is using different similarity metrics for different types of feature data. The other is assigning different weights (importance) to patient features when integrating feature similarities into a patient similarity.
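The following toy functions sketch these two ideas – a type-specific metric per feature and a weighted (convex) aggregation. The metrics and the example weights are assumptions for illustration; the paper's own definitions are given by its Eqs. 7 and 8, which are not reproduced here.

```python
import numpy as np

def feature_similarity(x, y, kind):
    """Similarity on a single feature, scaled to [0, 1]."""
    if kind == "numeric":                  # assumes values pre-scaled to [0, 1]
        return 1.0 - abs(x - y)
    if kind in ("binary", "categorical"):  # exact-match similarity
        return 1.0 if x == y else 0.0
    raise ValueError(f"unknown feature type: {kind}")

def patient_similarity(a, b, kinds, weights):
    """Weighted (convex) combination of per-feature similarities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize weights so they sum to 1
    sims = [feature_similarity(x, y, k) for x, y, k in zip(a, b, kinds)]
    return float(np.dot(w, sims))

# Hypothetical patients: (age scaled, sex, hypertension, glucose scaled);
# weights could come from, e.g., a feature-importance estimate.
kinds   = ["numeric", "binary", "binary", "numeric"]
weights = [0.2, 0.1, 0.3, 0.4]
print(patient_similarity([0.45, 1, 0, 0.70], [0.50, 1, 1, 0.65], kinds, weights))
```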
Overview of patient similarity
To validate the predictive performance of the patient similarity-based models, we calculated all possible similarities between each pair of patients (one selected from the test set and the other from the training set). In the distribution scatter plot (Fig. 1) of similarity measurements for a patient with diabetes mellitus (DM), other patients with DM were more likely to be closer to the index patient than patients without DM (Fig. 1a). There was a similar trend in the distribution scatter plot for a patient without DM (Fig. 1b).
Visualization of patient similarity when the feature similarity for disease diagnosis was calculated using International Classification of Diseases, tenth revision (ICD-10) disease codes. The central big dots represent two index patients from the test sample set [red for a patient with diabetes mellitus (DM) and green for a patient without DM]. The surrounding dots represent all patients with DM (red) and without DM (green) from the training sample set, where the distance to the central dot corresponds to the similarity. The closer the surrounding dots are to the central dot, the more similar are the two patients
On average, similarities between pairs of patients with DM (0.576 ± 0.078 calculated by Eq. 7 and 0.596 ± 0.100 calculated by Eq. 8, respectively) were both significantly greater than those between patient pairs that included at least one patient without DM (0.550 ± 0.078 and 0.565 ± 0.097, respectively; t test, P values < 0.001 for both). International Classification of Diseases, tenth revision (ICD-10) codes-based similarities among patients with DM were less than Clinical Classification Software (CCS) codes-based similarities (t test, P < 0.001; Fig. 2).
Patient similarity among patients with and without diabetes. D-D(ICD-based similarity) and D-D(CCS-based similarity) represent similarities between pairs of patients with diabetes mellitus (DM) based on ICD-10 and CCS disease codes, respectively. Error bars represent standard deviation. D-nD(ICD-based similarity) and D-nD(CCS-based similarity) represent similarities between patient pairs that included at least one patient without DM based on ICD-10 and CCS disease codes, respectively
Evaluation of predictive performance
When no more than 30% of the whole training set (i.e., 3000 samples) was used to build the models, all three personalized predictive models outperformed the corresponding traditional models, which were built on randomly selected training samples of the same size (Mann–Whitney U test with Bonferroni adjustment, all P < 0.001). As the number of training samples increased, the personalized and traditional predictive models approached the same globally optimal performance. However, only the top 1000 (10%), 700 (7%), and 1400 (14%) most similar samples were needed to build the personalized random forest (RF), k-nearest neighbor (kNN), and logistic regression (LR) models, respectively, whereas 3600 (36%), 1400 (14%), and 3700 (37%) randomly selected samples were needed for the corresponding traditional models (Fig. 3). This suggests that the personalized models reached optimal performance using fewer, but more similar, training samples.
Predictive performance of random forest (RF), logistic regression (LR), and k-nearest neighbor (kNN) models. For simplicity, only performances of the models built on 2% (200 samples) to 30% (3000 samples) of the 10,000 training sample candidates are displayed. Blue, cyan, and dark red lines represent RF, kNN, and LR models, respectively. Lines with dot, triangle, and cross markers represent models built on randomly selected samples and on the most similar samples, with the feature similarity for disease diagnoses calculated using ICD-10 and CCS codes, respectively.
When the top 1000 (10%), 700 (7%), and 1400 (14%) similar samples selected according to the CCS-based similarity were used, the personalized RF, kNN, and LR models showed a clear increasing trend, from initial areas under the receiver-operating characteristic (ROC) curve of 0.87, 0.79, and 0.70 to saturated areas under the ROC curve (AUC) of 0.90, 0.82, and 0.89, respectively. When the kNN model was built using no more than the top 4% of similar samples, it outperformed the LR model, suggesting that more appropriate data were needed for the LR model parameters to be trained properly. Similar results were found when patient similarity was based on ICD-10 codes: when the RF, kNN, and LR models were built on the top 12%, 7%, and 15% of similar samples, respectively, they showed the globally optimal performance. The RF model showed significantly higher performance than the LR and kNN models (Mann–Whitney U test with Bonferroni adjustment, all P < 0.001), partially because of its built-in feature selection property.
Further comparisons of the predictive performance of the personalized models built on ICD-10-based and CCS-based similar patients showed no significant differences for the RF, kNN, and LR models (Mann–Whitney U test with Bonferroni adjustment, P = 0.491, 0.988, and 0.635, respectively).
Interpretation of predictive models
The visualized classification process of the kNN model for a randomly selected index patient (a true DM patient, the central circle) is shown in Fig. 4. Regardless of the value chosen for the parameter k, the index patient was always predicted to be a DM patient. For example, 100% (10/10), 94% (47/50), and 86% (86/100) of the index patient's 10, 50, and 100 nearest neighbors (i.e., k = 10, 50, and 100), respectively, were patients with DM (red dots).
The visualized classification process of the k-nearest neighbor (kNN) model for a randomly selected index patient. The k represents the number of nearest neighbors. The central circle represents an index patient from the test sample set. The surrounding dots represent k-nearest neighbors with DM (red) and without DM (green) from the training sample set, where the distance to the central circle corresponds to the Euclidean distance
Since the RF model provided the highest predictive performance in this study, the feature importance obtained from the RF models is presented to aid understanding of the model (Fig. 5). The top 20 important features for diabetes prediction included one demographic characteristic (age) and several laboratory tests (such as serum glucose, urine glucose, and serum chlorine). Feature importance varied with the training samples (similar or randomly selected) on which the RF models were built.
The top 20 important features for diabetes identified by the random forest (RF) models according to Gini coefficients. Dark blue and orange columns represent the RF models built on similar samples selected according to the CCS-based similarity and on randomly selected samples, respectively.
Prediction of risk for specific diseases is important in a variety of applications, including health insurance, tailored health communication, and public health [22]. In this paper, we proposed a method for predicting risk for a potential disease using a large clinical dataset collected from an EMR system. In the proposed method, classification algorithms (kNN, LR, and RF) were built to predict a patient's diabetes status based on patient similarities assessed using a multi-dimensional approach covering demographics, disease diagnoses, and laboratory tests. The investigation pipeline can easily be extended to the study of other complex and multifactorial diseases.
Because patients' disease diagnoses were an important part of EMR data and a key factor for disease prediction, we investigated two similarity measurements for disease diagnoses. One was calculated using a hierarchical similarity measure with ICD-10 disease codes, and the other using simple cosine similarity with CCS disease codes. Although the hierarchical similarity measure has been argued to be a more direct mapping of hierarchical information to distances [23], we found that predictive models built on the most similar samples selected according to patient similarity based on hierarchical similarity did not show higher performance than those based on cosine disease similarity. This suggests that narrowing ICD-10 diagnosis codes into CCS codes may be useful for presenting disease data at a descriptive statistical categorical level [16]. Therefore, feature similarity for disease diagnoses based on CCS codes and cosine similarity was more effective and efficient than that based on ICD-10 codes and hierarchical similarity in this study.
A previous study suggested that in personalized medicine, using patient similarity in data-driven analysis of patient cohorts can significantly assist physicians in making informed decisions and choosing the most appropriate clinical trial [24]. In this study, three different predictive models using similar cohorts showed consistently higher performance while using fewer training samples than models built on randomly selected samples. This finding coincides with the conclusion that similarity-based selection is better than random selection [8]. In particular, the personalized LR model showed the largest performance increase. This demonstrates that patient similarity has the potential to improve the predictive performance of machine learning models.
Furthermore, the predictive performance of both the personalized and traditional models reached a saturated level as increasing numbers of training samples were involved in the modeling, with the personalized models reaching it earlier. This finding is consistent with the conclusion of two previous studies that little was gained from using more dissimilar patients when building models [8, 25]. EMR data generally contain various kinds of noisy data (errors); here, noisy data refers to records that are irrelevant and dissimilar for a patient with the specific disease. When building personalized models, the most similar samples identified by the proposed patient similarity were used as training samples, which can be regarded as "the patients like me". In this setting, noisy data that might disturb the prediction were less likely to be selected as training samples because of their low similarity; the proposed patient similarity measurement can thus be harnessed as a de-noising method. This improved the predictive performance and overall robustness of the models to some degree. Using fewer but more similar samples, personalized predictive models can perform as well as traditional predictive models built on the entire training set. For the personalized models, as the training sample size increased, more and more samples with lower similarity were added to the training set, enlarging the overlap between the training sets of the personalized and traditional models. When the training sample size grew to the whole available training set, there was no longer any difference between similarity-based and random selection of training samples; the personalized models thus degenerated into the traditional ones, both showing the same global predictive performance.
Diabetes prediction is a challenging task because of its multifactorial characteristics and varied manifestations. Park et al. [25] applied their new knowledge discovery techniques to improve the performance of diabetes prediction, obtaining an average accuracy of 0.76. In another study of diabetes prediction [8], the best performance of the personalized models (AUC, 0.62) was obtained when the predictive model was built on 2000 similar patients. In our study, based on the proposed similarity measurement, predictive performance for diabetes improved considerably, with a highest AUC of 0.90.
There are some limitations to our research. First, when constructing the study cohort, no exclusion criterion specific to the predictive task was employed. Second, patient similarity was calculated directly, without making full use of the information provided by the large number of sample patients. Last, the performance of the proposed patient similarity measure was evaluated only for disease prediction. In future work, we will improve the similarity measurement algorithm, including learning patient similarity automatically, and apply patient similarity in other scenarios, such as patient stratification for disease sub-typing.
In this study, we proposed a comprehensive measurement of patient similarity using real-world EMR data and evaluated the effectiveness of individualized prediction of a patient's diabetes status based on that similarity. The proposed similarity measure was designed to reflect the data type and clinical meaning of each patient feature. Moreover, predictive models built on similar cohorts had consistently higher performance than those built on randomly selected samples, and performed as well as models built on the entire training set. This makes further large-scale, high-dimensional predictive applications possible at relatively lower time and space costs and with higher performance. The successful application of patient similarity in predicting a patient's diabetes status provides a useful reference for diagnostic decision support based on evidence from similar patients.
In this study, patient similarity was estimated using four types of patient information or features: age, sex, multiple laboratory test items, and multiple disease diagnoses. Similarities were first calculated at the feature level, and then combined into a single similarity measure at the patient level. The main steps of the workflow are shown in Fig. 6.
Main steps of the workflow. a Retrieving analyzed data from EMR data. b Calculation of four types of feature similarities and patient similarity. c Application of patient similarity in a personalized predictive model for future diabetes status prediction. kNN k-nearest neighbor, LR logistic regression, RF random forest, EMRs electronic medical records
Similarity calculation
Feature similarity for age
Let Agei and Agej denote the age of patients i and j, respectively. The feature similarity for age (FSA) was defined as the ratio of the smaller age value to the larger one:
$$\mathrm{FS}_{A}(i,j) = \frac{\min(\mathrm{Age}_{i},\, \mathrm{Age}_{j})}{\max(\mathrm{Age}_{i},\, \mathrm{Age}_{j})}.$$
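As a concrete illustration (not from the original paper), Eq. (1) can be computed directly; the following is a minimal Python sketch assuming ages are positive numbers:

def fs_age(age_i, age_j):
    """Feature similarity for age: ratio of the smaller age to the larger one."""
    return min(age_i, age_j) / max(age_i, age_j)

# Example: a 50-year-old and a 60-year-old have FS_A = 50/60, about 0.83
assert abs(fs_age(50, 60) - 50 / 60) < 1e-12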
Feature similarity for sex
The feature similarity for sex (FSS) between patients i and j was defined as 1 if the two patients had the same sex and 0 otherwise.
$$\mathrm{FS}_{S}(i,j) = \begin{cases} 1, & \text{if patients } i \text{ and } j \text{ had the same sex} \\ 0, & \text{otherwise.} \end{cases}$$
Feature similarity for laboratory test
All m laboratory test items had continuous values in the EMR in this study. They were first normalized so that Lxy ~ N(0,1), where Lxy represents the normalized laboratory test y for patient x. The feature similarity for laboratory tests (FSL) was defined as 1 minus the min–max-normalized Euclidean distance, as shown in Eqs. (3) to (5).
$$d_{\mathrm{lab}}(i,j) = \sqrt{(L_{i1}-L_{j1})^{2} + (L_{i2}-L_{j2})^{2} + \cdots + (L_{im}-L_{jm})^{2}}$$
$$d' = \frac{d_{\mathrm{lab}}(i,j) - \min(d_{\mathrm{lab}})}{\max(d_{\mathrm{lab}}) - \min(d_{\mathrm{lab}})}$$
$$\mathrm{FS}_{L}(i,j) = 1 - d'.$$
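The Python sketch below (an illustration, not the authors' code) computes FSL over a whole cohort, assuming the lab values are already z-scored and that the min–max normalization in Eq. (4) is taken over all off-diagonal patient pairs:

import numpy as np

def lab_similarity_matrix(L):
    """L: (n_patients, m) array of z-scored lab values (Eqs. 3-5).
    Assumes at least two distinct pairwise distances exist."""
    diff = L[:, None, :] - L[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))            # Eq. 3 for every pair
    off_diag = ~np.eye(len(L), dtype=bool)          # exclude self-distances
    d_min, d_max = d[off_diag].min(), d[off_diag].max()
    d_norm = (d - d_min) / (d_max - d_min)          # Eq. 4
    return 1.0 - d_norm                             # Eq. 5, full similarity matrix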
Feature similarity for disease diagnoses
Disease diagnoses were initially identified using ICD-10 codes [26]. In the ICD-10 code scheme, each code begins with a letter (A–Z for 22 chapters) followed by five digits, arranged in a tree-like hierarchical manner (Additional file 1: Figure S1). The letter and first three digits are usually used for statistical purposes [16]; they were, therefore, used to calculate feature similarity for disease diagnosis in this study. As an alternative to the ICD-10 code scheme, the CCS code scheme [27] collapsed ICD-10 codes into 259 diagnosis codes (numbered 1–259) with better generalization and clinical meaningfulness [16]. For example, DM was designated as ICD-10 codes E10.x–E14.x; corresponding CCS codes were 49 (DM without complications) and 50 (DM with complications).
We proposed two methods of measuring disease diagnosis similarity based on these two code schemes, which have quite different structures.
Feature similarity for disease diagnoses based on the ICD-10 code scheme
Considering the path distance between concepts (nodes) in the ICD-10 hierarchy, the similarity S(x, y) between two single codes x and y was calculated as the level of their nearest common ancestor (NCA) divided by their own level in the hierarchy, as shown in Eq. (6) [28].
$$S(x,y) = \frac{\mathrm{NCA}(x,y)}{\#\mathrm{levels}},$$
where #levels is the number of levels in the ICD-10 hierarchy system. For example, the level of ICD-10 codes E10.9 and E11.9 was 4, and the level of their NCA (i.e., E1) was 2; therefore, the similarity of the two diagnoses was calculated as 2/4 = 0.5.
Two patients were considered similar if their sets of diagnoses overlapped, and more similar if they showed a greater degree of overlap. For two ICD-10 code sets, X = {x1, x2, … xl} for patient i and Y = {y1, y2, … yn} for patient j, codes in the intersection of the two sets contribute no distance; only the non-overlapping codes are penalized when calculating similarity. The feature similarity for disease diagnosis represented by ICD-10 codes (FSD1) was defined in Eq. (7) [23]:
$$\mathrm{FS}_{\mathrm{D}1}(i,j) = 1 - \frac{1}{\left|X \cup Y\right|}\left(\sum_{x_{l} \in X \setminus Y} \frac{1}{\left|Y\right|} \sum_{y_{n} \in Y} d(x_{l},y_{n}) + \sum_{y_{n} \in Y \setminus X} \frac{1}{\left|X\right|} \sum_{x_{l} \in X} d(y_{n},x_{l})\right),$$
where $d(x,y) = d(y,x) = 1 - S(x,y)$, with $S(x,y)$ given by Eq. (6).
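A Python sketch of Eqs. (6) and (7) follows; it is illustrative only and assumes each ICD-10 code is a string such as 'E109' (letter plus three digits), with one hierarchy level per leading character, so the NCA level is the length of the shared prefix:

def s_icd(x, y, levels=4):
    """Eq. 6: code-level similarity from the nearest common ancestor (NCA)."""
    nca_level = 0
    for cx, cy in zip(x, y):
        if cx != cy:
            break
        nca_level += 1
    return nca_level / levels

def fs_d1(X, Y, levels=4):
    """Eq. 7: set-level diagnosis similarity for ICD-10 code sets X and Y."""
    X, Y = set(X), set(Y)
    d = lambda a, b: 1.0 - s_icd(a, b, levels)
    penalty = sum(sum(d(x, y) for y in Y) / len(Y) for x in X - Y)
    penalty += sum(sum(d(y, x) for x in X) / len(X) for y in Y - X)
    return 1.0 - penalty / len(X | Y)

# Reproduces the worked example above: S(E10.9, E11.9) = 2/4 = 0.5
assert s_icd('E109', 'E119') == 0.5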
Feature similarity for disease diagnoses based on the CCS code scheme
For patient i, disease diagnoses were represented by a 259-dimensional 0–1 vector X = {x1, x2, …, x259}, where xk = 1 if the patient had the disease represented by CCS code k, and 0 otherwise. The feature similarity for disease diagnosis represented by CCS codes (FSD2) was defined as the cosine similarity between the CCS code vectors X for patient i and Y for patient j (Eq. 8).
$$\mathrm{FS}_{\mathrm{D}2}(i,j) = \frac{X \cdot Y}{\lVert X \rVert\, \lVert Y \rVert} = \frac{\sum x_{n} y_{n}}{\sqrt{\sum x_{n}^{2}} \times \sqrt{\sum y_{n}^{2}}}$$
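In code, Eq. (8) is the standard cosine similarity; the sketch below (illustrative, using NumPy) also guards against the degenerate case of a patient with no recorded diagnoses:

import numpy as np

def fs_d2(x, y):
    """Eq. 8: cosine similarity between 259-dimensional 0-1 CCS vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y) / denom if denom else 0.0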
Patient similarity
The weighted sum of the four feature similarities was used as the single measure of patient similarity (PS) for patients i and j:
$$\mathrm{PS}(i,j) = w_{1}\left[\mathrm{FS}_{\mathrm{D}1}(i,j) \;\text{or}\; \mathrm{FS}_{\mathrm{D}2}(i,j)\right] + w_{2}\,\mathrm{FS}_{L}(i,j) + w_{3}\,\mathrm{FS}_{A}(i,j) + w_{4}\,\mathrm{FS}_{S}(i,j),$$
where 0 ≤ w1, …, w4 ≤ 1 (Σwi = 1) are the weights of the four feature similarities. In the current study, w1–w4 were set to 0.4, 0.4, 0.1, and 0.1, respectively, as determined experimentally in our previous study [29].
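Combining the four components is then a weighted sum; a minimal sketch, with the study's weights as defaults:

def patient_similarity(fs_d, fs_l, fs_a, fs_s, w=(0.4, 0.4, 0.1, 0.1)):
    """Weighted patient similarity; fs_d is FS_D1 (ICD-10-based) or FS_D2 (CCS-based)."""
    w1, w2, w3, w4 = w
    return w1 * fs_d + w2 * fs_l + w3 * fs_a + w4 * fs_s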
Application of patient similarity
EMR data used in this study were derived from all inpatients discharged from a tertiary hospital in Beijing, China between 2014 and 2016. Individual hospitalizations were de-identified and maintained as unique records, including age at admission, sex, disease diagnoses at discharge (up to 11), and laboratory tests during hospitalization. Disease diagnoses were identified using ICD-10 codes.
Records for patients who had disease diagnoses with ICD-10 codes starting with O (complications of pregnancy), P (certain conditions originating in the perinatal period), S and T (incidental conditions such as poisoning and injuries), and Y and V (supplementary classification codes) were excluded. In addition, for patients with more than one hospitalization (i.e., readmission), records for follow-up admissions were excluded to maintain a study dataset containing distinct patients.
In one hospitalization episode, patients do not necessarily undergo all laboratory tests, which leads to a large number of missing values in laboratory test fields and makes it more difficult to compute the feature similarity for laboratory tests. Records with many missing laboratory tests were therefore excluded from the current study. For the disease prediction task, DM (ICD-10 codes E10–E14 [30, 31]) was chosen as the target disease, and the 77 laboratory test items most relevant to DM, covering blood tests, urine tests, and electrolyte tests, were employed for the similarity computation. Records with missing values for any of these 77 laboratory test items were then excluded.
In total, 8245 patients with a diabetes diagnosis (positive samples) remained, and another 8245 patients without any diabetes diagnosis (negative samples) were randomly selected, giving a study dataset of 16,490 samples (Additional file 1: Figure S2). The mean ages of the patients with and without DM were 63.0 ± 11.6 years and 57.2 ± 17.1 years, respectively (t-test, P < 0.001). In the DM group, 5163 (62.6%) patients were male, compared with 6062 (73.5%) in the non-DM group (χ2 test, P < 0.001).
Machine learning models
For an index (test) patient with an unknown label, a personalized predictive model was built from the most similar patients in the training samples and then tested on the index patient. Predicting whether the index patient was diabetic or not was a binary classification problem. To explore the impact of the model itself on the performance of similarity-based prediction, three machine learning classification models with disparate algorithms and structures were used: kNN, LR, and RF classifiers.
In our classification setting, the kNN classifier assigned each index patient the majority class of its k (k = 50 in this study) nearest labeled neighbors, based on Euclidean distance from the training set [32]; the probability of that patient being predicted as diabetic was defined as the proportion of patients with diabetes among the k neighbors. LR is a discriminative model in machine learning, a generalized linear model with a logit link function and binomial distribution [32]; the predicted outcome of the LR classifier for the index patient was the probability of belonging to the positive class. RF [33] is an ensemble classifier consisting of many decision trees (100 trees in this study) based on random feature selection [34, 35] and bootstrap aggregation [36]; the final predicted probability of belonging to each class was obtained by combining the predictions of the individual trees.
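To make the personalized-model workflow concrete, the sketch below (an illustration assuming scikit-learn; the study itself used R) trains an RF classifier on the top-K most similar training patients for one index patient, mirroring the 100-tree setting described above:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def personalized_probability(X_train, y_train, x_index, sims, k=1000):
    """sims[i]: similarity of the index patient to training patient i,
    computed with the measure above; k = 1000 was the reported RF optimum."""
    top = np.argsort(sims)[::-1][:k]                 # most similar first
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train[top], y_train[top])
    return clf.predict_proba(x_index.reshape(1, -1))[0, 1]  # P(diabetic)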
Input features for the classification models were age, sex, disease diagnoses, and 77 laboratory tests. To reduce the dimensionality of the feature space, diseases that occurred in less than 1% of the study dataset were excluded. In total, 27 diseases with a statistically different occurrence rate between patients with and without DM (χ2 test, P < 0.05) remained for further modeling. Finally, 106 features were used as input features for the models.
We used a hold-out method to validate the predictive models. All 8245 patients with DM were split randomly into a training set of 5000 samples and a test set of 3245 samples; accordingly, 5000 and 3245 patients without DM were randomly selected as training and test samples, respectively. As a result, the final study population consisted of 16,490 samples, of which 10,000 were used as training samples and the remaining 6490 as test samples. The basic characteristics of the samples in the training and test sets are presented in Table 1, including age, sex, several major chronic diseases according to the Charlson comorbidities [37] and expert advice (such as heart disease, pulmonary disease, liver disease, and hypertension), and two laboratory test items related to diabetes diagnosis (serum glucose and urine glucose). There were no statistical differences between the two sets in these characteristics.
Table 1 The basic characteristics of samples in the test set and training set
To dynamically evaluate the potential of the proposed patient similarity for selecting similar training samples in diabetes prediction, predictive models were trained on the top K similar patients, where the smaller the sample size K, the more similar the selected training patients. Performance was then compared among the three classification models built on similar and on randomly selected samples of the same size, and the trends in predictive performance as the training sample size increased were analyzed. Predictive performance was evaluated by the AUC, and cubic polynomial fitting was used to show the AUC trends.
To help illustrate the classification process of the kNN model, the patient to be predicted and its k (k = 10, 50, and 100) nearest neighbors were visualized. Another visualization showed the top 20 important features captured by the RF models built on similar and on randomly selected patients, respectively. Feature importance was determined by Gini coefficients.
All computations and analyses were conducted using R 3.4.0 software (https://cran.r-project.org/).
AUC: area under the ROC curve
CCS: Clinical Classification Software
DM: diabetes mellitus
EMR: electronic medical record
ICD-10: International Classification of Diseases, tenth revision
kNN: k-nearest neighbor
NCA: nearest common ancestor
RF: random forest
Henriques J, Carvalho P, Paredes S, Rocha T. Prediction of heart failure decompensation events by trend analysis of telemonitoring data. IEEE J Biomed Health Inform. 2014;19(5):1757–69.
Sharafoddini A, Dubin JA, Lee J. Patient similarity in prediction models based on health data: a scoping review. JMIR Med Inform. 2017;5(1):e7.
Krysik K, Dobrowolski D, Polanowska K, Lyssek-Boron A, Wylegala EA. Measurements of corneal thickness in eyes with pseudoexfoliation syndrome: comparative study of different image processing protocols. J Healthc Eng. 2017;2017:4315238.
Lyssek-Boroń A, Wylęgała A, Polanowska K, Krysik K, Dobrowolski D. Longitudinal changes in retinal nerve fiber layer thickness evaluated using Avanti Rtvue-XR optical coherence tomography after 23G vitrectomy for epiretinal membrane in patients with open-angle glaucoma. J Healthc Eng. 2017;2017:4673714.
Chatterjee A, He D, Fan X, Antic T, Jiang Y, Eggener S, Karczmar GS, Oto A. Diagnosis of prostate cancer by use of MRI-derived quantitative risk maps: a feasibility study. Am J Roentgenol. 2019;213:1–10.
Yang C, Lu M, Duan Y, Liu B. An efficient optic cup segmentation method decreasing the influences of blood vessels. Biomed Eng Online. 2018;17(1):130.
Krysik K, Dobrowolski D, Stanienda-Sokół K, Wylegala EA, Lyssek-Boron A. Scheimpflug camera and swept-source optical coherence tomography in pachymetry evaluation of diabetic patients. J Ophthalmol. 2019;2019:1–6.
Ng K, Sun J, Hu J, Wang F. Personalized predictive modeling and risk factor identification using patient similarity. AMIA Summits Transl Sci Proc. 2015;2015:132–6.
Whellan DJ, Ousdigian KT, Alkhatib SM, Pu W, Sarkar S, Porter CB, Pavri BB, O'Connor CM, Investigators PS. Combined heart failure device diagnostics identify patients at higher risk of subsequent heart failure hospitalizations: results from PARTNERS HF (program to access and review trending information and evaluate correlation to symptoms in patients with heart failure) study. J Am Coll Cardiol. 2010;55(17):1803–10.
Sepanski RJ, Godambe SA, Mangum CD, Bovat CS, Zaritsky AL, Shah SH. Designing a pediatric severe sepsis screening tool. Front Pediatr. 2014;2(56):56.
Wu J, Roy J, Stewart W. Prediction modeling using EHR data: challenges, strategies, and a comparison of machine learning approaches. Med Care. 2010;48(6 Suppl):S106.
Shickel B, Tighe PJ, Bihorac A. Deep EHR: a survey of recent advances on deep learning techniques for electronic health record (EHR) analysis. IEEE J Biomed Health Inform. 2017;22(5):1589–604.
Marcos M, Maldonado JA, Martinez-Salvador B, Bosca D, Robles M. Interoperability of clinical decision-support systems and electronic health records using archetypes: a case study in clinical trial eligibility. J Biomed Inform. 2013;46(4):676–89.
Parimbelli E, Marini S, Sacchi L, Bellazzi R. Patient similarity for precision medicine: a systematic review. J Biomed Inform. 2018;83:87–96.
Wang C, Machiraju R, Huang K. Breast cancer patient stratification using a molecular regularized consensus clustering method. Methods. 2014;67(3):304–12.
Li L, Cheng WY, Glicksberg BS, Gottesman O, Tamler R, Chen R, Bottinger EP, Dudley JT. Identification of type 2 diabetes subgroups through topological analysis of patient similarity. Sci Transl Med. 2015;7(311):311ra174.
Panahiazar M, Taslimitehrani V, Pereira NL, Pathak J. Using EHRs for heart failure therapy recommendation using multidimensional patient similarity analytics. Stud Health Technol Inform. 2015;210:369–73.
Wang F. Adaptive semi-supervised recursive tree partitioning: the ART towards large scale patient indexing in personalized healthcare. J Biomed Inform. 2015;55:41–54.
Lee J, Maslove DM, Dubin JA. Personalized mortality prediction driven by electronic medical data and a patient similarity metric. PLoS ONE. 2015;10(5):e0127428.
David G, Bernstein L, Coifman RR. Generating evidence based interpretation of hematology screens via anomaly characterization. Open Clin Chem J. 2011;4(1):10–6.
Chattopadhyay S, Ray P, Chen HS. Suicidal risk evaluation using a similarity-based classifier. Adv Data Min Appl. 2008;5139:51–61.
Popescu M, Khalilia M. Improving disease prediction using ICD-9 ontological features. IEEE Int Conf Fuzzy Syst. 2011;56(10):1805–9.
Girardi D, Wartner S, Halmerbauer G, Ehrenmüller M, Kosorus H, Dreiseitl S. Using concept hierarchies to improve calculation of patient similarity. J Biomed Inform. 2016;63(C):66–73.
Hielscher T, Spiliopoulou M, Volzke H, Kuhn JP. Using participant similarity for the classification of epidemiological data on hepatic steatosis. In: IEEE international symposium on computer-based medical systems. Washington, D.C.: IEEE Computer Society; 2014. p. 1–7.
Park YJ, Kim BC, Chun SH. New knowledge extraction technique using probability for case-based reasoning: application to medical diagnosis. Expert Syst. 2010;23(1):2–20.
Ashley J. The international classification of diseases: the structure and content of the tenth revision. Health Trends. 1990;22(4):135.
Cowen ME, Dusseau DJ, Toth BG, Guisinger C, Zodet MW, Shyr Y. Casemix adjustment of managed care claims data using the clinical classification for health policy research method. Med Care. 1998;36(7):1108–13.
Gottlieb A, Stein GY, Ruppin E, Altman RB, Sharan R. A method for inferring medical diagnoses from patient similarities. BMC Med. 2013;11(1):194.
Huang Y, Wang N, Liu H, Zhang H, Fei X, Wei L, Chen H. Study on patient similarity measurement based on electronic medical records. Stud Health Technol Inform. 2019;264:1484–5.
Chen G, Khan N, Walker R, Quan H. Validating ICD coding algorithms for diabetes mellitus from administrative data. Diabetes Res Clin Pract. 2010;89(2):189–95.
Khokhar B, Jette N, Metcalfe A, Cunningham CT, Quan H, Kaplan GG, Butalia S, Rabi D. Systematic review of validated case definitions for diabetes in ICD-9-coded and ICD-10-coded data in adult populations. BMJ Open. 2016;6(8):e009952.
Neuvirth H, Ozery-Flato M, Hu J, Laserson J, Kohn MS, Ebadollahi S, Rosen-Zvi M. Toward personalized care management of patients at risk: the diabetes case study. In: Proceedings of the 17th ACM SIGKDD international conference on knowledge discovery and data mining; 2011. p. 395–403.
Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
Amit Y, Geman D. Shape quantization and recognition with randomized trees. Neural Comput. 1997;9(7):1545–88.
Ho T. The random subspace method for constructing decision forests. IEEE Trans Pattern Anal Mach Intell. 1998;20(8):832–44.
Breiman L. Bagging predictors. Mach Learn. 1996;24(2):123–40.
Charlson ME, Pompei P, Ales KL, Mackenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–83.
This work was supported by the National Natural Science Foundation of China (Nos. 81901707, 81671786 and 81701792).
School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing, 100069, China: Ni Wang, Yanqun Huang, Honglei Liu, Xiangkun Zhao & Hui Chen
Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing, 100069, China
Information Center, Xuanwu Hospital, Capital Medical University, No. 45 Changchun Street, Xicheng District, Beijing, 100053, China: Xiaolu Fei & Lan Wei
HC conceived the study and developed the methods; XF and LW collected data; NW and YH sorted and analyzed the data. HL and XZ drafted the manuscript; NW prepared the figures; HC provided critical review of the manuscript. All authors have reviewed the final version of the manuscript for publication. All authors read and approved the final manuscript.
Correspondence to Hui Chen.
All authors have approved the manuscript and agreed with submission and publication. The manuscript has not previously been published elsewhere and is not under consideration by any other journals.
Additional file 1: Figure S1. Partial view of the hierarchy system of the International Classification of Diseases, tenth revision. Figure S2. A flow chart of the record selection. DM, diabetes mellitus.
Wang, N., Huang, Y., Liu, H. et al. Measurement and application of patient similarity in personalized predictive modeling based on electronic medical records. BioMed Eng OnLine 18, 98 (2019) doi:10.1186/s12938-019-0718-2
Sizing and Positioning Page Elements
This guide describes how you size and position page elements using affine transforms. For a conceptual introduction to affine transforms, see the Transforms concept guide.
Transforming elements
The Slides API lets you reposition and scale elements on a page. To do this, first determine what kind of transformation needs to be applied, then apply that transform with the presentations.batchUpdate method, passing one or more UpdatePageElementTransformRequest elements.
Transforms can be made in one of two applyModes:
ABSOLUTE transforms replace the element's existing transformation matrix. Any parameters you omit from the transform update request are set to zero.
RELATIVE transforms are multiplied with the element's existing transformation matrix (the order of multiplication matters):
$$A' = BA$$
Relative transforms move or scale the page element from where it currently is; for example, moving a shape 100 points to the left, or rotating it 40 degrees. Absolute transforms discard existing position and scale information; for example, moving a shape to the center of the page, or scaling it to be a specific width.
Complex transformations can usually be expressed as a sequence of simpler ones. Precalculating a transform—combining multiple transformations using matrix multiplication—can often reduce overhead.
The order of transform operations matters—in most cases, rotating an element about a point then translating produces different results than translating it first.
For some operations, you must know what an element's existing transform parameters are. If you don't have these values, you can retrieve them with a presentations.pages.get request.
Translation is simply the action of moving a page element to a new position on the same page. Absolute translations move the element to a specific point, while relative translations move the element a specific distance.
Remember that the translation parameters specify the position of the element's upper-left corner, not its center.
A basic translation transform matrix has the form:
$$T=\begin{bmatrix} 1 & 0 & translate\_x\\ 0 & 1 & translate\_y\\ 0 & 0 & 1 \end{bmatrix}$$
When you use an UpdatePageElementTransformRequest to translate an element (without altering its size, shear, or orientation), you can use one of the following AffineTransform structures:
// Absolute translation:
'transform': {
    'scaleX': current scaleX value,
    'scaleY': current scaleY value,
    'shearX': current shearX value,
    'shearY': current shearY value,
    'translateX': X coordinate to move to,
    'translateY': Y coordinate to move to,
    'unit': 'EMU' // or 'PT'
}

// Relative translation (scaling must also be provided to avoid a matrix multiplication error):
'transform': {
    'scaleX': 1,
    'scaleY': 1,
    'translateX': X coordinate to move by,
    'translateY': Y coordinate to move by,
    'unit': 'EMU' // or 'PT'
}
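For instance, the relative translation above can be sent with the Google API client for Python. This is a minimal sketch, assuming OAuth credentials are already set up; the IDs are placeholders:

from googleapiclient.discovery import build

# service = build('slides', 'v1', credentials=creds)  # assumes authorized credentials
presentation_id = 'YOUR_PRESENTATION_ID'  # placeholder
element_id = 'YOUR_ELEMENT_ID'            # placeholder: the page element to move
requests = [{
    'updatePageElementTransform': {
        'objectId': element_id,
        'applyMode': 'RELATIVE',
        'transform': {
            'scaleX': 1,
            'scaleY': 1,
            'translateX': 100,  # move 100 points to the right
            'translateY': 0,
            'unit': 'PT'
        }
    }
}]
service.presentations().batchUpdate(
    presentationId=presentation_id, body={'requests': requests}).execute()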
Scaling is the action of stretching or squeezing an element along the X and/or Y dimension to change its size. A basic scaling transform matrix has the form:
$$S=\begin{bmatrix} scale\_x & 0 & 0\\ 0 & scale\_y & 0\\ 0 & 0 & 1 \end{bmatrix}$$
You can use this matrix form directly as a RELATIVE transform to resize an element, but this can also affect the element's rendered shear and translation. To scale the element without affecting its shear or translation, shift to its reference frame.
Rotation transforms rotate a page element around a point, using the scaling and shear parameters. The basic rotation transform matrix has the following form, where the angle of rotation (in radians) is measured from the X-axis, moving counterclockwise:
$$R=\begin{bmatrix} cos(\theta) & sin(\theta) & 0\\ -sin(\theta) & cos(\theta) & 0\\ 0 & 0 & 1 \end{bmatrix}$$
As with scaling, you can use this matrix form directly as a RELATIVE transform to rotate an element, but this causes the element to be rotated about the origin of the page. To rotate the element about its center or a different point, shift to that reference frame.
Reflection mirrors an element across a specific line or axis. The basic x- and y-axis reflection transform matrix has the following forms:
$$F_x=\begin{bmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 1\\ \end{bmatrix}\qquad\qquad F_y=\begin{bmatrix} -1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{bmatrix}$$
As with scaling, you can use this matrix form directly as a RELATIVE transform to reflect an element, but this causes the element to translate as well. To reflect the element without any translation, shift to its reference frame.
Element reference frames
Applying a basic scale, reflection, or rotation transform directly to a page element produces a transformation in the page's reference frame. For example, a basic rotation rotates the element about the page's origin (the upper-left corner). However, you can operate in the reference frame of the element itself, for example to rotate an element around its center point.
To transform an element within its own reference frame, enclose it between two other translations: a preceding translation T1 that moves the element center to the page origin, and a following translation T2 that moves the element back to its original position. The full operation can be expressed as a matrix product:
$$A' = T2 \times B \times T1 \times A$$
You can also switch to other reference frames, by translating different points to the origin instead. These points become the center of the new reference frame.
It's possible to perform each of these transformations individually as sequential RELATIVE transform requests. Ideally, though, you should precompute A' above with matrix multiplications and apply the result as a single ABSOLUTE transform, or precompute the T2 * B * T1 product and apply that as a single RELATIVE transform. Both approaches are more efficient, in terms of API operations, than sending the transform requests individually.
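As an illustration of that precomputation (not code from this guide), the NumPy sketch below builds A' for a rotation about the element's center; in the resulting matrix, the first row holds the AffineTransform fields scaleX, shearX, translateX and the second row holds shearY, scaleY, translateY:

import numpy as np

def rotate_about_center(A, theta, cx, cy):
    """Return A' = T2 @ R @ T1 @ A: rotate an element with current transform A
    by theta radians about the point (cx, cy), for a single ABSOLUTE update."""
    T1 = np.array([[1.0, 0.0, -cx], [0.0, 1.0, -cy], [0.0, 0.0, 1.0]])  # center to origin
    R = np.array([[np.cos(theta), np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])                                     # basic rotation
    T2 = np.array([[1.0, 0.0, cx], [0.0, 1.0, cy], [0.0, 0.0, 1.0]])    # move back
    return T2 @ R @ T1 @ A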
The Slides API might refactor your values
When you create a page element, you can specify a size and transform that produce a certain visual result. However, the API may replace the values you provide with other values that yield the same visual appearance. In general, if you write a size using the API, you are not guaranteed to read back the same size; you should, however, get the same results once you take the transform into account.
Renal insufficiency was correlated with 2-year mortality for rural female patients with ST-segment elevation acute myocardial infarction after reperfusion therapy: a multicenter, prospective study
Yuan Gao1,
Daming Jiang2,
Bo Zhang3,
Yujiao Sun4,
Lina Ren1,
Dandan Fan1 &
Guoxian Qi1
BMC Cardiovascular Disorders volume 15, Article number: 179 (2015)
Renal insufficiency (RI) following ST-segment elevation acute myocardial infarction (STEMI) is associated with a worse clinical prognosis. We investigated the impact of RI on long-term mortality in rural female patients with STEMI and evaluated prognostic factors.
A prospective cohort study of 436 consecutive rural female patients who were successfully treated with reperfusion therapy for STEMI between May 2009 and August 2011 in secondary care hospitals in Liaoning province northeastern China and followed up for 2 years. Patients were divided into three groups by estimated glomerular filtration rate (eGFR): Normal group, eGFR ≥90 mL/min/1.73 m2 (n = 233). Moderate group, eGFR 60–90 mL/min/1.73 m2 (n = 108). RI group, eGFR <60 mL/min/1.73 m2 (n = 95). The primary outcome was 2-year mortality.
During follow-up (mean 741 ± 118 days), the RI group had a significantly higher mortality than the other groups (24.21 % vs. 6.87 % and 10.19 %, p < 0.001). The RI group had significantly higher hospital mortality (7.37 % p = 0.045 vs. Normal group). RI increased the risk of hospital mortality (hazard ratio (HR) 1.832, 95 % CI 1.017–3.091, p = 0.033), and increased the risk of 2-year mortality (HR 3.872, 95 % CI 2.004–6.131, p < 0.001). Multivariate analysis showed eGFR <90 ml/min/1.73 m2 and age ≥75 years as independent predictors of mortality at 2 years. In detail these were eGFR 60–90 ml/min/1.73 m2 with HR 2.081, 95%CI 1.250–2.842, p < 0.001; eGFR <60 ml/min/1.73 m2 with HR 3.872, 95%CI 2.004–6.131, p < 0.001; age ≥75 with HR 1.461, 95%CI 1.011–1.952, p = 0.024.
RI had a powerful correlation with long-term mortality for rural female patients with STEMI after reperfusion therapy.
At present, the incidence of chronic kidney disease is rapidly increasing [1]. Nearly 30 % of patients with ST-segment elevation acute myocardial infarction (STEMI) have concomitant renal insufficiency (RI) [2]. Widely used early reperfusion therapy, including emergency primary percutaneous coronary intervention (PCI) and thrombolysis, has beneficial effects in STEMI [3, 4]. However, RI following STEMI is associated with a worse clinical prognosis [5–7], a 6- to 11-fold increase in the in-hospital risk of death [8], and a 1.76- to 6.18-fold increase in the 7-month risk of death [6]. Unfortunately, most STEMI patients with RI are excluded from randomized trials. Renal insufficiency may alter lipid metabolism, cause vascular endothelial injury and dysfunction, trigger inflammatory responses, coagulation, and oxidative stress, and accelerate atherosclerosis through activation of the sympathetic and neurohormonal pathways and the renin-angiotensin-aldosterone axis [9–11].
Most clinical studies of myocardial infarction involve only a minority of female patients; for example, women accounted for 29.6 % of the total enrolled patients in the Korea acute myocardial infarction registry study [7]. This is of concern because acute myocardial infarction mortality is higher in females than in males: while the risk of death has declined in men, the rate in women has remained fairly constant [12]. The risk of in-hospital mortality after primary PCI is also significantly higher for females than males [13], and female patients with STEMI show significantly greater death rates than males [14], with younger females at much higher risk than males of the same age [15].
In China the rates of mortality due to cardiac disease are growing, and while they are highest in urban areas, the rate in rural areas is increasing more rapidly [16]. Therefore, female patients with STEMI complicated by RI who reside in rural areas are an often neglected population that may be at high risk of death resulting from their condition. Little is known of the impact of RI on the prognosis in rural female patients with STEMI regardless of reperfusion therapy in Liaoning province in northeastern China.
The objective of this study was to determine the association between RI and the risk of death in STEMI patients successfully treated with PCI or thrombolytic therapy. We hypothesized that RI would be associated with higher 2-year mortality. The results of our prospective cohort study provide convincing evidence of this association in a real world situation.
This was a prospective, multicenter study conducted at 16 hospitals in the Liaoning Province of northeast China from May 2009 to August 2011. The 16 hospitals were: First Affiliated Hospital, China Medical University; First Affiliated Hospital, Dalian Medical University; Changtu xian People's Hospital; Fuxin Mongolian Autonomous County People's Hospital; Yixian People's Hospital; Benxi Steel Company Hospital; Fushun Coal Mining Administration Hospital; Chaoyang Center Hospital; Fuxin Center Hospital; Zhuanghe Center Hospital; Wafangdian Center Hospital; Pulandian Center Hospital; Donggang Center Hospital; Dashiqiao Center Hospital; Zhangwu xian People's Hospital; Kuandian xian Center Hospital.
This study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of China Medical University. Written informed consent was obtained from all participants.
We enrolled 479 consecutive rural female STEMI patients from May 2009 to August 2011 from all of the centers. The inclusion criteria were: (1) STEMI was diagnosed according to European Society of Cardiology (ESC) criteria [3]; (2) it was the first time STEMI was diagnosed; (3) all patients were given primary PCI treatment within 12 h or thrombolytic therapy within 6 h after symptom onset. The exclusion criteria were: (1) acute myocardial infarction patients with acute kidney injury (AKI); (2) patients undergoing dialysis treatment. (3) patients whose non-infarct-related artery was treated during primary PCI; (4) PCI was undertaken after thrombolytic therapy.
Acute renal failure was diagnosed by an increase in serum creatinine levels of 50 % or an absolute increase of ≥26.5 μmol/l in 48 h [17].
Demographic and basic clinical data were obtained from all patients, which included age, gender, body mass index and cardiovascular risk factors. Additional clinical data including clinical laboratory tests, coronary imaging data, therapeutic strategies and adverse cardiac events were collected by trained personnel.
eGFR measurement
Kidney function was measured by the estimated glomerular filtration rate (eGFR), calculated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation [18]. For females with serum creatinine (Scr) ≤ 0.7 mg/dL:
$$\mathrm{eGFR}_{\text{CKD-EPI}} = 144 \times \left(\frac{\mathrm{Scr}\ (\mathrm{mg/dL})}{0.7}\right)^{-0.329} \times 0.993^{\,\mathrm{age\ (years)}}$$

If female and Scr > 0.7 mg/dL:

$$\mathrm{eGFR}_{\text{CKD-EPI}} = 144 \times \left(\frac{\mathrm{Scr}\ (\mathrm{mg/dL})}{0.7}\right)^{-1.209} \times 0.993^{\,\mathrm{age\ (years)}}.$$
Renal insufficiency (RI) was defined as eGFR <60 mL/min/1.73 m2 according to the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines [19]. eGFR was calculated from the Scr results obtained immediately after admission.
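As an illustration (not part of the study protocol), the female-specific CKD-EPI calculation above can be written in Python; scr_mg_dl is serum creatinine in mg/dL:

def egfr_ckd_epi_female(scr_mg_dl, age_years):
    """eGFR (mL/min/1.73 m^2) for females, per the two equations above."""
    exponent = -0.329 if scr_mg_dl <= 0.7 else -1.209
    return 144 * (scr_mg_dl / 0.7) ** exponent * 0.993 ** age_years

# A 75-year-old woman with Scr = 1.2 mg/dL falls below the RI cutoff of 60
print(egfr_ckd_epi_female(1.2, 75))  # about 44 mL/min/1.73 m^2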
The patients were divided into three groups based on their eGFR values: the Normal group (high eGFR, eGFR ≥90 mL/min/1.73 m2, n = 233), the Moderate group (middle eGFR, 60 ≤ eGFR <90 mL/min/1.73 m2, n = 108), which had moderate renal impairment, and the RI group (low eGFR, eGFR <60 mL/min/1.73 m2, n = 95).
Successful PCI
PCI was considered successful if the residual stenosis of the infarct-related artery was <10 % and Thrombolysis in Myocardial Infarction (TIMI) flow reached grade 3 [20]. All patients were evaluated immediately after PCI.
Successful thrombolytic therapy
For thrombolytic therapy, 1.5 million units of urokinase were administered by intravenous infusion within 30 min.
Thrombolytic therapy was considered successful if more than 2 of the following 4 criteria were met [9]: ST-segment resolution of ≥50 % on the electrocardiogram (ECG) within 2 h; disappearance of chest pain within 2 h; occurrence of reperfusion arrhythmia; and early peaking of serum myocardial enzymes.
All patients received the recommended standard management for STEMI, including 300 mg loading dose of aspirin and clopidogrel after admission, aspirin, clopidogrel, low-molecular-weight heparin, beta-blockers, statins and angiotensin-converting enzyme (ACE) inhibitors/ angiotensin receptor antagonist (ARB) as appropriate. The details of the medications are included in Appendix 1.
Definition of risk factors
Abnormal body mass index (BMI) was defined as BMI ≥ 25 kg/m2 [21]. Diabetes was diagnosed by previous medical history or fasting glucose ≥ 7.0 mmol/L and/or a 2-h plasma glucose level ≥ 11.1 mmol/L (measured after a 75-g oral glucose load). Hypertension was diagnosed by previous medical history or systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg after admission. Hypercholesterolemia was diagnosed by previous medical history or low-density lipoprotein (LDL) cholesterol ≥ 2.6 mmol/L and/or non-high-density lipoprotein (HDL) cholesterol ≥ 3.3 mmol/L. Contrast-induced nephropathy (CIN) was defined as an increase in serum creatinine ≥ 0.5 mg/dL occurring 48 h after exposure to contrast media [22]. Current smoker was defined as smoking >300 cigarettes/year.
Major adverse cardiac events (MACE) included: death, recurrent myocardial infarction, target vessel revascularization and stroke.
Key bleeding end points were analyzed on the basis of global use of strategies to open occluded coronary arteries (GUSTO) criteria [23].
CIN (contrast-induced nephropathy) was alternatively defined as either a greater than 25 % increase in serum creatinine or an absolute increase in serum creatinine of 0.5 mg/dL [24].
Clinical examination and laboratory analysis
Patients underwent physical examination, ECG, and fasting blood biochemical examination after admission, including Scr, serum creatine kinase MB (CKMB), and cardiac troponin I (cTnI). Myocardial necrosis markers and the ECG were re-examined every 8 h for the first 72 h and every 24 h thereafter. Blood leukocyte counts were examined 24 h after admission, and echocardiography was performed 48 h after admission.
To ensure that the standards of data collection, laboratory procedures, data management, and coordination among the multiple participating centers met our quality control requirements, all clinicians involved in this study received uniform training before the research began.
The patients were followed up by telephone for 2 years, once per year, by the same doctor.
Statistical method
Categorical data were expressed as absolute numbers and percentages and analyzed using the χ2 test. Continuous data with a normal distribution were described as mean and standard deviation (SD) or as median and interquartile range (IQR; 25th to 75th percentiles). Data among groups were compared using ANOVA, with further Student–Newman–Keuls (SNK) analyses between the three groups. Non-normally distributed data were compared among groups using rank tests.
Corresponding Kaplan–Meier curves with the log-rank test were constructed. Univariate analyses of eGFR, Scr, microalbuminuria, age, gender, diabetes, hypertension, hyperlipidemia, Killip class, heart rate (HR), ejection fraction (EF), white blood cell (WBC) counts, and medication treatment were performed to determine predictors of mortality. A multiple Cox proportional hazards model was used to estimate associations for the significant factors identified in the univariate analysis. Hazard ratios (HR) and 95 % confidence intervals (CI) were calculated, and a p value <0.05 was considered statistically significant. All analyses were performed using SPSS version 19.0 (SPSS Inc., Chicago, IL, USA).
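The original analyses used SPSS; as a rough illustration of the same multivariate step in open-source tooling, the sketch below uses the Python lifelines package on a toy data frame (all column names and values are hypothetical, for shape only):

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data: follow-up time (days), death indicator,
# and dummy-coded predictors retained from the univariate screen
df = pd.DataFrame({
    'time':       [741, 320, 730, 95, 730, 540],
    'death':      [0, 1, 0, 1, 0, 1],
    'egfr_60_90': [0, 1, 0, 0, 1, 0],
    'egfr_lt_60': [0, 0, 0, 1, 0, 1],
    'age_ge_75':  [0, 0, 1, 1, 0, 1],
})
cph = CoxPHFitter()
cph.fit(df, duration_col='time', event_col='death')
print(cph.summary[['exp(coef)', 'p']])  # hazard ratios and p values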
The flowchart showing inclusion of patients in the study is presented in Fig. 1. In total, 479 patients were enrolled in the study, of whom 24 were excluded, including 8 patients with incomplete data, 4 for whom clinical data were incomplete, and 4 whose laboratory data were incomplete. Thus, 455 patients were included, and 19 patients were lost to follow-up. Finally, 436 patients were analyzed: 233 in the Normal group, 108 in the Moderate group, and 95 in the RI group (Fig. 1).
Flow chart of the selection of the study population and allocation into groups according to estimated glomerular filtration rate
Study sample and characteristics
The mean follow-up period was 741 ± 118 days. Among the 436 individuals in the final cohort, the mean age was 67.52 years, and 35.78 % had diabetes (Table 1). Elderly patients and those with hypertension and diabetes accounted for a high proportion of patients in the RI group. Median symptom to door time, door to balloon time and door to needle time were 183, 134 and 56 min, respectively. In the three groups the mean ages were 61.11 ± 8.42 years in the Normal group, 64.08 ± 6.91 years in the Moderate group and 75.57 ± 7.53 years in the RI group with a significant difference between all of the groups (p < 0.05). The diabetes rates were 29.61 % in the Normal group, 37.96 % in the Moderate group and 48.42 % in the RI group with a significant difference between the Normal and RI groups (p < 0.05). The hypertension rates were 32.19 % in the Normal group 44.44 % in the Moderate group and 52.63 % in the RI group with a significant difference between the Normal and RI group (p < 0.05). There were higher proportions in the RI group of Killip ≥ 2 at 21.05 % compared to 9.26 % (p < 0.05) in the Moderate group and 9.01 %, (p < 0.05) in the Normal group, longer hospital stay at 12.05 ± 5.74 days compared to 8.36 ± 5.11 days (p < 0.05) in the Moderate group and 8.53 ± 4.78 days (p < 0.05) in the Normal group, low EF values at 44.38 ± 13.05 % compared to 48.92 ± 14.02 % (p < 0.05) in the Moderate group and 50.11 ± 13.60 % (p < 0.05) in the Normal group, high Scr values at 116.67 ± 59.01 mmol/l compared to 85.47 ± 44.90 mmol/l (p < 0.05) in the Moderate group and 78.12 ± 31.13 mmol/l (p < 0.05) in the Normal group, and cTnI peak at 45.33 ± 15.26 ng/ml compared to 30.19 ± 18.73 ng/ml (p < 0.05) for the Moderate group and 31.06 ± 16.28 ng/ml for the Normal group (p < 0.05) (Table 1). There were 298 patients who underwent primary PCI and 138 thrombolytic therapy.
Table 1 Baseline characteristics
Clinical outcomes
The mortality rates of the patients in the Normal, Moderate, and RI groups during hospitalization were 1.72 %, 3.7 %, and 7.37 %, respectively. Patients in the RI group had a significantly higher mortality rate than the Normal group during hospitalization (p = 0.045). There were no significant differences in RMI, TVR, stroke, or bleeding among the three groups. In-hospital MACE developed more frequently in the RI group than in the Normal group (p = 0.022), but there was no significant difference in the incidence of CIN (Table 2).
Table 2 Outcomes of patients according to eGFR group
A similar trend was observed during 1 year of follow-up after hospital discharge. Patients in the RI group had a significantly higher mortality rate compared with those in the Normal group (16.84 % vs. 4.29 %, p < 0.001), and a significantly higher MACE rate compared with the other two groups (p < 0.001) (Table 2).
During 2 years of follow-up, patients in the RI group had a significantly higher mortality rate than the other two groups (24.21 % vs. 6.87 % and 10.19 %, p < 0.001). There were no significant differences in recurrent myocardial infarction (RMI), target vessel revascularization (TVR), stroke, or bleeding among the three groups. Compared with the Normal group (22.32 %), the Moderate and RI groups had higher MACE rates (36.11 % and 52.63 %, respectively; p < 0.001) (Table 2).
During the 2 years of follow-up, there were 1 case of moderate bleeding and 4 cases of minor bleeding in the RI group, and 3 cases of minor bleeding in each of the other two groups.
Risk factors of 2 year mortality
Variables were analyzed by univariate analysis for significant predictors of 2-year mortality (Table 3). This suggested that an eGFR of less than 90 ml/min/1.73 m2 may be a predictor of 2-year mortality, as both eGFR 60–90 ml/min/1.73 m2 and eGFR <60 ml/min/1.73 m2 were significant (both p < 0.001). Age ≥75 years was another significant factor (p = 0.049), as were hypertension (p = 0.013), Killip class ≥2 (p = 0.023), and EF <40 % (p = 0.025). The Kaplan–Meier survival curves are depicted in Fig. 2. The survival rate of the RI group was significantly lower than in the other two groups (log-rank test, p < 0.001).
Table 3 Univariate and multivariate analysis for prediction of 2-year mortality
Kaplan-Meier curve survival analysis of the three groups of patients to 2-year post treatment. A represents the Normal group, B represents the Moderate group and C represents the RI group
Multivariate analysis then identified eGFR <90 ml/min/1.73 m2 and age ≥75 years as independent predictors of mortality at 2 years. In detail these were eGFR 60–90 ml/min/1.73 m2 with HR 2.081, 95%CI 1.250–2.842, p < 0.001; eGFR <60 ml/min/1.73 m2 with HR 3.872, 95%CI 2.004–6.131, p < 0.001; age ≥75 with HR 1.461, 95%CI 1.011–1.952, p = 0.024.
Risk factors for in hospital mortality
All variables were also entered into univariate analysis to investigate predictors of in-hospital mortality; eGFR <60 ml/min/1.73 m2 and Killip class ≥2 were significant (Table 4). Further analysis of the significant factors by multiple Cox proportional hazards modeling identified the independent predictors of in-hospital mortality as eGFR <60 ml/min/1.73 m2 (HR 1.832, 95% CI 1.017–3.091, p = 0.033) and Killip class ≥2 (HR 1.340, 95% CI 1.012–1.647, p = 0.018).
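Because in-hospital mortality is observed over a short, essentially complete window with little censoring, a multiple logistic regression is a common alternative to the Cox model the authors used. The sketch below is illustrative only (hypothetical column names), not a reanalysis of these data.

```python
# Illustrative alternative: in-hospital death as a multiple logistic regression.
import numpy as np
import statsmodels.formula.api as smf

fit = smf.logit("died_in_hospital ~ egfr_lt60 + killip_ge2", data=df).fit()
print(fit.summary())
print(np.exp(fit.params))  # odds ratios, analogous to the HRs reported above
```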
Table 4 Univariate and multivariate analysis for prediction of in hospital mortality
The aim of this study was to investigate the effect of RI on mortality in female rural patients with STEMI. We demonstrated that in-hospital and 2-year mortality in the RI group (eGFR <60 ml/min/1.73 m2) were significantly higher than in the other two groups, and that RI was an independent predictor of in-hospital and 2-year mortality in female STEMI patients after emergency reperfusion therapy. Compared with the Normal group, the RI group had a 1.8-fold higher risk of death during hospitalization and a 3.9-fold higher risk of 2-year mortality, and the Moderate group (eGFR 60–89 ml/min/1.73 m2) had a 2.1-fold higher risk of 2-year mortality. A decreased eGFR was thus associated with a higher risk of death. The most important finding of our study is that the inclusion of RI in the risk model improves risk stratification in female patients with STEMI in rural areas. In-hospital mortality in the RI group was 7.37%; another study reported a similar mortality rate of 7.69% [25]. It has also previously been confirmed that RI is an independent risk factor for poor short- and long-term prognosis in female STEMI patients [26].
Renal dysfunction is an independent risk factor for death in patients with STEMI [27], and patients with renal dysfunction have been shown to have a 6- to 11-fold higher in-hospital mortality rate than patients with normal renal function [8]. These patients more commonly develop low left ventricular ejection fraction, higher Killip class, cardiogenic shock, hemodynamic instability or malignant arrhythmia during admission [28]. Previous studies have also found that the risk of MACE and cardiac death at both 1 month and 1 year increases with lower eGFR [5]. Thus, the results of our study agree with previous research.
Comparison between the three groups showed higher proportions of Killip class ≥2, longer hospital stays, lower LVEF values, and higher serum creatinine levels and cTnI peaks in the RI group; these patients had larger myocardial infarction areas and worse heart function, which may be related to the high mortality in the RI group. The Killip class is a useful prognostic tool for predicting in-hospital mortality [29], and our results support this, as Killip class ≥2 was also a predictive factor for in-hospital mortality by multivariate analysis. Another recent study of RI in STEMI also found a higher Killip class in patients with RI, and those patients likewise showed increased mortality [30]. That study identified that RI patients were more likely to be female and older, and more likely to have diabetes mellitus and hypertension [30]; our study also found age, diabetes and hypertension to be higher in the RI group. Increased Killip score and lower LVEF were also significant in RI patients in a study evaluating the in-hospital outcome of patients with acute STEMI [31], and mean LVEF was likewise decreased in the RI patients in our study. For 2-year mortality, the RI group was at much higher risk than the other two groups in this study, and the only other predictive factor was age ≥75 years, which was possibly an expected result [32].
We found that there were no long-term standardized medication regimens after hospital discharge in the RI group, and dual antiplatelet therapy was used less in hospital in this group. Underuse of antiplatelet therapy, ACE inhibitors, β-blockers and statins might also be related to a reduced survival rate in patients with renal insufficiency, as discontinuation of cardiac medication is itself associated with an increased risk of mortality [33]. This may be another factor contributing to the high mortality in the RI group.
Renal dysfunction has been associated with a 3-fold increased odds of discontinuation of antiplatelet drugs in patients undergoing PCI [34]. Reasons for the shorter duration of antiplatelet therapy and the discontinuation observed in these studies and ours may include bleeding events, scheduled invasive procedures, psychiatric drug use, unemployment, patient choice and non-adherence, and other unspecified medical events, including earlier mortality. It has been reported that AMI patients with decreased GFR may receive less aggressive evidence-based therapies than patients with normal renal function [35].
This study has some limitations. Although prospective and multicenter, it had a limited sample size because of the limited number of rural patients. Patients in the RI group were not subdivided into eGFR 30–60 ml/min/1.73 m2 and eGFR <30 ml/min/1.73 m2, which would have provided information on the severity of RI. We were also unable to provide mean eGFR values for the groups because some of the raw data were lost, so we had to rely on the grouping ranges for our analysis. We did not include a control group without cardiac disease to investigate whether these results relating RI to mortality would be similar in patients without STEMI. In addition, we did not address the relationship between cardiac disease and kidney disease in these patients. Further analysis of additional pathophysiological factors, such as biomarkers of cardiac damage (for example, troponin), would provide important information on the relationship between cardiac and kidney diseases.
In this real-world prospective multicenter study of female rural patients with STEMI treated with thrombolytic therapy or primary PCI, we found that RI was an independent risk factor for in-hospital and long-term mortality and was associated with poor prognosis. RI could be one of the better indices for clinical risk stratification.
The data set supporting the results of this article is included within the article.
ARB:
angiotensin receptor antagonist
CI:
confidence interval
CIN:
contrast-induced nephropathy
CKD-EPI:
chronic kidney disease epidemiology collaboration
CKMB:
creatine kinase MB
cTnI:
cardiac troponin I
EF:
ejection fraction
eGFR:
estimated glomerular filtration rate
GUSTO:
global use of strategies to open occluded coronary arteries
HDL:
high-density lipoprotein
KDIGO:
kidney disease: improving global outcomes
LDL:
low-density lipoprotein
MACE:
major adverse cardiac events
MDRD:
modification of diet in renal disease
PCI:
percutaneous coronary intervention
RI:
renal insufficiency
RMI:
recurrent myocardial infarction
Scr:
serum creatinine
STEMI:
ST-segment elevation acute myocardial infarction
TIMI:
thrombolysis in myocardial infarction
TVR:
target vessel revascularization
WBC:
white blood cell
Ojo A. Addressing the global burden of chronic kidney disease through clinical and translational research. Trans Am Clin Climatol Assoc. 2014;125:229–43. discussion 43–6.
Fox CS, Muntner P, Chen AY, Alexander KP, Roe MT, Cannon CP, et al. Use of evidence-based therapies in short-term outcomes of ST-segment elevation myocardial infarction and non-ST-segment elevation myocardial infarction in patients with chronic kidney disease: a report from the National Cardiovascular Data Acute Coronary Treatment and Intervention Outcomes Network registry. Circulation. 2010;121(3):357–65.
Task Force on the management of ST-segment elevation acute myocardial infarction of the European Society of Cardiology (ESC), Steg PG, James SK, Atar D, Badano LP, Blomstrom-Lundqvist C, et al. ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation. Eur Heart J. 2012;33(20):2569–619.
Said S, Hernandez GT. The link between chronic kidney disease and cardiovascular disease. J Nephropathol. 2014;3(3):99–104.
Anavekar NS, McMurray JJ, Velazquez EJ, Solomon SD, Kober L, Rouleau JL, et al. Relation between renal dysfunction and cardiovascular outcomes after myocardial infarction. N Engl J Med. 2004;351(13):1285–95.
Masoudi FA, Plomondon ME, Magid DJ, Sales A, Rumsfeld JS. Renal insufficiency and mortality from acute coronary syndromes. Am Heart J. 2004;147(4):623–9.
Bae EH, Lim SY, Cho KH, Choi JS, Kim CS, Park JW, et al. GFR and cardiovascular outcomes after acute myocardial infarction: results from the Korea Acute Myocardial Infarction Registry. Am J Kidney Dis. 2012;59(6):795–802.
Kim JY, Jeong MH, Ahn YK, Moon JH, Chae SC, Hur SH, et al. Decreased glomerular filtration rate is an independent predictor of In-Hospital Mortality in patients with ST-segment elevation myocardial infarction undergoing Primary percutaneous coronary intervention. Korean Circ J. 2011;41(4):184–90.
Choi JH, Kim KL, Huh W, Kim B, Byun J, Suh W, et al. Decreased number and impaired angiogenic function of endothelial progenitor cells in patients with chronic renal failure. Arterioscler Thromb Vasc Biol. 2004;24(7):1246–52.
Shlipak MG, Heidenreich PA, Noguchi H, Chertow GM, Browner WS, McClellan MB. Association of renal insufficiency with treatment and outcomes after myocardial infarction in elderly patients. Ann Intern Med. 2002;137(7):555–62.
Napoli C, Casamassimi A, Crudele V, Infante T, Abbondanza C. Kidney and heart interactions during cardiorenal syndrome: a molecular and clinical pathogenic framework. Future Cardiol. 2011;7(4):485–97.
Gulati M, Shaw LJ, Bairey Merz CN. Myocardial ischemia in women: lessons from the NHLBI WISE study. Clin Cardiol. 2012;35(3):141–8.
Pancholy SB, Shantha GP, Patel T, Cheskin LJ. Sex differences in short-term and long-term all-cause mortality among patients with ST-segment elevation myocardial infarction treated by primary percutaneous intervention: a meta-analysis. JAMA Intern Med. 2014;174(11):1822–30.
D'Ascenzo F, Gonella A, Quadri G, Longo G, Biondi-Zoccai G, Moretti C, et al. Comparison of mortality rates in women versus men presenting with ST-segment elevation myocardial infarction. Am J Cardiol. 2011;107(5):651–4.
Zheng X, Dreyer RP, Hu S, Spatz ES, Masoudi FA, Spertus JA, et al. Age-specific gender differences in early mortality following ST-segment elevation myocardial infarction in China. Heart. 2014;101(5):349–55.
Jiang G, Wang D, Li W, Pan Y, Zheng W, Zhang H, et al. Coronary heart disease mortality in China: age, gender, and urban-rural gaps during epidemiological transition. Rev Panam Salud Publica. 2012;31(4):317–24.
Mehta RL, Kellum JA, Shah SV, Molitoris BA, Ronco C, Warnock DG, et al. Acute Kidney Injury Network: report of an initiative to improve outcomes in acute kidney injury. Crit Care. 2007;11(2):R31.
Levey AS, Stevens LA, Schmid CH, Zhang YL, Castro 3rd AF, Feldman HI, et al. A new equation to estimate glomerular filtration rate. Ann Intern Med. 2009;150(9):604–12.
Kidney Disease: Improving Global Outcomes (KDIGO) Acute Kidney Injury Work Group. KDIGO Clinical Practice Guideline for Acute Kidney Injury. Kidney Int Suppl. 2012;2:1–138.
Levine GN, Bates ER, Blankenship JC, Bailey SR, Bittl JA, Cercek B, et al. 2011 ACCF/AHA/SCAI Guideline for Percutaneous Coronary Intervention: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Society for Cardiovascular Angiography and Interventions. Circulation. 2011;124(23):e574–651.
WHO Expert Consultation. Appropriate body-mass index for Asian populations and its implications for policy and intervention strategies. Lancet. 2004;363(9403):157–63.
Rihal CS, Textor SC, Grill DE, Berger PB, Ting HH, Best PJ, et al. Incidence and prognostic importance of acute renal failure after percutaneous coronary intervention. Circulation. 2002;105(19):2259–64.
An international randomized trial comparing four thrombolytic strategies for acute myocardial infarction. The GUSTO investigators. N Engl J Med. 1993;329(10):673-82.
Barrett BJ, Parfrey PS. Clinical practice. Preventing nephropathy induced by contrast medium. N Engl J Med. 2006;354(4):379–86.
Liu Y, Gao L, Xue Q, Yan M, Chen P, Wang Y, et al. Impact of renal dysfunction on long-term outcomes of elderly patients with acute coronary syndrome: a longitudinal, prospective observational study. BMC Nephrol. 2014;15:78.
Go AS, Chertow GM, Fan D, McCulloch CE, Hsu CY. Chronic kidney disease and the risks of death, cardiovascular events, and hospitalization. N Engl J Med. 2004;351(13):1296–305.
Rodrigues FB, Bruetto RG, Torres US, Otaviano AP, Zanetta DM, Burdmann EA. Effect of kidney disease on acute coronary syndrome. Clin J Am Soc Nephrol. 2010;5(8):1530–6.
Parikh CR, Coca SG, Wang Y, Masoudi FA, Krumholz HM. Long-term prognosis of acute kidney injury after acute myocardial infarction. Arch Intern Med. 2008;168(9):987–95.
de Mello BH, Oliveira GB, Ramos RF, Lopes BB, Barros CB, Carvalho Ede O, et al. Validation of the Killip-Kimball classification and late mortality after acute myocardial infarction. Arq Bras Cardiol. 2014;103(2):107–17.
Sabroe JE, Thayssen P, Antonsen L, Hougaard M, Hansen KN, Jensen LO. Impact of renal insufficiency on mortality in patients with ST-segment elevation myocardial infarction treated with primary percutaneous coronary intervention. BMC Cardiovasc Disord. 2014;14:15.
Pasha K, Ali MA, Habib MA, Debnath RC, Islam MN. In-hospital outcome of patients with acute STEMI with impaired renal function. Mymensingh Med J. 2011;20(3):425–30.
Newell MC, Henry JT, Henry TD, Duval S, Browning JA, Christiansen EC, et al. Impact of age on treatment and outcomes in ST-elevation myocardial infarction. Am Heart J. 2011;161(4):664–72.
Ivers NM, Schwalm JD, Grimshaw JM, Witteman H, Taljaard M, Zwarenstein M, et al. Delayed educational reminders for long-term medication adherence in ST-elevation myocardial infarction (DERLA-STEMI): protocol for a pragmatic, cluster-randomized controlled trial. Implement Sci. 2012;7:54.
Ferreira-Gonzalez I, Marsal JR, Ribera A, Permanyer-Miralda G, Garcia-Del Blanco B, Marti G, et al. Background, incidence, and predictors of antiplatelet therapy discontinuation during the first year after drug-eluting stent implantation. Circulation. 2010;122(10):1017–25.
Coca SG, Krumholz HM, Garg AX, Parikh CR. Underrepresentation of renal disease in randomized controlled trials of cardiovascular disease. JAMA. 2006;296(11):1377–84.
Department of Cardiology, First Affiliated Hospital of China Medical University, Shenyang, Liaoning, 110001, China
Yuan Gao, Lina Ren, Dandan Fan & Guoxian Qi
Department of Cardiology, Dandong Center Hospital, Dandong, Liaoning, 118000, China
Daming Jiang
Department of Cardiology, First Affiliated Hospital, Dalian Medical University, Dalian, Liaoning, 116011, China
Bo Zhang
Department of Geriatric Cardiology, First Affiliated Hospital of China Medical University, Shenyang, Liaoning, 110001, China
Yujiao Sun
Correspondence to Yuan Gao.
YG participated in literature search, study design, data collection, data analysis, data interpretation and wrote the manuscript. DMJ, BZ, YJS, LNR and DDF participated in clinical examination and laboratory analysis. GXQ conceived of the study, and participated in its design and coordination and provided the critical revision. All authors read and approved the final manuscript.
Details of medication
There were no significant differences in the administration of aspirin, ACEI/ARB, statins, β-blockers, LMWH or traditional Chinese medicine between the three groups, but clopidogrel and dual antiplatelet therapy were used less in the RI group than in the Normal group during hospitalization. The patients were followed up for 2 years. We found that clopidogrel, dual antiplatelet therapy, ACEI/ARB, statins and β-blockers were used less, whereas traditional Chinese medicine was used more, in the RI group during follow-up (Appendix 1).
Table 5 Medication use
Gao, Y., Jiang, D., Zhang, B. et al. Renal insufficiency was correlated with 2-year mortality for rural female patients with ST-segment elevation acute myocardial infarction after reperfusion therapy: a multicenter, prospective study. BMC Cardiovasc Disord 15, 179 (2015). https://doi.org/10.1186/s12872-015-0174-2
The Forbes Group
$\newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\uvect}[1]{\hat{#1}} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\norm}[1]{\lVert#1\rVert} \newcommand{\I}{\mathrm{i}} \newcommand{\ket}[1]{\left|#1\right\rangle} \newcommand{\bra}[1]{\left\langle#1\right|} \newcommand{\braket}[1]{\langle#1\rangle} \newcommand{\op}[1]{\mathbf{#1}} \newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\d}{\mathrm{d}} \newcommand{\pdiff}[3][]{\frac{\partial^{#1} #2}{\partial {#3}^{#1}}} \newcommand{\diff}[3][]{\frac{\d^{#1} #2}{\d {#3}^{#1}}} \newcommand{\ddiff}[3][]{\frac{\delta^{#1} #2}{\delta {#3}^{#1}}} \DeclareMathOperator{\erf}{erf} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\order}{O} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\sech}{sech} $
Quantum Dynamics from Cold Atoms to Neutron Stars
In our group we study the dynamical properties of quantum many-body systems, ranging from cold atoms trapped in one of the coolest places in the universe to neutron stars, where matter is compressed to such extremes that a teaspoon would weigh more than a mountain.
super_hydro: Exploring Superfluids¶
Nobel laureate Richard Feynman said: "I think I can safely say that nobody really understands quantum mechanics". Part of the reason is that quantum mechanics describes physical processes in a regime that is far removed from our everyday experience – namely, when particles are extremely cold and move so slowly that they behave more like waves than like particles.
This application attempts to help you develop an intuition for quantum behavior by exploiting the property that collections of extremely cold atoms behave as a fluid – a superfluid – whose dynamics can be explored in real-time. By allowing you to interact and play with simulations of these superfluids, we hope you will have fun and develop an intuition for some of the new features present in quantum systems, taking one step closer to developing an intuitive understanding of quantum mechanics.
Beyond developing an intuition for quantum mechanics, this project provides an extensible and general framework for running interactive simulations capable of sufficiently high frame-rates for real-time interaction, using high-performance computing techniques such as GPU acceleration. The framework is easily extended to any application whose main interface is a 2D density plot, including many fluid-dynamical simulations.
Negative-Mass Hydrodynamics¶
Negative mass is a peculiar concept. Counter to everyday experience, an object with negative effective mass will accelerate backward when pushed forward. This effect is known to play a crucial role in many condensed matter contexts, where a particle's dispersion can have a rather complicated shape as a function of lattice geometry and doping. In our work we show that negative mass hydrodynamics can also be investigated in ultracold atoms in free space and that these systems offer powerful and unique controls.
Michael McNeil Forbes¶
Our universe is an incredible place. Despite its incredible diversity and apparent complexity, an amazing amount of it can be described by relatively simple physical laws referred to as the Standard Model of particle physics. Much of this complexity "emerges" from the interaction of many simple components. Characterizing the behaviour of "many-body" systems forms a focus for much of my research, with applications ranging from some of the coldest places in the universe (cold atom experiments here on earth) to nuclear reactions, the cores of neutron stars, and the origin of matter in our universe.
Kyle Elsasser¶
Kyle was raised on a farm in the mountains of northern Idaho before becoming a Nuclear Reactor Technician on US Navy submarines and later a firefighter/EMT back in his hometown. His curiosity got the better of him and he attended Eastern Washington University, completing bachelor's degrees in Physics and Mathematics in 2017 before continuing to Washington State University to pursue his PhD in Physics.
Currently, he is working jointly under Dr Forbes and Dr Bose to re-derive the Tolman-Oppenheimer-Volkoff (TOV) equations for arbitrary rotation speeds, and is interested in investigating the mechanisms that cause neutron star glitching.
Chunde Huang¶
Chunde comes from China, where he got his bachelor's degree (Software Engineering) and master's degree (Computer Science) from Xiamen University; his major research was computer vision and machine learning. He worked as a professional software engineer for three years, with experience in embedded-system framework development (C++ middleware for Android OS), smart traffic surveillance (object detection and tracking) and distributed systems (content distribution networks). He began pursuing his PhD in physics at WSU in 2013.
Praveer Tiwari¶
Praveer comes from India, where he got his BSc-MSc (Research) degree in Physics from the Indian Institute of Science; his major research was on accretion disk modeling and gravitational wave data analysis. He began pursuing his PhD in physics at WSU in 2016. Since 2017, he has worked in Professor Jeffrey McMahon's group, learning different aspects of machine learning and computational condensed matter.
Currently, he is working jointly under Dr Forbes and Dr Bose. He is working on constraining the parameters of the neutron-star equation of state using gravitational wave detections. He is also working on employing novel machine learning techniques to characterize different aspects of gravitational wave detections.
Ted Delikatny¶
Ted simulates various phenomena related to quantum turbulence in superfluids, including the dynamics and interactions of vortices, solitons, and domain walls. Currently Ted is working to understand the phenomenon of self-trapping in BECs with negative-mass hydrodynamics.
Khalid Hossain¶
Khalid comes from Bangladesh, where he got his MS in theoretical physics from the University of Dhaka. Currently Khalid is simulating two-component superfluid mixtures: spin-orbit-coupled Bose-Einstein condensates (BECs) and mixtures of Bose and Fermi superfluids. In particular, the interest is in detecting the entrainment (dragging of one component with another) effect, which may shed light on the astrophysical mystery of neutron star glitching.
Saptarshi Rajan Sarkar¶
Saptarshi is currently looking at quantum turbulence in an axially symmetric Bose-Einstein condensate, in which a shockwave is created by a piston. The axially symmetric simulation, although missing some key features like Kelvin waves and vortex reconnections, has a considerably lower computational cost while retaining the shock behaviour. He is also interested in learning about the vortex filament model to look into quantum turbulence in detail.
Ryan Corbin¶
Ryan hails from the greater Seattle area. His current research focuses on DFT simulations of nuclei.
Popular Explanations
Here are resources that explain some of the physics at the core of our research in a fun way:
Atoms As Big As Mountains — Neutron Stars Explained (YouTube)
Eli Francis
Prerequisites: Models¶
This post describes various models that are useful for demonstrating interesting physics.
Thermodynamics¶
This post discusses how phase equilibrium is established. In particular, we discuss multi-component saturating systems which spontaneously form droplets at zero temperature. This was specifically motivated by the discussion of the conditions for droplet formation in Bose-Fermi mixtures 1801.00346 and 1804.03278. Specifically, the following conditions in 1804.03278:
\begin{gather} \mathcal{E} < 0, \qquad \mathcal{P} = 0; \tag{i}\\ \mu_b\pdiff{P}{n_f} = \mu_f\pdiff{P}{n_b}; \tag{ii}\\ \pdiff{\mu_b}{n_b} > 0 , \qquad \pdiff{\mu_f}{n_f} > 0, \qquad \pdiff{\mu_b}{n_b}\pdiff{\mu_f}{n_f} > \left(\pdiff{\mu_b}{n_f}\right)^2. \tag{iii} \end{gather}
1801.00346: https://arxiv.org/abs/1801.00346 1804.03278: https://arxiv.org/abs/1804.03278
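Condition (iii) amounts to positive-definiteness of the Hessian of the energy density with respect to the two densities. The sketch below checks these inequalities symbolically for a toy energy density $\mathcal{E}(n_b, n_f)$; the model and its coefficients are made up for illustration and are not taken from either paper.

```python
# Toy check of stability conditions (iii); the energy density is hypothetical.
import sympy as sp

nb, nf = sp.symbols("n_b n_f", positive=True)
E = nb**2 - 2*nb*nf + nf**2 + nb**3 + nf**3  # toy model, not from the papers

mu_b, mu_f = sp.diff(E, nb), sp.diff(E, nf)  # chemical potentials dE/dn_i
conditions = [
    sp.diff(mu_b, nb) > 0,
    sp.diff(mu_f, nf) > 0,
    sp.diff(mu_b, nb) * sp.diff(mu_f, nf) - sp.diff(mu_b, nf)**2 > 0,
]
point = {nb: 1, nf: 1}
print([bool(c.subs(point)) for c in conditions])  # [True, True, True] here
```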
Uncertainties¶
Here we discuss the python uncertainties package and demonstrate some of its features.
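A quick taste of what the post covers, shown here as a minimal example of the package's standard usage:

```python
# Values with uncertainties propagate through arithmetic and functions,
# with correlations tracked automatically.
from uncertainties import ufloat
from uncertainties.umath import sin

x = ufloat(1.00, 0.10)  # 1.00 +/- 0.10
y = ufloat(2.00, 0.05)

print(x + y)    # errors combined in quadrature: 3.00 +/- 0.11
print(x * y)    # relative errors combined
print(sin(x))   # propagation via derivatives
print(x - x)    # exactly 0 +/- 0, because correlations are tracked
```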
Galilean Covariance¶
In his "Dialogue Concerning the Two Chief World Systems", Galileo put forth the notion that the laws of physics are the same in any constantly moving (inertial) reference frame. Colloquially this means that if you are on a train, there is no experiment you can do to tell that the train is moving (without looking outside).
This post will explain formally what Galilean covariance means, clarify the difference between covariance and invariance, and elucidate the meaning of Galilean covariance in classical and quantum mechanics. In particular, it will explain the following result obtained by simply changing coordinates, which may appear paradoxical at first:
Consider a modern Lagrangian formulation of a classical object moving without a potential in 1D with coordinates $x$ and $\dot{x}$, and the same object in a moving frame with coordinates $X = x - vt$ and $\dot{X} = \dot{x} - v$. The Lagrangian and conjugate momenta in these frames are:
\begin{align} L[x, \dot{x}, t] &= \frac{m\dot{x}^2}{2}, & p &\equiv \pdiff{L}{\dot{x}} = m\dot{x},\\ L_v[X, \dot{X}, t] &= \frac{m(\dot{X}+v)^2}{2}, & P &\equiv \pdiff{L_v}{\dot{X}} = m(\dot{X}+v) = p. \end{align}
Perhaps surprisingly, the conjugate momentum $P$ is the same in the moving frame, whereas Galilean invariance implies that one should have a description in terms of $P = m\dot{X} = p - mv$. The latter description exists, but requires a somewhat non-intuitive addition to the Lagrangian of a total derivative.
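Concretely, the total derivative in question can be taken as $\diff{}{t}\bigl(mvX + \tfrac{1}{2}mv^2 t\bigr)$; this is a standard textbook manipulation, spelled out here for completeness:

\begin{align} L'_v &= L_v - \diff{}{t}\left(mvX + \frac{mv^2 t}{2}\right) = \frac{m\dot{X}^2}{2}, & P' &\equiv \pdiff{L'_v}{\dot{X}} = m\dot{X} = p - mv. \end{align}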
Kamiak Cluster at WSU¶
Here we document our experience using the Kamiak HPC cluster at WSU.
Resources¶
Kamiak Specific¶
Kamiak Users Guide: Read this.
Service Requests: Request access to Kamiak here and use this for other service requests (software installation, issues with the cluster, etc.)
Queue List: List of queues.
General¶
SLURM: Main documentation for the current job scheduler.
Lmod: Environment module system.
Conda: Package manager for python and other software.
This post describes the prerequisites that I will generally assume you have if you want to work with me. It also contains a list of references where you can learn these prerequisites. Please let me know if you find any additional resources particularly useful so I can add them for the benefit of others. This list is by definition incomplete - you should regard it as a minimum.
[email protected]
Many-body Quantum Mechanics¶
In this notebook, we briefly discuss the formalism of many-body theory from the point of view of quantum mechanics.
The meaning of r- and K-selection
Oecologia 48(2):260-264
DOI:10.1007/BF00347974
Gregory D Parry
Citations (126)
This paper catalogues several different dichotomies that have all been termed r- and K-selection. The status of the concept of r- and K-selection is discussed and a more restricted usage of the terms is recommended.
... In summary, the remarkable strategy of post-fire active functional types is to switch from a typical nutrient-stress-tolerating (Grime 1977) or K strategy (MacArthur and Wilson 1967; Parry 1981) between fires, to an opportunistic ruderal (Grime 1977) or r strategy (MacArthur and Wilson 1967; Parry 1981) immediately after a fire, nomenclature depending on which ecological theory is adopted. We are aware of minor shifts in strategy during plant ontogeny (Dayrell et al. 2018), but know virtually nothing about what underpins major shifts in strategy following a wildfire. ...
Strategies to acquire and use phosphorus in phosphorus-impoverished and fire-prone environments
PLANT SOIL
Hans Lambers
Patricia de Britto Costa
Greg Cawthray
Hong-Tao Zhong 钟宏韬
Background Unveiling the diversity of plant strategies to acquire and use phosphorus (P) is crucial to understand factors promoting their coexistence in hyperdiverse P-impoverished communities within fire-prone landscapes such as in cerrado (South America), fynbos (South Africa) and kwongan (Australia). Scope We explore the diversity of P-acquisition strategies, highlighting one that has received little attention: acquisition of P following fires that temporarily enrich soil with P. This strategy is expressed by fire ephemerals as well as fast-resprouting perennial shrubs. A plant's leaf manganese concentration ([Mn]) provides significant clues on P-acquisition strategies. High leaf [Mn] indicates carboxylate-releasing P-acquisition strategies, but other exudates may play the same role as carboxylates in P acquisition. Intermediate leaf [Mn] suggests facilitation of P acquisition by P-mobilising neighbours, through release of carboxylates or functionally similar compounds. Very low leaf [Mn] indicates that carboxylates play no immediate role in P acquisition. Release of phosphatases also represents a P-mining strategy, mobilising organic P. Some species may express multiple strategies, depending on time since germination or since fire, or on position in the landscape. In severely P-impoverished landscapes, photosynthetic P-use efficiency converges among species. Efficient species exhibit rapid rates of photosynthesis at low leaf P concentrations. A high P-remobilisation efficiency from senescing organs is another way to use P efficiently, as is extended longevity of plant organs. Conclusions Many P-acquisition strategies coexist in P-impoverished landscapes, but P-use strategies tend to converge. Common strategies of which we know little are those expressed by ephemeral or perennial species that are the first to respond after a fire. We surmise that carboxylate-releasing P-mobilising strategies are far more widespread than envisaged so far, and likely expressed by species that accumulate metals, exemplified by Mn, metalloids, such as selenium, fluorine, in the form of fluoroacetate, or silicon. Some carboxylate-releasing strategies are likely important to consider when restoring sites in biodiverse regions as well as in cropping systems on P-impoverished or strongly P-sorbing soils, because some species may only be able to establish themselves next to neighbours that mobilise P.
... In the present study, the r-selected decapods were represented by the Penaeoidea group (Dendrobranchiata). The K-selected ones were represented by the Pleocyemata, i.e., Caridea, Brachyura, Astacidae, and Achelata (Parry 1981;Fenwick 1984). The mean GS values of all groups within Dendrobranchiata and Pleocyemata were closely related, despite a little lower value for Dendrobrachiata, and exceeded the caridean shrimps that present a mean GS larger than that of the other groups (Table 2). ...
Patterns of genome size variation in caridean shrimps: new estimates for non-gambarelloides Synalpheus species
Isabela Ribeiro Rocha Moraes
Luis Miguel Pardo
Cristian Araya-Jaime
Antonio Castilho
Genome size (GS) or DNA nuclear content is considered a useful index for making inferences about evolutionary models and life history in animals, including taxonomic, biogeographical, and ecological scenarios. However, patterns of GS variation and their causes in crustaceans are still poorly understood. This study aimed to describe the GS of five Neotropical Synalpheus nongambarelloides shrimps (S. apioceros, S. minus, S. brevicarpus, S. fritzmueller, and S. scaphoceris) and compare the C-values of all Caridea Infraorder in terms of geography and phylogenetics. All animals were sampled in the coast of São Paulo State, Brazil and GS was assessed by flow cytometry analysis (FCA). The C-values ranged from 7.89 pg in S. apioceros to 12.24 pg in S. scaphoceris. Caridean shrimps had higher GS than other Decapoda crustaceans. The results reveal a tendency of obtaining larger genomes in species with direct development in Synalpheus shrimps. In addition, a tendency of positive biogeographical (latitudinal) correlation with Caridea Infraorder was also observed. This study provides remarkable and new protocol for FCA (using gating strategy for the analysis), which led to the discovery of new information regarding GS of caridean shrimps, especially for Neotropical Synalpheus, which represents the second-largest group in the Caridea Infraorder.
... To implement the ABC methodology, we remark that even if our knowledge on the value of κ is very poor, we usually have some information about an effective upper bound for κ, denoted K max , from the dynamics of the population that we model via the CBP. An example of this situation is the family of K-selected species (see [17]), which includes larger mammals such as elephants, horses, and primates, and whose species are relatively stable populations and produce relatively low numbers of offspring. For practical purposes and without loss of generality, throughout this paper we consider offspring laws with finite support. ...
Approximate Bayesian computation approach on the maximal offspring and parameters in controlled branching processes
Miguel González
Carmen Minuesa
Ines Del Puerto
Our purpose is to estimate the posterior distribution of the parameters of interest for controlled branching processes (CBPs) without prior knowledge of the maximum number of offspring that an individual can give birth to and without explicit likelihood calculations. We consider that only the population sizes at each generation and at least the number of progenitors of the last generation are observed, but the number of offspring produced by any individual at any generation is unknown. The proposed approach is twofold. Firstly, to estimate the maximum progeny per individual we make use of an approximate Bayesian computation (ABC) algorithm for model choice and based on sequential importance sampling with the raw data. Secondly, given such an estimate and taking advantage of the simulated values of the previous stage, we approximate the posterior distribution of the main parameters of a CBP by applying the rejection ABC algorithm with an appropriate summary statistic and a post-processing adjustment. The accuracy of the proposed method is illustrated by means of simulated examples developed with the statistical software R. Moreover, we apply the methodology to two real datasets describing populations with logistic growth. To this end, different population growth models based on CBPs are proposed for the first time.
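To make the second stage described in the abstract concrete, here is a minimal rejection-ABC sketch for a toy branching process. Everything in it (the Poisson offspring model, the uniform prior, the log summary statistic and the tolerance) is a hypothetical stand-in for illustration, not the authors' algorithm.

```python
# Minimal rejection-ABC sketch on a toy branching process (all choices hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def simulate(mean_offspring, generations=10, z0=5):
    """Each individual leaves Poisson(mean_offspring) children."""
    sizes = [z0]
    for _ in range(generations):
        n = sizes[-1]
        sizes.append(int(rng.poisson(mean_offspring, size=n).sum()) if n else 0)
    return np.array(sizes)

observed = simulate(1.3)            # pretend these are the observed generation sizes
summary = lambda z: np.log1p(z)     # summary statistic on the trajectory

accepted = [
    theta
    for theta in rng.uniform(0.5, 2.5, size=20000)  # draws from the prior
    if np.linalg.norm(summary(simulate(theta)) - summary(observed)) < 2.0
]
print(f"{len(accepted)} accepted; posterior mean ~ {np.mean(accepted):.2f}")
```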
... for representative taxa within each of the classes (see Methods in Supporting Information 1). The general structure of the exemplary AES closely mirrors seminal ideas on fundamental life-history strategies (i.e. r- and K-strategies; Parry, 1981; Pianka, 1970; Stearns, 1976, 1977) as well as the principal trade-offs between growth, survival and reproduction that have been proposed to structure plant and animal life histories according to the dynamic energy budget theory and pace-of-life syndromes (Capdevila et al., 2020; Healy et al., 2019). ...
Towards an animal economics spectrum for ecosystem research
Robert R Junker
Jörg Albrecht
Marcel Becker
Matthias Schleuning
The framework of the plant economics spectrum advanced our understanding of plant ecology and proved as a unifying concept across plant taxonomy, growth forms and biomes. Similar approaches for animals mostly focus on linking life‐history and metabolic theory, but not on their application in ecosystem research. To fill this gap, we propose the animal economics spectrum (AES) based on broadly available traits that describe organismal size, biological times and rates. To exemplify the feasibility and general usefulness of constructing the AES, we compiled data on adult and offspring body mass, life span, age at first reproduction, reproductive and metabolic rate of 98 terrestrial taxa from seven selected animal classes and mapped these taxa into an exemplary quantitative trait space. The AES consists of two principal axes related to reproductive strategies and the pace of life; both axes are linked by animal metabolism. The AES thus closely mirrors seminal ideas on fundamental life‐history strategies and more recent discoveries and developments in the fields of life‐history and metabolic theory. Furthermore, we find associations between the positions of animals within the AES and taxonomy, thermoregulation and body plan. The AES shows that key dimensions describing different ecological strategies of animals can be depicted with functional traits that are relatively easily accessible for a broad spectrum of animal taxa. We suggest future steps towards an application of the AES in ecosystem research aiming at the understanding of ecological processes and ecosystem functions. Additionally, we urge for databases that compile comparable functional traits for a large proportion of animals but also for further groups of organisms with the ultimate goal to map the economics spectrum of life. The framework of the AES will be relevant for understanding ecological processes across animal taxa at species, community and ecosystem level. We further discuss how it can facilitate predictions on how the functional composition and diversity of animal communities can be affected by global change.
... The very high fecundity of G. decadactylus in Gabonese waters resembles that found in Nigeria, where it ranged from 58 001 to 279 277 oocytes per female (mean 168 639 oocytes per female; Emmanuel et al. 2010). Very high fecundity associated with an early first size at maturity identifies G. decadactylus as an r-strategist (Pianka 1970;Nichols et al. 1976;Parry 1981). As may be expected, fecundity is strongly related to gonad size and body size in G. decadactylus. ...
Reproductive biology of the lesser African threadfin Galeoides decadactylus in Gabon, Gulf of Guinea
AFR J MAR SCI
Jean Daniel Mbega
Oumar Sadio
Jean Hervé Mve Beh
François Le Loc'h
The lesser African threadfin Galeoides decadactylus (family Polynemidae) is one of the most captured marine fish species in Central Africa. This study examines aspects of the reproductive biology of G. decadactylus in the Libreville area of Gabon. Fish caught with encircling gillnets and bottom gillnets were collected from May 2017 to May 2018 from artisanal fishermen. A total of 776 specimens were studied, comprising 401 females (14-36 cm total length [TL]), 347 males (13-28 cm TL), and 28 individuals of indeterminate sex (12-16 cm TL). Monthly monitoring of gonadosomatic ratio, condition factor and sexual maturity stages revealed that G. decadactylus reproduces continuously but has two slight peak periods: one in the long rainy season and the other in the short rainy season. The species is protandrous, with sizes at first sexual maturity of 17.7 cm TL for males and 18.7 cm for females. Mature individuals largely dominated the catches of small-scale fishers in Gabon. Mean absolute fecundity of females was 179 447 (SD 107 240) oocytes, and mean relative fecundity was 848 (SD 323) oocytes g-1. This study provides fisheries managers with crucial knowledge, such as size at sexual maturity, that could be used as a basis for sustainable management of G. decadactylus stocks in Gabon using minimum size limits.
... r-selected species tend to have faster growth rates and are relatively short-lived. Albeit a simplified view of natural selection (Stearns, 1977;Parry, 1981), one should be able to quantify the effects of density-dependent selection on the reproductive strategies of species in an r/K continuum (Mueller et al., 1991;Reznick et al., 2002). ...
Host specificity and the reproductive strategies of parasites
Jean-François Doherty
Marin Milotic
Antoine Filion
Alan Eriksson
Environmental stability can have profound impacts on life history trait evolution in organisms, especially with respect to development and reproduction. In theory, free-living species, when subjected to relatively stable and predictable conditions over many generations, should evolve narrow niche breadths and become more specialised. In parasitic organisms, this level of specialisation is reflected by their host specificity. Here, we tested how host specificity impacts the reproductive strategies of parasites, a subject seldomly addressed for this group. Through an extensive review of the literature, we collated a worldwide dataset to predict, through Bayesian multilevel modelling, the effect of host specificity on the reproductive strategies of parasitic copepods of fishes or corals. We found that copepods of fishes with low host specificity (generalists) invest more into reproductive output with larger clutch sizes, whereas generalist copepods of corals invest less into reproductive output with smaller clutch sizes. The differences in host turnover rates through an evolutionary timescale could explain the contrasting strategies across species observed here, which should still favour the odds of parasites encountering and infecting a host. Ultimately, the differences found in this study reflect the unique evolutionary history that parasites share both intrinsically and extrinsically with their hosts.
... Reproductive strategy assists in the colonisation of new environments and habitats. R strategist species focus primarily on the production of large amounts of juveniles, fast growth and so are able to colonise newly available habitat faster and proliferate in abundance compared to K strategist species (Parry, 1981). Furthermore, ascidians, which often colonise hard substrates, are hermaphroditic and predominantly reproduce through sexual cross-fertilisation and the production of free-swimming larvae (Rius et al., 2010). ...
Biology and ecology of deployed shellfish habitats in the Swan-Canning Estuary
Charles Josef Maus
Large extents of shellfish reefs have become degraded around the world as a result of anthropogenic activities, to the point where such reefs are functionally extinct in some regions. Due to the ecosystem services provided by these biogenic habitats, there has recently been a concerted effort to restore shellfish reefs, particularly in Australia. While oysters have traditionally been used as a candidate species, the Mediterranean Mussel Mytilus galloprovincialis is gaining popularity and provides a similar suite of ecosystem services to oysters. As part of a pilot program, shellfish habitats, each comprising translocated M. galloprovincialis seeded onto 100 wooden stakes, were deployed at three sites in Melville Water in the Swan-Canning Estuary (Western Australia). The aims of this study were to: 1) investigate the mortality, body condition and growth of the translocated M. galloprovincialis; 2) compare the characteristics of fish fauna at the shellfish reef habitats and nearby unstructured (control) habitats; and 3) determine the benthic macroinvertebrate and tunicate species associated with the shellfish habitat. The mortality of M. galloprovincialis was high at all three sites. This was attributed to poor environmental conditions in offshore waters of Melville Water, compounded by stress associated with translocation, their spawning activity, and fouling by ascidians. Seasonally adjusted von Bertalanffy growth models best explained the growth of M. galloprovincialis, and growth was rapid, with individuals attaining ~50 mm within their first year. This is likely due to the high phytoplankton availability in the Swan-Canning Estuary. The shellfish habitats harboured a significantly different fish faunal composition compared to nearby unstructured habitats (sandy areas), with many species observed only at shellfish sites or in greater densities. The increased abundances of zoobenthivores on the shellfish habitats suggest they are utilising the invertebrate prey communities associated with the structure as a food source. The invertebrate community varied spatially among the three sites and over time. A suite of non-native ascidians rapidly colonised the stakes along with the mussels, which, in turn, supported many small crustaceans. Given the importance of shellfish restoration globally, and the aim to undertake large-scale projects to provide such habitats in south-western Australian estuaries, the results of this study will increase the understanding of the biology of M. galloprovincialis and help elucidate how faunal communities respond to and utilise shellfish habitats. The results of this pilot study will assist in the planning of future mussel reef restoration projects, in particular those under development in southern Australia.
... Under more competitive and population dense conditions, K-selection is said to occur. Subsequent focus was on how these reproductive strategies coalesce with other differences (e.g., length of lifespan, litter size, rate of maturation) to form broad LH strategies that can be used to describe wide-ranging differences between species (Biro & Stamps, 2008;Parry, 1981;Pianka, 1970;Promislow & Harvey, 1990). K-selected, in contrast to r-selected, species mature later, have a longer lifespan, have smaller litters, and exhibit greater parental investment. ...
Life history strategy and intelligence: Commonality and personality profile differences
PERS INDIV DIFFER
Curtis S Dunkel
Dimitri Van der Linden
Richard H. Holler
Previous work on individual and group differences in life history (LH) strategy posited a central role for intelligence. Yet, empirical results failed to support the hypothesized positive association between a slow LH strategy and intelligence. The current investigation (N = 102) represents an attempt to not only re-examine the LH/intelligence hypothesis, but also to conduct an in-depth examination on how LH strategy and intelligence are expressed in personality profiles. The California Adult Q-sort measure of slow LH strategy exhibited a significant positive correlation with performance (r = 0.32), verbal (r = 0.34), and full (r = 0.38) IQ test scores. Additional findings suggest that a slow LH strategy and intelligence both include personality characteristics reflecting ambition and, possibly, social perceptiveness. Alternatively, intelligence is more closely aligned with a personality profile including intellectual ability, independence, and creativity while LH strategy was uniquely associated with interpersonal warmth, conformity, and reticence.
... MacArthur and Wilson (1967) coined the terms r strategy and K strategy to describe selection for rapid population growth in uncrowded populations and selection for competitive ability in crowded populations, respectively. Over time, the meaning of these terms has broadened (Parry, 1981) and, according to the broader concept, B. sessilis is an r strategist, while B. attenuata is a K strategist. We do not know the physiological pattern of allocating P among foliar P fractions that allows species to exhibit a particular life history strategy and efficient use of P in contrasting low-P environments. ...
Foliar nutrient-allocation patterns in Banksia attenuata and Banksia sessilis differing in growth rate and adaptation to low-phosphorus habitats
ANN BOT-LONDON
Zhongming Han
Jianmin Shi
Jiayin Pang
Background and aims: Phosphorus (P) and nitrogen (N) are essential nutrients that frequently limit primary productivity in terrestrial ecosystems. Efficient use of these nutrients is important for plants growing in nutrient-poor environments. Plants generally reduce foliar P concentration in response to low soil P availability. We aimed to assess ecophysiological mechanisms and adaptive strategies for efficient use of P in Banksia attenuata (Proteaceae), naturally occurring on deep sand, and B. sessilis, occurring on shallow sand over laterite or limestone, by comparing allocation of P among foliar P fractions. Methods: We carried out pot experiments with slow-growing B. attenuata, which resprouts after fire, and faster-growing opportunistic B. sessilis, which is killed by fire, on substrates with different P availability using a randomised complete block design. We measured leaf P and N concentrations, photosynthesis, leaf mass per area, relative growth rate, and P allocated to major biochemical fractions in B. attenuata and B. sessilis. Key results: The two species had similarly low foliar total P concentrations, but distinct patterns of P allocation to P-containing fractions. The foliar total N concentration of B. sessilis was greater than that of B. attenuata on all substrates. The foliar total P and N concentrations in both species decreased with decreasing P availability. The relative growth rate of both species was positively correlated with concentrations of both foliar nucleic acid P and total N, but there was no correlation with other P fractions. Faster-growing B. sessilis allocated more P to nucleic acids than B. attenuata did, but other fractions were similar. Conclusions: The nutrient-allocation patterns in faster-growing opportunistic B. sessilis and slower-growing B. attenuata revealed different strategies in response to soil P availability which matched their contrasting growth strategy.
Developmental dynamics and survival characteristics of the common horse bot flies (Diptera, Gasterophilidae, Gasterophilus) in desert steppe
VET PARASITOL
Ke Zhang
Zhongrui Ju
Hongjun Chu
The genus Gasterophilus (Diptera, Gasterophilidae) is an obligate parasite of the equine family that causes widespread myiasis in desert steppe. Based on four common naturally excreted Gasterophilus larvae collected systematically in the Karamaili Ungulate Nature Reserve from March to September 2021, this paper studies the population dynamics and ontogenetic laws of horse flies, and discusses the coexistence pattern and population dynamics prediction of horse flies. The results showed that the Gasterophilus larvae had an obvious concentrated development period, and the times of the population peaks differed: the earliest was G. nigricornis (late March), followed by G. pecorum-I (mid-April), G. nasalis (late April), G. intestinalis (early May) and G. pecorum-II (mid-August). The order of the development threshold temperatures, C_nigricornis < C_pecorum-I ≤ C_pecorum-II < C_nasalis < C_intestinalis, is consistent with the peak order of the different larval populations. The life history survival rates (L) were as follows: L_nigricornis (83.97%) ≥ L_intestinalis (81.25%) > L_nasalis (72.42%) ≥ L_pecorum-II (71.65%) > L_pecorum-I (39.23%). This study combined indoor experiments and field surveys to reveal the development of horse fly populations with different life strategies in desert grasslands. Based on the different development threshold temperatures of several horse flies, the staggered population dynamics of Gasterophilus form continuous infection stress on the host. In addition, G. pecorum exhibited a univoltine bimodal population distribution in this area and led to two high-intensity host infections, which is one of the important reasons why it has become the dominant species of myiasis in desert steppe.
Life historical consequences of natural selection
Madhav Gadgil
William H. Bossert
A Primer of Population Biology.
M. G. Morris
Edward O. Wilson
SIZE AND ENVIRONMENTAL PREDICTABILITY FOR SALAMANDERS
Virginia C. Maiorana
LIFE-HISTORY VARIATION IN POA ANNUA
Richard Law
A. D. Bradshaw
Phil Putwain
A GENERAL THEORY OF CLUTCH SIZE
Martin L. Cody
It is possible to think of organisms as having a certain limited amount of time or energy available for expenditure, and of natural selection as that force which operates in the allocation of this time or energy in a way which maximizes the contribution of a genotype to following generations. This manner of treatment of problems concerning the adaptation of phenotypes is called the "Principle of Allocation" (Levins and MacArthur, unpublished), and one of its applications might be the formulation of a general theory to account for clutch size in birds. At this stage we will assume that clutch size is a hereditary phenotypic characteristic which can be affected to a greater or lesser extent by the prevailing environmental conditions and which exhibits the normal variability of such characteristics. Lack (1954) discusses the validity of several hypotheses which attempt to account for clutch size and its variation under different circumstances and conditions, all of which were rejected in favor of his now widely accepted theory that clutch size is adapted to a limited food supply. This paper is an attempt to show that this and other existing hypotheses when taken singly are inadequate in some respect to account for all the data, that each holds for some particular set of conditions, and that each is but a part of the complete explanation. The theories will be dealt with individually and it will be shown that as environment varies so will the factors which determine clutch size.
On r- and K-selection
E.R. Pianka
The Theory of Island Biogeography
Robert H. MacArthur
This book had its origin when, about five years ago, an ecologist (MacArthur) and a taxonomist and zoogeographer (Wilson) began a dialogue about common interests in biogeography. The ideas and the language of the two specialties seemed initially so different as to cast doubt on the usefulness of the endeavor. But we had faith in the ultimate unity of population biology, and this book is the result. Now we both call ourselves biogeographers and are unable to see any real distinction between biogeography and ecology.
The evolution of life-cycle strategies in freshwater gastropods
MALACOLOGIA
Peter Calow
Impact of scallop dredging
Affordances are Signs
January 2007 · TripleC
John Pickering
Peirce and Whitehead share a common project: to restrict the over-extension of reductionism, to show how matter must be sensate and to create an ontology of process and subjectivity. This article claims that biosemiotics can assist this project. Moreover, it shows that the concept of affordance is a means to produce a theory of causation that embraces physical, natural and cultural levels of order.
Rough sets determined by tolerances
September 2014 · International Journal of Approximate Reasoning
Jouni Järvinen
Sándor Radeleczki
We show that for any tolerance $R$ on $U$, the ordered sets of lower and upper rough approximations determined by $R$ form ortholattices. These ortholattices are completely distributive, thus forming atomistic Boolean lattices, if and only if $R$ is induced by an irredundant covering of $U$, and in such a case, the atoms of these Boolean lattices are described. We prove that the ordered set $\mathit{RS}$ of rough sets determined by a tolerance $R$ on $U$ is a complete lattice if and only if it is a complete subdirect product of the complete lattices of lower and upper rough approximations. We show that $R$ is a tolerance induced by an irredundant covering of $U$ if and only if $\mathit{RS}$ is an algebraic completely distributive lattice, and in such a situation a quasi-Nelson algebra can be defined on $\mathit{RS}$. We present necessary and sufficient conditions which guarantee that for a tolerance $R$ on $U$, the ordered set $\mathit{RS}_X$ is a lattice for all $X \subseteq U$, where $R_X$ denotes the restriction of $R$ to the set $X$ and $\mathit{RS}_X$ is the corresponding set of rough sets. We introduce the disjoint representation and the formal concept representation of rough sets, and show that they are Dedekind--MacNeille completions of $\mathit{RS}$.
Dynamic programming, Fermat's principle, and the eikonal equation for anisotropic media
March 1974 · Journal of the Optical Society of America
J. J. Brandstatter
In this note, we apply the concept of dynamic programming to derive the eikonal equation from Fermat's principle of least time for anisotropic media. The derivation for isotropic media was given by Kalaba and the result of the present paper is a natural extension of his treatment. The key to the derivation is Bellman's principle of optimality, which is stated below. First, we derive the eikonal equation for isotropic media, for three dimensions, because Kalaba's derivation was restricted to two dimensions. After this we establish the result for anisotropic media. We follow Kalaba's derivation closely.
Renormalization group consistency and low-energy effective theories
Jens Braun
Marc Leonhardt
Jan M. Pawlowski
Low-energy effective theories have been used very successfully to study the low-energy limit of QCD, providing us with results for a plethora of phenomena, ranging from bound-state formation to phase transitions in QCD. These theories are consistent quantum field theories by themselves and can be embedded in QCD, but typically have a physical ultraviolet cutoff that restricts their range of validity. Here, we provide a discussion of the concept of renormalization group consistency, aiming at an analysis of cutoff effects and regularization-scheme dependences in general studies of low-energy effective theories. For illustration, our findings are applied to low-energy effective models of QCD in different approximations including the mean-field approximation. More specifically, we consider hot and dense as well as finite systems and demonstrate that violations of renormalization group consistency affect significantly the predictive power of the corresponding model calculations.
CHARGED PARTICLE TRACKS IN POLYMERS NO. 4: CRITERION FOR TRACK REGISTRATION
Eugene V. Benton
A new criterion for track registration in polymer charged particle detectors is proposed. Based on the concept of the restricted energy loss rate, it indicates that only energy transfer collisions of 1.0 plus or minus 0.2 keV and less are important in track formation. For track registration in cellulose nitrate and Lexan, a charged particle must have a restricted energy loss rate of 1.1 and 3700 MeV sq cm/g, respectively. This threshold appears to be sharp in both materials. (Author)
Best proximity pair and coincidence point theorems for nonexpansive set-valued maps in Hilbert spaces
This paper is concerned with the best proximity pair problem in Hilbert spaces. Given two subsets $A$ and $B$ of a Hilbert space $H$ and the set-valued maps $F:A \to 2^{B}$ and $G:A_0 \to 2^{A_0}$, where $A_0 = \{x \in A : \|x-y\| = d(A,B) \text{ for some } y \in B\}$, best proximity pair theorems provide sufficient conditions that ensure the existence of an $x_0 \in A$ such that
$$d(G(x_0),F(x_0))=d(A,B).$$
Best proximity pair
coincidence point
nonexpansive map
Hilbert space
Receive Date: 10 February 2010
Revise Date: 27 July 2010
Accept Date: 27 July 2010
Amini-Harandi, A. (2011). Best proximity pair and coincidence point theorems for nonexpansive set-valued maps in Hilbert spaces. Bulletin of the Iranian Mathematical Society, 37(4), 229-234.
Archived Talks
Summaries of past talks are available. They are listed here in reverse chronological order. You can also browse past talks by subject through the tags page.
Expander Graphs
Delivered by Frieda Rong on Wednesday November 29, 2017
In this talk, we'll study graphs which are "sparse" yet "highly connected". Known as expanders, these graphs exhibit interesting properties which can be viewed from a rich array of analytic, combinatorial, and probabilistic perspectives. Contributions of expanders range from the proof of important results in complexity theory to derandomization of algorithms to the construction of robust networks and cryptographic hash functions. We'll see one application to coding theory, where expanders can be used to provide asymptotically "good" error-correcting codes with linear time encoding and decoding.
Prerequisites: linear algebra (familiarity with eigenvalues and eigenvectors).
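As a rough numeric companion (a minimal sketch, assuming NumPy; the cycle and complete graphs and their sizes are illustrative choices, not examples from the talk), the spectral gap that makes a d-regular graph a good expander can be checked directly:

```python
import numpy as np

def spectral_gap(adj, degree):
    """Gap between the degree and the second-largest adjacency eigenvalue;
    a large gap means the graph is "highly connected" despite being sparse."""
    eigs = np.linalg.eigvalsh(adj)  # ascending order for symmetric matrices
    return degree - eigs[-2]

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

n = 20
print(spectral_gap(cycle(n), 2))                         # tiny gap: cycles expand poorly
print(spectral_gap(np.ones((n, n)) - np.eye(n), n - 1))  # gap of n: K_n expands maximally
```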
Social Choice Functions
Delivered by Sidhant Saraogi on Wednesday November 8, 2017
Social choice functions help aggregate the opinions of many agents. Social choice problems arise in examples as varied as citizens voting in an election, committees deciding on alternatives, and independent computational agents making collective decisions. We aim to study social choice theory through the lens of boolean functions, and study concepts such as influence and noise stability, which provide analogues for natural concepts in the study of social choice. We will finish off by looking at the famous Arrow's Theorem often popularly stated as "the only voting method that isn't flawed is a dictatorship".
Constructive analysis
Delivered by Fengyang Wang on Wednesday November 1, 2017
Constructive mathematics, as the name would suggest, is centered on the philosophy that mathematical proofs should be able to be turned into algorithms. We will contextualize constructive approaches to analysis, roughly following Bridges and Vîţă. This talk has no formal prerequisites beyond an elementary understanding of the real numbers and the usual concept of completeness. In particular, no logical background is assumed; intuitionistic logic will be overviewed in the talk. We will finish with a discussion of the ramifications of completeness of the real numbers.
A summary of this talk is available here.
Automatic sequences
Delivered by Laindon Burnett on Wednesday October 25, 2017
This talk will begin with a brief overview behind the theory of words in mathematics as well as the theory of finite automata in theoretical computer science. After this, we will define what an automatic sequence is, prove some fundamental theorems about them, and investigate some of their more intriguing properties. The majority of information presented comes from the text "Automatic Sequences: Theory, Applications, Generalisations" by Jean-Paul Allouche and the University of Waterloo's own Jeffrey Shallit, from the department of Computer Science.
The speaker has provided a PDF document covering the same content as this talk.
A 3/2-approximation algorithm for the stable marriage problem with ties
Delivered by Felix Bauckholt on Wednesday October 4, 2017
I will introduce the Stable Marriage Problem, and its NP-complete cousin, the Stable Marriage Problem with ties. I will present a simplified version of Király's 3/2-approximation algorithm, which achieves the best approximation ratio known.
The slides for this presentation are available.
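For readers who want a concrete baseline first, here is a minimal Python sketch of the classic Gale-Shapley algorithm for the tie-free problem (not Király's 3/2-approximation from the talk; the toy preference lists are invented for illustration):

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic stable matching without ties; men propose in preference order."""
    free_men = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}           # next woman to propose to
    engaged = {}                                      # woman -> current partner
    rank = {w: {m: i for i, m in enumerate(prefs)}    # lower rank = preferred
            for w, prefs in women_prefs.items()}
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:        # w trades up
            free_men.append(engaged[w])
            engaged[w] = m
        else:
            free_men.append(m)                        # w rejects m
    return engaged

print(gale_shapley({"a": ["x", "y"], "b": ["x", "y"]},
                   {"x": ["b", "a"], "y": ["a", "b"]}))  # {'x': 'b', 'y': 'a'}
```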
Markov Chain Monte Carlo
Delivered by Jacob Jackson on Wednesday October 4, 2017
The talk will introduce Markov chain Monte Carlo methods as a means of sampling from a distribution. The Metropolis-Hastings algorithm will be discussed as well as applications of Markov chain Monte Carlo for Bayesian inference and optimization.
Will expect familiarity with basic probability theory, especially conditional probability.
The slides for this talk are available at Jacob Jackson's website.
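For a concrete feel for the method, here is a minimal random-walk Metropolis sketch (not taken from the slides; the Gaussian proposal and the standard-normal target are illustrative assumptions):

```python
import math
import random

def metropolis_hastings(log_density, x0, steps, scale=1.0):
    """Random-walk Metropolis for a 1-D target given only its unnormalized
    log-density; with a symmetric proposal the Hastings correction vanishes."""
    samples, x = [], x0
    for _ in range(steps):
        proposal = x + random.gauss(0.0, scale)
        log_ratio = log_density(proposal) - log_density(x)
        if random.random() < math.exp(min(0.0, log_ratio)):  # accept w.p. min(1, ratio)
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, known only up to its normalizing constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, steps=10_000)
print(sum(draws) / len(draws))  # sample mean, should land near 0
```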
Cantor Set and Dynamical Systems
Delivered by James Bai on Friday March 31, 2017
The talk will begin with the construction of the most commonly used ternary Cantor set. We will then discuss the common properties of the Cantor set and methods of evaluating the size of the set. Then, depending on time, a brief introduction will be given to dynamical systems and chaos.
Group Theoretic Attacks on the Enigma Cipher
Delivered by Laindon Burnett on Friday March 31, 2017
Metric embeddings and dimensionality reduction
Delivered by Frieda Rong on Friday March 31, 2017
In this talk, we consider embeddings which preserve the pairwise distances of a set of points. It is often useful to find mappings from one high dimensional space to a lower dimensional space that preserve the geometry of the points. One source of applications is in streaming large amounts of data, for which storage is costly and/or impractical. However, the study of such embeddings has also inspired developments in the design of approximation algorithms and compressed sensing.
At the crux of the talk is the remarkable Johnson-Lindenstrauss lemma. This fundamental result shows that for Euclidean spaces, it is possible to achieve significant dimensionality reduction of a set of points while approximately preserving the pairwise distances. An elementary proof will be given, along with subsequent speed improvements with sparse projections and an interesting use of the Fourier transform. We will also discuss applications of the lemma to the fields mentioned above.
Lie Groups and Special Relativity
Delivered by Mohamed El Mandouh on Friday March 24, 2017
Brachistochrone
Delivered by Manas Joshi on Friday March 24, 2017
In the late 1600s, Johann Bernoulli posed a problem to challenge the world's greatest mathematicians. The problem, known as the Brachistochrone, garnered considerable interest and was eventually solved by Johann and Jacob Bernoulli and Newton, amongst others. In this talk, we will analyze the solution given by Leonhard Euler (and Lagrange), which started a new field of calculus, called the Calculus of Variations. We will prove its most fundamental equation, the Euler-Lagrange equation, and see how it can be applied to solve the Brachistochrone problem.
Universal Property of Quotients
Delivered by Lirong Yang on Friday March 17, 2017
In this talk, we generalize universal property of quotients (UPQ) into arbitrary categories. UPQs in algebra and topology and an introduction to categories will be given before the abstraction. As in the discovery of any universal properties, the existence of quotients in the category of sets and that of groups will be presented.
If you have not yet been exposed to group theory, please read Monoids and Groups for an introduction.
Inverse Galois Theory
Delivered by David Liu on Friday March 17, 2017
Almost 200 years after Évariste Galois's death, there is one question about Galois groups — the symmetries of the roots of polynomials — that still remains unsolved. This is the Inverse Galois Problem — whether every group is the Galois group of a Galois extension of the rational numbers. In this talk, I will give an overview of the progress that has been made, the approaches that mathematicians are taking, and directions for further research.
A note on the requirements for this talk: It is recommended that you be comfortable with groups, fields, field extensions, automorphisms, and basic Galois theory. This requirement can be met by any of the following suggested alternatives:
having taken PMATH 347, and currently taking PMATH 348;
having taken PMATH 347, and coming for the prerequisite knowledge presentation (see below);
having equivalent knowledge to PMATH 347 and PMATH 348;
otherwise, if you are currently taking or have taken MATH 146, this talk is still accessible, but it is highly recommended that you read An Introduction to Galois Theory by Dan Goodman, and watch this 18-minute lecture by Matthew Salomone (after reading the article) and optionally also come for the prerequisite knowledge presentation
A prerequisite knowledge presentation about Galois theory will be given at 17:00, prior to the beginning of the talk at 17:30. This presentation will last about 20 minutes. If you are unfamiliar with the material, it is recommended that you read some of the linked material above in addition to attending this presentation. Attending this presentation is optional. The reference material used for this presentation is the Galois Theory document.
Bloom Filters and Other Probabilistic Data Structures
Delivered by Luthfi Mawarid on Friday March 3, 2017
With the advent of big data, the ability to process large volumes of data is becoming increasingly important. For instance, when dealing with large data sets, we may want to perform simple operations such as counting the number of unique elements or checking whether or not an element is present in the set. While there are deterministic data structures, such as hash tables, that can perform these quickly, the sheer size of the data involved makes their use largely impractical and unscalable.Instead, we may want to trade-off some accuracy in our answers in exchange for greater space efficiency and ease of parallelization. For this, we introduce the concept of probabilistic data structures.
In this talk, we will mainly focus on Bloom filters, which are commonly used to test set membership and speed up data access. We will explore its main use cases, its implementation details, and the mathematics behind it. If time permits, I will also talk about the count min-sketch, used for frequency counting, and/or the HyperLogLog counter, used for cardinality estimation.
This talk will assume basic knowledge of probability.
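As a hedged illustration of the core idea, the sketch below implements a toy Bloom filter in Python; the bit-array size, hash count, and salted-SHA-256 hashing are illustrative choices rather than a production design:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k salted hashes over an m-bit table.
    Membership queries can give false positives but never false negatives."""
    def __init__(self, m=1024, k=5):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indexes(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
bf.add("waterloo")
print("waterloo" in bf)  # True
print("toronto" in bf)   # False with high probability
```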
Total Functional Programming
Delivered by Adam Hofmann on Friday March 3, 2017
Total functional programming is a paradigm of functional programming with the additional restriction that all functions must be total; that is, every function is defined for every element of its domain.
The immediate benefits of total functional programming are clear; every function in a total language is guaranteed to terminate and not cause a runtime error. This talk will cover the basics of writing and understanding functions that are total, as well as benefits that come from having totality.
Also in the scope of this talk is the major trade offs of total functional programming, including Turing completeness and things like input and output that seem unattainable. This is accompanied by strategies to write functions with non-trivial proofs of termination such that totality is guaranteed and verifiable by the interpreter.
General Secure Multi-Party Computation from any Linear Secret-Sharing Scheme
Delivered by Zihao Zhu on Friday February 17, 2017
As more and more sensitive data gets digitized, there is a need to ensure privacy and reliability of the data, especially in the face of adversarial parties who attempt to corrupt it or gain unwanted access to sensitive secrets.
In many instances such as online gambling, bidding, and even Google's targeted advertisements, a client wants to be able to take inputs from multiple sources (for example, auction bids) and produce an output (for example, the highest bidder) without revealing any information about the other inputs. We will use such scenarios, as well as more cryptography-related ones, to motivate Multi-Party Computation as a method to compute on encrypted data. With MPC, we will quickly see its limitations over insecure channels, and first develop secret sharing schemes (specifically linear secret sharing schemes) such as Shamir's scheme, and soon after, verifiable secret sharing schemes.
We will introduce the different types of adversarial structures and explore both the robustness and limitations of secret sharing schemes against them.
Finally, we will show that all Linear Secret Sharing Schemes can be constructed to be verifiable. We will explore the consequences of this and discuss techniques in their construction.
Prereqs: Math136 used in proofs
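Since Shamir's scheme is the running example, here is a minimal sketch of it over a prime field (the prime, the threshold k, and the share count n are illustrative assumptions; it needs Python 3.8+ for modular inverses via pow):

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is done mod P

def make_shares(secret, k, n):
    """Hide `secret` in a random degree-(k-1) polynomial; shares are points."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=42, k=3, n=5)
print(reconstruct(shares[:3]))  # 42, recoverable from any 3 of the 5 shares
```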
Bitcoin and the Blockchain
Delivered by Ben Zhang on Friday February 17, 2017
In this talk, we will learn about the principles behind the Double Spend Problem, the Blockchain, and explore the various ways this technology is being used today.
Transferring money in the physical world is easy. However, the transfer of virtual currency is not as easy to validate. The Double Spend Problem has long stood in the way of a free (libre et gratis) virtual currency, and the world found a need for a third party (usually in the form of large banks) to validate all virtual transactions.
In 2008, a mysterious individual known as Satoshi Nakamoto published a paper titled "Bitcoin: A Peer-to-Peer Electronic Cash System" which describes a system for virtual transactions to be validated through the distributed computing power of the community. The system, known as the Blockchain, uses hashing and non-deterministic mathematics to protect itself from Double Spending attacks. Nakamoto's paper led to the creation of free online currencies such as Bitcoin, Litecoin, and Ethereum, which are used in marketplaces today.
Prerequisite Information: Middle school math.
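To make the hashing idea concrete, here is a toy Python sketch of a proof-of-work hash chain; the JSON block layout and the leading-zero difficulty rule are simplifications for illustration, not Bitcoin's actual format:

```python
import hashlib
import json

def mine_block(data, prev_hash, difficulty=4):
    """Search for a nonce whose block hash starts with `difficulty` zero hex
    digits: a miniature version of Bitcoin's proof of work."""
    nonce = 0
    while True:
        block = json.dumps({"data": data, "prev": prev_hash, "nonce": nonce})
        digest = hashlib.sha256(block.encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return block, digest
        nonce += 1

genesis, h0 = mine_block("genesis", "0" * 64)
block1, h1 = mine_block("Alice pays Bob 5", h0)
print(h1)  # tampering with the genesis data changes h0 and invalidates block1
```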
Delivered by Michael Pang on Friday February 10, 2017
Introduction to quantum computing. We start off by tackling a classical problem via deterministic and probabilistic computation and then motivate a quantum model of computation. Along the way we lay out some of the mathematics needed to describe quantum computation and how it corresponds to key concepts in quantum mechanics such as interference and superposition.
Delivered by Eddie Onochie on Friday February 10, 2017
In this talk we will talk about ordinals and the philosophy of infinity. We will define what ordinals are and how to construct them. We will also define transfinite recursion and use the axiom of choice to give meaning to the "cardinality" of a set.
$p$-adic Numbers
Delivered by Akshay Tiwary on Friday February 3, 2017
In this talk we will discover an alternate way to define the "size" of a rational number. We will define the $p$-adic absolute value and see that this absolute value is non-Archimedean, and that this fact leads to a geometry that is very different from the geometry of the real numbers (every $p$-adic triangle is isosceles!). In addition I will show you some fun ideas from $p$-adic numbers (which I will denote by $\mathbb{Q}_p$), like the fact that $\sum_{n = 1}^{\infty} 3^n$ converges, and how every element of $\mathbb{Q}_p$ has a base $p$ expansion. This is meant to be a very leisurely talk with almost no prerequisites and it should be fun, so please come for it!
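A small worked instance of the convergence claim: under the 3-adic absolute value we have $|3^n|_3 = 3^{-n} \to 0$, so the partial sums form a Cauchy sequence and the usual geometric series formula holds in $\mathbb{Q}_3$:

$$\sum_{n=1}^{\infty} 3^n = \frac{3}{1-3} = -\frac{3}{2}.$$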
$q$-analogs
Delivered by Fengyang Wang on Friday February 3, 2017
Some of the most interesting results in combinatorics are generalizations of simple problems or theorems to problems or theorems parameterized by a parameter $q$ (often complex-valued). By taking the limit as $q\to1$, the simple problems can be recovered. Surprisingly, many of these $q$-analogs take similar forms, and are seen across a wide variety of problems that may initially seem unrelated.
Concepts from MATH 239 will be used. It is heavily recommended that people who have not taken MATH 239 or an equivalent course read some material on ordinary generating functions.
Tensor Products
Delivered by Felix Bauckholt on Friday January 27, 2017
What is a tensor product? How are they useful? This talk will elaborate on this broad, widely applicable topic.
Integrability of Riccati Equations
Delivered by Letian Chen on Friday January 27, 2017
In 1841, 3 years before he discovered transcendental numbers, Joseph Liouville showed that the Riccati equation $y' = ay^2 + bx^m$ has a quadrature solution if and only if $m = 0, -2, -\frac{4n}{2n\pm1}$. Nowadays, we see Liouville's results as the foundation of qualitative analysis of differential equations, a fascinating subarea of DEs, which studies, for example, the existence and uniqueness of solutions of different kinds of equations. In my talk, I will (hopefully) prove the classical result of Liouville after a quick review of the history and related linear theory.
Diophantine Approximation and the Discovery of Transcendental Numbers
Delivered by Anton Mosunov on Friday January 20, 2017
In 1844, Joseph Liouville discovered transcendental numbers — those numbers that are not roots of polynomials with rational coefficients. Nowadays, we see Liouville's discovery as the foundation of Diophantine approximation, a fascinating subarea of number theory, which studies how well algebraic numbers can be approximated by the rationals. In my talk, I will prove the classical result of Liouville and explain further advances in the area, such as Thue's theorem and the celebrated theorem of Roth, which enabled its discoverer to receive the Fields medal in 1958.
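A sketch of how Liouville's inequality gets used: if $\alpha$ is algebraic of degree $d \ge 2$, there is a constant $c > 0$ with $|\alpha - p/q| > c/q^d$ for every rational $p/q$. Liouville's constant $L = \sum_{n \ge 1} 10^{-n!}$ beats this bound for every $d$, since its truncations $p_N/q_N$ with $q_N = 10^{N!}$ satisfy

$$\left| L - \frac{p_N}{q_N} \right| \le 2 \cdot 10^{-(N+1)!} = \frac{2}{q_N^{N+1}},$$

and hence $L$ is transcendental.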
Game Theory (Part 2): Extensive Form Finite and Infinite Games
Delivered by Koosha Totonchi on Friday January 20, 2017
An extensive form game, unlike a regular one-shot game, has the element of sequential or repeated movement. Chess, for example, is an extensive form game. However, we will see that our mathematical definition of "game" will be much broader, so models we construct can be applied to problems in economics, computer science, and engineering—not just chess.
We will explore what it means to "solve" a game where players make back to back moves. We will also go over games where players pick their moves at the same time but play over and over. There will be a review of basic definitions from Game Theory Part 1, and we will introduce some new terms which will help us with extensive forms.
At the end, we can hopefully look at some interesting applications.
Note: this builds on the first Game Theory talk given in Fall 2016. If you need to review important concepts or you couldn't attend, a webpage with definitions and basic information is available here. We'll do a brief review so don't worry too much! If you didn't come to the first session, you should be able to understand everything here regardless.
Voting with Homomorphic Encryption
Delivered by Sidhant Saraogi on Friday December 2, 2016
In light of the recently concluded elections, or as John Oliver would call it, "A horrifying glimpse at Satan's Pinterest Board 2016", "The One who must not be named" has repeatedly insinuated that the elections have been rigged. Our humble aim: present a voting scheme where:
each voter casts exactly one ballot.
voting is anonymous.
We delve into two areas on our way to prove our goal :
Blind Signatures, which allow for anonymous voting
Paillier Cryptosystem, which gives us the ability to sum up the votes even though they have been encrypted, thus allowing the election to be "publicly audited".
We might also, if time permits, talk about more modern systems for enabling fair elections that have even been implemented in real life.
This talk is based off Ron Rivest's lecture, of which a summary is available.
How to Complicate Fourier Analysis
Delivered by Mohamed El Mandouh on Friday December 2, 2016
Fourier analysis was initially introduced as a way to study the thermodynamic heat equation. At its simplest, it is the study of how to represent functions as an infinite sum of sines and cosines. However, the evil mathematicians felt that Fourier analysis was too simple and decided to steal it from the hardworking physicists and expand on it. In addition, they decided to complicate it by introducing Harmonic analysis. So, what is the difference between Harmonic and Fourier analysis? We say that Harmonic analysis is the process of representing functions on a locally compact group G as a sum of the characters of the group, and Fourier analysis restricts this process to abelian groups.
In this talk I will introduce the Fourier series, the Fourier transform and how it applies to abelian groups. What about non-abelian groups, you might ask? The answer is Harmonic analysis! Finally, I will explain the role of Fourier transform in quantum mechanics, specifically how the Fourier transform interchanges position and momentum space.
Quandles: The Algebra of Knots
Delivered by Brennen Creighton-Young on Friday November 25, 2016
One of the major goals of knot theory is to determine whether or knot two knots can be continuously deformed into one another. This idea can be fully captured algebraically — and leads us to algebraic structures that not only capture the desired notion of knot deformations, but reveal themselves as truly fascinating mathematical objects. We will use only linear algebra and elementary abstract algebra.
Though continuous deformation is a straightforward idea, the rigorous definition of this, the notion of ambient isotopy, is practically unusable. This talk will provide a quick overview of basic knot theory and will work towards developing numerous algebraic strategies to identify when two knots are ambient isotopic. We will focus mostly on providing motivation for keis, an algebraic representation of knots, as well as their generalization, quandles. Quandles will prove to be not only helpful with respect to the goal of the classification of knots, but also rich algebraic structures. The talk will involve small amounts of group theory, linear algebra and module theory.
Naïve Lie Theory
Delivered by Aidan Patterson on Friday November 25, 2016
In 1870, Sophus Lie was studying the symmetries of differential equations, which generally form "continuous" groups. The analogous problem for polynomials was solved by Galois previously, so there was incentive to solve the related problem for these new kinds of groups. We'll motivate such continuous groups by looking at matrices, assuming only a little linear algebra.
Lie started a study of simplicity in these continuous groups. Lie understood these groups as groups generated by infinitesimal elements, which led him to believe that a group $G$ should be generalised to consider infinitesimal elements.
Today we separate the infinitesimal elements of a group $G$ to form a Lie algebra $g$, which captures most of the important structures of $G$, but is easier to handle. This talk will focus on motivating Lie's definitions, and provide some techniques used to prove simplicity for Lie groups. As well, some specific examples such as $\mathrm{O}(n)$, $\mathrm{SO}(n)$, $\mathrm{U}(n)$, $\mathrm{SU}(n)$, and $\mathrm{Sp}(n)$ will be mentioned to illustrate the concepts presented.
Please read this basic overview of Lie Theory.
Basic Elliptic Curves
Delivered by Akshay Tiwary on Friday November 18, 2016
Elliptic curves lie in the intersection of algebra, geometry, number theory and complex analysis. While my talk won't require any experience with complex analysis or algebraic geometry, I hope to expose you to this active area of research. Although I could tell you that the meat of Wiles' proof of Fermat's Last Theorem involved proving a special case of a conjecture previously known as the Taniyama-Shimura conjecture involving elliptic curves, or that the Birch and Swinnerton-Dyer conjecture is an open millennium prize problem about the relation between the arithmetic behaviour of elliptic curves and the analytic behaviour of an L-function, or that elliptic curves over finite fields are so useful for cryptography that there was a memo recommending elliptic curves for federal government use in 1999, I don't have to, because you're a math student who will attend this talk without needing to be wowed with cool applications, right?
Hypercomplex Numbers
Delivered by Fengyang Wang on Friday November 18, 2016
There are exactly three distinct two-dimensional unital algebras over the reals, up to isomorphism. Each of these algebras corresponds to a unique geometry, with applications. This talk will develop the concepts needed to understand two-dimensional algebras over the reals, starting from the definitions of key concepts. We will rediscover the familiar complex numbers and generalize its construction to find the other hypercomplex number systems. We will then prove the result that these are the unique hypercomplex number systems, up to isomorphism. Finally, we will discuss possible generalizations to $n$ dimensions. Please ensure that you have a good understanding of fundamental concepts of two-dimensional linear algebra.
We will use Catoni, F., Cannata, R., Catoni, V., & Zampetti, P. (2004) as a reference.
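As a hedged taste of one of the three algebras, the dual numbers (where $\epsilon^2 = 0$), the Python sketch below shows how their arithmetic differentiates polynomials automatically; the class and the example function are invented for illustration:

```python
class Dual:
    """Dual numbers a + b*eps with eps**2 = 0, one of the three
    two-dimensional unital algebras over the reals."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def f(x):  # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
    return x * x * x + Dual(2.0) * x

y = f(Dual(3.0, 1.0))  # evaluate at 3 + eps
print(y.a, y.b)        # 33.0 29.0, i.e. f(3) and f'(3) in one pass
```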
Random Graphs and Complex Networks
Delivered by Frieda Rong on Friday November 11, 2016
From the graph of Facebook friendships to the neurons inside your brain, networks are all around us. We'll go over some surprising connections in network theory and see some of the following:
Delivered by Kai Rüsch on Friday November 11, 2016
How do we count how many holes a shape has? We can answer this question using homology groups, whose orders count the number of $n$-dimensional holes.
Game Theory (Part 1)
Delivered by Koosha Totonchi on Friday November 4, 2016
A game is a "mathematical model between interacting decision makers" where each player must make choices based on a set of rules. Every individual in a game must also have a preferred reaction to any combination of actions taken by other agents. Game theory is about the study and application of these models. It involves various solution concepts and methods that can be employed to predict the outcomes of strategic engagements. This talk will introduce the major ideas in the field. There will be a focus on basic definitions, types of games, and how we can "solve games" using the Nash equilibrium. Hopefully we'll get to "play" some ourselves. Towards the end, we can also review some neat unsolved problems in game theory that are very easy to understand, but prove really difficult to solve.
Delivered by Gregory Patchell on Friday October 28, 2016
The first part of this talk will be concerned with a brief introduction to measure theory. We will answer questions such as: How do we measure sets? Can every set be measured?
The second part of this talk will be the construction of the Lebesgue integral and its basic properties, with comparisons to Riemann integration as we go. In Math 148, we saw that pointwise convergence doesn't play well with Riemann integration. We will see that with the powerful Monotone Convergence Theorem and the Dominated Convergence Theorem, pointwise convergence and Lebesgue integration are a match made in heaven.
Definitions and examples of σ-algebras, measures, and measurable functions
Motivations for Lebesgue integrals
Construction of the Lebesgue integral with comparison to construction of the Riemann integral
Benefits and properties of Lebesgue integration and limitations of Riemann integration
Limitations of Lebesgue Integration
Probability Spaces
Vitali Sets
Convex Optimization
Delivered by Rolina Wu on Friday October 28, 2016
This talk will introduce the basics for Convex Optimization, several popular optimization algorithms, and the application for convex optimization in Machine Learning.
Boyd and Vandenberghe, 2004 will be used for reference.
Reinforcement Learning in Games
Delivered by Agastya Kalra on Friday October 21, 2016
Delivered by Luthfi Mawarid on Friday October 21, 2016
This talk will cover the very basics of Category Theory, motivated by simple examples using the category of sets. I will then introduce some applications to other areas of mathematics, such as linear algebra and programming language theory.
Types and Object-Oriented Programming
Delivered by Nikita Kapustin on Friday October 14, 2016
I'll be talking about how to use basic types like sets and functions to construct other types and use them to represent objects.
Delivered by Sidhant Saraogi on Friday October 14, 2016
I will try to provide a brief introduction to Information Theory, working towards motivating Shannon's Source Coding Theorem. We will use rather simple examples (e.g., repetition codes) to explain the idea of noisy channels, and similarly simple examples to explain the idea behind the theorem, and eventually try to prove it for a rather specific example (if we have the time!).
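As a minimal illustration of the quantity at the heart of the theorem (the example strings are arbitrary), the empirical entropy of a source can be computed in a few lines:

```python
import math
from collections import Counter

def entropy_bits(text):
    """Empirical Shannon entropy in bits per symbol; by the source coding
    theorem, no lossless code can beat this on average."""
    counts, total = Counter(text), len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy_bits("aaaaaaaa"))  # 0.0: a constant source compresses away entirely
print(entropy_bits("abababab"))  # 1.0: one bit per symbol
print(entropy_bits("abcdefgh"))  # 3.0: uniform over 8 symbols needs 3 bits
```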
Infinitely Awesome Graphs
Delivered by Zouhaier Ferchiou on Wednesday October 12, 2016
This talk will cover the basics of dealing with graphs with an infinite number of vertices, focusing mostly on countably many vertices.
I will cover basic definitions of graph theory, Menger's theorem (Only 1 version, don't worry), trees, girths and chromatic numbers briefly for people not familiar with the terms. We will then jump into the sweet stuff, defining teeth and spines, local infinity, rays...
I will then present (and prove some) quite useful theorems and lemmas, including the "de Bruijn-Erdős theorem" and the "Star-Comb Lemma". Finally, we will talk about universal graphs, the Rado graph and why they are cool.
Ultrafilters
Delivered by Felix Bauckholt on Wednesday October 12, 2016
In this talk, I will try to explain what ultrafilters are. I will do this by presenting a few motivating examples, and also some examples that, even if they don't motivate anything, are just really cool.
Hyperreal numbers are used to motivate this talk.
Delivered by Fengyang Wang on Wednesday October 12, 2016
This talk will cover the basics of group theory. There are no official prerequisites for this talk, but MATH 145 and MATH 146 are an asset. The group theory part of the talk will mostly be based on Alekseev, 2004, specifically sections 1.1 (motivation), 1.2 (transformation groups), 1.5, 1.9, and 1.13 (various morphisms), and 1.11 (quotient groups).
Due to time constraints, it is inevitable that some content must be rushed. I will not go over proofs and derivations in full rigour, and I highly advise studying the reference material after the talk to catch up on what you may have missed. Please message me in advance if there are any other subjects in particular that you would like me to discuss.
Integrated Modelling of Microstructure Evolution and Mechanical Properties Prediction for Q&P Hot Stamping Process of Ultra-High Strength Steel
Yang Chen, Huizhen Zhang, Johnston Jackie Tang, Xianhong Han, Zhenshan Cui
Consumption of non-renewable petroleum, traffic safety, and air pollution have become serious problems with the development of the auto industry. Song et al. [ 1 ] showed that weight reduction of the body-in-white can significantly lower the entire vehicle weight, improving fuel efficiency. Ultra-high strength steel manufactured via hot stamping has become one of the primary materials for building lightweight vehicles. According to Bok et al. [ 2 ], a blank is heated to the austenitizing temperature (above \(Ae_{3}\)) and held for a few minutes before being transferred onto a tool for further forming and quenching. The tensile strength of the final product reaches approximately 1500 MPa due to an almost complete transformation to martensite.
Lath martensite is the essential microstructure of parts made by the hot stamping process, providing specimens with high strength but low plasticity. In general, the elongation of a hot-stamped sample is less than 7%, which leads to unsatisfactory comprehensive mechanical properties. Q&P heat treatment technology was initially proposed by Speer et al. [ 3 ]. The final microstructure of a product after Q&P heat treatment is a duplex phase of martensite and austenite, in which plasticity and toughness are improved. Liu et al. [ 4 ] proposed the Q&P hot stamping process, which combines Q&P heat treatment with hot stamping. Thermal simulation results indicated that the product's plasticity was significantly improved by Q&P hot stamping, while the strength loss was relatively negligible. Han et al. [ 5 ] designed and manufactured a corresponding mold to produce U-cap parts by Q&P hot stamping, where three types of steel were studied for their suitability for Q&P hot stamping.
Investigating microstructure evolution is worthwhile, because microstructure transformation has a great influence on the final mechanical properties of a product manufactured through the Q&P hot stamping process. Therefore, the development of prediction models for mechanical properties, and of corresponding simulations, is essential for operating the process and optimizing its parameters.
Microstructure evolution models have become a rising research field for investigators working on hot forming technology. Regarding diffusional phase transformation, Kirkaldy and Venugopalan [ 6 ] proposed the K-V model for microstructure prediction. Li et al. [ 7 ] presented a modified model in which the TTT curve of the K-V model was substituted by a CCT curve; the new model is known as the Li model. Åkerström and Oldenburg [ 8 ] developed the A-O model on the basis of the K-V model, in which the effect of the boron element was considered and included. As for non-diffusional phase transformation, Koistinen and Marburger [ 9 ] created the K-M model, which became the fundamental model in studying martensitic transformation. The impacts of temperature change and austenite grain size were further involved in the Lee model proposed by Lee et al. [ 10 ], and the model successfully predicted the deformation of a cylindrical sample during quenching. In addition, Zhu et al. [ 11 ] proposed a model to describe the carbon diffusion and interface migration during the carbon partitioning process. Wang et al. [ 12 ] coupled the micro-scale carbon diffusion and interface migration laws into a macro-scale thermomechanical coupling simulation, performed a multi-physics, multi-scale, multi-phase coupling simulation for Q&P hot stamping, and studied the microstructure evolution during the two-stage Q&P process [ 13 ].
Research on the relationship between microstructure and mechanical properties is relatively sparse compared with the aforementioned studies, and is concentrated almost entirely on hardness prediction. For example, Lee et al. [ 14 ] studied the decomposition process of austenite in 1045 steel through the Li model, and the accuracy of the prediction was verified through a hardness experiment. Zhu et al. [ 15 ] predicted the microstructure evolution of hot-stamped components through the K-V and K-M models, and observed the changing tendency of hardness and strength with cooling rate via experimental results. Hamelin et al. [ 16 ] predicted the ferrite distribution and hardness by the Li model during a welding process. Cui et al. [ 17 ] investigated the austenite decomposition process of spheroidized bearing steel by combining thermodynamics and kinetics simulation, and the hardness of the product was predicted as well. Yasuhiro et al. [ 18 ] predicted the phase transformation and hardness of a spot-welded tailored blank in hot stamping, and the calculated results were consistent with experimental values. Mori et al. [ 19 ] summarized the research on property prediction, and offered some instances of the prediction of product properties, i.e., hardness prediction.
According to previous research on, and applications of, phase transformation models and property prediction, the following issues should be addressed: (1) The phase transformation models have only been applied within general hot stamping, and rarely in Q&P hot stamping. (2) Current predictions of the mechanical properties of hot-stamped materials are based on measuring the microstructure content by observation, and the predicted quantities are limited to hardness values, rarely involving tensile strength and elongation.
In this work, a complete integrated model of microstructure evolution and mechanical properties prediction for the Q&P hot stamping process was established, incorporating the constrained carbon paraequilibrium (CCE) model. All introduced models were implemented in the finite element software LS-DYNA, and the thermal simulation of Q&P hot stamping and experiments on U-cap parts were carried out. Conclusively, the accuracy of the integrated model was verified by comparison with the experimental results.
2 Integrated Model for Phase Transformation and Mechanical Properties Prediction
2.1 Description of Q&P Hot Stamping Process
Q&P hot stamping is an advanced technology combining hot stamping and Q&P heat treatment. In order to obtain a fully austenitic microstructure, the blank is heated above \(Ae_{3}\) before being transferred onto a tool for quenching and forming. After reaching a designated temperature between \(M_{s}\) and \(M_{f}\), known as the quenching temperature ( \(Q_{T}\)), the specimen is held at a carbon partitioning temperature ( \(P_{T}\)). Finally, lath martensite and retained austenite are obtained at the end of cooling, i.e., when the temperature of the sheet has reached room temperature. In one-step Q&P hot stamping, the carbon partitioning temperature \(P_{T}\) and the quenching temperature \(Q_{T}\) are identical, while in the two-step Q&P process the carbon partitioning temperature \(P_{T}\) is set higher than \(Q_{T}\). The prediction of microstructure evolution and mechanical properties in this study is based on one-step Q&P hot stamping. The schematic illustration of the one-step Q&P hot stamping proposed by Han et al. [ 5 ] is shown in Figure 1.
Schematic illustration of the one-step Q&P hot stamping process
Q&P hot stamping consists of two quenching processes and one carbon partitioning process. The transformations of ferrite, pearlite, bainite and martensite proceed as usual in the first quenching stage, until the carbon partitioning temperature is reached. Carbon atoms are transferred from the martensite into the retained austenite during partitioning, which lowers the martensite start temperature of the remaining austenite for the second quench ( \(M_{rs}\)) as the carbon content within the austenite becomes sufficiently high. If \(M_{rs}\) is higher than room temperature, part of the unconverted austenite transforms into secondary martensite while the rest remains as retained austenite. Conversely, if \(M_{rs}\) is lower than room temperature, the remaining austenite is unaffected, i.e., no transformation happens during the subsequent quenching process. In any case, both the diffusional and non-diffusional phase transitions and the carbon diffusion process during Q&P hot stamping require corresponding models for their description.
2.2 Integrated Model for Phase Transformation
2.2.1 Diffusional Phase Transition Model
The Li model [ 7 ], developed as a modification of the K-V model [ 6 ], is qualified to predict the diffusional phase transitions, i.e., the ferrite, pearlite and bainite transitions. The TTT curve of the K-V model was substituted by a CCT curve in the Li model, which suits the continuous-cooling conditions encountered in practical manufacture. The Li model can be expressed as:
$$\frac{{{\text{d}}X}}{{{\text{d}}t}} = \frac{{f\left( {G,T} \right)}}{{f\left( {Comp} \right)}}f\left( X \right),$$
where \(X\) is the phase fraction of ferrite, pearlite or bainite, \(f\left( {G,T} \right)\) represents the influence of the austenite grain size and the temperature on the phase transformation, and \(f\left( {Comp} \right)\) stands for the effect of the material composition on the phase transformation; \(f\left( {G,T} \right)\) and \(f\left( {Comp} \right)\) differ for the different phases. \(f\left( X \right)\) represents the S curve of the phase transformation fraction, which is defined as:
$$f\left( X \right) = X^{{0.4\left( {1 - X} \right)}} \left( {1 - X} \right)^{0.4X} .$$
2.2.2 Non-diffusional Phase Transition Model
The Lee model [ 10 ] is applied to predict the martensitic transformation during both quenching processes; the influence of austenite grain size and temperature change is included to improve prediction accuracy. The specific expression of the Lee model is:
$$\frac{{{\text{d}}X_{m} }}{{{\text{d}}T}} = K \cdot X_{m}^{a} \left( {1 - X_{m} } \right) ^{b} ,$$
where \(X_{m}\) is the martensite content, and \(a\), \(b\), and \(K\) can be calculated by:
$$a = 0.420 - 0.246C + 0.359C^{2} ,$$
$$b = 1.320 + 1.576C + 1.933C^{2} ,$$
$$K = \frac{{G^{0.240} \cdot \left( {M_{s} - T} \right)^{0.191} }}{9.017 + 62.88C + 9.27Ni - 1.08Cr + 25.76Mo},$$
where \(M_{s}\) can be expressed by:
$$M_{s} \left( K \right) = 402 - 797C + 14.4Mn + 15.3Si - 31.1Ni + 345.6Cr + 434.6Mo + \left( {59.6C + 3.8Ni - 41Cr - 53.8Mo} \right)G + 273.15,$$
where the element symbols stand for mass fractions.
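A minimal numerical sketch of integrating Eq. ( 3) is given below, using explicit Euler steps and a small seed fraction, since \(X_{m} = 0\) is a fixed point of the rate equation; the composition, \(M_{s}\), and rate function are hypothetical placeholders rather than values computed from Eqs. ( 4)–( 7):

```python
def lee_martensite(T_start, T_end, a, b, K_of_T, dT=1.0, X0=1e-4):
    """Explicit-Euler integration of the Lee model while cooling from
    T_start down to T_end (temperatures in K); dT is the size of each
    cooling step and X0 a small seed fraction."""
    X, T = X0, T_start
    while T > T_end and X < 1.0:
        X += K_of_T(T) * X**a * (1.0 - X)**b * dT
        T -= dT
    return min(X, 1.0)

# Hypothetical inputs for illustration only.
C = 0.22                                  # assumed carbon mass fraction
a = 0.420 - 0.246 * C + 0.359 * C**2      # Eq. (4)
b = 1.320 + 1.576 * C + 1.933 * C**2      # Eq. (5)
Ms = 650.0                                # assumed Ms in K
K = lambda T: 0.02 * max(Ms - T, 0.0) ** 0.191  # placeholder rate, not Eq. (6)
print(lee_martensite(T_start=Ms, T_end=300.0, a=a, b=b, K_of_T=K))
```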
2.2.3 Carbon Diffusion Model
The constrained carbon paraequilibrium (CCE) model was proposed by Speer et al. [ 20 ] for investigating the dynamics of carbon diffusion during carbon partitioning. The CCE model is expressed by:
$$X_{C}^{\gamma } = X_{C}^{a} exp\left( {\frac{{76789 - 43.8T - \left( {169105 - 120.4T} \right)X_{C}^{\gamma } }}{R \cdot T}} \right).$$
The carbon contents within the austenite phase and the martensite phase were equal to the total carbon content \(X_{C}\) within the steel before carbon partitioning. After the initial quenching, the austenite mole fraction \(f_{i}^{\gamma }\) and the martensite mole fraction \(f_{i}^{\alpha }\) could be computed. The austenite mole fraction and carbon content were updated to \(f_{CCE}^{\gamma }\) and \(X_{{C_{CCE} }}^{\gamma }\), respectively, when the carbon partitioning was done. According to the conservation of iron atoms, which do not move across the interface, the following equation can be derived:
$$f_{CCE}^{\gamma } \left( {1 - X_{{C_{CCE} }}^{\gamma } } \right) = f_{i}^{\gamma } \left( {1 - X_{C} } \right).$$
The total content of carbon is constant, which can be expressed by:
$$f_{CCE}^{a} X_{{C_{CCE} }}^{a} + f_{CCE}^{\gamma } X_{{C_{CCE} }}^{\gamma } = X_{C} .$$
Additionally, the mole fractions of the two phases sum to unity:
$$f_{CCE}^{a} + f_{CCE}^{\gamma } = 1,$$
where \(X_{C}^{\gamma }\) and \(X_{C}^{a}\) are the carbon mole fractions in austenite \(\gamma\) and martensite \(a\), respectively, \(X_{C}\) is the total carbon content, R is the gas constant 8.314 J/(mol·K), T is the absolute temperature, and \(f_{i}^{\gamma }\) is the austenite mole fraction after the initial quenching of a steel blank. \(f_{\text{CCE}}^{a}\) and \(X_{{C_{CCE} }}^{a}\) are the martensite mole fraction and carbon content after carbon partitioning, respectively, and \(f_{CCE}^{\gamma }\) and \(X_{{C_{CCE} }}^{\gamma }\) are the austenite mole fraction and carbon content after carbon partitioning, respectively. The carbon contents will be identical after partitioning is finished, i.e., \(X_{C}^{\gamma }\) = \(X_{{C_{CCE} }}^{\gamma }\), \(X_{C}^{a} = X_{{C_{CCE} }}^{a}\).
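A minimal numerical sketch of solving Eq. ( 8) together with the balances of Eqs. ( 9)–( 11) is given below, using bisection on the post-partitioning austenite carbon content; the inputs are hypothetical placeholders, not measurements from this work:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def cce_partition(XC, f_gamma_i, T):
    """Bisection for the austenite carbon content after partitioning.
    XC: total carbon mole fraction; f_gamma_i: austenite mole fraction
    after the first quench; T: partitioning temperature in K."""
    def residual(XCg):
        f_g = f_gamma_i * (1.0 - XC) / (1.0 - XCg)   # Eq. (9)
        f_a = 1.0 - f_g                              # Eq. (11)
        XCa = (XC - f_g * XCg) / f_a                 # Eq. (10)
        rhs = XCa * math.exp((76789 - 43.8 * T
                              - (169105 - 120.4 * T) * XCg) / (R * T))
        return XCg - rhs                             # zero when Eq. (8) holds

    lo, hi = XC, 0.25  # partitioning can only enrich the austenite
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical inputs: 1 at.% carbon, 20% austenite, partitioning at 573 K.
print(cce_partition(XC=0.01, f_gamma_i=0.20, T=573.0))
```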
2.2.4 Integrated Model for Phase Transformation
In this work, the Li model [ 7 ] and the Lee model [ 10 ] were utilized to describe the diffusional and non-diffusional phase transitions in the first quenching process, giving the contents of martensite, bainite, ferrite, pearlite, and unconverted austenite. Then, the CCE model [ 20 ] was employed to describe the dynamics of carbon diffusion in the carbon partitioning process, and the carbon contents of martensite and austenite were calculated. On this basis, the \(M_{rs}\) point for the second quenching process was also obtained. If \(M_{rs}\) was higher than room temperature, the Lee model was used to describe the second martensite transformation; otherwise, the unconverted austenite was retained to room temperature. Thus, the phase transformations and the final phase contents were all accurately predicted.
Besides the aforementioned models, other alternatives, such as the K-V model and the A-O model that includes the influence of boron, are also available for diffusional phase transitions. In contrast, the K-M model is extensively utilized in the prediction of non-diffusional phase transitions. The calculated results of some of these models will be compared and analyzed in the next part.
2.3 Mechanical Properties Prediction Models
The mechanical properties of a target material depend primarily on the composition of each phase, i.e., the mechanical properties can be predicted from accurate contents of the different phases. A combined rule for calculating the hardness and an empirical model for calculating the strength are discussed in this section. For the elongation prediction, a modified model involving the effect of bainite has been created on the basis of the two-phase hybrid representation model.
2.3.1 Hardness Combined Rule
The hardness combined rule is implemented to compute the hardness of a part after Q&P hot stamping. The overall rule is expressed by Li et al. [ 7 ]:
$$HV = X_{M} HV_{M} + X_{B} HV_{B} + (X_{F} + X_{P} )HV_{F + P} ,$$
where \(HV\) represents the Vickers hardness, and \(HV_{M}\), \(HV_{B}\), and \(HV_{F + P}\) represent the martensite hardness, bainite hardness, and ferrite & pearlite mixed-phase hardness, respectively. The hardness of each phase depends on the cooling rate and the element contents. The calculation formulas are given by Li et al. [ 7 ].
Martensite hardness:
$$HV_{M} = 127 + 949C + 27Si + 11Mn + 8Ni + 16Cr + 21logVr.$$
Bainite hardness:
$$HV_{B} = - 323 + 185C + 330Si + 153Mn + 65Ni + 144Cr + 191Mo + \left( {89 + 53C - 55Si - 22Mn - 10Ni - 20Cr - 33Mo} \right)logVr.$$
Ferrite and pearlite mixture hardness:
$$HV_{F + P} = 42 + 223C + 53Si + 30Mn + 12.6Ni + 7Cr + 19Mo + \left( {10 - 19Si + 4Ni + 8Cr + 130V} \right)logVr,$$
where \(Vr\) is the cooling rate (°C/h) at 700 °C.
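The combined rule of Eqs. ( 12)–( 15) is straightforward to evaluate once the phase fractions are known; the short sketch below does so for a hypothetical phase mix and composition (the numbers are placeholders, not measurements from this work):

```python
import math

def vickers_hardness(X, comp, Vr):
    """Combined hardness rule, Eqs. (12)-(15).  X: phase fraction dict with
    keys 'M', 'B', 'F', 'P'; comp: alloy mass fractions; Vr: cooling rate
    at 700 C in C/h."""
    C, Si, Mn, Ni, Cr, Mo, V = (comp.get(k, 0.0)
                                for k in ("C", "Si", "Mn", "Ni", "Cr", "Mo", "V"))
    lv = math.log10(Vr)
    HV_M = 127 + 949*C + 27*Si + 11*Mn + 8*Ni + 16*Cr + 21*lv
    HV_B = (-323 + 185*C + 330*Si + 153*Mn + 65*Ni + 144*Cr + 191*Mo
            + (89 + 53*C - 55*Si - 22*Mn - 10*Ni - 20*Cr - 33*Mo)*lv)
    HV_FP = (42 + 223*C + 53*Si + 30*Mn + 12.6*Ni + 7*Cr + 19*Mo
             + (10 - 19*Si + 4*Ni + 8*Cr + 130*V)*lv)
    return X["M"]*HV_M + X["B"]*HV_B + (X["F"] + X["P"])*HV_FP

# Hypothetical phase mix and composition for illustration only.
print(vickers_hardness({"M": 0.85, "B": 0.10, "F": 0.03, "P": 0.02},
                       {"C": 0.22, "Si": 0.25, "Mn": 1.3, "Cr": 0.2}, Vr=1e5))
```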
2.3.2 Strength Prediction Model
Cui's empirical formula gives the material strength after Q&P hot stamping. Cui et al. [ 21 ] performed numerous tensile and hardness tests on boron steel after hot stamping, from which the empirical formula relating strength and hardness was obtained as follows:
$$y = y_{0} + ae^{x/b} ,$$
where \(y_{0}\) = 226.05108, \(a\) = 272.0922, \(b\) = 29.15449, and \(x\) is the Rockwell hardness. The Vickers hardness predicted by the hardness combined rule can be converted into Rockwell hardness according to GB/T 1172-1999. Then, the strength can be computed by Eq. ( 16).
2.3.3 Elongation Prediction Model and Its Modified Model
Matlock and Speer [ 22 ] used the two-phase hybrid representation model presented by Mileiko [ 23 ] to predict the tensile strength and elongation of mixed martensitic and austenitic phases. Mileiko's model assumes that both phases are capable of plastic deformation, and that the strains in the two phases are equal at every instant. The relationship between stress and strain of the hybrid phases is expressed by:
$$S_{all} = X_{M} S_{M} + X_{A} S_{A} = \left( {1 - X_{A} } \right)S_{M}^{ *} \left( {\frac{\varepsilon }{{\varepsilon_{M}^{ *} }}} \right)^{{\varepsilon_{M}^{ *} }} \exp \left( {\varepsilon_{M}^{ *} - \varepsilon } \right) + X_{A} S_{A}^{ *} \left( {\frac{\varepsilon }{{\varepsilon_{A}^{ *} }}} \right)^{{\varepsilon_{A}^{ *} }} \exp \left( {\varepsilon_{A}^{ *} - \varepsilon } \right),$$
where \(S_{all}\) is the total stress of the hybrid phases (nominal stress), \(\varepsilon\) is the true strain, and \(S_{M}\) and \(S_{A}\) are the martensite stress and austenite stress, respectively. \(X_{M}\) and \(X_{A}\) are the volume fractions of the martensite and austenite, respectively. \(S_{M}^{*}\) and \(\varepsilon_{M}^{*}\) are the tensile strength and elongation of martensite, and \(S_{A}^{*}\) and \(\varepsilon_{A}^{*}\) are the tensile strength and elongation of austenite, respectively.
Besides some retained austenite, a noticeable content of bainite was found after metallographic observation and phase content determination of the Q&P thermal simulation samples in this paper. Kumar et al. [ 24 ] have confirmed that elongation and toughness can be improved by a bainite structure, especially upper bainite. Hence, a modified elongation prediction model is built upon the two-phase hybrid representation model to include the effect of bainite, which can be expressed as:
$$S_{all} = X_{M} S_{M} + X_{A} S_{A} + X_{B} S_{B} = X_{M} S_{M}^{*} \left( {\frac{\varepsilon }{{\varepsilon_{M}^{*} }}} \right)^{{\varepsilon_{M}^{*} }} \exp \left( {\varepsilon_{M}^{*} - \varepsilon } \right) + X_{A} S_{A}^{*} \left( {\frac{\varepsilon }{{\varepsilon_{A}^{*} }}} \right)^{{\varepsilon_{A}^{*} }} \exp \left( {\varepsilon_{A}^{*} - \varepsilon } \right) + X_{B} S_{B}^{*} \left( {\frac{\varepsilon }{{\varepsilon_{B}^{*} }}} \right)^{{\varepsilon_{B}^{*} }} \exp \left( {\varepsilon_{B}^{*} - \varepsilon } \right).$$
In accordance with Kovalenko et al. [ 25 ], the tensile strength reaches its maximum when necking appears, expressed by:
$$\frac{{{\text{d}}S_{all} }}{{{\text{d}}\varepsilon }} = 0.$$
The calculated result of Eq. ( 19) is:
$$X_{M} S_{M}^{*} \exp \left( {\varepsilon_{M}^{*} - \varepsilon } \right)\frac{{\varepsilon^{{\varepsilon_{M}^{*} - 1}} }}{{\varepsilon_{M}^{{*\varepsilon_{M}^{*} }} }}\left( {\varepsilon_{M}^{*} - \varepsilon } \right) + X_{A} S_{A}^{*} \exp \left( {\varepsilon_{A}^{*} - \varepsilon } \right)\frac{{\varepsilon^{{\varepsilon_{A}^{*} - 1}} }}{{\varepsilon_{A}^{{*\varepsilon_{A}^{*} }} }}\left( {\varepsilon_{A}^{*} - \varepsilon } \right) + X_{B} S_{B}^{*} \exp \left( {\varepsilon_{B}^{*} - \varepsilon } \right)\frac{{\varepsilon^{{\varepsilon_{B}^{*} - 1}} }}{{\varepsilon_{B}^{{*\varepsilon_{B}^{*} }} }}\left( {\varepsilon_{B}^{*} - \varepsilon } \right) = 0.$$
The strain \(\varepsilon\) at this point is regarded as the elongation and expressed as \(\varepsilon^{ *}\), so Eq. ( 20) can be adjusted to:
$$\varepsilon^{*} = \frac{{X_{M} \beta_{M} \varepsilon_{M}^{*} \varepsilon^{{*\left( {\varepsilon_{M}^{*} - 1} \right)}} + X_{A} \beta_{A} \varepsilon_{A}^{*} \varepsilon^{{*\left( {\varepsilon_{A}^{*} - 1} \right)}} + X_{B} \beta_{B} \varepsilon_{B}^{*} \varepsilon^{{*\left( {\varepsilon_{B}^{*} - 1} \right)}} }}{{X_{M} \beta_{M} \varepsilon^{{*\left( {\varepsilon_{M}^{*} - 1} \right)}} + X_{A} \beta_{A} \varepsilon^{{*\left( {\varepsilon_{A}^{*} - 1} \right)}} + X_{B} \beta_{B} \varepsilon^{{*\left( {\varepsilon_{B}^{*} - 1} \right)}} }},$$
where \(\beta_{M} = \frac{S_{M}^{*} \exp(\varepsilon_{M}^{*})}{(\varepsilon_{M}^{*})^{\varepsilon_{M}^{*}}}\), \(\beta_{A} = \frac{S_{A}^{*} \exp(\varepsilon_{A}^{*})}{(\varepsilon_{A}^{*})^{\varepsilon_{A}^{*}}}\), and \(\beta_{B} = \frac{S_{B}^{*} \exp(\varepsilon_{B}^{*})}{(\varepsilon_{B}^{*})^{\varepsilon_{B}^{*}}}\). Eq. ( 21) gives the relationship between the phase contents and the elongation \(\varepsilon^{ *}\): the elongation can be calculated once the austenite content \(X_{A}\), bainite content \(X_{B}\) and martensite content \(X_{M}\) have been determined. The single-phase mechanical properties of martensite, austenite and bainite are listed in Table 1, where the properties of martensite and austenite are taken from Davies [ 26 ] and those of bainite from Li [ 27 ].
Mechanical properties of austenite, martensite and bainite
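Eq. ( 21) is implicit in \(\varepsilon^{ *}\) and can be solved by fixed-point iteration once the phase fractions and single-phase properties are fixed; the sketch below uses hypothetical single-phase values standing in for the entries of Table 1:

```python
import math

def predict_elongation(phases, iters=200):
    """Fixed-point iteration on Eq. (21).  Each phase is a tuple
    (volume fraction X, tensile strength S*, elongation e* as true strain)."""
    terms = [(X, e, S * math.exp(e) / e**e) for X, S, e in phases]  # beta_i
    eps = sum(X * e for X, e, _ in terms)   # rule-of-mixtures initial guess
    for _ in range(iters):
        num = sum(X * beta * e * eps**(e - 1.0) for X, e, beta in terms)
        den = sum(X * beta * eps**(e - 1.0) for X, e, beta in terms)
        eps = num / den
    return eps

# Hypothetical single-phase values (MPa, true strain) for illustration only.
phases = [(0.80, 1800.0, 0.08),   # martensite
          (0.12, 900.0, 0.35),    # retained austenite
          (0.08, 1100.0, 0.20)]   # bainite
print(predict_elongation(phases))  # predicted uniform elongation of the mix
```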
3 Experimental Design of Q&P Hot Stamping
3.1 Stamping Material
Uncoated, cold-rolled blanks of boron steel B1500HS, supplied by Bao Steel Co., were used as the testing material. The chemical composition of the specimens is shown in Table 2, and the thickness of all samples is 1.6 mm. The original microstructure consists of ferrite and pearlite. The critical cooling rate for martensite transformation is 27 °C/s, as obtained by Tang et al. [ 28 ], and the \(M_{s}\) and \(M_{f}\) of B1500HS are 373 °C and 235 °C, respectively.
Chemical compositions (wt.%) of the B1500HS steel
3.2 Thermal Simulation Scheme
A Gleeble 3500 thermomechanical simulator was used to simulate the temperature history of the Q&P hot stamping process. The dimensions of each sample are 150 mm × 15 mm. The specimens were heated to 920 °C at a heating rate of 10 °C/s and held at that temperature for 5 min. Subsequently, the hot samples were quenched to a specified temperature (250 °C/300 °C/350 °C) at a cooling rate of 30 °C/s and held there for 80 s. Finally, all samples were cooled to room temperature (20 °C) at a cooling rate of 30 °C/s, as shown in Figure 2. In contrast, the samples used in the comparative experiment were cooled directly to room temperature at 30 °C/s after the completion of the austenite transformation.
Scheme of the thermal simulation for B1500HS
3.3 Q&P Hot Stamping Scheme of a U-cap Part
In order to verify the applicability of the integrated model proposed in this paper to the actual Q&P hot stamping process, a Q&P hot stamping experiment on a U-cap part was carried out. The dimensions of the formed U-cap part are illustrated in Figure 3. The sheet thickness is 1.6 mm, and the blank size is L260 × W150 (mm).
Dimensions of the formed U-Cap part
The sheet blank was heated to 920 °C in the furnace and held for 5 min, then quickly transferred to the tool to complete the forming and quenching processes. A non-contact temperature measurement method using the infrared thermal imager FLIR A615 was employed to record the temperature change of the blank. After a short while, the blank was taken out of the tool at 15 °C above the carbon partitioning temperature and then quickly transferred to a carbon partitioning furnace, where the carbon partitioning process was conducted at the designated temperature and holding time. Finally, the specimen was taken out and quenched a second time to room temperature, completing the Q&P hot stamping.
As shown in Figure 3, the flange, side and bottom of the U-cap part were taken for the following analysis. According to their contact conditions with the tool during forming and quenching, the cooling rates rank flange > side > bottom.
3.4 Measurement Methods
3.4.1 Mechanical Properties Test
The mechanical properties of B1500HS were obtained by tensile and hardness tests at room temperature. In the Gleeble thermal simulation experiments, the temperature was distributed uniformly over a length of approximately 20 mm. To ensure that necking and fracture occur in the middle of the piece during the tensile test, the gauge length was set to 10 mm; the sample geometry is shown in Figure 4. After sanding, tensile tests were performed on a Zwick/Roell Z100 tester at a rate of 1 mm/min, and the entire procedure was repeated three times to obtain an average tensile strength. In addition, a small piece was cut from the middle of the thermal simulation sample, and its hardness was measured on an HVS-30P device. For the U-cap part produced by Q&P hot stamping, specimens were cut from the side and the flange for tensile and hardness testing.
Dimensions of tensile specimens
3.4.2 Optical Microstructure (OM) Observation
The microstructure of the processed samples was observed with an Image A1m metallographic microscope; all specimens measured 5 mm × 6 mm. The specimens were prepared by standard metallographic procedures, i.e., mechanical grinding with SiC paper and polishing, and were etched in 4% nital for 5‒6 s.
3.4.3 Retained Austenite Content Measurement
The retained austenite content was estimated by X-ray diffraction (XRD) using a Rigaku D/max-2550VB/PC diffractometer. A Cu target was used with a scanning range of 35° to 105° and a step size of 0.02°; the working current was 120 mA and the operating voltage was 35 kV. The specimens measured 5 mm × 6 mm, and the oxide layer was removed by mechanical grinding. Each thermal simulation condition was repeated twice with individual sampling, and every sample was measured twice to obtain average values.
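The paper does not state the intensity-reduction formula used; for reference, a commonly used direct-comparison estimate (ASTM E975 style) is sketched below under that assumption:

```python
import numpy as np

def retained_austenite_fraction(I_gamma, R_gamma, I_alpha, R_alpha):
    """Direct-comparison estimate of the retained austenite volume fraction.

    I_* : integrated intensities of the austenite (fcc) and
          ferrite/martensite (bcc) reflections measured by XRD.
    R_* : the corresponding theoretical intensity factors.
    Averages I/R over the measured peaks (ASTM E975 style).
    """
    g = np.mean(np.asarray(I_gamma, float) / np.asarray(R_gamma, float))
    a = np.mean(np.asarray(I_alpha, float) / np.asarray(R_alpha, float))
    return g / (g + a)
```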
3.4.4 Retained Bainite Content Measurement
Bainite was also detected in the specimens produced by Q&P hot stamping. A certain bainite content benefits the mechanical properties of the steel, mainly the elongation. Liu et al. [29] demonstrated that bainite is a combination of ferrite + carbide, or a complex structure composed of ferrite + carbide + retained austenite. However, determining the bainite content precisely by XRD or other inspection methods is still a challenge. Following the method developed by Naderi et al. [30], the bainite is gray-scaled and its content is obtained as an area ratio with Image-Pro Plus 6.0 software.
The calibration of bainite at \(P_{T} = 300\) °C is shown in Figure 5. Figure 5(a) is the normal metallographic image of B1500HS after Q&P hot stamping, while Figure 5(b) is the calibration result, where the red area represents bainite. The bainite content is determined as the average area ratio over repeated calibrations.
Bainite calibration when \(P_{T}\) is 300 °C: a normal metallographic diagram; b calibrated result
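The study performs this calibration in Image-Pro Plus 6.0; purely as an illustration of the area-ratio idea, a simple gray-level threshold version is sketched below (the threshold value, and the assumption that bainite corresponds to the darker pixels, are placeholders to be calibrated per image set):

```python
import numpy as np
from PIL import Image

def bainite_area_fraction(path, threshold=128):
    """Estimate bainite content as an area ratio from a metallograph.

    Pixels darker than `threshold` are counted as bainite; the threshold
    is a manual calibration parameter, chosen per image set.
    """
    img = np.asarray(Image.open(path).convert("L"), dtype=np.uint8)
    return float((img < threshold).mean())
```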
3.4.5 Austenite Grain Size Measurement
A careful procedure was used to measure the original austenite grain size, which is needed for accurate prediction. A sample (5 mm × 6 mm) was taken from the thermal simulation experiment, and the decarburized layer was removed by mechanical sanding and polishing. The sample was then immersed in a corrosive solution of picric acid + SDBS (sodium dodecyl benzene sulfonate) + water at 70 °C. Once the original austenite grain boundaries were distinct, the microstructure was observed with the Image A1m microscope. The initial austenite grain size was determined following the national standard GB/T 6394-2017. Figure 6 shows the original austenite grain boundaries; the average grain diameter is 11.2 μm, corresponding to G = 10.
Image of original austenitic grain boundaries
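The G = 10 value can be cross-checked against the measured 11.2 μm diameter using the standard grain-size convention shared by ASTM E112 and GB/T 6394 (n = 2^{G−1} grains per square inch at 100× magnification); a short worked sketch:

```python
import math

def astm_mean_diameter_um(G):
    """Mean grain diameter (um) implied by grain size number G.

    n = 2**(G-1) grains per square inch at 100x magnification, so the
    true mean grain area is 1/(n*100**2) in^2; the equivalent square
    side length is returned in micrometres.
    """
    n_100x = 2.0 ** (G - 1)                 # grains per in^2 at 100x
    area_in2 = 1.0 / (n_100x * 100.0**2)    # true area per grain, in^2
    return math.sqrt(area_in2) * 25.4e3     # inches -> micrometres

print(astm_mean_diameter_um(10))  # ~11.2 um, matching the measured value
```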
4 Results and Discussion
In this paper, the presented model was verified by Gleeble thermal simulation experiments. The data generated by the Gleeble simulator are accurate and reliable owing to its precise control of the heating/cooling rate, which makes the validation more convincing. In addition, the combined model was used to predict the final properties of Q&P hot-stamped U-cap parts in the FEM software LS-DYNA, and the applicability of the integrated model to a practical manufacturing case was confirmed by comparison with the measured characteristics of the U-cap parts.
4.1 Validation of the Presented Integrated Model
According to the Gleeble thermal simulation experiment scheme, the phase contents were predicted by the presented integrated model. The original austenite grain size is set to G = 10 according to the measurement in Section 3.4.
The phase content prediction results of the presented integrated model are plotted in Figure 7. When \(P_{T}\) is 350 °C, the martensite content is 57.2% and the untransformed austenite content is 35.9% after the first quenching process. The carbon content in the austenite is 0.67 wt% after the 80 s carbon partitioning step, and \(M_{rs}\) is 276 °C, which indicates that a certain amount of secondary martensite forms during the second cooling process. Ultimately, the final retained austenite content is 4.2% and the martensite content is 88.9%. Similarly, when \(P_{T}\) is 300 °C, \(M_{rs}\) is 155 °C and secondary martensite forms during subsequent cooling, leading to a final retained austenite content of 7.0% and a martensite content of 86.1%. When \(P_{T}\) is 250 °C, the austenite content is 3.5% and the martensite content is 89.6%; notably, the \(M_{rs}\) point after carbon partitioning is below room temperature, which indicates that the austenite can be reliably preserved to room temperature. For all of these heat treatment parameters, the ferrite and pearlite contents are of the order of \(10^{-5}\) and can be neglected. The final bainite contents (6.9%) are almost identical for all three \(P_{T}\), because the bainite transformation is always complete before the carbon partitioning stage.
Phase content prediction results for Q&P hot stamping: a \(P_{T}\) = 350 °C; b \(P_{T}\) = 300 °C; c \(P_{T}\) = 250 °C (where A is austenite, F is ferrite, P is pearlite, B is bainite, M is martensite and Mr is secondary martensite)
A combined model involving the A-O, K-M, and CCE models was used as the contrast model in this paper. The A-O model presented by Åkerström and Oldenburg [8] is a modification of the widely used classical K-V model that accounts for the influence of boron in hot stamping steel, so it is commonly used to predict the diffusional phase transformations of hot stamped boron steel. The K-M model proposed by Koistinen and Marburger [9] is currently the main model for martensitic transformation, but it does not take the original austenite grain size into account. Comparing against this contrast model therefore provides a better test of the rationality of the presented model.
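For orientation, the K-M kinetics at the heart of the contrast model have a simple closed form. A minimal sketch using the rate constant α = 0.011 °C⁻¹ quoted in the original K-M work for plain carbon steels (the grain-size and carbon-content corrections of the Lee model are not reproduced here):

```python
import numpy as np

def km_martensite_fraction(T, Ms, alpha=0.011):
    """Koistinen-Marburger martensite fraction on quenching below Ms.

    f = 1 - exp(-alpha * (Ms - T)); no martensite forms above Ms.
    T, Ms in deg C; alpha in 1/degC.
    """
    return np.where(T < Ms, 1.0 - np.exp(-alpha * (Ms - T)), 0.0)

# e.g., quenching B1500HS (Ms = 373 C) to PT = 300 C:
f = km_martensite_fraction(300.0, 373.0)   # about 0.55
```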
The comparison results are given in Table 3. The austenite and bainite contents predicted by the presented model are clearly closer to the experimental values at all three carbon partitioning temperatures than those of the contrast model. For instance, when \(P_{T}\) is 350 °C, the relative error of the contrast model for the austenite content is 83%, compared with only 11% for the presented model. The presented model is therefore more accurate and reliable than the contrast model.
Table 3. Comparison of predicted phase contents (content and relative error, %) of the presented model and the contrast model with experimental values at each \(P_{T}\) (°C)
The integrated model is capable of predicting the phase contents of Q&P hot-stamped products; the main reasons are as follows:
Regarding the prediction of martensitic transformation, the K-M model considers only the influences of supercooling and phase content, whereas the Lee model further considers the austenite grain size and the carbon content in the austenite. Lee et al. [31] confirmed that the original austenite grain size strongly influences martensite nucleation and growth and the \(M_{s}\) point. Hippchen et al. [32] obtained martensite transformation curves with the K-M and Lee models in LS-DYNA 971: the curve predicted by the Lee model coincides with the experimental curve, while the K-M prediction deviates considerably from the experimental curve at lower temperatures.
Regarding the prediction of diffusional phase transformations, i.e., the bainite, ferrite, and pearlite transformations, the TTT curve in the K-V model is replaced by a CCT curve in the Li model, because most phase transformations in industrial manufacturing occur under continuous cooling. Bok et al. [2] predicted both the phase transformation and the hardness of a hot-stamped B-pillar reinforcing part in LS-DYNA and compared the simulation results of the K-V, Li and A-O models with experiments. The comparison showed that the Li model gives the most precise hardness prediction, demonstrating that it is more applicable than the K-V and A-O models for predicting the microstructure and mechanical properties of hot stamping.
The presented model describes the microstructure evolution of high strength steel thoroughly, so it is also used as the basis for predicting the mechanical properties. The comparison between the predicted and experimental hardness and strength after Q&P hot stamping is presented in Figure 8; the predictions are close to the experimental results, with relative errors below 5%.
Comparison between predicted and experimental results for Q&P hot stamping: a hardness; b tensile strength
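The exact hardness and strength formulas are not reproduced in this extract; a common first-order stand-in for phase-fraction-based property prediction is a linear rule of mixtures, sketched below with placeholder single-phase values (only the phase fractions come from the paper):

```python
def mixture_property(fractions, values):
    """Linear rule-of-mixtures estimate of a bulk property (e.g., HV).

    fractions : phase volume fractions (should sum to ~1)
    values    : corresponding single-phase property values
    """
    assert abs(sum(fractions) - 1.0) < 0.05, "fractions should sum to ~1"
    return sum(f * v for f, v in zip(fractions, values))

# Placeholder single-phase hardness values (NOT from the paper), combined
# with the martensite/austenite/bainite fractions predicted at PT = 300 C:
hv_estimate = mixture_property([0.861, 0.070, 0.069], [500.0, 200.0, 350.0])
```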
Elongation prediction is more difficult than hardness and strength prediction. Figure 9 shows the results calculated with Mileiko's model and with the revised model, together with the experimental results. The revised model is more accurate at the target temperatures. When \(P_{T}\) is 300 °C, the predicted elongation is 10.7% while the measured elongation is 11.39%; compared with Mileiko's model, the error decreases from 15% to 6%. At 350 °C, the revised model predicts 9.1% against an experimental value of 10.19%, and the error is reduced from 19% to 10%. Although the accuracy of the revised model at 250 °C is lower than at the other temperatures, its prediction is still more reasonable than that of the original model.
Comparison between predicted and experimental results of elongation
4.2 Model Application to the Properties Prediction of a U-cap Part
The presented integrated model was implemented on the commercial FEM platform LS-DYNA, in which the diffusional and non-diffusional phase transformation models (the Li and Lee models) are already included in material model No. 248 of version 971. The CCE model, together with the property prediction models that rely on the calculated phase contents, was implemented through secondary programming.
To simulate the Q&P hot stamping process, the forming and first quenching steps are run first, giving the contents of martensite, ferrite, pearlite, bainite and austenite. The CCE model is then invoked to calculate the diffusion of carbon from martensite to austenite and to obtain the secondary martensite transformation temperature \(M_{rs}\). If \(M_{rs}\) is higher than room temperature, the Lee model is activated to simulate the secondary martensite transformation during the second quench; otherwise, the untransformed austenite is retained to room temperature. Based on the final phase contents, the hardness, strength and elongation prediction models are then run in turn to obtain the mechanical properties of the part.
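The staged logic above maps naturally onto a small driver routine. The sketch below is illustrative only: the kinetics and the carbon-enrichment mapping are toy stand-ins for the Li/Lee/CCE models implemented in LS-DYNA, and every numerical relation in it other than the K-M form is an assumption:

```python
import math

def qp_pipeline(PT, Ms=373.0, room_T=20.0):
    """Staged control flow of the Q&P simulation (toy stand-ins, PT < Ms).

    Mirrors the sequence described above: first quench -> carbon
    partitioning (CCE) -> conditional secondary martensite formation.
    """
    # 1) forming + first quench: toy K-M kinetics in place of the Lee model
    f_M = 1.0 - math.exp(-0.011 * (Ms - PT))
    f_A = 1.0 - f_M                      # bainite/ferrite/pearlite ignored here
    # 2) carbon partitioning (CCE): enrichment lowers the secondary Ms;
    #    this particular mapping is purely illustrative, not the CCE model
    M_rs = Ms - 220.0 * (f_M / max(f_A, 1e-6)) ** 0.5
    # 3) second quench: secondary martensite only if M_rs > room temperature
    if M_rs > room_T:
        f_M2 = f_A * (1.0 - math.exp(-0.011 * (M_rs - room_T)))
    else:
        f_M2 = 0.0                       # austenite retained to room T
    return {"f_M": f_M + f_M2, "f_RA": f_A - f_M2, "M_rs": M_rs}
```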
The effectiveness of the integrated model was examined by inspecting the mechanical properties of a U-cap part produced by Q&P hot stamping. Figure 10 presents the FEM model: shell elements were used, and the element counts of the sheet, the upper tool, the blanking ring, and the lower tool are 1560, 3036, 1080 and 2880, respectively. Based on measurements, the initial temperatures of the sheet and tool were set to 830 °C and 25 °C, respectively. Determining the heat transfer coefficient (HTC) between the tool and the blank is essential for the temperature calculation; the theoretical HTC model proposed by Han et al. [33] was applied here.
Finite element model
The microstructure evolution process of the U-cap part is shown in Figure 11, where only the side is shown as an example; the details are given in Tables 4 and 5.
Microstructure evolution process of the side of the U-cap part: a \(P_{T}\) = 350 °C; b \(P_{T}\) = 300 °C; c \(P_{T}\) = 250 °C
Table 4. Comparison between experimental and calculated phase fractions (%) and mechanical properties (\(\sigma_{b}\) in MPa, \(\updelta\) in %) for the side of a U-cap part
Table 5. Comparison between experimental and calculated phase fractions and mechanical properties for the flange of a U-cap part
Tables 4 and 5 list the predicted and experimental results for the side and flange sections, respectively. The prediction model is not appropriate for the bottom section because of its relatively slow cooling rate: when the side drops to the carbon partitioning temperature, the martensite transformation has not yet started at the bottom.
The predicted strength of the side is close to the experimental data; e.g., the deviation is only 5‒8 MPa at 250 °C and 300 °C. When \(P_{T}\) is 350 °C, the experimental and predicted tensile strengths are 1448 MPa and 1499 MPa, respectively, a relative error of 3.5%. For the flange, the predicted tensile strength differs slightly from the experimental result at 250 °C, with a relative error of 3.6%; at 300 °C and 350 °C, the predicted and experimental values are in good agreement.
The predicted elongation values approximately coincide with the experimental results when \(P_{T}\) is 250 °C, while the predicted elongations are slightly lower than the corresponding experimental results when \(P_{T}\) is 300 °C or 350 °C. A reasonable explanation is that the grains are refined during the forming and quenching process, resulting in a higher real elongation than the predicted value.
Figure 12 shows the microstructures of the flange and side after Q&P hot stamping when \(P_{T}\) is 300 °C. Both locations are filled with lath martensite, while more austenite is found in the flange, consistent with the phase content predictions in Tables 4 and 5.
Microstructure of a U-cap part after Q&P hot stamping: a flange; b side
In this investigation, an integrated, high-accuracy model of microstructure evolution and properties prediction was established for products manufactured by the Q&P hot stamping process. The mechanical properties, including hardness, strength and elongation, were obtained from theoretical and empirical formulas. All of the models were run in LS-DYNA to simulate the evolution of a U-cap part during the Q&P hot stamping process. The prediction models of microstructure transformation and mechanical properties were confirmed to be accurate and reliable through comparison with experimental results. The main contributions of this study are:
An integrated phase transformation and mechanical properties model covering the entire Q&P hot stamping procedure has been established for the first time.
The effect of the initial austenite grain size has been included in the phase transformation simulation.
A modified elongation prediction model with better accuracy has been developed by incorporating the influence of the bainite microstructure.
The authors thank Ms. Yanan Ding and Mr. Chenglong Wang for their support and previous work.
Yang Chen, born in 1976, is currently a PhD candidate at School of Materials Science and Engineering, Shanghai Jiao Tong University, China. He received his master's degree from Shanghai Jiao Tong University, China, in 2003. His research interest is hot stamping technology of ultra-high strength steel. Tel: +86-21-62813430.
Huizhen Zhang, born in 1996, is currently a master's graduate of the School of Materials Science and Engineering, Shanghai Jiao Tong University, China.
Johnston Jackie Tang, born in 1993, is currently a master's candidate at School of Materials Science and Engineering, Shanghai Jiao Tong University, China.
Xianhong Han, born in 1977, is currently a professor at School of Materials Science and Engineering, Shanghai Jiao Tong University, China. His main research interests include advanced sheet metal forming technology and FEM simulation. Tel: +86-13918604955.
Zhenshan Cui, born in 1963, is currently a professor at School of Materials Science and Engineering, Shanghai Jiao Tong University, China.
The authors declare no competing financial interests.
References
[1] L Q Song, H Liu, H J Du, et al. Applications and research on representative car-body part of hot stamping. Materials Science and Technology, 2014, 22(2): 49–54.
[2] H H Bok, M G Lee, E J Pavlina, et al. Comparative study of the prediction of microstructure and mechanical properties for a hot-stamped B-pillar reinforcing part. International Journal of Mechanical Sciences, 2011, 53(9): 744–752.
[3] J G Speer, F C R Assuncao, D K Matlock, et al. The "quenching and partitioning" process: background and recent progress. Materials Research, 2005, 8(4): 417–423.
[4] H Liu, X Lu, X Jin, et al. Enhanced mechanical properties of a hot stamped advanced high-strength steel treated by quenching and partitioning process. Scripta Materialia, 2011, 64(8): 749–752.
[5] X H Han, Y Y Zhong, S L Tan, et al. Microstructure and performance evaluations on Q&P hot stamping parts of several UHSS sheet metals. Science China: Technological Sciences, 2017, 60: 1692–1701.
[6] J S Kirkaldy, D Venugopalan. Prediction of microstructure and hardenability in low alloy steels. Proceedings of the International Conference on Phase Transformation in Ferrous Alloys, 1983: 128–148.
[7] M V Li, D V Niebuhr, L L Meekisho, et al. A computational model for the prediction of steel hardenability. Metallurgical and Materials Transactions B, 1998, 29(3): 661–672.
[8] P Åkerström, M Oldenburg. Austenite decomposition during press hardening of a boron steel—Computer simulation and test. Journal of Materials Processing Technology, 2006, 174(1): 399–406.
[9] D P Koistinen, R E Marburger. A general equation prescribing the extent of the austenite-martensite transformation in pure iron-carbon alloys and plain carbon steels. Acta Metallurgica, 1959, 7(1): 59–60.
[10] S J Lee, Y K Lee, S J Lee. Erratum to "Finite element simulation of quench distortion in a low alloy steel incorporating transformation kinetics" [Acta Mater 2008;56:1482-90]. Acta Materialia, 2009, 57(8): 1482–1490.
[11] B Zhu, Z Liu, Y Wang, et al. Application of a model for quenching and partitioning in hot stamping of high-strength steel. Metallurgical and Materials Transactions A, 2018, 49(4): 1304–1312.
[12] Z Wang, K Wang, Y Liu, et al. Multi-scale simulation for hot stamping quenching & partitioning process of high-strength steel. Journal of Materials Processing Technology, 2019, 269: 150–162.
[13] Y Wang, H Geng, B Zhu, et al. Carbon redistribution and microstructural evolution study during two-stage quenching and partitioning process of high-strength steels by modeling. Materials (Basel), 2018, 11(11).
[14] S J Lee, E J Pavlina, C J Van Tyne. Kinetics modeling of austenite decomposition for an end-quenched 1045 steel. Materials Science and Engineering A, 2010, 527(13): 3186–3194.
[15] L J Zhu, Z W Gu, H Xu, et al. Modeling of microstructure evolution in 22MnB5 steel during hot stamping. Journal of Iron and Steel Research International, 2014, 21(2): 197–201.
[16] C J Hamelin, O Muránsky, M C Smith, et al. Validation of a numerical model used to predict phase distribution and residual stress in ferritic steel weldments. Acta Materialia, 2014, 75(7): 1–19.
[17] W Cui, D San-Martín, E J Pedro. Towards efficient microstructural design and hardness prediction of bearing steels — An integrated experimental and numerical study. Materials and Design, 2017, 133: 464–475.
[18] Y Yogo, N Kurato, N Iwata. Investigation of hardness change for spot welded tailored blank in hot stamping using CCT and deformation-CCT diagrams. Metallurgical and Materials Transactions A, 2018, 49(6): 2293–2301.
[19] K Mori, P F Bariani, B A Behrens, et al. Hot stamping of ultra-high strength steel parts. CIRP Annals, 2017, 66(2): 755–777.
[20] J Speer, D K Matlock, B C De Cooman, et al. Carbon partitioning into austenite after martensite transformation. Acta Materialia, 2003, 51(9): 2611–2622.
[21] J Cui, C Lei, Z Xing, et al. Predictions of the mechanical properties and microstructure evolution of high strength steel in hot stamping. Journal of Materials Engineering and Performance, 2012, 21(11): 2244–2254.
[22] D K Matlock, J G Speer. Third generation of AHSS: Microstructure design concepts. In: A Haldar, S Suwas, D Bhattacharjee (Eds.), Microstructure and Texture in Steels. Springer London, 2009: 185–205.
[23] S T Mileiko. The tensile strength and ductility of continuous fibre composites. Journal of Materials Science, 1969, 4(11): 974–977.
[24] A Kumar, S B Singh, K K Ray. Influence of bainite/martensite-content on the tensile properties of low carbon dual-phase steels. Materials Science and Engineering A, 2008, 474(1): 270–282.
[25] V P Kovalenko, S T Mileiko, V V Tvardovskii. A model of failure of a composite pipe in compression. Mechanics of Composite Materials, 1989, 25(1): 103–111.
[26] R G Davies. The deformation behavior of a vanadium-strengthened dual phase steel. Metallurgical Transactions A, 1978, 9(1): 41–52.
[27] H P Li. Research on the constitutive relationship of hot stamping boron steel B1500HS at high temperature. Journal of Mechanical Engineering, 2012, 48(8): 21. (in Chinese)
[28] B Tang, Q Wang, Z Wang, et al. The influence of deformation history on microstructure and microhardness during the hot stamping process of boron steel B1500HS. Int. J. of Materials and Product Technology, 2013, 46(4): 255–268.
[29] Z C Liu, H Y Wang, Y F Wang, et al. Morphology and formation mechanism of bainitic carbide. Transactions of Materials and Heat Treatment, 2008, (1): 32–37+46.
[30] M Naderi, M Ketabchi, M Abbasi, et al. Semi-hot stamping as an improved process of hot stamping. Journal of Materials Science and Technology, 2011, 27(4): 369–376.
[31] S J Lee, Y K Lee, S J Lee. Finite element simulation of quench distortion in a low-alloy steel incorporating transformation kinetics. Acta Materialia, 2008, 56(7): 1482–1490.
[32] P Hippchen, A Lipp, H Grass, et al. Modelling kinetics of phase transformation for the indirect hot stamping process to focus on car body parts with tailored properties. Journal of Materials Processing Technology, 2016, 228(8): 59–67.
[33] X H Han, X Hao, K Yang, et al. Theoretical and experimental study of the rule for heat transfer coefficient in hot stamping of high strength steels. In: Numisheet 2014, Melbourne, Australia, 2014.
Optica, Vol. 8, Issue 9, https://doi.org/10.1364/OPTICA.425593
Stable, multi-mode lasing in the strong localization regime from InP random nanowire arrays at low temperature
Mohammad Rashidi, Hark Hoe Tan, and Sudha Mokkapati
Mohammad Rashidi,1,* Hark Hoe Tan,1,2 and Sudha Mokkapati3,4
1Department of Electronic Materials Engineering, Research School of Physics, The Australian National University, Canberra ACT 2601, Australia
2Australian Research Council Centre of Excellence for Transformative Meta-Optical Systems, Research School of Physics,The Australian National University, Canberra ACT 2601, Australia
3Department of Materials Science and Engineering, Monash University, Clayton, Victoria 3800, Australia
4e-mail: [email protected]
*Corresponding author: [email protected]
Mohammad Rashidi, Hark Hoe Tan, and Sudha Mokkapati, "Stable, multi-mode lasing in the strong localization regime from InP random nanowire arrays at low temperature," Optica 8, 1160-1166 (2021)
Original Manuscript: March 22, 2021
Revised Manuscript: July 26, 2021
Manuscript Accepted: August 6, 2021
Disorder is generally considered an undesired element in lasing action. However, in random lasers, whose feedback mechanism is based on random scattering events, disorder plays a very important and critical role. Even though some unique properties of random lasers, such as large-angle emission, lasing from different surfaces, large-area manufacturability, and wavelength tunability, can be advantageous in certain applications, their applicability has been limited by the chaotic fluctuations and instability of the lasing modes that result from weak confinement. To address this, mode localization could reduce the spatial overlap between lasing modes, preventing mode competition and improving stability, leading to laser sources with high quality factors and very low thresholds. Here, by using a random array of III-V nanowires, high-quality-factor localized modes are demonstrated. We present experimental evidence of strong light localization in multi-mode random nanowire lasers that are temporally stable at low temperatures.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Strong light scattering by random scatterers can increase the dwell time of light and lead to the formation of closed optical loops. Lasing can occur in such systems if the modal gain exceeds the optical losses. In these types of lasers, called random lasers, the feedback mechanism is based on multiple scattering events induced by the random scatterers rather than on defined optical cavities. Unlike regular lasers, the mirrorless cavities of random lasers may lead to low-cost designs and the ability to form these lasers on a variety of surfaces, including paper [1], polyethylene terephthalate [2], etc. Random lasers have been used for a variety of applications that include dye-circulated structured microfluidic channels [3], optofluidic bio-lasers [4], optical batteries [5], cancer diagnostics [6], speckle-free full-field imaging [7], lab-on-a-chip random spectrometers [8], time-resolved microscopies [9], sensors [10], random distributed feedback fiber lasers [11], laser paints [12], and military purposes [13].
Random lasers can operate in either the strong localization regime or the diffusive regime [Fig. 1(a)]. In the former, the different resonant modes supported by the system can be spatially non-interacting, leading to stable, multi-mode lasing; several sharp peaks, corresponding to the different resonant modes, are expected in the lasing spectrum of random lasers operating in this regime. On the other hand, most random lasers operate in the delocalized regime [14] (also known as the diffusive regime), where mostly a narrowing of the gain spectrum, and hence single-mode operation, is observed due to temporal and spatial averaging effects [15]. In other cases, where sharp peaks have been observed from diffusive samples [16], the transport mean free path was generally larger than the wavelength [17–19]. These narrow peaks arise from effects such as the interaction of the lasing modes through spatial hole-burning [20] or the strong amplification of certain optical paths [17], and the observed narrowing of the peaks in these systems has nothing to do with the strong localization regime [15]. In the Anderson localization regime, light can be well confined inside the open disordered medium, leading to behavior similar to conventional multi-mode lasers. In this regime, the modes are strongly confined and are spatially decoupled from the other modes in multi-mode lasers, leading to stable mode operation. This localization can result in resonant cavities with high quality ($Q$) factors and lasers with low thresholds. These properties make the Anderson localization regime an interesting research topic. However, except in quasi-1D geometries [21], the vast majority of experiments on random lasers do not appear to be in the localized regime even though they exhibit discrete laser peaks above the threshold [22]. To achieve strong localization in a random scattering medium, the transport mean free path, ${{l}_t}$, should be smaller than the wavelength of light, $\lambda$ (Ioffe–Regel criterion, ${{kl}_t} \approx {1}$, where ${k} = {2}\pi {\rm /}\lambda$ is the wavenumber) [23]. Increasing the refractive index contrast of the scatterers relative to the background material increases their scattering efficiency, leading to a lower ${{l}_t}$. It has been shown that high refractive index contrast is essential for strong light scattering, strong light localization, and high $Q$ factor cavities in disordered media [22].
Fig. 1. Design of the random nanowire lasers. (a) Illustration of the vertical random nanowire array used in this study. The nanowires are illuminated from the top by a pulsed laser with an intensity of ${{I}_0}(\lambda)$. The insets show pictorially the diffusive and localization regimes in random media. In the diffusive regime, normally a broad emission is observed at lasing, while in the localization regime, narrow spectral peaks over a broad background are observed at lasing. (b) The effects of filling factor and average nanowire diameter on ${{kl}_t}$. The red line indicates where the Ioffe–Regel criterion is satisfied. (c) The resonance spectrum calculated for a system with FF = 0.3, ${{d}_{\textit{av}}} = {125}\;{\rm nm}$, and L = 3 µm. The 2D spatial profiles for the three high $Q$ factor modes (Mode I: ${\lambda _r} = {829}\;{\rm nm}$, Mode II: ${\lambda _r} = {791}\;{\rm nm}$, and Mode III: ${\lambda _r} = {848}\;{\rm nm}$) are shown at a depth of 0.5 µm (i.e., at ${z} = {2.5}\;\unicode{x00B5}{\rm m}$) from the tip of the nanowires, clearly showing strong localization in the $x - y$ plane. The inset in (c) also shows the 3D view of the simulated mode profile, where confinement can also be observed in the vertical direction, predominantly within the nanowires. The NW length is 3 µm, and the top of the NW corresponds to ${z} = {3}\;\unicode{x00B5}{\rm m}$. Only the top 2 µm of the NW array is shown in this inset.
To obtain lasing in a disordered medium, gain must be present; it can be incorporated by using scatterers that themselves provide gain [24] or by adding a separate gain medium [25]. Many compound semiconductors, owing to their direct bandgaps, are excellent candidates as lasing gain media. Furthermore, they have a high refractive index compared to air, and in nanostructured form they can serve as both highly efficient scatterers and gain materials in random lasers. Different semiconductor nanostructures with bandgaps in the UV and visible regions, such as ZnO [26], AlGaN [27], and ${\rm SnO}_{2}$ [28] nanowires (NWs), GaN nanocolumns [29], and ZnS nanospheres [30], have been used in random lasers.
Here, we use wurtzite phase InP NWs with a bandgap in the near-infrared region (${{E}_g} = {1.42}\;{\rm eV}$ at room temperature) as both the scatterers and the gain medium. The very high refractive index contrast between InP and free space ($\Delta {n}\simeq{2.46}$) results in a high scattering efficiency for the NWs. Using numerical methods, the random array of vertically aligned InP NWs is designed to provide localized cavities that support high $Q$ factors (${\sim}{1200}$). The modes are localized in three dimensions: the lateral localization (perpendicular to the NW axial direction) is due to random scattering, and the vertical localization (along the NW axial direction) is provided by the refractive index contrast created by carrier generation at the tips of the NWs under external optical pumping. We show that this localization increases the overlap between the cavity modes and the gain region, leading to high modal gain and confinement factors, $\Gamma$ (higher than 0.6). Experimentally, we show that the system operates multi-mode, and the near-field images show that the modes are spatially localized, with localization lengths, $\xi$, of 200–500 nm. Results at different excitation intensities and after illumination by a large number of pulses show that these lasing modes are stable.
A. Numerical Calculations
For calculating ${{kl}_t}$ and the mode profiles in Fig. 1, the commercial finite-difference time-domain (FDTD) software package (Lumerical FDTD Solutions, https://www.lumerical.com) was used. We identified resonant modes in the nanowire arrays by introducing electric dipole sources positioned in the array; perfectly matched layer absorbing boundary conditions were used, and the decay of the electromagnetic fields was examined.
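A standard way to post-process such dipole/decay simulations is to extract the quality factor from the exponential ring-down of the mode energy; the following is a self-contained sketch of that step (generic, not the Lumerical analysis itself):

```python
import numpy as np

def q_from_ringdown(t, field, wavelength_nm):
    """Estimate cavity Q from the exponential ring-down of a resonant mode.

    Fits log |E(t)|^2 vs t; with energy U ~ exp(-w0 t / Q), the fitted
    slope m gives Q = -w0 / m. Times in seconds, wavelength in nm.
    """
    w0 = 2.0 * np.pi * 2.998e8 / (wavelength_nm * 1e-9)  # rad/s
    energy = np.abs(field) ** 2
    m = np.polyfit(t, np.log(energy), 1)[0]              # 1/s, negative
    return -w0 / m

# toy check: synthesize a ring-down with Q = 1200 at 829 nm
t = np.linspace(0, 2e-12, 400)
w0 = 2 * np.pi * 2.998e8 / 829e-9
E = np.exp(-w0 * t / (2 * 1200))     # field decays at half the energy rate
print(q_from_ringdown(t, E, 829))    # ~1200
```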
B. Substrate Preparation
Before growth, a standard preparation process [31,32] for patterned substrates was used. Using plasma-enhanced chemical vapor deposition, $\sim {30}\;{\rm nm}$ thick ${{\rm SiO}_x}$ was deposited on semi-insulating (111)A InP substrates as a mask, followed by electron beam lithography to create the randomly positioned holes on the resist. The pattern was then transferred to the ${\rm SiO}_{x}$ mask through dry etching using ${{\rm CHF}_3}$. The patterned substrate was trim-etched in 10% ${{\rm H}_3}{{\rm PO}_4}$ to remove any native oxide layer [32] and then immediately loaded into the metal-organic chemical vapor deposition (MOCVD) system for nanowire growth.
C. MOCVD Growth
In this work, a close-coupled showerhead reactor (Aixtron CCS ${3} \times {2}$) was used for the epitaxial growth. The reactor was operated at a low pressure of 100 mbar, and ultra-high purity ${{\rm H}_2}$ was used as the carrier gas with a total flow of ${10}\;{\rm L}\;{{\rm min}^{- 1}}$. Trimethylindium (TMIn) and phosphine (${{\rm PH}_3}$) were used as precursors for In and P, respectively. All substrates were first annealed in a ${{\rm PH}_3}$ ambient at a surface temperature of 660°C in the reactor. After a 10 min annealing step, the reactor was ramped up to a (wafer) surface temperature of 680°C. Epitaxial growth was carried out by introducing TMIn into the chamber for 8 min. To reduce the possibility of NWs merging (i.e., to minimize radial growth), the molar flows of ${{\rm PH}_3}$ and TMIn were ${1.25} \times {{10}^{- 3}}$ and ${2.1} \times {{10}^{- 6}}\;{\rm mol}\;{{\rm min}^{- 1}}$, respectively, giving a high V/III ratio of $\sim {595}$.
D. Optical Characterization
A frequency-doubled solid-state laser (femtoTRAIN IC-Yb-2000, ${\lambda _{{\rm source}}} = {522}\;{\rm nm}$, repetition rate 20.8 MHz, pulse length 300 fs) with a Gaussian beam profile [full width at half-maximum $({\rm FWHM})\sim{5}\;\unicode{x00B5}{\rm m}$] was used to pump the random NWs. The NWs were excited through an aberration-corrected ${60} \times {/0.70}$ numerical aperture, long working distance objective lens (Nikon CFI Plan Fluor), and the resulting emission was collected through the same lens. The collected light was passed through a bandpass filter to remove the pump laser wavelength. Spectral measurements were made using a grating spectrometer (Acton SpectraPro 2750) and a CCD (Princeton Instruments PIXIS). Low-temperature (${T} = {6}\;{\rm K}$) experiments were conducted in a He-cooled cryostat (Janis Research). A schematic diagram of the optical setup used for the characterization is shown in Supplement 1 Fig. 10.
Figure 1(a) illustrates our vertically aligned InP NW array, and the insets show schematics of the two different regimes. To simulate the random NW arrays, distributions of diameter, pitch, and fill factor are defined. Under certain conditions, the scattering efficiency and transport mean free path can satisfy the Ioffe–Regel criterion for strong localization. For example, as shown by the red line in Fig. 1(b), for NWs with a diameter ${d} = {140}\;{\rm nm}$ and filling factor ${\rm FF} \approx {0.25}$, strong light localization can be achieved at 850 nm (for further details see Supplement 1 Section 1.1).
In the Ioffe–Regel regime, cavities with a high $Q$ factor and small mode volumes are expected [27,33], which are essential for low-threshold [34] or thresholdless lasing [35]. In our system, as shown in Supplement 1 Fig. 6, the amount of disorder or randomness can have a noticeable effect on the $Q$ factor of the system. In addition, it has been shown that the size of the scatterers and the filling factor of the system [27,29,36] can also influence the $Q$ factor of random lasers. Simulation results show that, to support high $Q$ factor cavities with resonance wavelengths, ${\lambda _r}$, in the range of 800–850 nm, our InP NW array needs to have FF = 0.3, ${{d}_{\textit{av}}} = {120 {-} 130}\;{\rm nm}$, and a maximum deviation of the NW center positions of 50 nm (${\sigma _{c,\max}} = {50}\;{\rm nm}$) (for further details, see Supplement 1 Section 1.2).
To understand better how the designed system localizes uncoupled high $Q$ factor modes in three dimensions, we use Lumerical FDTD Solutions to solve Maxwell's curl equations. The resonance spectrum of the random structure with geometrical parameters FF = 0.3, ${{d}_{\textit{av}}} = {125}\;{\rm nm}$, ${L} = {3}\;\unicode{x00B5}{\rm m}$, and ${{L}_x} = {{L}_y} = {5}\;\unicode{x00B5}{\rm m}$ is shown in Fig. 1(c) (${L}$ is the length of the NWs; ${{L}_x}$ and ${{L}_y}$ are the lateral sizes of the random system in the $x$ and $y$ directions). The lateral profiles of the three modes with the highest $Q$ factors (Mode I: $Q = {1182}$, ${\lambda _r} = {829}\;{\rm nm}$; Mode II: $Q = {949}$, ${\lambda _r} = {791}\;{\rm nm}$; and Mode III: $Q = {1001}$, ${\lambda _r} = {848}\;{\rm nm}$) are presented in the inset of Fig. 1(c). Additionally, the vertical profile of Mode I is also shown in the inset. This inset shows that most of the mode is localized in the top 1.5 µm of the NW array, leading to negligible leakage of the mode into the InP substrate (note that the tip of the NW corresponds to ${z} = {3}\;\unicode{x00B5}{\rm m}$). These mode profiles confirm the 3D localization of light within the random NW array. The lateral confinement (in the $x - y$ plane) of the mode is due to 2D Anderson localization, while the 1D vertical confinement is a result of the change in refractive index along the length of the NWs as carriers are generated at the top of the NWs. The NWs are excited by a pulsed laser from the top, with photo-generated carriers mostly within the top 300 nm of the NWs; considering the effect of carrier diffusion, they will be mostly confined in the top 1.1 µm of the NWs [37] (see Supplement 1 Fig. 8). Furthermore, as a result of the increase in temperature [38] due to thermal relaxation of carriers and the high density of photo-excited carriers [39], a change in the refractive index in this segment of the NWs is expected. As a result of this refractive index contrast along the NW array, as shown in the inset of Fig. 1(c), the mode profile is localized in the vertical direction within this 1.1 µm segment. As discussed in Supplement 1 Section 2.2, with an average carrier density of ${\sim}{2.1} \times {{10}^{18}}\;{{\rm cm}^{- 3}}$ at a pump intensity of ${{P}_{{\rm in}}} = \sim{1200}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ and a temperature difference of ${\sim}{200}\;{\rm K}$, this region would experience a refractive index difference of ${\sim}{0.087}$ compared to the rest of the NWs, resulting in mode confinement in the $z$ direction. Thus, by using a random InP NW array, optical confinement in three dimensions can be achieved through a combination of lateral confinement due to randomness (2D Anderson localization) and vertical confinement due to refractive index contrast along the length of the NWs.
The lateral size of Mode I is ${0.634}\;\unicode{x00B5}{\rm m}^2$ (${\sim}{11}{(\lambda {/n})^2}$), which is much smaller than the area of the active scattering region (${5} \times {5}\;\unicode{x00B5}{\rm m}^2$ in the simulation), a necessary condition for operation in the strong localization regime [23]. The confinement in the $z$ direction resulting from the increased refractive index reduces mode leakage into the substrate, increasing the $Q$ factor of the resonant modes by more than a factor of 3. Furthermore, it can also be observed from the mode profile of Mode I in Fig. 1(c) that the mode encompasses several nanowires and most of the field is confined inside those nanowires, leading to a mode confinement factor, $\Gamma$, of ${\sim}{0.58}$ (for further details, see Supplement 1 Section 2). Given the strong mode confinement and the high gain provided by InP, lasing is expected in this NW system.
Fig. 2. Single-mode lasing behavior from the random nanowire array. (a) SEM image showing a tilted view of our nanowire array (scale bar 1 µm). The inset shows a top view confirming the random nature of the nanowires in the $x - y$ plane (scale bar 500 nm). (b) Emission spectra at 6 K from the nanowire array at various pump fluences. The normalized spectral map (inset) shows a broadening of the spectrum with increasing fluence up to the transition to amplified spontaneous emission (${550}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse), followed by a sudden reduction in the linewidth above ${{P}_{{\rm in}}} = {550}\;\unicode{x00B5}{\rm J}\;{{\rm cm}^{- 2}}$ per pulse. (c) Light output versus excitation fluence plotted on a log–log scale, showing an "S-like" threshold behavior, while the same data plotted on a linear scale (inset) show a "kink-like" threshold behavior, both indications of lasing from the nanowire array. The shaded gray area is the region of amplified spontaneous emission. The data points are the experimental results, and the red lines are a guide to the eye.
Comparing the profiles of Modes I and II, although these modes are spectrally decoupled, there is a spatial overlap between them. On the other hand, Modes I and III are both spectrally and spatially decoupled, which is one of the features that allows random lasers operating in the localized regime to support stable, multi-mode operation.
We grew the random distribution of InP NW arrays using selective area [31] metal-organic vapor phase epitaxy (see Section 2). To create the designed pattern, i.e., FF = 0.3 and ${{d}_{\textit{av}}}\simeq{125}\;{\rm nm}$, and taking the additional lateral growth into account, a random array of hole openings with a diameter of $\sim{90}\;{\rm nm}$ was patterned onto a 30 nm ${{\rm SiO}_x}$ layer deposited on the InP substrate as a growth mask. Figure 2(a) shows a scanning electron micrograph of the InP NW array. As shown in Supplement 1 Fig. 9, the average diameter of the NWs is around 116 nm with a standard deviation of ${\sim}{13}\;{\rm nm}$ (${\sim}{12}\%$ variation relative to the average diameter). The FF of the fabricated array is ${\sim}{0.25}$, quite close to the designed value.
Figure 2(b) shows the spectra from a random NW laser array at low temperature (6 K) at several pump fluences. The array was pumped from the top of the NWs using a 522 nm pulsed laser (see Section 2) with a Gaussian beam profile of ${\sim}{5}\;\unicode{x00B5}{\rm m}$ FWHM. The spectra are offset vertically for clarity. At low pump fluence a broad emission is observed, but as the fluence is increased the emission intensity increases, accompanied by a broadening of the spectrum due to the band filling effect. At a pump fluence of ${\sim}{550}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse, we observe a shoulder appearing at 843 nm, which is further amplified with increasing fluence. The inset indicates first a broadening of the spectrum with increasing fluence, and then a sudden narrowing after a threshold fluence is reached, an indication that the array is lasing. By fitting the spectrum at ${P} = {693}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse with three Lorentzian functions, the lasing peak and its FWHM are determined to be 841 nm and ${\sim}{2}\;{\rm nm}$, respectively, giving a $Q$ factor of 420 (for further details, see Supplement 1 Section 3.3). The power-dependent output intensity [Fig. 2(c)] follows the typical "S"-curve shape, in which three emission regimes can clearly be observed: spontaneous emission dominates at low excitation intensities until the transition to amplified spontaneous emission at ${\sim}{550}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse, followed by a super-linear increase indicative of amplified spontaneous emission (shaded gray region), and finally the emergence of lasing above ${733}\;\unicode{x00B5}{\rm J}\;{{\rm cm}^{- 2}}$ per pulse [40–42]. A small emission peak centered at ${\sim}{875}\;{\rm nm}$ can also be observed in the spectra, which corresponds to emission from the underlying InP substrate (zincblende phase) [43].
Figure 3 shows the spectra of another random NW array at different excitation fluences, where multi-mode lasing is observed. At excitation fluence (${386}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse) below the threshold, two broad peaks are observed, corresponding to the substrate (zincblende phase at ${\sim}{875}\;{\rm nm}$) and NWs (wurtzite phase at ${\sim}{830}\;{\rm nm}$) peaks, as discussed above. With increasing fluence, the long wavelength peak increases in intensity but remains broad, confirming lasing is not from the underlying InP substrate. On the other hand, for the NW emission, sharp peaks can clearly be observed in addition to a broad background above a fluence of ${1720}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse. The number and intensity of these sharp peaks increase with excitation fluence. Figure 3(b) shows a log–log plot of three L−L curves corresponding to the wavelength region I (${769} \lt \lambda \lt {787}\;{\rm nm}$), region II (${787} \lt \lambda \lt {807}\;{\rm nm}$), and the total spectrum of the NW region (${725} \lt \lambda \lt {850}\;{\rm nm}$) for comparison. The onset of amplified spontaneous emission for the modes in regions I and II occurs at ${\sim}{1720}$ and ${\sim}{1937}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse, respectively. The power intensity at which this transition has happened for the spectral region of the NWs is almost similar to the value of the mode in region II. This is because emission from region II is much higher than that from region I and therefore higher excitation fluence is required for the latter region to achieve lasing. The near-field emission profiles as viewed from the top of the NWs at different pumping intensities are shown in Figs. 3(c), 3(e), and 3(g). At low pump fluence, a broad profile is observed but as the fluence is gradually increased, spatially localized peaks begin to appear above the broad Gaussian-like emission profile. Line scans across the profile in various directions confirm this observation [Figs. 3(d), 3(f), 3(h)]. At ${{P}_{{\rm in}}} = {2435}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse, three high-intensity spots appear, which can be correlated to the three predominant modes at $\lambda = {788}$, 799, and 826 nm in Fig. 3(a). Increasing the pump fluence further to ${{P}_{{\rm in}}} = {3072}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse leads to the lasing mode at 799 nm becoming more dominant [Figs. 3(g) and 3(h)]. Fitting the profiles of the three localized regions with exponential decay functions [44] gives average localization lengths, $\xi$, of 210, 492, and 508 nm for each of the three modes. The number of nanowires in which the mode is localized, ${{N}_{{\rm NW}}}$, can be calculated through ${{N}_{{\rm NW}}} = \pi {\xi ^2}$. ${\rm FF}/{{A}_{\textit{av}}}$, where ${{A}_{\textit{av}}}$ is the average cross-sectional area of the NW. This results in around 4, 20, and 22 NWs interacting with each of the three localized modes, respectively. Since $\xi$ is much smaller in size than the area of the active scattering region (which is approximately the pumped area whose FWHM is around 5 µm), this is confirmation that 2D Anderson localization effect in the $x - y$ plane is achieved in these random scatterers. In addition, as shown in Figs. 
3(c)–3(h), the modes are spatially isolated, i.e., the distance between the modes is larger than the sum of their localization lengths along the lines connecting them (for further details, see Supplement 1 Section 4). These features are consistent with the theoretical prediction for multi-mode random lasing in the Anderson localization regime [45]. In diffusive random lasers, by contrast, the lasing wavelength is primarily determined by the spectral peak of the gain medium [24] and only weak dispersion is incorporated into the scattering medium, resulting in a significantly broader lasing peak. In our case, where localization occurs, lasing modes are also observed at wavelengths in the lower part of the gain spectrum (758 and 778 nm), in agreement with random lasing in the localization regime [15].
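The mode-occupancy estimate above is straightforward to reproduce numerically. In the minimal sketch below, the fill factor FF and the average nanowire cross-section A_av are not given in this excerpt; the values used are assumptions chosen only to illustrate the calculation (they happen to reproduce the reported counts):

```python
import math

# Minimal sketch of N_NW = pi * xi^2 * FF / A_av. FF and A_av are NOT given
# in this excerpt; the values below are assumptions for illustration only.
FF = 0.25            # assumed array fill factor
A_av = 0.0093        # assumed average NW cross-sectional area (um^2)

for xi_um in (0.210, 0.492, 0.508):          # localization lengths from the text
    n_nw = math.pi * xi_um**2 * FF / A_av
    print(f"xi = {1e3 * xi_um:.0f} nm -> N_NW ~ {n_nw:.0f}")
# gives ~4, ~20 and ~22, consistent with the reported mode occupancies
```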
Fig. 3. Multi-mode lasing and near-field emission profiles from the random nanowire array. (a) Emission spectra from the nanowire array at various pump fluences. Emission from the substrate is indicated, while emission from the nanowires occurs below 850 nm. (b) Log–log plot of integrated light output versus fluence from two spectral regions indicated in (a): shaded red region I (${769} \lt \lambda \lt {787}\;{\rm nm}$) and green region II (${787} \lt \lambda \lt {807}\;{\rm nm}$). The results for total light output from the nanowire array (${725} \lt \lambda \lt {850}\;{\rm nm}$) are also shown for reference. (c), (e), (g) Near-field emission profiles from the nanowire array at three different excitation fluences: ${{P}_{{\rm in}}} = {386}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse, ${{P}_{{\rm in}}} = {2435}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse, and ${{P}_{{\rm in}}} = {3072}\;\unicode{x00B5}{\rm J}\;{{\rm cm}^{- 2}}$ per pulse (scale bars 2 µm). (d), (f), (h) Corresponding line scans across the near-field intensity profiles in (c), (e), and (g) along three directions, AB, CD, and EF. The cross-section mode profiles clearly indicate strong localization above the lasing threshold, as shown in (f) and (h). The localization lengths for the mode profiles are indicated in image (g). All measurements were done at 6 K.
Unlike random lasers operating in the diffusive regime, where the single-shot emission spectrum is unstable and changes from shot to shot [46], in our random NW system the spectrum is stable under successive pulsed excitation. Figure 4 shows how the different modes evolve with increasing excitation fluence over a large number of pulses, for two different locations on the sample. $\Delta {{N}_{{\rm pulse}}}$ at a particular excitation fluence is defined as the difference between the number of pulses accumulated at that excitation fluence and at the lowest excitation fluence. For example, $\Delta {{N}_{{\rm pulse}}} = {375} \times {{10}^6}$ at ${{ P}_{{\rm in}}} = {2258}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse in Fig. 4(a) means that the NW array has been subjected to an additional ${375} \times {{10}^6}$ pulses since the spectrum was recorded at ${{P}_{{\rm in}}} = {386}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse. At both locations [Figs. 4(a) and 4(b)], even though the sample has been excited by a large number of pulses, the modes remain stable and well defined as the excitation fluence is increased, showing only a small blueshift due to changes in refractive index with carrier density and temperature [47,48].
Fig. 4. Lasing stability. Emission spectra at different excitation fluences from two different regions of the random nanowire array, which support (a) three and (b) five lasing modes (${T} = {6}\;{\rm K}$). With increasing fluence, the nanowire array has been subjected to progressively more excitation pulses. The difference between the total number of pulses used to collect each spectrum and that for the lowest fluence is indicated by $\Delta {{N}_{{\rm pulse}}}$.
Fig. 5. Statistical distribution of several key parameters of the random nanowire laser. (a) Measured pumping intensity distribution taken at the commencement of amplified spontaneous emission. (b) Experimentally measured $Q$ factor of the modes. (c) Calculated distribution of the mode confinement factor. (d) Calculated distribution of the spatial lateral mode size.
In the field of random lasers, unlike lasers with conventional feedback mechanisms, key lasing parameters are rarely quantified. The statistical quantification of several such parameters in Fig. 5 provides insight into the performance variability of our random nanowire system. In Fig. 5(a), the pump intensities at the onset of amplified spontaneous emission show an approximately normal distribution, with most of the modes having a pump intensity between ${1500}$ and ${2000}\;\unicode{x00B5} {\rm J}\;{{\rm cm}^{- 2}}$ per pulse. It is shown [Supplement 1 Figs. 7 and 8(c)] that this range of pump intensities is sufficient to achieve positive material gain for modes in the wavelength range of 790–850 nm. Figure 5(b) shows the distribution of the experimental quality factors of the resonant cavities. The distribution of simulated $Q$ factors for different modes and their correlation with the mode confinement factor and lateral size are presented in Supplement 1 Fig. 13. The highest values for the calculated and measured $Q$ factor are 1181 and 420, respectively. The median $Q$ factors and resonance wavelengths are ${{Q}_{m,\exp}} = {165}$, ${\lambda _{m,\exp}} = {790}\;{\rm nm}$ and ${{Q}_{m,{\rm sim}}} = {380}$, ${\lambda _{m,{\rm sim}}} = {819}\;{\rm nm}$ for the experimental and simulated results, respectively. The measured $Q$ factors are generally lower than those of conventional cavities such as Fabry–Perot (FP) cavities (typically ${{10}^3} {-} {{10}^4}$) [49,50] and cavities supporting whispering gallery modes (typically ${{10}^4} {-} {{10}^5}$) [51,52]. Figure 5(c) shows that the calculated mode confinement factor for our system is mostly above 0.5, with some modes reaching values as high as 0.62. Although these values are lower than in conventional nanowire lasers operating in the FP mode [53,54], they are higher than expected for lasers operating in the diffusive regime, where the lower $Q$ factor leads to more leakage of the field from the NWs and consequently lower confinement factors. The correlation between the $Q$ factor and $\Gamma$, quantified by Spearman's rank correlation coefficient ${{r}_s} \approx {0.61}$, is good: modes with lower $Q$ factors usually show less localization [Supplement 1 Fig. 13(c)]. For example, in our simulation results a mode with $Q = {568}$ has $\Gamma = {0.62}$, whereas another mode with $Q = {188}$ shows a confinement factor of only 0.48. Figure 5(d) shows the calculated size distribution of the lateral modes. The average, minimum, and maximum lateral mode areas are 1.67, 0.24, and ${3.74}\;\unicode{x00B5}{\rm m}^2$ [${\sim}{29}$, ${\sim}{4.7}$, and ${\sim}{62}$ in units of ${(\lambda/n)^2}$], respectively. These calculated values are much smaller than the pumping area, confirming that the system can support the localized modes shown in Fig. 3. From the minimum and maximum lateral mode areas, the smallest and largest modes encompass 4 and 91 NWs, respectively. Furthermore, as shown in Supplement 1 Fig. 13(b), the weak negative correlation (${r_s} = - {0.45}$) between the $Q$ factor and the lateral mode size indicates that a high $Q$ factor does not necessarily lead to smaller modes. Indeed, a high-$Q$ mode can involve scattering among more NWs than a lower-$Q$ mode.
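As a rough illustration of the dimensionless mode-area normalization quoted above, the sketch below converts the lateral mode areas into units of $(\lambda/n)^2$. The effective index $n \approx 3.4$ and the single wavelength are assumptions (the paper presumably uses per-mode values), so the numbers only approximately reproduce the quoted figures:

```python
# Rough check of the dimensionless mode areas quoted above, in units of
# (lambda/n)^2. The index n ~ 3.4 and single wavelength are assumptions.
n = 3.4
lam_um = 0.819                                # median simulated resonance (um)

for area_um2 in (1.67, 0.24, 3.74):           # average, min, max lateral areas
    print(f"{area_um2} um^2 ~ {area_um2 * (n / lam_um)**2:.1f} (lambda/n)^2")
# gives ~28.8, ~4.1 and ~64.5, close to the quoted ~29, ~4.7 and ~62
```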
We have designed InP random NW arrays and experimentally demonstrated that their high gain and strong scattering enable Anderson localization in random lasers. Light is confined in three dimensions: lateral confinement is provided by random scattering (2D Anderson localization), while vertical confinement is provided by refractive index contrast. The near-field images of the mode profiles show exponential decay of the localized modes, and the strong spatial confinement of the modes results in stable multi-mode operation of the random laser. Mode control in random lasers opens up new opportunities for the facile fabrication of lasers over large areas (tens to hundreds of $\unicode{x00B5}{\rm m}^2$) for applications in next-generation meta-optical systems.
Australian Research Council (Discovery Project-DP170102530).
We acknowledge the Australian Research Council for the financial support. Access to the epitaxial growth and fabrication facilities is made possible through the Australian National Fabrication Facility, ACT Node.
The authors declare no conflicts of interest.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Supplemental document
See Supplement 1 for supporting content.
1. I. Viola, N. Ghofraniha, A. Zacheo, V. Arima, C. Conti, and G. Gigli, "Random laser emission from a paper-based device," J. Mater. Chem. C 1, 8128–8133 (2013). [CrossRef]
2. Y.-J. Lee, C.-Y. Chou, Z.-P. Yang, T. B. H. Nguyen, Y.-C. Yao, T.-W. Yeh, M.-T. Tsai, and H.-C. Kuo, "Flexible random lasers with tunable lasing emissions," Nanoscale 10, 10403–10411 (2018). [CrossRef]
3. B. S. Bhaktha, N. Bachelard, X. Noblin, and P. Sebbah, "Optofluidic random laser," Appl. Phys. Lett. 101, 151101 (2012). [CrossRef]
4. X. Fan and S.-H. Yun, "The potential of optofluidic biolasers," Nat. Methods 11, 141 (2014). [CrossRef]
5. L. Xu, H. Zhao, C. Xu, S. Zhang, and J. Zhang, "Optical energy storage and reemission based weak localization of light and accompanying random lasing action in disordered Nd3+ doped (Pb, La)(Zr, Ti)O3 ceramics," J. Appl. Phys. 116, 063104 (2014). [CrossRef]
6. R. Polson and Z. Vardeny, "Cancerous tissue mapping from random lasing emission spectra," J. Opt. 12, 024010 (2010). [CrossRef]
7. B. Redding, M. A. Choma, and H. Cao, "Speckle-free laser imaging using random laser illumination," Nat. Photonics 6, 355–359 (2012). [CrossRef]
8. B. Redding, S. F. Liew, R. Sarma, and H. Cao, "Compact spectrometer based on a disordered photonic chip," Nat. Photonics 7, 746–751 (2013). [CrossRef]
9. A. Mermillod-Blondin, H. Mentzel, and A. Rosenfeld, "Time-resolved microscopy with random lasers," Opt. Lett. 38, 4112–4115 (2013). [CrossRef]
10. Q. Song, S. Xiao, Z. Xu, J. Liu, X. Sun, V. Drachev, V. M. Shalaev, O. Akkus, and Y. L. Kim, "Random lasing in bone tissue," Opt. Lett. 35, 1425–1427 (2010). [CrossRef]
11. S. K. Turitsyn, S. A. Babin, A. E. El-Taher, P. Harper, D. V. Churkin, S. I. Kablukov, J. D. Ania-Castañón, V. Karalekas, and E. V. Podivilov, "Random distributed feedback fibre laser," Nat. Photonics 4, 231–235 (2010). [CrossRef]
12. F. Luan, B. Gu, A. S. Gomes, K.-T. Yong, S. Wen, and P. N. Prasad, "Lasing in nanocomposite random media," Nano Today 10(2), 168–192 (2015). [CrossRef]
13. J. Dubois and S. La Rochelle, "Active cooperative tuned identification friend or foe (ACTIFF)," U.S. patent US5966227A (12 October 1999).
14. D. S. Wiersma and A. Lagendijk, "Light diffusion with gain and random lasers," Phys. Rev. E 54, 4256–4265 (1996). [CrossRef]
15. R. Sapienza, "Determining random lasing action," Nat. Rev. Phys. 1, 690–695 (2019). [CrossRef]
16. K. L. van der Molen, A. P. Mosk, and A. Lagendijk, "Quantitative analysis of several random lasers," Opt. Commun. 278, 110–113 (2007). [CrossRef]
17. S. Mujumdar, M. Ricci, R. Torre, and D. S. Wiersma, "Amplified extended modes in random lasers," Phys. Rev. Lett. 93, 053903 (2004). [CrossRef]
18. H. Cao, J. Y. Xu, S. H. Chang, and S. T. Ho, "Transition from amplified spontaneous emission to laser action in strongly scattering media," Phys. Rev. E 61, 1985–1989 (2000). [CrossRef]
19. S. V. Frolov, Z. V. Vardeny, A. A. Zakhidov, and R. H. Baughman, "Laser-like emission in opal photonic crystals," Opt. Commun. 162, 241–246 (1999). [CrossRef]
20. H. E. Türeci, L. Ge, S. Rotter, and A. D. Stone, "Strong interactions in multimode random lasers," Science 320, 643–646 (2008). [CrossRef]
21. V. Milner and A. Z. Genack, "Photon localization laser: low-threshold lasing in a random amplifying layered medium via wave localization," Phys. Rev. Lett. 94, 073901 (2005). [CrossRef]
22. J. Andreasen, A. A. Asatryan, L. C. Botten, M. A. Byrne, H. Cao, L. Ge, L. Labonté, P. Sebbah, A. D. Stone, H. E. Türeci, and C. Vanneste, "Modes of random lasers," Adv. Opt. Photonics 3, 88–127 (2011). [CrossRef]
23. H. Cao, "Review on latest developments in random lasers with coherent feedback," J. Phys. A 38, 10497 (2005). [CrossRef]
24. H. Cao, Y. Zhao, S.-T. Ho, E. Seelig, Q. Wang, and R. P. Chang, "Random laser action in semiconductor powder," Phys. Rev. Lett. 82, 2278 (1999). [CrossRef]
25. N. M. Lawandy, R. Balachandran, A. Gomes, and E. Sauvain, "Laser action in strongly scattering media," Nature 368, 436–438 (1994). [CrossRef]
26. C. Liu, J. A. Zapien, Y. Yao, X. Meng, C. S. Lee, S. Fan, Y. Lifshitz, and S. T. Lee, "High-density, ordered ultraviolet light-emitting ZnO nanowire arrays," Adv. Mater. 15, 838–841 (2003). [CrossRef]
27. K. H. Li, X. Liu, Q. Wang, S. Zhao, and Z. Mi, "Ultralow-threshold electrically injected AlGaN nanowire ultraviolet lasers on Si operating at low temperature," Nat. Nanotechnol. 10, 140–144 (2015). [CrossRef]
28. H. Y. Yang, S. F. Yu, S. P. Lau, S. H. Tsang, G. Z. Xing, and T. Wu, "Ultraviolet coherent random lasing in randomly assembled SnO2 nanowires," Appl. Phys. Lett. 94, 241121 (2009). [CrossRef]
29. M. Sakai, Y. Inose, K. Ema, T. Ohtsuki, H. Sekiguchi, A. Kikuchi, and K. Kishino, "Random laser action in GaN nanocolumns," Appl. Phys. Lett. 97, 151109 (2010). [CrossRef]
30. J. Bingi, A. R. Warrier, and C. Vijayan, "Raman mode random lasing in ZnS-β-carotene random gain media," Appl. Phys. Lett. 102, 221105 (2013). [CrossRef]
31. N. Wang, X. Yuan, X. Zhang, Q. Gao, B. Zhao, L. Li, M. Lockrey, H. H. Tan, C. Jagadish, and P. Caroff, "Shape engineering of InP nanostructures by selective area epitaxy," ACS Nano 13, 7261–7269 (2019). [CrossRef]
32. Q. Gao, D. Saxena, F. Wang, L. Fu, S. Mokkapati, Y. Guo, L. Li, J. Wong-Leung, P. Caroff, H. H. Tan, and C. Jagadish, "Selective-area epitaxy of pure wurtzite InP nanowires: high quantum efficiency and room-temperature lasing," Nano Lett. 14, 5206–5211 (2014). [CrossRef]
33. J. Liu, P. D. Garcia, S. Ek, N. Gregersen, T. Suhr, M. Schubert, J. Mørk, S. Stobbe, and P. Lodahl, "Random nanolasing in the Anderson localized regime," Nat. Nanotechnol. 9, 285–289 (2014). [CrossRef]
34. L. A. Coldren, S. W. Corzine, and M. L. Mashanovitch, Diode Lasers and Photonic Integrated Circuits (Wiley, 2012).
35. I. Prieto, J. M. Llorens, L. E. Muñoz-Camúñez, A. G. Taboada, J. Canet-Ferrer, J. M. Ripalda, C. Robles, G. Muñoz-Matutano, J. P. Martínez-Pastor, and P. A. Postigo, "Near thresholdless laser operation at room temperature," Optica 2, 66–69 (2015). [CrossRef]
36. B. Fazio, P. Artoni, M. A. Iatì, C. D'andrea, M. J. L. Faro, S. Del Sorbo, S. Pirotta, P. G. Gucciardi, P. Musumeci, and C. S. Vasi, "Strongly enhanced light trapping in a two-dimensional silicon nanowire random fractal array," Light Sci. Appl. 5, e16062 (2016). [CrossRef]
37. H. J. Joyce, C. J. Docherty, Q. Gao, H. H. Tan, C. Jagadish, J. Lloyd-Hughes, L. M. Herz, and M. B. Johnston, "Electronic properties of GaAs, InAs and InP nanowires studied by terahertz spectroscopy," Nanotechnology 24, 214006 (2013). [CrossRef]
38. K. A. Meradi, F. Tayeboun, S. Ghezali, R. Naoum, and H. T. Hattori, "Design of a thermal tunable photonic-crystal coupler," J. Russ. Laser Res. 32, 572–578 (2011). [CrossRef]
39. B. R. Bennett, R. A. Soref, and J. A. D. Alamo, "Carrier-induced change in refractive index of InP, GaAs and InGaAsP," IEEE J. Quantum Electron. 26, 113–122 (1990). [CrossRef]
40. M. A. Zimmler, J. Bao, F. Capasso, S. Müller, and C. Ronning, "Laser action in nanowires: observation of the transition from amplified spontaneous emission to laser oscillation," Appl. Phys. Lett. 93, 051101 (2008). [CrossRef]
41. S. W. Eaton, M. Lai, N. A. Gibson, A. B. Wong, L. Dou, J. Ma, L.-W. Wang, S. R. Leone, and P. Yang, "Lasing in robust cesium lead halide perovskite nanowires," Proc. Natl. Acad. Sci. USA 113, 1993–1998 (2016). [CrossRef]
42. D. Saxena, S. Mokkapati, P. Parkinson, N. Jiang, Q. Gao, H. H. Tan, and C. Jagadish, "Optically pumped room-temperature GaAs nanowire lasers," Nat. Photonics 7, 963–968 (2013). [CrossRef]
43. A. Mishra, L. V. Titova, T. B. Hoang, H. E. Jackson, L. M. Smith, J. M. Yarrison-Rice, Y. Kim, H. J. Joyce, Q. Gao, H. H. Tan, and C. Jagadish, "Polarization and temperature dependence of photoluminescence from zincblende and wurtzite InP nanowires," Appl. Phys. Lett. 91, 263104 (2007). [CrossRef]
44. T. Schwartz, G. Bartal, S. Fishman, and M. Segev, "Transport and Anderson localization in disordered two-dimensional photonic lattices," Nature 446, 52–55 (2007). [CrossRef]
45. P. Stano and P. Jacquod, "Suppression of interactions in multimode random lasers in the Anderson localized regime," Nat. Photonics 7, 66–71 (2013). [CrossRef]
46. S. Mujumdar, V. Türck, R. Torre, and D. S. Wiersma, "Chaotic behavior of a random laser with static disorder," Phys. Rev. A 76, 033807 (2007). [CrossRef]
47. C. Tessarek, R. Goldhahn, G. Sarau, M. Heilmann, and S. Christiansen, "Carrier-induced refractive index change observed by a whispering gallery mode shift in GaN microrods," New J. Phys. 17, 083047 (2015). [CrossRef]
48. P. Zhao, Z. Feng, F. Qi, A. Qi, Y. Wang, and W. Zheng, Blue Shift of Laser Mode in Photonic Crystal Microcavity (SPIE, 2014).
49. P. Liu, X. He, J. Ren, Q. Liao, J. Yao, and H. Fu, "Organic–inorganic hybrid Perovskite nanowire laser arrays," ACS Nano 11, 5766–5773 (2017). [CrossRef]
50. R. Agarwal, C. J. Barrelet, and C. M. Lieber, "Lasing in single cadmium sulfide nanowire optical cavities," Nano Lett. 5, 917–920 (2005). [CrossRef]
51. R. Chen, B. Ling, X. W. Sun, and H. D. Sun, "Room temperature excitonic whispering gallery mode lasing from high-quality hexagonal ZnO microdisks," Adv. Mater. 23, 2199–2204 (2011). [CrossRef]
52. K. Wang, S. Sun, C. Zhang, W. Sun, Z. Gu, S. Xiao, and Q. Song, "Whispering-gallery-mode based CH3NH3PbBr3 perovskite microrod lasers with high quality factors," Mater. Chem. Front. 1, 477–481 (2017). [CrossRef]
53. D. Saxena, F. Wang, Q. Gao, S. Mokkapati, H. H. Tan, and C. Jagadish, "Mode profiling of semiconductor nanowire lasers," Nano Lett. 15, 5342–5348 (2015). [CrossRef]
54. A. Z. Mariano, C. Federico, M. Sven, and R. Carsten, "Optically pumped nanowire lasers: invited review," Semicond. Sci. Technol. 25, 024001 (2010). [CrossRef]
Centripetal force
The centripetal force on a body is the external force that causes the body to move in a circular path with constant speed; it acts along the radius, directed towards the centre of the circular path.
$a = r\omega^2$
∴ $F_c = mr\omega^2$
Centrifugal force
The outward forces acting on bodies when they move in circular paths are called centrifugal forces.
Centrifugal force $= \frac{mv^2}{r}$
It is not a real force but a fictitious one, arising from the inertia of the body.
An expression for the centripetal force:
It is defined as the inward force acting on a body when it moves in a circular path. This force is directed towards the centre of the circular path. It is given by,
$F = \frac{mv^2}{r}$
Let us consider a body of mass m moving in a circular path of radius r with uniform speed v. The velocity of the body changes in direction (though not in magnitude). At any instant, the body is at point A.
Let $\overrightarrow{V_A}$ be the velocity of the body at A.
After a time $\Delta t$, the body reaches point B.
Let $\overrightarrow{V_B}$ be the velocity of the body at B.
The change in velocity is
$\Delta \vec{V} = \overrightarrow{V_B} - \overrightarrow{V_A}$ ……(i)
Angular velocity ($\omega$):
It is defined as the rate of change of angular displacement.
It is denoted by $\omega$ and is given by $\omega = \frac{\theta}{t}$.
Unit of $\omega$: rad s$^{-1}$
Average angular velocity: $\omega_{av} = \frac{\Delta\theta}{\Delta t}$
Angular acceleration ($\alpha$):
It is defined as the rate of change of angular velocity.
It is denoted by $\alpha$ and, for a body starting from rest, is given by $\alpha = \frac{\omega}{t}$.
Average angular acceleration: $\alpha_{av} = \frac{\Delta\omega}{\Delta t}$
Relation between linear velocity (v) and angular velocity ($\omega$):
Let us consider a body moving in a circular path of radius r. When the body moves from A to B,
the angular displacement of the body is θ.
Let the arc length AB be denoted by s.
From the definition of angle in radians,
θ = $\frac{{\rm arc\ length}}{{\rm radius}}$ = $\frac{AB}{r}$ = $\frac{s}{r}$
or, $s = r\theta$ ……(i)
Differentiating eqn (i) with respect to t:
$\frac{ds}{dt} = r\,\frac{d\theta}{dt}$
$v = r\omega$ (since $\frac{ds}{dt}$ is the linear velocity and $\frac{d\theta}{dt}$ the angular velocity)
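A quick numerical illustration of $v = r\omega$ (the numbers below are invented for the example):

```python
import math

# Illustration of v = r * omega with assumed numbers: a point on the rim of
# a wheel of radius 0.35 m turning at 120 revolutions per minute.
r = 0.35                             # metres
omega = 120 * 2 * math.pi / 60       # convert rpm to rad/s -> ~12.57 rad/s
v = r * omega
print(f"omega = {omega:.2f} rad/s, v = {v:.2f} m/s")   # v ~ 4.40 m/s
```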
Centripetal force in terms of angular velocity:
Let us draw a vector triangle PQR such that triangles AOB and PQR are similar; both are isosceles triangles with the same vertex angle.
In similar triangles, the ratios of corresponding sides are equal:
$\frac{OA}{AB} = \frac{PQ}{QR}$
or, $\frac{r}{AB} = \frac{V}{\Delta V}$ ……(ii)
Distance = speed × time, so
AB = $v\,\Delta t$ ……(iii)
Putting this value of AB in eqn (ii):
$\frac{r}{v\Delta t} = \frac{V}{\Delta V}$
or, $\frac{\Delta V}{\Delta t} = \frac{v^2}{r}$
$a = \frac{v^2}{r}$ ……(iv)
It gives the centripetal acceleration.
By Newton's 2nd law of motion,
F = ma = $\frac{mv^2}{r}$
This gives the centripetal force.
As we have
$v = \omega r$,
$F = \frac{m\omega^2 r^2}{r}$
$F = mr\omega^2$
It gives the centripetal force in terms of angular velocity.
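A short consistency check, with assumed values, that the two forms $F = \frac{mv^2}{r}$ and $F = mr\omega^2$ agree:

```python
# Consistency check that F = m v^2 / r and F = m r omega^2 give the same
# centripetal force (the numbers are assumed, purely for illustration).
m, r, v = 2.0, 0.5, 3.0        # mass (kg), radius (m), speed (m/s)
omega = v / r                  # from v = r * omega

print(m * v**2 / r)            # 36.0 (newtons)
print(m * r * omega**2)        # 36.0 -- the two expressions agree
```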
Motion of cyclist on a circular path:
Let us consider a cyclist of mass m moving on a circular path of radius r with uniform speed v.
Let mg be the weight of the cyclist and R the normal reaction on the cyclist.
To move on a circular path, the cyclist must lean through an angle $\theta$ from the vertical. Let us resolve R into two components, Rcos$\theta$ and Rsin$\theta$.
The component Rcos$\theta$ balances the weight of the cyclist.
∴ Rcos$\theta$ = mg ……(i)
The component Rsin$\theta$ provides the necessary centripetal force for the cyclist to move on a circular path.
∴ Rsin$\theta$ = $\frac{mv^2}{r}$ ……(ii)
Dividing eqn (ii) by eqn (i):
$\frac{R\sin\theta}{R\cos\theta} = \frac{mv^2}{r\,mg}$
or, $\tan\theta = \frac{v^2}{rg}$
or, $v^2 = rg\tan\theta$
or, $v = \sqrt{rg\tan\theta}$
This relation gives the permissible speed of a cyclist moving on a circular path.
By leaning from the vertical, the cyclist lets the reaction force supply the required centripetal force, allowing a higher speed while the wheel moves in a circular path.
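As a worked example with assumed numbers, the permissible speed follows directly from $v = \sqrt{rg\tan\theta}$:

```python
import math

# Worked example of v = sqrt(r * g * tan(theta)) for a cyclist; the radius
# and lean angle are assumed values.
r, g = 20.0, 9.8                 # radius of the curve (m), g (m/s^2)
theta = math.radians(15)         # lean angle from the vertical

v = math.sqrt(r * g * math.tan(theta))
print(f"permissible speed ~ {v:.1f} m/s")    # ~7.2 m/s
```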
Banked track:
If the outer edge of a road is raised slightly above the inner edge, the road is said to be banked. Roads are banked to provide the necessary centripetal force to vehicles: the component Rsin$\theta$ of the normal reaction supplies the centripetal force. On a banked road the permissible speed is higher, side-slip of the vehicle is prevented, and the motion becomes independent of friction.
Let us consider a car of mass m moving on a circular banked track of radius r with uniform speed v. Let mg be the weight of the car and R the normal reaction on the car.
Let us resolve R into two components, Rcos$\theta$ and Rsin$\theta$, where $\theta$ is the angle of banking.
The component Rcos$\theta$ balances the weight of the car.
∴ Rcosθ = mg -------- i)
The component Rsin$\theta$ provides the necessary centripetal force for the car to move in a circular path.
∴ Rsinθ = $\frac{{{\rm{m}}{{\rm{v}}^2}}}{{\rm{r}}}$ -------------ii)
Dividing eqn (ii) by eqn (i) gives $\tan\theta = \frac{v^2}{rg}$, i.e., $v = \sqrt{rg\tan\theta}$. This gives the permissible speed of a car on a banked track. Here it should be noted that $\theta$ should be less than the angle of repose.
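The same relation can be inverted to find the banking angle for a given design speed; the values below are illustrative only and friction is neglected:

```python
import math

# Inverting tan(theta) = v^2 / (r * g) to get the banking angle for a
# design speed; speed and radius are assumed, friction is neglected.
v, r, g = 25.0, 200.0, 9.8       # m/s, m, m/s^2

theta = math.degrees(math.atan(v**2 / (r * g)))
print(f"banking angle ~ {theta:.1f} degrees")   # ~17.7 degrees
```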
Motion of a body in a vertical circle:
Let us consider a body of mass m attached to one end of a string of length l and rotated in a vertical circle with uniform speed v. Here the length of the string equals the radius of the circle, l = r.
At lowest point A,
Let mg be the weight of the body and T1 the tension produced in the string. Then,
Net force towards centre = T1 – mg
$\frac{{{\rm{m}}{{\rm{v}}^2}}}{{\rm{r}}}$ = T1 – mg
T1 = $\frac{{{\rm{m}}{{\rm{v}}^2}}}{{\rm{r}}}$ + mg
Tmax = $\frac{{{\rm{m}}{{\rm{v}}^2}}}{{\rm{r}}}$ + mg
At highest point B
Let mg be the weight of the body and T2 the tension produced in the string. Then,
Net force towards centre = T2 + mg
$\frac{{{\rm{m}}{{\rm{v}}^2}}}{{\rm{r}}}$ = T2 + mg
T2 = $\frac{{{\rm{m}}{{\rm{v}}^2}}}{{\rm{r}}}$ - mg
Tmin = $\frac{{{\rm{m}}{{\rm{v}}^2}}}{{\rm{r}}}$ - mg
At point C (string horizontal)
Let mg be the weight and T3 the tension produced in the string. Then,
Net force towards centre = T3
$\frac{{{\rm{m}}{{\rm{v}}^2}}}{{\rm{r}}}$ = T3
At point D
Let T be the tension produced in the string; then the net force towards the centre = T − mgcos$\theta$
$\frac{mv^2}{r}$ = T − mgcos$\theta$
T = $\frac{mv^2}{r}$ + mgcos$\theta$
If $\theta = 0^\circ$, then T = $\frac{mv^2}{r}$ + mgcos$0^\circ$ = $\frac{mv^2}{r}$ + mg (at A)
If $\theta = 90^\circ$, then T = $\frac{mv^2}{r}$ + mgcos$90^\circ$ = $\frac{mv^2}{r}$ (at C)
If $\theta = 180^\circ$, then T = $\frac{mv^2}{r}$ + mgcos$180^\circ$ = $\frac{mv^2}{r}$ − mg (at B)
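A short numerical sketch, with assumed values of m, v, and r, evaluating $T(\theta) = \frac{mv^2}{r} + mg\cos\theta$ at the three special points treated above:

```python
import math

# Evaluating T(theta) = m v^2 / r + m g cos(theta) at the three special
# points discussed above, with assumed values of m, v and r.
m, v, r, g = 0.5, 4.0, 1.0, 9.8

for theta_deg in (0, 90, 180):
    T = m * v**2 / r + m * g * math.cos(math.radians(theta_deg))
    print(f"theta = {theta_deg:3d} deg: T = {T:.1f} N")
# 12.9 N at the lowest point (T_max), 8.0 N with the string horizontal,
# and 3.1 N at the highest point (T_min = m v^2 / r - m g)
```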
Horizontal plane:
Speed, kinetic energy, and angular momentum remain constant for a particle moving along a circular path in a horizontal plane.
A particle is executing circular motion with constant speed; is its acceleration also constant?
When a body moves in a circle with constant speed it still has an acceleration, because the direction of the velocity changes while its magnitude (the speed) remains constant. For a particle moving with uniform speed v, the magnitude of the acceleration, $\frac{v^2}{r}$, remains constant, but its direction changes continuously: it always points towards the centre of the circular path, perpendicular to the velocity.
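This can be checked numerically. The sketch below uses assumed values of r and $\omega$ and evaluates the centripetal acceleration vector at a few instants:

```python
import math

# Uniform circular motion with assumed r and omega: the acceleration
# magnitude stays fixed at v^2/r = r*omega^2 while its direction rotates.
r, omega = 2.0, 1.5              # radius (m), angular velocity (rad/s)

for t in (0.0, 0.5, 1.0):
    ax = -r * omega**2 * math.cos(omega * t)   # acceleration points to centre
    ay = -r * omega**2 * math.sin(omega * t)
    print(f"t = {t}: |a| = {math.hypot(ax, ay):.2f} m/s^2, a = ({ax:.2f}, {ay:.2f})")
# |a| is 4.50 m/s^2 at every instant; only the direction changes
```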
Why cyclists lean inwards while rounding a curve:
When a cyclist moves along a curved path, sideways frictional forces come into play between the tyres and the road. These frictional forces act towards the centre of the curved path and hence provide the necessary centripetal force; this is why the cyclist leans inwards while rounding a curve.
Why curved railway tracks are banked:
When a fast-moving train takes a curved path, it tends to move tangentially off the track. To prevent this, curved tracks are banked, with the outer rail raised, so that a component of the normal reaction produces the centripetal force required to keep the train moving on the curved path.
Why the passengers of a car rounding a curve are thrown outward:
When a car rounds a curve, friction from the road on the turning tyres exerts a force on the car that pushes it around the curve. The passengers, because of their inertia, tend to continue in a straight line, so relative to the car they appear to be thrown outward; this apparent push is the centrifugal effect.
A smooth ball is placed on the circumference of a smooth disc. When the disc rotates, the ball falls down:
Because both surfaces are smooth, no friction is available to supply the centripetal force. The ball therefore cannot follow the circular motion; it moves off tangentially (it is thrown outwards relative to the disc) and falls down.
An airplane tilts when it makes a curved flight:
An airplane tilts (banks) when it makes a curved flight so that the horizontal component of the aerodynamic lift provides the necessary centripetal force.
If a small can filled with water is rapidly swung in a vertical circle:
When a bucket of water tied to a string is rapidly spun in a vertical circle, the tension in the string (together with gravity at the top of the circle) provides the centripetal force required for the circular motion. Provided the speed is high enough that the required centripetal acceleration at the top is at least g, the water presses against the bottom of the bucket and does not fall out.
Donaldson–Thomas invariants of abelian threefolds and Bridgeland stability conditions
Authors: Georg Oberdieck, Dulip Piyaratne and Yukinobu Toda
Journal: J. Algebraic Geom. 31 (2022), 13-73
DOI: https://doi.org/10.1090/jag/788
Published electronically: September 14, 2021
Abstract
We study the reduced Donaldson–Thomas theory of abelian threefolds using Bridgeland stability conditions. The main result is the invariance of the reduced Donaldson–Thomas invariants under all derived autoequivalences, up to explicitly given wall-crossing terms. We also present a numerical criterion for the absence of walls in terms of a discriminant function. For principally polarized abelian threefolds of Picard rank one, the wall-crossing contributions are discussed in detail. The discussion yields evidence for a conjectural formula for curve counting invariants by Bryan, Pandharipande, Yin, and the first author.
For the proof we strengthen several known results on Bridgeland stability conditions of abelian threefolds. We show that certain previously constructed stability conditions satisfy the full support property. In particular, the stability manifold is non-empty. We also prove the existence of a Gieseker chamber and determine all wall-crossing contributions. A definition of reduced generalized Donaldson–Thomas invariants for arbitrary Calabi–Yau threefolds with abelian actions is given.
Georg Oberdieck
Affiliation: Mathematisches Institut, Universität Bonn, Germany
MR Author ID: 1175196
Email: [email protected]
Dulip Piyaratne
Affiliation: Department of Mathematics, Xiamen University of Malaysia, Malaysia
Email: [email protected]
Yukinobu Toda
Affiliation: Kavli Institute for the Physics and Mathematics of the Universe, University of Tokyo (WPI), Japan
MR Author ID: 792353
Email: [email protected]
Received by editor(s): August 22, 2018
Received by editor(s) in revised form: May 25, 2021
Additional Notes: The first author was supported by the National Science Foundation Grant DMS-1440140 while in residence at MSRI, Berkeley. The second author was supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. The third author was supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and a Grant-in-Aid for Scientific Research (No. 26287002) from MEXT, Japan
Article copyright: © Copyright 2021 University Press, Inc.
Coherent modulation of the sea-level annual cycle in the United States by Atlantic Rossby waves
Francisco M. Calafat, Thomas Wahl, Fredrik Lindsten, Joanne Williams & Eleanor Frajka-Williams
Nature Communications volume 9, Article number: 2571 (2018)
A Publisher Correction to this article was published on 12 October 2018
Changes in the sea-level annual cycle (SLAC) can have profound impacts on coastal areas, including increased flooding risk and ecosystem alteration, yet little is known about the magnitude and drivers of such changes. Here we show, using novel Bayesian methods, that there are significant decadal fluctuations in the amplitude of the SLAC along the United States Gulf and Southeast coasts, including an extreme event in 2008–2009 that is likely (probability ≥68%) unprecedented in the tide-gauge record. Such fluctuations are coherent along the coast but decoupled from deep-ocean changes. Through the use of numerical and analytical ocean models, we show that the primary driver of these fluctuations involves incident Rossby waves that generate fast western-boundary waves. These Rossby waves project onto the basin-wide upper mid-ocean transport (top 1000 m) leading to a link with the SLAC, wherein larger SLAC amplitudes coincide with enhanced transport variability.
The sea-level annual cycle (SLAC) can have local peak-to-peak amplitudes comparable to the global average sea-level rise over the 20th century (~16 cm). These annual variations in sea level have a profound effect on coastal areas. They affect the habitat availability, nutrient budgets, and productivity of estuaries [1,2]; enable substantial coastal erosion to an extent comparable, over a year, to that caused by a hurricane [3]; and modulate coastal groundwater dynamics and discharge [4]. In low-lying areas, large annual variations can also contribute to nuisance flooding, which occurs during clear-sky conditions due to the combination of high mean sea level and spring tides [5]. In addition, they can compound the effect of sea-level rise and expose the coastline to increased risk of flooding by raising the baseline for storm surges.
The SLAC is primarily associated with the response of the ocean-atmosphere system to changes in solar insolation by season and latitude, although it includes also a small gravitational contribution [6]. Such response is governed by a complex interplay between local and large-scale dynamics [7], and thus is highly location dependent. As a result, both the amplitude and phase of the SLAC exhibit great geographic variability [8,9]. Furthermore, since the climate system may respond nonlinearly to the periodic forcing by solar insolation, the oscillatory characteristics of the SLAC can change considerably over time. Indeed, significant temporal variations in the amplitude and phase of the SLAC have been observed in many regions around the world [10–18]. These changes in the SLAC can significantly exacerbate the effects of seasonal variations on coastal areas. Knowing how to model and predict these seasonal changes would provide crucial time to better protect coastal areas and to utilize their resources more effectively, in turn bringing great socioeconomic and environmental benefits. However, this requires a deep understanding of their underlying dynamics, which is still lacking in many regions.
The United States Gulf and Southeast coasts are particularly vulnerable to the effects of seasonal sea-level changes due to their hurricane-prone and predominantly low-lying coastal areas, yet studies focused on these regions are very limited12,16. Significant changes in the amplitude of the SLAC were observed in tide-gauge records from both regions, but the processes controlling these changes remain poorly understood. Multiple regression16 and correlation12 analyses were used to examine the relationship between the amplitude changes and several proxy variables. Low (~0.3) or non-significant correlations were found along the Atlantic coast for all the proxies considered12. Along the Gulf Coast, changes in the amplitude of the SLAC were found to correlate with air surface temperature for some periods but only very weakly with sea surface temperature and steric height16, which is difficult to reconcile with sea-level theory and interpret in terms of direct causal processes. Therefore, a causal explanation of the changes in the SLAC amplitude is still lacking. Filling this gap in our knowledge is an immediate priority since it severely limits our ability to understand, model, and ultimately predict these seasonal sea-level changes.
The difficulty of finding a physical explanation arises because sea level depends on the density structure of the whole water column7, which is set by both local and non-local dynamics. The implication is that sea-level changes are not necessarily governed by local forcing; the commonly used approach of correlating or regressing sea level against surface atmospheric variables therefore cannot establish causation, and must be guided by theory and supported by basin-scale estimates of the ocean density field. This is especially true for western boundaries, since they are strongly affected by remote forcing in the ocean interior19.
Another aspect that merits consideration is the choice of the method to estimate changes in the amplitude of the SLAC. In the present context, the SLAC refers to the response of the climate system to the external periodic forcing by solar radiation. The response of a non-linear system to a periodic force is not necessarily periodic and often exhibits both amplitude and frequency modulation20. While approaches that assume a stationary annual cycle and analyze anomalies relative to such cycle are valid and can be successful at explaining the variability, allowing for deviations from periodicity provides an alternative view that can greatly facilitate the analysis and understanding of annual changes21. The most commonly used method to estimate changes in the SLAC is a harmonic least-squares fit to running windows of a selected length10,11,15,16,17,18. This method, however, suffers from the limitation of requiring a window of at least 5 years in order to yield robust estimates8, which limits inference about variations at decadal or shorter timescales (a 5-year running mean attenuates the power of decadal signals by ~61%). In addition, this method does not provide estimates within half the window size from the edges of the time series, and uses information contained only within the corresponding window.
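As a concrete illustration of this windowing estimator and its attenuation of decadal signals, the following minimal Python sketch fits an annual harmonic in 60-month running windows of a synthetic monthly series; all names and values here are ours, purely for illustration, not from the cited studies:

```python
import numpy as np

def windowed_annual_amplitude(sea_level, window=60):
    """Amplitude of the annual harmonic fitted in running windows (NaN at edges)."""
    t = np.arange(len(sea_level))
    amp = np.full(len(sea_level), np.nan)
    half = window // 2
    for c in range(half, len(sea_level) - half):
        ts, ys = t[c - half:c + half], sea_level[c - half:c + half]
        # Design matrix: mean, linear trend, annual cosine/sine (12-month period)
        X = np.column_stack([np.ones_like(ts), ts,
                             np.cos(2 * np.pi * ts / 12),
                             np.sin(2 * np.pi * ts / 12)])
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        amp[c] = np.hypot(beta[2], beta[3])  # amplitude of the annual harmonic
    return amp

# Synthetic example: a decadally modulated annual cycle plus noise (cm)
rng = np.random.default_rng(0)
t = np.arange(600)  # 50 years of monthly data
true_amp = 10 + 3 * np.sin(2 * np.pi * t / 120)
eta = true_amp * np.cos(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)
est = windowed_annual_amplitude(eta)  # note the attenuated decadal signal
```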
Here, we present a novel method based on Bayesian state-space modeling22 that overcomes the issues of the windowing method, enabling estimation with unprecedented temporal resolution and robustness. We use our Bayesian method and a combination of sea-level observations, modeling, and theory to quantify changes in the amplitude of the SLAC along the Gulf and Southeast coasts of the United States, and provide a deep insight into the key drivers. We show that there are significant decadal fluctuations in the annual amplitude and identify an extreme event in 2008–2009 that is likely (probability ≥68%) unprecedented in the tide-gauge record. Such fluctuations are coherent over large distances along the coast from the Yucatan Peninsula to Cape Hatteras but they are confined to the coastal zone. The primary driver involves density anomalies propagating westward as baroclinic Rossby waves which, on reaching the western boundary, generate fast boundary waves that modulate the SLAC along the coast. These density anomalies drive changes in the geostrophic component of the meridional overturning circulation (MOC) at 26.5°N, both in observations from the Rapid Climate Change Programme23 (RAPID) and in the ocean models, leading to a link between the SLAC and the upper mid-ocean transport (UMO, the top 1000 m meridional transport).
Changes in the SLAC amplitude from tide-gauge records
Time series of the SLAC amplitude for tide-gauge records along the United States Gulf and Atlantic coasts are shown in Fig. 1a (the location of the tide gauges is shown in Supplementary Fig. 1). All time series display significant amplitude variations (up to 71% of the time-mean value) at decadal timescales, reflecting strong SLAC changes. These variations show a striking regional coherence along two distinct sections of coastline divided at approximately Cape Hatteras. Amplitude changes across stations to the south of Cape Hatteras (stations 1–14) are very coherent and show both larger magnitude (up to 7.8 cm from the time mean) and a larger time-mean value (up to 11.1 cm) than changes at stations north of Cape Hatteras (stations 15–25) (time-mean value of 6.5 cm and deviations of up to 4.6 cm). This suggests that two different regimes of seasonal variability are operating north and south of Cape Hatteras. This regional coherence and the division line marked by Cape Hatteras are made clearer by plotting the correlation matrix of the amplitude time series (Fig. 1b). The cross-correlation for stations on the same side of Cape Hatteras, either south or north, is very high (average of 0.80 and 0.89, respectively), reflecting the coherence along the two coastline sections, but it is much lower (average of 0.36) for stations on different sides of the Cape. The existence of two different regimes north and south of Cape Hatteras has been observed previously for inter-annual sea-level variability24,25. Given the larger amplitude variations and the high vulnerability of the Gulf and Southeast coasts to sea-level changes, hereafter we focus on this region.
Amplitude from tide-gauge records. a Temporal changes in the amplitude of the SLAC from tide-gauge records along the United States Gulf and Atlantic coasts as estimated using a Bayesian state-space model. Solid lines denote the mean of the posterior distribution at each time step, whereas shaded areas represent the 68% (1-sigma) credible interval. The colors of the solid lines denote the time-mean value of the annual amplitude. The name of each station, along with its identification number, is also shown (see Supplementary Fig. 1 for tide-gauge locations). b Correlation matrix of the time series shown in a. Numbers along the axes represent tide-gauge identification numbers, whereas black dots denote significant correlation at the 95% confidence level
A prominent feature of the time-varying amplitudes is their particularly large values around 2009 uniformly across all stations south of Cape Hatteras. This feature is further emphasized by plotting the number of months for which the annual amplitude is above the 95th percentile in 7-year running windows for the longest tide-gauge records (Fig. 2). The maximum number of exceedances is found in 2008–2009 at all stations and is likely (probability ≥68%) unprecedented in the tide-gauge record as indicated by the non-overlapping credible intervals. An amplification of the SLAC after 1990 was reported recently for stations in the Gulf of Mexico16, but it was not clear from that study whether and to what extent changes after 1990 represented a sustained change. Here we clarify this issue and show that such changes do not represent a permanent amplification of the SLAC but consist of a succession of decadal fluctuations with a particularly large peak around 2009. We illustrate this by plotting the amplitude at Key West as derived from our method together with the estimate of ref. 16 based on the windowing method (Fig. 3). Overall, the two time series are in good agreement, though the latter shows reduced fluctuations and misses some features such as the peaks in the 1960s and the early 1970s. Importantly, the last value in the estimate of ref. 16 is for June 2009, which coincides exactly with the time of the highest peak over the entire record. This coincidence results in a curve that is characterized by a relatively flat period until 1990 and a marked rise from that point onwards, giving the impression of a sustained change. Our estimate, however, shows that the annual amplitude fell back to average values after 2009 as part of a large decadal oscillation (Fig. 3), limiting support for the existence of a long-term trend but revealing the presence of an enhanced fluctuation at the end of the record.
Exceedances of the 95th percentile. Number of months within 7-year running windows for which the amplitude of the SLAC is above the 95th percentile (computed over the entire record) for tide-gauge records with at least 50 years of data. To construct the histogram, a 7-year window is shifted month by month starting with a window centered at month 43 of the record. Numbers along the x axis refer to the identification numbers shown in Fig. 1, while colors correspond to the color bar and denote time. The two error bars in each histogram represent the 68% credible intervals associated with the maximum values in 2008–2009 and in the period before 1990, whereas the black dots represent such maximum values
SLAC amplitude at Key West. Comparison of the SLAC amplitude for the Key West tide gauge as estimated with a Bayesian state-space model (black) and by ref. 16 using the method of 5-year running windows (red). The gray-shaded area represents the 68% (1-sigma) credible interval for our estimate
Mechanisms of changes in the annual amplitude
The coherent signal observed by tide gauges could represent either a coastal signal or a basin-scale mode where both the coastal zone and the deep ocean oscillate together. Determining which of the two cases applies is crucial to understanding the true nature of this signal, but such determination cannot be made solely on the basis of tide gauges located on the coast. To shed light on this issue, we have computed the point-wise correlation between the annual amplitude from satellite altimetry data at each grid point and that averaged along the United States Gulf and Southeast coasts (Fig. 4a). The correlation map shows that changes in the amplitude are coherent along the coast from the Yucatan Peninsula to Cape Hatteras. However, the coherence is confined to the coastal zone. The altimetry data cover only the period 1993–2016, so the question arises whether the correlation pattern depends on the period considered or its length. To address this question we have computed analogous maps based on data from the Ocean Circulation and Climate Advanced Modelling (OCCAM) project model (Fig. 4b), the Nucleus for European Modelling of the Ocean (NEMO) model (Fig. 4c), and the Simple Ocean Data Assimilation (SODA) reanalysis (Fig. 4d) (see Methods for details of the models). The three model-based patterns are very similar to that from altimetry, showing strong coherence along the coast south of Cape Hatteras and providing confidence in the robustness of the spatial correlation patterns.
Correlation maps from satellite altimetry and ocean models. Point-wise correlation between the amplitude of the SLAC at each grid point and that averaged along the United States Gulf and Southeast coasts for a altimetry (1993–2016), b OCCAM (1985–2003), c NEMO (1968–2012), and d SODA (1900–2010). The average has been computed over grid points within the 0–500 m depth range following the coast from Pensacola to Charleston. Black line denotes significance of the correlation at the 95% confidence level, whereas yellow line represents the 500 m isobath
Sea-level changes can be partitioned into the sum of three components: steric, mass, and the inverse barometer (IB) effect (Methods). Different processes contribute differently to these components, and thus establishing the dominant component generally reveals key information on the underlying mechanisms. To this end, we have computed the correlation of the steric annual amplitude at each grid point with the amplitude of the SLAC averaged along the Gulf and Southeast coasts in the OCCAM model (Fig. 5a). The highest correlations are found predominantly along the continental slope, suggesting that coastal changes in the annual amplitude are attributable to steric changes. The coherence of the steric signal stretches from the Yucatan Peninsula to Cape Hatteras following the slope. The low correlations at the coast are explained by the fact that the steric component is defined as a depth integral and, hence, is necessarily small in shallow waters. Note, however, that steric signals over the slope can be transmitted to the coast through an indirect effect on bottom pressure.
Correlation map and time series of the steric component from OCCAM. a Point-wise correlation of the steric annual amplitude at each grid point with the SLAC amplitude averaged along the United States Gulf and Southeast coasts. The average has been computed over grid points within the 0–500 m depth range following the coast from Pensacola to Charleston. The yellow line represents the 300 m isobath. b The annual cycle of total sea level (blue), total steric height (orange), and the steric contributions from above the seasonal thermocline (black) and due to surface heat fluxes (red) at the location denoted by the black dot shown in a. c The annual amplitude (time mean removed) of the time series shown in b
We further confirm the dominant role of the steric component by analyzing time series of the SLAC amplitude from OCCAM along the Gulf and Southeast coasts. In particular, we show that while the mean SLAC is primarily driven by the expansion and contraction of the water column above the seasonal thermocline (top ~70 m) due to changes in surface heat fluxes, the modulation of the SLAC is due to steric changes in deeper layers. The contribution of surface heat fluxes to the SLAC, \(\eta _{\mathrm{s}}^{{\mathrm{hf}}}\), is estimated using Eq. (4), and is compared to the steric contribution from above the seasonal thermocline, \(\eta _{\mathrm{s}}^{{\mathrm{upper}}}\), as given by Eq. (2), as well as to the total steric and the total sea level. The results of the analysis for an arbitrary location in the Gulf of Mexico are shown in Fig. 5. We find that \(\eta _{\mathrm{s}}^{{\mathrm{hf}}}\) explains 95% of the variance in the annual cycle of \(\eta _{\mathrm{s}}^{{\mathrm{upper}}}\), and in turn the latter explains 89% of the variance in the SLAC (Fig. 5b). However, neither \(\eta _{\mathrm{s}}^{{\mathrm{hf}}}\) nor \(\eta _{\mathrm{s}}^{{\mathrm{upper}}}\) explains very much of the changes in the amplitude of the SLAC (Fig. 5c). In contrast, the total steric explains the majority (91%) of the changes in the amplitude of the SLAC. Similar results are found at other locations along the slope. Two implications can be drawn. First, all the information on the modulation of the SLAC resides in the ocean layers below the seasonal thermocline. Second, the SLAC can be simply described as the sum of the unmodulated cycle and a term representing steric changes below the seasonal thermocline (hereafter referred to as the modulator):
$${\mathrm{SLAC}} = \underbrace{{\mathrm{SLAC}}_{{\mathrm{mean}}}}_{{{{\rm{Steric}}\,{\rm{above}}\,{\rm{seasonal}}\,{\rm{thermocline}}}}} + \underbrace {{\mathrm{SLAC}}_{{\mathrm{modulator}}}}_{{{{\rm{Steric}}\,{\rm{below}}\,{\rm{seasonal}}\,{\rm{thermocline}}}}}.$$
These two implications affect how we understand the SLAC modulation and provide the basis for the subsequent analysis. In this regard, note the following. In the time domain, amplitude modulation typically involves multiplication of a low-frequency modulating signal and a high-frequency sine wave (the latter is often termed the carrier in radio communications). However, from properties of the Fourier transform, multiplication in the time domain corresponds to convolution in the frequency domain. Therefore, in the frequency domain, amplitude modulation appears as sums and differences of the frequencies of the two input signals. This implies that any modulated signal can always be mathematically described as the sum of the carrier and a superposition of sinusoids with frequencies slightly above and below the carrier frequency (see Methods for proof). This alternative interpretation is exactly analogous to the steric representation of the SLAC modulation. It turns out that the ocean, along its vertical dimension, behaves similarly to a Fourier transform in that it separates the frequency components of the SLAC into different ocean layers. This result will greatly facilitate our analysis.
The fact that changes along the coast are correlated over large distances but are decoupled from nearby deep-ocean changes is highly suggestive of fast wave propagation along the coast and indicates that local forcing is an unlikely driving factor. Indeed, the local response to changes in atmospheric pressure, quantified through Eq. (1) (Methods), explains <5% of the variance in the annual amplitude at all tide gauges. Similarly, we find no statistically significant correlation with local wind changes at any station. A number of mechanisms may be invoked to explain the correlation patterns (Figs. 4 and 5). The first one involves the generation of coastally trapped waves26,27 by longshore wind or buoyancy forcing (e.g., a river). These waves propagate along the boundary with the coast on the right (in the Northern Hemisphere) at speeds of a few m/s (first baroclinic mode), have an offshore length scale of about 50 km, and can carry the effects of the forcing over large distances along the coast. Importantly, the thermocline displacements associated with these waves are correlated with sea-level changes at the coast, and thus are captured by tide gauges. Propagation of sea-level anomalies along the coast has been observed in many regions28,29,30. The second plausible mechanism involves the generation of boundary waves by incident Rossby waves from the ocean interior31, which could, similarly, affect coastal sea level over large sections of coastline. Processes of Rossby wave generation include wind-stress-curl32 and buoyancy33 forcing. In the following, we explore which of these two mechanisms is more likely to explain the observed changes in the SLAC amplitude.
We have assessed the role of longshore wind by means of the model described in Appendix A of ref. 34. In particular, we have integrated the model equation from north to south starting at Cape Hatteras using a range of values for the length decay scale (100–1000 km), but have found no agreement with the changes in the amplitude of the SLAC from tide-gauge records. In addition, we have compared the SLAC from tide gauges with the annual cycle of river discharge for the major rivers in the United States flowing into the Atlantic, again without finding a good agreement. This leaves us with the incidence of Rossby waves on the western boundary as the most likely mechanism. In the following, we concentrate on this possibility and explore it on the basis of the OCCAM and NEMO models.
To investigate the role of Rossby waves in controlling the SLAC modulation, we focus on the region east of the Bahamas. The reason for this choice is that Rossby waves play a particularly important role in driving sea-level variability in this region32. In addition, changes in the SLAC amplitude in this region are significantly correlated with changes along the coastline of the mainland United States (Figs. 4 and 5), which suggests a common driving mechanism. This location is also convenient because at this latitude the Gulf Stream is restricted to the Florida Strait and hence does not interfere with the Rossby waves reaching the Bahamas east coast.
We have computed the correlation at different lags of the steric contribution from below the seasonal thermocline at the continental slope east of the Bahamas with that at each grid point in both OCCAM and NEMO. For grid points in shallow areas (<200 m), the correlation is computed with the SLAC modulator instead of the steric. The pattern of evolution (Fig. 6) shows a region of significant correlation several hundred kilometers off the coast of the Bahamas at lags of ~3 months, indicating a lagged relationship between this region and the western boundary. As the lag decreases, the region of correlation propagates westward until it reaches the coast at lag zero and then the entire shelf and coastal zone become significantly correlated, both in the Gulf of Mexico and along the Southeast coast. The close resemblance between the maps from the two models provides confidence in the robustness of this spatiotemporal pattern. We conclude that the SLAC modulator along the Gulf and Southeast coasts is related to density anomalies below the seasonal thermocline propagating westward.
Lagged correlation maps of the steric contribution from below the seasonal thermocline. Point-wise lagged correlation of the steric contribution from 200 to 1000 m depth at the continental slope east of the Bahamas with that at each grid point for a OCCAM and b NEMO. Steric time series have been band-pass filtered (Butterworth filter with lower and higher cutoff frequencies: 1/16 and 1/8 months–1) to focus on the frequencies relevant to the SLAC modulator. For grid points in shallow areas (<200 m), the correlation is computed with the SLAC modulator instead of the steric. Negative lags indicate that steric changes at the slope east of the Bahamas lag relative to other grid points. Black line denotes significance of positive correlations at the 95% confidence level
Further supporting evidence for the link to propagating anomalies is provided by producing a time-longitude section of the steric modulator east of the Bahamas at 26.5°N in OCCAM (Fig. 7a). The SLAC modulator along the Gulf and Southeast coasts is consistent with steric anomalies that originated in the ocean interior at earlier times, as indicated by the alignment of peaks and troughs in the time series of the SLAC modulator and the steric modulator at the Bahamas coast. While often the steric anomalies are formed far in the interior of the Atlantic, sometimes they originate only a few hundred kilometers offshore and reach the coast after a few months. Our calculations show that the density anomalies propagate at an average speed of about 4.1 cm/s, which is consistent with the observed phase speed of long Rossby waves at this latitude35.
Hovmöller diagram of the steric modulator and estimates from a reduced gravity model. a Time-longitude section of the steric modulator for grid points east of the Bahamas at 26.5°N in the OCCAM model. The time series of the SLAC modulator averaged along the United States Gulf and Southeast coasts (ηgom−se) is also shown to the left. b Estimates of the SLAC amplitude at the Bahamas east coast based on a 1.5-layer, reduced gravity model for solutions one (orange) and two (blue), along with the SLAC amplitude (time mean removed) from OCCAM at the Bahamas east coast (black) and that averaged over tide-gauge stations 1–7 (red)
The results above suggest that a simple model based on Rossby wave dynamics might be used to capture the modulation of the SLAC along the Gulf and Southeast coasts. To test this, we use a 1.5-layer, reduced gravity model forced by wind (Methods). We compute two solutions. In the first solution, we start the integration at xe = 66.5°W (~1000 km from the Bahamas coast) and set the value of η at xe equal to the OCCAM sea level, while in the second one, we set xe = 46.5°W (~3000 km from the Bahamas coast) and η at xe to zero. The first solution includes the effects of the wind between the Bahamas and xe plus any contribution originating to the east of xe (wind-driven or otherwise), while the second solution includes only the effects of the wind. Starting the integration further to the east in the second solution changes the results only marginally. The reduced gravity model gives a good match to both tide-gauge observations and OCCAM data (Fig. 7b), providing strong evidence for a physical link between the SLAC and Rossby waves. In particular, the correlation between the modeled and observed SLAC amplitude is 0.67 and 0.65 for the first and second solutions, respectively. These values are comparable to the correlation of the OCCAM estimates with tide-gauge data (0.7). The strong resemblance between the two solutions, along with the good agreement with observations, indicates that wind forcing is a dominant cause of the Rossby waves. We note, however, that the second solution slightly underestimates the peak in 1995 relative to the first solution, suggesting that other drivers and/or non-linear effects may also play a role.
It is interesting to assess whether the incident density anomalies are modified by the sloping topography when they approach the western boundary. To this end, we have computed the standard deviation of the SLAC modulator as a function of distance from the Bahamas coast along with the correlation between the modulator at the coast and that offshore (Supplementary Fig. 2). While there is a gradual decrease in the magnitude of the modulator with proximity to the boundary, the phase coherence remains significant through the continental slope as indicated by the correlation between the modulator at the coast and that in the open ocean. The reduction in dynamic height variability toward the western boundary has been reported before and is explained by frictional energy dissipation and the export of energy through boundary waves31,36,37. The latter is precisely the mechanism that we invoke to explain the coherence of the amplitude over large distances along the coast.
It is also interesting to note that the meridional coherence scale of the westward-propagating density anomalies is relatively small (Fig. 6). Nevertheless, both observations and models show that changes in the amplitude of the SLAC are coherent along the entire coastline up to Cape Hatteras. Because boundary waves propagate along the coast with the coast to the right, the coherence at latitudes north of the Bahamas may suggest an effect of the Rossby waves on the Gulf Stream. This would be consistent with results from previous studies that showed a significant response of the Gulf Stream to incident density anomalies from the ocean interior38,39. In particular, it has been found that, on the timescales relevant to the SLAC modulator (~annual), the Florida Current responds almost instantaneously to incident density anomalies just east of the Bahamas leading to a significant anti-correlation with the UMO. This response of the Florida Current could explain the coherence of the SLAC amplitude at high latitudes. In support of this premise, we find that the SLAC modulator from tide gauges along the Southeast coast (stations 10–14) is correlated (−0.36, significant at the 95% confidence level) with band-pass filtered (1/20–1/5 months–1) variations of the Florida Current transport.
In summary, we have shown that the mean SLAC is driven by steric changes above the seasonal thermocline induced by variations in surface heat fluxes, while the SLAC modulation is related to density changes between 200 and 1000 m depth that originate in the ocean interior and propagate westward as Rossby waves. Upon impinging on the western boundary, we conjecture that the Rossby waves generate boundary waves that propagate rapidly along the continental slope giving rise to highly-coherent sea-level changes along the coast. A schematic illustration explaining the proposed mechanisms is shown in Fig. 8. It should be noted that our results regarding the variability associated with the modulator are general in that they do not depend on whether the annual cycle is interpreted as a changing or repeating cycle. By definition, the modulator is closely related to the variability that results from removing a stationary annual cycle and then applying a band-pass filter around the relevant frequencies. This implies that approaches that assume a stationary annual cycle and focus on the frequencies of the modulator will reach the same conclusions as presented here, with the difference that such approaches will not interpret the variability as being part of a modulated annual cycle but rather as anomalies relative to a repeating cycle.
Schematic illustration of the mechanism of SLAC modulation. The mean SLAC is associated with steric changes in the seasonal thermocline induced by variations in surface heat fluxes, whereas its modulation is related to density anomalies in deeper layers propagating westward as Rossby waves. These Rossby waves give rise to fast boundary waves upon impinging on the western boundary, which in turn modulate the SLAC along the Gulf and Southeast coasts and lead to the coherence over large distances along the coast
Finally, it must be noted that there is a theoretical upper limit on the frequency of Rossby waves beyond which such waves cannot exist. This limit follows from the dispersion relation and for long Rossby waves varies with latitude according to19 \(\omega _{{\mathrm{max}}} = \left( {c/2R} \right){\mathrm{cot}}\,\varphi\), where c is the baroclinic gravity-wave phase speed, R is the radius of the earth, and φ denotes latitude. The dependence on latitude imposes a constraint on where Rossby waves might act as the SLAC modulator because this possibility requires waves with nearly annual periods. In particular, spectral analysis of the modulator reveals an upper sideband of ~10.5 months at all tide-gauge stations (Supplementary Fig. 3). It follows then that Rossby waves with periods of ~10.5 months are required, but these are only possible at latitudes south of ~40°N.
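This latitude constraint is easy to check numerically. The short sketch below evaluates the cutoff for an illustrative first-baroclinic phase speed of c = 2.5 m/s, which is an assumed value for this example, not one quoted in the paper:

```python
import numpy as np

R_EARTH = 6.371e6  # Earth radius, m
C_PHASE = 2.5      # assumed baroclinic gravity-wave phase speed, m/s
MONTH = 365.25 * 24 * 3600 / 12  # seconds per average month

for lat in (20, 30, 40, 50):
    phi = np.radians(lat)
    omega_max = C_PHASE / (2 * R_EARTH) / np.tan(phi)  # cot(phi) = 1/tan(phi)
    t_min = 2 * np.pi / omega_max / MONTH  # shortest admissible period, months
    print(f"{lat:2d}N: shortest long-Rossby period = {t_min:4.1f} months")
# Output: ~4.4 months at 20N, ~7.0 at 30N, ~10.2 at 40N, ~14.5 at 50N,
# so a ~10.5-month wave is only admitted equatorward of roughly 40N.
```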
The relationship between the SLAC and the UMO transport
Previous studies37,40 have found that Rossby waves and oceanic eddies impinging on the western boundary can have a significant effect on the geostrophic component of the MOC, especially at intra-annual timescales. Therefore, a question arises as to whether the propagating density anomalies responsible for the changes in the SLAC amplitude exhibit themselves also in this component of the MOC. To explore this possibility, we analyze time series of UMO transport from RAPID23 for the period April 2004 to October 2015 and from OCCAM for the period 1985–2003. The UMO transport is related to the horizontal difference in pressure between the eastern and western boundaries. If a relationship exists between the SLAC and the UMO, this is most likely due to the western-boundary contribution to the UMO, where the influence of incident Rossby waves occurs37,40. Such contribution, however, does not have an annual cycle but instead exhibits non-periodic variations. For non-periodic signals, the notion of peak amplitude is not well defined, so to relate the UMO to the SLAC amplitude we use the instantaneous variance of the UMO transport as a measure of its amplitude or intensity. We expect changes in the UMO variance (at the frequencies of the SLAC modulator) to covary with changes in the SLAC amplitude. To estimate the instantaneous variance of the UMO transport at the relevant timescales, we use a stochastic variance model (see Methods for details). We find a strong relationship between the amplitude of the SLAC and the variance of the UMO transport, wherein larger annual amplitudes are associated with increased fluctuation intensity in the UMO (Fig. 9). The correlation between the two quantities is higher for RAPID observations (0.91) than for OCCAM data (0.75), but in both cases it is significant at the 95% confidence level. The implication is that the density anomalies that modulate the SLAC also affect the UMO transport by altering the zonal pressure gradient through density variations at the western boundary.
SLAC from tide-gauge records and the UMO transport. Instantaneous amplitude of the SLAC (left axis) averaged over tide-gauge stations 1–10 together with the instantaneous standard deviation of the UMO transport at 26.5°N (right axis) both from RAPID (red solid) and the OCCAM model (red dashed). The gray-shaded area denotes the standard deviation about the average annual amplitude of the 10 tide-gauge stations. A long-term trend has been removed from all time series
Our analysis of tide-gauge records has revealed the presence of large inter-annual to decadal variations in the amplitude of the SLAC along the United States Gulf and Southeast coasts, which have been particularly large since the 1990s. Because the SLAC in this region peaks during the period of maximum hurricane activity in the Atlantic (between August and October), larger annual amplitudes imply increased risk of damage from hurricane storm surges due to a higher base water level. In addition, larger seasonal variations also significantly increase the likelihood of nuisance flooding and exacerbate other direct effects of annual sea-level changes (e.g., erosion, estuary productivity, and so on). Furthermore, the variations in the amplitude of the SLAC are coherent over large distances along the coast, which means that the increased risk associated with them is not localized but affects the entire coastline at any particular time. Importantly, we have suggested that these variations are associated with incident baroclinic Rossby waves from the open ocean. Since these waves propagate slowly, their effects on the coastal SLAC are felt months or even years after they are formed in the ocean interior. This delayed coastal response raises the possibility of seasonal forecasts of water levels in coastal areas, which would allow coastal managers and communities to better assess and mitigate associated risks. We have also provided observational and model-based evidence of a link between the SLAC amplitude and the UMO transport, wherein larger SLAC amplitudes coincide with amplified annual UMO variations. This result suggests that long tide-gauge records could be used to infer properties of the UMO variability for periods during which no direct estimates are available (i.e., before 2004). Given the role of the MOC in northward heat transport and climate, the new link between the SLAC and the UMO is of particular importance and will aid current efforts to better understand the behavior of this key component of the climate system.
Data sets and ocean models
Monthly mean values of sea level from tide-gauge records are obtained from the Revised Local Reference data archive of the Permanent Service for Mean Sea Level41 (http://www.psmsl.org/). The location of the tide gauges used in this study is shown in Supplementary Fig. 1. Sea-level pressure and wind monthly data are obtained from the 20th Century Reanalysis v2c3042 for the period before 2015 and from the National Centers for Environmental Prediction reanalysis43 for the period 2015–2016 (both data sets are available at http://www.esrl.noaa.gov/psd/data). Monthly flow rates of rivers in the United States flowing into the North Atlantic are from the Research Data Archive at the National Center for Atmospheric Research (https://rda.ucar.edu). Monthly values of net surface heat flux covering the period 1983–2009 were provided by the WHOI OAFlux project (http://oaflux.whoi.edu). The time series of the UMO transport is obtained from the RAPID-WATCH MOC monitoring project44 (available at http://www.rapid.ac.uk), which is provided at twice daily resolution and covers the period from April 2004 to October 2015. Daily mean transport estimates of the Florida Current from a submarine cable and calibration cruises covering the period 1982–2016 were obtained from the Atlantic Oceanographic and Meteorological Laboratory web page (www.aoml.noaa.gov/phod/floridacurrent/). Both the UMO and the Florida Current transport time series are averaged into monthly values.
The satellite altimetry data are obtained from the multi-mission gridded sea surface heights product provided by the Copernicus Marine Environment Monitoring Service (available at http://marine.copernicus.eu/). The data are made available as weekly fields on a 1/4° × 1/4° near global grid covering the period from January 1993 to May 2016. These weekly fields are averaged into monthly fields for our analysis. The data are provided with all standard corrections applied, including corrections for tropospheric (wet and dry) and ionospheric path delays, sea state bias, tides (solid earth, ocean, loading, and pole), and atmospheric effects (sea-level pressure and high-frequency winds).
In this study, we use data from the OCCAM model. The version that we use here is a free-surface, free-running (without data assimilation) global model with a spatial resolution of 1/4° × 1/4° in the horizontal and 66 non-uniform z-levels in the vertical, covering the period 1985–200345. We also use data from the NEMO (1/4°) global ocean model in its ORCA025 configuration46 (available from http://www.ceda.ac.uk/projects/jasmin), which covers the period 1958–2012, and from the SODA reanalysis47, which covers the period 1871–2010. The NEMO model is not spun up prior to 1958, and thus to make sure that we start from a stable ocean state we consider only NEMO data from 1968.
As a validation, we have compared annual amplitudes derived from OCCAM and NEMO with those from tide-gauge observations (Supplementary Fig. 4). Both models show significant correlations at most tide-gauge stations, though OCCAM performs better than NEMO as indicated by the higher correlations. It should also be noted that the correlation maps from OCCAM and NEMO are very similar to each other and to the altimetry map (Fig. 4). The good agreement between model data and observations gives us confidence in the capability of the two models to capture the dynamics governing the observed changes in the SLAC.
Sea-level equations
It is convenient for our purposes to describe sea-level changes, η, as the sum of three components: (1) the IB effect, ηIB, representing the effect of changes in sea-level pressure; (2) the steric component, ηs, representing the effect of variations in the ocean density field; and (3) the mass component, ηm, representing the effect of mass redistribution within the Earth system unrelated to changes in sea-level pressure. Expressions for each of these components are obtained from integration of the hydrostatic relation7:
$$\eta _{\mathrm{IB}} = \frac{1}{g\rho _0}\left( \overline{P_{\mathrm{a}}} - P_{\mathrm{a}} \right),$$
$$\eta _{\mathrm{s}} = -\frac{1}{\rho _0}\int_{-H}^{0} \rho \,{\mathrm{d}}z,$$
$$\eta _{\mathrm{m}} = \frac{1}{g\rho _0}\left( P_{\mathrm{b}} - \overline{P_{\mathrm{a}}} \right),$$
where g is the gravitational acceleration, ρ0 is a reference density (1025 kg m−3), Pa is the atmospheric sea-level pressure anomaly, ρ is the in situ density anomaly of the water, Pb is the pressure anomaly at the ocean bottom z = −H, and the overbar denotes spatial average over the global oceans. Note that, as written, Eq. (2) gives the total steric contribution, but the same equation can also be used to calculate the steric contribution, for example, from above the seasonal thermocline simply by replacing H with the appropriate depth. The reference depth that we use to compute the steric contribution from above the seasonal thermocline is 70 m, which corresponds to the average depth of the mixed layer in OCCAM. Selecting other values in the range 60–120 m did not affect our results in any significant manner.
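As an illustration of Eq. (2), the following minimal sketch computes the total and above-thermocline steric heights for a synthetic density-anomaly profile; the profile, grid, and function names are ours, purely for illustration:

```python
import numpy as np

RHO0 = 1025.0  # reference density, kg m^-3

def steric_height(rho_anom, z, h=None):
    """Steric height (m) from a density-anomaly profile, per Eq. (2).

    rho_anom : in situ density anomaly (kg m^-3) on depths z (m, positive down).
    h        : optional lower integration depth (e.g., 70 for the upper term).
    """
    if h is not None:
        rho_anom, z = rho_anom[z <= h], z[z <= h]
    return -np.trapz(rho_anom, z) / RHO0

# Illustrative profile: a light (warm) anomaly centred near 400 m depth
z = np.linspace(0.0, 1000.0, 201)
rho_anom = -0.05 * np.exp(-((z - 400.0) / 150.0) ** 2)
print(steric_height(rho_anom, z))          # total steric contribution
print(steric_height(rho_anom, z, h=70.0))  # above-thermocline part only
```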
The steric contribution due to changes in surface heat fluxes, \(\eta _{\mathrm{s}}^{{\mathrm{hf}}}\), can be estimated from the following first-order linear equation7:
$$\frac{\partial \eta _{\mathrm{s}}^{\mathrm{hf}}}{\partial t} = \frac{\alpha}{\rho _0 C_{\mathrm{p}}}\left( Q_{\mathrm{net}}\left( t \right) - \left\langle Q_{\mathrm{net}}\left( t \right) \right\rangle \right),$$
where α is the coefficient of thermal expansion, Cp is the specific heat of sea water, Qnet is the net surface heat flux, and the angle brackets denote temporal averaging. α is estimated from the OCCAM temperature and salinity fields averaged over the mixed layer. The depth of the mixed layer is determined using a potential density threshold of 0.125 (sigma units)48 relative to the density at the first model level (2.7 m).
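A minimal sketch of how Eq. (4) might be integrated in practice is given below; the value of α and the forcing series are illustrative assumptions, not the OAFlux data used in the paper:

```python
import numpy as np

ALPHA = 2.0e-4   # thermal expansion coefficient, 1/K (assumed value)
RHO0 = 1025.0    # reference density, kg m^-3
CP = 3990.0      # specific heat of sea water, J kg^-1 K^-1
DT = 30 * 86400  # one-month time step, s

def steric_from_heat_flux(q_net):
    """Cumulative steric height (m) from a monthly net heat-flux series (W m^-2)."""
    q_anom = q_net - q_net.mean()            # subtract the temporal mean
    tendency = ALPHA / (RHO0 * CP) * q_anom  # sea-level tendency, m/s
    return np.cumsum(tendency) * DT

# Illustrative forcing: an annual heat-flux cycle of +-150 W m^-2
t = np.arange(120)  # ten years of monthly data
q_net = 150.0 * np.cos(2 * np.pi * (t - 6) / 12)
eta_hf = steric_from_heat_flux(q_net)  # annual cycle of a few cm amplitude
```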
The 1.5-layer reduced gravity model
To quantify the contribution of Rossby waves to the modulation of the SLAC, we use a 1.5-layer, reduced gravity model forced by wind stress. Under the long-wave and quasi-geostrophic approximations, the equation describing the evolution of sea level, η, can be written as49
$$\frac{{\partial \eta }}{{\partial t}} - C_{\mathrm{R}}\frac{{\partial \eta }}{{\partial x}} + R\eta = - \frac{{g\prime }}{g}{\mathbf{k}} \cdot \nabla \times \left( {\frac{{\mathbf{\tau }}}{{\rho _0f}}} \right),$$
where τ is the wind-stress vector, f is the Coriolis parameter (allowed to vary with latitude), g′ is the reduced gravity, R is the decay rate, CR is the propagation speed of long baroclinic Rossby waves, and k is the vertical unit vector. Here we choose CR = 4 cm s−1, R = (1.5 years)−1, and g′ = 3 cm s−2. Our results are fairly insensitive to the choice of the model parameters within the typical ranges (1 year)−1 ≤ R ≤ (2 years)−1 and 2 cm s−2 ≤ g′ ≤ 4 cm s−2.
We want to calculate η at 26.5°N on the east coast of the Bahamas. This is done by integrating Eq. (5) from a point to the east of the Bahamas, xe, along the baroclinic Rossby wave characteristic
$$\eta \left( x_{\mathrm{B}},t \right) = \eta \left( x_{\mathrm{e}},\,t + \frac{x_{\mathrm{B}} - x_{\mathrm{e}}}{C_{\mathrm{R}}} \right){\mathrm{exp}}\left[ R\left( x_{\mathrm{B}} - x_{\mathrm{e}} \right)/C_{\mathrm{R}} \right] + \frac{g\prime}{gC_{\mathrm{R}}}\int_{x_{\mathrm{e}}}^{x_{\mathrm{B}}} {\mathbf{k}} \cdot \nabla \times \left[ {\boldsymbol{\tau}}\left( x\prime,y,\,t + \frac{x_{\mathrm{B}} - x\prime}{C_{\mathrm{R}}} \right)/\left( \rho _0 f \right) \right]{\mathrm{exp}}\left[ R\left( x_{\mathrm{B}} - x\prime \right)/C_{\mathrm{R}} \right]{\mathrm{d}}x\prime,$$
where xB is the point at which the solution is wanted (i.e., the Bahamas east coast).
Solutions are based on the same wind-stress data that was used to force OCCAM, which allows us to evaluate the performance of the model by comparison with OCCAM. To focus on the frequencies relevant to the SLAC modulator, a Butterworth band-pass filter (lower and higher cutoff frequencies: 1/16 and 1/8 months–1) is applied to the solution.
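The characteristic integration of Eq. (6) can be sketched as follows. This is a toy implementation under simplifying assumptions: the forcing callable stands in for the full wind-stress-curl term on the right-hand side of Eq. (5), and all inputs are illustrative:

```python
import numpy as np

CR = 0.04  # long-Rossby phase speed, m/s (4 cm/s, as in the text)
RDECAY = 1.0 / (1.5 * 365.25 * 86400)  # decay rate R = (1.5 yr)^-1, 1/s

def eta_western_boundary(t, x_grid, eta_east, forcing):
    """Evaluate eta(x_B, t) via Eq. (6).

    x_grid   : zonal positions as distances (m), x_grid[0] = x_B (west),
               x_grid[-1] = x_e (east)
    eta_east : callable eta(t) giving the boundary condition at x_e
    forcing  : callable F(x, t) for the full right-hand side of Eq. (5),
               i.e., -(g'/g) k . curl(tau / (rho0 f))
    """
    x_b, x_e = x_grid[0], x_grid[-1]
    lag = (x_e - x_b) / CR  # Rossby travel time from x_e to x_B, s
    eta = eta_east(t - lag) * np.exp(-RDECAY * lag)
    # accumulate the damped, lagged forcing along the wave characteristic
    f_along = np.array([forcing(x, t - (x - x_b) / CR)
                        * np.exp(-RDECAY * (x - x_b) / CR) for x in x_grid])
    return eta + np.trapz(f_along, x_grid) / CR

# Toy example: no incoming signal at x_e, a stationary sinusoidal curl field
x = np.linspace(0.0, 1.0e6, 101)  # 1000 km of ocean interior, m
eta_b = eta_western_boundary(0.0, x,
                             lambda t: 0.0,
                             lambda x, t: 1e-12 * np.sin(2 * np.pi * x / 5e5))
print(eta_b)
```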
Calculation of the upper mid-ocean transport
Following ref. 50, the UMO transport from OCCAM at 26.5°N has been obtained by first computing the zonally integrated northward geostrophic transport per unit depth, T(z), as
$$T\left( z \right) = \frac{{P_{\mathrm{E}}\left( z \right) - P_{\mathrm{W}}\left( z \right)}}{{\rho _0f}},$$
where PE(z) and PW(z) denote pressure at the eastern and western boundaries of the North Atlantic, respectively, and f is the Coriolis parameter. PE(z) is calculated at the easternmost grid point for any given depth, whereas PW(z) is calculated at the westernmost grid point where water depth is at least 4800 m (i.e., a vertical profile ~25 km from the coast). The meridional transport due to the flow between the western vertical profile and the Bahamas east coast (referred to as the western-boundary wedge), TWBW(z), is estimated directly from the model velocities. The UMO transport is then given by the depth integral of T(z) + TWBW(z) between the surface and 1100 m:
$${\mathrm{UMO}} = \int_{-1100}^{0} \left[ T\left( z \right) + T_{\mathrm{WBW}}\left( z \right) \right]{\mathrm{d}}z.$$
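The following minimal sketch illustrates Eqs. (7) and (8) for synthetic boundary-pressure profiles; the actual calculation uses OCCAM pressure and velocity fields as described above:

```python
import numpy as np

F_COR = 2 * 7.2921e-5 * np.sin(np.radians(26.5))  # Coriolis parameter at 26.5N
RHO0 = 1025.0  # reference density, kg m^-3

def umo_transport(z, p_east, p_west, t_wbw):
    """UMO transport (Sv) from boundary pressure profiles, per Eqs. (7)-(8).

    z      : depths, m (positive downward)
    p_east, p_west : boundary pressure anomaly profiles, Pa
    t_wbw  : western-boundary-wedge transport per unit depth, m^2/s
    """
    t_z = (p_east - p_west) / (RHO0 * F_COR)  # Eq. (7), m^2/s
    top = z <= 1100.0
    return np.trapz(t_z[top] + t_wbw[top], z[top]) / 1e6  # Eq. (8), Sv

# Illustrative input: a 100 Pa east-west pressure difference decaying with depth
z = np.linspace(0.0, 1100.0, 111)
dp = 100.0 * np.exp(-z / 700.0)
print(umo_transport(z, dp, np.zeros_like(z), np.zeros_like(z)))  # ~0.8 Sv
```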
State-space model for the annual cycle
To estimate the instantaneous amplitude and phase of the SLAC, we use a state-space model. The state-space approach provides a powerful framework for addressing problems such as the one at hand in which we wish to estimate, based on indirect information from noisy observations, a set of state variables (e.g., the amplitude of the annual cycle) that are not directly measurable. Here we formulate the inference problem in terms of a non-linear state-space model and adopt a Bayesian approach, thus modeling the unknown static parameters of the model as random variables. For the representation of a system in state-space form, two types of equations are required22. In standard state-space notation, let y1:T = (y1,…,yT) be a sequence of observations (e.g., a tide-gauge record), xt∈ℝn denote the latent state at time t, and θ denote the unknown static parameters of the model. The state-space model then consists of the transition probability density pθ(xt|xt−1) describing the evolution of the state variables with time, and a measurement model linking the observations to the state as defined by the probability density pθ(yt|xt). Our goal is to compute the joint state and parameter posterior distribution given all observations p(θ, x1:T|y1:T).
One key advantage of our method is that it computes estimates conditioned on the full history of sea-level observations, which significantly improves the resolvability of the state variables. In other words, the method uses all available past and future observations to estimate the state of the system at each time step. In addition, the method provides estimates for the entire period covered by the sea-level observations, including the edges of the time series. These are important distinctions to the method based on a harmonic fit to running windows. Another key aspect of our method is that it allows for parameter uncertainty and involves rigorous error propagation, thus providing realistic uncertainty estimates. Furthermore, the method does not rely on large data samples to be accurate, and is relatively insensitive to starting values in the parameters. One limitation of the method is that it is computationally expensive.
Here the sea-level time series are modeled as the sum of four terms: (1) an annual cycle with instantaneous amplitude \(a_t^{\mathrm{a}}\) and phase \(\phi _t^{\mathrm{a}}\); (2) a semi-annual cycle with instantaneous amplitude \(a_t^{{\mathrm{sa}}}\) and phase \(\phi _t^{{\mathrm{sa}}}\); (3) a low-frequency component bt, which includes any existing non-linear trend; and (4) white Gaussian noise et. The measurement model takes the following form:
$$y_t = a_t^{\mathrm{a}}\,{\mathrm{cos}}\,\phi _t^{\mathrm{a}} + a_t^{{\mathrm{sa}}}\,{\mathrm{cos}}\,\phi _t^{{\mathrm{sa}}} + b_t + e_t,\quad e_t\sim {\cal N}\left( {0,\sigma _0^2} \right),$$
where \({\cal N}\left( {m,\sigma ^2} \right)\) denotes the normal distribution of mean m and variance σ2, and \(\sigma _0^2\) is a parameter to be estimated (and thus contained in θ).
Our state-space model has been designed to incorporate realistic dynamics for all state variables while at the same time keeping it simple enough to make Bayesian inference feasible. In this regard, two aspects of the state dynamics in particular merit careful consideration when designing the state transition kernel. First, the frequency of either the annual or semi-annual cycles may change over time but it should not drift too far away from its mean value. In other words, the frequency of the cycles should be stationary, but not an iid process since the frequency should be allowed to deviate from its mean value for certain periods of time. This is achieved by modeling \(\phi _t^{\mathrm{a}}\) as an integrated process of order one with the phase increments \(\omega _t^{\mathrm{a}} = \phi _t^{\mathrm{a}} - \phi _{t - 1}^{\mathrm{a}}\) following a first-order autoregressive (AR1) process (and similarly for the phase of the semi-annual cycle \(\phi _t^{{\mathrm{sa}}}\)).
The second aspect that requires consideration concerns the fact that the amplitude is a non-negative-valued variable. To satisfy this requirement, we model the logarithm of the amplitude of the annual and semi-annual cycles (\(\lambda _t^{\mathrm{a}}\) and \(\lambda _t^{{\mathrm{sa}}}\), respectively), as a random walk, i.e. \(p_{\mathbf{\theta }}\left( {\lambda _t^{\mathrm{a}}|\lambda _{t - 1}^{\mathrm{a}}} \right) = {\cal N}\left( {\lambda _t^{\mathrm{a}};\lambda _{t - 1}^{\mathrm{a}},\sigma _1^2} \right)\), where \(\sigma _1^2\) is a parameter to be estimated. The amplitude is then obtained by taking the exponential of the log amplitude, i.e., \(a_t^{\mathrm{a}} = {\mathrm{exp}}\lambda _t^{\mathrm{a}}\). This approach is standard in Bayesian statistics and has the great advantage of allowing us to place a conjugate prior on the unknown parameter \(\sigma _1^2\) resulting in a closed-form expression for the posterior distribution, thus greatly facilitating the task of sampling from such distribution.
The evolution of the state variables is modeled as follows.
Log amplitude of the annual and semi-annual cycles:
$$\lambda _t^{\mathrm{a}} = \lambda _{t - 1}^{\mathrm{a}} + q_t,\quad q_t \sim {\cal N}\left( {0,\sigma _1^2} \right),$$
$$\lambda _t^{{\mathrm{sa}}} = \lambda _{t - 1}^{{\mathrm{sa}}} + d_t,\quad d_t \sim {\cal N}\left( {0,\sigma _2^2} \right).$$
AR1 process for the phase increments of the annual and semi-annual cycles:
$$\omega _t^{\mathrm{a}} = \omega _{\mathrm{m}}^{\mathrm{a}} + \rho _1\left( {\omega _{t - 1}^{\mathrm{a}} - \omega _{\mathrm{m}}^{\mathrm{a}}} \right) + g_t,\quad g_t\sim {\cal N}\left( {0,\sigma _3^2} \right),$$
$$\omega _t^{{\mathrm{sa}}} = \omega _{\mathrm{m}}^{{\mathrm{sa}}} + \rho _2\left( {\omega _{t - 1}^{{\mathrm{sa}}} - \omega _{\mathrm{m}}^{{\mathrm{sa}}}} \right) + s_t,\quad s_t\sim {\cal N}\left( {0,\sigma _4^2} \right).$$
Phase of the annual and semi-annual cycles:
$$\phi _t^{\mathrm{a}} = \phi _{t - 1}^{\mathrm{a}} + \omega _t^{\mathrm{a}},$$
$$\phi _t^{{\mathrm{sa}}} = \phi _{t - 1}^{{\mathrm{sa}}} + \omega _t^{{\mathrm{sa}}}.$$
Low-frequency component:
$$b_t = b_{t - 1} + v_t,\quad v_t\sim {\cal N}\left( {0,\sigma _5^2} \right),$$
where \(\omega _{\mathrm{m}}^{\mathrm{a}}\) and \(\omega _{\mathrm{m}}^{{\mathrm{sa}}}\) represent the mean frequency of the annual and semi-annual cycles, respectively, and hence their value is set equal to 2π/12 and 2π/6 (for monthly data). \({\mathbf{x}}_t = \left( {\lambda _t^{\mathrm{a}},\lambda _t^{{\mathrm{sa}}},\omega _t^{\mathrm{a}},\omega _t^{{\mathrm{sa}}},\phi _t^{\mathrm{a}},\phi _t^{{\mathrm{sa}}},b_t} \right)\) is the latent state at time t, whereas \({\mathbf{\theta }} = \left( {\rho _1,\rho _2,\sigma _0^2,\sigma _1^2,\sigma _2^2,\sigma _3^2,\sigma _4^2,\sigma _5^2} \right)\) are the unknown parameters of the model. Equations (9–16) form our state-space model.
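To make the generative structure concrete, the sketch below simulates Eqs. (9)–(16) forward in time. The parameter values are illustrative choices, not posterior estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 600  # 50 years of monthly data
om_a, om_sa = 2 * np.pi / 12, 2 * np.pi / 6  # mean annual/semi-annual frequencies
rho1 = rho2 = 0.8                            # AR1 coefficients (illustrative)
sig = dict(y=1.0, lam_a=0.02, lam_sa=0.02, om_a=0.002, om_sa=0.002, b=0.3)

# Eqs. (10)-(11): random walks for the log amplitudes
lam_a = np.log(10.0) + np.cumsum(rng.normal(0, sig['lam_a'], T))
lam_sa = np.log(3.0) + np.cumsum(rng.normal(0, sig['lam_sa'], T))

# Eqs. (12)-(13): AR1 processes for the phase increments
w_a, w_sa = np.full(T, om_a), np.full(T, om_sa)
for t in range(1, T):
    w_a[t] = om_a + rho1 * (w_a[t-1] - om_a) + rng.normal(0, sig['om_a'])
    w_sa[t] = om_sa + rho2 * (w_sa[t-1] - om_sa) + rng.normal(0, sig['om_sa'])

# Eqs. (14)-(16): integrated phases and the low-frequency component
phi_a, phi_sa = np.cumsum(w_a), np.cumsum(w_sa)
b = np.cumsum(rng.normal(0, sig['b'], T))

# Eq. (9): the simulated monthly sea-level observations
y = (np.exp(lam_a) * np.cos(phi_a) + np.exp(lam_sa) * np.cos(phi_sa)
     + b + rng.normal(0, sig['y'], T))
```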
Bayesian inference in state-space models relies on evaluation of the joint posterior density p(θ, x1:T|y1:T), which for our non-linear model does not admit a closed-form expression. To perform inference in our model, we use a recently introduced class of algorithms named particle Markov chain Monte Carlo (MCMC) samplers51, which enables us to sample efficiently from p(θ, x1:T|y1:T) within an MCMC scheme. In particular, we use a state-of-the-art particle MCMC sampler referred to as particle Gibbs with ancestor sampling52 (PGAS), which has been shown to provide rapid mixing of the Markov kernel even when using few particles in the underlying particle filter.
One special feature of our state-space model is that the state transition kernel is degenerate in the sense that the process noise associated with either \(\phi _t^{\mathrm{a}}\) or \(\phi _t^{{\mathrm{sa}}}\) is exactly zero, which renders PGAS inapplicable in its standard form. To address this issue and enable inference in our degenerate model, we use a modification of PGAS known as particle rejuvenation53. With this modification, the algorithm to sample from p(θ, x1:T|y1:T) consists of sampling iteratively from p(θ|x1:T,y1:T) and pθ(x1:T|y1:T) as follows:
Step 1: set θ(0) and x1:T(0) arbitrarily.
Step 2: for each iteration i ≥ 1:
Step 2a: draw θ(i) ~ p(θ|x1:T(i−1), y1:T), and
Step 2b: sample x1:T(i) from the PGAS Markov kernel (with particle rejuvenation) targeting pθ(i)(x1:T|y1:T), conditional on x1:T(i−1).
Step 2a requires that we ascribe prior distributions to all the static parameters. For the variance parameters \(\left( {\sigma _i^2} \right)_{i = 0:5}\), we use a non-informative inverse gamma prior, \({\cal I}{\mathrm{G}}\left( {{\mathrm{0}}{\mathrm{.01,0}}{\mathrm{.01}}} \right)\), while the AR1 coefficients (ρ1, ρ2) are assigned a uniform prior \({\cal U}\left( {{\mathrm{0,1}}} \right)\). The inverse gamma distribution as a prior for variance parameters is a standard choice in Gaussian models because it gives a closed-form expression for the posterior by virtue of its conjugate form. Setting its two hyperparameters (shape and scale) to a small number (e.g., 0.01) defines a non-informative (or weakly informative) prior that has little effect on the posterior, and thus on our inference. Therefore, it is crucial to note that all the static parameters along with the latent state are inferred from the observations by PGAS without any manual tweaking.
For the particle filter, we use a bootstrap implementation with the number of particles set equal to the length of the time series (i.e., T); the algorithm does not rely on asymptotics in the number of particles to be correct, but a higher number of particles improves the mixing speed. The number of iterations is set to 20,000 with a burn-in period of 2000. All credible intervals shown in this paper represent the highest posterior density interval (i.e., the interval with the smallest width among all the credible intervals for a specified significance level) and are computed using the Chen-Shao algorithm54. As an illustration, estimates of the state variables for the Key West tide gauge derived using our method, along with the trace plots for the eight static parameters of the model, are shown in Supplementary Fig. 5.
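Although the full PGAS sampler is beyond the scope of a short example, the bootstrap particle filter at its core can be illustrated for a stripped-down model in which only the log amplitude evolves and the phase and variances are fixed and known. This is a toy sketch of the machinery, not the sampler used in this paper:

```python
import numpy as np

def bootstrap_filter(y, n_particles=500, sig_lam=0.05, sig_y=1.0, seed=0):
    """Filtered mean of the annual amplitude exp(lambda_t) in a toy model."""
    rng = np.random.default_rng(seed)
    lam = rng.normal(np.log(np.std(y)), 0.5, n_particles)  # initial particles
    amp = np.empty(len(y))
    for t in range(len(y)):
        lam = lam + rng.normal(0, sig_lam, n_particles)  # random-walk proposal
        pred = np.exp(lam) * np.cos(2 * np.pi * t / 12)  # measurement model
        logw = -0.5 * ((y[t] - pred) / sig_y) ** 2       # Gaussian log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        amp[t] = np.sum(w * np.exp(lam))                 # filtered estimate
        lam = lam[rng.choice(n_particles, n_particles, p=w)]  # resample
    return amp

# Toy data: a decadally modulated annual cycle plus noise
rng = np.random.default_rng(2)
t = np.arange(480)
y = ((8 + 3 * np.sin(2 * np.pi * t / 120)) * np.cos(2 * np.pi * t / 12)
     + rng.normal(0, 1, t.size))
amp_hat = bootstrap_filter(y)  # tracks the slow amplitude modulation
```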
To demonstrate the high skill of our method, we have performed a numerical experiment on synthetic data. In particular, we have generated a synthetic time series containing a predetermined time-varying annual cycle, a low-frequency component, and realistic noise. The prescribed annual amplitude has variations similar to those observed in tide-gauge records. We have then applied our method to infer the annual amplitude from the synthetic time series and have compared our estimates with those obtained using a harmonic fit to 5-year running windows. Estimates are computed for 100 different realizations of the noise. The results are shown in Supplementary Fig. 6. The range of individual estimates based on the state-space model fully encompasses the true amplitude at all time steps. Furthermore, nearly all estimates fall within the 95% credible interval computed by PGAS, indicating that our method provides realistic uncertainty estimates. In addition, the mean of the 100 individual estimates matches the true amplitude almost exactly, meaning that our method gives unbiased estimates of the amplitude. In contrast, the range of individual estimates computed using the method of running windows does not entirely contain the true amplitude, and the mean consistently underestimates it, indicating that this method is a biased estimator of the amplitude. Note also that the windowing method does not provide estimates within half the window size from the edges of the time series.
We have also performed a simple residual check to assess whether the assumption of white-noise innovations holds. In particular, we have computed the residuals in Eqs. (9–16) based on the 18,000 samples from the posterior distribution. This results in 18,000 time series of residuals for each equation and tide-gauge station. Then, we have computed the AR1 coefficient of the residuals for each sample and averaged over the 18,000 samples. Values of the mean AR1 coefficient fall within the range (−0.03, 0.15) for all equations and tide-gauge stations, which gives us confidence in the correctness of the model.
Finally, it is worth mentioning that we have tested a slightly modified version of the model that included an AR1 term in addition to the four terms of Eq. (9), but found that such a model yielded almost identical results to the model without the AR1 process. Furthermore, in most cases the MCMC chain exhibits better mixing in the simpler model. This indicates that the benefit of adding an AR1 process does not outweigh the cost of the increased complexity, and thus we opted for the simpler model.
State-space model for the UMO variance
To estimate the instantaneous variance of the UMO transport time series, we use a stochastic variance model, which is given by the following state-space model:
$$m_t = m_{t-1} + j_t, \quad j_t \sim \mathcal{N}(0, \sigma_{\mathrm{m}}^2),$$
$$n_t = n_{t-1} + p_t, \quad p_t \sim \mathcal{N}(0, \sigma_{\mathrm{n}}^2),$$
$$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + h_t, \quad h_t \sim \mathcal{N}(0, \kappa),$$
$$y_t = \exp(n_t/2)\, u_t + m_t + k_t, \quad k_t \sim \mathcal{N}(0, \sigma_{\mathrm{y}}^2),$$
where now \(y_{1:T} = (y_1, \ldots, y_T)\) denotes the UMO transport time series, \(m_t\) represents low-frequency variations in the UMO, \(u_t\) is a second-order autoregressive (AR2) process used to model the high-frequency (intra- to inter-annual) variations in the UMO, and \(\exp(n_t)\) is the instantaneous variance of the AR2 process (the quantity that we use as a measure of the intensity of the UMO variations). The unknown parameters of this model are \({\mathbf{\theta}} = (\rho_1, \rho_2, \sigma_{\mathrm{m}}^2, \sigma_{\mathrm{n}}^2, \sigma_{\mathrm{y}}^2)\), and \(\kappa = \frac{1 + \rho_2}{1 - \rho_2}\left[(1 - \rho_2)^2 - \rho_1^2\right]\) is fixed to ensure that \(u_t\) has variance equal to one. For this model, the latent state at time t is \(x_t = (m_t, n_t, u_t)\). Inference in this stochastic model is performed by PGAS, in the same way as for the seasonal state-space model described in the previous section. To do this, we assign a non-informative inverse gamma prior, \(\mathcal{IG}(0.01, 0.01)\), to the variance parameters, and uniform priors to the coefficients of the AR2 process to ensure that the stationarity conditions (\(\rho_1 + \rho_2 < 1\), \(\rho_2 - \rho_1 < 1\), and \(|\rho_2| < 1\)) are satisfied.
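To make the model concrete, here is a sketch that forward-simulates the four equations above for hypothetical parameter values (inference itself requires PGAS, which is not shown):

```python
import numpy as np

def simulate_sv_model(T, rho1, rho2, sig_m2, sig_n2, sig_y2, seed=0):
    """Forward-simulate the four equations above (parameter values are
    hypothetical and must satisfy the AR2 stationarity conditions)."""
    rng = np.random.default_rng(seed)
    # kappa fixed so that the AR2 process u_t has unit variance, as in the text
    kappa = (1 + rho2) / (1 - rho2) * ((1 - rho2) ** 2 - rho1 ** 2)
    m, n, u, y = (np.zeros(T + 2) for _ in range(4))
    for t in range(2, T + 2):
        m[t] = m[t - 1] + rng.normal(0.0, np.sqrt(sig_m2))   # random-walk low-frequency level
        n[t] = n[t - 1] + rng.normal(0.0, np.sqrt(sig_n2))   # random-walk log-variance
        u[t] = rho1 * u[t - 1] + rho2 * u[t - 2] + rng.normal(0.0, np.sqrt(kappa))
        y[t] = np.exp(n[t] / 2.0) * u[t] + m[t] + rng.normal(0.0, np.sqrt(sig_y2))
    return y[2:], m[2:], n[2:], u[2:]

# e.g., y, m, n, u = simulate_sv_model(600, 0.7, 0.2, 1e-4, 1e-3, 1e-2)
```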
Definition of the modulator
Amplitude modulation is mathematically expressed as
$$y_{\mathrm{m}}(t) = \left[1 + g(t)\right]\underbrace{A\cos(2\pi f t)}_{y(t)},$$
where g(t) is a modulation signal and y(t) is a periodic signal of frequency f and constant amplitude A (often termed the carrier in radio communications and related disciplines).
For the sake of simplicity, let us assume that g(t) is a sinusoid of frequency f_m (typically f_m ≪ f), phase φ, and constant amplitude B < 1. Equation (21) can then be rewritten as
$$y_{\mathrm{m}}(t) = \left[1 + B\cos(2\pi f_{\mathrm{m}} t + \varphi)\right] A\cos(2\pi f t),$$
which, by using the identity \(\cos a \cos b = \tfrac{1}{2}\left[\cos(a + b) + \cos(a - b)\right]\), can be rearranged as
$$y_{\mathrm{m}}(t) = y(t) + \underbrace{\frac{AB}{2}\left[\cos\left(2\pi (f + f_{\mathrm{m}}) t + \varphi\right) + \cos\left(2\pi (f - f_{\mathrm{m}}) t - \varphi\right)\right]}_{\mathrm{modulator}}.$$
Hence, the amplitude-modulated signal, \(y_{\mathrm{m}}(t)\), can be viewed as the sum of the unmodulated signal (or carrier), y(t), and two sinusoids (referred to as the upper and lower sidebands) with frequencies equal to the sum and difference of the carrier and modulation frequencies. The sum of the two sidebands is what we refer to as the modulator. Note that, in the context of this paper, y(t) represents the mean SLAC because it is what we obtain if we fit an annual harmonic to the modulated SLAC.
In practice, g(t) will take a more complicated form than a sinusoid. However, by Fourier analysis, any general function can be written in terms of sinusoids, which implies that ym(t) can always be put in the form of Eq. (23), with two sidebands for each frequency component of g(t).
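The sideband decomposition is easy to verify numerically; the following snippet checks Eq. (23) for illustrative values of A, B, f, f_m and φ (none of these values come from the paper):

```python
import numpy as np

# Illustrative values (not from the paper) for A, B, f, f_m and phi.
t = np.linspace(0.0, 10.0, 5000)
A, B, f, f_m, phi = 1.0, 0.4, 1.0, 0.1, 0.3
y_mod = (1 + B * np.cos(2 * np.pi * f_m * t + phi)) * A * np.cos(2 * np.pi * f * t)
carrier = A * np.cos(2 * np.pi * f * t)
sidebands = 0.5 * A * B * (np.cos(2 * np.pi * (f + f_m) * t + phi)
                           + np.cos(2 * np.pi * (f - f_m) * t - phi))
assert np.allclose(y_mod, carrier + sidebands)  # Eq. (23) holds to machine precision
```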
The modulator is not a direct output of our state-space model; however, it can easily be computed as:
$$m(t) = \mathrm{Res}\left[a^{\mathrm{a}}(t)\cos\phi^{\mathrm{a}}(t)\right],$$
where \(a^{\mathrm{a}}(t)\) and \(\phi^{\mathrm{a}}(t)\) are the estimates of the annual amplitude and phase provided by the state-space model, and Res denotes the residual after subtraction of the mean annual cycle. To emphasize the timescales of interest, we apply a Butterworth band-pass filter (lower and upper cutoff frequencies: 1/16 and 1/8 month⁻¹) to the output of Eq. (24). Note that, from Eq. (23), such a band-pass filter in the modulator domain is equivalent to a 4-year low-pass filter in the amplitude domain.
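A possible implementation of Eq. (24) plus the band-pass step is sketched below for monthly data; the harmonic-fit reading of the Res operator and the filter order are our assumptions, not details stated in the text:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def modulator(a_annual, phi_annual, fs=1.0, order=4):
    """Sketch of Eq. (24): build a(t)*cos(phi(t)), remove the mean annual
    cycle via a least-squares annual harmonic fit (our reading of 'Res'),
    then band-pass between 1/16 and 1/8 cycles per month. fs is in samples
    per month; the filter order is an assumption."""
    s = a_annual * np.cos(phi_annual)
    t = np.arange(len(s))
    X = np.column_stack([np.cos(2 * np.pi * t / 12.0),
                         np.sin(2 * np.pi * t / 12.0),
                         np.ones(len(s))])
    coef, *_ = np.linalg.lstsq(X, s, rcond=None)
    res = s - X @ coef                                  # residual after the mean annual cycle
    b, a = butter(order, [1.0 / 16.0, 1.0 / 8.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, res)
```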
As an illustration, we show the power spectral density of the SLAC modulator for the Key West and St. Petersburg tide gauges (Supplementary Fig. 3). Both tide gauges display two dominant sidebands at ~10.5 and ~13.8 months, which from Eq. (23) implies that the amplitude of the SLAC has most of its energy at periods of ~7 years. The two additional sidebands at nearly 12 months represent lower-frequency (>30 years) variability of the annual amplitude. Similar spectral peaks are observed at all tide gauges.
Statistical significance of correlations
The significance of cross-correlations is quantified by using the non-parametric random-phase test described by ref. 55, which accounts for serial correlation in the time series. Here we use 10,000 random-phase simulations.
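A minimal sketch of such a test is given below; it follows the general recipe of ref. 55 (surrogates share the power spectrum of one series but have randomized Fourier phases), though the details of the published implementation may differ:

```python
import numpy as np

def random_phase_pvalue(x, y, n_sim=10000, seed=0):
    """Sketch of a random-phase significance test in the spirit of ref. 55:
    surrogates of x keep its power spectrum but get random Fourier phases
    (edge bins are handled loosely here; the published algorithm may differ)."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(x, y)[0, 1]
    X = np.fft.rfft(x - np.mean(x))
    amp = np.abs(X)
    r_sim = np.empty(n_sim)
    for i in range(n_sim):
        phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
        phases[0] = 0.0                                   # keep the (zero) mean real
        surrogate = np.fft.irfft(amp * np.exp(1j * phases), n=len(x))
        r_sim[i] = np.corrcoef(surrogate, y)[0, 1]
    return float(np.mean(np.abs(r_sim) >= abs(r_obs)))   # two-sided p-value
```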
C++ code for the state-space models is available from the corresponding author upon request.
All data sets analyzed during this study are publicly available from the links provided in the Methods section. The data from the OCCAM model are available to anyone from the corresponding author upon request.
The original version of this Article contained an error in the first sentence in the legend of Fig. 1, which incorrectly read 'The first letter of 'Hatteras' should be capitalized, in both Figure 1a and 1b since Hatteras is a proper noun.' The correct version removes this sentence. This has been corrected in both the PDF and HTML versions of the Article.
Morris, J. T., Kjerfve, B. & Dean, J. M. Dependence of estuarine productivity on anomalies in mean sea level. Limnol. Oceanogr. 35, 926–930 (1990).
Morris, J. T. in Estuarine Science: A Synthetic Approach to Research and Practice (ed Hobbie, J.) 107–127 (Island Press, Washington, D.C., 2000).
Theuerkauf, E. J., Rodriguez, A. B., Fegley, S. R. & Luettich, R. A. Jr. Sea level anomalies exacerbate beach erosion. Geophys. Res. Lett. 41, 5139–5147 (2014).
Gonneea, M. E., Mulligan, A. E. & Charette, M. A. Climate-driven sea level anomalies modulate coastal groundwater dynamics and discharge. Geophys. Res. Lett. 40, 2701–2706 (2013).
Moftakhari, H. R. et al. Increased nuisance flooding along the coasts of the United States due to sea level rise: past and future. Geophys. Res. Lett. 42, 9846–9852 (2015).
Pugh, D. T. Tides, Surges and Mean Sea-Level: A Handbook for Engineers and Scientists (John Wiley, New York, 1987).
Gill, A. E. & Niiler, P. P. The theory of the seasonal variability in the ocean. Deep Sea Res. 20, 141–177 (1973).
Tsimplis, M. N. & Woodworth, P. L. The global distribution of the seasonal sea level cycle calculated from coastal tide gauge data. J. Geophys. Res. 99, 16031–16039 (1994).
Vinogradov, S. V., Ponte, R. M., Heimbach, P. & Wunsch, C. The mean seasonal cycle in sea level estimated from a data-constrained general circulation model. J. Geophys. Res. 113, C03032 (2008).
Plag, H. P. & Tsimplis, M. N. Temporal variability of the seasonal sea-level cycle in the North Sea and Baltic Sea in relation to climate variability. Glob. Planet. Change 20, 173–203 (1999).
Marcos, M. & Tsimplis, M. N. Variations of the seasonal sea level cycle in southern Europe. J. Geophys. Res. 112, C12011 (2007).
Barbosa, S. M., Silva, M. F. & Fernandes, M. J. Changing seasonality in North Atlantic coastal sea level from the analysis of long tide gauge records. Tellus 60A, 165–177 (2008).
Hünicke, B. & Zorita, E. Trends in the amplitude of Baltic Sea level annual cycle. Tellus 60A, 154–164 (2008).
Dangendorf, S. et al. Mean sea level variability and influence of the North Atlantic Oscillation on long-term trends in the German Bight. Water 4, 170–195 (2012).
Torres, R. R. & Tsimplis, M. N. Seasonal sea level cycle in the Caribbean Sea. J. Geophys. Res. 117, C07011 (2012).
Wahl, T., Calafat, F. M. & Luther, M. E. Rapid changes in the seasonal sea level cycle along the US Gulf coast from the late 20th century. Geophys. Res. Lett. 41, 491–498 (2014).
Feng, X. et al. Spatial and temporal variations of the seasonal sea level cycle in the northwest Pacific. J. Geophys. Res. Oceans 120, 7091–7112 (2015).
Amiruddin, A. M., Haigh, I. D., Tsimplis, M. N., Calafat, F. M. & Dangendorf, S. The seasonal cycle and variability of sea level in the South China Sea. J. Geophys. Res. Oceans 120, 5490–5513 (2015).
Gill, A. E. Atmosphere-Ocean Dynamics (Academic, San Diego, 1982).
Pikovsky, A., Rosenblum, M., Osipov, G. & Kurths, J. Phase synchronization of chaotic oscillators by external driving. Phys. D 104, 219–238 (1997).
Wu, Z. et al. The modulated annual cycle: an alternative reference frame for climate anomalies. Clim. Dyn. 31, 823–841 (2008).
Särkkä, S. Bayesian Filtering and Smoothing 3rd edn (Cambridge University Press, Cambridge, 2013).
McCarthy, G. D. et al. Measuring the Atlantic meridional overturning circulation at 26°N. Prog. Oceanogr. 130, 91–111 (2015).
Thompson, K. R. North Atlantic sea-level and circulation. Geophys. J. R. Astron. Soc. 87, 15–32 (1986).
Woodworth, P. L., Maqueda, M. Á. M., Roussenov, V. M., Williams, R. G. & Hughes, C. W. Mean sea-level variability along the northeast American Atlantic coast and the roles of the wind and the overturning circulation. J. Geophys. Res. Oceans 119, 8916–8935 (2014).
Gill, A. E. & Clarke, A. J. Wind-induced upwelling, coastal currents, and sea-level changes. Deep Sea Res. 21, 325–345 (1974).
Huthnance, J. M. On coastal trapped waves: analysis and numerical calculation by inverse iteration. J. Phys. Oceanogr. 8, 74–92 (1978).
Enfield, D. B. & Allen, J. S. On the structure and dynamics of monthly mean sea level anomalies along the Pacific coast of North and South America. J. Phys. Oceanogr. 10, 557–578 (1980).
Hughes, C. W. & Meredith, P. M. Coherent sea-level fluctuations along the global continental slope. Philos. Trans. R. Soc. A 364, 885–901 (2006).
Calafat, F. M., Chambers, D. P. & Tsimplis, M. N. Mechanisms of decadal sea level variability in the eastern North Atlantic and the Mediterranean Sea. J. Geophys. Res. 117, C09022 (2012).
Marshall, D. P. & Johnson, H. L. Propagation of meridional circulation anomalies along western and eastern boundaries. J. Phys. Oceanogr. 43, 2699–2717 (2013).
Cabanes, C., Huck, T. & deVerdière, A. C. Contributions of wind forcing and surface heating to interannual sea level variations in the Atlantic Ocean. J. Phys. Oceanogr. 36, 1739–1750 (2006).
Piecuch, C. G. & Ponte, R. M. Buoyancy-driven interannual sea level changes in the southeast tropical Pacific. Geophys. Res. Lett. 39, L05607 (2012).
Hong, B. G., Sturges, W. & Clarke, A. J. Sea level on the U.S. east coast: decadal variability caused by open ocean wind-curl forcing. J. Phys. Oceanogr. 30, 2088–2098 (2000).
Chelton, D. B. & Schlax, M. G. Global observations of oceanic Rossby waves. Science 272, 234–238 (1996).
Kanzow, T. et al. Basinwide integrated volume transports in an eddy-filled ocean. J. Phys. Oceanogr. 39, 3091–3110 (2009).
Clément, L., Frajka-Williams, E., Szuts, Z. B. & Cunningham, S. A. Vertical structure of eddies and Rossby waves, and their effect on the Atlantic meridional overturning circulation at 26.5°N. J. Geophys. Res. Oceans 119, 6479–6498 (2014).
Frajka-Williams, E., Johns, W. E., Meinen, C. S., Beal, L. M. & Cunningham, S. A. Eddy impacts on the Florida current. Geophys. Res. Lett. 40, 349–353 (2013).
Frajka-Williams, E. et al. Compensation between meridional flow components of the Atlantic MOC at 26°N. Ocean Sci. 12, 481–493 (2016).
Hirschi, J. J.-M., Killworth, P. D. & Blundell, J. R. Subannual, seasonal, and interannual variability of the North Atlantic meridional overturning circulation. J. Phys. Oceanogr. 37, 1246–1265 (2007).
Holgate, S. J. et al. New data systems and products at the permanent service for mean sea level. J. Coast. Res. 29, 493–504 (2013).
Compo, G. P. et al. The twentieth century reanalysis project. Q. J. R. Meteorol. Soc. 137, 1–28 (2011).
Kalnay, E. et al. The NCEP/NCAR 40-year reanalysis project. Bull. Am. Meteorol. Soc. 77, 437–471 (1996).
Smeed, D. et al. Atlantic Meridional Overturning Circulation Observed by the RAPID-MOCHA-WBTS (RAPID-Meridional Overturning Circulation and Heatflux Array-Western Boundary Time Series) Array at 26N from 2004 to 2015 (British Oceanographic Data Centre—Natural Environment Research Council, Southampton, 2016).
Coward, A. & de Cuevas, B. The OCCAM 66 Level Model: Physics, Initial Conditions and External Forcing Internal Rep. 99, 58 (National Oceanography Centre, Southampton, 2005).
Marzocchi, A. et al. The North Atlantic subpolar circulation in an eddy-resolving global ocean model. J. Mar. Syst. 142, 126–143 (2015).
Carton, J. A. & Giese, B. S. A reanalysis of ocean climate using simple ocean data assimilation (SODA). Mon. Weather Rev. 136, 2999–3017 (2008).
Levitus, S. Climatological Atlas of the World Ocean (Geophysical Fluid Dynamics Laboratory, Princeton, 1982).
Capotondi, A., Alexander, M. A. & Deser, C. Why are there Rossby wave maxima in the Pacific at 10°S and 13°N? J. Phys. Oceanogr. 33, 1549–1563 (2003).
Kanzow, T. et al. A prototype system for observing the Atlantic meridional overturning circulation: scientific basis, measurement and risk mitigation strategies, and first results. J. Oper. Oceanogr. 1, 19–28 (2008).
Andrieu, C., Doucet, A. & Holenstein, R. Particle Markov chain Monte Carlo methods. J. R. Stat. Soc. 72, 269–342 (2010).
Lindsten, F., Jordan, M. I. & Schön, T. B. Particle Gibbs with ancestor sampling. J. Mach. Learn. Res. 15, 2145–2184 (2014).
Lindsten, F., Bunch, P., Singh, S. S. & Schön, T. B. Particle ancestor sampling for near-degenerate or intractable state transition models. Preprint at https://arxiv.org/abs/1505.06356 (2015).
Chen, M. H. & Shao, Q. M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 8, 69–92 (1999).
Ebisuzaki, W. A method to estimate the statistical significance of a correlation when data are serially correlated. J. Clim. 10, 2147–2153 (1997).
We acknowledge the PSMSL for the tide-gauge data, the OCCAM, NEMO, and SODA projects for the ocean model data, the RAPID-WATCH MOC for the UMO data, the WHOI OAFlux project for the heat flux data, CMEMS for the altimetry data, and NOAA AOML for the Florida current transport time series. E.F.-W. was supported by a Leverhulme Trust Research Fellowship. Plotting was done in Python using the Matplotlib and Basemap libraries. This work has been partially supported by the Natural Environment Research Council (NERC) National Capability funding. F.L. was supported by the Swedish Research Council (ref no: 2016-04278) and the Swedish Foundation for Strategic Research (ref no: ICA16-0015). We thank Chris W. Hughes for helpful discussions.
National Oceanography Centre, Joseph Proudman Building, 6 Brownlow Street, Liverpool, L3 5DA, UK
Francisco M. Calafat & Joanne Williams
Department of Civil, Environmental and Construction Engineering and National Center for Integrated Coastal Research, University of Central Florida, 12800 Pegasus Drive, Suite 211, Orlando, 32816-2450, FL, USA
Thomas Wahl
Department of Information Technology, Uppsala University, Lägerhyddsv. 2, hus 2, Uppsala, 752 37, Sweden
Fredrik Lindsten
Ocean and Earth Sciences, University of Southampton, European Way, Southampton, SO14 3ZH, UK
Eleanor Frajka-Williams
Francisco M. Calafat
Joanne Williams
This study was conceived by F.M.C. with contributions from the other co-authors. F.M.C. designed and implemented the state-space models, coded the reduced gravity model, performed the data analysis, and produced the figures. T.W. provided the time series of the annual amplitude based on the windowing method. F.L. aided with the implementation of PGAS and the development of the state-space models. J.W. provided the Matlab code to read the OCCAM data. E.F.-W. assisted with the calculation of the UMO. F.M.C. wrote the paper with input from the other co-authors. All authors discussed the results and implications and commented on the paper.
Correspondence to Francisco M. Calafat.
Calafat, F.M., Wahl, T., Lindsten, F. et al. Coherent modulation of the sea-level annual cycle in the United States by Atlantic Rossby waves. Nat Commun 9, 2571 (2018). https://doi.org/10.1038/s41467-018-04898-y
Research on dynamic creep strain and settlement prediction under the subway vibration loading
Junhui Luo & Linchang Miao (ORCID: orcid.org/0000-0002-1453-7445)
This research aims to explore the dynamic characteristics and settlement prediction of soft soil. To this end, a dynamic shear modulus formula that accounts for the vibration frequency was adopted, and dynamic triaxial tests were conducted to verify the validity of the formula. The formula was then applied within the dynamic creep strain function, and the factors influencing the improved dynamic creep strain curve of soft soil were analyzed. Meanwhile, the variation of dynamic stress with sampling depth was obtained through finite element simulation of the subway foundation, and the improved dynamic creep strain curve of each soil layer was determined from that dynamic stress. The long-term settlement under subway vibration loading could then be estimated following the design code. The results reveal that the dynamic shear modulus formula is straightforward and practical in its treatment of vibration frequency. The values predicted using the improved dynamic creep strain formula are close to the experimental values, whilst the estimated settlement is close to the measured values obtained in the field test.
With the rapid development of the nation's economy, subways have become an important traffic reduction strategy in congested urban areas in China (Guo et al. 2012). Nevertheless, subway vibration loading may lead to non-uniform settlement of the subway foundation, which in turn may cause cracking of the tunnel lining and other structural cracking; the surrounding environment is also adversely affected by the vibration (Luo et al. 2015). Hence, it is necessary to estimate the settlement during operation, and this represents a key research topic in the design of urban mass transit.
The dynamic modulus is an important parameter used to evaluate the dynamic characteristics of soil. It is primarily determined in three ways, namely the field shear wave test, empirical calculation and estimation methods (Kyung and Yoo 2014). Nonetheless, the relationship between the dynamic modulus and the dynamic creep strain has not been established, and a systematic settlement-estimation method for evaluating the dynamic characteristics of subway foundations is likewise lacking.
Numerous research projects have been conducted on the dynamic creep strain function. For instance, Monismith et al. (1975) presented an exponential empirical formula for time-dependent dynamic strain. Li and Selig (1996) and Chai and Miura (2002) proposed improved models based on Monismith's theory and applied them to settlement prediction; nonetheless, none of them evaluated the dynamic characteristics of the soil. The Singh–Mitchell exponential empirical creep strain model (Singh and Mitchell 1968) does evaluate the engineering characteristics of soil, but it is only applicable for shear stress levels in the range from 20 to 80 %; when the shear stress level equals zero, it predicts a strain below zero, an erroneous result. In an attempt to address this issue, Mesri modified the Singh–Mitchell model to devise the Mesri empirical model (Mesri et al. 1981; Kondner 1963), which can be evaluated at arbitrary shear stress levels. Accordingly, it is suitable for the analysis of low stress levels, including subway vibration loading.
Meanwhile, the dynamic stress amplitude is required for estimating the settlement of a subway foundation, and the finite element method is commonly utilized for this purpose (Olsson and Kallsner 2015). Metrikine and Vrouwenvelder (2000) and Paulo et al. (2015) devised subway finite element models to determine the dynamic stress and analyzed the dynamic response under vibration loading. In order to analyze the dynamic characteristics of the subway foundation soil along different directions during operation, Forrest and Hunt (2006) created a 3D finite element model to estimate the settlement.
As such, this research focuses on the dynamic characteristics of soft soil and on settlement prediction. A dynamic shear modulus formula considering the vibration frequency was established and then introduced into the dynamic creep strain function. Subsequently, the dynamic stress of the subway foundation was obtained through finite element simulation, the improved dynamic creep strain curve was determined from that dynamic stress, and the settlement of the subway foundation was thereby estimated. This study should help to inform new theoretical research on the dynamic characteristics and settlement prediction of soft soil, and it has practical guiding significance for subway engineering.
Experimental study on dynamic shear modulus
Analysis of dynamic triaxial test
Shield segments can be affected by subway vibration loading during operation (Shen et al. 2014). In order to simulate the subway vibration loading on soil, a dynamic triaxial test was performed.
Firstly, a thin soil sampler (shown in Fig. 1) was used to collect in-situ undisturbed soil. The undisturbed soil was obtained from the Hexi area of Nanjing, China, through which Nanjing Metro Line 2 passes. To facilitate testing, the undisturbed soil was trimmed into cylindrical samples 38 mm in diameter and 75 mm in height. Sample preparation was performed carefully to ensure that the structure of the undisturbed samples was not disturbed.
Thin soil sampler
Thereafter, the physical and mechanical indices of the undisturbed soil were obtained through standard laboratory experiments, as detailed in Table 1.
Table 1 Physical and mechanical properties of soft clay
Studies have shown that subway loading produces a vibration waveform similar to a sine wave (Rucker 1977). On this basis, the loading mode used in the dynamic triaxial test is shown in Fig. 2, where \(\sigma_{3}^{'}\) is the effective confining pressure and \(\sigma_{3}^{'} + \sigma_{d}\) the total vertical effective dynamic stress. The dynamic triaxial test was performed on the undisturbed cylindrical samples (left-hand section of Fig. 2); the test process is shown on the right-hand side of Fig. 2. Firstly, the saturation stage is carried out (0–A); the sample is then subjected to the effective confining pressure \(\sigma_{3}^{'}\) during the isotropic consolidation process (A–B), and finally the vertical effective load is applied during the vibration stage (B–C) (Zhang et al. 2013). For instance, when the effective confining pressure \(\sigma_{3}^{'}\) was 75 kPa and the dynamic stress σd was 8.3 kPa, the median value of the dynamic stress was 4.15 kPa and the total vertical effective dynamic stress \(\sigma_{3}^{'} + \sigma_{d}\) was 83.3 kPa.
Loading mode
Figure 3 shows the dynamic triaxial apparatus. During operation, the vibration shaft (controlled by the vibration controller) drives the oscillating vibration head up and down, so that the vibration loading is applied to the soil in the cell chamber.
Dynamic triaxial apparatus
Meanwhile, the confining pressure differs with the in-situ sampling depth of the soil. The effective unit weight of the soil and the sampling depth are therefore used to determine the corresponding effective confining pressure for the dynamic triaxial test.
Based on the hypothesis of isotropy, the effective confining pressure \(\sigma_{3}^{'} = d \cdot \gamma^{'}\) is applied during the dynamic triaxial test, where d is the sampling depth of the undisturbed soil, \(\gamma^{'} = \rho^{'} \cdot g\) the effective unit weight of the soil, \(\rho^{'}\) the effective density of the soil and g the acceleration of gravity.
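As a worked example (with an assumed effective density; the values below are illustrative, not taken from the paper), the effective confining pressure for a 10 m sampling depth is:

```python
# Worked example of sigma_3' = d * gamma' (the effective density below is an
# assumed value for soft clay, not taken from the paper).
g = 9.81             # m/s^2, acceleration of gravity
rho_eff = 780.0      # kg/m^3, assumed effective density of the soft soil
d = 10.0             # m, assumed sampling depth
gamma_eff = rho_eff * g                 # effective unit weight, N/m^3
sigma3_eff = d * gamma_eff / 1000.0     # effective confining pressure, kPa
print(f"sigma_3' = {sigma3_eff:.1f} kPa")  # ~76.5 kPa, close to the 75 kPa level used
```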
In this research, effective confining pressures of 75 and 150 kPa were used, with a half-sine wave load (Rucker 1977).
The dynamic loads were 10, 15 and 20 N (8.3, 12.5 and 16.7 kPa), respectively;
According to the existing literature (Ng et al. 2013), the vibration loading of subways lies predominantly in the low frequency range, so the test frequencies were set to 0.5, 1 and 2 Hz, and 10,000 vibration cycles were applied.
Experimental estimation of dynamic shear modulus
Under vibration loading, the dynamic shear modulus is \(G_{d} = \tau_{\mathrm{d}} / \gamma_{\mathrm{d}}\), where \(\tau_{\mathrm{d}}\) is the dynamic shear stress and \(\gamma_{\mathrm{d}}\) the dynamic shear strain; both are recorded automatically by the dynamic triaxial apparatus.
For the case with dynamic loading Fd = 20 N (\(\sigma_{\text{d}} = 16.7\,{\text{kPa}}\)) and f = 0.5, 1 and 2 Hz, the strain range of this study is 5 × 10−4 to 5 × 10−2. The corresponding dynamic shear moduli extrapolated to zero strain are shown in Table 2.
Table 2 The test values of dynamic shear modulus
Under otherwise identical conditions, increasing the effective confining pressure compacted the soil, decreased its void ratio, reduced the dynamic deformation and increased the dynamic shear modulus. Likewise, increasing the vibration frequency shortened the loading duration, decreased the deformation of the soil and increased the corresponding zero-strain dynamic shear modulus.
Empirical formula of the dynamic shear modulus
When the necessary test equipment is lacking, an empirical method based on the physical and mechanical properties can still estimate the dynamic shear modulus accurately (Luo et al. 2015).
To illustrate the influence of frequency: according to the existing literature (Ng et al. 2013; Liu 2013; Zhai and Liu 2005), the vibration frequency changes with train speed, and at a given sampling depth the effect of speed on subway vibration loading acts mainly through the frequency; the vibration frequency in turn affects the dynamic stress σd. The frequency is therefore a pivotal factor for soft soil. The relationship between frequency and dynamic shear modulus can be expressed by a hyperbolic function, as shown in Fig. 4. Based on Kagawa's (1992) empirical formula, an improved formula accounting for frequency was proposed.
Relation curves between the dynamic shear modulus and frequency
Kagawa's empirical formula is as follows:
$$G = \frac{{358 - 3.8I_{p} }}{0.4 + 0.7e}\sigma_{3}^{'}$$
where \(I_p\) is the plasticity index, \(\sigma_{3}^{'}\) the effective confining pressure and e the void ratio of the soil.
Following the approach in the reference (Sas et al. 2015), the improved dynamic shear modulus formula was fitted by least-squares regression:
$$G_{d} = \frac{{358 - 3.8I_{p} }}{0.4 + 0.7e}\sigma_{3}^{'}\cdot \left( {a_{1} \cdot f^{2} + a_{2} \cdot f + a_{3} } \right)$$
where f is the frequency (treated as dimensionless in the regression); \(I_p\) = 17 is the plasticity index and e = 1.14, from Table 1; a1, a2 and a3 are obtained by least-squares regression analysis, as shown in Table 3.
Table 3 Parameters of the dynamic shear modulus formula
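A sketch of Eq. (2) as a function is given below; since the fitted coefficients of Table 3 are not reproduced in the extracted text, a1–a3 are placeholders that the reader should replace with the regression values:

```python
def dynamic_shear_modulus(sigma3_eff, f, Ip=17.0, e=1.14, a1=0.0, a2=0.0, a3=1.0):
    """Improved Kagawa-type formula, Eq. (2). Ip and e follow Table 1; a1-a3
    are placeholders for the Table 3 regression coefficients (with a3 = 1 the
    expression reduces to Kagawa's Eq. (1))."""
    g_kagawa = (358.0 - 3.8 * Ip) * sigma3_eff / (0.4 + 0.7 * e)  # Eq. (1)
    return g_kagawa * (a1 * f ** 2 + a2 * f + a3)                 # frequency term
```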
Therefore, based on Kagawa's model, an improved model considering the frequency is established for estimating the dynamic shear modulus and dynamic creep strain.
Estimation and analysis of dynamic creep strain
Estimation of dynamic creep strain
The Mesri creep model (Mesri et al. 1981) was used to calculate the dynamic creep strain time history curve. Under long-lasting dynamic loading, soft soil develops creep strain, which is therefore called the dynamic creep strain. The Singh–Mitchell model (Singh and Mitchell 1968) only describes the behavior of soil under shear stress levels in the range of 20–80 %; when the shear stress level is zero, it predicts a strain below zero. Owing to these shortcomings, Mesri improved the Singh–Mitchell creep model. The Mesri model can be used to calculate the creep strain of the soil under arbitrary shear stress levels: the shear stress level is no longer limited to the range of 20–80 % but instead covers all stress levels (0–100 %), with the derivation as follows.
According to the Mesri formula:
$$\varepsilon_{s}^{t} = \frac{2}{{(E_{d} /S_{u} )}}\frac{{\overline{D} }}{{1 - (R_{f} ) \cdot \overline{D} }}\left[ {\frac{{(t)_{i} }}{t}} \right]^{m}$$
where m is a model parameter, (t)_i the unit time and t the elapsed time; \(\overline{D} = (\sigma_1 - \sigma_3)/(\sigma_1 - \sigma_3)_{\mathrm{f}}\) is the shear stress level and \(S_{\mathrm{u}} = (\sigma_1 - \sigma_3)_{\mathrm{f}}/2\) the undrained shear strength. The dynamic modulus \(E_d = 2 G_d (1 + \mu)\) (Hardin and Drnevich 1972) is calculated from Formula (2), and \(R_{\mathrm{f}} = (\sigma_1 - \sigma_3)_{\mathrm{f}}/(\sigma_1 - \sigma_3)_{ult}\) is the failure ratio.
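Coded directly from Eq. (3) (the function name is ours), the dynamic creep strain is:

```python
def mesri_creep_strain(t, E_d, S_u, D_bar, R_f, m, t_i=1.0):
    """Dynamic creep strain, coded exactly as Eq. (3) is written in the text.
    D_bar is the shear stress level in [0, 1], R_f the failure ratio, t_i the
    unit time; E_d comes from Formula (2) via E_d = 2*G_d*(1 + mu)."""
    return 2.0 / (E_d / S_u) * D_bar / (1.0 - R_f * D_bar) * (t_i / t) ** m
```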
According to Formula (3), the dynamic creep strain time history curve was calculated, thereby determining the parameter m. As shown in Fig. 5, owing to the action of the vibration loading, the measured dynamic creep strain time history curve exhibits a pulsing phenomenon.
Dynamic strain time history curve
The parameters of the established dynamic creep strain model under different conditions, obtained with Formula (3), are given in Table 4.
Table 4 Parameters of dynamic creep strain model
Comparative analysis of dynamic creep strain
According to the above formula, the influence of dynamic stress, frequency, effective confining pressure and natural water content on the dynamic creep strain of soil was analyzed.
The influence of dynamic stress
As Fig. 6 shows, under otherwise identical conditions, the greater the vibration loading, the larger the vibration energy, and thus the greater the kinetic energy imparted to the soil and the corresponding dynamic creep strain.
Dynamic strain time history curve under different dynamic stress amplitude
However, although the dynamic stress levels were in linear proportion (8.3, 12.5 and 16.7 kPa), the corresponding dynamic creep strain did not follow a linear law, revealing that the soil is a non-linear material.
The influence of frequency
The essence of the effect of frequency on the dynamic creep strain is the differing duration of the dynamic load: the higher the frequency, the faster the load changes, the shorter each loading action lasts, the less energy is transferred and the smaller the dynamic strain. These findings are shown in Fig. 7.
Dynamic strain time history curve under different frequencies
The influence of effective confining pressure
The influence of the effective confining pressure on the dynamic strain is due to differences in the degree of compaction. Different levels of effective confining pressure were set when the dynamic triaxial tests were carried out. As Fig. 8 shows, in accordance with the sampling depth and density of the undisturbed soil, the greater the sampling depth, the lower the dynamic creep strain of the soil.
Dynamic creep strain time history curve under different confining pressure
The influence of natural water content
Natural water content is a physical and mechanical property that measures the moisture content of soil in the natural state. As Fig. 9 reflects, when the natural water content is large, the water film around the soil particles is thick, the particle spacing is larger, the attraction between soil particles decreases, and the particles are prone to dislocation. Therefore, a larger natural water content produces greater dynamic creep strain deformation under vibration loading.
Dynamic creep strain time history curve under different natural water content
Estimation of dynamic stress under vibration loading
Subway foundation soil settles during operation, and this settlement can be estimated using the improved dynamic creep strain formula. The corresponding dynamic stress of each soil layer must be obtained before the settlement can be estimated (Ju 2009).
The design dynamic stress formula is:
$$\sigma_{{{\text{d}}_{ 0} }} = 0.26{\text{P(1}} \pm 0. 0 4 {\text{V)}}$$
where \(\sigma_{{{\text{d}}_{ 0} }}\) is the dynamic stress acting on the track bed, P the subway train weight and V the subway speed, recorded here as 80 km/h.
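Equation (4) is straightforward to evaluate; a sketch follows (the units of P, and hence of the result, are not stated in the text and are left to the caller):

```python
def track_bed_dynamic_stress(P, V, sign=1.0):
    """Eq. (4): sigma_d0 = 0.26 * P * (1 +/- 0.04 * V), with P the train
    weight and V the speed in km/h; the units of P (and of the result) are
    not stated in the text, so they are left to the caller."""
    return 0.26 * P * (1.0 + sign * 0.04 * V)
```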
This gives \(\sigma_{{{\text{d}}_{ 0} }}\) = 69 kPa, which was then substituted into the subway finite element model (Ju 2009). The variation of the additional stress \(\sigma_{{{\text{d}}_{i} }}\) with sampling depth at the bottom of the tunnel axis was obtained from the finite element calculation, from which the corresponding dynamic creep strain under the first cyclic load was determined; the settlement estimation then followed. The case study was the Yuantong station tunnel of the Nanjing subway, located 9–12 m below ground, for which a two-dimensional model of the subway was established.
The tunnel inner diameter is 5.6 m and the outside diameter 6.2 m; the lining thickness is 0.35 m and the lateral model width 100 m. The center of the tunnel is 9 m from the ground and 18 m from the bottom, and the model height is 50 m. As shown in Fig. 10, the subway vibration load acts at the center of the tunnel.
The simulation of finite element
Constitutive model parameters
The constitutive model parameters of the soft soil of the subway foundation were obtained through dynamic and static triaxial tests, as shown in Table 5.
Table 5 Parameters of constitutive model
Estimation of dynamic stress with sampling depth
The resulting curve of the additional stress \(\sigma_{{{\text{d}}_{i} }}\) with sampling depth is illustrated in Fig. 11.
Additional stress with sampling depth in the bottom of the tunnel
Settlement estimation
The improved dynamic creep strain of Formula (3) and the additional stress \(\sigma_{{{\text{d}}_{i} }}\) with sampling depth are used to estimate the settlement of the subway foundation; the main calculation steps are as follows:
Firstly, the dynamic stress amplitude acting on the track bed was determined; the dynamic stress at 0 m sampling depth was \(\sigma_{{{\text{d}}_{ 0} }}\) = 69 kPa.
Secondly, this result was substituted into the subway finite element model, and the additional stress \(\sigma_{{{\text{d}}_{i} }}\) with sampling depth was estimated.
Thirdly, according to the extent of the vibration response, the thickness of the vertically deforming zone was determined, and the vertical layers were divided following the design code.
Fourthly, the vertical strain deformation of each layer was estimated using the improved dynamic creep strain function.
Finally, the settlement of the subway soil foundation was estimated by the layer-wise summation method of the design code, expressed as (Jin 2004):
$$\Delta H = \sum\limits_{{{\text{i}} = 1}}^{\text{n}} {\varepsilon_{zi} } {\text{H}}_{i}$$
where \(\varepsilon_{zi}\) is the accumulated vertical creep strain of the i-th layer, \(H_i\) the thickness of the i-th layer and n the number of layers within the influence depth below the bottom of the tunnel.
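Equation (5) amounts to a weighted sum; a minimal sketch with hypothetical layer values:

```python
def layerwise_settlement(strains, thicknesses):
    """Layer-wise summation of Eq. (5): Delta_H = sum_i eps_zi * H_i over the
    layers within the influence depth below the tunnel bottom."""
    return sum(eps * H for eps, H in zip(strains, thicknesses))

# e.g., three hypothetical layers:
# layerwise_settlement([0.002, 0.001, 0.0005], [2.0, 3.0, 5.0]) -> 0.0095 m
```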
Based on the above steps, the long-term settlement under different frequencies (0.5, 1 and 2 Hz) can be estimated. These estimates are compared with the long-term settlement measured in the field test; the results, shown in Fig. 12, reveal that the estimated values are close to the measured values.
Comparison of the results of settlement between the measured values and the estimation obtained using the finite element method
The following conclusions were obtained by analyzing the dynamic characteristics and long-term settlement of the soil under vibration loading, using the methods of laboratory testing, theoretical analysis and numerical simulation.
The dynamic triaxial test case was Fd = 20 N (16.7 kPa) at frequencies f = 0.5, 1 and 2 Hz. The test results showed that, under the same conditions, increasing the effective confining pressure reduced the dynamic deformation and increased the corresponding zero-strain dynamic shear modulus. Increasing the frequency shortened the time the load acts on the soil and decreased the deformation of the soil, whilst the dynamic shear modulus increased.
Based on the Kagawa model, an improved dynamic shear modulus formula considering the frequency was established. The correlation coefficients of the estimates calculated by the improved dynamic shear modulus formula are between 0.991 and 0.996, thereby verifying the validity of the improved formula.
Subsequently, the improved formula was applied to the dynamic creep strain model function. The correlation coefficients of the estimates generated by the improved dynamic creep strain formula are between 0.826 and 0.998.
Meanwhile, the effects of amplitude, frequency, effective confining pressure and natural water content on the dynamic creep strain were analyzed. The analysis showed that the larger the vibration amplitude, the greater the dynamic creep strain, whilst the larger the frequency, the smaller the dynamic creep strain. Likewise, the larger the effective confining pressure, the smaller the dynamic creep strain, and the larger the natural water content, the greater the dynamic creep strain.
The speed of the case subway train was recorded at 80 km/h. According to the design dynamic stress formula, the dynamic stress acting on the track bed (at 0 m sampling depth) is 69 kPa. This dynamic stress was then substituted into the subway finite element simulation to obtain the curve of additional stress with sampling depth acting on the soil foundation. Based on the improved dynamic creep strain function and the additional stress, the long-term settlement under subway vibration loading was obtained following the design code. The estimated values of the long-term settlement under different frequencies (0.5, 1 and 2 Hz) are close to the values measured in the field test.
Chai JC, Miura N (2002) Traffic-load-induced permanent deformation of road on soft subsoil. J Geotech Geoenviron Eng 128(10):907–916
Forrest JA, Hunt HEM (2006) A three-dimensional tunnel model for calculation of train-induced ground vibration. J Sound Vib 294(4):678–705
Guo WW, Xia H, De Roeck G, Liu K (2012) Integral model for train–track –bridge interaction on the Sesia viaduct: dynamic simulation and critical assessment. Comput Struct 112–113:205–216
Hardin BO, Drnevich VP (1972) Shear modulus and damping in soils: design equations and curves. J Soil Mech Found Div ASCE 98(118):667–692
Jin B (2004) Dynamic displacements of an infinite beam on a poroelastic half space due to a moving oscillating load. Arch Appl Mech 74(3–4):277–287
Ju SH (2009) Finite element investigation of traffic induced vibrations. J Sound Vib 321(3–5):837–853
Kagawa T (1992) Moduli and damping factors of soft marine clays. J Geotech Eng 118(9):1360–1375
Kondner RL (1963) Hyperbolic stress-strain response: cohesive soils. J Soil Mech Found ASCE 89(1):115–143
Kyung JS, Yoo B (2014) Rheological properties of azuki bean starch pastes in steady and dynamic shear. Starch-Starke 66(9–10):802–808
Li D, Selig ET (1996) Cumulative plastic deformation for fine-grained sub grade soils. J Geotech Eng 122(12):1006–1013
Liu F (2013) Long-term settlement of metro in soft ground and its influence on safety. Nanjing University, Nanjing (in Chinese)
Luo JH, Miao LC, Wang ZX, Shi WB (2015) Modified cam-clay model with dynamic shear modulus under cyclic loads. J VibroEng 17(1):112–124
Mesri G, Febres CE, Shields DR, Castro A (1981) Shear stress–strain–time behavior of clays. Geotechnique 31(4):537–552
Metrikine AV, Vrouwenvelder ACWM (2000) Surface ground vibration due to a moving train in a tunnel: two dimensional model. J Sound Vib 234(1):43–66
Monismith CL, Ogawa N, Freeme CR (1975) Permanent deformation characteristics of subsoil due to repeated loading. Transp Res Rec 537:1–17
Ng CWW, Liu GB, Li Q (2013) Investigation of the long-term tunnel settlement mechanisms of the first metro line in Shanghai. Can Geotech J 50(6):674–684
Olsson A, Kallsner B (2015) Shear modulus of structural timber evaluated by means of dynamic excitation and FE analysis. Mater Struct 48(4):977–985
Paulo AM, Costa Pedro A, Godinho Luis MC (2015) 2.5D MFS-FEM model for the prediction of vibrations due to underground railway traffic. Eng Struct 104:141–154
Rucker W (1977) Measurement and evaluation of random vibrations. Proceedings of DMSR p 407–421
Sas W, Gabrys K, Szymanski A (2015) Effect of time on dynamic shear modulus of selected cohesive soil of one section of express way no. S2 in Warsaw. Acta Geophys 63(2):398–413
Shen SL, Wu HN, Cui YJ (2014) Long-term settlement behaviour of metro tunnels in the soft deposits of Shanghai. Tunn Undergr Space Tech 40:309–323
Singh A, Mitchell JK (1968) General stress-strain-time function for soils. J Soil Mech ASCE 94(1):21–46
Zhai H, Liu WN (2005) A study on the low frequency ground response induced by metro train and corresponding vibration reduction measures. Urban Rapid Rail Transit 4:101–105
Zhang JF, Chen JJ, Wang JH (2013) Prediction of tunnel displacement induced by adjacent excavation in soft soil. Tunn Undergr Space Tech 36:24–33
All authors took part in the experiment. Both authors designed the research and wrote the manuscript. Both authors read and approved the final manuscript.
The authors gratefully acknowledge the financial support for this research from the National Natural Science Foundation of China (Grant no. 51278099) and the Postgraduate Research and Innovation Plan Project in Jiangsu Province (Grant no. CXLX13_098). We thank Daniel Hampton for his linguistic assistance during the preparation of this manuscript.
Institute of Geotechnical Engineering of Southeast University, Nanjing, 210096, China
Junhui Luo & Linchang Miao
Correspondence to Linchang Miao.
Luo, J., Miao, L. Research on dynamic creep strain and settlement prediction under the subway vibration loading. SpringerPlus 5, 1252 (2016) doi:10.1186/s40064-016-2707-2
Dynamic triaxial test
Dynamic shear modulus
Dynamic creep strain | CommonCrawl |
EphrinA1-targeted nanoshells for photothermal ablation of prostate cancer cells
Andre M Gobin, James J Moon, Jennifer L West
International Journal of Nanomedicine, 2008
Abstract: Gold-coated silica nanoshells are a class of nanoparticles that can be designed to possess strong absorption of light in the near infrared (NIR) wavelength region. When injected intravenously, these nanoshells have been shown to accumulate in tumors and subsequently mediate photothermal treatment, leading to tumor regression. In this work, we sought to improve their specificity by targeting them to prostate tumor cells. We report selective targeting of PC-3 cells with nanoshells conjugated to ephrinA1, a ligand for EphA2 receptor that is overexpressed on PC-3 cells. We demonstrate selective photo-thermal destruction of these cells upon application of the NIR laser.
Keywords: nanoshell, near infrared, photothermal treatment, prostate cancer
Antigen-Displaying Lipid-Enveloped PLGA Nanoparticles as Delivery Agents for a Plasmodium vivax Malaria Vaccine
James J. Moon, Heikyung Suh, Mark E. Polhemus, Christian F. Ockenhouse, Anjali Yadava, Darrell J. Irvine
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0031472
Abstract: The parasite Plasmodium vivax is the most frequent cause of malaria outside of sub-Saharan Africa, but efforts to develop viable vaccines against P. vivax so far have been inadequate. We recently developed pathogen-mimicking polymeric vaccine nanoparticles composed of the FDA-approved biodegradable polymer poly(lactide-co-glycolide) acid (PLGA) "enveloped" by a lipid membrane. In this study, we sought to determine whether this vaccine delivery platform could be applied to enhance the immune response against P. vivax sporozoites. A candidate malaria antigen, VMP001, was conjugated to the lipid membrane of the particles, and an immunostimulatory molecule, monophosphoryl lipid A (MPLA), was incorporated into the lipid membranes, creating pathogen-mimicking nanoparticle vaccines (VMP001-NPs). Vaccination with VMP001-NPs promoted germinal center formation and elicited durable antigen-specific antibodies with significantly higher titers and more balanced Th1/Th2 responses in vivo, compared with vaccines composed of soluble protein mixed with MPLA. Antibodies raised by NP vaccinations also exhibited enhanced avidity and affinity toward the domains within the circumsporozoite protein implicated in protection and were able to agglutinate live P. vivax sporozoites. These results demonstrate that these VMP001-NPs are promising vaccines candidates that may elicit protective immunity against P. vivax sporozoites.
Dental Aesthetics: A Study Comparing Patients' Own Opinions with Those of Dentists [PDF]
Richard John Moon, Brian James Millar
Open Journal of Stomatology (OJST), 2017, DOI: 10.4236/ojst.2017.74016
Abstract: Objective: A beautiful smile is perceived as important, but the components that contribute to the patient's concept of a beautiful smile have not been fully investigated. Hence this study aimed to compare the views of patients on their own dental aesthetics with those of a group of dentists. It also assessed the patients' willingness to undergo aesthetic treatment. Methods: Fifty patients, who ranged in age from 24 to 76 years, completed self-assessment questionnaires. Photographs were taken of these patients, which were subsequently assessed by six dentists using a questionnaire with a visual analogue scale for each parameter. Results: Significant differences (p < 0.05) were found between the opinions of the dentists and the patients. Older patients were generally more satisfied with their smile than the dentists. Eighty-six percent of the patients were willing to undergo aesthetic treatment, although factors such as the complexity of treatment, time involved, discomfort and financial costs deterred many. The cost of treatment was the main deterrent. The younger patients were least likely to be put off treatment. Conclusion: Patients' views of their own smile differed from the dentists' opinion. Those who were the least satisfied and most likely to undergo aesthetic treatment were in the younger age groups. Satisfaction increased with age, and older patients were less likely to seek aesthetic treatment.
Classification methods for the development of genomic signatures from high-dimensional data
Hojin Moon, Hongshik Ahn, Ralph L Kodell, Chien-Ju Lin, Songjoon Baek, James J Chen
Genome Biology, 2006, DOI: 10.1186/gb-2006-7-12-r121
Abstract: Providing guidance on specific therapies for pathologically distinct tumor types to maximize efficacy and minimize toxicity is important for cancer treatment [1,2]. For acute leukemia, for instance, different subtypes show very different responses to therapy, reflecting the fact that they are molecularly distinct entities, although they have very similar morphological and histopathological appearance [1]. Thus, accurate classification of tumor samples is essential for efficient cancer treatment on a target population of patients. Microarray technology has been increasingly used in cancer research because of its potential for classification of tissue samples based only on gene expression data, without prior and often subjective biological knowledge [1,3,4]. Much research involving microarray data analysis is focused on distinguishing between different cancer types using gene expression profiles from disease samples, thereby allowing more accurate diagnosis and effective treatment of each patient. Gene expression data might also be used to improve disease prognosis in order to prevent some patients from having to undergo painful unsuccessful therapies and unnecessary toxicity. For example, adjuvant chemotherapy for breast cancer after surgery could reduce the risk of distant metastases; however, seventy to eighty percent of patients receiving this treatment would be expected to survive metastasis-free without it [5,6]. The strongest predictors for metastases, such as lymph node status and histological grade, fail to classify accurately breast tumors according to their clinical behavior [6,7]. Predicting patient response to therapy or the toxic potential of drugs based on high-dimensional data are common goals of biomedical studies. Classification algorithms can be used to process high-dimensional genomic data for better prognostication of disease progression and better prediction of response to therapy to help individualize clinical assignment of treatment. The predicti
Sex-Specific Genomic Biomarkers for Individualized Treatment of Life-Threatening Diseases
Hojin Moon, Karen L. Lopez, Grace I. Lin, James J. Chen
Disease Markers, 2013, DOI: 10.1155/2013/393020
Abstract: Numerous studies have demonstrated sex differences in drug reactions to the same drug treatment, steering away from the traditional view of one-size-fits-all medicine. A premise of this study is that the sex of a patient influences difference in disease characteristics and risk factors. In this study, we intend to exploit and to obtain better sex-specific biomarkers from gene-expression data. We propose a procedure to isolate a set of important genes as sex-specific genomic biomarkers, which may enable more effective patient treatment. A set of sex-specific genes is obtained by a variable importance ranking using a combination of cross-validation methods. The proposed procedure is applied to three gene-expression datasets.
1. Introduction. Personalized medicine is defined by the use of genomic signatures of patients to assign effective therapies in order to achieve the best medical outcomes for individual patients, thus improving public health. Despite the variety of clinical, morphological, and molecular parameters used to classify human malignancies, patients receiving the same diagnosis can have markedly different clinical courses and treatment responses. Since there is no simple way to determine who will have an adverse reaction, the current system of "one-size-fits-all" diagnoses is simply not good enough. An increasing number of studies have demonstrated sex differences in drug reactions to the same drug treatment. Migeon [1] implied that males and females responded differently to drug treatments and that sex plays a key role in cancer. In addition, females are historically less studied subjects due to the complication of estrous cycle, and therefore such studies would further benefit women's health and promote public health. Recent advancements in biotechnology have accelerated the search for molecular biomarkers useful in the diagnosis and treatment of disease. Molecular biomarkers of disease risk and status are critical to an accurate treatment by identifying patients most likely to benefit from particular drugs or experience adverse reactions. Because medicine is always practiced on individuals rather than populations, the goal is to change the assignment of therapies from a population-based approach to an individualized approach. Gene-expression data can be used to identify patients with a good disease prognosis, thereby preventing some patients from unnecessary therapies and toxicity. For example, gene-expression profiling was used to predict clinical outcomes in pediatric patients with acute myeloid leukemia and to find genes whose aberrant
Sets Characterized by Missing Sums and Differences in Dilating Polytopes
Thao Do, Archit Kulkarni, Steven J. Miller, David Moon, Jake Wellens, James Wilcox
Mathematics, 2014
Abstract: A sum-dominant set is a finite set $A$ of integers such that $|A+A| > |A-A|$. As a typical pair of elements contributes one sum and two differences, we expect sum-dominant sets to be rare in some sense. In 2006, however, Martin and O'Bryant showed that the proportion of sum-dominant subsets of $\{0,\dots,n\}$ is bounded below by a positive constant as $n\to\infty$. Hegarty then extended their work and showed that for any prescribed $s,d\in\mathbb{N}_0$, the proportion $\rho^{s,d}_n$ of subsets of $\{0,\dots,n\}$ that are missing exactly $s$ sums in $\{0,\dots,2n\}$ and exactly $2d$ differences in $\{-n,\dots,n\}$ also remains positive in the limit. We consider the following question: are such sets, characterized by their sums and differences, similarly ubiquitous in higher dimensional spaces? We generalize the integers in a growing interval to the lattice points in a dilating polytope. Specifically, let $P$ be a polytope in $\mathbb{R}^D$ with vertices in $\mathbb{Z}^D$, and let $\rho_n^{s,d}$ now denote the proportion of subsets of $L(nP)$ that are missing exactly $s$ sums in $L(nP)+L(nP)$ and exactly $2d$ differences in $L(nP)-L(nP)$. As it turns out, the geometry of $P$ has a significant effect on the limiting behavior of $\rho_n^{s,d}$. We define a geometric characteristic of polytopes called local point symmetry, and show that $\rho_n^{s,d}$ is bounded below by a positive constant as $n\to\infty$ if and only if $P$ is locally point symmetric. We further show that the proportion of subsets in $L(nP)$ that are missing exactly $s$ sums and at least $2d$ differences remains positive in the limit, independent of the geometry of $P$. A direct corollary of these results is that if $P$ is additionally point symmetric, the proportion of sum-dominant subsets of $L(nP)$ also remains positive in the limit.
Pregnancy Outcomes at Kasungu Maternity Ward in Central Malawi—A Review of Maternity Ward Register [PDF]
Joo Heon Park, Jin Sik Song, James G. Kim, Changhyun Han, Diane J. Moon, Byungchan Kim, James Kachingwe, George Talama
Advances in Reproductive Sciences (ARSci), 2019, DOI: 10.4236/arsci.2019.73007
Abstract: Health care services during pregnancy and childbirth and after delivery are important for the survival and wellbeing of both the mother and the infant. The pregnancy outcomes at Kasungu District Hospital Maternity Ward have not been documented. Additionally, MDHS does not capture data regarding, prematurity, APGAR scores, and causes of maternal deaths and causes of neonatal deaths. Using Kasungu District Hospital Maternity Ward register, we aimed to describe the pregnancy outcomes at Kasungu Maternity Ward. From March 2016 to February 2017, data were available for 10,842 deliveries. The calculated Perinatal Mortality Rate (PMR) was about 77/1000 births and the Maternal Mortality Ratio (MMR) was 318 deaths per 100,000 live births. The Spontaneous Vertex Delivery (SVD) rate was 86% and the caesarean section rate was 10%. 1734 (16%) of all deliveries were premature borne between 28 and 36 gestation weeks. 1182 (11%) deliveries had missing APGAR scores and 81 neonates were born with 5 min Apgar scores less than 7. Adverse pregnancy outcomes occur at Kasungu Hospital Maternity Ward. More effort and resources are needed to decrease their occurrence.
Functional Identification of Api5 as a Suppressor of E2F-Dependent Apoptosis In Vivo
Erick J Morris, William A Michaud, Jun-Yuan Ji, Nam-Sung Moon, James W Rocco, Nicholas J Dyson
PLOS Genetics , 2006, DOI: 10.1371/journal.pgen.0020196
Abstract: Retinoblastoma protein and E2-promoter binding factor (E2F) family members are important regulators of G1-S phase progression. Deregulated E2F also sensitizes cells to apoptosis, but this aspect of E2F function is poorly understood. Studies of E2F-induced apoptosis have mostly been carried out in tissue culture cells, and the analysis of the factors that are important for this process has been restricted to the testing of a few candidate genes. Using Drosophila as a model system, we have generated tools that allow genetic modifiers of E2F-dependent apoptosis to be identified in vivo and developed assays that allow effects on E2F-induced apoptosis to be studied in cultured cells. Genetic interactions show that dE2F1-dependent apoptosis in vivo involves dArk/Apaf1 apoptosome-dependent activation of both initiator and effector caspases and is sensitive to levels of Drosophila inhibitor of apoptosis-1 (dIAP1). Using these approaches, we report the surprising finding that apoptosis inhibitor-5/antiapoptosis clone-11 (Api5/Aac11) is a critical determinant of dE2F1-induced apoptosis in vivo and in vitro. This functional interaction occurs in multiple tissues, is specific to E2F-induced apoptosis, and is conserved from flies to humans. Interestingly, Api5/Aac11 acts downstream of E2F and suppresses E2F-dependent apoptosis without generally blocking E2F-dependent transcription. Api5/Aac11 expression is often upregulated in tumor cells, particularly in metastatic cells. We find that depletion of Api5 is tumor cell lethal. The strong genetic interaction between E2F and Api5/Aac11 suggests that elevated levels of Api5 may be selected during tumorigenesis to allow cells with deregulated E2F activity to survive under suboptimal conditions. Therefore, inhibition of Api5 function might offer a possible mechanism for antitumor exploitation.
Hybrid self-positioning and performance in Breytenbach's travelogues
J Moon
Tydskrif vir letterkunde , 2011,
Abstract: This article focuses on the hybrid self-representation and liminal self-positioning of Breyten Breytenbach as presented in his two travelogues, Return to Paradise (1993) and Dog Heart (1998). Firstly, the form of travel writing is shown to be a suitable genre for the manifestation of a nomadic or 'travelling' subject. Secondly, his liminal self-positioning toward Afrikaner society reflects the problem of identity in post-apartheid South Africa, as well as the writer's performance of a future agency for rehabilitating the collective self within a new South African community. Breytenbach is seen to manifest his cultural identity on the one hand, while attempting to position this identity within the multicultural society on the other. Article text in Afrikaans.
Robust Antigen Specific Th17 T Cell Response to Group A Streptococcus Is Dependent on IL-6 and Intranasal Route of Infection
Thamotharampillai Dileepan, Jonathan L. Linehan, James J. Moon, Marion Pepper, Marc K. Jenkins, Patrick P. Cleary
PLOS Pathogens , 2011, DOI: 10.1371/journal.ppat.1002252
Abstract: Group A streptococcus (GAS, Streptococcus pyogenes) is the cause of a variety of clinical conditions, ranging from pharyngitis to autoimmune disease. Peptide-major histocompatibility complex class II (pMHCII) tetramers have recently emerged as a highly sensitive means to quantify pMHCII-specific CD4+ helper T cells and evaluate their contribution to both protective immunity and autoimmune complications induced by specific bacterial pathogens. In lieu of identifying an immunodominant peptide expressed by GAS, a surrogate peptide (2W) was fused to the highly expressed M1 protein on the surface of GAS to allow in-depth analysis of the CD4+ helper T cell response in C57BL/6 mice that express the I-Ab MHCII molecule. Following intranasal inoculation with GAS-2W, antigen-experienced 2W:I-Ab-specific CD4+ T cells were identified in the nasal-associated lymphoid tissue (NALT) that produced IL-17A or IL-17A and IFN-γ if infection was recurrent. The dominant Th17 response was also dependent on the intranasal route of inoculation; intravenous or subcutaneous inoculations produced primarily IFN-γ+ 2W:I-Ab+ CD4+ T cells. The acquisition of IL-17A production by 2W:I-Ab-specific T cells and the capacity of mice to survive infection depended on the innate cytokine IL-6. IL-6-deficient mice that survived infection became long-term carriers despite the presence of abundant IFN-γ-producing 2W:I-Ab-specific CD4+ T cells. Our results suggest that an imbalance between IL-17- and IFN-γ-producing CD4+ T cells could contribute to GAS carriage in humans.
Demystifying Fractal: Part I
Fractal is a new general-purpose zero-knowledge proof system (and no, it doesn't have anything to do with the fractals you're probably thinking of). This is a two-part in-depth dive into FRACTAL.
Metastate Team
8 Apr 2020 • 9 min read
This is the first of two blog posts attempting to demystify the Fʀᴀᴄᴛᴀʟ SNARK and provide a more accessible overview of how it works. To start, let's talk about what it can and can't do. Fʀᴀᴄᴛᴀʟ "only" proves membership for the formal language R1CS. However, this isn't really a limitation, as the language is NP-complete, meaning a large class of problems we care about is reducible to the R1CS problem. In particular, it captures arithmetic circuit satisfiability. So if you've been reading about other SNARKs like PLONK and wondering how Fʀᴀᴄᴛᴀʟ handles circuits, the answer is that you have to convert them to R1CS first. Once you do, Fʀᴀᴄᴛᴀʟ has several desirable properties, including
Transparent setup. There is no trapdoor in the setup; it's based entirely on public randomness.
Recursive composability. Verification can be written as an R1CS instance, allowing Fʀᴀᴄᴛᴀʟ to verify another instance of Fʀᴀᴄᴛᴀʟ. Possible applications include blockchains like Coda, which allow the amount of data any particular (lite) node needs to store to remain essentially constant.
Security against quantum adversaries. Whereas some constructions are secure under classical intractability assumptions that don't hold for quantum computers, Fʀᴀᴄᴛᴀʟ is based on hash functions, for which we don't have any truly feasible quantum attacks. This actually makes Fʀᴀᴄᴛᴀʟ the first plausibly quantum-secure recursively composable proof system.
It uses only lightweight cryptography. Another benefit of avoiding those intractability assumptions is that Fʀᴀᴄᴛᴀʟ also avoids the algebraic operations they entail, such as (cryptographically sized) elliptic curve point additions, which are computationally expensive compared to evaluating hash functions.
Asymptotically, for N = #constraints ≈ #gates in the corresponding arithmetic circuit, the proof time is O(N log N), the argument size is O(log² N), and the verifier time is O(log² N). Concretely, for ≈1 million constraints in this C++ implementation, the prover runs in ~6 minutes while the verifier runs in ~10 milliseconds, and the argument size is ~12 KB when configured for 128 bits of security.
General Structure of zk-SNARKS
Fʀᴀᴄᴛᴀʟ (and all SNARKs, really) could only be designed by standing on the shoulders of metaphorical giants. It's incredibly complex, and SNARKs are still very much an emerging field, but a standard recipe has emerged as a combination of classical computer science techniques. More or less, SNARKs (and STARKs and SNARGs and...) that verify a computation follow this construction:
Starting with a circuit (or a computational trace in the case of a STARK), it is converted into an intermediate representation. The choice of representation is very important, as this is generally what the proof system actually works with. So even if an interactive proof is very efficient (which could mean it produces small proof strings, the prover or verifier needs only perform a very limited computation, or any of several other metrics), this won't help verify circuit computations if the complexity or size "blew up" when converted to an inefficient representation. Fʀᴀᴄᴛᴀʟ, as mentioned, assumes you've already converted your circuit to a Rank-1 Constraint System (R1CS) instance. A satisfied R1CS instance consists of the instance itself, a triple of square matrices $A$, $B$, and $C$ that we can think of as describing the gates in a circuit, and a satisfying assignment $z = (x, w)$, where $z$ is a vector composed of publicly known values $x$ and a secret "witness" vector $w$ that encodes the circuit input. The constraints are "satisfied" if the following equation holds
\[ Az \boxtimes Bz = Cz \]
where $\boxtimes$ represents the entry-wise product rather than usual matrix multiplication.
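To make this concrete, here's a minimal Python sketch of the satisfaction check; the field size, matrices, and assignment are toy values invented for illustration, not anything produced by Fʀᴀᴄᴛᴀʟ itself.

```python
# Toy R1CS satisfaction check over a small prime field (illustrative values only).
P = 97  # real systems use cryptographically large fields

def matvec(M, v, p=P):
    return [sum(m * x for m, x in zip(row, v)) % p for row in M]

def r1cs_satisfied(A, B, C, z, p=P):
    """Check Az ⊠ Bz == Cz, where ⊠ is the entry-wise (Hadamard) product."""
    a, b, c = matvec(A, z, p), matvec(B, z, p), matvec(C, z, p)
    return all((ai * bi) % p == ci for ai, bi, ci in zip(a, b, c))

# One multiplication gate z1 * z2 = z3, padded with zero rows to stay square:
Z4 = [0, 0, 0, 0]
A = [[0, 1, 0, 0], Z4, Z4, Z4]
B = [[0, 0, 1, 0], Z4, Z4, Z4]
C = [[0, 0, 0, 1], Z4, Z4, Z4]
z = [1, 3, 5, 15]  # toy assignment; the public/witness split is omitted here
assert r1cs_satisfied(A, B, C, z)
```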
The rest of the post is dedicated to breaking down what's happening with each arrow.
Interactive Oracle Proofs
Interactive oracle proofs generalize probabilistically checkable proofs (PCPs) to an interactive setting. While in a PCP the prover outputs a single proof string and the verifier probabilistically checks the proof in only a few locations, in an IOP the prover and verifier communicate back and forth as in a standard interactive proof system. Unlike in a standard interactive proof system, the prover responds with "proof oracles": messages sent by the prover to convince the verifier [hence the "proof"], constructed in such a way that the verifier is not required to read them in their entirety but can probabilistically choose snippets of the message to read [similar to randomly querying an oracle].
Reed-Solomon Encoded IOPs
Reed-Solomon encoded IOPs phrase interactive proofs in the language of coding theory. Rather than the usual "accept or reject" paradigm, the prover and verifier participate in an interactive protocol and then the verifier outputs a rational constraint consisting of two arithmetic circuits, $p: \mathbb{F}^{n+1} \to \mathbb{F}$ and $q: \mathbb{F} \to \mathbb{F}$, and a degree bound $d \in \mathbb{N}$. A set of Reed-Solomon codewords $f_1, f_2, \ldots, f_n$ (functions on a subset $L \subseteq \mathbb{F}$) is said to satisfy the constraint if the function $h(x) := \frac{p(x, f_1(x), \ldots, f_n(x))}{q(x)}$ coincides with a polynomial function of degree $\leq d$. However, this is checked with a separate protocol called a low-degree test, not considered part of the RS-encoded IOP, which only generates the constraints. There are a couple of advantages to this. The first is that it simplifies analysis of proof systems by breaking them down into modular components. Fʀᴀᴄᴛᴀʟ uses the FRI low-degree test, but it wouldn't be much trouble to replace it with another in the event that a significantly faster or more space-efficient low-degree test is discovered in the future. So for the purpose of explanation we can just use a black-box low-degree test. The other advantage is that considering it as a separate component allows us to cut down on the number of low-degree tests. Throughout the protocol, the verifier generates between several and many rational constraints. If they all have degree bound $d$, they can be probabilistically checked with one test, because with high probability a random linear combination of rational functions on $\mathbb{F}$ will satisfy the degree constraint only if each of the components does.
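As a small illustration of that last point, here's a hedged sketch of the batching step: fold several codewords into one with random coefficients, so a single low-degree test covers them all with high probability. Representing codewords as dictionaries over the domain is my own convenience, not Fʀᴀᴄᴛᴀʟ's.

```python
import random

P = 97

def fold_codewords(codewords, domain, p=P):
    """Random linear combination of codewords (functions on `domain`).
    One low-degree test on the result checks every component w.h.p."""
    coeffs = [random.randrange(1, p) for _ in codewords]
    return {x: sum(c * f[x] for c, f in zip(coeffs, codewords)) % p
            for x in domain}

L = [1, 2, 3, 4]                         # toy evaluation domain
f1 = {x: (x * x) % P for x in L}
f2 = {x: (3 * x + 7) % P for x in L}
folded = fold_codewords([f1, f2], L)     # test this once instead of twice
```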
A special type of rational constraint called a boundary constraint can be constructed to attest to statements of the form $\hat{f}(\alpha) = \beta$ for some $f$ defined on $L$. If indeed, $\hat{f}(\alpha) = \beta$, then subtracting $\beta$ from both sides shows that $\hat{f}(x) - \beta$ has a root at $\alpha$, hence the polynomial $x-\alpha$ divides $\hat{f}(x)-\beta$, so the rational function defined by $\dfrac{\hat{f}(x)-\beta}{x-\alpha}$ should be interpolated by a polynomial of degree less than or equal to $deg(\hat{f})$. These boundary constraints are useful for verifying that a known function produces a certain output on a particular input, as in the case of checking a hash.
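Here's a short sketch of the polynomial identity behind a boundary constraint, over a toy field with coefficients listed highest-degree first; all values are invented for illustration.

```python
P = 97

def boundary_quotient(f_coeffs, alpha, beta, p=P):
    """Divide f(x) - beta by (x - alpha) via synthetic division.
    Returns (quotient, remainder); remainder == 0 iff f(alpha) == beta."""
    coeffs = list(f_coeffs)              # highest-degree first
    coeffs[-1] = (coeffs[-1] - beta) % p
    q, acc = [], 0
    for c in coeffs:
        acc = (acc * alpha + c) % p
        q.append(acc)
    return q[:-1], q[-1]                 # final value is f(alpha) - beta

# f(x) = x^2 + 1 over F_97: f(4) = 17, so the division is exact at (4, 17).
quotient, rem = boundary_quotient([1, 0, 1], alpha=4, beta=17)
assert rem == 0                          # the rational function is a polynomial
```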
R1CS to lin-check to sum-check
If we had a satisfying $z$ for an R1CS instance, we could compute an $a$ such that $Az = a$, a $b$ such that $Bz = b$, and a $c$ such that $Cz = c$. Fʀᴀᴄᴛᴀʟ works by verifying these three linear relations via a protocol called lin-check and then verifying that $a \boxtimes b = c$ (row-check); the latter is done with standard PCP methods not new to Fʀᴀᴄᴛᴀʟ (notice that the component-wise product of Reed-Solomon codewords is the encoding of the products of the corresponding polynomials). Overall, the underlying IOP (which is then turned into a non-interactive argument by a somewhat novel transformation that is not specific to Fʀᴀᴄᴛᴀʟ) splits into the following sub-protocols:
The new and exciting parts are the lin-check protocol and the sumcheck it reduces to, so that's where we keep our attention.
Consider a vector $u$ whose entries are linearly independent polynomials (while $v_1$ and $v_2$ have entries which are ordinary field elements). Then $\langle u, v_1 \rangle$ and $\langle u, v_2 \rangle$ are weighted sums of the polynomial entries of $u$, and since those entries are linearly independent, $\langle u, v_1 \rangle = \langle u, v_2 \rangle$ if and only if $v_1 = v_2$. Then with high probability, if $\langle u, v_1 \rangle \not= \langle u, v_2 \rangle$, they will differ when evaluated at a random point.
We could use this to test whether $Az = a$. In fact $Az = a$ if and only if
\[ u^T \cdot a = u^T \cdot A \cdot z. \]
With a little bit of matrix algebra this is
\[ u^T \cdot a = (A^T \cdot u)^T \cdot z, \]
or equivalently

\[ u^T \cdot a - (A^T \cdot u)^T \cdot z = 0. \]
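Before recasting this in polynomial language, here's a minimal sketch of the vector-level check, using powers of a random point to play the role of the evaluated linearly independent entries of $u$; everything here is a toy stand-in for the real protocol.

```python
import random

P = 97

def lin_check(A, z, a, p=P):
    """Probabilistic test that A·z == a via one random vector u = (1, r, r², ...):
    compare u^T·a with (A^T·u)^T·z instead of recomputing A·z."""
    r = random.randrange(1, p)
    u = [pow(r, i, p) for i in range(len(a))]
    lhs = sum(ui * ai for ui, ai in zip(u, a)) % p
    At_u = [sum(A[i][j] * u[i] for i in range(len(a))) % p
            for j in range(len(z))]
    rhs = sum(t * zj for t, zj in zip(At_u, z)) % p
    return lhs == rhs   # a false claim slips through only with small probability

A = [[2, 0], [1, 3]]
z = [4, 5]
assert lin_check(A, z, a=[8, 19])       # 2*4 = 8, 1*4 + 3*5 = 19
assert not lin_check(A, z, a=[8, 20])   # wrong claim is rejected
```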
That identity is [almost] the condition we check. For the benefit of being able to use some algebraic results, we represent vectors as functions, but rather than doing it the obvious way and defining a function on the set $\{ 1, 2, \ldots, n \}$, with the first entry of the vector being $f(1)$, the second entry being $f(2)$, etc., we define the corresponding function over a subgroup $H$ of size $n$. As it turns out, such subgroups are cyclic, meaning every element is a power of some generator $g$. So we can define the function so that the first entry of the vector is $f(g)$, the second entry is $f(g^2)$, etc.
So we have a function defined by the values it takes on $n$ points, which then uniquely defines a polynomial of degree at most $n-1$ that interpolates it.
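Here's a tiny sketch of this reindexing over a deliberately small field; in a real deployment the field and subgroup are much larger, and the generator below is just one I picked for F_17.

```python
P = 17  # toy field; 9 generates a multiplicative subgroup H of order 8 in F_17
G = 9

def vector_as_function(v, g=G, p=P):
    """Map entry i of the vector to the value f(g^(i+1)) on H = {g, g², ...}."""
    return {pow(g, i + 1, p): v[i] for i in range(len(v))}

f = vector_as_function([10, 11, 3, 4, 5, 6, 7, 8])
assert f[G] == 10               # first entry sits at g
assert f[pow(G, 2, P)] == 11    # second entry sits at g²
```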
So in the language of polynomial functions, what we want to check is that there exist univariate polynomials $f_z$, $f_a$, $f_b$, $f_c$ such that
\[ \forall h \in H, \ \ \ \sum_{k \in H} A_{hk} f_z(k) = f_a(h) \]
(And corresponding statements for $B$ and $C$). The vector $u$ and the matrices $A$, $B$, and $C$ will be represented by bivariate polynomials, but the specific details will be deferred to the second post in this series.
So, having determined a need to check whether a polynomial sums to a certain value on a given subset, we ask: how do we do it without just computing the sum itself? If a SNARK verifier were to compute the sum, it would take time linear in the size of the subset to be summed over. The size of this subset in an R1CS SNARK corresponds to the number of constraints, which roughly corresponds to the number of gates in a circuit, so we want to be able to do better than linear verifier time. Fʀᴀᴄᴛᴀʟ gets around this by using a holographic lin-check protocol, meaning that the verifier no longer has access to the matrices $A$, $B$, and $C$ (or even a sparse encoding of them), but instead gets access to an oracle which the verifier will only need to query a few times.
Previous work found an if-and-only-if condition that makes our polynomial summation problem amenable to holographic IOPs when the summation domain $S$ has some additional structure.
If $S$ is a multiplicative coset, then $\sum_{s \in S} f(s) = \sigma$ if and only if $f(x) = xg(x) + \sigma / |S|$ where $g(x)$ is a polynomial of degree strictly less than $|S|-1$, which fits in nicely with our proof strategy of polynomial commitments + low degree tests.
With a little tweaking, the authors extend the result to rational functions, and from there we can check that $\sum_{s \in S} f(s) = \sigma$ by generating a corresponding rational constraint and checking that it is satisfied.
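Here's a minimal numeric sketch of the polynomial case of that decomposition over a toy subgroup; the prover/verifier plumbing and the low-degree test are omitted, and the field, subgroup, and polynomial are all illustrative choices of mine.

```python
P = 17
H = [pow(9, i + 1, P) for i in range(8)]  # multiplicative subgroup of order 8

def poly_eval(coeffs, x, p=P):            # coefficients lowest-degree first
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def sumcheck_split(f_coeffs, S=H, p=P):
    """For deg f < |S|, write f(x) = x·g(x) + c0; over a multiplicative
    subgroup all higher power sums vanish, so Σ_{s∈S} f(s) = |S|·c0.
    Hence f sums to σ over S iff its constant term equals σ/|S|."""
    c0, g = f_coeffs[0], f_coeffs[1:]
    sigma = (c0 * len(S)) % p
    return g, sigma

f = [5, 1, 2, 0, 3]                       # 5 + x + 2x² + 3x⁴
g, sigma = sumcheck_split(f)
assert sum(poly_eval(f, s) for s in H) % P == sigma
assert len(g) - 1 < len(H) - 1            # deg g < |S| - 1, as required
```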
Turning it into a NARG
If you're reading this, there's a good chance you're familiar with the Fiat-Shamir heuristic, which transforms interactive arguments into noninteractive ones by having the prover simulate the random choices of an interactive verifier with pseudorandomness generated from a hash function.
Standard PCPs are turned into a NARG via a different transformation, called the Micali transform. Like the Fiat-Shamir transform, the Micali transform's prover uses hash functions to heuristically simulate the random choices of a verifier in an interactive setting. In the Micali transform, hash functions also serve a second purpose. Any transformation from an IOP to a non-interactive argument has the additional task of showing that the prover didn't falsify the answer that the oracle gave to the [prover simulating the] verifier. In order to do this, the prover computes a proof oracle as in the interactive setting, and then uses a hash function to create a Merkle tree from the proof oracle. Having committed to the oracle, the prover uses the root of this tree to simulate the verifier's random choice, and the simulated verifier queries the oracle at the appropriate random point. The prover provides the queried answer, along with an authentication path that can be checked against the Merkle tree.
The transformation used in turning a holographic IOP into the preprocessing NARG that is Fʀᴀᴄᴛᴀʟ is a combination of these two. "Between rounds" we use the Fiat-Shamir transform to make sure that the prover honestly simulates random challenges by the verifier, and for each round we use a Micali transform to show that the prover honestly simulates random queries.
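Here's a bare-bones sketch of the commit-then-query mechanics just described, with SHA-256 standing in for the random oracle; real systems add salts, domain separation, and per-round state that are omitted here.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves pairwise up to a single root (duplicating the last
    node on odd layers, one common convention)."""
    layer = [H(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# The prover commits to the proof oracle, then derives the verifier's
# "random" query index from the commitment itself, Fiat-Shamir style:
oracle = [f"symbol-{i}".encode() for i in range(8)]  # stand-in proof oracle
root = merkle_root(oracle)
query_index = int.from_bytes(H(root), "big") % len(oracle)
# The prover would then reveal oracle[query_index] together with its Merkle
# authentication path, which the verifier checks against `root`.
```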
The verifier as a constraint system
The proofs of security are done in the random oracle model. As usual, the theoretical random oracle is instantiated with a cryptographic hash function. Most familiar cryptographic hash functions are designed to be computed quickly and efficiently in hardware, but the complexity of arithmetic circuits that compute them is much too high. In order for Fʀᴀᴄᴛᴀʟ to verify another Fʀᴀᴄᴛᴀʟ instance that uses one of these hash functions, it needs to check the correctness of many hashes, which would amount to checking the correctness of many complex circuit evaluations. Instead, Fʀᴀᴄᴛᴀʟ uses a hash function called Rescue, which was recently created specifically to have low arithmetic complexity.
Written by Nat Bunner, zero-knowledge cryptography researcher & protocol developer at Metastate. For feedback or questions, please do not hesitate to contact us: [email protected]
Image credits: Marmot baby via Wikimedia Commons
Interested in zero-knowledge cryptography? Metastate is hiring, check out our Zero-Knowledge Cryptographer & Protocol Developer position.
Expression of genes involved in progesterone receptor paracrine signaling and their effect on litter size in pigs
Xiao Chen, Jinluan Fu & Aiguo Wang
Journal of Animal Science and Biotechnology, volume 7, Article number: 31 (2016)
Embryonic mortality during the period of implantation strongly affects litter size in pigs. Progesterone receptor (PGR) paracrine signaling has been recognized to play a significant role in embryonic implantation. IHH, NR2F2, BMP2, FKBP4 and HAND2 have been shown to be involved in PGR paracrine signaling. The objective of this study was to evaluate the expression of IHH, NR2F2, BMP2, FKBP4 and HAND2 in the endometrium of pregnant sows and to further investigate these genes' effect on litter size in pigs. Real-time PCR, western blot and immunostaining were used to study target gene/protein expression in the porcine endometrium. RFLP-PCR was used to detect single nucleotide polymorphisms (SNPs) of the target genes.
The results showed that the mRNA and protein expression levels of IHH, NR2F2 and BMP2 were up-regulated during the implantation period (P < 0.05 or P < 0.01). All target proteins were mainly observed in the luminal epithelium and glandular epithelium. Interestingly, the staining of NR2F2 and HAND2 was also strong in the stroma. SNP detection revealed a -204C > A mutation in the promoter region of the NR2F2 gene. Three genotypes were found in Large White, Landrace and Duroc sows. A total of 1847 litter records from 625 sows genotyped at the NR2F2 gene were used to analyze the total number born (TNB) and number born alive (NBA). The analysis of the effect on litter size suggested that sows with genotype CC tend to have larger litters.
These results revealed the expression patterns of genes/proteins involved in PGR paracrine signaling over implantation time, and a candidate gene for litter size was identified from the genes involved in this signaling. This study could be a resource for further studies to identify the roles of these genes in embryonic implantation in pigs.
Most reproductive traits are complex in terms of their genetic architecture [1]. Litter size is one of the most important economic traits in pig production. However, as a quantitative trait, litter size has low heritability (0.1–0.15) [2], and it cannot be measured until the age of sexual maturity. These biological constraints can be potentially ameliorated by a better knowledge of the genetic regulation of litter size, which will lead to new tools to implement gene and/or marker assisted selection [3].
The implantation process is one of the important factors affecting litter size in pigs, owing to the high embryonic mortality during this stage. Because of the significant role that progesterone receptor (PGR) plays in pregnancy [4–7], paracrine signaling initiated by PGR within the uterine microenvironment during the implantation period promotes implantation of the conceptus as well as the development and maintenance of gestation [8, 9]. It has been shown that during the early stage of pregnancy the function of PGR can be successfully transmitted through the HH–NR2F2 signaling axis. Indian hedgehog (IHH), which was identified as an acute PGR target gene [10], is a known member of the hedgehog (HH) signaling pathway. The HH signaling pathway has been demonstrated to be critical for embryonic development and operates in an epithelial-to-mesenchymal manner within the uterus (reviewed in [11]). NR2F2 (nuclear receptor subfamily 2, group F, member 2) has been identified as a critical regulator of cell differentiation and tissue development as well as angiogenesis and metabolism (reviewed in [12]). IHH and NR2F2 interact as the HH–NR2F2 axis, which plays a role in transducing an epithelial-to-stromal signal that initiates embryonic implantation and subsequently decidualization. BMP2 (bone morphogenetic protein 2) and FKBP4 (FK506 binding protein 4) act as downstream target genes of the HH–NR2F2 axis and are necessary and sufficient for implantation and decidualization. BMP2 acts via a paracrine mechanism to initiate decidualization after embryonic implantation, and also plays a fundamental role in preparing the epithelium for implantation through the regulation of Fkbps and Wnt ligands. HAND2 is a basic helix-loop-helix (bHLH) transcription factor and a known downstream target of PGR. HAND2 is a critical mediator between active paracrine signaling by PGR and the inhibition of estrogen-induced proliferation within the epithelium, which is critical for embryonic implantation.
Therefore, PGR paracrine signaling is critical for embryonic implantation. Porcine embryos begin to attach to the uterus on pregnancy days 13 and 14, and implantation completes from pregnancy day 18 to day 24 [13]. In this research, we detected the expression levels of the genes/proteins involved in PGR paracrine signaling, including IHH, NR2F2, BMP2, FKBP4 and HAND2, in the endometrium on d 13, 18 and 24 of gestation in pigs. SNPs of these genes were detected, and the association between the polymorphism and litter size in Large White, Landrace and Duroc pigs was analyzed. The results will provide information towards a better understanding of PGR paracrine signaling, which regulates implantation and subsequently affects litter size in pigs.
Animal materials
The Animal Care and Use Committee of China Agricultural University reviewed and approved the experimental protocol used in this study (Code: SYXK (Jing) 2009-0030). Multiparous Large White sows (5th parity) were observed daily for standing heat in the presence of a boar. The sows of the pregnant groups (three groups, three sows each group) were inseminated twice, 12 h and 24 h after heat detection, respectively [14]. The sows of the non-pregnant group (three sows) were treated with inactivated sperm from the same boar [14]. Pregnant sows were slaughtered by electrocution on d 13, 18 and 24 after insemination. Samples of the endometrium attachment sites and inter-sites were taken. Samples were taken from three locations of each uterine horn: proximal (the end, close to the ovaries), medial, and distal (next to the corpus uteri) [14]. Non-pregnant sows were slaughtered on d 13 after insemination. Samples were taken from the comparable locations. Endometrial tissue sampling was carried out according to the procedure of Lord, with minor modifications [15]. The samples used for real time PCR and western blot were collected immediately, snap frozen in liquid nitrogen and stored at −80 °C. The samples used for immunohistochemistry were collected and placed in a tube containing pre-cooled paraformaldehyde solution (4 %, pH = 7.4) and placed on a rocker overnight for fixation of the tissue. Once the period of fixation was finished, the tissue was rinsed in PBS, and then processed through a series of ethanol washes to displace the water. Then the tissue was infiltrated with and embedded in paraffin. Paraffin-embedded tissues were sliced at 5 μm thickness using a microtome (Leica2016, Germany).
Animals used to identify candidate genes for litter size were from Beijing Huadu Swine Breeding Company LTD. All sows were reared and fed under the same conditions. Ear tissue samples of 625 Large White, Landrace and Duroc sows were collected in centrifuge tubes (1.5 mL) with 70 % ethanol and stored at 4 °C until DNA extraction. DNA was extracted by phenol and chloroform (1:1) extraction. There were eight sire families in Large White, eight sire families in Landrace, and seven sire families in Duroc sows. A total of 1847 litter records were used for statistical analysis. Litter size records, total number born (TNB) and number born alive (NBA), were recorded by parity.
RNA isolation and real time quantitative PCR
Trizol reagent (Invitrogen, Carlsbad, CA, USA) was used to extract total RNA, according to the manufacturer's instructions. For each animal, total RNA consisted of a mix of an equal quantity of total RNA from three locations of each uterine horn: proximal (the end, close to the ovaries), medial, and distal (next to the corpus uteri).
For each sample, first strand cDNA was synthesized using 1 μg of total RNA. The M-MLV FIRST STRAND KIT (Invitrogen, Shanghai, China) and an oligo (dT)18 primer were used in a 20 μL reverse transcription reaction following the supplier's instructions. Transcript-specific primer pairs (see Additional file 1: Table S1) were designed with Oligo 6.0 software. Standard PCRs on cDNA were carried out to verify amplification sizes. Transcript quantification was performed using SYBR Green mix (Roche Diagnostics GmbH, Roche Applied Science, Mannheim, Germany) in a Roche LightCycler 480 (Roche Diagnostics GmbH, Roche Applied Science, Mannheim, Germany). The RT-PCR reactions were prepared in a total volume of 20 μL containing 5 μL of cDNA (50 ng, 1:100 dilution), 10 μL of SYBR Green mix, 3 μL of water supplied with the kit and 0.02 μmol/L of both forward and reverse gene-specific primers. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served as the internal reference gene. Cycling conditions were 95 °C for 10 min, followed by 45 cycles of 95 °C (10 s) and 60 °C (10 s), where the fluorescence was acquired. Finally, a dissociation curve to test PCR specificity was generated by one cycle at 95 °C (10 s) followed by 60 °C (1 min) and a ramp to 95 °C at 0.2 °C/s, with fluorescence acquired during the ramp. The PCR efficiency of each gene was estimated by standard curve calculation using four points of cDNA serial dilutions. Ct values were transformed to quantities using the comparative Ct method, setting the relative quantities of the non-pregnant group for each gene to 1 (Qty = 10^(−ΔCt/slope)). Data normalization was carried out using GAPDH as the reference gene. Comparisons of gene expression levels were made using a t-test.
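For illustration, the quantification formula above can be written as a short computation; this is a minimal sketch with hypothetical Ct values and standard-curve slopes, not data from this study.

```python
def rel_qty(ct, ct_calibrator, slope):
    """Qty = 10^(-ΔCt/slope), with ΔCt taken relative to the calibrator
    (non-pregnant) group, whose quantity is thereby set to 1."""
    return 10 ** (-(ct - ct_calibrator) / slope)

# Hypothetical Ct values and standard-curve slopes:
target = rel_qty(ct=24.1, ct_calibrator=26.0, slope=-3.32)   # gene of interest
gapdh = rel_qty(ct=18.3, ct_calibrator=18.5, slope=-3.40)    # reference gene
normalized = target / gapdh   # expression relative to the non-pregnant group
```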
Western-blot
Frozen sections of endometrial samples were prepared and western blotting was performed as previously described, with minor modification [16]. Tissue protein was extracted (0.05 mol/L Tris–HCl, NaCl 8.76 mg/mL, 1 % TritonX-100 and 100 μg/mL PMSF) (Sunbio, China) using a vortex mixer (Kylinbell, China). Total protein concentrations were determined using the BCA Protein Assay Kit (Sunbio, China) according to the manufacturer's recommendations.
Samples (80–120 μg) were separated in a 10 % Tris–HCl polyacrylamide gel in an electrophoresis system (Liuyi, China), and protein from the gel was transferred onto a single PVDF membrane (BioRad, USA). After being rinsed in TBST for 5 min at room temperature (RT), the membrane was soaked in 5 % skim milk (in TBST) for 1 h. Next, the membrane was immersed in a specific dilution (IHH, Santa Cruz Biotechnology, Inc., sc-13088, 1:100; NR2F2, Abcam (Hong Kong) Ltd., ab50487, 1:100; BMP2, Abcam (Hong Kong) Ltd., ab14933, 1:100; FKBP4, Abcam (Hong Kong) Ltd., ab97306, 1:150; HAND2, Biobyt, orb36304, 1:100; β-Actin, 1:200) of the primary antibody at 4 °C overnight. After being rinsed in TBST for 5 min three times at RT, the membrane was immersed in a 1:1000 dilution of the secondary antibody (HRP) (Santa Cruz, USA) for 1 h, and then rinsed in TBST for 5 min three times at RT. Finally, the membrane was colored using the DAB kit (Invitrogen, USA) and exposed using the Chemiluminescence Detection Kit for HRP (Sunbio, China). Scanned images were quantified using ImageJ analysis software.
Immunohistochemistry

Sow endometrial slides were subjected to immunohistochemical analysis with an immunostaining kit, Histostain-Plus Mouse Primary (Invitrogen, USA), according to the manufacturer's recommendations. After being washed in PBS, the sections were incubated with 10 % horse serum (Invitrogen, USA) at RT for 30 min. The washed sections were then reacted with primary antibodies (rabbit polyclonal to IHH, Santa Cruz Biotechnology, Inc., sc-13088; rabbit polyclonal to NR2F2, Abcam (Hong Kong) Ltd., ab50487; rabbit polyclonal to BMP2, Abcam (Hong Kong) Ltd., ab14933; rabbit polyclonal to FKBP4, Abcam (Hong Kong) Ltd., ab97306; rabbit polyclonal to HAND2, Biobyt, orb36304; mouse monoclonal to β-Actin, Santa Cruz Biotechnology, Inc., sc-81178) at 4 °C overnight. This was followed by incubation with a biotinylated second antibody (Invitrogen, USA) at 37 °C for 25 min; after being washed in PBS for 15 min three times, the sections were incubated with streptavidin-peroxidase (HRP) (Invitrogen, USA) at 37 °C for 25 min. Finally, the slides were washed with PBS and stained with the DAB kit (Invitrogen, USA). After being washed fully with water for 5 min, the slides were stained with hematoxylin and eosin, and then examined by microscope (BH2, Olympus). Instead of primary antibodies, PBS was used as a negative control. Endometrial tissues of non-pregnant sows were used as a positive control [17]. ImagePro Plus software was used to measure the level of staining. The gray value of the portion of the picture without tissue was set as 0 to correct the background. Scoring of staining was carried out according to the procedure of Constantine A. Axiotis (1991), with minor modifications [18]. Expression of each target protein was determined by assessing the staining intensity and the percentage of stained cells. The staining intensity was rated as follows: weak staining (score = 1), moderate staining (score = 2), strong staining (score = 3). The percentage of positive cells was calculated using ImagePro Plus. The final score was calculated as: ∑ (percentage of positive cells) × (staining intensity score). The average of five different areas per picture was recorded. According to the final score, protein expression was classified as follows: <1.0, weak; 1.0–1.5, moderate; >1.5, strong.
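For illustration, the scoring rule above can be expressed as a short computation; the per-field fractions below are hypothetical, not measurements from this study.

```python
# Hypothetical data for five fields of one slide:
# each field maps intensity score (1/2/3) -> fraction of cells at that score.
fields = [
    {1: 0.20, 2: 0.30, 3: 0.10},
    {1: 0.15, 2: 0.40, 3: 0.05},
    {1: 0.25, 2: 0.25, 3: 0.10},
    {1: 0.10, 2: 0.35, 3: 0.15},
    {1: 0.20, 2: 0.30, 3: 0.10},
]
# Final score = Σ (fraction of positive cells) × (intensity score), averaged:
scores = [sum(frac * score for score, frac in field.items()) for field in fields]
final = sum(scores) / len(scores)
grade = "weak" if final < 1.0 else "moderate" if final <= 1.5 else "strong"
```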
Detection of SNPs and litter size association analysis
DNA was extracted by standard phenol–chloroform (1:1) techniques. Eighteen PCR primer pairs (see Additional file 2: Table S2) were designed to detect SNPs of the target genes. PCR amplifications were carried out on an Eppendorf Mastercycler gradient 5331 PCR System (Eppendorf, Germany). The polymerase chain reaction amplification was performed using 50–100 ng of genomic DNA, 25 μL Taq PCR MasterMix (Taq DNA Polymerase: 0.05 units/μL; MgCl2: 4 mmol/L; dNTPs: 0.4 mmol/L) and 10 pmol of each primer in a 50 μL final volume. All reagents were obtained from the National Laboratories for Agrobiotechnology, China Agricultural University. The following PCR amplification conditions were used: a denaturation step at 95 °C for 4 min; 30 cycles at 95 °C for 30 s, 52 °C ~ 55 °C for 30 s, and 72 °C for 30 s ~ 1 min 30 s; and a final extension step at 72 °C for 10 min. Amplified fragments were separated by 1.5 % agarose gel electrophoresis (AGE).
Using pooled DNA amplification and sequencing, several mutations were found. The mutation −204C > A in the promoter region of the NR2F2 gene caused the deletion of a transcription factor binding site (TFBS) for CREB (cAMP-response-element-binding protein).
NR2F2 was selected as the candidate gene for litter size based on its mRNA/protein expression level during the embryonic implantation period and the mutation found in its promoter region. PCR-restriction fragment length polymorphism (PCR-RFLP) was used to detect the different genotypes. HaeIII (NEB R0108L, BioLabs Inc.) was used. PCR products of the three genotypes were randomly selected and sequenced to validate the results.
Allele and genotype frequencies of NR2F2 were calculated from the 625 sows. The GLM procedure of SAS 8.02 software was used to compute the least square means of TNB and NBA. According to the analysis, the effect of sire and dam on litter size was not significant, so the following linear model was used to analyze the genotype effect of NR2F2:
$$ Y_{ijkl} = \mu + HYS_i + P_j + G_k + e_{ijkl} $$
where $Y_{ijkl}$ is the trait value (TNB or NBA), $\mu$ is the overall mean, $HYS_i$ is the effect of herd-year-season (i = 1 to 52), $P_j$ is the effect of parity (j = 1, 2, ≥3 and all parities), $G_k$ is the effect of genotype (k = 1 to 3) and $e_{ijkl}$ is the random residual. The data were analyzed separately for the first parity, the second parity, the third and following parities, and all parities. The additive effect and the dominant effect were calculated according to the methods of Rothschild et al. [19].
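The original analysis used the GLM procedure of SAS; purely as an illustrative sketch, an equivalent fixed-effects fit can be expressed in Python with statsmodels. The data frame, its column names, and its values below are hypothetical stand-ins for the real litter records.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical litter records (values invented for illustration only):
records = pd.DataFrame({
    "TNB":      [11, 12, 13, 10, 14, 12, 11, 13, 12, 10, 13, 14],
    "HYS":      ["h1"] * 6 + ["h2"] * 6,          # herd-year-season
    "parity":   ["1", "2", "3+"] * 4,
    "genotype": ["AA", "AC", "CC", "CC", "AA", "AC",
                 "AC", "CC", "AA", "AA", "CC", "AC"],
})
# Fixed-effects model Y = mu + HYS + parity + genotype + e:
fit = smf.ols("TNB ~ C(HYS) + C(parity) + C(genotype)", data=records).fit()
print(fit.params)   # effect estimates relative to the baseline levels
```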
mRNA expression in porcine endometrium
The effect of the day of pregnancy on mRNA expression of IHH, NR2F2, BMP2, FKBP4 and HAND2 in the sows' endometrium during the implantation period is shown in Table 1. In pregnant sows, the expression of IHH was significantly higher than that of non-pregnant sows on d 18 and d 24 of pregnancy (P < 0.05) (Table 1). The expression of IHH at attachment sites showed an upward trend. This was consistent with the expression of NR2F2, which was significantly up-regulated during implantation.
Table 1 The mRNA level of target genes in the endometrium during implantation (M ± S.D.)
The expression of BMP2 was significantly up-regulated (P < 0.05 or P < 0.01) during implantation (Table 1), consistent with that of IHH and NR2F2. For FKBP4, at attachment sites, expression was significantly down-regulated on d 24 of pregnancy (P < 0.01) (Table 1). The expression of HAND2 was highest on d 18 of pregnancy (P < 0.05) (Table 1).
Protein expression in porcine endometrium
The protein expression of IHH, NR2F2, BMP2, FKBP4 and HAND2 in the porcine endometrium during the embryonic implantation period is shown in Fig. 1 and Table 2. The protein expression of IHH was significantly up-regulated on d 18 and d 24 of pregnancy (P < 0.05 or P < 0.01) (Fig. 1 and Table 2), similar to its mRNA expression. The protein expression of BMP2 was higher on d 13 of pregnancy (P < 0.05) (Fig. 1 and Table 2). For FKBP4, there was no significant difference in protein expression between the pregnant groups and the non-pregnant group (Fig. 1 and Table 2), which was not consistent with its mRNA expression pattern. The protein expression of HAND2 was higher in pregnant sows (P < 0.01) (Fig. 1 and Table 2), except at attachment sites on d 18 of pregnancy.
The protein relative abundance of target proteins in the endometrium of sows. Note: NP, endometrium of non-pregnant sows; D13a, endometrial attachment sites on d 13 of gestation; D13b, endometrial inter-sites on d 13 of gestation; D18a, endometrial attachment sites on d 18 of gestation; D18b, endometrial inter-sites on d 18 of gestation; D24a, endometrial attachment sites on d 24 of gestation; D24b, endometrial inter-sites on d 24 of gestation
Table 2 The protein relative abundance of target proteins in endometrium of sows
Protein localization in porcine endometrium
During the implantation period, IHH, NR2F2, BMP2, FKBP4 and HAND2 were observed in the luminal epithelium and glandular epithelium (Figs. 2, 3, 4, 5, 6). In the stroma, the staining of BMP2 and FKBP4 was weak, but the staining of NR2F2 and HAND2 was strong (Figs. 2, 3, 4, 5, 6). The results are summarized in Table 3.
Immunohistochemical localization of IHH in the pig uterus. GE = glandular epithelium; LE = luminal epithelium; S = stroma. a Negative control; b Immunohistochemical staining of non-pregnant sow uterus with IHH antibody; c Immunohistochemical staining of the porcine uterus attachment site with IHH antibody on d 13 of pregnancy; d Immunohistochemical staining of the porcine uterus inter-site with IHH antibody on d 13 of pregnancy; e Immunohistochemical staining of the porcine uterus attachment site with IHH antibody on d 18 of pregnancy; f Immunohistochemical staining of the porcine uterus inter-site with IHH antibody on d 18 of pregnancy; g Immunohistochemical staining of the porcine uterus attachment site with IHH antibody on d 24 of pregnancy; h Immunohistochemical staining of the porcine uterus inter-site with IHH antibody on d 24 of pregnancy
Immunohistochemical localization of NR2F2 in the pig uterus. GE = glandular epithelium; LE = luminal epithelium; S = stroma. a Negative control; b Immunohistochemical staining of non-pregnant sow uterus with NR2F2 antibody; c Immunohistochemical staining of the porcine uterus attachment site with NR2F2 antibody on d 13 of pregnancy; d Immunohistochemical staining of the porcine uterus inter-site with NR2F2 antibody on d 13 of pregnancy; e Immunohistochemical staining of the porcine uterus attachment site with NR2F2 antibody on d 18 of pregnancy; f Immunohistochemical staining of the porcine uterus inter-site with NR2F2 antibody on d 18 of pregnancy; g Immunohistochemical staining of the porcine uterus attachment site with NR2F2 antibody on d 24 of pregnancy; h Immunohistochemical staining of the porcine uterus inter-site with NR2F2 antibody on d 24 of pregnancy
Immunohistochemical localization of BMP2 in the pig uterus. GE = glandular epithelium; LE = luminal epithelium; S = stroma. a Negative control; b Immunohistochemical staining of non-pregnant sow uterus with BMP2 antibody; c Immunohistochemical staining of the porcine uterus attachment site with BMP2 antibody on d 13 of pregnancy; d Immunohistochemical staining of the porcine uterus inter-site with BMP2 antibody on d 13 of pregnancy; e Immunohistochemical staining of the porcine uterus attachment site with BMP2 antibody on d 18 of pregnancy; f Immunohistochemical staining of the porcine uterus inter-site with BMP2 antibody on d 18 of pregnancy; g Immunohistochemical staining of the porcine uterus attachment site with BMP2 antibody on d 24 of pregnancy; h Immunohistochemical staining of the porcine uterus inter-site with BMP2 antibody on d 24 of pregnancy
Immunohistochemical localization of FKBP4 in the pig uterus. GE = glandular epithelium; LE = luminal epithelium; S = stroma. a Negative control; b Immunohistochemical staining of non-pregnant sow uterus with FKBP4 antibody; c Immunohistochemical staining of the porcine uterus attachment site with FKBP4 antibody on d 13 of pregnancy; d Immunohistochemical staining of the porcine uterus inter-site with FKBP4 antibody on d 13 of pregnancy; e Immunohistochemical staining of the porcine uterus attachment site with FKBP4 antibody on d 18 of pregnancy; f Immunohistochemical staining of the porcine uterus inter-site with FKBP4 antibody on d 18 of pregnancy; g Immunohistochemical staining of the porcine uterus attachment site with FKBP4 antibody on d 24 of pregnancy; h Immunohistochemical staining of the porcine uterus inter-site with FKBP4 antibody on d 24 of pregnancy
Immunohistochemical localization of HAND2 in the pig uterus. GE = glandular epithelium; LE = luminal epithelium; S = stroma. a Negative control; b Immunohistochemical staining of non-pregnant sow uterus with HAND2 antibody; c Immunohistochemical staining of the porcine uterus attachment site with HAND2 antibody on d 13 of pregnancy; d Immunohistochemical staining of the porcine uterus inter-site with HAND2 antibody on d 13 of pregnancy; e Immunohistochemical staining of the porcine uterus attachment site with HAND2 antibody on d 18 of pregnancy; f Immunohistochemical staining of the porcine uterus inter-site with HAND2 antibody on d 18 of pregnancy; g Immunohistochemical staining of the porcine uterus attachment site with HAND2 antibody on d 24 of pregnancy; h Immunohistochemical staining of the porcine uterus inter-site with HAND2 antibody on d 24 of pregnancy
Table 3 The expression of different position of target proteins in endometrium of sows
Detection of SNPs of target genes and association analysis
After analyzing samples from 625 sows, several mutations were found (Table 4). The mutation -204C > A in the promoter region of the NR2F2 gene was found, and this mutation caused the deletion of the TFBS for CREB (Fig. 7). A synonymous mutation, 9619G > A, in exon 3 of the BMP2 gene was found (Table 4). Seven mutations in the FKBP4 gene were found, but none was a missense mutation (Table 4).
Table 4 Location and type of nucleotide mutation of target genes
Change of transcription factor binding site caused by the mutation. a C at 204 bp; b A at 204 bp
NR2F2 was selected as the candidate gene for litter size based on its mRNA/protein expression level during the embryonic implantation period and the mutation found in its promoter region. PCR-RFLP was used to detect the different genotypes. Representative SNP sequencing outputs for the genotypes are shown in Fig. 8. The genotype frequencies and allele frequencies at each polymorphic locus in Large White, Landrace and Duroc sows are shown in Table 5. The genotype frequencies of AA, AC and CC in Large White were 0.388, 0.414, and 0.198. In Landrace, the genotype frequencies were 0.088, 0.366, and 0.546. In Duroc, the genotype frequencies were 0.358, 0.433, and 0.208. None of the three breeds was found to be in Hardy-Weinberg equilibrium (HWE).
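For illustration, the Hardy-Weinberg check implied here reduces to a one-degree-of-freedom chi-square test; the sketch below uses the reported Large White genotype frequencies only as example inputs, and the sample count is a hypothetical stand-in.

```python
# Hypothetical sample count; the Large White frequencies above are example inputs.
n = 232
obs = {"AA": round(0.388 * n), "AC": round(0.414 * n), "CC": round(0.198 * n)}
total = sum(obs.values())

# Allele frequencies estimated from the observed genotype counts:
p = (2 * obs["AA"] + obs["AC"]) / (2 * total)   # frequency of allele A
q = 1 - p                                       # frequency of allele C

# Expected genotype counts under Hardy-Weinberg proportions:
exp = {"AA": p * p * total, "AC": 2 * p * q * total, "CC": q * q * total}
chi2 = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)
# Compare chi2 with the chi-square critical value at 1 df (3.84 at alpha = 0.05).
```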
PCR-RFLP results of swine NR2F2 gene and sequence image of the different genotypes. a Genotypes of the RFLP marker of PCR products; b Sequence image of mutation -204C > A
Table 5 Number of alleles (n), allele and genotype frequencies of NR2F2, observed heterozygosity (h)
The data for TNB and NBA were analyzed for the first parity, the second parity, the third and following parities, and all parities. The least square means in Large White, Landrace and Duroc are shown in Tables 6, 7 and 8. In Large White, in the first parity, the sows with the AA genotype had an advantage of 0.81 (P < 0.05) NBA per litter over the sows with the CC genotype. In the second parity, the sows with the CC genotype had an advantage of 1.76 (P < 0.01) and 1.56 (P < 0.01) TNB per litter over the sows with AA and AC, respectively. NBA of the CC genotype was 0.99 (P < 0.05) piglets per litter higher than that of the AA genotype. In the third and following parities, NBA significantly increased for the CC genotype, with 0.60 (P < 0.05) and 0.85 (P < 0.01) more piglets in comparison with the AA and AC genotypes, respectively. In all parities, the sows with the CC genotype had an advantage (P < 0.05) of 0.89 and 0.64 TNB per litter over the AA and AC genotype sows, respectively, and NBA of the CC genotype was 0.97 (P < 0.01) and 0.88 (P < 0.01) piglets per litter higher than that of the AA and AC genotypes, respectively.
Table 6 Effects of the NR2F2 polymorphism on total number born (TNB) and number born alive (NBA) in Large White (LS means ± S.E.)
Table 7 Effects of the NR2F2 polymorphism on total number born (TNB) and number born alive (NBA) in Landrace (LS means ± S.E.)
Table 8 Effects of the NR2F2 polymorphism on total number born (TNB) and number born alive (NBA) in Duroc (LS means ± S.E.)
In Landrace, in the third and following parities, the sows with the CC genotype had an advantage of 0.53 TNB and 0.61 NBA per litter over the sows with the AA genotype, and 0.53 TNB over the sows with the AC genotype, but not significantly. In all parities, TNB of the CC genotype was 1.05 (P < 0.05) piglets higher than that of the AA genotype, and the sows with the CC genotype had a non-significant advantage of 0.53 and 0.22 NBA per litter over the sows with AA and AC, respectively.
In Duroc, in the second parity, the sows with the CC genotype had an advantage of 0.66 piglets (P < 0.05) for TNB and 1.34 piglets (P < 0.05) for NBA per litter over the sows with the AA genotype. In the third and following parities, the sows with the CC genotype had an advantage of 1.35 piglets (P < 0.01) for TNB and 1.34 piglets (P < 0.05) for NBA per litter over the sows with the AA genotype.
Expression of genes involved in paracrine signaling in the sow endometrium
The embryonic peri-implantation period in pigs is especially long. During the peri-implantation period of pregnancy, the uterine LE and conceptus trophectoderm develop adhesion competency in synchrony to initiate the adhesion cascade within a restricted period of the uterine cycle termed the "window of receptivity" [20–22]. In pigs, this window is orchestrated through the actions of progesterone and estrogen to regulate locally produced cytokines, growth factors, cell surface glycoproteins, cell surface adhesion molecules, and extracellular matrix (ECM) proteins [23]. A fundamental paradox of early pregnancy is that cessation of expression of PGR and ESR1 by uterine epithelia is a prerequisite for uterine receptivity to implantation, expression of genes by uterine epithelia, and selective transport of molecules into the uterine lumen that support conceptus development. Thus, effects of P4 are mediated via PGR expressed in uterine stromal and myometrial cells by stromal cell derived growth factors known as "progestamedins" [24, 25]. As previously indicated, progesterone down-regulated the expression of PGR in the uterine epithelia of pigs after d 10 of pregnancy, immediately prior to the time when the endometrium becomes receptive to implantation [26–28]. In pigs, down-regulation of PGR in uterine epithelia is a prerequisite for the expression of genes for uterine secretions and transport of molecules into the uterine lumen that support conceptus development. Down-regulation of PGR is associated with down-regulation of mucin 1 (MUC1), as well as up-regulation of the expression of secreted phosphoprotein 1 (SPP1) and insulin-like growth factor binding protein 1 (IGFBP1). During conceptus elongation and the early peri-implantation period, the endometrium increases the release of a number of growth factors and cytokines such as epidermal growth factor (EGF), insulin-like growth factor-1 (IGF-1), fibroblast growth factor 7 (FGF7), vascular endothelial growth factor (VEGF), interleukin 6 (IL-6), transforming growth factor beta (TGFβ), and leukemia inhibitory factor (LIF) [29, 30]. Some of these genes have been reported to have significant effects on litter size in pigs, such as SPP1, VEGF, MUC1 and LIF [1, 31–33].
PGR paracrine signaling has been recognized to play a significant role in pregnancy in humans and mice, but it has not been studied in pigs [5]. IHH is a progesterone receptor target activated within the epithelium, which signals downstream to NR2F2 in the stroma, establishing the HH–NR2F2 axis across the dual uterine compartments. Strong evidence exists for a role of the HH–NR2F2 axis in the regulation of reproduction in humans and mice [12, 34]. Identification of the signaling pathway from stroma to epithelium would aid in the understanding of how the stroma contributes to embryo implantation. A previous study of changes in the endometrial transcriptome during early stages of conceptus attachment to the uterine LE showed that IHH is significantly regulated during pregnancy in pigs. In the present study, compared with non-pregnant sows, the mRNA and protein expression of IHH was up-regulated during implantation. The expression of IHH in the bovine uterus has also been studied; the results showed that IHH is modulated by progesterone in the bovine uterus and may need to be down-regulated to allow expression of genes that drive conceptus elongation in cattle [35]. In pigs, the conceptus elongates rapidly before d 13 of gestation, and the filamentous conceptus continues to elongate, but slowly, after d 13 of gestation. The expression of IHH did not change significantly on d 13 of pregnancy in our results, perhaps because the conceptus elongates slowly after d 13 of pregnancy in pigs [36]. The expression of NR2F2 was significantly up-regulated during implantation, and the expression at attachment sites showed an upward trend. This was consistent with a previous study, which found NR2F2 up-regulated on d 12 of gestation in Yorkshire pigs [37]. NR2F2 was shown to activate hypoxia-inducible factor 1 alpha (HIF-1α), and HIF-1 is an important mediator of estrogen-induced VEGF expression in the uterus [38, 39]. The authors proposed that the expression of NR2F2 is associated with greater activation of angiogenesis at the stage of implantation in the Yorkshire breed [37]. The expression of IHH and NR2F2 was consistent with their functional roles in embryonic implantation and also with previous studies [40–44]. It has been reported that the HH–NR2F2 axis can transmit the paracrine signal initiated by PGR from the epithelium to the stroma [42]. The protein localization of IHH in the porcine endometrium showed that IHH was observed mainly in the luminal epithelium and glandular epithelium. NR2F2 was observed especially strongly in the stroma. This confirmed that the HH–NR2F2 axis is important in mediating the signal from the epithelium to effector genes in the stroma.
BMP2, as a downstream gene of the HH–NR2F2 axis, has been demonstrated to be a critical effector for decidualization and the maintenance of pregnancy post-implantation. BMP2 likely acts as a paracrine signaling factor for the initiation of the proliferative response after embryonic implantation within the uterine stroma. In the present study, the mRNA expression of BMP2 was significantly up-regulated during implantation, consistent with the expression of IHH and NR2F2. A previous in vitro study found that BMP2 and BMP6 can significantly suppress progesterone production in pigs [45]. This is consistent with our result, which showed BMP2 up-regulated while PGR is down-regulated during the implantation period. The protein expression of BMP2 was significantly up-regulated on d 13 of pregnancy, which suggests that BMP2 promotes implantation in cooperation with IHH and NR2F2. On d 18 and 24, however, its expression was not significantly regulated, possibly because decidualization does not occur in pigs.
HAND2 is another downstream target of PGR [8]. In the stroma, HAND2 plays an important role in the inhibition of the FGF pathway, a pathway known to be involved in the promotion of epithelial proliferation by estrogen signaling [8]. Therefore, HAND2 is important in inhibiting estrogen-induced epithelial proliferation in the uterus [8]. The inhibition of epithelial proliferation by PGR signaling is possibly via the HH–NR2F2 axis, which then activates HAND2, causing the inhibition of estrogen signaling and subsequently allowing proper embryonic implantation. In the present study, the mRNA and protein expression of HAND2 were both up-regulated on d 13 of pregnancy. This may be related to its inhibition of estrogen signaling, which further promotes the positive role of PGR in implantation. In previous studies, HAND2 was found to be up-regulated during the implantation period and the late gestation period in pigs [31, 46]; the researchers found HAND2 to be related to the receptivity of the uterus and the vascular development of the placenta [31, 46]. The mRNA of HAND2 was up-regulated on d 18 of pregnancy, but the protein expression was not. There may be a regulatory mechanism at the translational level, which needs further research. The protein localization in the porcine endometrium showed that HAND2 was observed strongly in the luminal epithelium, glandular epithelium, and stroma. This indicates that HAND2 plays an important role in transmitting PGR signaling from the epithelium to the stroma.
The variations of NR2F2 and its association with litter size
Marker-assisted selection (MAS) in conjunction with traditional selection methods is most effective for traits such as litter size, which are either expressed later in life, are sex-dependent, or are of low heritability [47]. The candidate gene approach has led to notable success in identifying reproduction-related genetic markers or major genes, such as ESR, PRLR and the erythropoietin receptor (EPOR) [19, 48–50].
In the present study, we selected NR2F2 as the candidate gene for litter size in pigs, due to its biological function and the interesting mutation. Three genotypes were found: AA, AC and CC. The association with litter size revealed that CC is the favorable genotype. Analysis using the ConSite database (http://consite.genereg.net/cgi-bin/consite?rm=t_input_single) showed that the C → A mutation causes deletion of the TFBS for CREB (Fig. 7). CREB has been shown to play an important role in the activation of transcription and the regulation of gene transcription [51, 52]. The deletion of this binding site may affect the expression of NR2F2 in the porcine endometrium and stroma. The effect of NR2F2 on litter size is possibly associated with its expression in the endometrium during embryonic implantation. In view of the PGR-IHH-NR2F2 axis, this would likely affect the signal of PGR from the endometrium to the stroma, thereby affecting the embryonic implantation process and litter size.
In the current research, the expression patterns of genes/proteins involved in PGR paracrine signaling over implantation time were studied, and a candidate gene for litter size was identified from the genes involved in this signaling. The present study could be a resource for further studies to identify the roles of these genes in embryonic implantation in pigs.
Abbreviations

AGE: agarose gel electrophoresis

CREB: cAMP-response-element-binding protein

D13a: endometrial attachment sites on day 13 of gestation

D13b: the endometrial inter-sites on day 13 of gestation

HWE: Hardy-Weinberg equilibrium

MAS: marker-assisted selection

NBA: number born alive

NP: endometrium of non-pregnant sows

PCR-RFLP: PCR-restriction fragment length polymorphism

PGR: progesterone receptor

TFBS: transcription factor binding sites

TNB: total number born

References
Spotter A, Muller S, Hamann H, Distl O. Effect of polymorphisms in the genes for LIF and RBP4 on litter size in two German pig lines. Reprod Domest Anim. 2009;44:100–5.
Johnson RK, Nielsen M, Casey DS. Responses in ovulation rate, embryonal survival, and litter traits in swine to 14 generations of selection to increase litter size. J Anim Sci. 1999;77:541–57.
Fernandez-Rodriguez A, Munoz M, Fernandez A, Pena RN, Tomas A, Noguera JL, et al. Differential gene expression in ovaries of pregnant pigs with high and low prolificacy levels and identification of candidate genes for litter size. Biol Reprod. 2010;84:299–307.
Brayman MJ, Julian J, Mulac-Jericevic B, Conneely OM, Edwards DP, Carson DD. Progesterone receptor isoforms A and B differentially regulate MUC1 expression in uterine epithelial cells. Mol Endocrinol. 2006;20:2278–91.
Lydon JP, DeMayo FJ, Funk CR, Mani SK, Hughes AR, Montgomery CA, et al. Mice lacking progesterone receptor exhibit pleiotropic reproductive abnormalities. Genes Dev. 1995;9:2266–78.
Tibbetts TA, Conneely OM, O'Malley BW. Progesterone via its receptor antagonizes the pro-inflammatory activity of estrogen in the mouse uterus. Biol Reprod. 1999;60:1158–65.
Mote PA, Arnett-Mansfield RL, Gava N, deFazio A, Mulac-Jericevic B, Conneely OM, et al. Overlapping and distinct expression of progesterone receptors A and B in mouse uterus and mammary gland during the estrous cycle. Endocrinology. 2006;147:5503–12.
Wetendorf M, Demayo FJ. The progesterone receptor regulates implantation, decidualization, and glandular development via a complex paracrine signaling network. Mol Cell Endocrinol. 2012;357(1–2):108-18.
Bazer FW, Spencer TE, Johnson GA, Burghardt RC, Wu G. Comparative aspects of implantation. Reproduction. 2009;138:195–209.
Takamoto N, Zhao B, Tsai SY, DeMayo FJ. Identification of Indian hedgehog as a progesterone-responsive gene in the murine uterus. Mol Endocrinol. 2002;16:2338–48.
Varjosalo M, Taipale J. Hedgehog: functions and mechanisms. Genes Dev. 2008;22:2454–72.
Lin FJ, Qin J, Tang K, Tsai SY, Tsai MJ. Coup d'Etat: an orphan takes control. Endocr Rev. 2011;32:404–21.
Kyriazakis I, Whittemore C. Whittemore's science and practice of pig production. 3rd ed. Oxford: Blackwell; 2006. p. 105–47.
Samborski A, Graf A, Krebs S, Kessler B, Bauersachs S. Deep sequencing of the porcine endometrial transcriptome on day 14 of pregnancy. Biol Reprod. 2013;88:84.
Lord E, Murphy BD, Desmarais JA, Ledoux S, Beaudry D, Palin MF. Modulation of peroxisome proliferator-activated receptor delta and gamma transcripts in swine endometrial tissue during early gestation. Reproduction. 2006;131:929–42.
Patel V, Ramesh A, Traicoff JL, Baibakov G, Emmert-Buck MR, Gutkind JS, et al. Profiling EGFR activity in head and neck squamous cell carcinoma by using a novel layered membrane Western blot technology. Oral Oncol. 2005;41:503–8.
Hewitt SM, Baskin DG, Frevert CW, Stahl WL, Rosa-Molinar E. Controls for immunohistochemistry: the Histochemical Society's standards of practice for validation of immunohistochemical assays. J Histochem Cytochem. 2014;62:693–7.
Axiotis CA, Monteagudo C, Merino MJ, LaPorte N, Neumann RD. Immunohistochemical detection of P-glycoprotein in endometrial adenocarcinoma. Am J Pathol. 1991;138:799–806.
Rothschild M, Jacobson C, Vaske D, Tuggle C, Wang L, Short T, et al. The estrogen receptor locus is associated with a major gene influencing litter size in pigs. Proc Natl Acad Sci U S A. 1996;93:201–5.
Bazer FW, Spencer TE, Johnson GA, Burghardt RC. Uterine receptivity to implantation of blastocysts in mammals. Front Biosci (Schol Ed). 2011;3:745–67.
Fazleabas AT, Kim JJ, Strakova Z. Implantation: embryonic signals and the modulation of the uterine environment--a review. Placenta. 2004;25 Suppl A:S26–31.
Spencer TE, Johnson GA, Bazer FW, Burghardt RC. Fetal-maternal interactions during the establishment of pregnancy in ruminants. Soc Reprod Fertil Suppl. 2007;64:379–96.
Johnson GA, Bazer FW, Burghardt RC, Spencer TE, Wu G, Bayless KJ. Conceptus-uterus interactions in pigs: endometrial gene expression in response to estrogens and interferons from conceptuses. Soc Reprod Fertil Suppl. 2009;66:321–32.
Cunha GR, Cooke PS, Kurita T. Role of stromal-epithelial interactions in hormonal responses. Arch Histol Cytol. 2004;67:417–34.
Spencer TE, Bazer FW. Biology of progesterone action during pregnancy recognition and maintenance of pregnancy. Front Biosci. 2002;7:d1879–1898.
Bailey DW, Dunlap KA, Erikson DW, Patel AK, Bazer FW, Burghardt RC, et al. Effects of long-term progesterone exposure on porcine uterine gene expression: progesterone alone does not induce secreted phosphoprotein 1 (osteopontin) in glandular epithelium. Reproduction. 2010;140:595–604.
Bazer FW, Burghardt RC, Johnson GA, Spencer TE, Wu G. Interferons and progesterone for establishment and maintenance of pregnancy: interactions among novel cell signaling pathways. Reprod Biol. 2008;8:179–211.
Geisert RD, Pratt TN, Bazer FW, Mayes JS, Watson GH. Immunocytochemical localization and changes in endometrial progestin receptor protein during the porcine oestrous cycle and early pregnancy. Reprod Fertil Dev. 1994;6:749–60.
Bazer FW, Wu G, Spencer TE, Johnson GA, Burghardt RC, Bayless K. Novel pathways for implantation and establishment and maintenance of pregnancy in mammals. Mol Hum Reprod. 2010;16:135–52.
Geisert RD, Lucy MC, Whyte JJ, Ross JW, Mathew DJ. Cytokines from the pig conceptus: roles in conceptus development in pigs. J Anim Sci Biotechnol. 2014;5:51.
Chen X, Li A, Chen W, Wei J, Fu J, Wang A. Differential gene expression in uterine endometrium during implantation in pigs. Biol Reprod. 2015;92:52.
Xiao C, Jinluan F, Aiguo W. Effect of VNTR polymorphism of the Muc1 gene on litter size of pigs. Mol Biol Rep. 2012;39:6251–8.
Putnova L, Kolarikova O, Knoll A, Dvorák J. Association study of osteopontin (SPP1) and estrogen receptor (ESR) genes with reproduction traits in pigs. Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis (Czech Republic); 2001.
Pereira FA, Qiu Y, Zhou G, Tsai MJ, Tsai SY. The orphan nuclear receptor COUP-TFII is required for angiogenesis and heart development. Genes Dev. 1999;13:1037–49.
Forde N, Mehta JP, Minten M, Crowe MA, Roche JF, Spencer TE, et al. Effects of low progesterone on the endometrial transcriptome in cattle. Biol Reprod. 2012;87:124.
Blomberg LA, Long EL, Sonstegard TS, Van Tassell CP, Dobrinsky JR, Zuelke KA. Serial analysis of gene expression during elongation of the peri-implantation porcine trophectoderm (conceptus). Physiol Genomics. 2005;20:188–94.
Gu T, Zhu MJ, Schroyen M, Qu L, Nettleton D, Kuhar D, et al. Endometrial gene expression profiling in pregnant Meishan and Yorkshire pigs on day 12 of gestation. BMC Genomics. 2014;15:156.
Kim EJ, Yoo YG, Yang WK, Lim YS, Na TY, Lee IK, et al. Transcriptional activation of HIF-1 by RORalpha and its role in hypoxia signaling. Arterioscler Thromb Vasc Biol. 2008;28:1796–802.
Koos RD, Kazi AA, Roberson MS, Jones JM. New insight into the transcriptional regulation of vascular endothelial growth factor expression in the endometrium by estrogen and relaxin. Ann N Y Acad Sci. 2005;1041:233–47.
Ingham PW, McMahon AP. Hedgehog signaling in animal development: paradigms and principles. Genes Dev. 2001;15:3059–87.
Johnson RL, Scott MP. New players and puzzles in the Hedgehog signaling pathway. Curr Opin Genet Dev. 1998;8:450–6.
Kurihara I, Lee DK, Petit FG, Jeong J, Lee K, Lydon JP, et al. COUP-TFII mediates progesterone regulation of uterine implantation by controlling ER activity. PLoS Genet. 2007;3:e102.
Matsumoto H, Zhao X, Das SK, Hogan BL, Dey SK. Indian hedgehog as a progesterone-responsive factor mediating epithelial-mesenchymal interactions in the mouse uterus. Dev Biol. 2002;245:280–90.
McMahon AP. More surprises in the Hedgehog signaling pathway. Cell. 2000;100:185–8.
Webb R, Garnsworthy PC, Campbell BK, Hunter MG. Intra-ovarian regulation of follicular development and oocyte competence in farm animals. Theriogenology. 2007;68 Suppl 1:S22–29.
Zhou Q-Y, Fang M-D, Huang T-H, Li C-C, Yu M, Zhao S-H. Detection of differentially expressed genes between Erhualian and Large White placentas on day 75 and 90 of gestation. BMC Genomics. 2009;10:337.
Soller M. Marker assisted selection - overview. Anim Biotech. 1994;5:193–207.
Li N, Zhao YF, Xiao L, Zhang FJ, Chen YZ, Dai RJ, Zhang JS, et al. Candidate gene approach for identification of genetic loci controlling litter size in swine. In: Proc 6th World Congress on Genetics Applied to Livestock Production, vol. 26. Armidale, Australia; 1998. p. 183–86.
van Rens BT, Evans GJ, van der Lende T. Components of litter size in gilts with different prolactin receptor genotypes. Theriogenology. 2003;59:915–26.
Vincent V, Goffin V, Rozakis-Adcock M, Mornon JP, Kelly PA. Identification of cytoplasmic motifs required for short prolactin receptor internalization. J Biol Chem. 1997;272:7062–8.
Nichols M, Weih F, Schmid W, DeVack C, Kowenz-Leutz E, Luckow B, et al. Phosphorylation of CREB affects its binding to high and low affinity sites: implications for cAMP induced gene transcription. EMBO J. 1992;11:3337–46.
Lee KA, Masson N. Transcriptional regulation by CREB and its relatives. Biochim Biophys Acta. 1993;1174:221–33.
This study was supported by the National Natural Science Foundation of China (No. 31172176), the China Agriculture Research System (No. CARS-36), and the Program for Changjiang Scholars and Innovative Research Team in University (IRT1191).
The contributions of the authors are as follows: XC conducted the research, analyzed the results, and wrote the paper. XC and JLF participated in the animal experiment. AGW was in charge of the whole trial. All authors read and approved the final manuscript.
College of Animal Sciences and Technology, National Engineering Laboratory for Animal Breeding & Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture, China Agricultural University, Beijing, 100193, People's Republic of China
Xiao Chen, Jinluan Fu & Aiguo Wang
Institute of Apicultural Research, Chinese Academy of Agricultural Sciences, Beijing, 100093, People's Republic of China
Xiao Chen
Jinluan Fu
Aiguo Wang
Correspondence to Jinluan Fu or Aiguo Wang.
Primers used for Real-time PCR (RT-PCR). (DOCX 19 kb)
Primer pairs and PCR conditions used for SNPs detection. (DOCX 20 kb)
Chen, X., Fu, J. & Wang, A. Expression of genes involved in progesterone receptor paracrine signaling and their effect on litter size in pigs. J Animal Sci Biotechnol 7, 31 (2016). https://doi.org/10.1186/s40104-016-0090-z
Litter size
SNPs | CommonCrawl |
Sample records for efficient relay beamforming
Robust distributed cognitive relay beamforming
Pandarakkottilil, Ubaidulla; Aissa, Sonia
In this paper, we present a distributed relay beamformer design for a cognitive radio network in which a cognitive (or secondary) transmit node communicates with a secondary receive node assisted by a set of cognitive non-regenerative relays. The secondary nodes share the spectrum with a licensed primary user (PU) node, and each node is assumed to be equipped with a single transmit/receive antenna. The interference to the PU resulting from the transmission from the cognitive nodes is kept below a specified limit. The proposed robust cognitive relay beamformer design seeks to minimize the total relay transmit power while ensuring that the transceiver signal-to-interference-plus-noise ratio and PU interference constraints are satisfied. The proposed design takes into account a parameter of the error in the channel state information (CSI) to render the performance of the beamformer robust in the presence of imperfect CSI. Though the original problem is non-convex, we show that the proposed design can be reformulated as a tractable convex optimization problem that can be solved efficiently. Numerical results are provided and illustrate the performance of the proposed designs for different network operating conditions and parameters. © 2012 IEEE.
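As a rough illustration of this class of design, the sketch below (Python with numpy and cvxpy) poses a simplified version of the problem, minimizing total relay transmit power subject to a destination-SNR floor and a primary-user interference cap, as a rank-relaxed semidefinite program. All channels, noise levels, and limits are invented for illustration, and the model ignores relayed-noise amplification at the destination, so this is a toy of the problem class rather than the paper's exact robust formulation.

```python
# Minimal SDP-relaxation sketch of distributed relay beamforming:
# minimize total relay power s.t. an SNR floor and a PU interference cap.
# All values below are illustrative assumptions, not the paper's model.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
R = 4                                                # single-antenna relays
f = rng.normal(size=R) + 1j * rng.normal(size=R)     # source -> relay channels
g = rng.normal(size=R) + 1j * rng.normal(size=R)     # relay -> destination channels
q = rng.normal(size=R) + 1j * rng.normal(size=R)     # relay -> primary-user channels

h = f * g                     # effective per-relay source->destination path
sigma2 = 1.0                  # destination noise power
gamma_min = 2.0               # SNR floor at the destination
i_max = 0.5                   # interference cap at the primary user

# Rank relaxation: optimize X = w w^H instead of the weight vector w.
X = cp.Variable((R, R), hermitian=True)
A = np.outer(h, h.conj())                  # signal quadratic form
D = np.diag(np.abs(f) ** 2 + 1.0)          # per-relay power form (unit relay noise)
Q = np.outer(q, q.conj())                  # PU interference quadratic form

constraints = [
    X >> 0,
    cp.real(cp.trace(A @ X)) >= gamma_min * sigma2,   # SNR floor
    cp.real(cp.trace(Q @ X)) <= i_max,                # PU interference cap
]
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(D @ X))), constraints)
prob.solve()

# If X is (near) rank one, recover w from its principal eigenvector;
# otherwise a randomization step would be needed.
vals, vecs = np.linalg.eigh(X.value)
w = np.sqrt(vals[-1]) * vecs[:, -1]
print("total relay power:", prob.value)
```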
Spatially Controlled Relay Beamforming
Kalogerias, Dionysios
This thesis is about fusion of optimal stochastic motion control and physical layer communications. Distributed, networked communication systems, such as relay beamforming networks (e.g., Amplify & Forward (AF)), are typically designed without explicitly considering how the positions of the respective nodes might affect the quality of the communication. Optimum placement of network nodes, which could potentially improve the quality of the communication, is not typically considered. However, in most practical settings in physical layer communications, such as relay beamforming, the Channel State Information (CSI) observed by each node, per channel use, although it might be (modeled as) random, is both spatially and temporally correlated. It is, therefore, reasonable to ask if and how the performance of the system could be improved by (predictively) controlling the positions of the network nodes (e.g., the relays), based on causal side (CSI) information, and exploiting the spatiotemporal dependencies of the wireless medium. In this work, we address this problem in the context of AF relay beamforming networks. This novel, cyber-physical system approach to relay beamforming is termed as "Spatially Controlled Relay Beamforming". First, we discuss wireless channel modeling, however, in a rigorous, Bayesian framework. Experimentally accurate and, at the same time, technically precise channel modeling is absolutely essential for designing and analyzing spatially controlled communication systems. In this work, we are interested in two distinct spatiotemporal statistical models, for describing the behavior of the log-scale magnitude of the wireless channel: 1. Stationary Gaussian Fields: In this case, the channel is assumed to evolve as a stationary, Gaussian stochastic field in continuous space and discrete time (say, for instance, time slots). Under such assumptions, spatial and temporal statistical interactions are determined by a set of time and space invariant
Distributed cognitive two-way relay beamformer designs under perfect and imperfect CSI
In this paper, we present distributed two-way relay beamformer designs for a cognitive radio network (CRN) in which a pair of cognitive (or secondary) transceiver nodes communicate with each other assisted by a set of cognitive two-way relay nodes. The secondary nodes share the spectrum with a licensed primary user (PU) node, and each node is assumed to be equipped with a single transmit/receive antenna. The interference to the PU resulting from the transmission from the cognitive nodes is kept below a specified limit. First, we consider relay beamformer designs assuming the availability of perfect channel state information (CSI). For this case, a mean-square error (MSE)-constrained beamformer that minimizes the total relay transmit power, and an MSE-balancing beamformer with a constraint on the total relay transmit power are proposed. Next, we consider relay beamformer designs assuming that the available CSI is imperfect. For this case too, we consider the same problems as those in the case of perfect CSI, and propose beamformer designs that are robust to the errors in the CSI. We show that the proposed designs can be reformulated as convex optimization problems that can be solved efficiently. Through numerical simulations, we illustrate the performance of the proposed designs. © 2011 IEEE.
Coordinated Direct and Relay Transmission with Linear Non-Regenerative Relay Beamforming
Sun, Fan; De Carvalho, Elisabeth; Popovski, Petar
Joint processing of multiple communication flows in wireless systems has given rise to a number of novel transmission techniques, notably the two-way relaying, but also more general traffic scenarios, such as coordinated direct and relay (CDR) transmissions. In a CDR scheme the relay has a central role in managing the interference and boosting the overall system performance. In this letter we consider the case in which an amplify-and-forward relay has multiple antennas and can use beamforming to support the coordinated transmissions. We focus on one representative traffic type with one uplink user and one downlink user. Two different criteria for relay beamforming are analyzed: maximal weighted sum-rate and maximization of the worst-case weighted SNR. We propose iterative optimal solutions, as well as low-complexity near-optimal solutions.
Efficient incremental relaying
Fareed, Muhammad Mehboob; Alouini, Mohamed-Slim
We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify and forward and decode and forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.
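A quick Monte Carlo sketch of the incremental-relaying idea follows: the relay is activated only when the direct link is in outage, which saves the second channel use most of the time. Rayleigh fading, the SNR thresholds, and the amplify-and-forward combining model are illustrative assumptions, not the paper's exact setup.

```python
# Monte Carlo toy of incremental relaying with limited feedback:
# relay only when the direct SNR misses the decoding threshold.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
snr_bar = 10.0          # average SNR on every link (assumed)
gamma_th = 4.0          # SNR needed to decode a packet (assumed)

g_sd = rng.exponential(snr_bar, n)      # direct source->destination SNR
g_sr = rng.exponential(snr_bar, n)      # source->relay SNR
g_rd = rng.exponential(snr_bar, n)      # relay->destination SNR

need_relay = g_sd < gamma_th            # feedback: relay only on direct failure
# Amplify-and-forward end-to-end SNR (standard harmonic-type expression).
g_af = g_sr * g_rd / (g_sr + g_rd + 1.0)
# With MRC at the destination, the relayed packet combines both copies.
fail = need_relay & (g_sd + g_af < gamma_th)

outage = fail.mean()
channel_uses = 1.0 + need_relay.mean()  # extra slot spent only when relaying
print(f"outage prob ~ {outage:.4f}, avg channel uses/packet ~ {channel_uses:.3f}")
```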
Robust distributed two-way relay beamforming in cognitive radio networks
In this paper, we present distributed beamformer designs for a cognitive radio network (CRN) consisting of a pair of cognitive (or secondary) transceiver nodes communicating with each other through a set of secondary non-regenerative two-way relays. The secondary network shares the spectrum with a licensed primary user (PU), and operates under a constraint on the maximum interference to the PU, in addition to its own resource and quality of service (QoS) constraints. We propose beamformer designs assuming that the available channel state information (CSI) is imperfect, which reflects realistic scenarios. The performance of proposed designs is robust to the CSI errors. Such robustness is critical in CRNs given the difficulty in acquiring perfect CSI due to loose cooperation between the PUs and the secondary users (SUs), and the need for strict enforcement of PU interference limit. We consider a mean-square error (MSE)-constrained beamformer that minimizes the total relay transmit power and an MSE-balancing beamformer with a constraint on the total relay transmit power. We show that the proposed designs can be reformulated as convex optimization problems that can be solved efficiently. Through numerical simulations, we illustrate the improved performance of the proposed robust designs compared to non-robust designs. © 2012 IEEE.
Cognitive two-way relay beamforming: Design with resilience to channel state uncertainties
Ubaidulla, P.; Alouini, Mohamed-Slim; Aissa, Sonia
In this paper, we propose a robust distributed relay beamformer design for a cognitive radio network operating under uncertainties in the available channel state information. The cognitive network consists of a pair of transceivers and a set of non-regenerative two-way relays that assist the communication between the transceiver pair. The secondary nodes share the spectrum with a licensed primary user node while ensuring that the interference to the primary receiver is maintained below a certain threshold. The proposed robust design maximizes the worst-case signal-to-interference-plus-noise ratio at the secondary transceivers while satisfying constraints on the interference to the primary user and on the total relay transmit power. Though the robust design problem is not a convex problem in its original form, we show that it can be reformulated as a convex optimization problem, which can be solved efficiently. Numerical results are provided and illustrate the merits of the proposed design for various operating conditions and parameters. © 2016 IEEE.
Cooperative beamforming for dual-hop amplify-and-forward multi-antenna relaying cellular networks
Xing, Chengwen; Ma, Shaodan; Xia, Minghua; Wu, Yikchung
In this paper, linear beamforming design for amplify-and-forward relaying cellular networks is considered, in which base station, relay station and mobile terminals are all equipped with multiple antennas. The design is based on minimum mean-square-error criterion, and both uplink and downlink scenarios are considered. It is found that the downlink and uplink beamforming design problems are in the same form, and iterative algorithms with the same structure can be used to solve the design problems. For the specific cases of fully loaded or overloaded uplink systems, a novel algorithm is derived and its relationships with several existing beamforming design algorithms for conventional MIMO or multiuser systems are revealed. Simulation results are presented to demonstrate the performance advantage of the proposed design algorithms. © 2012 Published by Elsevier B.V. All rights reserved.
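As a small illustration of the criterion used above, the numpy sketch below computes a basic linear MMSE receive filter. The actual paper jointly optimizes base-station, relay, and terminal matrices iteratively; this only shows the elementary MMSE building block, with an invented channel and noise level.

```python
# Minimal MMSE receive beamformer: for y = H x + n with unit-power symbols,
# W = (H^H H + sigma2 I)^{-1} H^H minimizes E||x - W y||^2.
import numpy as np

rng = np.random.default_rng(2)
nt, nr = 2, 4
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
sigma2 = 0.1                                      # assumed noise power

W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(nt), H.conj().T)

# One QPSK transmission to sanity-check the filter.
x = (rng.choice([-1, 1], nt) + 1j * rng.choice([-1, 1], nt)) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.normal(size=nr) + 1j * rng.normal(size=nr))
x_hat = W @ (H @ x + n)
print("per-symbol MSE:", np.mean(np.abs(x - x_hat) ** 2))
```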
Comparison of Beam-Forming and Relaying in Sparse Sensor Networks
Mikuláš Krebs
This study focuses on the differences in power consumption between beam-forming and relaying data transmission methods in a sparse wireless ad-hoc network. These two methods are observed for the same parameters using an identical network topology in a simulation programme that was developed as a part of this study.
Beamforming Design for Coordinated Direct and Relay Systems
Sun, Fan; De Carvalho, Elisabeth; Thai, Chan
Joint processing of multiple communication flows in wireless systems has given rise to a number of novel transmission techniques, notably the two-way relaying based on wireless network coding. Recently, a related set of techniques has emerged, termed coordinated direct and relay (CDR) transmissions, where the constellation of traffic flows is more general than the two-way. Regardless of the actual traffic flows, in a CDR scheme the relay has a central role in managing the interference. In this paper we investigate the novel transmission modes, based on amplify-and-forward, that arise when the relay is equipped with multiple antennas. We propose an iterative solution, as well as derive an upper performance bound. The numerical results demonstrate a clear benefit from usage of multiple antennas at the relay node.
Beamforming-Based Physical Layer Network Coding for Non-Regenerative Multi-Way Relaying
Klein Anja
We propose non-regenerative multi-way relaying where a half-duplex multi-antenna relay station (RS) assists multiple single-antenna nodes to communicate with each other. The required number of communication phases is equal to the number of the nodes, N. There is only one multiple-access phase, where the nodes transmit simultaneously to the RS, and N − 1 broadcast (BC) phases. Two transmission methods for the BC phases are proposed, namely, multiplexing transmission and analog network coded transmission. The latter is a cooperation method between the RS and the nodes to manage the interference in the network. Assuming that perfect channel state information is available, the RS performs transceive beamforming to the received signals and transmits simultaneously to all nodes in each BC phase. We address the optimum transceive beamforming maximising the sum rate of non-regenerative multi-way relaying. Due to the nonconvexity of the optimization problem, we propose suboptimum but practical signal processing schemes. For multiplexing transmission, we propose suboptimum schemes based on zero forcing, minimising the mean square error, and maximising the signal to noise ratio. For analog network coded transmission, we propose suboptimum schemes based on matched filtering and semidefinite relaxation of maximising the minimum signal to noise ratio. It is shown that analog network coded transmission outperforms multiplexing transmission.
Efficient high-performance ultrasound beamforming using oversampling
Freeman, Steven R.; Quick, Marshall K.; Morin, Marc A.; Anderson, R. C.; Desilets, Charles S.; Linnenbrink, Thomas E.; O'Donnell, Matthew
High-performance and efficient beamforming circuitry is very important in large channel count clinical ultrasound systems. Current state-of-the-art digital systems using multi-bit analog-to-digital converters (A/Ds) have matured to provide exquisite image quality with moderate levels of integration. A simplified oversampling beamforming architecture has been proposed that may allow integration of delta-sigma A/Ds onto the same chip as digital delay and processing circuitry to form a monolithic ultrasound beamformer. Such a beamformer may enable low-power handheld scanners or high-end systems with very large channel count arrays. This paper presents an oversampling beamformer architecture that generates high-quality images using very simple digitization, delay, and summing circuits. Additional performance may be obtained with this oversampled system for narrow bandwidth excitations by mixing the RF signal down in frequency to a range where the electronic signal-to-noise ratio of the delta-sigma A/D is optimized. An oversampled transmit beamformer uses the same delay circuits as receive and eliminates the need for separate transmit function generators.
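A toy sketch of the oversampling idea follows: each channel is digitized by a first-order delta-sigma modulator into a 1-bit stream, focusing delays are applied as simple sample shifts of that stream, and summation plus low-pass filtering recovers the beamformed signal. The array geometry, rates, and filter are illustrative assumptions, not the paper's architecture.

```python
# 1-bit delta-sigma streams, shift-based delays, sum, then low-pass.
import numpy as np

fs = 64e6                     # oversampled rate (illustrative)
f0 = 1e6                      # narrowband ultrasound tone (illustrative)
t = np.arange(4096) / fs
delays_samp = [0, 7, 13, 18]  # per-channel focusing delays, in samples

def delta_sigma_1bit(x):
    """First-order delta-sigma modulator: returns a +/-1 bit stream."""
    acc, out = 0.0, np.empty_like(x)
    for i, xi in enumerate(x):
        acc += xi - (out[i - 1] if i else 0.0)   # integrate error vs. feedback
        out[i] = 1.0 if acc >= 0 else -1.0       # 1-bit quantizer
    return out

# Each element sees the same tone advanced by its geometric delay.
streams = [delta_sigma_1bit(0.5 * np.sin(2 * np.pi * f0 * (t + d / fs)))
           for d in delays_samp]
# Beamforming = shift each 1-bit stream by its focusing delay, then sum.
aligned = [np.roll(s, d) for s, d in zip(streams, delays_samp)]
summed = np.sum(aligned, axis=0)

# A crude moving average stands in for the decimation/low-pass filter.
kernel = np.ones(32) / 32
beamformed = np.convolve(summed, kernel, mode="same")
print("peak after beamforming:", beamformed[200:-200].max())
```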
Energy efficient circuit design using nanoelectromechanical relays
Venkatasubramanian, Ramakrishnan
Nano-electromechanical (NEM) relays are a promising class of emerging devices that offer zero off-state leakage and behave like an ideal switch. Recent advances in planar fabrication technology have demonstrated that microelectromechanical (MEMS) scale miniature relays could be manufactured reliably and could be used to build fully functional, complex integrated circuits. The zero leakage operation of relays has renewed the interest in relay based low power logic design. This dissertation explores circuit architectures using NEM relays and NEMS-CMOS heterogeneous integration. Novel circuit topologies for sequential logic, memory, and power management circuits have been proposed taking into consideration the NEM relay device properties and optimizing for energy efficiency and area. In nanoscale electromechanical devices, dispersion forces like Van der Waals' force (vdW) affect the pull-in stability of the relay devices significantly. A Verilog-A electromechanical model of the suspended gate relay operating at 1 V with a nominal air gap of 5–10 nm has been developed taking into account all the electrical, mechanical and dispersion effects. This dissertation explores different relay based latch and flip-flop topologies. It has been shown that as few as 4 relay cells could be used to build flip-flops. An integrated voltage doubler based flip flop that improves the performance by 2X by overdriving Vgb has been proposed. Three NEM relay based parallel readout memory bitcell architectures have been proposed that have faster access time, and remove the reliability issues associated with previously reported serial readout architectures. A paradigm shift in design of power switches using NEM relays is proposed. An interesting property of the relay device is that the ON state resistance (Ron) of the NEM relay switch is constant and is insensitive to the gate slew rate. This coupled with infinite OFF state resistance (Roff) offers significant area and power advantages over CMOS
MIMO Beamforming for Secure and Energy-Efficient Wireless Communication
Nghia, Nguyen T.; Tuan, Hoang D.; Duong, Trung Q.; Poor, H. Vincent
Considering a multiple-user multiple-input multiple-output (MIMO) channel with an eavesdropper, this letter develops a beamformer design to optimize the energy efficiency in terms of secrecy bits per Joule under secrecy quality-of-service constraints. This is a very difficult design problem with no available exact solution techniques. A path-following procedure, which iteratively improves its feasible points by using a simple quadratic program of moderate dimension, is proposed. Under any fixed computational tolerance the procedure terminates after finitely many iterations, yielding at least a locally optimal solution. Simulation results show the superior performance of the obtained algorithm over other existing methods.
Clustering and Beamforming for Efficient Communication in Wireless Sensor Networks
Francisco Porcel-Rodríguez
Energy efficiency is a critical issue for wireless sensor networks (WSNs), as sensor nodes have limited power availability. In order to address this issue, this paper tries to maximize the power efficiency in WSNs by means of the evaluation of WSN node networks and their performance when both clustering and antenna beamforming techniques are applied. In this work, four different scenarios are defined, each one considering different numbers of sensors: 50, 20, 10, five, and two nodes per scenario, and each scenario is randomly generated thirty times in order to statistically validate the results. For each experiment, two different target directions for transmission are taken into consideration in the optimization process (φ = 0°, θ = 45°; and φ = 45°, θ = 45°). Each scenario is evaluated for two different types of antennas: an ideal isotropic antenna and a conventional dipole. In this set of experiments two types of WSN are evaluated: in the first one, all of the sensors have the same amount of power for communications purposes; in the second one, each sensor has a different amount of power for its communications purposes. The analyzed cases in this document are focused on a 2D surface and 3D space for the node locations. To the authors' knowledge, this is the first time that beamforming and clustering are simultaneously applied to increase the network lifetime in WSNs.
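The sketch below illustrates the collaborative-beamforming ingredient of such a study: randomly placed nodes phase-align their carriers toward a target direction (φ, θ), and the resulting normalized array factor is evaluated at the target and at a reference direction. The geometry, wavelength, and node count are illustrative assumptions, not the paper's scenarios.

```python
# Collaborative beamforming gain of a random sensor cluster (toy model).
import numpy as np

rng = np.random.default_rng(3)
lam = 0.125                         # wavelength, e.g. 2.4 GHz (assumed)
k = 2 * np.pi / lam
N = 20
pos = rng.uniform(-1.0, 1.0, size=(N, 3)) * [1, 1, 0]   # nodes on a 2D patch

def unit(phi_deg, theta_deg):
    """Unit vector for azimuth phi and polar angle theta, in degrees."""
    p, th = np.radians(phi_deg), np.radians(theta_deg)
    return np.array([np.sin(th) * np.cos(p), np.sin(th) * np.sin(p), np.cos(th)])

def array_factor(weights, direction):
    phase = np.exp(1j * k * pos @ direction)
    return np.abs(np.sum(weights * phase)) ** 2 / N**2   # normalized power gain

target = unit(45, 45)
w = np.exp(-1j * k * pos @ target)          # conjugate-phase beamforming weights
print("gain toward target :", array_factor(w, target))      # -> 1.0
print("gain toward (0,45) :", array_factor(w, unit(0, 45)))
```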
Energy efficiency and SINR maximization beamformers for cognitive radio utilizing sensing information
Alabbasi, AbdulRahman; Rezki, Zouheir; Shihada, Basem
In this paper, we consider a cognitive radio multi-input multi-output environment in which we adapt our beamformer to maximize both energy efficiency and signal-to-interference-plus-noise ratio (SINR) metrics. Our design considers an underlaying communication using adaptive beamforming schemes combined with the sensing information to achieve an optimal energy-efficient system. The proposed schemes maximize the energy efficiency and SINR metrics subject to cognitive radio and quality-of-service constraints. Since the energy-efficiency optimization problem is not convex, we transform it into a standard semi-definite programming (SDP) form to guarantee a global optimal solution. An analytical solution is provided for one scheme, while the other scheme is left in a standard SDP form. Selected numerical results are used to quantify the impact of the sensing information on the proposed schemes compared to the benchmark ones.
Large Efficient Intelligent Heating Relay Station System
Wu, C. Z.; Wei, X. G.; Wu, M. Q.
The design of a large efficient intelligent heating relay station system aims to improve existing heating systems in China, which suffer from low heating efficiency, energy waste, and serious pollution, and whose control still depends on manual operation. In this design, we first improve the existing plate heat exchanger. Second, an ATM89C51 microcontroller is used to control the whole system and realize intelligent control. The detection part uses PT100 temperature sensors, pressure sensors, and turbine flowmeters to monitor the heating temperature, user-end liquid flow, and hydraulic pressure, with real-time feedback of the signals to the microcontroller so that the heating supplied to users can be adjusted, making the whole system more efficient, intelligent, and energy-saving.
Energy Efficiency and SINR Maximization Beamformers for Spectrum Sharing With Sensing Information
In this paper, we consider a cognitive radio multi-input-multi-output environment, in which we adapt our beamformer to maximize both energy efficiency (EE) and signal-to-interference-plus-noise ratio (SINR) metrics. Our design considers an underlaying communication using adaptive beamforming schemes combined with sensing information to achieve optimal energy-efficient systems. The proposed schemes maximize EE and SINR metrics subject to cognitive radio and quality-of-service constraints. The analysis of the proposed schemes is classified into two categories based on knowledge of the secondary-transmitter-to-primary-receiver channel. Since the optimizations of the EE and SINR problems are not convex problems, we transform them into a standard semidefinite programming (SDP) form to guarantee that the optimal solutions are global. An analytical solution is provided for one scheme, while the second scheme is left in a standard SDP form. Selected numerical results are used to quantify the impact of the sensing information on the proposed schemes compared to the benchmark ones.
Joint Power Allocation and Beamforming in Amplify-and-Forward Relay Networks under Per-Node Power Constraint
Farzin Azami
Two-way relay networks (TWRNs) have been intensively investigated over the past decade due to their ability to enhance network performance in terms of cellular coverage and spectral efficiency. Yet, power control in such systems is a nontrivial issue, particularly in multirelay networks where relays are deployed to ensure a required Quality of Service (QoS). In this paper, we address this critical issue by minimizing the sum power with respect to per-node power consumption and acceptable user rates. To tackle this, we employ a variable transformation to turn the fractional quadratically constrained quadratic problem (QCQP) into a semidefinite program (SDP). This algorithm is also extended to a distributed format. Simulation results for a deployment of 10 relay stations reveal that the total power consumption decreases to approximately 8 dBW for a 6 bps/Hz sum rate.
Spectral efficiency enhancement with interference cancellation for wireless relay network
Yomo, Hiroyuki; De Carvalho, Elisabeth
The introduction of relaying into a wireless communication system for coverage enhancement can cause a severe decrease in spectral efficiency due to the requirement for extra radio resources. In this paper, we propose a method to increase spectral efficiency in such a wireless relay network by employing an interference cancellation technique. We focus on a typical scenario of relaying in a cellular system, where a mobile station (MS) requires the help of a relay station (RS) to communicate with the base station (BS). In such a case, interference cancellation can be used to achieve a small reuse distance of identical radio resources. We analyze a simple scenario with a BS, a single RS, and 2 MSs, and show that the proposed method has significant potential to enhance spectral efficiency in wireless relay networks.
Efficient incremental relaying for packet transmission over fading channels
Fareed, Muhammad Mehboob; Alouini, Mohamed-Slim; Yang, Hongchuan
In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify and forward and decode and forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold. However, the efficiency of the TIR scheme for higher values of threshold is outperformed by the EIR. In addition, three new threshold-based adaptive EIR schemes are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide analytical insight. © 2014 IEEE.
Energy Efficient Design for Two-Way AF Relay Networks
Yong Li
Conventional designs for two-way relay networks mainly focus on spectral efficiency (SE) rather than energy efficiency (EE). In this paper, we consider a system where two source nodes communicate with each other via an amplify-and-forward (AF) relay node and study power allocation schemes to maximize EE while ensuring a certain data rate. We propose an optimal energy-efficient power allocation algorithm based on an iterative search technique. In addition, a closed-form suboptimal solution is derived with reduced complexity and negligible performance degradation. Numerical results show that the proposed schemes can achieve considerable EE improvement compared with conventional designs.
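For a flavor of this kind of iterative energy-efficiency search, the sketch below runs a generic Dinkelbach-style iteration for EE(p) = R(p) / (p + Pc) with a minimum-rate constraint, using a toy point-to-point rate R(p) = log2(1 + g·p). The paper's actual two-way AF rate expressions are more involved; all numbers here are illustrative assumptions.

```python
# Dinkelbach-style sketch for fractional EE maximization with a rate floor.
import numpy as np

g, Pc, p_max, r_min = 4.0, 0.5, 10.0, 1.0   # assumed gain, circuit power, limits

def rate(p):
    return np.log2(1.0 + g * p)

p_grid = np.linspace(1e-4, p_max, 10_000)
feasible = p_grid[rate(p_grid) >= r_min]    # powers meeting the rate floor

lam = 0.0                                   # current EE estimate
for _ in range(30):                         # Dinkelbach iterations
    # Maximize R(p) - lam * (p + Pc) over the feasible power range.
    obj = rate(feasible) - lam * (feasible + Pc)
    p_star = feasible[np.argmax(obj)]
    new_lam = rate(p_star) / (p_star + Pc)
    if abs(new_lam - lam) < 1e-9:
        break
    lam = new_lam

print(f"EE-optimal power ~ {p_star:.3f}, EE ~ {lam:.3f} (bit/s/Hz per watt)")
```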
Micro-relay technology for energy-efficient integrated circuits
Kam, Hei
This book describes the design of relay-based circuit systems from device fabrication to circuit micro-architectures. It is ideal both for device engineers and for circuit system designers, and highlights the importance of co-design across design hierarchies when optimizing system performance (in this case, energy efficiency). This book is ideal for researchers and engineers focused on semiconductors, integrated circuits, and energy-efficient electronics. This book also: · Covers microsystem fabrication, MEMS device design, circuit design, circuit micro-architecture, and CAD · Describes work previously done in the field and also lays the groundwork and criteria for future energy-efficient device and system design · Maximizes reader insights into the design and modeling of micro-relays, micro-relay reliability, integrated circuit design with micro-relays, and more
Energy efficient design for MIMO two-way AF multiple relay networks
Alsharoa, Ahmad M.; Ghazzai, Hakim; Alouini, Mohamed-Slim
This paper studies the energy efficient transmission and the power allocation problem for multiple two-way relay networks equipped with multi-input multi-output antennas where each relay employs an amplify-and-forward strategy. The goal is to minimize the total power consumption without degrading the quality of service of the terminals. In our analysis, we start by deriving closed-form expressions of the optimal powers allocated to terminals. We then employ a strong optimization tool based on the particle swarm optimization technique to find the optimal power allocated at each relay antenna. Our numerical results illustrate the performance of the proposed scheme and show that it achieves a sub-optimal solution very close to the optimal one.
Relaying Strategies and Protocols for Efficient Wireless Networks
Zafar, Ammar
Next generation wireless networks are expected to provide high data rates and satisfy the Quality-of-Service (QoS) constraints of the users. A significant component of achieving these goals is to increase the efficiency of wireless networks by either optimizing current architectures or exploring new technologies which achieve that. The latter includes revisiting technologies which were previously proposed, but due to a multitude of reasons were ignored at that time. One such technology is relaying, which was initially proposed in the latter half of the 1960s and then revived in the early 2000s. In this dissertation, we study relaying in conjunction with resource allocation to increase the efficiency of wireless networks. In this regard, we differentiate between conventional relaying and relaying with buffers. Conventional relaying is traditional relaying where the relay forwards the signal it received immediately. On the other hand, in relaying with buffers, or buffer-aided relaying as it is called, the relay can store received data in its buffer and forward it later on. This gives the benefit of taking advantage of good channel conditions, as the relay can transmit only when the channel conditions are good. The dissertation starts with conventional relaying and considers the problem of minimizing the total consumed power while maintaining system QoS. After upper bounding the system performance, more practical algorithms which require reduced feedback overhead are explored. Buffer-aided relaying is then considered, and the joint user-and-hop scheduler is introduced, which exploits multi-user diversity (MUD) and multi-hop diversity (MHD) gains together in dual-hop broadcast channels. Next, joint user-and-hop scheduling is extended to the shared relay channel where two source-destination pairs share a single relay. The benefits of buffer-aided relaying in the bidirectional relay channel utilizing network coding are then explored. Finally, a new transmission protocol
Duplex Schemes in Multiple Antenna Two-Hop Relaying
Anja Klein
A novel scheme for two-hop relaying defined as space division duplex (SDD) relaying is proposed. In SDD relaying, multiple antenna beamforming techniques are applied at the intermediate relay station (RS) in order to separate the downlink and uplink signals of a bi-directional two-hop communication between two nodes, namely S1 and S2. For conventional amplify-and-forward two-hop relaying, there appears a loss in spectral efficiency due to the fact that the RS cannot receive and transmit simultaneously on the same channel resource. In SDD relaying, this loss in spectral efficiency is circumvented by giving up the strict separation of downlink and uplink signals by either time division duplex or frequency division duplex. Two novel concepts for the derivation of the linear beamforming filters at the RS are proposed; they can be designed either by a three-step or a one-step concept. In SDD relaying, receive signals at S1 are interfered by transmit signals of S1, and receive signals at S2 are interfered by transmit signals of S2. An efficient method to combat this kind of interference is proposed in this paper. Furthermore, it is shown how the overall spectral efficiency of SDD relaying can be improved if the channels from S1 and S2 to the RS have different qualities.
Cognitive Spectrum Efficient Multiple Access Technique using Relay Systems
Frederiksen, Flemming Bjerge; Prasad, Ramjee
Methods to enhance the use of the frequency spectrum by automatic spectrum sensing plus spectrum sharing in a cognitive radio technology context will be presented and discussed in this paper. Ideas to increase the coverage of cellular systems by relay channels, relay stations and collaborative…
Efficient Resource Scheduling by Exploiting Relay Cache for Cellular Networks
Chun He
In relay-enhanced cellular systems, the throughput of a User Equipment (UE) is constrained by the bottleneck of the two-hop link: the backhaul link (the first hop) and the access link (the second hop). To maximize throughput, resource allocation should be coordinated between these two hops. A common resource scheduling algorithm, Adaptive Distributed Proportional Fair, only ensures that the throughput of the first hop is greater than or equal to that of the second hop, but it cannot guarantee a good balance of throughput and fairness between the two hops. In this paper, we propose a Two-Hop Balanced Distributed Scheduling (TBS) algorithm that exploits the relay cache for non-real-time data traffic. The evolved Node B (eNB) adaptively adjusts the number of Resource Blocks (RBs) allocated to the backhaul link and direct links based on the cache information of the relays. Each relay allocates RBs to relay UEs based on the size of the relay UE's Transport Block. We also design a relay UE ACK feedback mechanism to update the data in the relay cache. Simulation results show that the proposed TBS can effectively improve resource utilization and achieve a good trade-off between system throughput and fairness by balancing the throughput of the backhaul and access links.
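A toy discrete-time sketch of the cache-balancing idea follows: the base station shifts resource blocks (RBs) between the backhaul and access links depending on how full the relay cache is. The per-RB rates, thresholds, and step sizes are invented for illustration and do not reproduce the TBS algorithm's actual rules.

```python
# Cache-driven split of RBs between backhaul (fills cache) and access (drains it).
import random

random.seed(4)
TOTAL_RB, CACHE_MAX = 50, 500          # RBs per subframe, cache size in kB (assumed)
cache, backhaul_rb = 0.0, 25

for tti in range(1000):
    # Adaptive split: starve the backhaul when the cache is filling up,
    # feed it when the cache is running dry.
    if cache > 0.8 * CACHE_MAX:
        backhaul_rb = max(backhaul_rb - 1, 5)
    elif cache < 0.2 * CACHE_MAX:
        backhaul_rb = min(backhaul_rb + 1, 45)
    access_rb = TOTAL_RB - backhaul_rb

    arrived = sum(random.uniform(0.5, 1.5) for _ in range(backhaul_rb))  # kB in
    served = sum(random.uniform(0.5, 1.5) for _ in range(access_rb))     # kB out
    cache = min(max(cache + arrived - served, 0.0), CACHE_MAX)

print(f"final cache {cache:.1f} kB, backhaul RBs {backhaul_rb}")
```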
Novel Material Integration for Reliable and Energy-Efficient NEM Relay Technology
Chen, I.-Ru
Energy-efficient switching devices have become ever more important with the emergence of ubiquitous computing. NEM relays are promising candidates to complement CMOS transistors as circuit building blocks for future ultra-low-power information processing, and as such have recently attracted significant attention from the semiconductor industry and researchers. Relay technology can potentially overcome the energy efficiency limit of conventional CMOS technology due to several key characteristics, including zero OFF-state leakage, abrupt switching behavior, and potentially very low active energy consumption. However, two key issues must be addressed for relay technology to reach its full potential: surface oxide formation at the contacting surfaces, leading to increased ON-state resistance after switching, and high switching voltages due to the strain gradient present within the relay structure. This dissertation advances NEM relay technology by investigating solutions to both of these pressing issues. Ruthenium, whose native oxide is conductive, is proposed as the contacting material to improve relay ON-state resistance stability. Ruthenium-contact relays are fabricated after overcoming several process integration challenges, and show superior ON-state resistance stability in electrical measurements and extended device lifetime. The relay structural film is optimized via stress matching among all layers within the structure, to provide a lower strain gradient (below 10^-3 μm^-1) and hence lower switching voltage. These advancements in relay technology, along with the integration of a metallic interconnect layer, enable complex relay-based circuit demonstration. In addition to the experimental efforts, this dissertation theoretically analyzes the energy efficiency limit of a NEM switch, which is generally believed to be limited by the surface adhesion energy.
Energy-efficient relay selection and optimal power allocation for performance-constrained dual-hop variable-gain AF relaying
Zafar, Ammar; Radaydeh, Redha Mahmoud Mesleh; Chen, Yunfei; Alouini, Mohamed-Slim
This paper investigates the energy-efficiency enhancement of a variable-gain dual-hop amplify-and-forward (AF) relay network utilizing selective relaying. The objective is to minimize the total consumed power while keeping the end-to-end signal-to-noise-ratio (SNR) above a certain peak value and satisfying the peak power constraints at the source and relay nodes. To achieve this objective, an optimal relay selection and power allocation strategy is derived by solving the power minimization problem. Numerical results show that the derived optimal strategy enhances the energy-efficiency as compared to a benchmark scheme in which both the source and the selected relay transmit at peak power. © 2013 IEEE.
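A numerical sketch of this selective-relaying idea follows: for each candidate relay, find the cheapest source/relay power pair whose variable-gain AF end-to-end SNR meets the target, then pick the relay with the smallest total power. The grid search stands in for the paper's closed-form strategy, and the channel gains, noise, and limits are illustrative.

```python
# Relay selection + power allocation: minimize Ps + Pr s.t. end-to-end SNR floor.
import numpy as np

rng = np.random.default_rng(6)
K, sigma2 = 4, 1.0                      # candidate relays, noise power (assumed)
gamma_0, p_peak = 5.0, 10.0             # SNR target and per-node peak power
g1 = rng.exponential(2.0, K)            # source->relay power gains
g2 = rng.exponential(2.0, K)            # relay->destination power gains

def e2e_snr(ps, pr, a, b):
    """Variable-gain AF: gamma = x*y/(x+y+1) with x, y the per-hop SNRs."""
    x, y = ps * a / sigma2, pr * b / sigma2
    return x * y / (x + y + 1.0)

grid = np.linspace(0.01, p_peak, 400)
PS, PR = np.meshgrid(grid, grid)
best = (np.inf, None)
for k in range(K):
    ok = e2e_snr(PS, PR, g1[k], g2[k]) >= gamma_0
    if ok.any():
        total = np.where(ok, PS + PR, np.inf)
        i = np.unravel_index(np.argmin(total), total.shape)
        if total[i] < best[0]:
            best = (total[i], k)

print(f"selected relay {best[1]}, min total power ~ {best[0]:.2f}")
```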
Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer
Bo-Hee Choi
This paper presents a new design method for asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is needed because maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of the unit resonator; the maximum PTE for relay resonators is obtained at resonances different from that of the unit resonator. The optimum design of the asymmetrical relay is conducted through both the optimum placement and the optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance can be found by using a genetic algorithm (GA). The PTEs are enhanced when the capacitance is optimally designed by the GA according to the position of the relays, and maximum efficiency is then obtained at the optimum placement of the relays. The capacitance of the second to the nth resonator and the load resistance should be determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for impedance matching. The simulated and measured results are in good agreement.
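The sketch below illustrates the capacitance-tuning idea with a tiny genetic algorithm: a three-coil inductive link (source, relay, load) is modelled with mesh equations, and the GA searches the relay and load capacitances that maximize power-transfer efficiency (PTE). All component values, couplings, and GA settings are invented assumptions, not the paper's design.

```python
# GA over relay/load capacitances of a 3-coil link, fitness = PTE.
import numpy as np

rng = np.random.default_rng(5)
w = 2 * np.pi * 6.78e6                       # operating frequency (assumed)
L = np.array([10e-6, 10e-6, 10e-6])          # coil inductances (assumed)
R = np.array([0.5, 0.5, 0.5])                # coil loss resistances (assumed)
RL, Vs = 10.0, 1.0                           # load resistance and source drive
k12, k23 = 0.05, 0.05                        # couplings along the chain

def pte(C2, C3, C1=1.0 / (w**2 * L[0])):     # C1 fixed at self-resonance
    C = np.array([C1, C2, C3])
    Zd = R + 1j * w * L + 1.0 / (1j * w * C) # series-loop self impedances
    Zd = Zd.astype(complex); Zd[2] += RL
    M12 = k12 * np.sqrt(L[0] * L[1]); M23 = k23 * np.sqrt(L[1] * L[2])
    Z = np.array([[Zd[0], 1j * w * M12, 0],
                  [1j * w * M12, Zd[1], 1j * w * M23],
                  [0, 1j * w * M23, Zd[2]]])
    I = np.linalg.solve(Z, np.array([Vs, 0, 0], dtype=complex))  # mesh currents
    return (np.abs(I[2]) ** 2 * RL) / np.real(Vs * np.conj(I[0]))

# GA over log10(C2), log10(C3) around the nominal resonant capacitance.
nominal = np.log10(1.0 / (w**2 * L[1]))
pop = nominal + 0.1 * rng.normal(size=(40, 2))
for gen in range(60):
    fit = np.array([pte(10**c2, 10**c3) for c2, c3 in pop])
    parents = pop[np.argsort(fit)[-10:]]               # truncation selection
    children = parents[rng.integers(0, 10, size=30)] \
               + 0.02 * rng.normal(size=(30, 2))       # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([pte(10**a, 10**b) for a, b in pop])]
print("best PTE:", pte(10**best[0], 10**best[1]))
```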
Energy-efficient cooperative protocols for full-duplex relay channels
Khafagy, Mohammad Galal; Ismail, Amr; Alouini, Mohamed-Slim; Aïssa, Sonia
In this work, energy-efficient cooperative protocols are studied for full-duplex relaying (FDR) with loopback interference. In these protocols, relay assistance is only sought under certain conditions on the different link outages to ensure effective cooperation. Recently, an energy-efficient selective decode-and-forward protocol was proposed for FDR, and was shown to outperform existing schemes in terms of outage. Here, we propose an incremental selective decode-and-forward protocol that offers additional power savings, while keeping the same outage performance. We compare the performance of the two protocols in terms of the end-to-end signal-to-noise ratio cumulative distribution function via closed-form expressions. Finally, we corroborate our theoretical results with simulation, and show the relative relay power savings in comparison to non-selective cooperation, in which the relay cooperates regardless of channel conditions. © 2013 IEEE.
Energy Efficiency Analysis of a Two Dimensional Cooperative Wireless Sensor Network with Relay Selection
M. Kakitani
The energy efficiency of non-cooperative and cooperative transmissions is investigated in a two-dimensional wireless sensor network, considering a target outage probability and the same end-to-end throughput for all transmission schemes. The impact of the relay selection method in the cooperative schemes is also analyzed. We show that under non-line-of-sight conditions the relay selection method has a greater impact on energy efficiency than the availability of a return channel. In turn, under line-of-sight conditions a return channel is more valuable to the energy efficiency of cooperative transmission than the specific relay selection method. Finally, we demonstrate that the energy efficiency advantage of cooperative over non-cooperative transmission increases with the distance among nodes and with the node density.
High Excitation Transfer Efficiency from Energy Relay Dyes in Dye-Sensitized Solar Cells
Hardin, Brian E.; Yum, Jun-Ho; Hoke, Eric T.; Jun, Young Chul; Péchy, Peter; Torres, Tomás; Brongersma, Mark L.; Nazeeruddin, Md. Khaja; Grätzel, Michael; McGehee, Michael D.
The energy relay dye, 4-(Dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM), was used with a near-infrared sensitizing dye, TT1, to increase the overall power conversion efficiency of a dye-sensitized solar cell (DSC) from 3.5% to 4.5%. The unattached DCM dyes exhibit an average excitation transfer efficiency (ETE) of 96% inside TT1-covered, mesostructured TiO2 films. Further performance increases were limited by the solubility of DCM in an acetonitrile-based electrolyte. This demonstration shows that energy relay dyes can be efficiently implemented in optimized dye-sensitized solar cells, but also highlights the need to design highly soluble energy relay dyes with high molar extinction coefficients. © 2010 American Chemical Society.
Low bandwidth binaural beamforming
Srinivasan, S.
An efficient beamforming scheme for wireless binaural hearing aids is proposed that provides a trade-off between the transmission bit rate and the amount of noise reduction. It is proposed to transmit only the low-frequency part of the signal from one hearing aid to the other, which is used in a
Energy-efficient two-hop LTE resource allocation in high speed trains with moving relays
Alsharoa, Ahmad M.; Ghazzai, Hakim; Yaacoub, Elias E.; Alouini, Mohamed-Slim
The goal of this work is to maximize the number of served users by respecting a specific quality-of-service constraint while minimizing the total power consumption of the eNodeB and the moving relays. We propose an efficient algorithm based on the Hungarian method.
Energy-Efficient Power Allocation for Fixed-Gain Amplify-and-Forward Relay Networks with Partial Channel State Information
Zafar, Ammar; Alouini, Mohamed-Slim; Chen, Yunfei; Radaydeh, Redha M.
In this letter, energy-efficient transmission and power allocation for fixed-gain amplify-and-forward relay networks with partial channel state information (CSI) are studied. In the energy-efficiency problem, the total power consumed is minimized
Protective relay
Lim, Mu Ji; Jung, Hae Sang
This book, divided into two chapters, deals with protective relays. The first chapter covers basic knowledge of relays: the development of relays, classification of protective relays, ratings of protective relays, the general structure of protective relays, detection for ground protection, points of contact, operating relays, and trip relaying. The second chapter covers the structure and operation of relays, with classification by structure such as motor type and moving-coil type, and explains other relays: over-current relays, over-voltage relays, short-voltage relays, relays for power, relays for direction, and the testing of over-voltage relays, short-voltage relays, and directional circuit relays.
Towards Efficient Wireless Body Area Network Using Two-Way Relay Cooperation.
Waheed, Maham; Ahmad, Rizwan; Ahmed, Waqas; Drieberg, Micheal; Alam, Muhammad Mahtab
The fabrication of lightweight, ultra-thin, low power and intelligent body-borne sensors leads to novel advances in wireless body area networks (WBANs). Depending on the placement of the nodes, it is characterized as in/on body WBAN; thus, the channel is largely affected by body posture, clothing, muscle movement, body temperature and climatic conditions. The energy resources are limited and it is not feasible to replace the sensor's battery frequently. In order to keep the sensor in working condition, the channel resources should be reserved. The lifetime of the sensor is very crucial and it highly depends on transmission among sensor nodes and energy consumption. The reliability and energy efficiency in WBAN applications play a vital role. In this paper, the analytical expressions for energy efficiency (EE) and packet error rate (PER) are formulated for two-way relay cooperative communication. The results depict better reliability and efficiency compared to direct and one-way relay communication. The effective performance range of direct vs. cooperative communication is separated by a threshold distance. Based on EE calculations, an optimal packet size is observed that provides maximum efficiency over a certain link length. A smart and energy efficient system is articulated that utilizes all three communication modes, namely direct, one-way relay and two-way relay, as the direct link performs better for a certain range, but the cooperative communication gives better results for increased distance in terms of EE. The efficacy of the proposed hybrid scheme is also demonstrated over a practical quasi-static channel. Furthermore, link length extension and diversity is achieved by joint network-channel (JNC) coding the cooperative link.
Bandwidth-Efficient Communication through 225 MHz Ka-band Relay Satellite Channel
Downey, Joseph; Downey, James; Reinhart, Richard C.; Evans, Michael Alan; Mortensen, Dale John
The communications and navigation space infrastructure of the National Aeronautics and Space Administration (NASA) consists of a constellation of relay satellites (called Tracking and Data Relay Satellites (TDRS)) and a global set of ground stations to receive and deliver data to researchers around the world from mission spacecraft throughout the solar system. Planning is underway to enhance and transform the infrastructure over the coming decade. Key to the upgrade will be the simultaneous and efficient use of relay transponders to minimize cost and operations while supporting science and exploration spacecraft. Efficient use of transponders necessitates bandwidth-efficient communications to best use and maximize data throughput within the allocated spectrum. Experiments conducted with NASA's Space Communication and Navigation (SCaN) Testbed on the International Space Station provide a unique opportunity to evaluate advanced communication techniques, such as bandwidth-efficient modulations, in an operational flight system. Demonstrations of these new techniques in realistic flight conditions provide critical experience and reduce the risk of using these techniques in future missions. Efficient use of spectrum is enabled by using high-order modulations coupled with efficient forward error correction codes. This paper presents a high-rate, bandwidth-efficient waveform operating over the 225 MHz Ka-band service of the TDRS System (TDRSS). The testing explores the application of Gaussian Minimum Shift Keying (GMSK), 2/4/8-phase shift keying (PSK) and 16/32-amplitude PSK (APSK) providing over three bits-per-second-per-Hertz (3 b/s/Hz) modulation combined with various LDPC encoding rates to maximize throughput. With a symbol rate of 200 Mbaud, coded data rates of 1000 Mbps were tested in the laboratory and up to 800 Mbps over the TDRS 225 MHz channel. This paper will present the high-rate waveform design, channel characteristics, performance results, and compensation…
Modeling the efficiency of Förster resonant energy transfer from energy relay dyes in dye-sensitized solar cells
Hoke, Eric T.; Hardin, Brian E.; McGehee, Michael D.
Förster resonant energy transfer can improve the spectral breadth, absorption and energy conversion efficiency of dye sensitized solar cells. In this design, unattached relay dyes absorb the high energy photons and transfer the excitation to sensitizing dye molecules by Förster resonant energy transfer. We use an analytic theory to calculate the excitation transfer efficiency from the relay dye to the sensitizing dye accounting for dynamic quenching and relay dye diffusion. We present calculations for pores of cylindrical and spherical geometry and examine the effects of the Förster radius, the pore size, sensitizing dye surface concentration, collisional quenching rate, and relay dye lifetime. We find that the excitation transfer efficiency can easily exceed 90% for appropriately chosen dyes and propose two different strategies for selecting dyes to achieve record power conversion efficiencies. © 2010 Optical Society of America.
A high-speed railway system equipped with moving relay stations placed in the middle of the ceiling of each train wagon is investigated. The users inside the train are served in two hops via the 3GPP Long Term Evolution (LTE) technology. The objective of this work is to maximize the number of served users while respecting a specific quality-of-service constraint and minimizing the total power consumption of the eNodeB and the moving relays. We propose an efficient algorithm based on the Hungarian method to find the optimal resource allocation over the LTE resource blocks in order to serve the maximum number of users with the minimum power consumption. Moreover, we derive a closed-form expression for the power allocation problem. Our simulation results illustrate the performance of the proposed scheme and compare it with various previously developed algorithms as well as with the direct transmission scenario. © 2014 IFIP.
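Since the abstract leans on the Hungarian method, a minimal sketch may help: `scipy.optimize.linear_sum_assignment` solves the same kind of one-to-one assignment over a cost matrix. The random `cost` entries here are placeholders for the per-(user, resource block) transmit powers the paper derives in closed form.

```python
# Hungarian-method assignment sketch: users -> LTE resource blocks at
# minimum total power. The cost matrix is illustrative, not the paper's.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_users, n_rbs = 6, 10
cost = rng.uniform(0.1, 1.0, (n_users, n_rbs))   # hypothetical power costs

rows, cols = linear_sum_assignment(cost)         # optimal one-to-one matching
for u, rb in zip(rows, cols):
    print(f"user {u} -> RB {rb}, power {cost[u, rb]:.3f}")
print("total power:", cost[rows, cols].sum())
```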
Efficiency Intra-Cluster Device-to-Device Relay Selection for Multicast Services Based on Combinatorial Auction
In Long Term Evolution-Advanced (LTE-A) networks, device-to-device (D2D) communications can be utilized to enhance the performance of multicast services by leveraging D2D relays to serve nodes with worse channel conditions within a cluster. For traditional D2D relay schemes, D2D links with poor channel conditions may be the bottleneck of the system sum data rate. In this paper, to optimize the throughput of D2D communications, we introduce an iterative combinatorial auction algorithm for efficient D2D relay selection. In the combinatorial auction, the User Equipments (UEs) that fail to correctly receive multicast data from the eNodeB (eNB) are viewed as bidders that compete for D2D relays, while the eNB is treated as the auctioneer. We also establish convergence and low-complexity properties and present numerical simulations to verify the efficiency of the proposed algorithm.
Alternate MIMO AF relaying networks with interference alignment: Spectral efficient protocol and linear filter design
Park, Kihong; Alouini, Mohamed-Slim
In this paper, we study a two-hop relaying network consisting of one source, one destination, and three amplify-and-forward (AF) relays with multiple antennas. To compensate for the capacity prelog factor loss of 1/2 due to the half-duplex relaying, alternate transmission is performed among three relays, and the inter-relay interference due to the alternate relaying is aligned to provide additional degrees of freedom. In addition, suboptimal linear filter designs at the nodes are proposed to maximize the achievable sum rate for different fading scenarios when the destination utilizes a minimum mean-square error filter. © 1967-2012 IEEE.
Energy-Efficient Relay Selection Scheme for Physical Layer Security in Cognitive Radio Networks
Li Jiang
This paper considers relay selection and dynamic power allocation in order to maximize the secrecy capacity (SC) and to minimize energy consumption. Moreover, we consider finite-state Markov channels and residual relay energy in the relay selection and power allocation process. Specifically, the formulation of the proposed relay selection and power allocation scheme is based on the restless bandit problem, which is solved by the primal-dual index heuristic algorithm. Additionally, the obtained optimal relay selection policy has an indexability property that dramatically reduces the computational complexity. Numerical results are presented to show that our proposed scheme has the maximum SC and minimum energy consumption compared to the existing ones.
Enhancing the efficiency of constrained dual-hop variable-gain AF relaying under Nakagami-m fading
This paper studies power allocation for performance constrained dual-hop variable-gain amplify-and-forward (AF) relay networks in Nakagami-m fading. In this context, the performance constraint is formulated as a constraint on the end-to-end signal-to-noise-ratio (SNR) and the overall power consumed is minimized while maintaining this constraint. This problem is considered under two different assumptions on the available channel state information (CSI) at the relays, namely full CSI at the relays and partial CSI at the relays. In addition to the power minimization problem, we also consider the end-to-end SNR maximization problem under a total power constraint for the partial CSI case. We provide closed-form solutions for all the problems which are easy to implement, except in two cases, namely selective relaying with partial CSI for power minimization and SNR maximization, where we give the solution in the form of a one-variable equation which can be solved efficiently. Numerical results are then provided to characterize the performance of the proposed power allocation algorithms considering the effects of channel parameters and CSI availability. © 2014 IEEE.
Time and Energy Efficient Relay Transmission for Multi-Hop Wireless Sensor Networks.
Kim, Jin-Woo; Barrado, José Ramón Ramos; Jeon, Dong-Keun
The IEEE 802.15.4 standard is widely recognized as one of the most successful enabling technologies for short range low rate wireless communications and it is used in IoT applications. It covers all the details related to the MAC and PHY layers of the IoT protocol stack. Due to the nature of IoT, the wireless sensor networks are autonomously self-organized networks without infrastructure support. One of the issues in IoT is the network scalability. To address this issue, it is necessary to support the multi-hop topology. The IEEE 802.15.4 network can support a star, peer-to-peer, or cluster-tree topology. One of the IEEE 802.15.4 topologies suited for the high predictability of performance guarantees and energy efficient behavior is a cluster-tree topology where sensor nodes can switch off their transceivers and go into a sleep state to save energy. However, the IEEE 802.15.4 cluster-tree topology may not be able to provide sufficient bandwidth for the increased traffic load and the additional information may not be delivered successfully. The common drawback of the existing approaches is that they do not address the poor bandwidth utilization problem in IEEE 802.15.4 cluster-tree networks, so it is difficult to increase the network performance. Therefore, to solve this problem in this paper we study a relay transmission protocol based on the standard protocol in the IEEE 802.15.4 MAC. In the proposed scheme, the coordinators can relay data frames to their parent devices or their children devices without contention and can provide bandwidth for the increased traffic load or the number of devices. We also evaluate the performance of the proposed scheme through simulation. The simulation results demonstrate that the proposed scheme can improve the reliability, the end-to-end delay, and the energy consumption.
Sum-Rate Maximization of Coordinated Direct and Relay Systems
Sun, Fan; Popovski, Petar; Thai, Chan
Joint processing of multiple communication flows in wireless systems has given rise to a number of novel transmission techniques, notably the two-way relaying based on wireless network coding. Recently, a related set of techniques has emerged, termed coordinated direct and relay (CDR) transmissions, where the constellation of traffic flows is more general than the two-way. Regardless of the actual traffic flows, in a CDR scheme the relay has a central role in managing the interference and boosting the overall system performance. In this paper we investigate the novel transmission modes, based on amplify-and-forward, that arise when the relay is equipped with multiple antennas and can use beamforming. We focus on one representative traffic type, with one uplink and one downlink user, and consider the achievable sum-rate maximization relay beamforming. The beamforming criterion leads to a non…
Fundamentals of differential beamforming
Benesty, Jacob; Pan, Chao
This book provides a systematic study of the fundamental theory and methods of beamforming with differential microphone arrays (DMAs), or differential beamforming in short. It begins with a brief overview of differential beamforming and some popularly used DMA beampatterns such as the dipole, cardioid, hypercardioid, and supercardioid, before providing essential background knowledge on orthogonal functions and orthogonal polynomials, which form the basis of differential beamforming. From a physical perspective, a DMA of a given order is defined as an array that measures the differential acoustic pressure field of that order; such an array has a beampattern in the form of a polynomial whose degree is equal to the DMA order. Therefore, the fundamental and core problem of differential beamforming boils down to the design of beampatterns with orthogonal polynomials. But certain constraints also have to be considered so that the resulting beamformer does not seriously amplify the sensors' self noise and the mismatch…
Efficient Cooperative Protocols for Full-Duplex Relaying over Nakagami-m Fading Channels
In this work, efficient protocols are studied for full-duplex relaying (FDR) with loopback interference over Nakagami-m block fading channels. Recently, a selective decode-and-forward (DF) protocol was proposed for FDR, and was shown to outperform existing protocols in terms of outage over Rayleigh-fading channels. In this work, we propose an incremental selective DF protocol that offers additional power savings, yet yields the same outage performance. We evaluate their outage performance over independent non-identically distributed Nakagami-m fading links, and study their relative performance in terms of the signal-to-noise ratio cumulative distribution function via closed-form expressions. The offered diversity gain is also derived. In addition, we study their performance relative to their half-duplex counterparts, as well as known non-selective FDR protocols. We corroborate our theoretical results with simulation, and confirm that selective cooperation protocols outperform the known non-selective protocols in terms of outage. Finally, we show that depending on the loopback interference level, the proposed protocols can outperform their half-duplex counterparts when high spectral efficiencies are targeted.
Khafagy, Mohammad Galal; Tammam, Amr; Alouini, Mohamed-Slim; Aissa, Sonia
Fairness-Aware and Energy Efficiency Resource Allocation in Multiuser OFDM Relaying System
Guangjun Liang
A fairness-aware resource allocation scheme in a cooperative orthogonal frequency division multiplexing (OFDM) network is proposed based on jointly optimizing the subcarrier pairing, power allocation, and channel-user assignment. Compared with traditional OFDM relaying networks, the source is permitted to retransmit the same data transmitted by it in the first time slot, further improving the system capacity performance. The problem of maximizing the energy efficiency (EE) of the system under a total power constraint and a minimal spectral efficiency constraint is formulated as a mixed-integer nonlinear programming (MINLP) problem, which has intractable complexity in general. The optimization model is simplified into a typical fractional programming problem, which is shown to be quasiconcave. Thus we can adopt the Dinkelbach method to deal with the MINLP problem and achieve the optimal solution. The simulation results show that the proposed joint resource allocation method can achieve optimal EE performance under the minimum system service rate requirement with good global convergence.
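The Dinkelbach method mentioned above turns a ratio (rate over power) maximization into a sequence of parametric subproblems. A minimal sketch, assuming a toy single-link log-rate and a linear power model in place of the paper's MINLP:

```python
# Dinkelbach iteration for energy efficiency = rate(p) / power(p).
# rate() and power() are toy stand-ins (assumptions), chosen so the inner
# subproblem is a one-dimensional bounded maximization.
import numpy as np
from scipy.optimize import minimize_scalar

g, p_circuit, p_max = 4.0, 0.5, 2.0
rate = lambda p: np.log2(1.0 + g * p)      # toy spectral efficiency
power = lambda p: p + p_circuit            # transmit + circuit power

lam = 0.0
for _ in range(50):
    # Inner problem: maximize rate(p) - lam * power(p) over [0, p_max].
    res = minimize_scalar(lambda p: -(rate(p) - lam * power(p)),
                          bounds=(0.0, p_max), method="bounded")
    p_star = res.x
    if abs(rate(p_star) - lam * power(p_star)) < 1e-9:   # stopping rule
        break
    lam = rate(p_star) / power(p_star)                   # update the ratio

print(f"EE-optimal power {p_star:.4f}, energy efficiency {lam:.4f}")
```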
Efficient power allocation for fixed-gain amplify-and-forward relaying in Rayleigh fading
In this paper, we study power allocation strategies for a fixed-gain amplify-and-forward relay network employing multiple relays. We consider two optimization problems for the relay network: 1) optimal power allocation to maximize the end-to-end signal-to-noise ratio (SNR) and 2) minimizing the total consumed power while maintaining the end-to-end SNR over a threshold value. We assume that the relays have knowledge of only the channel statistics of all the links. We show that the SNR maximization problem is concave and the power minimization problem is convex. Hence, we solve the problems through convex programming. Numerical results show the benefit of allocating power optimally rather than uniformly. © 2013 IEEE.
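As the abstract notes, both problems can be handled by convex programming. Below is a minimal sketch of the power-minimization side, assuming a deliberately simplified linear SNR constraint `g @ p >= gamma_min` in place of the paper's fixed-gain end-to-end SNR expression:

```python
# Toy convex power minimization: minimize total power under an SNR floor.
# The linear SNR model is an illustrative assumption only.
import numpy as np
import cvxpy as cp

g = np.array([1.2, 0.8, 1.5])      # assumed effective channel gains
gamma_min = 3.0                    # required SNR threshold

p = cp.Variable(3, nonneg=True)    # per-node transmit powers
prob = cp.Problem(cp.Minimize(cp.sum(p)), [g @ p >= gamma_min])
prob.solve()
print("optimal powers:", p.value, "total power:", prob.value)
```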
Robustness Beamforming Algorithms
Sajad Dehghani
Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem that contains a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted to a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
Reactive relay selection in underlay cognitive networks with fixed gain relays
Hussain, Syed Imtiaz; Alouini, Mohamed-Slim; Qaraqe, Khalid A.; Hasna, Mazen Omar
Best relay selection is a bandwidth efficient technique for multiple relay environments without compromising the system performance. The problem of relay selection is more challenging in underlay cognitive networks due to strict interference
Effective and versatile software beamformation toolbox
Kortbek, Jacob; Nikolov, Svetoslav; Jensen, Jørgen Arendt
Delay-and-sum array beamforming is an essential part of signal processing in ultrasound imaging. Although the principles are simple, there are many implementation details to consider for obtaining a reliable and computationally efficient beamformer. Different methods for calculation of time-delays are used for different waveforms. Various inter-sample interpolation schemes such as FIR-filtering, polynomial, and spline interpolation can be chosen. Apodization can be any preferred window function of fixed size applied on the channel signals, or it can be dynamic with an expanding and contracting… seconds using a transducer of 128 elements, dynamic apodization and 3rd order polynomial interpolation. This is a decrease in computation time of at least a factor of 15 compared to an implementation directly in MATLAB of a similar beamformer.
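For readers unfamiliar with the delay-and-sum core of such a toolbox, here is a minimal sketch: geometric round-trip delays per element, linear inter-sample interpolation, and a coherent sum. The array geometry, sampling rate, and synthetic echo are all illustrative.

```python
# Minimal delay-and-sum beamformer for one focal point.
import numpy as np

c, fs = 1540.0, 40e6                     # speed of sound (m/s), sample rate (Hz)
n_elem, pitch = 32, 0.3e-3               # assumed 32-element linear array
elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch
focus_x, focus_z = 0.0, 20e-3            # focal point (x, z)

t = np.arange(2048) / fs
dist = np.hypot(elem_x - focus_x, focus_z)       # element-to-focus distances
# Synthetic channel data: a Gaussian echo at each element's round-trip time.
channels = np.array([np.exp(-((t - 2 * d / c) ** 2) / (2 * (5e-8) ** 2))
                     for d in dist])

# Delay-and-sum: shift each channel so the focal echo aligns, then sum.
delays = 2 * dist / c
shifts = delays - delays.min()
out = sum(np.interp(t + s, t, ch)        # linear inter-sample interpolation
          for s, ch in zip(shifts, channels))
print("peak of beamformed line:", float(out.max()))
```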
An applicable 5.8 GHz wireless power transmission system with rough beamforming to Project Loon
Chang-Jun Ahn
Recently, Google proposed Project Loon, being developed with the mission of providing internet access to rural and remote areas using high-altitude balloons. In this paper, we describe an applicable prototype of a 5.8 GHz wireless power transmission system with a rough beamforming method for Project Loon. From the measurement results, the transmit beamforming phased array antenna can transmit power more efficiently compared to a horn antenna and an array antenna without beamforming as the transmission distance increases. For a transmission distance of 1000 mm, the transmit beamforming phased array antenna can obtain about 1.46 times higher received power compared to the array antenna without transmit beamforming.
Synthetic Aperture Sequential Beamforming
Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke
A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data contrary to channel data. The objective is to improve and obtain a more range independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF…
An acousto-optic beamformer
Torras Rosell, Antoni; Barrera Figueroa, Salvador; Jacobsen, Finn
There is a great variety of beamforming techniques that can be used for localization of sound sources. The differences among them usually lie in the array layout or in the specific signal processing algorithm used to compute the beamforming output. Any beamforming system consists of a finite number…
Relay race
Staff Association
The CERN relay race will take place around the Meyrin site on Thursday 19th May starting at 12:15. If possible, please avoid driving on the site during this 20-minute period. If you do meet runners while driving your car, please STOP until they have all passed. Thank you for your cooperation. Details on the course, and how to register your team for the relay race, can be found at: https://espace.cern.ch/Running-Club/CERN-Relay Some advice for all runners from the medical service can also be found here: https://espace.cern.ch/Running-Club/CERN-Relay/RelayPagePictures/MedicalServiceAnnoncement.pdf
Outage probability of distributed beamforming with co-channel interference
Yang, Liang
In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.
Relaxed Binaural LCMV Beamforming
Koutrouvelis, A.; Hendriks, R.C.; Heusdens, R.; Jensen, Jesper Rindom
In this paper, we propose a new binaural beamforming technique, which can be seen as a relaxation of the linearly constrained minimum variance (LCMV) framework. The proposed method can achieve simultaneous noise reduction and exact binaural cue preservation of the target source, similar to the…
Highly Reconfigurable Beamformer Stimulus Generator
Vaviļina, E.; Gaigals, G.
The present paper proposes a highly reconfigurable beamformer stimulus generator of radar antenna array, which includes three main blocks: settings of antenna array, settings of objects (signal sources) and a beamforming simulator. Following from the configuration of antenna array and object settings, different stimulus can be generated as the input signal for a beamformer. This stimulus generator is developed under a greater concept with two utterly independent paths where one is the stimulus generator and the other is the hardware beamformer. Both paths can be complemented in final and in intermediate steps as well to check and improve system performance. This way the technology development process is promoted by making each of the future hardware steps more substantive. Stimulus generator configuration capabilities and test results are presented proving the application of the stimulus generator for FPGA based beamforming unit development and tuning as an alternative to an actual antenna system.
Heterogeneous LTE/802.11a mobile relays for data rate enhancement and energy-efficiency in high speed trains
Atat, Rachad; Yaacoub, Elias E.; Alouini, Mohamed-Slim; Abu-Dayya, Adnan A.
Performance enhancements of cellular networks for passengers in high speed railway systems are investigated. Relays placed on top of each train car are proposed. These relays communicate with the cellular base station (BS) over Long Term Evolution (LTE) long range links and with the mobile terminals (MTs) inside the train cars using IEEE 802.11a short range links. Scenarios with unicasting and multicasting from the BS are studied, both in the presence and absence of the relays. In addition, LTE resource allocation is taken into account. The presence of the relays is shown to lead to significant enhancements in the effective data rates of the MTs, in addition to leading to huge savings in the energy consumption from the batteries of the MTs. © 2012 IEEE.
Terahertz plasmonic Bessel beamformer
Monnai, Yasuaki; Shinoda, Hiroyuki; Jahn, David; Koch, Martin; Withayachumnankul, Withawat
We experimentally demonstrate terahertz Bessel beamforming based on the concept of plasmonics. The proposed planar structure is made of concentric metallic grooves with a subwavelength spacing that couple to a point source to create tightly confined surface waves or spoof surface plasmon polaritons. Concentric scatterers periodically incorporated at a wavelength scale allow for launching the surface waves into free space to define a Bessel beam. The Bessel beam defined at 0.29 THz has been characterized through terahertz time-domain spectroscopy. This approach is capable of generating Bessel beams with planar structures as opposed to bulky axicon lenses and can be readily integrated with solid-state terahertz sources.
PHASED ARRAY FEED CALIBRATION, BEAMFORMING, AND IMAGING
Landon, Jonathan; Elmer, Michael; Waldron, Jacob; Jones, David; Stemmons, Alan; Jeffs, Brian D.; Warnick, Karl F.; Richard Fisher, J.; Norrod, Roger D.
Phased array feeds (PAFs) for reflector antennas offer the potential for increased reflector field of view and faster survey speeds. To address some of the development challenges that remain for scientifically useful PAFs, including calibration and beamforming algorithms, sensitivity optimization, and demonstration of wide field of view imaging, we report experimental results from a 19 element room temperature L-band PAF mounted on the Green Bank 20 Meter Telescope. Formed beams achieved an aperture efficiency of 69% and a system noise temperature of 66 K. Radio camera images of several sky regions are presented. We investigate the noise performance and sensitivity of the system as a function of elevation angle with statistically optimal beamforming and demonstrate cancelation of radio frequency interference sources with adaptive spatial filtering.
Digital Beamforming Scatterometer
Rincon, Rafael F.; Vega, Manuel; Kman, Luko; Buenfil, Manuel; Geist, Alessandro; Hillard, Larry; Racette, Paul
This paper discusses scatterometer measurements collected with the multi-mode Digital Beamforming Synthetic Aperture Radar (DBSAR) during the SMAP-VEX 2008 campaign. The 2008 SMAP Validation Experiment was conducted to address a number of specific questions related to the soil moisture retrieval algorithms. SMAP-VEX 2008 consisted of a series of aircraft-based flights conducted on the Eastern Shore of Maryland and Delaware in the fall of 2008. Several other instruments participated in the campaign, including the Passive Active L-Band System (PALS), the Marshall Airborne Polarimetric Imaging Radiometer (MAPIR), and the Global Positioning System Reflectometer (GPSR). This campaign was the first SMAP Validation Experiment. DBSAR is a multimode radar system developed at NASA/Goddard Space Flight Center that combines state-of-the-art radar technologies, on-board processing, and advances in signal processing techniques in order to enable new remote sensing capabilities applicable to Earth science and planetary applications [1]. The instrument can be configured to operate in scatterometer, Synthetic Aperture Radar (SAR), or altimeter mode. The system builds upon the L-band Imaging Scatterometer (LIS) developed as part of the RadSTAR program. The radar is a phased array system designed to fly on the NASA P3 aircraft. The instrument consists of a programmable waveform generator, eight transmit/receive (T/R) channels, a microstrip antenna, and a reconfigurable data acquisition and processor system. Each transmit channel incorporates a digital attenuator and a digital phase shifter that enable amplitude and phase modulation on transmit. The attenuators, phase shifters, and calibration switches are digitally controlled by the radar control card (RCC) on a pulse-by-pulse basis. The antenna is a corporate-fed microstrip patch array centered at 1.26 GHz with a 20 MHz bandwidth. Although only one feed is used with the present configuration, a provision was made for separate corporate…
Adaptive Beamforming for Medical Ultrasound Imaging
Holfort, Iben Kraglund
This dissertation investigates the application of adaptive beamforming for medical ultrasound imaging. The investigations have been concentrated primarily on the Minimum Variance (MV) beamformer. A broadband implementation of the MV beamformer is described, and simulated data have been used to demonstrate the performance. The MV beamformer has been applied to different sets of ultrasound imaging sequences: synthetic aperture ultrasound imaging and plane wave ultrasound imaging. An approach for applying MV optimized apodization weights on both the transmitting and the receiving apertures…
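As a pointer to what the MV beamformer computes, the sketch below applies the textbook closed form w = R⁻¹a / (aᴴR⁻¹a) to a toy narrowband snapshot model. The array size, signal directions, and diagonal loading are illustrative; the dissertation's broadband implementation is more involved.

```python
# Minimum variance (Capon) weights on a toy 8-element uniform linear array.
import numpy as np

rng = np.random.default_rng(2)
n_elem, n_snap = 8, 200
steer = lambda th: np.exp(1j * np.pi * np.arange(n_elem) * np.sin(th))

a = steer(0.0)                     # look direction
interf = steer(0.5)                # interferer direction (assumption)
X = (np.outer(a, rng.standard_normal(n_snap)) +
     3 * np.outer(interf, rng.standard_normal(n_snap)) +
     0.1 * (rng.standard_normal((n_elem, n_snap)) +
            1j * rng.standard_normal((n_elem, n_snap))))

R = X @ X.conj().T / n_snap + 1e-2 * np.eye(n_elem)  # covariance + loading
Ria = np.linalg.solve(R, a)
w = Ria / (a.conj() @ Ria)          # MV weights: distortionless toward a
print("gain toward look direction:", abs(w.conj() @ a))   # = 1 by design
print("gain toward interferer:   ", abs(w.conj() @ interf))
```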
On UWB beamforming
T. Kaiser
Ultra-Wideband (UWB) communication systems and Multi-Input-Multi-Output (MIMO) techniques rank among the few emerging key technologies in wireless communications. For that reason the marriage of these two complementary approaches deserves attention. Apparently, the extremely large ultra-wide bandwidth creates rich multipath diversity which calls, at first glance, additional antenna elements into question. However, another point of view is as follows. The attenuation by solid materials usually increases with increasing frequency; e.g. frequencies above, say, 10 GHz are considered to be blocked by walls etc. Since UWB can occupy more than 7 GHz of bandwidth (according to the FCC regularisation), the performance of a communication link can be physically extended only by adding spatial information, i.e. multiple antennas, even if such extension may play a minor role. From this point of view UWB & MIMO presents an upper physical bound for indoor communications and is therefore at least worth investigating. In order to see the forest for the trees, we will focus in this limited contribution on beamforming among all alternative MIMO techniques (like space-time coding or spatial multiplexing).
Adaptive Subframe Partitioning and Efficient Packet Scheduling in OFDMA Cellular System with Fixed Decode-and-Forward Relays
Wang, Liping; Ji, Yusheng; Liu, Fuqiang
The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in the OFDMA two-hop relay system is more complex than that in the conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to various traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms into multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.
Synthetic aperture ultrasound Fourier beamformation using virtual sources
Moghimirad, Elahe; Villagómez Hoyos, Carlos Armando; Mahloojifar, Ali
An efficient Fourier beamformation algorithm is presented for multistatic synthetic aperture ultrasound imaging using virtual sources (FBV). The concept is based on the frequency domain wavenumber algorithm from radar and sonar and is extended to a multi-element transmit/receive configuration using...
Telecommunications Relay Services
On this page: What are telecommunication relay services? Where can I find additional information about telecommunication relay services? Title IV of the Americans with Disabilities Act…
Alternate transmission relaying based on interference alignment in 3-relay half-duplex MIMO systems
Park, Seongho; Park, Kihong; Ko, Youngchai; Alouini, Mohamed-Slim
In half-duplex relaying, the capacity pre-log factor of 1/2 is a major drawback in spectral efficiency. This paper proposes a linear precoding/decoding scheme and an alternate relaying protocol in a dual-hop half-duplex system where three relays help the communication between the source and the destination. In our proposed scheme, we consider a phase-incoherent method at the relays in which the source alternately transmits message signals to the different relays. In addition, we propose a linear interference alignment scheme which can suppress the inter-relay interference resulting from the phase incoherence of relaying. Based on our analysis of degrees of freedom and our simulation results, we show that our proposed scheme achieves additional degrees of freedom compared to the conventional half-duplex relaying. © 2012 IEEE.
Numerical analysis of biosonar beamforming mechanisms and strategies in bats.
Müller, Rolf
Beamforming is critical to the function of most sonar systems. The conspicuous noseleaf and pinna shapes in bats suggest that beamforming mechanisms based on diffraction of the outgoing and incoming ultrasonic waves play a major role in bat biosonar. Numerical methods can be used to investigate the relationships between baffle geometry, acoustic mechanisms, and resulting beampatterns. Key advantages of numerical approaches are: efficient, high-resolution estimation of beampatterns, spatially dense predictions of near-field amplitudes, and the malleability of the underlying shape representations. A numerical approach that combines near-field predictions based on a finite-element formulation for harmonic solutions to the Helmholtz equation with a free-field projection based on the Kirchhoff integral to obtain estimates of the far-field beampattern is reviewed. This method has been used to predict physical beamforming mechanisms such as frequency-dependent beamforming with half-open resonance cavities in the noseleaf of horseshoe bats and beam narrowing through extension of the pinna aperture with skin folds in false vampire bats. The fine structure of biosonar beampatterns is discussed for the case of the Chinese noctule and methods for assessing the spatial information conveyed by beampatterns are demonstrated for the brown long-eared bat.
A SDP based design of relay precoding for the power minimization of MIMO AF-relay networks
Rao, Anlei; Park, Kihong; Alouini, Mohamed-Slim
Relay precoding for multiple-input and multiple-output (MIMO) relay networks has been approached by either optimizing the efficiency performance with given power consumption constraints or minimizing the power consumption with quality-of-service (QoS)…
Resource allocation for transmit hybrid beamforming in decoupled millimeter wave multiuser-MIMO downlink
Ahmed, Irfan; Khammari, Hedi; Shahid, Adnan
This paper presents a study on joint radio resource allocation and hybrid precoding in multicarrier massive multiple-input multiple-output communications for 5G cellular networks. In this paper, we present the resource allocation algorithm to maximize the proportional fairness (PF) spectral efficiency under the per subchannel power and the beamforming rank constraints. Two heuristic algorithms are designed. The proportional fairness hybrid beamforming algorithm provides the transmit precoder ...
The CERN Relay Race will take place around the Meyrin site on Thursday 24th May at 12:00. This annual event is for teams of six runners covering distances of 1000 m, 800 m, 800 m, 500 m, 500 m and 300 m respectively. Teams may be entered in the Seniors, Veterans, Ladies, Mixed or Open categories. There will also this year be a Nordic Walking event, as part of the Medical Service's initiative "Move more, eat better!" The registration fee is 10 CHF per runner, and each runner will receive a souvenir prize. There will be a programme of entertainment from 12:00 in the arrival area (the lawn in front of Restaurant 1): 12:00 - 12:45 Music from the Old Bottom Street band; 12:15 Start of the race; 12:45 - 13:00 Demonstrations by the Fitness club and Dancing club; 13:00 Results and prize giving (including a raffle to win an iPad2 3G offered by the Micro club); 13:20 - 14:00 Music from "What's next". And many information st…
Simultaneous Wireless Power Transfer and Secure Multicasting in Cooperative Decode-and-Forward Relay Networks.
Lee, Jong-Ho; Sohn, Illsoo; Kim, Yong-Hwa
In this paper, we investigate simultaneous wireless power transfer and secure multicasting via cooperative decode-and-forward (DF) relays in the presence of multiple energy receivers and eavesdroppers. Two scenarios are considered under a total power budget: maximizing the minimum harvested energy among the energy receivers under a multicast secrecy rate constraint; and maximizing the multicast secrecy rate under a minimum harvested energy constraint. For both scenarios, we solve the transmit power allocation and relay beamformer design problems by using semidefinite relaxation and bisection technique. We present numerical results to analyze the energy harvesting and secure multicasting performances in cooperative DF relay networks.
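The bisection technique cited above exploits the monotone feasibility of the trial secrecy rate. A minimal outline for the second scenario, with a toy `feasible` oracle standing in (hypothetically) for the paper's semidefinite-relaxation feasibility check at each trial rate:

```python
# Bisection over the multicast secrecy rate. `feasible` is a hypothetical
# stand-in for the SDR feasibility problem solved at each trial rate.
from math import log2

def feasible(rate, power_budget=10.0):
    # Toy oracle: pretend rates up to log2(1 + budget / 3) are supportable.
    return rate <= log2(1 + power_budget / 3)

lo, hi, tol = 0.0, 10.0, 1e-4
while hi - lo > tol:
    mid = (lo + hi) / 2
    if feasible(mid):
        lo = mid        # achievable: raise the lower bound
    else:
        hi = mid        # infeasible: lower the upper bound
print(f"max secrecy rate ~ {lo:.4f} bit/s/Hz")
```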
Relay Selection and Resource Allocation in One-Way and Two-Way Cognitive Relay Networks
In this work, the problem of relay selection and power allocation in one-way and two-way cognitive relay networks using half-duplex channels with different relaying protocols is investigated. Optimization problems for both single and multiple relay selection that maximize the sum rate of the secondary network without degrading the quality of service of the primary network, by respecting a tolerated interference threshold, were formulated. Single relay selection and optimal power allocation for two-way relaying cognitive radio networks using decode-and-forward and amplify-and-forward protocols were studied. Dual decomposition and subgradient methods were used to find the optimal power allocation. The transmission process to exchange two different messages between two transceivers with the two-way relaying technique takes place in two time slots. In the first slot, the transceivers transmit their signals simultaneously to the relay. Then, during the second slot the relay broadcasts its signal to the terminals. Moreover, improvement of both spectral and energy efficiency can be achieved compared with the one-way relaying technique. As an extension, multiple relay selection for both one-way and two-way relaying under a cognitive radio scenario using amplify-and-forward was discussed. A strong optimization tool based on genetic and iterative algorithms was employed to solve the formulated optimization problems for both single and multiple relay selection, where discrete relay power levels were considered. Simulation results show that the practical and low-complexity heuristic approaches achieve almost the same performance as the optimal relay selection schemes, either with discrete or continuous power distributions, while providing a considerable saving in terms of computational complexity.
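The dual decomposition and subgradient machinery referenced above prices the interference constraint at the primary user and alternates a closed-form primal update with a projected subgradient step on that price. A minimal sketch; the channel gains, water-filling form, and step size are toy assumptions.

```python
# Projected subgradient on the interference price lam for
# maximize sum log(1 + g_sec * p) s.t. g_int @ p <= I_max, 0 <= p <= p_max.
import numpy as np

g_sec = np.array([1.0, 1.4])       # assumed secondary-link gains
g_int = np.array([0.3, 0.6])       # assumed gains toward the primary user
I_max, p_max, step = 0.5, 2.0, 0.1

lam = 0.0
for _ in range(500):
    # Primal step: water-filling-like closed form given the price lam.
    if lam > 0:
        p = np.clip(1.0 / (lam * g_int) - 1.0 / g_sec, 0.0, p_max)
    else:
        p = np.full_like(g_sec, p_max)
    # Dual step: raise lam if interference exceeds the threshold.
    lam = max(0.0, lam + step * (g_int @ p - I_max))

print("powers:", p, "price:", lam, "interference:", g_int @ p)
```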
Capacity Bounds and Mapping Design for Binary Symmetric Relay Channels
Majid Nasiri Khormuji
Capacity bounds for a three-node binary symmetric relay channel with orthogonal components at the destination are studied. The cut-set upper bound and the rates achievable using decode-and-forward (DF), partial DF and compress-and-forward (CF) relaying are first evaluated. Then relaying strategies with finite memory-length are considered. An efficient algorithm for optimizing the relay functions is presented. The Boolean Fourier transform is then employed to unveil the structure of the optimized mappings. Interestingly, the optimized relay functions exhibit a simple structure. Numerical results illustrate that the rates achieved using the optimized low-dimensional functions are either comparable to those achieved by CF or superior to those achieved by DF relaying. In particular, the optimized low-dimensional relaying scheme can improve on DF relaying when the quality of the source-relay link is worse than or comparable to that of other links.
Parametric Beamformer for Synthetic Aperture Ultrasound Imaging
Nikolov, Svetoslav; Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt
…The beamformer consists of a number of identical beamforming blocks, each processing data from several channels and producing part of the image. A number of these blocks can be accommodated in a modern field-programmable gate array (FPGA) device, and a whole synthetic aperture system can be implemented using… with 255 levels. A beamforming block uses input data from 4 elements and produces a set of 10 lines. Linear interpolation is used to implement sub-sample delays. The VHDL code for the beamformer has been synthesized for a Xilinx V4FX100 speed grade 11 FPGA, where it can operate at a maximum clock frequency…
Adaptive Beamforming using the Reconfigurable Montium TP
van de Burgwal, M.D.; Rovers, K.C.; Blom, K.C.H.; Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria; López, S.
Until a decade ago, the concept of phased array beamforming was mainly implemented with mechanical or analog solutions. Today, digital hardware has become powerful enough to perform the massive number of operations required for real-time digital beamforming. While more and more applications are
Synthetic Aperture Beamformation using the GPU
Hansen, Jens Munk; Schaa, Dana; Jensen, Jørgen Arendt
A synthetic aperture ultrasound beamformer is implemented for a GPU using the OpenCL framework. The implementation supports beamformation of either RF signals or complex baseband signals. Transmit and receive apodization can be either parametric or dynamic using a fixed F-number, a reference...
APES Beamforming Applied to Medical Ultrasound Imaging
Blomberg, Ann E. A.; Holfort, Iben Kraglund; Austeng, Andreas
Recently, adaptive beamformers have been introduced to medical ultrasound imaging. The primary focus has been on the minimum variance (MV) (or Capon) beamformer. This work investigates an alternative but closely related beamformer, the Amplitude and Phase Estimation (APES) beamformer. APES offers added robustness at the expense of a slightly lower resolution. The purpose of this study was to evaluate the performance of the APES beamformer on medical imaging data, since correct amplitude estimation often is just as important as spatial resolution. In our simulations we have used a 3.5 MHz, 96-element linear transducer array. When imaging two closely spaced point targets, APES displays nearly the same resolution as the MV, and at the same time improved amplitude control. When imaging cysts in speckle, APES offers speckle statistics similar to that of the DAS, without the need for temporal…
80537 based distance relay
Pedersen, Knud Ole Helgesen
A method for implementing a digital distance relay in the power system is described. Instructions are given on how to program this relay on an 80537-based microcomputer system. The problem is used as a practical case study in the course 53113: Microcomputer applications in the power system. The relay…
Pipeline Implementation of Polyphase PSO for Adaptive Beamforming Algorithm
Shaobing Huang
Adaptive beamforming is a powerful technique for anti-interference, where searching and tracking optimal solutions are a great challenge. In this paper, a partial Particle Swarm Optimization (PSO) algorithm is proposed to track the optimal solution of an adaptive beamformer due to its great global searching character. Also, due to its naturally parallel searching capabilities, a novel Field Programmable Gate Array (FPGA) pipeline architecture using a polyphase filter bank structure is designed. In order to perform computations with large dynamic range and high precision, the proposed implementation algorithm uses an efficient user-defined floating-point arithmetic. In addition, a polyphase architecture is proposed to achieve full pipeline implementation. In the case of PSO with a large population, the polyphase architecture can significantly save hardware resources while achieving high performance. Finally, the simulation results are presented by cosimulation with ModelSim and SIMULINK.
Partial relay selection in underlay cognitive networks with fixed gain relays
Hussain, Syed Imtiaz; Alouini, Mohamed-Slim; Hasna, Mazen Omar; Qaraqe, Khalid A.
In a communication system with multiple cooperative relays, selecting the best relay utilizes the available spectrum more efficiently. However, selective relaying poses a different problem in underlay cognitive networks compared to the traditional cooperative networks due to interference thresholds to the primary users. In most cases, a best relay is the one which provides the maximum end-to-end signal to noise ratio (SNR). This approach needs plenty of instantaneous channel state information (CSI). The CSI burden could be reduced by partial relay selection. In this paper, a partial relay selection scheme is presented and analyzed for an underlay cognitive network with fixed gain relays operating in the vicinity of a primary user. The system model is adopted in a way that each node needs minimal CSI to perform its task. The best relay is chosen on the basis of maximum source to relay link SNR which then forwards the message to the destination. We derive closed form expressions for the received SNR distributions, system outage, probability of bit error and average channel capacity of the system. The derived results are confirmed through simulations. © 2012 IEEE.
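A quick Monte Carlo sketch of the selection rule described above, choosing the relay on the first-hop SNR alone. Exponential (Rayleigh) SNRs and the familiar AF end-to-end expression γ1γ2/(γ1 + γ2 + 1) are simplifying assumptions standing in for the paper's fixed-gain analysis.

```python
# Partial relay selection by Monte Carlo: pick the relay with the best
# source->relay SNR, then evaluate a toy AF end-to-end SNR.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_relays = 100_000, 4
snr1 = rng.exponential(10.0, (n_trials, n_relays))   # source -> relay
snr2 = rng.exponential(10.0, (n_trials, n_relays))   # relay -> destination

best = np.argmax(snr1, axis=1)                       # first-hop CSI only
s1 = snr1[np.arange(n_trials), best]
s2 = snr2[np.arange(n_trials), best]
e2e = s1 * s2 / (s1 + s2 + 1)                        # toy AF end-to-end SNR

print("outage probability below 3 dB:", np.mean(e2e < 10 ** 0.3))
```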
Hussain, Syed Imtiaz
Optimized Power Allocation and Relay Location Selection in Cooperative Relay Networks
Jianrong Bao
An incremental selection hybrid decode-amplify-forward (ISHDAF) scheme for two-hop single-relay systems and a relay selection strategy based on the hybrid decode-amplify-and-forward (HDAF) scheme for multirelay systems are proposed, along with an optimized power allocation, for the Internet of Things (IoT). With total power as the constraint and outage probability as the objective function, the proposed scheme achieves better power efficiency than equal power allocation. Through the ISHDAF scheme and the HDAF relay selection strategy, an optimized power allocation for both the source and relay nodes is obtained, as well as an effective reduction of outage probability. In addition, the optimal relay location for maximizing the gain of the proposed algorithm is investigated and designed. Simulation results show that, in both single-relay and multirelay selection systems, the proposed scheme yields outage probability gains. Comparing the optimized power allocation with equal power allocation, gains of nearly 0.1695 are obtained in the ISHDAF single-relay network at a total power of 2 dB, and about 0.083 in the HDAF relay selection system with 2 relays at a total power of 2 dB.
Look-Ahead Strategies Based on Store-Carry and Forward Relaying for Energy Efficient Cellular Communications
Panayiotis Kolios
With the increasing availability of Internet-type services on mobile devices and the attractive flat-rate all-you-can-eat billing system, cellular telecommunication networks are experiencing tremendous growth in data usage demand. However, there are increasing concerns that current network deployment trends (including more efficient radio access techniques and increased spectrum allocation strategies) will be unable to support the increased Internet traffic in a sustainable way. The delay-tolerant nature of mobile Internet traffic allows a large degree of flexibility in optimizing network performance to meet different design objectives, and it is a feature that has mostly gone unexplored by the research community. In this paper, we introduce a novel message forwarding mechanism in cellular networks that benefits from the inherent delay tolerance of Internet-type services to provide flexible and adjustable forwarding strategies for efficient network operation while guaranteeing timely deliveries. By capitalizing on the elasticity of message delivery deadlines and the actual mobility of nodes inside the cell, considerable performance gains can be achieved by physically propagating information messages within the network.
Adaptive Beamforming Based on Complex Quaternion Processes
Jian-wu Tao
Motivated by the benefits of array signal processing in the quaternion domain, we investigate the problem of adaptive beamforming based on complex quaternion processes in this paper. First, a complex quaternion least-mean squares (CQLMS) algorithm is proposed and its performance is analyzed. The CQLMS algorithm is suitable for adaptive beamforming of vector-sensor arrays. The weight vector update of the CQLMS algorithm is derived based on the complex gradient, leading to lower computational complexity. Because the complex quaternion can exhibit the orthogonal structure of an electromagnetic vector-sensor in a natural way, a complex quaternion model in the time domain is provided for a 3-component vector-sensor array. And the normalized adaptive beamformer using CQLMS is presented. Finally, simulation results are given to validate the performance of the proposed adaptive beamformer.
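As a simplified stand-in for CQLMS, the sketch below runs an ordinary complex-domain LMS beamformer with a training sequence; the quaternion structure that captures the vector-sensor geometry is not reproduced, but the gradient-based weight update has the same shape. Array size, look direction, and step size are assumptions.

```python
# Simplified stand-in for CQLMS: a complex-domain LMS beamformer trained on
# known symbols; the quaternion vector-sensor structure is not modeled.
import numpy as np

rng = np.random.default_rng(3)
M, N, mu = 8, 2000, 0.005
d = 0.5
a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(20.0)))

s = np.sign(rng.standard_normal(N)).astype(complex)   # BPSK training symbols
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = a[:, None] * s[None, :] + noise                   # array snapshots

w = np.zeros(M, dtype=complex)
for n in range(N):
    y = w.conj() @ X[:, n]              # beamformer output y = w^H x
    e = s[n] - y                        # error against the reference symbol
    w += mu * np.conj(e) * X[:, n]      # LMS update: w <- w + mu * x * e^*
print("final error power:", abs(e) ** 2)
```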
Hardware dependencies of GPU-accelerated beamformer performances for microwave breast cancer detection
Salomon Christoph J.
UWB microwave imaging has proven to be a promising technique for early-stage breast cancer detection. The extensive image reconstruction time can be reduced by parallelizing the execution of the underlying beamforming algorithms. However, the efficiency of the parallelization will most likely depend on the degree of parallelism of the imaging algorithm and of the utilized hardware. This paper investigates the dependencies of two different beamforming algorithms on multiple hardware specifications of several graphics boards. The parallel implementation is realized using NVIDIA's CUDA. Three conclusions are drawn about the behavior of the parallel implementation and how to efficiently use the accessible hardware.
CERN Relay Race
CERN Running Club
The CERN relay race will take place around the Meyrin site on Thursday 20 May, starting at 12.15. If possible, please avoid driving on the site during this 20-minute period. If you do meet runners while driving your car, please STOP until they have all passed. Thank you for your cooperation. Details on the route, and how to register your team for the relay race, can be found at: https://espace.cern.ch/Running-Club/CERN-Relay
Radioisotope relay instrument
Pozdnyakov, V.N.; Sazonov, O.L.; Taksar, I.M.; Tesnavs, Eh.R.; Yanushkovskij, V.A.
The paper describes a radioisotope relay device containing a radiation source, a detector, and an electronic relay block with a comparative threshold mechanism. The device differs from previously known ones in that, for the purpose of increasing stability and speed of action, the electronic relay block is a separate unit and contains two threshold pulse generators which are joined, across series-connected "AND" and "OR" elements, to one of the inputs of the comparative threshold mechanism, whose second input is connected with a detector and whose outputs are connected with a relay element connected by feedback with the above-mentioned "AND" elements. (author)
Best relay selection is a bandwidth-efficient technique for multiple-relay environments that does not compromise system performance. The problem of relay selection is more challenging in underlay cognitive networks due to strict interference constraints to the primary users. Generally, relay selection is done on the basis of maximum end-to-end signal to noise ratio (SNR). However, it requires large amounts of channel state information (CSI) at different network nodes. In this paper, we present and analyze a reactive relay selection scheme in underlay cognitive networks where the relays operate with fixed gains near a primary user. The system model minimizes the amount of CSI required at different nodes, and the destination selects the best relay on the basis of maximum relay-to-destination SNR. We derive closed-form expressions for the received SNR statistics, outage probability, bit error probability and average channel capacity of the system. Simulation results are also presented to confirm the validity of the derived expressions. © 2012 IEEE.
Robust regularized least-squares beamforming approach to signal estimation
Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.
In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill...
Performance analysis of two-way amplify and forward relaying with adaptive modulation over multiple relay network
Hwang, Kyusung
In this letter, we propose two-way amplify-and-forward relaying in conjunction with adaptive modulation in order to improve spectral efficiency of relayed communication systems while monitoring the required error performance. We also consider a multiple relay network where only the best relay node is utilized so that the diversity order increases while maintaining a low complexity of implementation as the number of relays increases. Based on the best relay selection criterion, we offer an upper bound on the signal-to-noise ratio to keep the performance analysis tractable. Our numerical examples show that the proposed system offers a considerable gain in spectral efficiency while satisfying the error rate requirements. © 2011 IEEE.
Hwang, Kyusung; Ko, Youngchai; Alouini, Mohamed-Slim
Power system relaying
Horowitz, Stanley H; Niemira, James K
The previous three editions of Power System Relaying offer comprehensive and accessible coverage of the theory and fundamentals of relaying and have been widely adopted on university and industry courses worldwide. With the third edition, the authors have added new and detailed descriptions of power system phenomena such as stability, system-wide protection concepts and discussion of historic outages. Power System Relaying, 4th Edition continues its role as an outstanding textbook on power system protection for senior and graduate students in the field of electric power engineering and a refer...
The CERN relay race will take place around the Meyrin site on Thursday 5 June starting at 12:15 p.m. If possible, please avoid driving on the site during this 20-minute period. If you do meet runners while driving your car, please STOP until they have all passed. Thank you for your cooperation. Details on how to register your team for the relay race are given on the Staff Association Bulletin web site. You can access the online registration form at: http://cern.ch/club-running-relay/form.html
The CERN relay race will take place around the Meyrin site on Wednesday 23 May starting at 12:15. If possible, please avoid driving on the site during this 20-minute period. If you do meet runners while driving your car, please STOP until they have all passed. Thank you for your cooperation. Details on how to register your team for the relay race are given on the Staff Association Bulletin web site. You can access the online registration form at: http://cern.ch/club-running-relay/form.html
Half-Duplex and Full-Duplex AF and DF Relaying with Energy-Harvesting in Log-Normal Fading
Rabie, Khaled M.; Adebisi, Bamidele; Alouini, Mohamed-Slim
..., in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying, which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can...
Ready, steady… relay!
Thursday 5 June. With another year comes another success for CERN's Relay Race. With 76 teams taking part it was the second highest turnout in the race's history. 'The Shabbys' won the relay race in 10 minutes 51 seconds. As popular as ever, this year the relay race took on the atmosphere of a mini carnival. Gathering on the lawn outside Restaurant 1, various stalls and attractions added to the party feeling of the event, with beer courtesy of 'AGLUP', the Belgian beer club, and a wandering jazz group entertaining spectators and competitors alike. Reflecting the greater involvement of other associations in the relay race, the president of the Staff Association Clubs Committee, James Purvis, was the guest of honour, launching the start of the race and presenting the prizes. As regular followers of the race could have probably predicted, The Shabbys were once again victorious and claimed first place. The team members th...
CERN Relay Race 2009
The CERN relay race will take place around the Meyrin site on Thursday 14th May starting at 12:15. If possible, please avoid driving on the site during this 20-minute period. If you do meet runners while driving your car, please STOP until they have all passed. Thank you for your cooperation. More details on how to register your team for the relay race...
The CERN relay race will take place around the Meyrin site on Wednesday 17 May starting at 12:15. If possible, please avoid driving on the site during this 20-minute period. If you do meet runners while driving your car, please STOP until they have all passed. Details on how to register your team for the relay race are given on the Staff Association Bulletin web site.
Modular relay control
Ivarsson, Mikael
Enics Sweden AB, Västerås, is an electronics manufacturing services company with its main business in manufacturing electronics. Most, if not all, electronic devices that are manufactured are tested extensively before delivery to ensure proper functionality. Often during tests, a large number of signals are measured by one or a few digital multimeters and are therefore switched through relays. Relays are also used when applying stimuli with high currents or voltages to the unit under test. ...
Opportunistic relaying in multipath and slow fading channel: Relay selection and optimal relay selection period
Sungjoon Park
In this paper we present opportunistic relay communication strategies of decode and forward relaying. The channel that we are considering includes pathloss, shadowing, and fast fading effects. We find a simple outage probability formula for opportunistic relaying in the channel, and validate the results by comparing it with the exact outage probability. Also, we suggest a new relay selection algorithm that incorporates shadowing. We consider a protocol of broadcasting the channel gain of the previously selected relay. This saves resources in slow fading channel by reducing collisions in relay selection. We further investigate the optimal relay selection period to maximize the throughput while avoiding selection overhead. © 2011 IEEE.
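A quick Monte-Carlo sketch of the opportunistic decode-and-forward idea under an assumed pathloss-normalized channel with log-normal shadowing and Rayleigh fading: the relay with the largest min(source-relay, relay-destination) SNR is selected and outage is counted against a threshold. The selection-period optimization of the paper is not modeled.

```python
# Monte-Carlo sketch of opportunistic decode-and-forward selection under
# assumed log-normal shadowing plus Rayleigh fading.
import numpy as np

rng = np.random.default_rng(4)
trials, K = 100_000, 5
shadow = 10 ** (rng.normal(0.0, 4.0, (trials, K, 2)) / 10)   # 4 dB shadowing
fade = rng.exponential(1.0, (trials, K, 2))                  # Rayleigh power gains
snr = 5.0 * shadow * fade                                    # mean SNR ~7 dB (assumed)

bottleneck = snr.min(axis=2)            # DF is limited by the weaker hop
best = bottleneck.max(axis=1)           # opportunistic relay selection
print("outage ~", np.mean(best < 1.0))  # 0 dB outage threshold (assumed)
```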
Amplify-and-forward relaying in wireless communications
Rodriguez, Leonardo Jimenez; Le-Ngoc, Tho
This SpringerBrief explores the advantage of relaying techniques in addressing the increasing demand for high data rates and reliable services over the air. It demonstrates how to design cost-effective relay systems that provide high spectral efficiency and fully exploit the diversity of the relay channel. The brief covers advances in achievable rates, power allocation schemes, and error performance for half-duplex (HD) and full-duplex (FD) amplify-and-forward (AF) single-relay systems. The authors discuss the capacity and respective optimal power allocation for a wide range of HD protocols ov...
Generalized instantly decodable network coding for relay-assisted networks
Elmahdy, Adel M.
In this paper, we investigate the problem of minimizing the frame completion delay for Instantly Decodable Network Coding (IDNC) in relay-assisted wireless multicast networks. We first propose a packet recovery algorithm in the single relay topology which employs generalized IDNC instead of strict IDNC previously proposed in the literature for the same relay-assisted topology. This use of generalized IDNC is supported by showing that it is a super-set of the strict IDNC scheme, and thus can generate coding combinations that are at least as efficient as strict IDNC in reducing the average completion delay. We then extend our study to the multiple relay topology and propose a joint generalized IDNC and relay selection algorithm. This proposed algorithm benefits from the reception diversity of the multiple relays to further reduce the average completion delay in the network. Simulation results show that our proposed solutions achieve much better performance compared to previous solutions in the literature. © 2013 IEEE.
Performance Analysis of Selective Decode-and-Forward Multinode Incremental Relaying with Maximal Ratio Combining
Hadjtaieb, Amir
In this paper, we propose an incremental multinode relaying protocol with arbitrary N relay nodes that allows an efficient use of the channel spectrum. The destination combines the received signals from the source and the relays using maximal ratio combining (MRC). The transmission ends successfully once the accumulated signal-to-noise ratio (SNR) exceeds a predefined threshold. The number of relays participating in the transmission is adapted to the channel conditions based on feedback from the destination. The use of incremental relaying allows obtaining a higher spectral efficiency. Moreover, the symbol error probability (SEP) performance is enhanced by using MRC at the relays. The use of MRC at the relays implies that each relay overhears the signals from the source and all previous relays and combines them using MRC. The proposed protocol differs from most existing relaying protocols in that it combines both incremental relaying and MRC at the relays for a multinode topology. Our analyses for a decode-and-forward mode show that: (i) compared to existing multinode relaying schemes, the proposed scheme can essentially achieve the same SEP performance but with a smaller average number of time slots, (ii) compared to schemes without MRC at the relays, the proposed scheme can approximately achieve a 3 dB gain.
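A minimal simulation of the incremental idea above, assuming i.i.d. Rayleigh links and that every activated relay decodes correctly: relays join one at a time and the destination's MRC output adds SNRs until the threshold is crossed, which is what keeps the average number of time slots low.

```python
# Monte-Carlo sketch of incremental relaying with MRC at the destination,
# assuming i.i.d. Rayleigh links and perfect decoding at each relay.
import numpy as np

rng = np.random.default_rng(5)
trials, N, gamma_th = 50_000, 4, 3.0
snr_sd = rng.exponential(1.0, trials)          # direct source-destination SNR
snr_rd = rng.exponential(1.0, (trials, N))     # relay-destination SNRs

acc = snr_sd.copy()
slots = np.ones(trials)                        # slot 1: source broadcast
for k in range(N):
    need = acc < gamma_th                      # only unfinished trials use relay k
    acc[need] += snr_rd[need, k]               # MRC adds SNRs
    slots[need] += 1

print("avg time slots:", slots.mean(), " outage ~", np.mean(acc < gamma_th))
```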
Modified Dynamic Decode-and-Forward Relaying Protocol for Type II Relay in LTE-Advanced and Beyond.
Nam, Sung Sik; Alouini, Mohamed-Slim; Choi, Seyeong
In this paper, we propose a modified dynamic decode-and-forward (MoDDF) relaying protocol to meet the critical requirements for user equipment (UE) relays in next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed MoDDF realizes fast jump-in relaying and sequential decoding with an application of a random codeset to the encoding and re-encoding process at the source and the multiple UE relays, respectively. A subframe-by-subframe decoding based on the accumulated (or buffered) messages is employed to achieve energy, information, or mixed combining. Finally, possible early termination of decoding at the end user can lead to higher spectral efficiency and more energy saving by reducing the frequency of redundant subframe transmission and decoding. These attractive features eliminate the need to directly exchange control messages between multiple UE relays and the end user, which is an important prerequisite for practical UE relay deployment.
Robust Nearfield Wideband Beamforming Design Based on Adaptive-Weighted Convex Optimization
Guo Ye-Cai
Nearfield wideband beamformers for microphone arrays have wide applications in multichannel speech enhancement. The nearfield wideband beamformer design based on convex optimization is one of the typical representatives of robust approaches. However, in this approach, the coefficient of the convex optimization is a constant, which does not use all the freedom provided by the weighting coefficient efficiently. Therefore, it is still necessary to further improve the performance. To solve this problem, we developed a robust nearfield wideband beamformer design approach based on adaptive-weighted convex optimization. The proposed approach defines an adaptive weighting function by adaptive array signal processing theory and adjusts its value flexibly, which improves the beamforming performance. During each update of the adaptive weighting function, the convex optimization problem can be formulated as a Second-Order Cone Program (SOCP), which can be solved efficiently using well-established interior-point methods. This method is suitable for the case where the sound source is in the nearfield range, works well in the presence of microphone mismatches, and is applicable to arbitrary array geometries. Several design examples are presented to verify the effectiveness of the proposed approach and the correctness of the theoretical analysis.
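A toy convex beamformer design in the same spirit, written with cvxpy (assumed available): minimize the response toward a set of sidelobe directions subject to a distortionless look-direction constraint, which is an SOCP. The nearfield wideband model and the adaptive re-weighting loop of the paper are reduced to a single narrowband, fixed-weight pass.

```python
# Toy SOCP beamformer design with cvxpy (assumed available): minimize gain
# toward sampled sidelobe directions, distortionless toward the look direction.
import numpy as np
import cvxpy as cp

M, d = 10, 0.5
steer = lambda th: np.exp(2j * np.pi * d * np.arange(M) * np.sin(th))
look = steer(np.deg2rad(0.0))
sidelobes = np.stack([steer(np.deg2rad(t)) for t in range(20, 80, 5)])

w = cp.Variable(M, complex=True)
objective = cp.Minimize(cp.norm(sidelobes.conj() @ w, 2))  # SOC objective
constraints = [cp.conj(w) @ look == 1]                     # w^H a0 = 1
cp.Problem(objective, constraints).solve()

print("max sidelobe gain:", np.abs(sidelobes.conj() @ w.value).max())
```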
Stochastic Beamforming via Compact Antenna Arrays
Alrabadi, Osama; Pedersen, Gert Frølund
The paper investigates the average beamforming (BF) gain of compact antenna arrays when statistical channel knowledge is available. The optimal excitation (precoding vector) and impedance termination that maximize the average BF gain are a compromise between the ones that maximize the array...
Single-station 6C beamforming
Nakata, N.; Hadziioannou, C.; Igel, H.
Six-component measurements of seismic ground motion provide a unique opportunity to identify and decompose seismic wavefields into different wave types and incoming azimuths, as well as to estimate structural information (e.g., phase velocity). By using the relationship between the transverse component and vertical rotational motion for Love waves, we can find the incident azimuth of the wave and the phase velocity. Therefore, when we scan the entire range of azimuths and slownesses, we can process the seismic waves in a way similar to conventional beamforming, without using a station array. To further improve the beam resolution, we use the distribution of the amplitude ratio between translational and rotational motions at each time sample. With this beamforming, we decompose multiple incoming waves by azimuth and phase velocity using only one station. We demonstrate this technique using data observed at Wettzell (vertical rotational motion and 3C translational motions). The beamforming results are encouraging for extracting phase velocity at the station location, for application to oceanic microseisms, and for identifying complicated SH wave arrivals. We also discuss single-station beamforming using other components (vertical translational and horizontal rotational components). For future work, we need to understand the resolution limit of this technique, suitable lengths of time windows, and the sensitivity to weak motion.
Alternate MIMO relaying with three AF relays using interference alignment
In this paper, we study a two-hop half-duplex relaying network with one source, one destination, and three amplify-and-forward (AF) relays equipped with M antennas each. We consider alternate transmission to compensate for the inherent loss of the capacity pre-log factor 1/2 in half-duplex mode, where the source transmits messages alternately to a pair of relays and to the remaining relay. The inter-relay interference caused by alternate transmission is aligned to create additional degrees of freedom (DOFs). It is shown that the proposed scheme enables us to exploit 3M/4 DOFs, compared with the M/2 DOFs of conventional AF relaying. More specifically, suboptimal linear filter designs for the source and the three relays are proposed to maximize the achievable sum-rate. We verify using selected numerical results that the proposed filter designs give significant improvement in sum-rate over a naive filter and conventional relaying schemes. © 2012 IEEE.
Full-Duplex Relay Selection in Cognitive Underlay Networks
In this work, we analyze the performance of full-duplex relay selection (FDRS) in spectrum-sharing networks. Contrary to half-duplex relaying, full-duplex relaying (FDR) enables simultaneous listening/forwarding at the secondary relay(s), thereby allowing for a higher spectral efficiency. However, since the source and relay simultaneously transmit in FDR, their superimposed signal at the primary receiver should now satisfy the existing interference constraint, which can considerably limit the secondary network throughput. In this regard, relay selection can offer an adequate solution to boost the secondary throughput while satisfying the imposed interference limit. We first analyze the performance of opportunistic FDRS with residual self-interference (RSI) by deriving the exact cumulative distribution function of its end-to-end signal-to-interference-plus-noise ratio under Nakagami-m fading. We also evaluate the offered diversity gain of relay selection for different full-duplex cooperation schemes in the presence/absence of a direct source-destination link. When the adopted RSI link gain model is sublinear in the relay power, which agrees with recent research findings, we show that remarkable diversity gain can be recovered even in the presence of an interfering direct link. Second, we evaluate the end-to-end performance of FDRS with interference constraints due to the presence of a primary receiver. Finally, the presented exact theoretical findings are verified by numerical simulations.
Relay test program
Bandyopadhyay, K.K.; Kunkel, C.; Shteyngart, S.
This report presents the results of a relay test program conducted by Brookhaven National Laboratory (BNL) under the sponsorship of the US Nuclear Regulatory Commission (NRC). The program is a continuation of an earlier test program, the results of which were published in NUREG/CR-4867. The current program was carried out in two phases: electrical testing and vibration testing. The objective was primarily to focus on the electrical discontinuity or continuity of relays and circuit breaker tripping mechanisms subjected to electrical pulses and vibration loads. The electrical testing was conducted by KEMA-Powertest Company and the vibration testing was performed at Wyle Laboratories, Huntsville, Alabama. This report discusses the test procedures, presents the test data, includes an analysis of the data and provides recommendations regarding reliable relay testing.
Interference alignment for degrees of freedom improvement in 3-relay half-duplex systems
Park, Seongho; Ko, Youngchai; Park, Kihong; Alouini, Mohamed-Slim
In a half-duplex relaying, the capacity pre-log factor is a major drawback in spectral efficiency. This paper proposes a linear precoding scheme and an alternate relaying protocol in a dual-hop half-duplex system where three relays help...
The CERN relay race will take place around the Meyrin site on Thursday 19 May starting at 12:15. If possible, please avoid driving on the site during this 20-minute period. If you do meet runners while driving your car, please STOP until they have all passed. Thank you for your cooperation. Details of the course and of how to register your team for the relay race can be found here. Some advice for all runners from the Medical Service can also be found here.
Relay Selection with Limited and Noisy Feedback
Eltayeb, Mohammed E.; Elkhalil, Khalil; Mas'ud, Abdullahi Abubakar; Al-Naffouri, Tareq Y.
Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. Nonetheless, relay selection algorithms generally require error-free channel state information (CSI) from all cooperating relays. Practically, CSI...
Opportunistic Relay Selection With Limited Feedback
Eltayeb, Mohammed E.; Elkhalil, Khalil; Bahrami, Hamid Reza; Al-Naffouri, Tareq Y.
Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. Generally, relay selection algorithms require channel state information (CSI) feedback from all cooperating relays to make a selection decision...
Cognitive Relay Networks: A Comprehensive Survey
Ayesha Naeem
Cognitive radio is an emerging technology to deal with the scarcity of and demand for radio spectrum by dynamically assigning spectrum to unlicensed users. This revolutionary technology shifts the paradigm in wireless system design by allowing unlicensed users the ability to sense, adapt and share the dynamic spectrum. Cognitive radio technology has been applied to different networks and applications, ranging from wireless to public safety, smart grid, medical, relay and cellular applications, to increase the throughput and spectrum efficiency of the network. Among these applications, cognitive relay networks are one application where cognitive radio technology has been applied. A cognitive relay network increases the network throughput by reducing the overall path loss and also by ensuring cooperation among secondary users and cooperation between primary and secondary users. In this paper, our aim is to provide a survey on cognitive relay networks. We also provide a detailed review of existing schemes in cognitive relay networks on the basis of relaying protocol, relay cooperation and channel model.
Secure Communication for Two-Way Relay Networks with Imperfect CSI
Cong Sun
This paper considers a two-way relay network, where two legitimate users exchange messages through several cooperative relays in the presence of an eavesdropper, and the channel state information (CSI) of the eavesdropper is imperfectly known. The Amplify-and-Forward (AF) relay protocol is used. We design the relay beamforming weights to minimize the total relay transmit power, while requiring the signal-to-noise ratios (SNRs) of the legitimate users to be higher than given thresholds and the achievable rate of the eavesdropper to be upper-bounded. Due to the imperfect CSI, a robust optimization problem is formulated. A novel iterative algorithm is proposed, where the line search technique is applied, and feasibility is preserved during iterations. In each iteration, two Quadratically-Constrained Quadratic Programming (QCQP) subproblems and a one-dimensional subproblem are optimally solved. The optimality property of the robust optimization problem is analyzed. Simulation results show that the proposed algorithm performs very close to the non-robust model with perfect CSI in terms of the obtained relay transmit power; it achieves a higher secrecy rate compared to existing work. Numerically, the proposed algorithm converges very quickly, and more than 85% of the problems are solved optimally.
Scalable Intersample Interpolation Architecture for High-channel-count Beamformers
Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt
Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58% and 75%, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
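For orientation, the sketch below shows the baseline such architectures improve on: dynamic delay-and-sum focusing with linear intersample interpolation of fractional delays. Geometry, sampling rate, and the single point scatterer are illustrative assumptions.

```python
# Illustrative delay-and-sum (DAS) focusing with linear intersample
# interpolation; geometry and the point scatterer are assumptions.
import numpy as np

fs, c = 40e6, 1540.0                     # sampling rate (Hz), speed of sound (m/s)
n_ch, n_samp, pitch = 32, 2000, 0.3e-3
elem_x = (np.arange(n_ch) - (n_ch - 1) / 2) * pitch

rng = np.random.default_rng(6)
rf = 0.01 * rng.standard_normal((n_ch, n_samp))    # noise floor
xs, zs = 0.0, 20e-3                                # scatterer position (m)
tof = (zs + np.hypot(elem_x - xs, zs)) / c         # transmit + receive time
for ch in range(n_ch):                             # synthesize one echo per channel
    i = tof[ch] * fs
    i0, frac = int(i), i % 1
    rf[ch, i0] += 1 - frac
    rf[ch, i0 + 1] += frac

def das_point(x, z):
    delays = (z + np.hypot(elem_x - x, z)) / c * fs   # fractional sample indices
    i0 = delays.astype(int)
    frac = delays - i0
    # linear interpolation between neighbouring samples on each channel
    vals = rf[np.arange(n_ch), i0] * (1 - frac) + rf[np.arange(n_ch), i0 + 1] * frac
    return vals.sum()

print("on focus:", das_point(xs, zs), " off focus:", das_point(2e-3, 24e-3))
```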
Relay Precoder Optimization in MIMO-Relay Networks With Imperfect CSI
In this paper, we consider robust joint designs of relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first model is a stochastic error (SE) model, where the probability distribution of the CSI error is Gaussian. This model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The next model for the CSI error is a norm-bounded error (NBE) model, where the CSI error can be specified by an uncertainty set. This model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach. For this model, we propose a robust precoder design that minimizes total relay transmit power under constraints on MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed design through simulations. © 2011 IEEE.
The CERN Relay Race will take place around the Meyrin site on Wednesday 19 May between 12.15 and 12.35. If possible, please avoid driving on the site during this 20 minute period. If you do meet runners in your car, please STOP until they all have passed. Thank you for your understanding.
The CERN Relay Race will take place around the Meyrin site on Wednesday May 21st between 12h15 and 12h35. If possible, please avoid driving on the site during this 20 minute period. If you do meet runners in your car, please STOP until they all have passed. Thank you for your understanding.
The CERN Relay Race will take place around the Meyrin site on Wednesday 23 May between 12:20 and 12:35. If possible, please avoid driving on the site during this 15 minute period. If you do meet runners in your car, please stop until they all have passed. Thank you for your understanding.
The CERN Relay Race will take place around the Meyrin site on Wednesday 22 May between 12h20 and 12h35. If possible, please avoid driving on the site during this 15 minute period. If you do meet runners in your car, please STOP until they all have passed. Thank you for your understanding.
The CERN relay race, now in its 39th year, is already a well-known tradition, but this year the organizers say the event will have even more of a festival feeling. Just off the starting line of the CERN relay race.For the past few years, spectators and runners at the CERN relay race have been able to enjoy a beer while listening to music from the CERN music and jazz clubs. But this year the organizers are aiming for "even more of a festival atmosphere". As David Nisbet, President of the CERN running club and organizer of the relay race, says: "Work is not just about getting your head down and doing the theory, it's also about enjoying the company of your colleagues." This year, on top of music from the Santa Luis Band and the Canettes Blues Band, there will be demonstrations from the Aikido and softball clubs, a stretching session by the Fitness club, as well as various stalls and of course, the well-earned beer from AGLUP, the B...
2005 CERN Relay Race
Patrice Loiez
The CERN Relay Race takes place each year in May and sees participants from all areas of the CERN staff. The winners in 2005 were The Shabbys with Los Latinos Volantes in second and Charmilles Technologies a close third. To add a touch of colour and levity, the CERN Jazz Club provided music at the finishing line.
Sungjoon Park; Stark, Wayne E.
In this paper we present opportunistic relay communication strategies of decode and forward relaying. The channel that we are considering includes pathloss, shadowing, and fast fading effects. We find a simple outage probability formula
Optimal Contract Design for Cooperative Relay Incentive Mechanism under Moral Hazard
Nan Zhao
Cooperative relay can effectively improve spectrum efficiency by exploiting spatial diversity in wireless networks. However, wireless nodes may acquire different network information owing to varying user locations and mobility, channel conditions, and other factors, which results in asymmetric information between the source and the relay nodes (RNs). In this paper, the relay incentive mechanism between relay nodes and the source is investigated under asymmetric information. By modelling multiuser cooperative relay as a labour market, a contract model with moral hazard for relay incentives is proposed. To effectively incentivize the potential RNs to participate in cooperative relay, optimization problems are formulated to maximize the source's utility while meeting the feasibility conditions under both symmetric and asymmetric information scenarios. Numerical simulation results demonstrate the effectiveness of the proposed contract design scheme for cooperative relay.
In a half-duplex relaying, the capacity pre-log factor is a major drawback in spectral efficiency. This paper proposes a linear precoding scheme and an alternate relaying protocol in a dual-hop half-duplex system where three relays help the communication between the source and the destination. In our proposed scheme, we consider a phase-incoherent method at the relays in which the source alternately transmits message signals to the different relays. In addition, we propose a linear interference alignment scheme which can eliminate the inter-relay interference resulting from the phase incoherence of relaying. Based on our analysis of degrees of freedom and our simulation results, we show that our proposed scheme achieves additional degrees of freedom compared to conventional half-duplex relaying. © 2011 IEEE.
Reliable quantum communication over a quantum relay channel
Gyongyosi, Laszlo, E-mail: [email protected] [Quantum Technologies Laboratory, Department of Telecommunications, Budapest University of Technology and Economics, 2 Magyar tudosok krt, Budapest, H-1117, Hungary and Information Systems Research Group, Mathematics and Natural Sciences, Hungarian Ac (Hungary); Imre, Sandor [Quantum Technologies Laboratory, Department of Telecommunications, Budapest University of Technology and Economics, 2 Magyar tudosok krt, Budapest, H-1117 (Hungary)
We show that reliable quantum communication over unreliable quantum relay channels is possible. The coding scheme combines results on the superadditivity of quantum channels and efficient quantum coding approaches.
Performance Evaluation of Analog Beamforming with Hardware Impairments for mmW Massive MIMO Communication in an Urban Scenario
Sonia Gimenez
The use of massive multiple-input multiple-output (MIMO) techniques for communication at millimeter-wave (mmW) frequency bands has become a key enabler to meet the data rate demands of the upcoming fifth generation (5G) cellular systems. In particular, analog and hybrid beamforming solutions are receiving increasing attention as less expensive and more power-efficient alternatives to fully digital precoding schemes. Despite their proven good performance in simple setups, their suitability for realistic cellular systems with many interfering base stations and users is still unclear. Furthermore, the performance of massive MIMO beamforming and precoding methods is in practice also affected by practical limitations and hardware constraints. In this sense, this paper assesses the performance of digital precoding and analog beamforming in an urban cellular system with an accurate mmW channel model under both ideal and realistic assumptions. The results show that analog beamforming can reach the performance of fully digital maximum ratio transmission under line-of-sight conditions and with a sufficient number of parallel radio-frequency (RF) chains, especially when the practical limitations of outdated channel information and per-antenna power constraints are considered. This work also shows the impact of the phase shifter errors and combiner losses introduced by real phase shifter and combiner implementations on analog beamforming, where the former have minor impact on the performance, while the latter determine the optimum number of RF chains to be used in practice.
Multiline 3D beamforming using micro-beamformed datasets for pediatric transesophageal echocardiography
Bera, D.; Raghunathan, S. B.; Chen, C.; Chen, Z.; Pertijs, M. A. P.; Verweij, M. D.; Daeichin, V.; Vos, H. J.; van der Steen, A. F. W.; de Jong, N.; Bosch, J. G.
Until now, no matrix transducer has been realized for 3D transesophageal echocardiography (TEE) in pediatric patients. In 3D TEE with a matrix transducer, the biggest challenges are to connect a large number of elements to a standard ultrasound system, and to achieve a high volume rate (>200 Hz). To address these issues, we have recently developed a prototype miniaturized matrix transducer for pediatric patients with micro-beamforming and a small central transmitter. In this paper we propose two multiline parallel 3D beamforming techniques (µBF25 and µBF169) using the micro-beamformed datasets from 25 and 169 transmit events to achieve volume rates of 300 Hz and 44 Hz, respectively. Both the realizations use angle-weighted combination of the neighboring overlapping sub-volumes to avoid artifacts due to sharp intensity changes introduced by parallel beamforming. In simulation, the image quality in terms of the width of the point spread function (PSF), lateral shift invariance and mean clutter level for volumes produced by µBF25 and µBF169 are similar to the idealized beamforming using a conventional single-line acquisition with a fully-sampled matrix transducer (FS4k, 4225 transmit events). For completeness, we also investigated a 9 transmit-scheme (3 × 3) that allows even higher frame rates but found worse B-mode image quality with our probe. The simulations were experimentally verified by acquiring the µBF datasets from the prototype using a Verasonics V1 research ultrasound system. For both µBF169 and µBF25, the experimental PSFs were similar to the simulated PSFs, but in the experimental PSFs, the clutter level was ~10 dB higher. Results indicate that the proposed multiline 3D beamforming techniques with the prototype matrix transducer are promising candidates for real-time pediatric 3D TEE.
Acoustic Emission Beamforming for Detection and Localization of Damage
Rivey, Joshua Callen
The aerospace industry is a constantly evolving field with corporate manufacturers continually utilizing innovative processes and materials. These materials include advanced metallics and composite systems. The exploration and implementation of new materials and structures has prompted the development of numerous structural health monitoring and nondestructive evaluation techniques for quality assurance purposes and pre- and in-service damage detection. Exploitation of acoustic emission sensors coupled with a beamforming technique provides the potential for creating an effective non-contact and non-invasive monitoring capability for assessing structural integrity. This investigation used an acoustic emission detection device that employs helical arrays of MEMS-based microphones around a high-definition optical camera to provide real-time non-contact monitoring of inspection specimens during testing. The study assessed the feasibility of the sound camera for use in structural health monitoring of composite specimens during tensile testing for detecting onset of damage in addition to nondestructive evaluation of aluminum inspection plates for visualizing stress wave propagation in structures. During composite material monitoring, the sound camera was able to accurately identify the onset and location of damage resulting from large amplitude acoustic feedback mechanisms such as fiber breakage. Damage resulting from smaller acoustic feedback events such as matrix failure was detected but not localized to the degree of accuracy of larger feedback events. Findings suggest that beamforming technology can provide effective non-contact and non-invasive inspection of composite materials, characterizing the onset and the location of damage in an efficient manner. With regards to the nondestructive evaluation of metallic plates, this remote sensing system allows us to record wave propagation events in situ via a single-shot measurement. This is a significant improvement over...
A beamforming system based on the acousto-optic effect
Beamforming techniques are usually based on microphone arrays. The present work uses a beam of light as a sensor element, and describes a beamforming system that locates sound sources based on the acousto-optic effect, that is, the interaction between sound and light. The use of light as a sensin...
Plane Wave Medical Ultrasound Imaging Using Adaptive Beamforming
Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt
In this paper, the adaptive minimum variance (MV) beamformer is applied to medical ultrasound imaging. The significant resolution and contrast gain provided by the adaptive MV beamformer introduces the possibility of plane wave (PW) ultrasound imaging. Data is obtained using...
Fourier beamformation of multistatic synthetic aperture ultrasound imaging
A new Fourier beamformation (FB) algorithm is presented for multistatic synthetic aperture ultrasound imaging. It can reduce the number of computations by a factor of 20 compared to conventional Delay-and-Sum (DAS) beamformers. The concept is based on the wavenumber algorithm from radar and sonar...
Synthetic aperture flow imaging using dual stage beamforming
Li, Ye; Jensen, Jørgen Arendt
A method for synthetic aperture flow imaging using dual stage beamforming has been developed. The main motivation is to increase the frame rate and still maintain a beamforming quality sufficient for flow estimation that is possible to implement in a commercial scanner. This method can generate...
Improved Relay Node Placement Algorithm for Wireless Sensor Networks Application in Wind Farm
Chen, Qinyin; Hu, Y.; Chen, Zhe
...-tolerance. Each wind turbine has a potentially large number of data points needing to be monitored and collected; as farms continue to increase in scale, distances between turbines can reach several hundred meters. Optimal placement of relays in a large farm requires an efficient algorithmic solution. A relay node placement algorithm is proposed in this paper to approximate the optimal position for relays connecting each turbine. However, constraints are then required to prevent relay nodes being overloaded in 3 dimensions. The algorithm is extended to 3-dimensional Euclidean space for this optimal relay...
Optimized Policies for Improving Fairness of Location-based Relay Selection
Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova
For WLAN systems in which relaying is used to improve throughput performance for nodes located at the cell edge, node mobility and information collection delays can have a significant impact on the performance of a relay selection scheme. In this paper we extend our existing Markov Chain modeling framework for relay selection to allow for efficient calculation of relay policies given either mean throughput or kth throughput percentile as the optimization criterion. In a scenario with a static access point, a static relay, and a mobile destination node, the kth throughput percentile optimization...
MVDR Algorithm Based on Estimated Diagonal Loading for Beamforming
Yuteng Xiao
Beamforming algorithms are widely used in many signal processing fields. At present, the typical beamforming algorithm is MVDR (Minimum Variance Distortionless Response). However, the performance of the MVDR algorithm relies on an accurate covariance matrix, and it degrades dramatically when the covariance matrix is inaccurate. To solve this problem, studying the beamforming array signal model and the MVDR algorithm, we improve the MVDR algorithm based on estimated diagonal loading for beamforming. An MVDR optimization model based on diagonal loading compensation is established, and the interval of the diagonal loading compensation value is deduced on the basis of matrix theory. The optimal diagonal loading value in the interval is also determined through experiment. The experimental results show that the algorithm, compared with existing algorithms, is practical and effective.
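A minimal numpy illustration of diagonal loading in MVDR: the sample covariance is regularized as R + delta*I before inversion. The paper bounds and estimates delta; a simple scaled-trace heuristic stands in for that choice here.

```python
# Minimal illustration of diagonal loading for MVDR; the loading level delta
# is an assumed heuristic, not the paper's estimated value.
import numpy as np

rng = np.random.default_rng(7)
M, N, d = 10, 15, 0.5                     # few snapshots: noisy covariance
a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(5.0)))
X = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
R = X @ X.conj().T / N                    # poorly estimated sample covariance

delta = 0.1 * np.trace(R).real / M        # assumed loading level
Rl = R + delta * np.eye(M)                # diagonally loaded covariance
w = np.linalg.solve(Rl, a)
w /= a.conj() @ np.linalg.solve(Rl, a)    # distortionless normalization

print("look-direction gain:", abs(w.conj() @ a))   # ~1 by construction
```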
Relay Selection for Cooperative Relaying in Wireless Energy Harvesting Networks
Zhu, Kaiyan; Wang, Fei; Li, Songsong; Jiang, Fengjiao; Cao, Lijie
Energy harvesting from the surroundings is a promising solution to provide energy supply and extend the life of wireless sensor networks. Recently, energy harvesting has been shown to be an attractive solution to prolong the operation of cooperative networks. In this paper, we propose a relay selection scheme to optimize amplify-and-forward (AF) cooperative transmission in wireless energy harvesting cooperative networks. The harvested energy and channel conditions are considered in selecting the optimal relay as the cooperative relay, so as to minimize the outage probability of the system. Simulation results show that our proposed relay selection scheme achieves better outage performance than other strategies.
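A small sketch of the selection rule described above, with all models assumed: each candidate AF relay is scored by an end-to-end SNR that depends on both its channel gains and the transmit power its harvested energy allows, and the best-scoring relay is picked (maximizing end-to-end SNR minimizes outage for a fixed threshold).

```python
# Sketch of energy-aware AF relay selection; channel model, power budgets,
# and the unit-length time slot are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(8)
K = 6
h_sr = rng.exponential(1.0, K)            # source-relay power gains
h_rd = rng.exponential(1.0, K)            # relay-destination power gains
energy = rng.uniform(0.2, 1.0, K)         # harvested energy per relay (assumed)

p_s = 2.0                                 # source transmit power (assumed)
p_r = energy / 1.0                        # relay power over a unit-length slot
g1, g2 = p_s * h_sr, p_r * h_rd
snr_e2e = g1 * g2 / (g1 + g2 + 1)         # standard AF end-to-end SNR bound

best = int(np.argmax(snr_e2e))
print("selected relay:", best, " end-to-end SNR:", snr_e2e[best])
```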
Opportunistic Relay Selection in Multicast Relay Networks using Compressive Sensing
Elkhalil, Khalil
Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. However, for relay selection algorithms to make a selection decision, channel state information (CSI) from all cooperating relays is usually required at a central node. This requirement poses two important challenges. Firstly, CSI acquisition generates a great deal of feedback overhead (air-time) that could result in significant transmission delays. Secondly, the fed back channel information is usually corrupted by additive noise. This could lead to transmission outages if the central node selects the set of cooperating relays based on inaccurate feedback information. In this paper, we introduce a limited feedback relay selection algorithm for a multicast relay network. The proposed algorithm exploits the theory of compressive sensing to first obtain the identity of the "strong" relays with limited feedback. Following that, the CSI of the selected relays is estimated using linear minimum mean square error estimation. To minimize the effect of noise on the fed back CSI, we introduce a back-off strategy that optimally backs off on the noisy estimated CSI. For a fixed group size, we provide closed-form expressions for the scaling law of the maximum equivalent SNR for both Decode and Forward (DF) and Amplify and Forward (AF) cases. Numerical results show that the proposed algorithm drastically reduces the feedback air-time and achieves a rate close to that obtained by selection algorithms with dedicated error-free feedback channels.
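A toy version of the compressive-sensing step only: the identities of the few "strong" relays appear as the support of a sparse vector observed through a short random measurement matrix, recovered here with orthogonal matching pursuit. The dimensions, the measurement model, and the use of OMP (rather than the paper's exact recovery method) are assumptions.

```python
# Toy compressive-sensing feedback sketch: strong relays form the support of
# a sparse vector, recovered with orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(9)
K, m, s = 64, 20, 3                      # relays, measurements, strong relays
x = np.zeros(K)
strong = rng.choice(K, size=s, replace=False)
x[strong] = rng.uniform(2.0, 3.0, s)     # only strong relays feed back
A = rng.standard_normal((m, K)) / np.sqrt(m)
y = A @ x + 0.01 * rng.standard_normal(m)

support, r = [], y.copy()                # OMP: greedily grow the support
for _ in range(s):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef         # residual after least-squares refit

print("true strong relays:", sorted(strong), " recovered:", sorted(support))
```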
An improved minimum variance beamforming applied to plane-wave imaging in medical ultrasound
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh; Jensen, Jørgen Arendt
The minimum variance beamformer (MVB) is an adaptive beamformer which provides images with higher resolution and contrast in comparison with non-adaptive beamformers like delay and sum (DAS). It finds the weight vector of the beamformer by minimizing the output power while keeping the desired signal unchanged. We...
Uplink Capacity of 802.16j Mobile Multihop Relay Networks with Transparent Relays
Wang, Hua; Andrews, Jeffrey G.; Iversen, Villy Bæk
...-to-end spectral efficiency. Furthermore, the position and the number of relay stations (RSs) have a great impact on the capacity gain. These results are further verified in the evaluation of the system Erlang capacity. The study demonstrates that with proper deployment of RSs and use of MIMO transmission...
Integrated 60GHz RF beamforming in CMOS
Yu, Yikun; van Roermund, Arthur H M
""Integrated 60GHz RF Beamforming in CMOS"" describes new concepts and design techniques that can be used for 60GHz phased array systems. First, general trends and challenges in low-cost high data-rate 60GHz wireless system are studied, and the phased array technique is introduced to improve the system performance. Second, the system requirements of phase shifters are analyzed, and different phased array architectures are compared. Third, the design and implementation of 60GHz passive and active phase shifters in a CMOS technology are presented. Fourth, the integration of 60GHz phase shifters
Multiple and single snapshot compressive beamforming
Gerstoft, Peter; Xenaki, Angeliki; Mecklenbrauker, Christoph F.
...and multiple snapshots. CS does not require inversion of the data covariance matrix and thus works well even for a single snapshot, where it gives higher resolution than conventional beamforming. For multiple snapshots, CS outperforms conventional high-resolution methods even with coherent arrivals and at low... -lagged superposition of source amplitudes at all hypothetical DOAs. Regularizing with an l1-norm constraint renders the problem solvable with convex optimization, and promoting sparsity gives high-resolution DOA maps. Here the sparse source distribution is derived using maximum a posteriori estimates for both single...
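The sparse-DOA formulation described above can be sketched in a few lines: the snapshot is modeled as A @ x with the columns of A being steering vectors on a DOA grid, and an l1-regularized least-squares fit is solved with plain ISTA (proximal gradient). Array size, grid, noise level, and the regularization weight are illustrative.

```python
# Sparse DOA sketch of the l1 formulation: solve min 0.5||Ax-y||^2 + lam||x||_1
# over a DOA grid with ISTA; all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(10)
M, d = 16, 0.5
grid = np.deg2rad(np.arange(-90, 91, 2))
A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))

truth = [np.deg2rad(-20.0), np.deg2rad(34.0)]
y = sum(np.exp(2j * np.pi * d * np.arange(M) * np.sin(t)) for t in truth)
y = y + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

lam = 0.5
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(len(grid), dtype=complex)
for _ in range(500):
    g = x - (A.conj().T @ (A @ x - y)) / L                   # gradient step
    x = np.maximum(np.abs(g) - lam / L, 0) * np.exp(1j * np.angle(g))  # shrink

print("estimated DOAs (deg):", np.rad2deg(grid[np.abs(x) > 0.3]))
```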
Cooperative Spatial Reuse with Transmit Beamforming in Multi-rate Wireless Networks
Lu, Chenguang; Fitzek, Frank; Eggers, Patrick Claus F.
We present a cooperative spatial reuse (CSR) scheme as a cooperative extension of the current TDMA-based MAC to enable spatial reuse in multi-rate wireless networks. We model spatial reuse as a cooperation problem on utilizing the time slots obtained from the TDMA-based MAC. In CSR, there are two operation modes. One is TDMA mode while the other is spatial reuse mode, in which links transmit simultaneously. Links contribute their own time slots to form a cooperative group to do spatial reuse. Each link joins the group only if it can benefit in capacity or energy efficiency. Otherwise, the link will leave spatial reuse mode and switch back to TDMA. In this work, we focus on the transmit beamforming techniques to enable CSR by interference cancellation on MISO (Multiple Input Single Output) links. We compare the CSR scheme using zero-forcing (ZF) transmit beamforming, namely ZF-CSR, to the TDMA...
Body Loss Study of Beamforming Mode in LTE MIMO Mobile Terminals
Zhang, Shuai; Zhao, Kun; Ying, Zhinong
This paper mainly focuses on the investigation of the body loss of beamforming mode in LTE MIMO mobile terminals with CTIA user effects. The research on the body loss and radiation efficiency is carried out over different phase differences between the two ports of each MIMO antenna. During the studies, four kinds of typical LTE MIMO antennas are used, namely, collocated ground free (GF), parallel GF, parallel on ground (OG) and orthogonal OG MIMO antennas, under four mobile terminal lengths at low and high frequencies. Two kinds of CTIA user effects are included in the research. From the studies, the parallel GF MIMO antenna type exhibits the best beamforming performance of the four MIMO antenna types. In order to verify the simulations, envelope correlation coefficients of two MIMO antenna prototypes are measured. All the measured results agree well with the simulated ones.
A scalable-low cost architecture for high gain beamforming antennas
Bakr, Omar
Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.
Bakr, Omar; Johnson, Mark; Jungdong Park; Adabi, Ehsan; Jones, Kevin; Niknejad, Ali
In this paper, we study a two-hop half-duplex relaying network with one source, one destination, and three amplify-and-forward (AF) relays equipped with M antennas each. We consider alternate transmission to compensate for the inherent loss...
A study on Relay Effect via Magnetic Resonant Coupling for Wireless Power Transfer
Rashid N.A.
Wireless power transfer (WPT) transmits electrical energy from a power source to an electrical load wirelessly, without any conductors. The distance over which WPT can transmit energy is limited. Therefore, a relay was introduced to increase the distance of the WPT capabilities. The effect of the relay on extending the energy transfer distance has been investigated. The relay effect was demonstrated by placing a relay coil between transmitter and receiver, by placing a relay coil biased toward the transmitter, and by placing two relay coils in the designed system. Experimental results are provided to prove the concept of the relay effect. A power transmission efficiency of up to 75% can be achieved at a distance of 1 meter.
On the capacity of multiple cognitive links through common relay under spectrum-sharing constraints
Yang, Yuli
In this paper, we consider an underlay cognitive relaying network consisting of multiple secondary users and introduce a cooperative transmission protocol using a common relay to help with the communications between all secondary source-destination pairs for higher throughput and lower realization complexity. A whole relay-assisted transmission procedure is composed of a multiple access phase and a broadcast phase, where the relay is equipped with multiple antennas, and the secondary sources and destinations are single-antenna nodes. Considering the spectrum-sharing constraints on the secondary sources and the relay, we analyze the capacity behaviors of the underlay cognitive relaying network under study. The corresponding numerical results provide a convenient tool for the presented network design and substantiate a distinguishing feature of the introduced design in that multiple secondary users' communications do not rely on multiple relays, hence allowing for a more efficient use of the radio resources. © 2011 IEEE.
Joint source and relay optimization for interference MIMO relay networks
Khandaker, Muhammad R. A.; Wong, Kai-Kit
This paper considers multiple-input multiple-output (MIMO) relay communication in multi-cellular (interference) systems in which MIMO source-destination pairs communicate simultaneously. It is assumed that due to severe attenuation and/or shadowing effects, communication links can be established only with the aid of a relay node. The aim is to minimize the maximal mean-square-error (MSE) among all the receiving nodes under constrained source and relay transmit powers. Both one- and two-way amplify-and-forward (AF) relaying mechanisms are considered. Since the exactly optimal solution for this practically appealing problem is intractable, we first propose optimizing the source, relay, and receiver matrices in an alternating fashion. Then we contrive a simplified semidefinite programming (SDP) solution based on the error covariance matrix decomposition technique, avoiding the high complexity of the iterative process. Numerical results reveal the effectiveness of the proposed schemes.
Suliman, Mohamed Abdalla Elhag
In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.
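For readers unfamiliar with the Capon beamformer this entry builds on, the sketch below implements the standard minimum-variance distortionless-response (MVDR) solution with plain diagonal loading as the regularizer. The paper's RLS-based regularization itself is not reproduced; the array geometry, angles, and powers are all invented for illustration.

    import numpy as np

    def mvdr_weights(R, a, loading=1e-2):
        # Diagonally loaded Capon/MVDR: w = R^-1 a / (a^H R^-1 a)
        n = R.shape[0]
        Rl = R + loading * np.trace(R).real / n * np.eye(n)
        Ri_a = np.linalg.solve(Rl, a)
        return Ri_a / (a.conj() @ Ri_a)

    def steer(n, theta_deg):
        # Steering vector of a half-wavelength-spaced uniform linear array
        return np.exp(1j * np.pi * np.arange(n) * np.sin(np.radians(theta_deg)))

    rng = np.random.default_rng(0)
    N, snaps = 8, 200
    s = steer(N, 10)    # desired signal from 10 degrees
    i = steer(N, -40)   # strong interferer from -40 degrees
    noise = (rng.standard_normal((N, snaps)) + 1j * rng.standard_normal((N, snaps))) / np.sqrt(2)
    sig = (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)) / np.sqrt(2)
    jam = (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)) / np.sqrt(2)
    X = np.sqrt(10) * s[:, None] * sig + np.sqrt(100) * i[:, None] * jam + noise
    R = X @ X.conj().T / snaps
    w = mvdr_weights(R, s)
    print("gain toward signal:", abs(w.conj() @ s))       # ~1 (distortionless)
    print("gain toward interferer:", abs(w.conj() @ i))   # << 1 (null)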
The CERN Running Club, in collaboration with the Staff Association, is happy to announce the 2018 relay race edition. It will take place on Thursday, May 24th and will consist, as every year, of a round trip of the CERN Meyrin site in teams of 6 members. It is a fun event, and you do not have to run fast to enjoy it. Registrations will be open from May 1st to May 22nd on the Running Club web site. All information concerning the race and the registration is available there too: http://runningclub.web.cern.ch/content/cern-relay-race. A video of the previous edition is also available here: http://cern.ch/go/Nk7C. As every year, there will be animations starting at noon on the lawn in front of Restaurant 1, and information stands for many CERN associations and clubs will be available. The Running Club's partners, namely Berthie Sport, Interfon and Uniqa, will also participate in the event.
47th Relay Race!
On Thursday June 1st at 12.15, Fabiola Gianotti, our Director-General, will fire the starting shot for the 47th Relay Race. This race is above all a festive CERN event, open to runners and walkers, as well as the people cheering them on throughout the race, and those who wish to participate in the various activities organised between 11.30 and 14.30 out on the lawn in front of Restaurant 1. In order to make this sports event accessible to everyone, our Director-General will allow for flexible lunch hours on the day, applicable to all members of the personnel. An alert for the closure of roads will be sent out on the day of the event. The Staff Association and the CERN Running Club thank you in advance for your participation and your continued support throughout the years. This year the CERN Running Club has announced the participation of locally and internationally renowned runners, no less! A bit over a week from the Relay Race of 1st June, the number of teams is going up nicely (already almost 40). Am...
This year's CERN Relay Race will take place around the Meyrin site on Thursday 20th May at 12h00. This annual event is for teams of 6 runners covering distances of 1000m, 800m, 800m, 500m, 500m and 300m respectively. Teams may be entered in the Seniors, Veterans, Ladies, Mixed or Open categories. The registration fee is 10 CHF per runner, and each runner receives a souvenir prize. As usual, there will be a programme of entertainments from 12h in the arrival area, in front of Restaurant no. 1. Drinks, food, CERN club information and music will be available for the pleasure of both runners and spectators. The race starts at 12h15, with results and prize giving at 13h15. For details of the race, and of how to sign up a team, please visit: https://espace.cern.ch/Running-Club/CERN-Relay The event is organised by the CERN Running Club with the support of the CERN Staff Association.
Relays undergo seismic tests
Burton, J.C.
Utilities are required by the Nuclear Regulatory Commission to document that seismic vibration will not adversely affect critical electrical equipment. Seismic testing should be designed to determine the malfunction level (fragility testing). Input possibilities include a continuous sine, a decaying sine, a sine beat, random vibrations, and combinations of random vibrations and sine beat. The sine beat most accurately simulates a seismic event. Test frequencies have a broad range in order to accommodate a variety of relay types and cabinet mountings. Simulation of motion along three axes offers several options, but is best achieved by three in-phase single-axis vibration machines, which are less likely to induce testing fatigue failure. Consensus on what constitutes relay failure favors a maximum two-microsecond discontinuity. Performance tests should be conducted for at least two of the following: (1) nonoperating modes, (2) operating modes, or (3) the transition between the two modes, with the monitoring mode documented for all three. Results should specify a capability curve of maximum safe seismic acceleration and a graph plotting acceleration against sine-beat frequency.
Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging
Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfying due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV) is proposed to enhance the image quality compared to...
FPGA implementation of adaptive beamforming in hearing aids.
Samtani, Kartik; Thomas, Jobin; Varma, G Abhinav; Sumam, David S; Deepu, S P
Beamforming is a spatial filtering technique used in hearing aids to improve target sound reception by reducing interference from other directions. In this paper we propose improvements to an existing architecture for adaptive beamforming based on an array of two omnidirectional microphones for hearing aid applications, and implement it on a Xilinx Artix-7 FPGA using VHDL coding and Xilinx Vivado® 2015.2. Nulls are introduced in particular directions by a combination of two fixed polar patterns. This combination can be adaptively controlled to steer the null in the direction of noise. The beam patterns and improvements in SNR values obtained from experiments in a conference room environment are analyzed.
MIMO Four-Way Relaying
Liu, Huaping; Sun, Fan; De Carvalho, Elisabeth
Two-way relaying in wireless systems has initiated a large research effort during the past few years. Nevertheless, it represents only a specific traffic pattern, and it is of interest to investigate other traffic patterns where such simultaneous processing of information flows can bring... performance advantage. In this paper we consider a four-way relaying multiple-input multiple-output (MIMO) scenario, where each of the two Mobile Stations (MSs) has a two-way connection to the same Base Station (BS), while each connection is through a dedicated Relay Station (RS). The RSs are placed... the sum-rate of the new scheme for the Decode-and-Forward (DF) operational model for the RS. We compare the performance with state-of-the-art reference schemes based on two-way relaying with DF. The results indicate that the sum-rate of the two-phase four-way relaying scheme largely outperforms the four...
The CERN Relay Race will take place around the Meyrin site on Wednesday 18 May between 12.15 and 12.35. This year, weather permitting, there will be some new attractions in the start/finish area on the field behind the Main Building. You will be able to: listen to music played by the CERN Jazz Club; buy drinks at the bar organised by the CERN Running Club; buy lunch served directly on the terrace by the restaurant Novae. ATTENTION: concerning traffic, the recommendations are the same as always: If possible, please avoid driving on the site during this 20 minute period. If you do meet runners in your car, please STOP until they all have passed. Thank you for your understanding.
Voz sobre frame relay
D´Elia, Gabriel Anibal
This thesis deals with voice over frame relay (VoFR), from the digitization of voice through to its transmission over such a network, as well as a comparison with other transport options such as VoIP. Given the characteristics of the frame relay protocol and its availability, it was chosen as the most appropriate medium for the integrated transmission of voice and data over a single network. The work begins with a brief explanation of voice, its digitization, and the current form of its transmission over a...
Coordination of Regenerative Relays and Direct Users in Wireless Cellular Networks
Thai, Chan; Popovski, Petar
The area of wireless cooperation/relaying has recently been significantly enriched by the ideas of wireless network coding (NC), which bring substantial gains in spectral efficiency. These gains have mainly been demonstrated in scenarios with two-way relaying. Inspired by the ideas of wireless NC..., recently we have proposed techniques for coordinated direct/relay (CDR) transmissions. These techniques embrace the interference among the communication flows to/from direct and relayed users, leveraging on the fact that the interference can be subsequently canceled. Hence, by allowing simultaneous... transmissions, spectral efficiency is increased. In our prior work, we have considered CDR with a non-regenerative relay that uses Amplify-and-Forward (AF). In this paper we consider the case of a regenerative Decode-and-Forward (DF) relay. This refers also to joint decoding of the interfering flows received over...
A New Codebook Design for Hybrid Beamforming Systems using ...
Mar 5, 2018 ... wireless personal area networks (WPANs) operating at the ... antenna element, is not practical in terms of the cost and power consumption [4]. To overcome ... hybrid beamformers for data transmission were designed under.
Non-invasive beamforming add-on module
Bader, Ahmed; Alouini, Mohamed-Slim
An embodiment of a non-invasive beamforming add-on apparatus couples to an existing antenna port and rectifies the beam azimuth in the upstream and downstream directions. The apparatus comprises input circuitry that is configured to receive one
Optimal beamforming in MIMO systems with HPA nonlinearity
Qi, Jian
In this paper, multiple-input multiple-output (MIMO) transmit beamforming (TB) systems under the consideration of nonlinear high-power amplifiers (HPAs) are investigated. The optimal beamforming scheme, with the optimal beamforming weight vector and combining vector, is proposed for MIMO systems with HPA nonlinearity. The performance of the proposed MIMO beamforming scheme in the presence of HPA nonlinearity is evaluated in terms of average symbol error probability (SEP), outage probability and system capacity, considering transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects of several system parameters, namely, parameters of nonlinear HPA, numbers of transmit and receive antennas, and modulation order of phase-shift keying (PSK), on performance. ©2010 IEEE.
Qi, Jian; Aissa, Sonia
Rao, Anlei
Relay precoding for multiple-input multiple-output (MIMO) relay networks has been approached either by optimizing the efficiency performance under given power consumption constraints or by minimizing the power consumption under quality-of-service (QoS) requirements. For the latter type of design, previous works minimized an approximation of the power consumption. In this paper, the exact power consumption of all relays is derived in quadratic form by diagonalizing the minimum-square-error (MSE) matrix, and the relay precoding matrix is designed by optimizing this quadratic form with the help of semidefinite programming (SDP) relaxation. Our simulation results show that such a design can achieve a gain of around 3 dB over the previous design, which optimized the approximated power consumption. © 2015 IEEE.
Rabie, Khaled M.
Energy harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic, however, focuses on Rayleigh fading channels, which represent outdoor environments. In contrast, this paper is dedicated to analyzing the performance of dual-hop relaying systems with EH over indoor channels characterized by log-normal fading. Both half-duplex (HD) and full-duplex (FD) relaying mechanisms are studied in this work with decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols. In addition, three EH schemes are investigated, namely, time-switching relaying, power-splitting relaying, and an ideal relaying receiver which serves as a lower bound. The system performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions. Monte Carlo simulations are provided throughout to validate the accuracy of our analysis. Results reveal that, in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying, which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can generally outperform HD relaying schemes as long as the loop-back interference in FD is relatively small. Furthermore, increasing the variance of the log-normal channel has been shown to deteriorate the performance in all the relaying and EH protocols considered.
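As a concrete illustration of the time-switching relaying (TSR) protocol analyzed in this entry, the sketch below Monte-Carlo-estimates the outage probability of a half-duplex DF relay over log-normal links. The protocol structure (harvest for a fraction alpha of the block, then split the remainder equally between the two hops) follows the usual TSR formulation; every numeric parameter is an assumption of this sketch, not a value from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def outage_tsr_df(alpha, Ps=1.0, eta=0.7, sigma_dB=3.0, N0=1e-2,
                      R_target=1.0, trials=200_000):
        # Log-normal power gains with unit median and sigma_dB spread
        g1 = 10 ** (sigma_dB * rng.standard_normal(trials) / 10)
        g2 = 10 ** (sigma_dB * rng.standard_normal(trials) / 10)
        # Relay power harvested during the alpha fraction, spent in (1-alpha)/2
        Pr = 2 * eta * alpha * Ps * g1 / (1 - alpha)
        snr1 = Ps * g1 / N0                      # source -> relay SNR
        snr2 = Pr * g2 / N0                      # relay -> destination SNR
        rate = (1 - alpha) / 2 * np.log2(1 + np.minimum(snr1, snr2))
        return np.mean(rate < R_target)

    for a in (0.1, 0.3, 0.5):
        print(f"alpha={a}: outage ~ {outage_tsr_df(a):.3f}")

Sweeping alpha exposes the trade-off the paper analyzes analytically: too little harvesting starves the relay, too much eats into the transmission time.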
Clinical evaluation of synthetic aperture sequential beamforming
Hansen, Peter Møller; Hemmsen, Martin Christian; Lange, Theis
In this study, clinically relevant ultrasound images generated with synthetic aperture sequential beamforming (SASB) are compared to images generated with a conventional technique. The advantage of SASB is the ability to produce high-resolution ultrasound images at a high frame rate while at the same... time massively reducing the amount of generated data. SASB was implemented in a system consisting of a conventional ultrasound scanner connected to a PC via a research interface. This setup enables simultaneous recording with both SASB and the conventional technique. Eighteen volunteers were ultrasound... scanned abdominally, and 84 sequence pairs were recorded. Each sequence pair consists of two simultaneous recordings of the same anatomical location with SASB and conventional B-mode imaging. The images were evaluated in terms of spatial resolution, contrast, unwanted artifacts, and penetration depth...
ESPRIT with multiple-angle subarray beamforming
Xu, Wen; Jiang, Ying; Zhang, Huiquan
This article presents a new approach to implementing signal direction-of-arrival estimation, in which subarray beamforming is applied prior to estimation of signal parameters via rotational invariance techniques (ESPRIT). Different from previous approaches, the beam-domain data from multiple adjacent pointing angles are combined in a way that maintains the displacement invariance structure required by ESPRIT. The intent is to obtain sub-beamwidth resolution for a conventional multi-beam system that already has small beamwidths. Computer simulations show that, for typical multi-beam system applications, the new approach provides improved estimation mean-square errors over the original ESPRIT, on top of reduced requirements for signal-to-noise ratio, number of snapshots, and computational time.
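The beam-domain variant proposed in this article is not reproduced here, but the baseline it extends, element-space ESPRIT on a half-wavelength uniform linear array, fits in a few lines. The scenario below (array size, angles, SNR) is invented for illustration.

    import numpy as np

    def esprit_doa(X, n_sources):
        # Element-space ESPRIT for a half-wavelength ULA.
        # X: (n_sensors, n_snapshots) complex data matrix.
        R = X @ X.conj().T / X.shape[1]
        _, vecs = np.linalg.eigh(R)             # eigenvalues ascending
        Es = vecs[:, -n_sources:]               # signal subspace
        Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]  # rotation between subarrays
        phases = np.angle(np.linalg.eigvals(Psi))
        return np.degrees(np.arcsin(phases / np.pi))

    rng = np.random.default_rng(2)
    N, snaps = 10, 500
    angles = [-20, 15]
    A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(np.radians(angles))))
    S = (rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))) / np.sqrt(2)
    X = A @ S + 0.1 * (rng.standard_normal((N, snaps)) + 1j * rng.standard_normal((N, snaps)))
    print(np.sort(esprit_doa(X, 2)))            # expect approximately [-20, 15]

The article's contribution is, in effect, to feed ESPRIT with beam-domain data from adjacent pointing angles instead of the raw element outputs used above, while preserving the shift-invariance that the Psi estimate relies on.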
Loudness estimation of simultaneous sources using beamforming
Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli
An algorithm is proposed for estimating the loudness of several simultaneous sound sources by means of microphone-array beamforming. The algorithm is derived from two listening experiments in which the loudness of two simultaneous sounds (narrow-band noises with 1-kHz and 3.15-kHz center frequencies) was matched to a single sound (2-kHz narrow-band noise). The simultaneous sounds were presented from either one sound source or two spatially separated sources, whereas the single sound was presented from the frontal direction. The results indicate that overall loudness can be calculated... by summing the loudnesses of the individual sources according to a simple psychophysical relationship.
Compact beamforming in medical ultrasound scanners
Tomov, Borislav Gueorguiev
...for high-quality imaging is large, and compressing it leads to better compactness of the beamformers. The existing methods for compressing and recursive generation of focusing data, along with original work in the area, are presented in Chapter 4. The principles and the performance limitations... quality is comparable to that of the very good scanners currently on the market. The performance results have been achieved with the use of a simple oversampled converter of second order. The use of a higher-order oversampled converter will allow a higher pulse frequency to be used while the high dynamic... channels, and even more channels are necessary for 3-dimensional (3D) diagnostic imaging. On the other hand, there is a demand for inexpensive portable devices for use outside hospitals, in field conditions, where power consumption and compactness are important factors. The thesis starts...
Robust Frequency Invariant Beamforming with Low Sidelobe for Speech Enhancement
Zhu, Yiting; Pan, Xiang
Frequency-invariant beamformers (FIBs) are widely used in speech enhancement and source localization. There are two traditional optimization methods for FIB design. The first is convex optimization, which is simple, but the frequency-invariant characteristic of the resulting beam pattern is poor over a frequency band of five octaves. The least-squares (LS) approach using a spatial response variation (SRV) constraint is another optimization method. Although it provides a good frequency-invariant property, it usually cannot be used in speech enhancement because it lacks a weight-norm constraint, which is related to the robustness of a beamformer. In this paper, a robust wideband beamforming method with a constant beamwidth is proposed. The frequency-invariant beam pattern is achieved by solving an optimization problem with the SRV constraint covering the speech frequency band. With control of the sidelobe level, the frequency-invariant beamformer (FIB) can prevent distortion from interference arriving from undesirable directions. The approach is implemented in the time domain by placing tapped delay lines (TDL) and finite impulse response (FIR) filters at the output of each sensor, which is more convenient than the Frost processor. By invoking the weight-norm constraint, the robustness of the beamformer is further improved against random errors. Experimental results show that the proposed method has a constant beamwidth and almost the same white noise gain as the traditional delay-and-sum (DAS) beamformer.
Comparative analysis of the operation efficiency of the continuous and relay control systems of a multi-axle wheeled vehicle suspension
Zhileykin, M. M.; Kotiev, G. O.; Nagatsev, M. V.
In order to improve the efficiency of multi-axle wheeled vehicles (MWV), automotive engineers are increasing their cruising speed. One of the promising ways to improve the ride comfort of an MWV is the development of dynamic active suspension systems and control laws for such systems. Here, by dynamic control systems we mean systems operating in real-time mode and using current (instantaneous) values of the state variables. The aim of the work is to develop MWV suspension optimal control laws that reduce vibrations on the driver's seat under kinematic excitation. The authors have developed optimal control laws for damping the oscillations of the MWV body. The developed laws allow a reduction of the vibrations on the driver's seat and an increase in the maximum speed of the vehicle. The laws are characterized in that they allow generating the control inputs in real-time mode. The authors have demonstrated the efficiency of the proposed control laws by means of mathematical simulation of the MWV driving over an unpaved road with kinematic excitation. The proposed optimal control laws can be used in MWV suspension control systems with magnetorheological shock absorbers or controlled hydropneumatic springs. A further evolution of this line of research could be the development of energy-efficient MWV suspension control systems with continuous control input on the vehicle body.
Control circuit for transformer relay
Wyatt, G.A.
A control circuit for a transformer relay which will automatically, momentarily control the transformer relay to a selected state upon energization of the control circuit. The control circuit has an energy storage element and a current director coupled in series and adapted to be coupled with the secondary winding of the transformer relay. A device for discharge is coupled across the energy storage element. The energy storage element and current director will momentarily allow a unidirectional flow of current in the secondary winding of the transformer relay upon application of energy to the control circuit. When energy is not applied to the control circuit, the device for discharge will allow the energy storage element to discharge and be available for another operation of the control circuit.
Information-guided transmission in decode-and-forward relaying systems: Spatial exploitation and throughput enhancement
In addressing the issue of achieving high throughput in half-duplex relay channels, we exploit a concept of information-guided transmission for a network consisting of a source node, a destination node, and multiple half-duplex relay nodes. To benefit further from multiple relay nodes, the relay-selection patterns are defined as arbitrary combinations of the given relay nodes. By exploiting the difference among the spatial channel states, in each relay-help transmission additional information to be forwarded is mapped onto the index of the active relay-selection pattern, besides the basic information mapped onto the traditional constellation, which is forwarded by the relay node(s) in the active relay-selection pattern, so as to enhance the relay throughput. With iterative decoding, the destination node can achieve robust detection by decoupling the signals forwarded in different ways. We investigate the proposed scheme considering the "decode-and-forward" protocol and establish its achievable transmission rate. The analytical results on capacity behaviors prove the efficiency of the proposed scheme by showing that it achieves better capacity performance than the conventional scheme. © 2011 IEEE.
The 2009 Relay Race
The 2009 CERN Relay Race was as popular as ever, with a record number of 88 teams competing. Even the rain didn't dampen the spirits, and it still managed to capture the 'festival feeling' with live music, beer and stalls from various CERN clubs set up outside Restaurant 1. The Powercuts on the podium after win...
A time relay
Yosimura, K.; Sudzuki, Y.
The synchronous micromotor of the time relay drives, by means of a two-stage cylindrical gear drive, the gear wheel and the shaft of an actuating mechanism. The shaped drum of a cam mechanism, equipped with a vertical groove, which interacts in its upper part with a lever driving the first commutating subassembly and in its lower part with a bent sector of a spring-and-plate movable contact of the second commutating subassembly, is attached to the lower end of the mechanism's shaft (V). The L-shaped lever of the second commutating subassembly's drive rests on a vertical rocking axle located parallel to the shaft. Both pairs of spring-and-plate contacts are bracketed in two dielectric brackets which provide for a plane-parallel disposition of the cited contacts. The operational time setting for the unit is a function of the initial angular position of the shaft, which is provided for by a handle attached to its upper end.
Modified Dynamic Decode-and-Forward Relaying Protocol for Type II Relay in LTE-Advanced and Beyond
Nam, Sung Sik
In this paper, we propose a modified dynamic decode-and-forward (MoDDF) relaying protocol to meet the critical requirements for user equipment (UE) relays in next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed MoDDF realizes the fast jump-in relaying and the sequential decoding with an application of random codeset to encoding and re-encoding process at the source and the multiple UE relays, respectively. A subframe-by-subframe decoding based on the accumulated (or buffered) messages is employed to achieve energy, information, or mixed combining. Finally, possible early termination of decoding at the end user can lead to the higher spectral efficiency and more energy saving by reducing the frequency of redundant subframe transmission and decoding. These attractive features eliminate the need of directly exchanging control messages between multiple UE relays and the end user, which is an important prerequisite for the practical UE relay deployment. Copyright: © 2016 Nam et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Performance analysis of selective cooperation with fixed gain relays in Nakagami-m channels
Hussain, Syed Imtiaz; Hasna, Mazen Omar; Alouini, Mohamed-Slim
Selecting the best relay, using the maximum signal-to-noise ratio (SNR) among all the relays ready to cooperate, saves system resources and utilizes the available bandwidth more efficiently compared to regular all-relay cooperation. In this paper, we analyze the performance of the best-relay selection scheme with fixed-gain relays operating in Nakagami-m channels. We first derive the probability density function (PDF) of the upper-bounded end-to-end SNR of the relay link. Using this PDF, we derive some key performance parameters for the system, including the average bit error probability and the average channel capacity. The analytical results are verified through Monte Carlo simulations. © 2012 Elsevier B.V.
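A quick way to sanity-check analyses like this one is direct simulation. The sketch below estimates the outage probability of max-SNR relay selection over Nakagami-m links, using the common min(SNR1, SNR2) upper bound on the dual-hop SNR rather than the paper's exact fixed-gain expression; all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    def best_relay_outage(n_relays=4, m=2.0, snr_avg=10.0, gamma_th=5.0,
                          trials=100_000):
        # Nakagami-m fading -> instantaneous SNR is Gamma(shape=m, scale=snr_avg/m)
        g1 = rng.gamma(m, snr_avg / m, (trials, n_relays))  # S -> R_k hops
        g2 = rng.gamma(m, snr_avg / m, (trials, n_relays))  # R_k -> D hops
        e2e = np.minimum(g1, g2)      # standard upper bound on the dual-hop SNR
        best = e2e.max(axis=1)        # max-SNR relay selection
        return np.mean(best < gamma_th)

    for K in (1, 2, 4):
        print(f"K={K} relays: outage ~ {best_relay_outage(n_relays=K):.4f}")

The diversity effect the paper quantifies analytically shows up here as the outage dropping steeply as the number of candidate relays grows.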
Near optimal power allocation algorithm for OFDM-based cognitive using adaptive relaying strategy
Soury, Hamza; Bader, Faouzi; Shaat, Musbah M R; Alouini, Mohamed-Slim
Relayed transmission increases the coverage and achievable capacity of communication systems. Adaptive relaying is a relaying technique by which the benefits of the amplify-and-forward or decode-and-forward techniques can be obtained by switching the forwarding technique according to the quality of the signal. A cognitive Orthogonal Frequency-Division Multiplexing (OFDM) based adaptive relaying protocol is considered in this paper. The objective is to maximize the capacity of the cognitive radio system while ensuring that the interference introduced to the primary user remains below the tolerated limit. A near-optimal power allocation at the source and the relay is presented for two pairing techniques, namely matched and random pairing. The simulation results confirm the efficiency of the proposed adaptive relaying protocol and the impact of the choice of pairing technique. © 2012 ICST.
Throughput maximization for buffer-aided hybrid half-/full-duplex relaying with self-interference
In this work, we consider a two-hop cooperative setting where a source communicates with a destination through an intermediate relay node with a buffer. Unlike the existing body of work on buffer-aided half-duplex relaying, we consider a hybrid half-/full-duplex relaying scenario with loopback interference in the full-duplex mode. Depending on the channel outage and buffer states that are assumed available at the transmitters, the source and relay may either transmit simultaneously or revert to orthogonal transmission. Specifically, a joint source/relay scheduling and relaying mode selection mechanism is proposed to maximize the end-to-end throughput. The throughput maximization problem is converted to a linear program where the exact global optimal solution is efficiently obtained via standard convex/linear numerical optimization tools. Finally, the theoretical findings are corroborated with event-based simulations to provide the necessary performance validation.
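To make the "converted to a linear program" step of this entry concrete, here is a toy version: choose time-sharing fractions between half-duplex and full-duplex modes to maximize end-to-end throughput under flow conservation at the relay. The mode set and all rate numbers below are hypothetical stand-ins, not the paper's model, and SciPy is assumed to be available.

    import numpy as np
    from scipy.optimize import linprog

    # Three modes: HD source->relay only, HD relay->destination only, FD both.
    # The FD source->relay rate is degraded by loopback self-interference.
    r_sr_hd, r_rd_hd = 4.0, 3.0     # illustrative rates, bits/s/Hz
    r_sr_fd, r_rd_fd = 2.2, 3.0

    # Variables x = [t_hd_sr, t_hd_rd, t_fd, T]; maximize throughput T.
    c = [0, 0, 0, -1.0]
    A_ub = [[-r_sr_hd, 0, -r_sr_fd, 1],     # T <= bits arriving at the relay
            [0, -r_rd_hd, -r_rd_fd, 1]]     # T <= bits leaving the relay
    b_ub = [0, 0]
    A_eq = [[1, 1, 1, 0]]                   # time fractions sum to one
    b_eq = [1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * 3 + [(0, None)])
    print("throughput:", round(res.x[3], 3), "time shares:", np.round(res.x[:3], 3))

With these numbers the optimum mixes FD operation with a little extra HD source transmission (throughput 2.5 versus 2.2 for pure FD), which mirrors the paper's point that hybrid half-/full-duplex scheduling beats either mode alone.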
Optimal beamforming in ultrasound using the ideal observer.
Abbey, Craig K; Nguyen, Nghia Q; Insana, Michael F
Beamforming of received pulse-echo data generally involves the compression of signals from multiple channels within an aperture. This compression is irreversible, and therefore allows the possibility that information relevant for performing a diagnostic task is irretrievably lost. The purpose of this study was to evaluate information transfer in beamforming using a previously developed ideal observer model to quantify diagnostic information relevant to performing a task. We describe an elaborated statistical model of image formation for fixed-focus transmission and single-channel reception within a moving aperture, and we use this model on a panel of tasks related to breast sonography to evaluate receive-beamforming approaches that optimize the transfer of information. Under the assumption that acquisition noise is well described as an additive wide-band Gaussian white-noise process, we show that signal compression across receive-aperture channels after a 2-D matched-filtering operation results in no loss of diagnostic information. Across tasks, the matched-filter beamformer yields twice as much information in the subsequent radio-frequency signal as standard delay-and-sum beamforming. We also show that for this matched filter, 68% of the information gain can be attributed to the phase of the matched filter and 21% to the amplitude. A 1-D matched filtering along axial lines shows no advantage over delay-and-sum, suggesting an important role for incorporating correlations across different aperture windows in beamforming. We also show that post-compression processing before the computation of an envelope is necessary to pass the diagnostic information in the beamformed radio-frequency signal to the final envelope image.
Outage performance of two-way DF relaying systems with a new relay selection metric
Hyadi, Amal; Benjillali, Mustapha; Alouini, Mohamed-Slim
This paper investigates a new constrained relay selection scheme for two-way relaying systems where two end terminals communicate simultaneously via a relay. The introduced technique is based on the maximization of the weighted sum rate of both
A genetic algorithm for multiple relay selection in two-way relaying cognitive radio networks
In this paper, we investigate a multiple relay selection scheme for two-way relaying cognitive radio networks where primary users and secondary users operate on the same frequency band. More specifically, cooperative relays using Amplify-and-Forward
Decode and Zero-Forcing Forward Relaying with Relay Selection in Cognitive Radio Systems
In this paper, we investigate a cognitive radio (CR) relay network with multiple relay nodes that help forwarding the signal of CR users. Best relay selection is considered to take advantage of its low complexity of implementation. When the primary
In this work, the problem of relay selection and resource power allocation in one-way and two-way cognitive relay networks using half-duplex channels with different relaying protocols is investigated. Optimization problems for both single
PPM-based relay communication schemes for wireless body area networks
Zhang, P.; Willems, F.M.J.; Huang, Li
This paper investigates cooperative communication schemes based on a single relay with pulse-position modulation (PPM) signaling, for enhancing energy efficiency of wireless body area networks (WBANs) in noncoherent channel settings. We explore cooperation between the source and the relay such that
Relay-aided multi-cell broadcasting with random network coding
Lu, Lu; Sun, Fan; Xiao, Ming
We investigate a relay-aided multi-cell broadcasting system using random network codes, where the focus is on devising efficient scheduling algorithms between relay and base stations. Two scheduling algorithms are proposed based on different feedback strategies; namely, a one-step scheduling...
Performance analysis of two-way amplify and forward relaying with adaptive modulation
the best relay selection scheme in this work. Based on the proposed selection criterion for the best relay, we analyze the average spectral efficiency by its approximated upper bound. In addition, we extend the proposed scheme to the case where a direct
Smart Antenna UKM Testbed for Digital Beamforming System
A new design of a smart antenna testbed developed at UKM for digital beamforming purposes is proposed. The smart antenna UKM testbed is developed on a modular basis, employing two novel designs: an L-probe fed inverted hybrid E-H (LIEH) array antenna and a software-reconfigurable digital beamforming system (DBS). The antenna is developed using the novel LIEH microstrip patch element arranged into a 4×1 uniform linear array antenna. The modular concept of the system provides the capability to test the antenna hardware, beamforming unit, and beamforming algorithm in an independent manner, thus allowing the smart antenna system to be developed and tested in parallel, hence reducing the design time. The DBS was developed using a high-performance TMS320C6711™ floating-point DSP board and a 4-channel RF front-end receiver developed in-house. An interface board is designed to interface the ADC board with the RF front-end receiver. A four-element receiving array testbed at 1.88–2.22 GHz is constructed, and digital beamforming on this testbed is successfully demonstrated.
A Delta-Sigma beamformer with integrated apodization
Tomov, Borislav Gueorguiev; Stuart, Matthias Bo; Hemmsen, Martin Christian
This paper presents a new design of a discrete-time Delta-Sigma (ΔΣ) oversampled ultrasound beamformer which integrates individual channel apodization by means of variable feedback voltage in the Delta-Sigma analog-to-digital (A/D) converters. The output bit-width of each oversampled A/D converter... remains the same as in an unmodified one. The outputs of all receiving channels are delayed and summed, and the resulting multi-bit sample stream is filtered and decimated to become an image line. The simplicity of this beamformer allows the production of high-channel-count or very compact beamformers... The data are acquired using 12-bit flash A/D converters at a sampling rate of 70 MHz, and are then upsampled off-line to 560 MHz for input to the simulated ΔΣ beamformer. The latter generates a B-mode image which is compared to that produced by a digital beamformer that uses 10-bit A/D converters...
Relay discovery and selection for large-scale P2P streaming.
Chengwei Zhang
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly, with trade-offs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location, and that methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology, using three publicly available RTT data sets. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates, and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using a Distributed Hash Table (DHT). When the DHT is constructed, the node keys carry the location information and are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently by utilizing DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn, and message costs.
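The "error amplification" of best-out-of-K selection noted in this entry is easy to reproduce with a toy model: pick the relay whose estimated RTT is lowest and measure how much worse it is than the true best. The RTT distribution and noise level below are invented, and this ignores the paper's real data sets.

    import numpy as np

    rng = np.random.default_rng(4)

    def selection_penalty(K=8, est_noise_ms=20.0, trials=50_000):
        # Average extra latency (ms) of choosing by noisy *estimated* RTT
        # instead of the true best, out of K candidates.
        true_rtt = rng.uniform(10, 200, (trials, K))
        est_rtt = true_rtt + est_noise_ms * rng.standard_normal((trials, K))
        chosen = true_rtt[np.arange(trials), est_rtt.argmin(axis=1)]
        best = true_rtt.min(axis=1)
        return np.mean(chosen - best)

    for K in (2, 8, 32):
        print(f"K={K}: mean penalty {selection_penalty(K):.1f} ms")

This toy model already shows why the paper refines coarse (indirect) estimates with direct probing in its second phase: with a fixed estimation error, picking the minimum of many noisy values systematically favors candidates whose RTT was underestimated.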
Improving the efficiency of deconvolution algorithms for sound source localization
Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.
of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...
Beam-Forming Concentrating Solar Thermal Array Power Systems
Cwik, Thomas A. (Inventor); Dimotakis, Paul E. (Inventor); Hoppe, Daniel J. (Inventor)
The present invention relates to concentrating solar-power systems and, more particularly, beam-forming concentrating solar thermal array power systems. A solar thermal array power system is provided, including a plurality of solar concentrators arranged in pods. Each solar concentrator includes a solar collector, one or more beam-forming elements, and one or more beam-steering elements. The solar collector is dimensioned to collect and divert incoming rays of sunlight. The beam-forming elements intercept the diverted rays of sunlight, and are shaped to concentrate the rays of sunlight into a beam. The steering elements are shaped, dimensioned, positioned, and/or oriented to deflect the beam toward a beam output path. The beams from the concentrators are converted to heat at a receiver, and the heat may be temporarily stored or directly used to generate electricity.
Beamforming under Quantization Errors in Wireless Binaural Hearing Aids
Srinivasan Sriram
Improving the intelligibility of speech in different environments is one of the main objectives of hearing aid signal processing algorithms. Hearing aids typically employ beamforming techniques using multiple microphones for this task. In this paper, we discuss a binaural beamforming scheme that uses signals from the hearing aids worn on both the left and right ears. Specifically, we analyze the effect of a low-bit-rate wireless communication link between the left and right hearing aids on the performance of the beamformer. The scheme is comprised of a generalized sidelobe canceller (GSC) that has two inputs: observations from one ear, and quantized observations from the other ear, and whose output is an estimate of the desired signal. We analyze the performance of this scheme in the presence of a localized interferer as a function of the communication bit rate, using the resultant mean-squared error as the signal distortion measure.
Kees Janse
Beamforming design with proactive interference cancelation in MISO interference channels
Li, Yang; Tian, Yafei; Yang, Chenyang
In this paper, we design coordinated beamforming at base stations (BSs) to facilitate interference cancelation at users in interference networks, where each BS is equipped with multiple antennas and each user is with a single antenna. By assuming that each user can select the best decoding strategy to mitigate the interference, either canceling the interference after decoding when it is strong or treating it as noise when it is weak, we optimize the beamforming vectors that maximize the sum rate for the networks under different interference scenarios and find the solutions of beamforming with closed-form expressions. The inherent design principles are then analyzed, and the performance gain over passive interference cancelation is demonstrated through simulations in heterogeneous cellular networks.
Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging
...a 7 MHz, 128-element phased-array transducer with lambda/2 spacing was used. Data are obtained using a single element as the transmitting aperture and all 128 elements as the receiving aperture. A full SA sequence consisting of 128 emissions was simulated by gliding the active transmitting element... weights for each frequency sub-band. As opposed to the conventional Delay-and-Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations... across the array. Data for 13 point targets and a circular cyst with a radius of 5 mm were simulated. The performance of the MV beamformer is compared to DS using boxcar weights and Hanning weights, and is quantified by the Full Width at Half Maximum (FWHM) and the peak side-lobe level (PSL). Single...
Full-Duplex opportunistic relay selection in future spectrum-sharing networks
Khafagy, Mohammad Galal; Alouini, Mohamed-Slim; Aïssa, Sonia
We propose and analyze the performance of full-duplex relay selection in primary/secondary spectrum-sharing networks. Contrary to half-duplex relaying, full-duplex relaying (FDR) enables simultaneous listening/forwarding at the secondary relay, thereby allowing for a higher spectral efficiency. However, since the source and relay simultaneously transmit in FDR, their superimposed signal at the primary receiver should now satisfy the existing interference constraint which can considerably limit the secondary network throughput. In this regard, relay selection can offer an adequate solution to boost the secondary throughput while satisfying the imposed interference limit. We first analyze the performance of opportunistic relay selection among a cluster of full-duplex decode-and-forward relays with self-interference by deriving the exact cumulative distribution function of its end-to-end signal-to-noise ratio. Second, we evaluate the end-to-end performance of relay selection with interference constraints due to the presence of a primary receiver. Finally, the presented exact theoretical findings are verified by numerical simulations.
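Setting aside the primary-receiver interference constraint, the core trade-off in full-duplex relay selection, residual loopback self-interference at the relay versus continuous transmission, can be illustrated with a short Monte Carlo over Rayleigh links. All parameters below are invented for the sketch and do not come from the paper.

    import numpy as np

    rng = np.random.default_rng(5)

    def fdr_selection_outage(K=3, snr=10.0, si_level=0.1, gamma_th=1.0,
                             trials=100_000):
        # Rayleigh fading on all hops; si_level scales the average
        # residual loopback self-interference power at each relay.
        g_sr = snr * rng.exponential(1.0, (trials, K))
        g_rd = snr * rng.exponential(1.0, (trials, K))
        g_rr = si_level * snr * rng.exponential(1.0, (trials, K))
        sinr_relay = g_sr / (1.0 + g_rr)      # relay input SINR under FD
        e2e = np.minimum(sinr_relay, g_rd)    # DF end-to-end bottleneck
        return np.mean(e2e.max(axis=1) < gamma_th)   # opportunistic selection

    for si in (0.0, 0.1, 1.0):
        print(f"SI level {si}: outage ~ {fdr_selection_outage(si_level=si):.4f}")

Increasing si_level degrades the selected relay's SINR floor, which is the qualitative effect the paper captures exactly through the CDF of the end-to-end SNR.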
Rate regions for coordination of Decode-and-Forward relays and direct users
Recently, the ideas of wireless network coding (NC) have significantly enriched the area of wireless cooperation/relaying. They bring substantial gains in spectral efficiency, mainly in scenarios with two-way relaying. Inspired by the ideas of wireless NC, recently we have proposed techniques... for coordinated direct/relay (CDR) transmissions. Leveraging on the fact that the interference can be subsequently canceled, these techniques embrace the interference among the communication flows to/from direct and relayed users. Hence, by allowing simultaneous transmissions, spectral efficiency is increased... In our prior work, we have proposed CDR with a Decode-and-Forward (DF) relay in two scenarios. In this paper, we extend the two existing regenerative CDR schemes and propose schemes for the other two scenarios such that all schemes benefit from the aforementioned principle of containing the interference...
In this paper, we study two-way amplify-and-forward relaying in conjunction with adaptive modulation over a multiple relay network. In order to keep the diversity order equal to the number of relays and maintain a low complexity, we consider the best relay selection scheme in this work. Based on the proposed selection criterion for the best relay, we analyze the average spectral efficiency by its approximated upper bound. In addition, we extend the proposed scheme to the case where a direct path between source and destination exists. Our numerical examples show that the proposed system offers a considerable gain in the spectral efficiency while satisfying the error rates requirements. ©2009 IEEE.
Performance limitations of relay neurons.
Rahul Agarwal
Relay cells are prevalent throughout sensory systems and receive two types of inputs: driving and modulating. The driving input contains receptive field properties that must be transmitted while the modulating input alters the specifics of transmission. For example, the visual thalamus contains relay neurons that receive driving inputs from the retina that encode a visual image, and modulating inputs from the reticular activating system and layer 6 of visual cortex that control what aspects of the image will be relayed back to visual cortex for perception. What gets relayed depends on several factors such as attentional demands and a subject's goals. In this paper, we analyze a biophysically based model of a relay cell and use systems theoretic tools to construct analytic bounds on how well the cell transmits a driving input as a function of the neuron's electrophysiological properties, the modulating input, and the driving signal parameters. We assume that the modulating input belongs to a class of sinusoidal signals and that the driving input is an irregular train of pulses with inter-pulse intervals obeying an exponential distribution. Our analysis applies to any n-th order model as long as the neuron does not spike without a driving input pulse and exhibits a refractory period. Our bounds on relay reliability contain performance obtained through simulation of a second and third order model, and suggest, for instance, that if the frequency of the modulating input increases or the DC offset decreases, then relay increases. Our analysis also shows, for the first time, how the biophysical properties of the neuron (e.g., ion channel dynamics) define the oscillatory patterns needed in the modulating input for appropriately timed relay of sensory information. In our discussion, we describe how our bounds predict experimentally observed neural activity in the basal ganglia in (i) health, (ii) Parkinson's disease (PD), and (iii) PD during...
Performance analysis of distributed beamforming in a spectrum sharing system
Yang, Liang; Alouini, Mohamed-Slim; Qaraqe, Khalid A.
In this paper, we consider a distributed beamforming scheme (DBF) in a spectrum sharing system where multiple secondary users share the spectrum with the licensed primary users under an interference temperature constraint. We assume that DBF is applied at the secondary users. We first consider optimal beamforming and compare it with the user selection scheme in terms of the outage probability and bit-error rate performance. Since perfect feedback is difficult to obtain, we then investigate a limited feedback DBF scheme and develop an outage probability analysis for a random vector quantization (RVQ) design algorithm. Numerical results are provided to illustrate our mathematical formalism and verify our analysis. © 2012 IEEE.
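The random vector quantization (RVQ) feedback analyzed in this entry can be illustrated in a few lines: draw a random unit-norm codebook, feed back the index maximizing |h^H w|^2, and measure the gain relative to perfect-CSI maximum ratio transmission. This toy version deliberately omits the distributed and spectrum-sharing aspects of the paper; the antenna counts and bit budgets are arbitrary.

    import numpy as np

    rng = np.random.default_rng(6)

    def rvq_fraction_of_mrt(n_tx=4, feedback_bits=4, trials=20_000):
        # Average fraction of the perfect-CSI beamforming gain retained
        # when the transmitter only knows the best RVQ codeword index.
        n_codes = 2 ** feedback_bits
        frac = np.empty(trials)
        for t in range(trials):
            h = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
            C = rng.standard_normal((n_codes, n_tx)) + 1j * rng.standard_normal((n_codes, n_tx))
            C /= np.linalg.norm(C, axis=1, keepdims=True)   # unit-norm codewords
            gains = np.abs(C @ h.conj()) ** 2               # |h^H w_k|^2 for all k
            frac[t] = gains.max() / np.linalg.norm(h) ** 2  # vs. MRT gain |h|^2
        return frac.mean()

    for b in (2, 4, 6):
        print(f"{b} feedback bits: {rvq_fraction_of_mrt(feedback_bits=b):.3f} of MRT gain")

As the paper's outage analysis formalizes, the quantization loss shrinks as feedback bits grow, with diminishing returns once the codebook resolves the channel direction well.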
Silver nanoparticle catalysed redox reaction: An electron relay effect
Mallick, Kaushik; Witcomb, Mike; Scurrell, Mike
A silver cluster shows efficient catalytic activity in a redox reaction because the cluster acts as the electron relay centre, behaving alternately as an acceptor and as a donor of electrons. An effective transfer of electrons is possible when the redox potential of the cluster is intermediate between those of the electron-donor and electron-acceptor systems.
Speech-to-Speech Relay Service
Consumer guide: Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...
Relay selection from an effective capacity perspective
Yang, Yuli; Ma, Hao; Aïssa, Sonia
proposed scheme in certain scenarios. Moreover, the analysis presented herein offers a convenient tool to the relaying transmission design, specifically on which relay selection scheme should be used as well as how to choose the receiving strategy between
Relay self interference minimisation using tapped filter
Jazzar, Saleh; Al-Naffouri, Tareq Y.
In this paper we introduce a self interference (SI) estimation and minimisation technique for amplify and forward relays. Relays are used to help forward signals between a transmitter and a receiver. This helps increase the signal coverage
Handover Framework for Relay Enhanced LTE Networks
Teyeb, Oumer Mohammed; Van Phan, Vinh; Raaf, Bernhard
Relaying is one of the proposed technologies for future releases of UTRAN Long Term Evolution (LTE) networks. Introducing relaying is expected to increase the coverage and capacity of LTE networks. In order to enable relaying, the architecture, protocol and radio resource management procedures... of LTE, such as handover, have to be modified. A user can be handed over not only between two base stations, but also between relays and base stations, and between two relays. With the introduction of relaying, there is a need for a new procedure to hand over a relay and all its associated users... to another base station, allowing a flexible and dynamic relay deployment. In this paper, we extend the LTE Release 8 handover mechanisms so that they can accommodate these new handover functionalities in a flexible manner.
Implementing Strategic Planning Capabilities Within the Mars Relay Operations Service
Hy, Franklin; Gladden, Roy; Allard, Dan; Wallick, Michael
Since the Mars Exploration Rovers (MER), Spirit and Opportunity, began their travels across the Martian surface in January of 2004, orbiting spacecraft such as the Mars 2001 Odyssey orbiter have relayed the majority of their collected scientific and operational data to and from Earth. From the beginning of those missions, it was evident that using orbiters to relay data to and from the surface of Mars was a vastly more efficient communications strategy in terms of power consumption and bandwidth compared to direct-to-Earth means. However, the coordination between the various spacecraft, which are largely managed independently and on differing commanding timelines, has always proven to be a challenge. Until recently, the ground operators of all these spacecraft have coordinated the movement of data through this network using a collection of ad hoc human interfaces and various, independent software tools. The Mars Relay Operations Service (MaROS) has been developed to manage the evolving needs of the Mars relay network, and specifically to standardize and integrate the relay planning and coordination data into a centralized infrastructure. This paper explores the journey of developing the MaROS system, from inception to delivery and acceptance by the Mars mission users.
Cyclic Communication and the Inseparability of MIMO Multi-way Relay Channels
Chaaban, Anas; Sezgin, Aydin
The K-user MIMO multi-way relay channel (Y-channel) consisting of K users with M antennas each and a common relay node with N antennas is studied in this paper. Each user wants to exchange messages with all the other users via the relay. A transmission strategy is proposed for this channel. The proposed strategy is based on two steps: channel diagonalization and cyclic communication. The channel diagonalization is applied by using zero-forcing beamforming. After channel diagonalization, the channel is decomposed into parallel sub-channels. Cyclic communication is then applied, where signal-space alignment for network coding is used over each sub-channel. The proposed strategy achieves the optimal DoF region of the channel if N ≤ M. To prove this, a new degrees-of-freedom outer bound is derived. As a by-product, we conclude that the MIMO Y-channel is not separable, i.e., independent coding on separate sub-channels is not enough, and one has to code jointly over several sub-channels.
Chaaban, Anas
76 FR 24442 - Structure and Practices of the Video Relay Service Program; Telecommunications Relay Services and...
... same meaning as the terms ``small business,'' ``small organization,'' and ``small governmental...] Structure and Practices of the Video Relay Service Program; Telecommunications Relay Services and Speech-to... Commission's Structure and Practices of the Video Relay Service Program; Telecommunications Relay Services...
Fractional Programming for Communication Systems—Part I: Power Control and Beamforming
Shen, Kaiming; Yu, Wei
This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques that mostly can only deal with the single-ratio or max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as fixed-point iteration and weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
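As a minimal instance of the quadratic transform this entry describes, the sketch below solves a sum-of-SINR power-control problem: fix the auxiliary variables y at their closed-form optimum, update the powers in closed form, and iterate. The channel matrix and power budget are invented, and the paper's broader objectives (weighted sum rate, energy efficiency) are not covered here.

    import numpy as np

    rng = np.random.default_rng(7)

    def qt_power_control(G, sigma=1.0, p_max=1.0, iters=50):
        # Maximize sum_i p_i*G[i,i] / (sigma + sum_{j!=i} p_j*G[i,j]) over
        # 0 <= p <= p_max via the quadratic transform (alternating updates).
        n = G.shape[0]
        d = np.diag(G)
        p = np.full(n, p_max)
        for _ in range(iters):
            interf = sigma + G @ p - d * p            # denominators B_i(p)
            y = np.sqrt(d * p) / interf               # closed-form y update
            c = (y ** 2) @ G - (y ** 2) * d           # c_j = sum_{i!=j} y_i^2 G[i,j]
            p = np.minimum(p_max, d * y ** 2 / np.maximum(c, 1e-12) ** 2)
        sinr = d * p / (sigma + G @ p - d * p)
        return p, sinr

    G = rng.exponential(1.0, (4, 4)) + np.diag([5.0] * 4)   # strong direct links
    p, sinr = qt_power_control(G)
    print("powers:", np.round(p, 3), "sum SINR:", round(sinr.sum(), 3))

Each iteration leaves the original objective non-decreasing, which is the convergence property the paper establishes for the general multiple-ratio case.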
Quantum cryptography with an ideal local relay
Spedalieri, Gaetana; Ottaviani, Carlo; Braunstein, Samuel L.
We consider two remote parties connected to a relay by two quantum channels. To generate a secret key, they transmit coherent states to the relay, where the states are subject to a continuous-variable (CV) Bell detection. We study the ideal case where Alice's channel is lossless, i.e., the relay ...
In Vivo Evaluation of Synthetic Aperture Sequential Beamforming
Hemmsen, Martin Christian; Hansen, Peter Møller; Lange, Theis
Ultrasound in vivo imaging using synthetic aperture sequential beamforming (SASB) is compared with conventional imaging in a double-blinded study using side-by-side comparisons. The objective is to evaluate if the image quality in terms of penetration depth, spatial resolution, contrast...
Adaptive Port-Starboard Beamforming of Triplet Sonar Arrays
Groen, J.; Beerens, S.P.; Been, R.; Doisy, Y.
For a low-frequency active sonar (LFAS) with a triplet receiver array, it is not clear in advance which signal processing techniques optimize its performance. Here, several advanced beamformers are analyzed theoretically, and the results are compared to experimental data obtained at sea
On the power amplifier nonlinearity in MIMO transmit beamforming systems
In this paper, single-carrier multiple-input multiple-output (MIMO) transmit beamforming (TB) systems in the presence of high-power amplifier (HPA) nonlinearity are investigated. Specifically, due to the suboptimality of the conventional maximal ratio transmission/maximal ratio combining (MRT/MRC) under HPA nonlinearity, we propose the optimal TB scheme with the optimal beamforming weight vector and combining vector, for MIMO systems with nonlinear HPAs. Moreover, an alternative suboptimal but much simpler TB scheme, namely, quantized equal gain transmission (QEGT), is proposed. The latter profits from the property that the elements of the beamforming weight vector have the same constant modulus. The performance of the proposed optimal TB scheme and the QEGT/MRC technique in the presence of HPA nonlinearity is evaluated in terms of the average symbol error probability and the mutual information with Gaussian input, considering transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects on the performance of several system parameters, namely, the HPA parameters, the numbers of antennas, the quadrature amplitude modulation order, the number of pilot symbols, and the cardinality of the beamforming weight vector codebook for QEGT. © 2012 IEEE.
Digitally assisted analog beamforming for millimeter-wave communication
Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria
The paper addresses the research question on how digital beamsteering algorithms can be combined with analog beamforming in the context of millimeter-wave communication for next generation (5G) cellular systems. Key is the use of coarse quantisation of the individual antenna signals next to the
Analog Gradient Beamformer for a Wireless Ultrasound Scanner
di Ianni, Tommaso; Hemmsen, Martin Christian; Bagge, Jan Peter
This paper presents a novel beamformer architecture for a low-cost receiver front-end, and investigates if the image quality can be maintained. The system is oriented to the development of a hand-held wireless ultrasound probe based on Synthetic Aperture Sequential Beamforming, and has the advantage of effectively reducing circuit complexity and power dissipation. The array of transducers is divided into sub-apertures, in which the signals from the single channels are aligned through a network of cascaded gradient delays and summed in the analog domain before A/D conversion. The delay values are quantized to simplify the shifting unit, and a single A/D converter is needed for each sub-aperture, yielding a compact, low-power architecture that can be integrated in a single chip. A simulation study was performed using a 3.75 MHz convex array, and the point spread function (PSF) for different...
Adaptive Beamforming Algorithms for Tow Ship Noise Canceling
Robert, M.K.; Beerens, S.P.
In towed array sonar, the directional noise originating from the tow ship, mainly machinery and hydrodynamic noise, often limits the sonar performance. When processed with classical beamforming techniques, loud tow ship noise induces high sidelobes that may hide detection of quiet targets in forward
Enhanced linear-array photoacoustic beamforming using modified coherence factor.
Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador
Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, using a DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. Coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS algebra and propose the use of a delay-multiply-and-sum beamformer instead of the DAS in the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF yields about 45% and 40% improvement over CF in terms of signal-to-noise ratio and full-width-half-maximum, respectively. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
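For reference, the coherence factor the authors modify is CF = |sum_m s_m|^2 / (N * sum_m |s_m|^2), applied as a pixel-wise weight on the delay-and-sum output. The sketch below computes a CF-weighted DAS value for one pixel on synthetic receive data; the array geometry, sampling rate, and noise-only data are assumptions for illustration, not the paper's experimental setup.

    import numpy as np

    # Delay-and-sum (DAS) with coherence-factor (CF) weighting for one pixel.
    fs, c = 40e6, 1540.0                     # sampling rate [Hz], sound speed [m/s]
    n_ch, n_s = 64, 2048
    rf = np.random.randn(n_ch, n_s) * 0.01   # synthetic per-channel RF data
    pitch = 0.3e-3
    elem_x = (np.arange(n_ch) - (n_ch - 1) / 2) * pitch

    def das_cf(rf, px, pz):
        # Receive delays from pixel (px, pz) to each array element.
        dist = np.sqrt((elem_x - px) ** 2 + pz ** 2)
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_s - 1)
        s = rf[np.arange(n_ch), idx]          # delayed (aligned) samples
        das = s.sum()
        cf = abs(das) ** 2 / (n_ch * (abs(s) ** 2).sum() + 1e-20)
        return cf * das                       # CF-weighted DAS output

    print(das_cf(rf, 0.0, 20e-3))

The MCF of the paper replaces the DAS term in the numerator with a delay-multiply-and-sum term; the weighting structure stays the same.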
Robust Adaptive LCMV Beamformer Based On An Iterative Suboptimal Solution
Xiansheng Guo
The main drawback of the closed-form solution of the linearly constrained minimum variance (CF-LCMV) beamformer is the dilemma of needing a long observation time for stable covariance matrix estimates but a short observation time to track the dynamic behavior of targets, leading to poor performance under low signal-to-noise ratio (SNR), low jammer-to-noise ratios (JNRs), and small numbers of snapshots. Additionally, CF-LCMV suffers from a heavy computational burden, which mainly comes from the two matrix inverse operations needed to compute the optimal weight vector. In this paper, we derive a low-complexity Robust Adaptive LCMV beamformer based on an Iterative Suboptimal solution (RAIS-LCMV) using the conjugate gradient (CG) optimization method. The merit of our proposed method is threefold. Firstly, the RAIS-LCMV beamformer reduces the complexity of CF-LCMV remarkably. Secondly, the RAIS-LCMV beamformer can adjust its output adaptively based on measurements, and its convergence speed is comparable. Finally, the RAIS-LCMV algorithm has robust performance against low SNR, low JNRs, and small numbers of snapshots. Simulation results demonstrate the superiority of our proposed algorithms.
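The closed-form baseline the paper starts from is w = R^-1 C (C^H R^-1 C)^-1 f, whose two matrix inversions motivate the iterative CG alternative. A minimal sketch of that closed form, with an assumed uniform linear array and a look-direction/null constraint pair (all scenario parameters are illustrative):

    import numpy as np

    n, snapshots = 16, 200
    d = 0.5                                   # element spacing in wavelengths

    def steering(theta_deg):
        k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
        return np.exp(1j * k * np.arange(n))

    A = np.column_stack([steering(0), steering(30)])   # look + jammer directions
    noise = 0.1 * (np.random.randn(n, snapshots) + 1j * np.random.randn(n, snapshots))
    X = A @ (np.random.randn(2, snapshots) + 1j * np.random.randn(2, snapshots)) + noise
    R = X @ X.conj().T / snapshots            # sample covariance

    C = A                                     # constraint matrix
    f = np.array([1.0, 0.0])                  # unit gain at 0 deg, null at 30 deg
    Ri_C = np.linalg.solve(R, C)              # R^-1 C without explicit inverse
    w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)
    print(abs(w.conj() @ steering(0)), abs(w.conj() @ steering(30)))  # ~1 and ~0

A CG-based variant replaces the linear solves with a few matrix-vector iterations, which is the complexity saving the abstract refers to.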
Improving beamforming by optimization of acoustic array microphone positions
Malgoezar, A.M.N.; Snellen, M.; Sijtsma, P.; Simons, D.G.
Assigning proper positions to microphones within arrays is essential in order to reduce or eliminate side- and grating lobes in 2D beamform images. In this paper an objective function is derived providing a measure for the presence of artificial sources. Using the global optimization method
Aliasing-free wideband beamforming using sparse signal representation
Tang, Z.; Blacquière, G.; Leus, G.
Sparse signal representation (SSR) is considered to be an appealing alternative to classical beamforming for direction-of-arrival (DOA) estimation. For wideband signals, the SSR-based approach constructs steering matrices, referred to as dictionaries in this paper, corresponding to different
Beamforming applied to surface EEG improves ripple visibility.
van Klink, Nicole; Mol, Arjen; Ferrier, Cyrille; Hillebrand, Arjan; Huiskamp, Geertjan; Zijlmans, Maeike
Surface EEG can show epileptiform ripples in people with focal epilepsy, but identification is impeded by the low signal-to-noise ratio of the electrode recordings. We used beamformer-based virtual electrodes to improve ripple identification. We analyzed ten minutes of interictal EEG of nine patients with refractory focal epilepsy. EEGs with more than 60 channels and 20 spikes were included. We computed ∼79 virtual electrodes using a scalar beamformer and marked ripples (80-250 Hz) co-occurring with spikes in physical and virtual electrodes. Ripple numbers in physical and virtual electrodes were compared, and sensitivity and specificity of ripples for the region of interest (ROI; based on clinical information) were determined. Five patients had ripples in the physical electrodes and eight in the virtual electrodes, with more ripples in virtual than in physical electrodes (101 vs. 57, p = .007). Ripples in virtual electrodes predicted the ROI better than physical electrodes (AUC 0.65 vs. 0.56, p = .03). Beamforming increased ripple visibility in surface EEG. Virtual ripples predicted the ROI better than physical ripples, although sensitivity was still poor. Beamforming can facilitate ripple identification in EEG. Ripple localization needs to be improved to enable its use for presurgical evaluation in people with epilepsy. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Incorporating Multiple Energy Relay Dyes in Liquid Dye-Sensitized Solar Cells
Yum, Jun-Ho
Panchromatic response is essential to increase the light-harvesting efficiency in solar conversion systems. Herein we show increased light harvesting from using multiple energy relay dyes inside dye-sensitized solar cells. Additional photoresponse from 400-590 nm matching the optical window of the zinc phthalocyanine sensitizer was observed due to Förster resonance energy transfer (FRET) from the two energy relay dyes to the sensitizing dye. The complementary absorption spectra of the energy relay dyes and high excitation transfer efficiencies result in a 35% increase in photovoltaic performance. © 2011 Wiley-VCH Verlag GmbH & Co. KGaA.
Panchromatic Response in Solid-State Dye-Sensitized Solar Cells Containing Phosphorescent Energy Relay Dyes
Yum, Jun-Ho; Hardin, Brian E.; Moon, Soo-Jin; Baranoff, Etienne; Nüesch, Frank; McGehee, Michael D.; Grätzel, Michael; Nazeeruddin, Mohammad K.
Running relay: Incorporating an energy relay dye (ERD) into the hole transporter of a dye-sensitized solar cell increased power-conversion efficiency by 29% by extending light harvesting into the blue region. In the operating mechanism (see picture ...
Yum, Jun-Ho; Hardin, Brian E.; Hoke, Eric T.; Baranoff, Etienne; Zakeeruddin, Shaik M.; Nazeeruddin, Mohammad K.; Torres, Tomas; McGehee, Michael D.; Grätzel, Michael
Minimax robust power split in AF relays based on uncertain long-term CSI
Nisar, Muhammad Danish; Alouini, Mohamed-Slim
An optimal power control among source and relay nodes in presence of channel state information (CSI) is vital for an efficient amplify and forward (AF) based cooperative communication system. In this work, we study the optimal power split (power
Performance analysis of underlay cognitive multihop regenerative relaying systems with multiple primary receivers
Hyadi, Amal; Benjillali, Mustapha; Alouini, Mohamed-Slim; Da Costa, Daniel Benevides Da
Multihop relaying is an efficient strategy to improve the connectivity and extend the coverage area of secondary networks in underlay cognitive systems. In this work, we provide a comprehensive performance study of cognitive multihop regenerative
Spoiled Onions: Exposing Malicious Tor Exit Relays
Winter, Philipp; Lindskog, Stefan
Several hundred Tor exit relays together push more than 1 GiB/s of network traffic. However, it is easy for exit relays to snoop and tamper with anonymised network traffic and as all relays are run by independent volunteers, not all of them are innocuous. In this paper, we seek to expose malicious exit relays and document their actions. First, we monitored the Tor network after developing a fast and modular exit relay scanner. We implemented several scanning modules for detecting common attac...
Cooperation schemes for rate enhancement in detect-and-forward relay channels
Benjillali, Mustapha
To improve the spectral efficiency of "Detect-and-Forward" (DetF) half-duplex relaying in fading channels, we propose a cooperation scheme where the relay uses a modulation whose order is higher than the one at the source. In a new common framework, we show that the proposed scheme offers considerable gains - in terms of achievable information rates - compared to the conventional DetF relaying schemes for both orthogonal and non-orthogonal source/relay cooperation. This allows us to propose an adaptive cooperation scheme based on the maximization of the information rate at the destination which needs to observe only the average signal-to-noise ratios of direct and relaying links. ©2010 IEEE.
Satellite-Relayed Intercontinental Quantum Network
Liao, Sheng-Kai; Cai, Wen-Qi; Handsteiner, Johannes; Liu, Bo; Yin, Juan; Zhang, Liang; Rauch, Dominik; Fink, Matthias; Ren, Ji-Gang; Liu, Wei-Yue; Li, Yang; Shen, Qi; Cao, Yuan; Li, Feng-Zhi; Wang, Jian-Feng; Huang, Yong-Mei; Deng, Lei; Xi, Tao; Ma, Lu; Hu, Tai; Li, Li; Liu, Nai-Le; Koidl, Franz; Wang, Peiyuan; Chen, Yu-Ao; Wang, Xiang-Bin; Steindorfer, Michael; Kirchner, Georg; Lu, Chao-Yang; Shu, Rong; Ursin, Rupert; Scheidl, Thomas; Peng, Cheng-Zhi; Wang, Jian-Yu; Zeilinger, Anton; Pan, Jian-Wei
We perform decoy-state quantum key distribution between a low-Earth-orbit satellite and multiple ground stations located in Xinglong, Nanshan, and Graz, which establish satellite-to-ground secure keys with ∼kHz rate per passage of the satellite Micius over a ground station. The satellite thus establishes a secure key between itself and, say, Xinglong, and another key between itself and, say, Graz. Then, upon request from the ground command, Micius acts as a trusted relay. It performs bitwise exclusive or operations between the two keys and relays the result to one of the ground stations. That way, a secret key is created between China and Europe at locations separated by 7600 km on Earth. These keys are then used for intercontinental quantum-secured communication. This was, on the one hand, the transmission of images in a one-time pad configuration from China to Austria as well as from Austria to China. Also, a video conference was performed between the Austrian Academy of Sciences and the Chinese Academy of Sciences, which also included a 280 km optical ground connection between Xinglong and Beijing. Our work clearly confirms the Micius satellite as a robust platform for quantum key distribution with different ground stations on Earth, and points towards an efficient solution for an ultralong-distance global quantum network.
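The trusted-relay step is just a bitwise XOR of the two satellite-to-ground keys; a minimal sketch of that combination (station names as in the abstract, key material random for illustration):

    import os

    # The satellite holds one key with each ground station and broadcasts
    # their XOR; each station can then recover the other's key, while the
    # broadcast alone reveals nothing about either key individually.
    key_xinglong = os.urandom(32)   # key shared satellite <-> Xinglong
    key_graz = os.urandom(32)       # key shared satellite <-> Graz

    relayed = bytes(a ^ b for a, b in zip(key_xinglong, key_graz))

    # Graz XORs the broadcast with its own key to obtain Xinglong's key:
    recovered = bytes(a ^ b for a, b in zip(relayed, key_graz))
    assert recovered == key_xinglong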
Eltayeb, Mohammed E.
Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. Nonetheless, relay selection algorithms generally require error-free channel state information (CSI) from all cooperating relays. Practically, CSI acquisition generates a great deal of feedback overhead that could result in significant transmission delays. In addition to this, the fed back channel information is usually corrupted by additive noise. This could lead to transmission outages if the central node selects the set of cooperating relays based on inaccurate feedback information. In this paper, we propose a relay selection algorithm that tackles the above challenges. Instead of allocating each relay a dedicated channel for feedback, all relays share a pool of feedback channels. Following that, each relay feeds back its identity only if its effective channel (source-relay-destination) exceeds a threshold. After deriving closed-form expressions for the feedback load and the achievable rate, we show that the proposed algorithm drastically reduces the feedback overhead and achieves a rate close to that obtained by selection algorithms with dedicated error-free feedback from all relays. © 2015 IEEE.
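A Monte Carlo sketch of the thresholding idea: each relay feeds back its identity only when its end-to-end SNR exceeds a threshold, which shrinks the average feedback load. The fading model, threshold value, and fall-back rule below are assumptions for illustration, not the paper's protocol.

    import numpy as np

    rng = np.random.default_rng(0)
    n_relays, trials, thresh = 20, 10000, 2.0
    snr_sr = rng.exponential(scale=4.0, size=(trials, n_relays))
    snr_rd = rng.exponential(scale=4.0, size=(trials, n_relays))
    snr_e2e = np.minimum(snr_sr, snr_rd)     # end-to-end SNR proxy per relay

    feeds_back = snr_e2e > thresh            # only these relays transmit feedback
    print("mean feedback load:", feeds_back.sum(axis=1).mean(), "of", n_relays)

    # Select the best relay among those that fed back; fall back to the best
    # overall relay when none exceeded the threshold.
    masked = np.where(feeds_back, snr_e2e, -np.inf)
    best = np.where(feeds_back.any(axis=1),
                    masked.argmax(axis=1), snr_e2e.argmax(axis=1))
    picked = snr_e2e[np.arange(trials), best]
    print("mean selected SNR:", picked.mean())

Raising the threshold trades a lighter feedback load against a higher chance that no relay reports, which mirrors the rate/overhead trade-off analyzed in the paper.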
Comparisons of receive array interference reduction techniques under erroneous generalized transmit beamforming
Radaydeh, Redha Mahmoud
This paper studies generalized single-stream transmit beamforming employing receive array co-channel interference reduction algorithms under slow and flat fading multiuser wireless systems. The impact of imperfect prediction of channel state information for the desired user spatially uncorrelated transmit channels on the effectiveness of transmit beamforming for different interference reduction techniques is investigated. The case of over-loaded receive array with closely-spaced elements is considered, wherein it can be configured to specified interfering sources. Both dominant interference reduction and adaptive interference reduction techniques for statistically ordered and unordered interferers powers, respectively, are thoroughly studied. The effect of outdated statistical ordering of the interferers powers on the efficiency of dominant interference reduction is studied and then compared against the adaptive interference reduction. For the system models described above, new analytical formulations for the statistics of combined signal-to-interference-plus-noise ratio are presented, from which results for conventional maximum ratio transmission and single-antenna best transmit selection can be directly deduced as limiting cases. These results are then utilized to obtain quantitative measures for various performance metrics. They are also used to compare the achieved performance of various configuration models under consideration. © 1972-2012 IEEE.
Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications
Kun Qian
Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antenna arrays are employed. This is particularly useful in antenna selection for large-scale MIMO communication systems.
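As a stand-in for exhaustive search, a common low-complexity baseline is greedy selection: repeatedly add the transmit antenna that most increases log2 det(I + (rho/k) H_S H_S^H). The sketch below is this generic greedy baseline under assumed dimensions, not the paper's joint selection-and-beamforming method.

    import numpy as np

    rng = np.random.default_rng(1)
    n_rx, n_tx, k_sel, snr = 4, 64, 4, 10.0
    H = (rng.standard_normal((n_rx, n_tx))
         + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

    def capacity(cols):
        # MIMO capacity with equal power split over the selected antennas.
        Hs = H[:, cols]
        M = np.eye(n_rx) + (snr / len(cols)) * Hs @ Hs.conj().T
        return np.log2(np.linalg.det(M)).real

    selected = []
    for _ in range(k_sel):
        rest = [j for j in range(n_tx) if j not in selected]
        selected.append(max(rest, key=lambda j: capacity(selected + [j])))
    print("selected antennas:", selected, "capacity:", capacity(selected))

Greedy selection evaluates O(k*n_tx) subsets instead of the C(n_tx, k) subsets of exhaustive search, which is the kind of scaling gap the abstract describes.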
Secure amplify-and-forward untrusted relaying networks using cooperative jamming and zero-forcing cancelation
In this paper, we investigate secure transmission in untrusted amplify-and-forward half-duplex relaying networks with the help of cooperative jamming at the destination (CJD). Under the assumption of full channel state information (CSI), conventional CJD using self-interference cancelation at the destination is efficient when the untrusted relay has no capability to suppress the jamming signal. However, if the source and destination are equipped with a single antenna and the untrusted relay is equipped with N antennas, the relay can remove the jamming signal from the received signal by linear filtering, and the full multiplexing gain of relaying cannot be achieved with conventional CJD due to the saturation of the secrecy rate in the high transmit power regime. We propose in this paper a new CJD scheme where neither the destination nor the relay can acquire the CSI of the relay-destination link. Our proposed scheme utilizes zero-forcing cancelation based on known jamming signals instead of self-interference subtraction, while the untrusted relay cannot suppress the jamming signals due to the lack of CSI. We show that the secrecy rate of the proposed scheme enjoys half of the multiplexing gain of half-duplex relaying, while that of conventional CJD saturates at high transmit power for N ≥ 2. The impact of channel estimation error at the destination is also investigated to show the robustness of the proposed scheme against strong estimation errors.
Threshold-Based Relay Selection for Detect-and-Forward Relaying in Cooperative Wireless Networks
Fan Yijia
This paper studies two-hop cooperative demodulate-and-forward relaying using multiple relays in wireless networks. A threshold-based relay selection scheme is considered, in which the reliable relays are determined by comparing the source-relay SNR to a threshold, and one of the reliable relays is selected by the destination based on the relay-destination SNR. The exact bit error rate of this scheme is derived, and a simple threshold function is proposed. It is shown that the network achieves full diversity order N under the proposed threshold, where N is the number of relays in the network. Unlike some other full-diversity-achieving protocols in the literature, the requirement that the instantaneous/average SNRs of the source-relay links be known at the destination is eliminated by using the appropriate SNR threshold.
Alternate transmission with half-duplex relaying in MIMO interference relay networks
In this paper, we consider an alternate transmission scheme for a multiple-input multiple-output interference relay channel, where multiple sources transmit their own signals to their corresponding destinations via one of two relaying groups, alternating every time phase. Each of the relaying groups has an arbitrary number of relays, and each relay operates in half-duplex amplify-and-forward mode. In our scheme, the received signals at the relay nodes consist of desired signals and two different types of interference: the inter-source interference and the inter-group interference caused by the phase incoherence of relaying. As such, we propose an iterative interference alignment algorithm to mitigate the interference. We show that our proposed scheme achieves additional degrees of freedom compared to the conventional half-duplex relaying system in interference relay channels. © 2013 IEEE.
Multiuser Radio Resource Allocation for Multiservice Transmission in OFDMA-Based Cooperative Relay Networks
Zhang Xing
The problem of multiservice transmission in OFDMA-based cooperative relay networks is studied comprehensively. We propose a framework to adaptively allocate power, subcarriers, and data rate in an OFDMA system to maximize spectral efficiency under the constraint of satisfying multiple users' multiservice QoS requirements. Specifically, we first concentrate on the single-user scenario, which considers multiservice transmission in a point-to-point cooperative relay network. Based on the analysis of the single-user scenario, we extend the multiservice transmission to the multiuser point-to-multipoint scenario. Next, based on the framework, we propose several suboptimal radio resource allocation algorithms for multiservice transmission in OFDMA-based cooperative relay networks to further reduce the computational complexity. Simulation results show that the proposed algorithms yield much higher spectral efficiency and much lower outage probability, and are flexible and efficient for the OFDMA-based cooperative relay system.
Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. Generally, relay selection algorithms require channel state information (CSI) feedback from all cooperating relays to make a selection decision. This requirement poses two important challenges, which are often neglected in the literature. Firstly, the fed back channel information is usually corrupted by additive noise. Secondly, CSI feedback generates a great deal of feedback overhead (air-time) that could result in significant performance hits. In this paper, we propose a compressive sensing (CS) based relay selection algorithm that reduces the feedback overhead of relay networks under the assumption of noisy feedback channels. The proposed algorithm exploits CS to first obtain the identity of a set of relays with favorable channel conditions. Following that, the CSI of the identified relays is estimated using least squares estimation without any additional feedback. Both single and multiple relay selection cases are considered. After deriving closed-form expressions for the asymptotic end-to-end SNR at the destination and the feedback load for different relaying protocols, we show that CS-based selection drastically reduces the feedback load and achieves a rate close to that obtained by selection algorithms with dedicated error-free feedback. © 1972-2012 IEEE.
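The compressive-sensing ingredient can be illustrated with generic orthogonal matching pursuit (OMP): recover the identities of the few strong relays from m ≪ n noisy linear measurements. The measurement matrix, sparsity level, and noise below are assumptions; the paper's feedback channelization and dictionary design are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(2)
    n, m, k = 64, 16, 3                       # relays, measurements, strong relays
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
    y = Phi @ x + 0.01 * rng.standard_normal(m)      # noisy compressed feedback

    # OMP: pick the column most correlated with the residual, re-fit, repeat.
    support, resid = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ resid))))
        sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ sol
    print("identified relays:", sorted(support),
          "true:", sorted(np.flatnonzero(x)))

Once the strong-relay identities are known, their CSI can be re-estimated by least squares on the same measurements, which matches the two-stage structure the abstract describes.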
Impact of Beamforming on the Path Connectivity in Cognitive Radio Ad Hoc Networks
Le The Dung
This paper investigates the impact of using directional antennas and beamforming schemes on the connectivity of cognitive radio ad hoc networks (CRAHNs). Specifically, considering that secondary users use two kinds of directional antennas, i.e., uniform linear array (ULA) and uniform circular array (UCA) antennas, and two different beamforming schemes, i.e., randomized beamforming and center-directed beamforming, to communicate with each other, we study the connectivity of all combinations of directional antennas and beamforming schemes and compare their performance to that of omnidirectional antennas. The results obtained in this paper show that, compared with omnidirectional transmission, beamforming transmission only benefits the connectivity when the density of secondary users is moderate. Moreover, the combination of UCA and the randomized beamforming scheme gives the highest path connectivity in all evaluated scenarios. Finally, the number of antenna elements and the degree of path loss greatly affect path connectivity in CRAHNs.
Transmission Power Adaption for Full-Duplex Relay-Aided Device-to-Device Communication
Hui Dun
Device-to-device (D2D) communications bring significant improvements in spectral efficiency by underlaying cellular networks. However, they also lead to a more severe interference environment for cellular users, especially users in deep fading or shadowing. In this paper, we investigate a relay-based communication scheme in cellular systems, where D2D communications are exploited to aid cellular downlink transmissions by acting as relay nodes underlaying the cellular network. We model two-antenna infrastructure relays employed for D2D relaying. The D2D transmitter is able to transmit and receive signals simultaneously over the same frequency band. We then propose an efficient power allocation algorithm for the base station (BS) and the D2D relay to reduce the loopback interference that is inherent to the two-antenna infrastructure in full-duplex (FD) mode. We derive the optimal power allocation in closed form under independent power constraints. Simulation results show that the algorithm reduces the power consumption of the D2D relay to the greatest extent while guaranteeing cellular users' minimum transmit rate. Moreover, it also outperforms the existing half-duplex (HD) relay mode in terms of the achievable rate of D2D.
Performance analysis of opportunistic nonregenerative relaying
Tourki, Kamel; Alouini, Mohamed-Slim; Qaraqe, Khalid A.; Yang, Hongchuan
Opportunistic relaying in cooperative communication depends on careful relay selection. However, the traditional centralized method used for opportunistic amplify-and-forward protocols requires precise measurements of channel state information at the destination. In this paper, we adopt the max-min criterion as a relay selection framework for opportunistic amplify-and-forward cooperative communications, which has been used extensively for the decode-and-forward protocol, and offer an accurate performance analysis based on exact statistics of the local signal-to-noise ratios of the best relay. Furthermore, we evaluate the asymptotic performance and deduce the diversity order of our proposed scheme. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over Rayleigh fading channels, and we compare the max-min relay selection with its centralized channel state information-based and partial relay selection counterparts.
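The max-min selection criterion itself is one line: pick the relay whose weaker hop is strongest. A minimal sketch with assumed Rayleigh-fading SNR draws:

    import numpy as np

    rng = np.random.default_rng(3)
    n_relays = 8
    snr_sr = rng.exponential(scale=5.0, size=n_relays)  # source->relay SNRs
    snr_rd = rng.exponential(scale=5.0, size=n_relays)  # relay->destination SNRs

    # argmax_k min(SNR_sk, SNR_kd): the bottleneck hop decides each relay's worth.
    bottleneck = np.minimum(snr_sr, snr_rd)
    best = int(np.argmax(bottleneck))
    print("best relay:", best, "bottleneck SNR:", bottleneck[best])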
Diversity-Multiplexing Trade-off for Coordinated Relayed Uplink and Direct Downlink Transmissions
Thai, Chan; Popovski, Petar; Sun, Fan
There are two basic principles used in wireless network coding to design throughput-efficient schemes: (1) aggregation of communication flows, and (2) interference is embraced and subsequently cancelled or mitigated. These principles inspire the design of Coordinated Direct/Relay (CDR) schemes, where each basic transmission involves two flows to a direct and a relayed user. Considering a scenario with relayed uplink and direct downlink, we analyze the Diversity-Multiplexing Tradeoff (DMT), calculating either the exact value or both upper/lower bounds. The CDR scheme is shown to have a higher...
Pandarakkottilil, Ubaidulla; Chockalingam, A.
In this paper, we consider robust joint designs of relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a
SWIPT in Multiuser MIMO Decode-and-Forward Relay Broadcasting Channel with Energy Harvesting Relays
Benkhelifa, Fatma; Salem, Ahmed Sultan; Alouini, Mohamed-Slim
In this paper, we consider a multiuser multiple- input multiple-output (MIMO) decode-and-forward (DF) relay broadcasting channel (BC) with single source, multiple energy harvesting relays and multiple destinations. Since the end-to-end sum rate
Concentric artificial impedance surface for directional sound beamforming
Kyungjun Song
Utilizing acoustic metasurfaces consisting of subwavelength resonant textures, we design an artificial impedance surface by creating a new boundary condition. We demonstrate a circular artificial impedance surface with surface impedance modulation for directional sound beamforming in three-dimensional space. This artificial impedance surface is implemented by revolving two-dimensional Helmholtz resonators with varying internal coiled paths. Physically, the textured surface has inductive surface impedance on its inner circular patterns and capacitive surface impedance on its outer circular patterns. Directional receive beamforming can be achieved using an omnidirectional microphone located at the focal point formed by the gradient-impeding surface. In addition, the uniaxial surface impedance patterning inside the circular aperture can be used to steer the direction of the main lobe of the radiation pattern.
A recurrent neural network for adaptive beamforming and array correction.
Che, Hangjun; Li, Chuandong; He, Xing; Huang, Tingwen
In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. In order to minimize sidelobe interference, the problem is described as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values in the feasible region, which is derived from the array's state and plane-wave information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under array mismatch conditions. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to find exact solutions under large-scale constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.
Synthetic Aperture Flow Imaging Using a Dual Beamformer Approach
Li, Ye
Color flow mapping systems have become widely used in clinical applications. They provide an opportunity to visualize the velocity profile over a large region of a vessel, which makes it possible to diagnose, e.g., occlusion of veins, heart valve deficiencies, and other hemodynamic problems. However, while conventional ultrasound color flow mapping provides useful information in many circumstances, the spatial velocity resolution and frame rate are limited. The entire velocity distribution consists of image lines from different directions, and each image line ... on the current commercial ultrasound scanner. The motivation for this project is to develop a method that lowers the amount of calculation while maintaining beamforming quality sufficient for flow estimation. Synthetic aperture flow imaging using a dual beamformer approach is investigated using Field II simulations ...
Compact Beamformer Design with High Frame Rate for Ultrasound Imaging
Jun Luo
In the medical field, two-dimensional ultrasound images are widely used in clinical diagnosis. The beamformer is critical in determining the complexity and performance of an ultrasound imaging system. Unlike traditional designs implemented with separate chips, a compact beamformer with 64 effective channels in a single mid-range Field Programmable Gate Array (FPGA) is presented in this paper. The compactness is achieved by employing receive synthetic aperture, harmonic imaging, time sharing, and linear interpolation. Besides that, a multi-beam method is used to improve the frame rate of the ultrasound imaging system. Online dynamic configuration is employed to extend the system's flexibility to two kinds of transducers with multiple scanning modes. The design is verified on a prototype scanner board. Simulation results show that on-chip memory can be saved and the frame rate can be improved in the case of 64 effective channels, which meets the requirements of real-time applications.
Low complexity non-iterative coordinated beamforming in 2-user broadcast channels
We propose a new non-iterative coordinated beamforming scheme to obtain full multiplexing gain in 2-user MIMO systems. In order to find the beamforming and combining matrices, we solve a generalized eigenvector problem and describe how to find generalized eigenvectors according to the Gaussian broadcast channels. Selected simulation results show that the proposed method yields the same sum-rate performance as the iterative coordinated beamforming method, while maintaining lower complexity by non-iterative computation of the beamforming and combining matrices. We also show that the proposed method can easily exploit selective gain by choosing the best combination of generalized eigenvectors. © 2006 IEEE.
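The computational core here is a generalized eigenvector problem A v = λ B v. A sketch of that step, with random Hermitian positive-definite matrices standing in for the actual broadcast-channel matrices (scipy's generalized eigensolver is used; the matrices are assumptions):

    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(4)

    def rand_cov(n):
        # Random Hermitian positive-definite matrix as a stand-in channel term.
        X = rng.standard_normal((n, 2 * n)) + 1j * rng.standard_normal((n, 2 * n))
        return X @ X.conj().T / (2 * n)

    A, B = rand_cov(4), rand_cov(4)
    vals, vecs = eig(A, B)                    # solves A v = lambda B v
    order = np.argsort(-vals.real)            # strongest directions first
    v = vecs[:, order[0]] / np.linalg.norm(vecs[:, order[0]])
    print("generalized Rayleigh quotient:",
          (v.conj() @ A @ v / (v.conj() @ B @ v)).real)

Because the eigenpairs come out of a single decomposition, the beamforming and combining matrices can be formed non-iteratively, which is the complexity advantage the abstract claims over iterative coordinated beamforming.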
Low complexity symbol-wise beamforming for MIMO-OFDM systems
Lee, Hyun Ho
In this paper, we consider a low complexity symbol-wise beamforming for MIMO-OFDM systems. We propose a non-iterative algorithm for the symbol-wise beamforming, which can provide the performance approaching that of the conventional symbol-wise beamforming based on the iterative algorithm. We demonstrate that our proposed scheme can reduce the computational complexity significantly. From our simulation results, it is evident that our proposed scheme leads to a negligible performance loss compared to the conventional symbol-wise beamforming regardless of spatial correlation or presence of co-channel interference. © 2011 IEEE.
Park, Kihong; Ko, Youngchai; Alouini, Mohamed-Slim
Electric equipment technical regulation on a relay
This regulation concerns relays for power system protection. It defines the relay structure, contact protection, reclosing, relay schemes, and input circuits. It specifies normal and special use conditions, rated frequency, rated voltage and rated current, and the permissible fluctuation range of the incoming supply to the relay. It covers the general construction of the outer case, the correction device, the operation indicator, and the outer terminals, as well as the types and methods of testing. It also includes the relevant markings and diagrams concerning currents.
Khafagy, Mohammad Galal; Alouini, Mohamed-Slim; Aissa, Sonia
In this work, we analyze the performance of full-duplex relay selection (FDRS) in spectrum-sharing networks. Contrary to half-duplex relaying, full-duplex relaying (FDR) enables simultaneous listening/forwarding at the secondary relay(s), thereby
A Two-Stage MMSE Beamformer for Underdetermined Signal Separation
Koldovský, Zbyněk; Tichavský, Petr; Phan, A. H.; Cichocki, A.
Vol. 20, No. 12 (2013), pp. 1227-1230. ISSN 1070-9908. Grant: GA ČR (CZ) GAP103/11/1947. Institutional support: RVO:67985556. Keywords: beamforming; underdetermined mixtures; blind source separation. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 1.639 (2013). http://library.utia.cas.cz/separaty/2014/SI/koldovsky-0424112.pdf
Compressive Sound Speed Profile Inversion Using Beamforming Results
Youngmin Choo; Woojae Seong
Sound speed profile (SSP) significantly affects acoustic propagation in the ocean. In this work, the SSP is inverted using compressive sensing (CS) combined with beamforming to indicate the direction of arrivals (DOAs). The travel times and the positions of the arrivals can be approximately linearized using their Taylor expansion with the shape function coefficients that parameterize the SSP. The linear relation between the travel times/positions and the shape function coefficients enables CS...
A Contribution to the Extension of Beamforming Methods
Kern, Marcus
In the automotive development process, acoustic measurement systems have become established that use an array of microphones, an optical camera, and downstream signal processing to detect the direction of sound incidence, and can thereby back-calculate and visualize the sound pressure distribution at source locations in the far field. The signal processing is generally based on delay-and-sum beamforming, implemented in the time or frequency domain. The weaknesses of this measurement technique with respect to ...
Jazzar, Saleh
In this paper we introduce a self-interference (SI) estimation and minimization technique for amplify-and-forward relays. Relays are used to help forward signals between a transmitter and a receiver. This helps increase the signal coverage and reduce the required transmitted signal power. One problem facing relay communications is the signal that leaks from the relay's output back to its input. This causes an SI problem, where the newly received signal at the relay's input is added to the unwanted leaked signal from the relay's output. A solution is proposed in this paper to estimate and minimize this SI, based on a tapped filter at the destination. To obtain the optimum weights for this tapped filter, some channel parameters must be estimated first. This is performed blindly at the destination without the need for any training. This channel parameter estimation method is named the blind self-interference channel estimation (BSICE) method. The next step in the proposed solution is to estimate the tapped filter's weights. This is performed by minimizing the mean squared error (MSE) at the destination, and the proposed method is named the MSE-optimum weight (MSE-OW) method. Simulation results are provided in this paper to verify the performance of the BSICE and MSE-OW methods. © 2013 IEEE.
Analysis of errors of radiation relay, (1)
Koyanagi, Takami; Nakajima, Sinichi
The statistical error of a liquid level controlled by a radiation relay is analyzed, and a method of minimizing the error is proposed. This method reduces to the problem of optimally setting the time constant of the radiation relay. The equations for obtaining the value of the time constant are presented, and the numerical results are shown in a table and plotted in a figure. The optimum time constant of the upper level control relay is entirely different from that of the lower level control relay. (auth.)
Experimental evaluation of earthquake induced relay chattering
Bandyopadhyay, K.; Hofmayer, C.; Shteyngart, S.
An experimental evaluation of relay performance under vibratory environments is discussed in this paper. Single frequency excitation was used for most tests. Limited tests were performed with random multifrequency inputs. The capacity of each relay was established based on a two-millisecond chatter criterion. The experimental techniques are described and the effects of parameters in controlling the relay capacity levels are illustrated with test data. A wide variation of the capacity levels was observed due to the influence of parameters related to the design of the relay and nature of the input motion. 3 refs., 15 figs
Multidimensional-DSP Beamformers Using the ROACH-2 FPGA Platform
Vishwa Seneviratne
Antenna array-based multi-dimensional infinite-impulse response (IIR) digital beamformers are employed in a multitude of radio frequency (RF) applications ranging from electronically-scanned radar and radio telescopes to long-range detection and target tracking. A method to design 3D IIR beam filters using 2D IIR beam filters is described. A cascaded 2D IIR beam filter architecture based on a systolic array architecture is proposed as an alternative for an existing radar application. Differential-form transfer functions and polyphase structures are employed in the design to increase the speed of operation to the gigahertz range. The feasibility of a practical implementation of a 4-phase polyphase 2D IIR beam filter is explored. A digital hardware prototype is designed, implemented, and tested using a ROACH-2 Field Programmable Gate Array (FPGA) platform fitted with a Xilinx Virtex-6 SX475T FPGA chip and multi-input analog-to-digital converter (ADC) boards set to a maximum sampling rate of 960 MHz. The article describes a method to build a 3D IIR beamformer using polyphase structures. A comparison of the technical specifications of an existing phased-array radar application and the proposed 3D IIR beamformer is also given to illustrate that the proposed method is a better alternative for such applications.
Strong reflector-based beamforming in ultrasound medical imaging.
Szasz, Teodora; Basarab, Adrian; Kouamé, Denis
This paper investigates the use of sparse priors in creating original two-dimensional beamforming methods for ultrasound imaging. The proposed approaches detect the strong reflectors in the scanned medium based on the well-known Bayesian information criterion used in statistical modeling. Moreover, they allow a parametric selection of the level of speckle in the final beamformed image. These methods are applied to simulated data and to recorded experimental data. Their performance is evaluated using the standard image quality metrics: contrast ratio (CR), contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR). A comparison is made with the classical delay-and-sum and minimum variance beamforming methods to confirm the ability of the proposed methods to precisely detect the number and positions of the strong reflectors in a sparse medium and to accurately reduce the speckle and greatly enhance the contrast in a non-sparse medium. We confirm that our methods improve the contrast of the final image for both simulated and experimental data. In all experiments, the proposed approaches tend to preserve the speckle, which can be of major interest in clinical examinations, as it can contain useful information. In sparse media we achieve a large improvement in contrast compared with the classical methods. Copyright © 2015 Elsevier B.V. All rights reserved.
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
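The diagonal-averaging step can be sketched directly: average each subdiagonal of the snapshot-starved sample covariance and rebuild a Hermitian Toeplitz estimate. The maximum-entropy extrapolation the paper adds afterwards is omitted here, and the dimensions are illustrative.

    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(5)
    n, snapshots = 32, 8                      # far fewer snapshots than sensors
    X = rng.standard_normal((n, snapshots)) + 1j * rng.standard_normal((n, snapshots))
    R = X @ X.conj().T / snapshots            # rank-deficient sample covariance

    # Average the k-th superdiagonal for k = 0..n-1; Hermitian symmetry gives
    # the subdiagonals as conjugates, yielding a Toeplitz-constrained estimate.
    r = np.array([np.diag(R, k).mean() for k in range(n)])
    R_toep = toeplitz(r.conj(), r)

    print("rank before/after:",
          np.linalg.matrix_rank(R), np.linalg.matrix_rank(R_toep))

The averaging restores full rank from limited data, which is what makes the subsequent subspace projection and adaptive beamforming well conditioned.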
Nonfeedback Distributed Beamforming Using Spatial-Temporal Extraction
Pongnarin Sriploy
So far, major phase synchronization techniques for distributed beamforming suffer from problems related to the feedback procedure, since the base station has to send a feedback reference signal back to the transmitting nodes. This requires a stable communication channel or a number of retransmissions, introducing complexity at both transmitter and receiver. Therefore, this paper proposes an alternative technique, so-called nonfeedback beamforming, employing an operation in both the space and time domains. The proposed technique extracts the combined signal at the base station. The concept of extraction is based on solving simultaneous linear equations without the requirement of feedback or reference signals from the base station. Also, the number of retransmissions is lower compared with the techniques available in the literature. As a result, the transmitting nodes are of low complexity and low power consumption. The simulation and experimental results reveal that the proposed technique provides the optimum beamforming gain. Furthermore, it can reduce the bit error rate of the system.
78 FR 40407 - Structure and Practices of the Video Relay Service Program: Telecommunications Relay Services and...
... Structure and Practices of the Video Relay Service Program: Telecommunications Relay Services and Speech-to-Speech Services ... telecommunications relay services (TRS) program continues to offer functional equivalence to all eligible users and ..., identified by CG Docket Nos. 10-51 and 03-123, by any of the following methods: Electronic Filers: Comments ...
Protective relaying theory and applications
Elmore, Walter A
Targeting the latest microprocessor technologies for more sophisticated applications in the field of power system short circuit detection, this revised and updated source imparts fundamental concepts and breakthrough science for the isolation of faulty equipment and minimization of damage in power system apparatus. The Second Edition clearly describes key procedures, devices, and elements crucial to the protection and control of power system function and stability. It includes chapters and expertise from the most knowledgeable experts in the field of protective relaying, and describes micropro
Wireless Powered Cooperative Communications: Power-Splitting Relaying With Energy Accumulation (Author's Manuscript)
... decreasing power usage while improving transmission performance. A key concern of energy-harvesting-enabled cooperative relay communication is the ... improving transmission performance via an efficient utilization of harvested power has been widely studied for conventional energy harvesting techniques ... can be used as energy sources for cooperative nodes. Moreover, it has been illustrated in [6] that wireless-powered cooperative relay communications ...
On joint power allocation and multipath routing in femto-relay networks
Hoteit, Sahar; Duhamel, Pierre; Lasaulce, Samson
Transmit power allocation techniques are very important for managing interference in small-cell networks. While available power allocation algorithms in the literature rely on a predefined routing protocol, we propose in this paper a power-efficient two-step algorithm that allows power allocation and routing to be performed jointly in femto-relay networks. First, we propose an interference-based partitioning method to cluster the femto-relays; then we adopt an iterative ...
Cell-Edge Multi-User Relaying with Overhearing
Sun, Fan; Kim, Tae Min; Paulraj, Arogyaswami
Carefully designed protocols can turn overheard interference into useful side information to allow simultaneous transmission of multiple communication flows and increase the spectral efficiency in the interference-limited regime. In this letter, we propose a novel scheme in a typical cell-edge scenario. By exploiting the overhearing link through proper relay precoding and adaptive receiver processing, rate performance can be significantly improved compared to conventional transmission, which does not utilize overhearing.
Beamforming transmission in IEEE 802.11ac under time-varying channels.
Yu, Heejung; Kim, Taejoon
The IEEE 802.11ac wireless local area network (WLAN) standard has adopted beamforming (BF) schemes to improve spectral efficiency and throughput with multiple antennas. To design the transmit beam, a channel sounding process to feedback channel state information (CSI) is required. Due to sounding overhead, throughput increases with the amount of transmit data under static channels. Under practical channel conditions with mobility, however, the mismatch between the transmit beam and the channel at transmission time causes performance loss when transmission duration after channel sounding is too long. When the fading rate, payload size, and operating signal-to-noise ratio are given, the optimal transmission duration (i.e., packet length) can be determined to maximize throughput. The relationship between packet length and throughput is also investigated for single-user and multiuser BF modes.
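A toy model of the tradeoff: throughput scales as the data fraction T/(T_snd + T) times a rate penalty that grows as the beam ages relative to the channel coherence time. The exponential aging factor and all constants below are assumptions, not the paper's analysis; they only illustrate that an interior optimum transmission duration exists.

    import numpy as np

    # Effective throughput ~ (T / (T_snd + T)) * C * exp(-T / T_coh):
    # short T wastes air-time on sounding, long T suffers beam mismatch.
    T_snd, T_coh, C = 0.5e-3, 10e-3, 300e6   # sounding time, coherence time, peak rate
    T = np.linspace(0.1e-3, 50e-3, 2000)
    thr = T / (T_snd + T) * C * np.exp(-T / T_coh)
    print("best duration [ms]:", T[np.argmax(thr)] * 1e3,
          "throughput [Mb/s]:", thr.max() / 1e6)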
Impact of Primary User Traffic on Adaptive Transmission for Cognitive Radio with Partial Relay Selection
In a cognitive relay system, the secondary user is permitted to transmit data via a relay when licensed frequency bands are detected to be free. Previous studies mainly focus on reducing or limiting the interference of the secondary transmission on the primary users. On the other hand, however, the primary user traffic also affects the data transmission performance of the secondary users. In this paper, we investigate the impact of the primary user traffic on the bit error rate (BER) of the secondary transmission when the secondary user adopts adaptive transmission with a partially selected relay. From the numerical results, we can see that the primary user traffic seriously degrades the average BER. The worse-link partial selection can perform almost as well as global selection when the channel conditions of the source-relay links and the relay-destination links differ greatly. In addition, although relay selection improves the spectral efficiency of the secondary transmission, numerical results show that it has only a slight impact on the overall average BER, so the robustness of the system is not affected by the relay selection.
A comparison between temporal and subband minimum variance adaptive beamforming
Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis
This paper compares the performance of temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains, respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-at-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL), and the contrast level. From a point phantom, a full sequence of 128 emissions, with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for the time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained with either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm, and those of PSL are -42 dB and -46 dB, for the temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB, respectively, at the same depth, with the initial shape of the cyst being preserved, in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low-resolution image formed by a single emission, 0.44 × 10^9 calculations per second are required for the temporal approach. The corresponding numbers for the subband approach are 0.62 × 10^9 for the point phantom and 1.33 × 10^9 for the cyst phantom. The comparison demonstrates similar
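Both implementations share the minimum-variance (Capon) core w = R^-1 a / (a^H R^-1 a). A sketch of that weight computation with diagonal loading on an assumed 32-element aperture (the data and loading factor are illustrative):

    import numpy as np

    rng = np.random.default_rng(6)
    n = 32
    a = np.ones(n, dtype=complex)                # steering vector, broadside focus
    X = rng.standard_normal((n, 64)) + 1j * rng.standard_normal((n, 64))
    R = X @ X.conj().T / 64 + 1e-2 * np.eye(n)   # diagonally loaded covariance

    Ri_a = np.linalg.solve(R, a)                 # R^-1 a without explicit inverse
    w = Ri_a / (a.conj() @ Ri_a)                 # MV (Capon) apodization weights
    print("distortionless gain:", (w.conj() @ a).real)   # = 1 by construction

The temporal and subband variants compared in the paper differ in whether this weight set is computed on time-domain snapshots or per frequency subband.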
An RSS based location estimation technique for cognitive relay networks
Qaraqe, Khalid A.; Hussain, Syed Imtiaz; Çelebi, Hasari Burak; Abdallah, Mohamed M.; Alouini, Mohamed-Slim
In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine
Error-rate performance analysis of opportunistic regenerative relaying
Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can
New results on performance analysis of opportunistic regenerative relaying
Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim; Qaraqe, Khalid A.
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may
Accurate performance analysis of opportunistic decode-and-forward relaying
Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
In photoacoustic (PA) imaging, the Delay-and-Sum (DAS) beamformer is a common beamforming algorithm thanks to its simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, Delay-Multiply-and-Sum (DMAS), was introduced, which has lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, the so-called Minimum Variance-Based D...
Opportunistic Relay Selection with Cooperative Macro Diversity
Yu Chia-Hao
We apply a fully opportunistic relay selection scheme to study cooperative diversity in a semianalytical manner. In our framework, idle Mobile Stations (MSs) are capable of being used as Relay Stations (RSs) and no relaying is required if the direct path is strong. Our relay selection scheme is fully selection based: either the direct path or one of the relaying paths is selected. Macro diversity, which is often ignored in analytical works, is taken into account together with micro diversity by using a complete channel model that includes both shadow fading and fast fading effects. The stochastic geometry of the network is taken into account by having a random number of randomly located MSs. The outage probability analysis of the selection differs from the case where only fast fading is considered. Under our framework, the distribution of the received power is formulated using different Channel State Information (CSI) assumptions to simulate both optimistic and practical environments. The results show that the relay selection gain can be significant given a sufficient number of candidate RSs. Also, while relay selection according to incomplete CSI is diversity-suboptimal compared to relay selection based on full CSI, the loss in average throughput is not too significant. This is a consequence of the dominance of geometry over fast fading.
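A minimal sketch of such a fully selection-based rule (illustrative decode-and-forward metric with synthetic Rayleigh-faded SNRs; the paper's shadow-fading and geometry modeling is not reproduced here):

```python
import numpy as np

# Sketch of a fully selection-based decision: a two-hop path is scored
# by its weaker hop, then compared to the direct path, and exactly one
# path is used.  Exponential SNRs stand in for Rayleigh fading.
rng = np.random.default_rng(1)
snr_direct = rng.exponential(1.0)            # direct source->destination SNR
snr_sr = rng.exponential(2.0, size=5)        # source->relay, 5 candidate RSs
snr_rd = rng.exponential(2.0, size=5)        # relay->destination

path_snr = np.minimum(snr_sr, snr_rd)        # bottleneck hop of each relay path
best = int(np.argmax(path_snr))
if snr_direct >= path_snr[best]:
    print(f"use direct path (SNR {snr_direct:.2f})")
else:
    print(f"use relay {best} (SNR {path_snr[best]:.2f})")
```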
Microcomputer relay regulator in the CAMAC standard
Nikolaev, V.P.
The digital relay regulator is developed on the basis of the KM001 microcomputer and KK06 controller for automatic control objects with transfer functions describing a broad class of systems using actuating motors (stabilization, follow-up systems). The CAMAC relay unit realizes the regulation law and can control analog values over 8 channels.
In this paper, we investigate a cognitive radio (CR) relay network with multiple relay nodes that help forward the signals of CR users. Best relay selection is considered to take advantage of its low implementation complexity. When the primary user (PU) is located close to the relay nodes, the performance of the secondary network is severely degraded due to the interference power constraint during the transmission in the second hop. We propose a decode and zero-forcing forward scheme to suppress the interference power at the relay nodes, and analyze the statistics of the end-to-end signal-to-noise ratio when the relay nodes are located arbitrarily and therefore experience non-identical Rayleigh fading channels. Numerical results validate our theoretical results and show that our proposed scheme improves the performance of the CR network when the PU is close to the relay nodes. © 2014 IEEE.
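The core of such a zero-forcing forwarding step can be sketched in a few lines: project the relay's transmit beamformer onto the null space of the relay-to-PU channel so the PU sees (ideally) no interference. The channels and the y = hᴴw convention below are synthetic assumptions, not the paper's exact model:

```python
import numpy as np

# Illustrative zero-forcing step at a 4-antenna relay (synthetic channels):
# project the forwarding beamformer onto the null space of the
# relay-to-PU channel so the PU sees (ideally) no interference.
rng = np.random.default_rng(2)
n_ant = 4
h_rp = rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)  # relay->PU
h_rd = rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)  # relay->dest

P = np.eye(n_ant) - np.outer(h_rp, h_rp.conj()) / np.vdot(h_rp, h_rp).real
w = P @ h_rd                              # steer to destination inside null space
w /= np.linalg.norm(w)

print(abs(np.vdot(h_rp, w)))              # interference seen by the PU: ~0
print(abs(np.vdot(h_rd, w)))              # useful gain toward the destination
```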
On the achievable degrees of freedom of alternate MIMO relaying with multiple AF relays
In this paper, we consider a two-hop relaying network where one source, one destination, and multiple amplify-and-forward (AF) relays equipped with M antennas operate in a half-duplex mode. In order to compensate for the inherent loss of the capacity pre-log factor of 1/2 due to half-duplex relaying, we propose a new transmission protocol which combines alternate relaying and inter-relay interference alignment. We prove that the proposed scheme can (i) exploit M degrees of freedom (DOFs) and (ii) perfectly recover the pre-log factor loss if the number of relays is at least six. From our selected numerical results, we show that our proposed scheme gives significant improvement over conventional AF relaying, which offers only M/2 DOFs. © 2012 IEEE.
Novel ring resonator-based integrated photonic beamformer for broadband phased array receive antennas - part 1: design and performance analysis
Meijerink, Arjan; Roeloffzen, C.G.H.; Meijerink, Roland; Zhuang, L.; Marpaung, D.A.I.; Bentum, Marinus Jan; Burla, M.; Verpoorte, Jaco; Jorna, Pieter; Huizinga, Adriaan; van Etten, Wim
A novel optical beamformer concept is introduced that can be used for seamless control of the reception angle in broadband wireless receivers employing a large phased array antenna (PAA). The core of this beamformer is an optical beamforming network (OBFN), using ring resonator-based broadband
Bilayer expurgated LDPC codes with uncoded relaying
Md. Noor-A-Rahim
Bilayer low-density parity-check (LDPC) codes are an effective coding technique for decode-and-forward relaying, where the relay forwards extra parity bits to help the destination decode the source bits correctly. In the existing bilayer coding scheme, these parity bits are protected by an error-correcting code and assumed to be reliably available at the receiver. We propose an uncoded relaying scheme, where the extra parity bits are forwarded to the destination without any protection. Through density evolution analysis and simulation results, we show that our proposed scheme achieves better performance in terms of bit erasure probability than the existing relaying scheme. In addition, our proposed scheme results in lower complexity at the relay.
On Alternate Relaying with Improper Gaussian Signaling
Gaafar, Mohamed
In this letter, we investigate the potential benefits of adopting improper Gaussian signaling (IGS) in a two-hop alternate relaying (AR) system. Given the known benefits of using IGS in interference-limited networks, we propose to use IGS to relieve the inter-relay interference (IRI) impact on the AR system, assuming no channel state information is available at the source. In this regard, we assume that the two relays use IGS and the source uses proper Gaussian signaling (PGS). Then, we optimize the degree of impropriety of the relays' signal, measured by the circularity coefficient, to maximize the total achievable rate. Simulation results show that using IGS yields a significant performance improvement over PGS, especially when the first hop is a bottleneck due to weak source-relay channel gains and/or strong IRI.
Gaafar, Mohamed; Amin, Osama; Ikhlef, Aissa; Chaaban, Anas; Alouini, Mohamed-Slim
Optically beamformed beam-switched adaptive antennas for fixed and mobile broadband wireless access networks
Piqueras, M.A.; Grosskopf, G.; Vidal, B.; Herrera Llorente, J.; Martinez, J.M.; Sanchis, P.; Polo, V.; Corral, J.L.; Marceaux, A.; Galière, J.; Lopez, J.; Enard, A.; Valard, J.-L.; Parillaud, O.; Estèbe, E.; Vodjdani, N.; Choi, M.-S.; Besten, den J.H.; Soares, F.M.; Smit, M.K.; Marti, J.
In this paper, a 3-bit optical beamforming architecture based on 2×2 optical switches and dispersive media is proposed and demonstrated. The performance of this photonic beamformer is experimentally demonstrated at 42.7 GHz in both transmission and reception modes. The progress achieved for
Investigation of Adaptive Beamforming Algorithms for Cognitive ...
The frequency spectrum is one of the biggest natural resources, with a significant impact on the development of wireless communication technologies. Therefore, utilizing this natural resource in an efficient way accelerates technological advancement. The spectrum allotment strategy that has been serving well the ...
Location-quality-aware policy optimisation for relay selection in mobile networks
for resulting performance of such network optimizations. In mobile scenarios, the required information collection and forwarding causes delays that additionally affect the reliability of the collected information and hence influence the performance of the relay selection method. This paper analyzes the joint influence of these two factors in the decision process for the example of a mobile location-based relay selection approach, using a continuous-time Markov chain model. Efficient algorithms are developed based on this model to obtain optimal relay policies under consideration of localization errors. Numerical results show how information update rates, forwarding delays, and location estimation errors affect these optimal policies and allow conclusions on the required accuracy of location-based systems for such mobile relay selection scenarios. A measurement-based indoor scenario with more complex...
Simulation Study of Real Time 3-D Synthetic Aperture Sequential Beamforming for Ultrasound Imaging
Hemmsen, Martin Christian; Rasmussen, Morten Fischer; Stuart, Matthias Bo
This paper presents a new beamforming method for real-time three-dimensional (3-D) ultrasound imaging using a 2-D matrix transducer. To obtain images with sufficient resolution and contrast, several thousand elements are needed. The proposed method reduces the required channel count from the transducer to the main system. The real-time imaging capability is achieved using a synthetic aperture beamforming technique, utilizing the transmit events to generate a set of virtual elements that in combination can generate an image. The two core capabilities in combination are named Synthetic Aperture Sequential Beamforming (SASB). Simulations are performed to evaluate the image quality of the presented method in comparison to parallel beamforming utilizing 16 receive beamformers. As indicators for image quality, the detail resolution and cystic resolution are determined for a set of scatterers at a depth of 90 mm...
Photonic Beamformer Model Based on Analog Fiber-Optic Links' Components
Volkov, V. A.; Gordeev, D. A.; Ivanov, S. I.; Lavrov, A. P.; Saenko, I. I.
The model of photonic beamformer for wideband microwave phased array antenna is investigated. The main features of the photonic beamformer model based on true-time-delay technique, DWDM technology and fiber chromatic dispersion are briefly analyzed. The performance characteristics of the key components of photonic beamformer for phased array antenna in the receive mode are examined. The beamformer model composed of the components available on the market of fiber-optic analog communication links is designed and tentatively investigated. Experimental demonstration of the designed model beamforming features includes actual measurement of 5-element microwave linear array antenna far-field patterns in 6-16 GHz frequency range for antenna pattern steering up to 40°. The results of experimental testing show good accordance with the calculation estimates. (paper)
Beamforming design of sum rate optimization for MU-MISO scenario
ZHAO Pu
This paper investigates beamforming design based on perfect channel state information (CSI) in a multi-user downlink network. The base station (BS) is equipped with multiple antennas while each user is equipped with one antenna. The BS communicates with the users through transmit beamforming. This paper maximizes the sum rate of all users under a transmit power constraint. The objective function is complex and non-convex, which makes the problem difficult to solve. This article proposes a beamforming scheme based on the zero-forcing criterion. With this method, the original problem is divided into two sub-problems that can be solved separately. The simulation results suggest that the proposed beamforming scheme achieves better performance than the traditional leakage-based beamforming scheme.
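For reference, a minimal zero-forcing downlink beamformer of the kind the scheme builds on (synthetic channels, an assumed noise power, per-beam power normalization; a sketch, not the paper's two-subproblem algorithm):

```python
import numpy as np

# Minimal zero-forcing downlink beamformer: the columns of
# W = H^H (H H^H)^-1 null inter-user interference.
rng = np.random.default_rng(3)
K, M = 3, 4                                       # 3 users, 4 BS antennas
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # zero-forcing directions
W /= np.linalg.norm(W, axis=0)                    # unit-power beam per user

G = H @ W                                         # effective channels: ~diagonal
snr = np.abs(np.diag(G)) ** 2 / 1e-2              # noise power 0.01 (assumed)
print("leakage:", np.max(np.abs(G - np.diag(np.diag(G)))))   # ~0
print("sum rate:", np.sum(np.log2(1 + snr)), "bit/s/Hz")
```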
Motion compensated beamforming in synthetic aperture vector flow imaging
Oddershede, Niels; Jensen, Jørgen Arendt
In this paper, these motion effects are considered. A number of Field II simulations of a single scatterer moving at different velocities are performed, both for axial and lateral velocities from 0 to 1 m/s. Data are simulated at a pulse repetition frequency of 5 kHz. The signal-to-noise ratio (SNR)... Here the SNR is -10 dB compared to the stationary scatterer. A 2D motion compensation method for synthetic aperture vector flow imaging is proposed, where the former vector velocity estimate is used for compensating the beamforming of new data. This method is tested on data from an experimental flow...
Synthetic Aperture Sequential Beamforming implemented on multi-core platforms
Kjeldsen, Thomas; Lassen, Lee; Hemmsen, Martin Christian
This paper compares several computational approaches to Synthetic Aperture Sequential Beamforming (SASB) targeting consumer-level parallel processors such as multi-core CPUs and GPUs. The proposed implementations demonstrate that ultrasound imaging using SASB can be executed in real time with ... per second) on an Intel Core i7 2600 CPU with an AMD HD7850 and a NVIDIA GTX680 GPU. The fastest CPU and GPU implementations use 14% and 1.3% of the real-time budget of 62 ms/frame, respectively. The maximum achieved processing rate is 1265 frames/s.
Synthetic Aperture Sequential Beamformation applied to medical imaging
Hemmsen, Martin Christian; Hansen, Jens Munk; Jensen, Jørgen Arendt
Synthetic Aperture Sequential Beamforming (SASB) is applied to medical ultrasound imaging using a multi-element convex array transducer. The main motivation for SASB is to apply synthetic aperture techniques without the need for storing RF data for a number of elements and hereby devise a system with a reduced system complexity. Using a 192-element, 3.5 MHz, λ-pitch transducer, it is demonstrated using tissue-phantom and wire-phantom measurements how the speckle size and the detail resolution are improved compared to conventional imaging.
End-to-end performance of cooperative relaying in spectrum-sharing systems with quality of service requirements
Asghari, Vahid Reza; Aissa, Sonia
We propose adopting a cooperative relaying technique in spectrum-sharing cognitive radio (CR) systems to more effectively and efficiently utilize available transmission resources, such as power, rate, and bandwidth, while adhering to the quality
Application of a proposed overcurrent relay in radial distribution networks
Conde, A.; Vazquez, E. [Universidad Autonoma de Nuevo Leon, Facultad de Ingenieria Mecanica y Electrica, A.P. 36-F, CU, CP 66450, San Nicolas de los Garza, Nuevo Leon (Mexico)
This paper contains the application criteria and coordination process for a proposed overcurrent relay in a radial power system with feed from one or multiple sources. This relay uses independent functions to detect faults and to calculate the operation time. The relay also uses a time-element function that allows it to reduce the relay operation time, enhancing the backup protection. Some of the proposed approaches improve the sensitivity of the relay. The selection of the best approach in the proposed relay is defined by the needs of the application. The proposed protection can be considered as an additional protection function for conventional overcurrent relays. (author)
Time of arrival based location estimation for cooperative relay networks
Çelebi, Hasari Burak
In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive Cramer-Rao lower bound (CRLB) for the location estimates using the relay network. The analysis is extended to obtain average CRLB considering the signal fluctuations in both relay and direct links. The effects of the channel fading of both relay and direct links and amplification factor and location of the relay node on average CRLB are investigated. Simulation results show that the channel fading of both relay and direct links and amplification factor and location of relay node affect the accuracy of TOA based location estimation. ©2010 IEEE.
Çelebi, Hasari Burak; Abdallah, Mohamed M.; Hussain, Syed Imtiaz; Qaraqe, Khalid A.; Alouini, Mohamed-Slim
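The classic single-link form of the bound underlying this analysis is var(τ̂) ≥ 1/(8π²β²·SNR) for effective signal bandwidth β; the paper extends it with relay links and fading averages. A quick numeric sketch of the single-link version (bandwidth value assumed):

```python
import numpy as np

# Textbook single-link TOA ranging bound (the paper's relay-extended
# derivation is not reproduced): var(tau) >= 1 / (8 pi^2 beta^2 SNR).
c = 3e8                          # speed of light, m/s
beta = 10e6                      # effective signal bandwidth, 10 MHz (assumed)
snr_db = np.array([0, 10, 20, 30])
snr = 10.0 ** (snr_db / 10)

std_range = c / (2 * np.sqrt(2) * np.pi * beta * np.sqrt(snr))
for s, r in zip(snr_db, std_range):
    print(f"SNR {s:2d} dB -> ranging std >= {r:.2f} m")
```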
Cognitive relaying and power allocation under channel state uncertainties
In this paper, we present robust joint relay precoder designs and transceiver power allocations for a cognitive radio network under imperfect channel state information (CSI). The secondary (or cognitive) network consists of a pair of single-antenna transceiver nodes and a non-regenerative two-way relay with multiple antennas which aids the communication process between the transceiver pair. The secondary nodes share the spectrum with a licensed primary user (PU) while guaranteeing that the interference to the PU receiver is maintained below a specified threshold. We consider two robust designs: the first is based on the minimization of the total transmit power of the secondary relay node required to provide the minimum quality of service, measured in terms of mean-square error (MSE) of the transceiver nodes, and the second is based on the minimization of the sum-MSE of the transceiver nodes. The robust designs are based on worst-case optimization and take into account known parameters of the error in the CSI to render the performance immune to the presence of errors in the CSI. Though the original problem is non-convex, we show that the proposed designs can be reformulated as tractable convex optimization problems that can be solved efficiently. We illustrate the performance of the proposed designs through some selected numerical simulations. © 2013 IEEE.
Cobalt: A GPU-based correlator and beamformer for LOFAR
Broekema, P. Chris; Mol, J. Jan David; Nijboer, R.; van Amesfoort, A. S.; Brentjens, M. A.; Loose, G. Marcel; Klijn, W. F. A.; Romein, J. W.
For low-frequency radio astronomy, software correlation and beamforming on general purpose hardware is a viable alternative to custom designed hardware. LOFAR, a new-generation radio telescope centered in the Netherlands with international stations in Germany, France, Ireland, Poland, Sweden and the UK, has successfully used software real-time processors based on IBM Blue Gene technology since 2004. Since then, developments in technology have allowed us to build a system based on commercial off-the-shelf components that combines the same capabilities with lower operational cost. In this paper, we describe the design and implementation of a GPU-based correlator and beamformer with the same capabilities as the Blue Gene based systems. We focus on the design approach taken, and show the challenges faced in selecting an appropriate system. The design, implementation and verification of the software system show the value of a modern test-driven development approach. Operational experience, based on three years of operations, demonstrates that a general purpose system is a good alternative to the previous supercomputer-based system or custom-designed hardware.
Youngmin Choo
Sound speed profile (SSP) significantly affects acoustic propagation in the ocean. In this work, the SSP is inverted using compressive sensing (CS) combined with beamforming to indicate the direction of arrivals (DOAs). The travel times and the positions of the arrivals can be approximately linearized using their Taylor expansion with the shape function coefficients that parameterize the SSP. The linear relation between the travel times/positions and the shape function coefficients enables CS to reconstruct the SSP. The conventional objective function in CS is modified to simultaneously exploit the information from the travel times and positions. The SSP is estimated using CS with beamforming of ray arrivals in the SWellEx-96 experimental environment, and the performance is evaluated using the correlation coefficient and mean squared error (MSE) between the true and recovered SSPs. Five hundred synthetic SSPs were generated by randomly choosing the SSP dictionary components; more than 80 percent of all cases have correlation coefficients over 0.7, and the MSE along depth is insignificant except near the sea surface, which shows the validity of the proposed method.
In this paper, we consider a distributed beamforming scheme (DBF) in a spectrum sharing system where multiple secondary users share the spectrum with some licensed primary users under an interference temperature constraint. We assume that the DBF is applied at the secondary users. We first consider optimal beamforming and compare it with the user selection scheme in terms of the outage probability and bit error rate performance metrics. Since perfect feedback is difficult to obtain, we then investigate a limited feedback DBF scheme and develop an analysis for a random vector quantization design algorithm. Specifically, the approximate statistics functions of the squared inner product between the optimal and quantized vectors are derived. With these statistics, we analyze the outage performance. Furthermore, the effects of channel estimation error and number of primary users on the system performance are investigated. Finally, optimal power adaptation and cochannel interference are considered and analyzed. Numerical and simulation results are provided to illustrate our mathematical formalism and verify our analysis. © 2012 IEEE.
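The random vector quantization step analyzed above is easy to prototype: draw 2^B isotropic unit-norm codewords, pick the one maximizing |cᴴh|, and feed back its index. A toy numpy version (antenna count and bit budget are assumptions for the sketch):

```python
import numpy as np

# Toy random vector quantization (RVQ) feedback: random unit-norm
# codebook, best-codeword index fed back to the transmitter.
rng = np.random.default_rng(4)
n_ant, B = 4, 6                                    # 6 bits -> 64 codewords
h = (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)) / np.sqrt(2)

C = rng.standard_normal((2**B, n_ant)) + 1j * rng.standard_normal((2**B, n_ant))
C /= np.linalg.norm(C, axis=1, keepdims=True)      # unit-norm random codebook

idx = int(np.argmax(np.abs(C @ h.conj())))         # index fed back
cos2 = np.abs(np.vdot(C[idx], h)) ** 2 / np.vdot(h, h).real
print(f"codeword {idx}, squared inner product {cos2:.3f} (1.0 = perfect CSI)")
```

The squared inner product printed at the end is exactly the quantity whose statistics the paper approximates to analyze outage under limited feedback.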
Beamforming Through Regularized Inverse Problems in Ultrasound Medical Imaging.
Szasz, Teodora; Basarab, Adrian; Kouame, Denis
Beamforming (BF) in ultrasound (US) imaging has a significant impact on the quality of the final image, controlling its resolution and contrast. Despite its low spatial resolution and contrast, delay-and-sum (DAS) is still extensively used in clinical applications due to its real-time capabilities. The most common alternatives are the minimum variance (MV) method and its variants, which overcome the drawbacks of DAS at the cost of higher computational complexity that limits their use in real-time applications. In this paper, we propose to perform BF in US imaging through a regularized inverse problem based on a linear model relating the reflected echoes to the signal to be recovered. Our approach presents two major advantages: 1) its flexibility in the choice of statistical assumptions on the signal to be beamformed (Laplacian and Gaussian statistics are tested herein) and 2) its robustness to a reduced number of pulse emissions. The proposed framework is flexible and allows for choosing the right tradeoff between noise suppression and sharpness of the resulting image. We illustrate the performance of our approach on both simulated and experimental data, with in vivo examples of carotid and thyroid. Compared with DAS, MV, and two other recently published BF techniques, our method offers better spatial resolution and contrast when using the Laplacian and Gaussian priors, respectively.
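Under a Laplacian prior, such an inverse problem reduces to an l1-regularized least-squares fit, min ½‖y − Hx‖² + λ‖x‖₁, solvable with a basic iterative shrinkage (ISTA) loop. A self-contained toy version on a random linear model (in the paper, H encodes the actual ultrasound propagation; everything here is illustrative):

```python
import numpy as np

# Toy l1-regularized solve via ISTA:
#   min_x 0.5 * ||y - H x||^2 + lam * ||x||_1
def ista(y, H, lam=0.1, n_iter=500):
    L = np.linalg.norm(H, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = x - H.T @ (H @ x - y) / L           # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(5)
H = rng.standard_normal((64, 128))              # stand-in for the US model
x_true = np.zeros(128)
x_true[[10, 50, 90]] = [1.0, -0.5, 2.0]         # three point scatterers
y = H @ x_true + 0.01 * rng.standard_normal(64)
print(np.flatnonzero(np.abs(ista(y, H)) > 0.1)) # support recovered: ~[10 50 90]
```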
Self-oscillations in dynamic systems a new methodology via two-relay controllers
Aguilar, Luis T; Fridman, Leonid; Iriarte, Rafael
This monograph presents a simple and efficient two-relay control algorithm for the generation of self-excited oscillations of a desired amplitude and frequency in dynamic systems. Developed by the authors, the two-relay controller consists of two relays switched by the feedback received from a linear or nonlinear system, and represents a new approach to the self-generation of periodic motions in underactuated mechanical systems. The first part of the book explains the design procedures for two-relay control using three different methodologies – the describing-function method, Poincaré maps, and the locus-of-a-perturbed-relay-system method – and concludes with a stability analysis of the designed periodic oscillations. Two methods to ensure the robustness of two-relay control algorithms are explored in the second part, one based on the combination of the high-order sliding mode controller and backstepping, and the other on higher-order sliding-modes-based reconstruction of uncertainties and their compensation where...
Relay cropping as a sustainable approach: problems and opportunities for sustainable crop production.
Tanveer, Mohsin; Anjum, Shakeel Ahmad; Hussain, Saddam; Cerdà, Artemi; Ashraf, Umair
Climate change, soil degradation, and depletion of natural resources are becoming the most prominent challenges for crop productivity and environmental sustainability in modern agriculture. In the scenario of conventional farming systems, limited options are available to cope with these issues. Relay cropping is a method of multiple cropping where one crop is seeded into a standing second crop well before that crop is harvested. Relay cropping may resolve a number of conflicts such as inefficient use of available resources, controversies in sowing time, fertilizer application, and soil degradation. Relay cropping is a complex suite of different resource-efficient technologies, which possesses the capability to improve soil quality, to increase net return, to increase the land equivalent ratio, and to control weed and pest infestation. The current review emphasizes relay cropping as a tool for crop diversification and environmental sustainability with a special focus on soil. Briefly, the benefits, constraints, and opportunities of relay cropping, keeping in view the goals of higher crop productivity and sustainability, are also discussed in this review. The research and knowledge gaps in relay cropping are also highlighted in order to guide further studies.
Joint Subcarrier Pairing and Resource Allocation for Cognitive Network and Adaptive Relaying Strategy
Soury, Hamza
Recent measurements show that the spectrum is under-utilized by licensed users in wireless communication. Cognitive radio (CR) has been proposed as a suitable solution to manage the inefficient usage of the spectrum and increase the coverage area of wireless networks. The concept is based on allowing a group of secondary users (SUs) to share the unused radio spectrum originally owned by the primary users (PUs). The operation of CR should not cause harmful interference to the PUs. On the other hand, relayed transmission increases the coverage and achievable capacity of communication systems, in particular in CR systems. There are many types of cooperative communications; the two main ones are decode-and-forward (DAF) and amplify-and-forward (AAF). Adaptive relaying is a technique by which the benefits of amplify-and-forward or decode-and-forward can be achieved by switching the forwarding technique according to the quality of the signal. In this dissertation, we investigate power allocation for an adaptive relaying protocol (ARP) scheme in a cognitive system by maximizing the end-to-end rate and searching for the best carrier-pairing distribution. The optimization problem is subject to interference and power budget constraints. The simulation results confirm the efficiency of the proposed adaptive relaying protocol in comparison to other relaying techniques, as well as the consequences of the choice of pairing strategy.
Exploiting Outage and Error Probability of Cooperative Incremental Relaying in Underwater Wireless Sensor Networks
Hina Nasir
This paper embeds a bi-fold contribution for Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on the analysis, a proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE)-efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates the success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, compared to CARQ in a harsh underwater environment.
New perspective on single-radiator multiple-port antennas for adaptive beamforming applications.
Byun, Gangil; Choo, Hosung
One of the most challenging problems in recent antenna engineering is to achieve highly reliable beamforming capabilities in the extremely restricted space of small handheld devices. In this paper, we introduce a new perspective on the single-radiator multiple-port (SRMP) antenna to alter the traditional approach of multiple-antenna arrays for improving beamforming performance with reduced aperture sizes. The major contribution of this paper is to demonstrate the beamforming capability of the SRMP antenna for use as an extremely miniaturized front-end component in more sophisticated beamforming applications. To examine the beamforming capability, the radiation properties and the array factor of the SRMP antenna are theoretically formulated for electromagnetic characterization and are used as complex weights to form adaptive array patterns. Then, its fundamental performance limits are rigorously explored through enumerative studies by varying the dielectric constant of the substrate, and field tests are conducted using beamforming hardware to confirm feasibility. The results demonstrate that the new perspective of the SRMP antenna allows for improved beamforming performance while maintaining consistently smaller aperture sizes compared to traditional multiple-antenna arrays.
Distributed Antenna Channels with Regenerative Relaying: Relay Selection and Asymptotic Capacity
Aitor del Coso
Multiple-input multiple-output (MIMO) techniques have been widely proposed as a means to improve the capacity and reliability of wireless channels, and have become the most promising technology for next-generation networks. However, their practical deployment in current wireless devices is severely affected by antenna correlation, which reduces their impact on performance. One approach to overcoming this limitation is relaying diversity. In relay channels, a set of N wireless nodes aids a source-destination communication by relaying the source data, thus creating a distributed antenna array with uncorrelated path gains. In this paper, we study this multiple relay channel (MRC) following a decode-and-forward (D&F) strategy (i.e., regenerative forwarding) and derive its achievable rate under AWGN. A half-duplex constraint on relays is assumed, as well as distributed channel knowledge at both transmitter and receiver sides of the communication. For this channel, we obtain the optimum relay selection algorithm and the optimum power allocation within the network so that the transmission rate is maximized. Likewise, we bound the ergodic performance of the achievable rate and derive its asymptotic behavior in the number of relays. Results show that the achievable rate of regenerative MRC grows as the logarithm of the Lambert W function of the total number of relays, that is, \(C = \log_2(W_0(N))\). Therefore, D&F relaying cannot achieve the capacity of actual MISO channels.
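To get a feel for the scaling quoted at the end, the asymptote \(C = \log_2(W_0(N))\) can be evaluated directly (scipy's principal-branch Lambert W; a numeric illustration only):

```python
import numpy as np
from scipy.special import lambertw

# Numeric illustration of the asymptotic rate quoted above,
# C = log2(W0(N)), for a growing number N of D&F relays.
for N in (4, 16, 64, 256, 1024):
    C = np.log2(lambertw(N).real)       # W0: principal branch
    print(f"N = {N:5d} relays -> C ~ {C:.3f} bit/s/Hz")
```

The very slow (doubly logarithmic) growth in N is what separates regenerative relaying from a true MISO array of the same size.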
LTE-Advanced Relay Technology and Standardization
Yuan, Yifei
LTE-Advanced Relay Technology and Standardization provides a timely reference work for relay technology with the finalizing of LTE Release 10 specifications. LTE-Advanced is quickly becoming the global standard for 4G cellular communications. The relay technology, as one of the key features in LTE-Advanced, helps not only to improve the system coverage and capacity, but also to save the costs of laying wireline backhaul. As a leading researcher in the field of LTE-Advanced standards, the author provides an in-depth description of LTE-A relay technology, and explains in detail the standard specification and design principles. Readers from both academic and industrial fields can find sections of interest to them: Sections 2 & 4 could benefit researchers in academia and those who are engaged in exploratory work, while Sections 3 & 4 are more useful to engineers. Dr. Yifei Yuan is the Technical Director at the Standards Department of ZTE Inc.
Reversal thyristor-relay direct current commutator
Ivanenko, A.I.
A thyristor-relay commutator used for alternating the leading magnetic field direction in experiments with polarized neutrons is described. The commutator flowsheet is presented. Thyristors, connected so as to allow the relay trigger operation mode, are used as a controllable electronic relay. Two series-connected coils with a total inductance of the order of 0.28 H serve as the electronic relay load. The arc-free current commutation is effected at the moment of minimal current across the load terminals, which makes it possible to easily reverse currents of up to 10 A at voltages v ≤ 150 V. The experience gained within a year of operation has shown that the commutator meets the requirements of reliability and tuning.
Developing a Domain Model for Relay Circuits
Haxthausen, Anne Elisabeth
In this paper we stepwise develop a domain model for relay circuits as used in railway control systems. First we provide an abstract, property-oriented model of networks consisting of components that can be glued together with connectors. This model is strongly inspired by a network model for railways made by Bjørner et al.; however, our model is more general: the components can be of any kind and can later be refined to e.g. railway components or circuit components. Then we show how the abstract network model can be refined into an explicit model for relay circuits. The circuit model describes the statics as well as the dynamics of relay circuits, i.e. how a relay circuit can be composed legally from electrical components as well as how the components may change state over time. Finally the circuit model is transformed into an executable model, and we show how a concrete circuit can be defined...
Evaluation of Harmonics Impact on Digital Relays
Kinan Wannous
This paper presents the concept of the impact of harmonic distortion on a digital protection relay. The aim is to verify and determine the reasons for a mal-trip or failure to trip of protection relays; the suggested treatment of the harmonic distortion is explained by a mathematical model in the Matlab Simulink programming environment. The digital relays have been tested under harmonic distortion in order to verify the function of the relay algorithm under abnormal conditions. A comparison between the protection relay algorithm under abnormal conditions and a mathematical model in Matlab Simulink, based on injected harmonics of high values, is provided. The test is separated into different levels: the first level is based on the effect of an individual harmonic and of mixed harmonics. The test includes the effect of the harmonics on the location of the fault point within the distance protection zones. This paper is a new proposal in the signal processing of power quality disturbances using Matlab Simulink and the power quality impact on the measurements of power system quantities; the test simulates the function of protection in power systems in terms of calculating the current and voltage values of short circuits and their faults. The paper includes several tests: frequency variations and decomposition of voltage waveforms with Fourier transforms (model and commercial relay), the effect of the power factor on the location of fault points, the relation between the tripping time and the total harmonic distortion (THD) levels in a commercial relay, and a comparison of the THD capture between the commercial relay and the model.
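The THD figure that such tests correlate with tripping time is straightforward to compute from an FFT; a small self-contained example on a synthetic 50 Hz waveform with 3rd and 5th harmonics (all amplitudes invented):

```python
import numpy as np

# Minimal THD computation: THD = sqrt(sum of harmonic amplitudes^2)
# divided by the fundamental amplitude.
fs, f0 = 10_000, 50                          # sampling rate and fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)                # 10 full cycles -> leakage-free FFT
v = (np.sin(2 * np.pi * f0 * t)
     + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)  # 20% third harmonic
     + 0.1 * np.sin(2 * np.pi * 5 * f0 * t)) # 10% fifth harmonic

V = np.abs(np.fft.rfft(v)) / (len(v) / 2)    # single-sided amplitude spectrum
k = int(f0 * len(v) / fs)                    # FFT bin of the fundamental
thd = np.sqrt(np.sum(V[2 * k::k] ** 2)) / V[k]
print(f"THD = {100 * thd:.1f} %")            # ~22.4 % for 0.2 / 0.1 harmonics
```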
In this work, we consider a cooperative network where multiple relay nodes having different modulation capabilities assist the end-to-end communication between a source and its destination. Firstly, we evaluate the effective capacity (EC) performance of the network under study. According to the analysis, an EC-based relay selection criterion is proposed. Based on the proposed selection rule and half-duplex decode-and-forward protocol, the activated relays cooperatively help with the packet transmission from the source. At the destination, packet combining is taken into account to improve the quality of service. Compared to the popular scheme, opportunistic relay selection, numerical results are provided to prove the validity and advantages of our proposed scheme in certain scenarios. Moreover, the analysis presented herein offers a convenient tool to the relaying transmission design, specifically on which relay selection scheme should be used as well as how to choose the receiving strategy between with and without packet combining at the destination. © 2013 IEEE.
Relay testing parametric investigation of seismic fragility
Bandyopadhyay, K.; Hofmayer, C.; Kassir, M.; Pepper, S.
The seismic capacity of most electrical equipment is governed by malfunction of relays. An evaluation of the existing relay test data base at Brookhaven National Laboratory (BNL) has indicated that the seismic fragility of a relay may depend on various parameters related to the design or the input motion. In particular, the electrical mode, contact state, adjustment, chatter duration acceptance limit, and the frequency and the direction of the vibration input have been considered to influence the relay fragility level. For a particular relay type, the dynamics of its moving parts depends on the exact model number and vintage and hence, these parameters may also influence the fragility level. In order to investigate the effect of most of these parameters on the seismic fragility level, BNL has conducted a relay test program. The testing has been performed at Wyle Laboratories. Establishing the correlation between the single frequency fragility test input and the corresponding multifrequency response spectrum (TRS) is also an objective of this test program. This paper discusses the methodology used for testing and presents a brief summary of important test results. 1 ref., 10 figs
Airborne relay-based regional positioning system.
Lee, Kyuman; Noh, Hongjun; Lim, Jaesung
Ground-based pseudolite systems have some limitations, such as low vertical accuracy, multipath effects and near-far problems. These problems are not significant in airborne-based pseudolite systems. However, the monitoring of pseudolite positions is required because of the mobility of the platforms on which the pseudolites are mounted, and this causes performance degradation. To address these pseudolite system limitations, we propose an airborne relay-based regional positioning system that consists of a master station, reference stations, airborne relays and a user. In the proposed system, navigation signals are generated from the reference stations located on the ground and are relayed via the airborne relays. Unlike in conventional airborne-based systems, the user in the proposed system sequentially estimates both the locations of airborne relays and his/her own position. Therefore, a delay due to monitoring does not occur, and the accuracy is not affected by the movement of airborne relays. We conducted several simulations to evaluate the performance of the proposed system. Based on the simulation results, we demonstrated that the proposed system guarantees a higher accuracy than airborne-based pseudolite systems, and it is feasible despite the existence of clock offsets among reference stations.
Relay Runners Catch The Rays
Athletes sizzled around CERN on Wednesday 19 May at the 34th annual relay race. On one of the warmest days of the year so far, sunkissed competitors ran for the finish line and then straight for the drinks table. The Shabbys were on fire again, hurtling across the line first in a time of 10 min. 42.6 sec. and making an even stronger claim to being hailed as the traditional winners of the race with their fourth triumph in a row. Also on form were the Lynx Runners who won the Veteran's trophy, continuing their winning ways since 2002 and placing 29th overall. Ildefons Magrans of the ALICE Quarks on the Loose team ran the fastest 1000m in a time of 2 min. 47 sec. Second-placed Charmilles Technologies won the Open category in a time of 11 min. 03 sec., taking the prize for teams whose members work in different departments or who come from outside CERN. The OPALadies won the women's trophy and placed 48th. With 9 trophies up for grabs, more than 300 people in 55 teams ran the fun run, covering distances of 1000m ...
Analysis of Simulated and Measured Indoor Channels for mm-Wave Beamforming Applications
Karstensen, Anders; Fan, Wei; Zhang, Fengchun
was investigated using both single beam and multiple beams, with two different power allocation schemes applied to multi-beamforming. Channel measurements were performed at 28-30 GHz using a vector network analyzer equipped with a biconical antenna as the transmit antenna and a rotated horn antenna as the receive antenna. 3D ray tracing simulations were carried out in the same replicated propagation environments. Based on measurement and ray tracing simulation data, it is shown that RT-assisted beamforming performs well both for single and multi-beamforming in these two representative indoor propagation...
Laser beam-forming by deformable mirror for laser isotope separation
Nemoto, Koshichi; Fujii, Takashi; Goto, Naohiko
A rectangular laser beam of uniform intensity is very suitable for laser isotope separation. In this paper, we propose a beam-forming system which consists of two deformable mirrors. One of the mirrors changes the beam intensity and the other compensates for phase distortion. We developed a deformable mirror for beam-forming whose deformed surface is similar to the ideal mirror surface for beam-forming. We reshaped a Gaussian-like He-Ne laser beam into a beam with a more uniform intensity profile using a simple deformable mirror. (author)
Development of an acoustic steam generator leak detection system using delay-and-sum beamformer
Chikazawa, Yoshitaka
A new acoustic steam generator leak detection system using a delay-and-sum beamformer is proposed. The major advantage of the delay-and-sum beamformer is that it provides information on the acoustic source direction. The acoustic source of a sodium-water reaction is assumed to be localized, while the background noise of steam generator operation is uniformly distributed in the steam generator tube region. Therefore, the delay-and-sum beamformer can distinguish the acoustic source of the sodium-water reaction from the steam generator background noise. In this paper, results from numerical analyses are provided to show the fundamental feasibility of the new method. (author)
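The underlying direction-finding idea is simple enough to sketch: delay each sensor's signal by the travel time implied by a candidate direction, sum, and keep the direction maximizing output power. Geometry, sound speed, and the 5 kHz tone below are all illustrative stand-ins, not the steam-generator configuration:

```python
import numpy as np

# Toy delay-and-sum direction finder on a uniform linear sensor array.
fs, c_snd, d = 100_000, 1500.0, 0.15      # sample rate (Hz), m/s, spacing (m)
n_sens, f0, theta_src = 8, 5_000, 25.0    # sensors, tone (Hz), true angle (deg)

t = np.arange(0, 0.02, 1 / fs)
true_tau = d * np.arange(n_sens) * np.sin(np.radians(theta_src)) / c_snd
x = np.array([np.sin(2 * np.pi * f0 * (t - tau)) for tau in true_tau])

angles = np.linspace(-90, 90, 181)
power = []
for th in angles:
    tau = d * np.arange(n_sens) * np.sin(np.radians(th)) / c_snd
    aligned = [np.interp(t + ti, t, xi) for ti, xi in zip(tau, x)]  # undo delays
    power.append(np.mean(np.sum(aligned, axis=0) ** 2))
print("estimated direction:", angles[int(np.argmax(power))], "deg")
```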
Physical Layer Security Using Two-Path Successive Relaying
Qian Yu Liau
Relaying is one of the useful techniques to enhance wireless physical-layer security. Existing literature shows that employing a full-duplex relay instead of a conventional half-duplex relay improves secrecy capacity and secrecy outage probability, but at the price of sophisticated implementation. As an alternative, two-path successive relaying has been proposed to emulate the operation of a full-duplex relay by scheduling a pair of half-duplex relays to assist the source transmission alternately. However, the performance of two-path successive relaying in secrecy communication remains unexplored. This paper proposes a secrecy two-path successive relaying protocol for a scenario with one source, one destination, and two half-duplex relays. The relays operate alternately in a time-division mode to forward messages continuously from source to destination in the presence of an eavesdropper. Analytical results reveal that the use of two half-duplex relays in the proposed scheme contributes towards a quadratically lower probability of interception compared to full-duplex relaying. Numerical simulations show that the proposed protocol achieves the ergodic achievable secrecy rate of full-duplex relaying while delivering the lowest probability of interception and secrecy outage probability compared to the existing half-duplex relaying, full-duplex relaying, and full-duplex jamming schemes.
Optimal Coordination of Directional Overcurrent Relays Using PSO-TVAC Considering Series Compensation
Nabil Mancer
The integration of system compensation such as a Series Compensator (SC) into the transmission line makes the coordination of directional overcurrent relays in a practical power system important and complex. This article presents an efficient variant of the Particle Swarm Optimization (PSO) algorithm based on Time-Varying Acceleration Coefficients (PSO-TVAC) for optimal coordination of directional overcurrent relays (DOCRs) considering the integration of series compensation. Simulation results are compared to other methods to confirm the efficiency of the proposed PSO variant in solving the optimal coordination of directional overcurrent relays in the presence of series compensation.
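The distinguishing ingredient of PSO-TVAC is the linear schedule of the two acceleration coefficients: the cognitive weight c1 ramps down and the social weight c2 ramps up over the run. A minimal sketch (the endpoint values are typical choices from the PSO literature, assumed here for illustration):

```python
# Time-varying acceleration coefficients at the heart of PSO-TVAC:
# c1 (cognitive) decreases and c2 (social) increases linearly with the
# iteration count, shifting the swarm from exploration to exploitation.
def tvac(it, max_it, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    frac = it / max_it
    return (c1_f - c1_i) * frac + c1_i, (c2_f - c2_i) * frac + c2_i

for it in (0, 50, 100):                 # early / mid / late in a 100-iter run
    c1, c2 = tvac(it, 100)
    print(f"iter {it:3d}: c1 = {c1:.2f}, c2 = {c2:.2f}")
```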
Graphene-based fine-tunable optical delay line for optical beamforming in phased-array antennas.
Tatoli, Teresa; Conteduca, Donato; Dell'Olio, Francesco; Ciminelli, Caterina; Armenise, Mario N
The design of an integrated graphene-based fine-tunable optical delay line on silicon nitride for optical beamforming in phased-array antennas is reported. A high value of the optical delay time (τg = 920 ps) together with a compact footprint (4.15 mm²) and low optical loss is achieved with graphene-based Mach-Zehnder interferometer switches and two vertically stacked microring resonators between which a graphene capacitor is placed. The tuning range is obtained by varying the value of the voltage applied to the graphene electrodes, which controls the optical path of the light propagation and therefore the delay time. The graphene provides a faster reconfiguration time and low values of energy dissipation. Such significant advantages, together with a negligible beam-squint effect, allow us to overcome the limitations of conventional RF beamformers. A highly efficient fine-tunable optical delay line for the beam steering of 20 radiating elements up to ±20° in the azimuth direction of a tile in a phased-array antenna of an X-band synthetic aperture radar has been designed.
Optimal relay selection and power allocation for cognitive two-way relaying networks
Pandarakkottilil, Ubaidulla; Aïssa, Sonia
In this paper, we present an optimal scheme for power allocation and relay selection in a cognitive radio network where a pair of cognitive (or secondary) transceiver nodes communicate with each other assisted by a set of cognitive two-way relays
Park, Kihong; Alouini, Mohamed-Slim; Park, Seongho; Ko, Youngchai
In this paper, we consider a two-hop relaying network where one source, one destination, and multiple amplify-and-forward (AF) relays equipped with M antennas operate in a half-duplex mode. In order to compensate for the inherent loss of capacity
Individual Channel Estimation in a Diamond Relay Network Using Relay-Assisted Training
Xianwen He
We consider the training design and channel estimation in the amplify-and-forward (AF) diamond relay network. Our strategy is to transmit the source training in time-multiplexing (TM) mode while each relay node superimposes its own relay training over the amplified received data signal without bandwidth expansion. The principal challenge is to obtain accurate channel state information (CSI) of the second-hop link due to the multiaccess interference (MAI) and cooperative data interference (CDI). To maintain the orthogonality between data and training, a modified relay-assisted training scheme is proposed to mitigate the CDI, where some of the cooperative data at the relay are discarded to accommodate the relay training. Meanwhile, a couple of optimal zero-correlation zone (ZCZ) relay-assisted sequences are designed to avoid MAI. At the destination node, the received signals from the two relay nodes are combined to achieve spatial diversity and enhanced data reliability. Simulation results are presented to validate the performance of the proposed schemes.
Iterative Relay Scheduling with Hybrid ARQ under Multiple User Equipment (Type II) Relay Environments
next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed IRS-HARQ aims to increase the achievable data rate by iteratively scheduling a relatively better UE relay closer to the end user in a probabilistic sense, provided that the relay-to-end user
Buffer-Aided Relaying with Adaptive Link Selection
Zlatanov, Nikola; Schober, Robert; Popovski, Petar
In this paper, we consider a simple network consisting of a source, a half-duplex decode-and-forward relay, and a destination. We propose a new relaying protocol employing adaptive link selection, i.e., in any given time slot, based on the channel state information of the source-relay and the relay-destination link, a decision is made whether the source or the relay transmits. In order to avoid data loss at the relay, adaptive link selection requires the relay to be equipped with a buffer such that data can be queued until the relay-destination link is selected for transmission. We study both delay-constrained and delay-unconstrained transmission. For the delay-unconstrained case, we characterize the optimal link selection policy, derive the corresponding throughput, and develop an optimal power allocation scheme. For the delay-constrained case, we propose to starve the buffer of the relay by choosing...
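A toy version of the buffer-aided selection idea (i.i.d. Rayleigh fading, no delay constraint, and a deliberately simplified metric of instantaneous link rate; the paper's optimal policy is more refined):

```python
import numpy as np

# Toy buffer-aided adaptive link selection: in each slot, serve the
# stronger of the S-R / R-D links, unless the relay's buffer lacks
# enough queued data to serve the R-D link.
rng = np.random.default_rng(6)
buffer_bits, delivered, n_slots = 0.0, 0.0, 10_000
for _ in range(n_slots):
    r_sr = np.log2(1 + rng.exponential(1.0))    # source->relay rate
    r_rd = np.log2(1 + rng.exponential(1.0))    # relay->destination rate
    if r_sr >= r_rd or buffer_bits < r_rd:      # receive into the buffer
        buffer_bits += r_sr
    else:                                       # relay transmits from its queue
        buffer_bits -= r_rd
        delivered += r_rd
print(f"throughput ~ {delivered / n_slots:.3f} bit/s/Hz")
```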
In this paper, we investigate a multiple relay selection scheme for two-way relaying cognitive radio networks where primary users and secondary users operate on the same frequency band. More specifically, cooperative relays using the Amplify-and-Forward (AF) protocol are optimally selected to maximize the sum rate of the secondary users without degrading the Quality of Service (QoS) of the primary users, by respecting a tolerated interference threshold. A strong optimization tool based on a genetic algorithm is employed to solve our formulated optimization problem where discrete relay power levels are considered. Our simulation results show that the practical heuristic approach achieves almost the same performance as the optimal multiple relay selection scheme with either discrete or continuous power distributions. Copyright © 2013 by the Institute of Electrical and Electronic Engineers, Inc.
Clinical evaluation of synthetic aperture sequential beamforming ultrasound in patients with liver tumors
Hansen, Peter Møller; Hemmsen, Martin Christian; Brandt, Andreas Hjelm
Medical ultrasound imaging using synthetic aperture sequential beamforming (SASB) has for the first time been used for clinical patient scanning. Nineteen patients with cancer of the liver (hepatocellular carcinoma or colorectal liver metastases) were scanned simultaneously with conventional...
Polarized Uniform Linear Array System: Beam Radiation Pattern, Beamforming Diversity Order, and Channel Capacity
Xin Su
There have been many studies regarding antenna polarization; however, there have been few publications on the analysis of the channel capacity for polarized antenna systems using the beamforming technique. According to Chung et al., the channel capacity is determined by the density of scatterers and the transmission power, which is obtained based on the assumption that scatterers are uniformly distributed on a 3D spherical scattering model. However, this contradicts practical scenarios, where scatterers may not be uniformly distributed in outdoor environments, and it lacks consideration of the fading channel gain. In this study, we derive the channel capacity of polarized uniform linear array (PULA) systems using the beamforming technique in a practical scattering environment. The results show that, for PULA systems, the channel capacity, which is boosted by beamforming diversity, can be determined using the channel gain, beam radiation pattern, and beamforming diversity order (BDO), where the BDO is dependent on the antenna characteristics and array configurations.
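The beam radiation pattern entering such a capacity expression is, for a uniform linear array, just the steered array factor |wᴴa(θ)|². A short sketch (element count, spacing, and steering angle are illustrative):

```python
import numpy as np

# Array factor of a uniform linear array steered to theta0.
def array_factor(theta_deg, n=8, d_over_lambda=0.5, theta0_deg=20.0):
    k = 2 * np.pi * d_over_lambda
    m = np.arange(n)
    w = np.exp(1j * k * m * np.sin(np.radians(theta0_deg))) / np.sqrt(n)
    a = np.exp(1j * k * m * np.sin(np.radians(theta_deg))[..., None])
    return np.abs(a @ w.conj()) ** 2            # |w^H a(theta)|^2

theta = np.linspace(-90, 90, 361)
af = array_factor(theta)
print("beam peak at", theta[int(np.argmax(af))], "deg")   # ~20 deg
```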
Beamspace Adaptive Beamforming for Hydrodynamic Towed Array Self-Noise Cancellation
Premus, Vincent
... against signal self-nulling associated with steering vector mismatch. Particular attention is paid to the definition of white noise gain as the metric that reflects the level of mainlobe adaptive nulling for an adaptive beamformer...
Radaydeh, Redha Mahmoud; Alouini, Mohamed-Slim
information for the desired user and spatially uncorrelated transmit channels on the effectiveness of transmit beamforming for different interference reduction techniques is investigated. The case of an over-loaded receive array with closely-spaced elements
Lee, Hyun Ho; Park, Kihong; Yang, Hongchuan; Ko, Youngchai
-wise beamforming based on the iterative algorithm. We demonstrate that our proposed scheme can reduce the computational complexity significantly. From our simulation results, it is evident that our proposed scheme leads to a negligible performance loss compared
Transmit Antenna Selection for Multi-User Underlay Cognitive Transmission with Zero-Forcing Beamforming
Hanif, Muhammad; Yang, Hong-Chuan; Alouini, Mohamed-Slim
We present a transmit antenna subset selection scheme for an underlay cognitive system serving multiple secondary receivers. The secondary system employs zero-forcing beamforming to nullify the interference to multiple primary users and eliminate
A phantom study on temporal and subband Minimum Variance adaptive beamforming
Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.
This paper compares experimentally temporal and subband implementations of the Minimum Variance (MV) adaptive beamformer for medical ultrasound imaging. The performance of the two approaches is tested by comparing wire phantom measurements obtained by the research ultrasound scanner SARUS. A 7 MHz ... BK8804 linear transducer was used to scan a wire phantom in which wires are separated by 10 mm. Performance is then evaluated by the lateral Full-Width-at-Half-Maximum (FWHM), the Peak Sidelobe Level (PSL), and the computational load. Beamformed single-emission responses are also compared with those ... from the conventional Delay-and-Sum (DAS) beamformer. The FWHM measured at a depth of 46.6 mm is 0.02 mm (0.09λ) for both adaptive methods, while the corresponding values for Hanning and Boxcar weights are 0.64 and 0.44 mm, respectively. Between the MV beamformers, a -2 dB difference in PSL is noticed in favor...
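For readers unfamiliar with the method compared above, the core of MV (Capon) beamforming is the weight computation below. This is a minimal, generic sketch (the function and variable names are mine, not from the paper), assuming the channel data have already been focusing-delayed:

```python
import numpy as np

def mv_weights(R, a, loading=1e-2):
    """Minimum Variance (Capon) apodization: w = R^{-1} a / (a^H R^{-1} a).

    R : (L, L) estimated spatial covariance of the delayed aperture signals
    a : (L,) steering vector (all ones once focusing delays are applied)
    """
    L = R.shape[0]
    # Diagonal loading regularizes the sample covariance estimate.
    Rl = R + loading * np.trace(R).real / L * np.eye(L)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)
```

A DAS beamformer, by contrast, applies fixed data-independent weights (e.g. the Hanning or Boxcar windows in the phantom comparison above).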
Joint sub-carrier pairing and resource allocation for cognitive networks with adaptive relaying
Soury, Hamza; Bader, F.; Shaat, M.; Alouini, Mohamed-Slim
Relayed transmission in a cognitive radio (CR) environment can be used to increase the coverage and capacity of a communication system that already benefits from the efficient spectrum management developed by CR. Furthermore, there are many types of cooperative communications, including decode-and-forward (DAF) and amplify-and-forward (AAF). In this paper, these techniques are combined in an adaptive mode to benefit from their forwarding advantages; this mode is called the adaptive relaying protocol (ARP). Moreover, this work focuses on joint power allocation in a cognitive radio system operating ARP in a cooperative, multi-carrier mode. The multi-carrier scenario uses orthogonal frequency division multiplexing (OFDM), and the problem is formulated to maximize the end-to-end rate by searching for the best power allocation at the transmitters. This work includes, besides the ARP model, a sub-carrier pairing strategy that allows the relays to switch to the best sub-carrier pairs to increase the throughput. The optimization problem is formulated and solved under the interference and power budget constraints using the sub-gradient algorithm. The simulation results confirm the efficiency of the proposed adaptive relaying protocol in comparison to other relaying techniques. The results also show the effect of the choice of pairing strategy.
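The abstract above solves the constrained rate-maximization problem with a sub-gradient algorithm. The following is a minimal, generic dual sub-gradient sketch for a single-hop simplification (one power vector, one total-power and one interference constraint); the names and the simplified model are mine, not the paper's exact formulation:

```python
import numpy as np

def subgradient_power_allocation(g, g_int, p_budget, i_max, steps=2000, lr=0.01):
    """Maximize sum_k log(1 + g[k] p[k]) s.t. sum(p) <= p_budget and
    g_int . p <= i_max, via water-filling primal updates and dual
    sub-gradient price updates.

    g     : (K,) channel power gains on each sub-carrier
    g_int : (K,) gains from the transmitter to the primary receiver
    """
    lam, mu = 1.0, 1.0                      # dual prices: power, interference
    for _ in range(steps):
        # Primal: water-filling against the priced constraints.
        p = np.maximum(1.0 / (lam + mu * g_int) - 1.0 / g, 0.0)
        # Dual: step along the constraint violations, projected to stay > 0.
        lam = max(lam + lr * (p.sum() - p_budget), 1e-9)
        mu = max(mu + lr * (g_int @ p - i_max), 1e-9)
    return p
```

In the paper's full problem, the same price-update idea runs jointly over the relay powers and the sub-carrier pairing.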
Calibration and Evaluation of Fixed and Mobile Relay-Based System Level Simulator
Shahid Mumtaz
Future wireless communication systems are expected to provide more stable and higher data rate transmissions across OFDMA networks, but mobile stations (MSs) at the cell boundary experience poor spectral efficiency due to path loss from the transmitting antenna and interference from adjacent cells. Therefore, satisfying the QoS (Quality of Service) requirements of each MS at the cell boundary has been an important issue. To resolve this spectral efficiency problem at the cell boundary, deploying relay stations has been actively considered. As multihop relaying involves complex interactions between the routing and medium access control decisions, the extent to which analytical expressions can be used to explore its benefits is limited. Consequently, simulations tend to be the preferred way of assessing the performance of relays. In this paper, we evaluate the performance of relay-assisted OFDMA networks by means of a system level simulator (SLS). We consistently observed that the throughput is increased and the outage is decreased in the relay-assisted OFDMA network, which translates to range extension without any capacity penalty, for the realistic range of propagation and other system parameters investigated.
Performance Analysis of Space Information Networks with Backbone Satellite Relaying for Vehicular Networks
Jian Jiao
Space Information Networks (SIN) with backbone satellite relaying for vehicular network (VN) communications are regarded as an effective strategy to provide diverse vehicular services in a seamless, efficient, and cost-effective manner in rural areas and on highways. In this paper, we investigate the performance of SIN return-channel cooperative communications via an amplify-and-forward (AF) backbone satellite relay for VN communications, where we assume that both the source-destination and relay-destination links undergo Shadowed-Rician fading and the source-relay link follows Rician fading. In this SIN-assisted VN communication scenario, we first obtain the approximate statistical distributions of the equivalent end-to-end signal-to-noise ratio (SNR) of the system. Then, we derive closed-form expressions to efficiently evaluate the average symbol error rate (ASER) of the system. Furthermore, the ASER expressions take into account the effect of perturbation of the backbone relaying satellite, which manifests as an accumulated antenna pointing error. Finally, simulation results are provided to verify the accuracy of our theoretical analysis and show the impact of various parameters on the system performance.
Bader, Ahmed
An embodiment of a non-invasive beamforming add-on apparatus couples to an existing antenna port and rectifies the beam azimuth in the upstream and downstream directions. The apparatus comprises input circuitry that is configured to receive one or more signals from a neighboring node of the linear wireless sensor network; first amplifier circuitry configured to adjust an amplitude of a respective received signal in accordance with a weighting coefficient and invoke a desired phase to a carrier frequency of the received signal thereby forming a first amplified signal; and second amplifier circuitry configured to adjust a gain of the first amplified signal towards upstream and downstream neighbors of the linear wireless sensor in the linear wireless sensor network.
Speech Intelligibility Advantages using an Acoustic Beamformer Display
Begault, Durand R.; Sunder, Kaushik; Godfroy, Martine; Otto, Peter
A speech intelligibility test conforming to the Modified Rhyme Test of ANSI S3.2 "Method for Measuring the Intelligibility of Speech Over Communication Systems" was conducted using a prototype 12-channel acoustic beamformer system. The target speech material (signal) was identified against speech babble (noise), with calculated signal-noise ratios of 0, 5 and 10 dB. The signal was delivered at a fixed beam orientation of 135 deg (re 90 deg as the frontal direction of the array) and the noise at 135 deg (co-located) and 0 deg (separated). A significant improvement in intelligibility from 57% to 73% was found for spatial separation for the same signal-noise ratio (0 dB). Significant effects for improved intelligibility due to spatial separation were also found for higher signal-noise ratios (5 and 10 dB).
An object-oriented multi-threaded software beamformation toolbox
Hansen, Jens Munk; Hemmsen, Martin Christian; Jensen, Jørgen Arendt
Focusing and apodization are an essential part of signal processing in ultrasound imaging. Although the fundamental principles are simple, the dramatic increase in computational power of CPUs, GPUs, and FPGAs motivates the development of software-based beamformers, which further improves image... new beam formation strategies. It is a general 3D implementation capable of handling a multitude of focusing methods, interpolation schemes, and parametric and dynamic apodization. Despite being flexible, it is capable of exploiting parallelization on a single computer, on a cluster, or on both... On a single computer, it mimics the parallelization in a scanner containing multiple beamformers. The focusing is determined using the positions of the transducer elements, presence of virtual sources, and the focus points. For interpolation, a number of interpolation schemes can be chosen, e.g. linear, polyno...
Cross-Layer Admission Control Policy for CDMA Beamforming Systems
Sheng Wei
A novel admission control (AC) policy is proposed for the uplink of a cellular CDMA beamforming system. An approximated power control feasibility condition (PCFC), required by a cross-layer AC policy, is derived. This approximation, however, increases outage probability in the physical layer. A truncated automatic retransmission request (ARQ) scheme is then employed to mitigate the outage problem. In this paper, we investigate the joint design of an AC policy and an ARQ-based outage mitigation algorithm in a cross-layer context. This paper provides a framework for joint AC design among physical, data-link, and network layers. This enables multiple quality-of-service (QoS) requirements to be used more flexibly to optimize system performance. Numerical examples show that by appropriately choosing ARQ parameters, the proposed AC policy can achieve a significant performance gain in terms of reduced outage probability and increased system throughput, while simultaneously guaranteeing all the QoS requirements.
Impact of co-channel interference on the performance of adaptive generalized transmit beamforming
Radaydeh, Redha Mahmoud Mesleh
The impact of co-channel interference on the performance of adaptive generalized transmit beamforming for low-complexity multiple-input single-output (MISO) configurations is investigated. The transmit channels are assumed to be sufficiently separated and to undergo Rayleigh fading conditions. Due to limited space, a single receive antenna is employed to capture the desired user's transmission. The number of active transmit channels is adjusted adaptively based on statistically unordered and/or ordered instantaneous signal-to-noise ratios (SNRs), where the transmitter has no information about the statistics of undesired signals. The adaptation thresholds are identified to guarantee a target performance level, and adaptation schemes with enhanced spectral efficiency or power efficiency are studied and their performance compared under various channel conditions. To facilitate comparison studies, results for the statistics of the instantaneous combined signal-to-interference-plus-noise ratio (SINR) are derived, which can be applied for different fading conditions of interfering signals. The statistics for combined SNR and combined SINR are then used to quantify various performance measures, considering the impact of non-ideal estimation of the desired user's channel state information (CSI) and the randomness in the number of active interferers. Numerical and simulation comparisons for the achieved performance of the adaptation schemes are presented.
Eigenspace-Based Minimum Variance Adaptive Beamformer Combined with Delay Multiply and Sum: Experimental Study
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the limitations of DAS, providing higher image quality. However, the resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra...
Double-Stage Delay Multiply and Sum Beamforming Algorithm: Application to Linear-Array Photoacoustic Imaging
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Adabi, Saba; Nasiriavanaki, Mohammadreza
Photoacoustic imaging (PAI) is an emerging medical imaging modality combining the high spatial resolution of ultrasound (US) imaging with the high contrast of optical imaging. Delay-and-Sum (DAS) is the most common beamforming algorithm in PAI. However, using the DAS beamformer leads to low-resolution images and a considerable contribution of off-axis signals. A new paradigm, namely Delay-Multiply-and-Sum (DMAS), which was originally used as a reconstruction algorithm in confocal microwave imaging...
Ad Hoc Microphone Array Beamforming Using the Primal-Dual Method of Multipliers
Tavakoli, Vincent Mohammad; Jensen, Jesper Rindom; Heusdens, Richard
In recent years, there has been an increasing amount of research aiming at optimal beamforming with ad hoc microphone arrays, mostly with fusion-based schemes. However, the huge computational complexity and communication overhead impede many of these algorithms from being useful in prac... the distributed linearly-constrained minimum variance beamformer using the state-of-the-art primal-dual method of multipliers. We study the proposed algorithm with an experiment...
Non-Orthogonal Opportunistic Beamforming: Performance Analysis and Implementation
Xia, Minghua
Aiming to achieve the sum-rate capacity in multi-user multi-antenna systems where $N_t$ antennas are implemented at the transmitter, opportunistic beamforming (OBF) generates $N_t$ orthonormal beams and serves $N_t$ users during each channel use, which results in high scheduling delay over the users, especially in densely populated networks. Non-orthogonal OBF with more than $N_t$ transmit beams can be exploited to serve more users simultaneously and further decrease scheduling delay. However, the inter-beam interference will inevitably deteriorate the sum-rate. Therefore, there is a tradeoff between sum-rate and scheduling delay for non-orthogonal OBF. In this context, system performance and implementation of non-orthogonal OBF with $N > N_t$ beams are investigated in this paper. Specifically, it is analytically shown that non-orthogonal OBF is an interference-limited system as the number of users $K \to \infty$. When the inter-beam interference reaches its minimum for fixed $N_t$ and $N$, the sum-rate scales as $N \ln\left(\frac{N}{N - N_t}\right)$ and it degrades monotonically with the number of beams $N$ for fixed $N_t$. On the contrary, the average scheduling delay is shown to scale as $\frac{1}{N} K \ln K$ channel uses and it improves monotonically with $N$. Furthermore, two practical non-orthogonal beamforming schemes are explicitly constructed and demonstrated to yield the minimum inter-beam interference for fixed $N_t$ and $N$. This study reveals that, if user traffic is light and one user can be successfully served within a single transmission, non-orthogonal OBF can be applied to obtain lower worst-case delay among the users. On the other hand, if user traffic is heavy, non-orthogonal OBF is inferior to orthogonal OBF in terms of sum-rate and packet delay.
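The sum-rate/delay tradeoff described above is easy to probe numerically. Below is a small Monte-Carlo sketch (my own illustration with random rather than minimum-interference beams, not the paper's construction) that serves each of the N beams to its best user and reports the average sum-rate:

```python
import numpy as np

def obf_sum_rate(n_t=4, n_beams=6, n_users=200, snr=10.0, trials=500, seed=0):
    """Monte-Carlo sum-rate of (non-)orthogonal opportunistic beamforming:
    each beam is assigned to the user with the highest SINR on that beam."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(trials):
        # i.i.d. Rayleigh user channels and random unit-norm transmit beams.
        h = (rng.standard_normal((n_users, n_t)) +
             1j * rng.standard_normal((n_users, n_t))) / np.sqrt(2)
        b = (rng.standard_normal((n_t, n_beams)) +
             1j * rng.standard_normal((n_t, n_beams)))
        b /= np.linalg.norm(b, axis=0)
        gain = np.abs(h @ b) ** 2                  # (n_users, n_beams)
        p = snr / n_beams                          # equal power per beam
        inter = gain.sum(axis=1, keepdims=True) - gain
        sinr = p * gain / (1.0 + p * inter)
        rates.append(np.log2(1.0 + sinr.max(axis=0)).sum())
    return float(np.mean(rates))

# Compare an orthogonal-sized setup (N = N_t) against a non-orthogonal one.
print(obf_sum_rate(n_beams=4), obf_sum_rate(n_beams=6))
```

With the paper's minimum-interference beam constructions instead of random beams, the degradation with $N$ would follow the $N\ln(N/(N-N_t))$ scaling quoted above.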
Discrete radioisotopic relays of a cyclic action
Klempner, K.S.; Vasil'ev, A.G.
A functional diagram of discrete radioisotopic relay equipment (RRP) with cyclic action is examined, and an analysis of its speed and reliability under stationary conditions and transition regimes is presented. A structural diagram of the radioisotopic relay equipment shows three radiation detectors, a pulse standardizer, an integrator, and a power amplifier with a threshold cut-off device. It was established that the basic properties of the RRP - speed and reliability - are determined entirely by the average pulse counting rates from the radiation detector, $n_0$ and $n_1$, in the 0 and 1 states (absence of current in the electromagnetic relay winding and activation of the winding of the output relay), the capacities $N_1$ and $N_2$ of the dual counters, and the transition threshold frequency $f$ of the generator. Formulas are presented which allow engineering calculations for determining the optimum RRP parameters. The high speed and reliability required by the intended production uses of the relay are demonstrated.
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer's application in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from $O(L^3)$ to $O(L^2)$.
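A minimal sketch of the inversion-free idea described above (my own illustration under the standard MV formulation, not the authors' exact iteration): minimize $w^H R w$ subject to $w^H a = 1$ by projected gradient descent, warm-started from the neighboring imaging point.

```python
import numpy as np

def iterative_mv(R, a, w0=None, iters=50, step=None):
    """Projected-gradient MV weights without matrix inversion (O(L^2)/iter).

    R  : (L, L) spatial covariance at the current imaging point
    a  : (L,) steering vector; the constraint is a^H w = 1
    w0 : warm start, e.g. the optimized weights of the neighboring point
    """
    aa = (a.conj() @ a).real
    w = a / aa if w0 is None else w0.copy()     # feasible start: a^H w = 1
    if step is None:
        step = 1.0 / np.trace(R).real           # crude, scale-aware step size
    for _ in range(iters):
        g = R @ w                               # gradient of w^H R w
        g -= a * (a.conj() @ g) / aa            # project onto constraint tangent
        w -= step * g                           # a^H w stays equal to 1
    return w
```

Warm-starting with the adjacent point's weights exploits exactly the observation in the abstract that neighboring received signals, and hence their optimal weights, vary slowly.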
Counterfactual quantum cryptography network with untrusted relay
Chen, Yuanyuan; Gu, Xuemei; Jiang, Dong; Xie, Ling; Chen, Lijun
Counterfactual quantum cryptography allows two remote parties to share a secret key even though a physical particle is not in fact transmitted through the quantum channel. In order to extend the scope of counterfactual quantum cryptography, we use an untrusted relay to construct a multi-user network. The implementation issues are discussed to show that the scheme can be realized with current technologies. We also prove the practical security advantages of the scheme by eliminating the probability that an eavesdropper can directly access the signal or an untrusted relay can perform false operations.
CERN Relay Race: information for drivers
The CERN relay race will take place around the Meyrin site on Thursday, 24 May starting at 12.15. If possible, please avoid driving on the site during this 20-minute period. If you do meet runners while driving your car, please STOP until they have all passed. In addition, there will be a Nordic Walking event which will finish around 12.50. This should not block the roads, but please drive carefully during this time. Thank you for your cooperation. Details on how to register your team for the relay race can be found here.
Deep space optical communication via relay satellite
Dolinar, S.; Vilnrotter, V.; Gagliardi, R.
The application of optical communications for a deep space link via an earth-orbiting relay satellite is discussed. The system uses optical frequencies for the free-space channel and RF links for atmospheric transmission. The relay satellite is in geostationary orbit and contains the optics necessary for data processing and formatting. It returns the data to earth through the RF terrestrial link and also transmits an optical beacon to the satellite for spacecraft return pointing and for the alignment of the transmitting optics. Future work will turn to modulation and coding, pointing and tracking, and optical-RF interfacing.
Antenna subset selection at multi-antenna relay with adaptive modulation
Choi, Seyeong; Hasna, Mazen Omar; Yang, Hongchuan; Alouini, Mohamed-Slim
In this paper, we proposed several antenna selection schemes for cooperative diversity systems with adaptive transmission. The proposed schemes were based on dual-hop relaying where a relay with multiple-antenna capabilities at reception and transmission is deployed between the source and the destination nodes. We analyzed the performance of the proposed schemes by quantifying the average spectral efficiency and the outage probability. We also investigated the trade-off between performance and complexity by comparing the average number of active antennas, path estimations, and signal-to-noise ratio comparisons of the different proposed schemes.
Choi, Seyeong
Hyadi, Amal
This paper investigates a new constrained relay selection scheme for two-way relaying systems where two end terminals communicate simultaneously via a relay. The introduced technique is based on the maximization of the weighted sum rate of both users. To evaluate the performance of the proposed system, the outage probability is derived in a general case (where an arbitrary channel is considered), and then over independently but not necessarily identically distributed (i.n.i.d.) Rayleigh fading channels. The analytical results are verified through simulations.
Advanced Strategic and Tactical Relay Request Management for the Mars Relay Operations Service
Allard, Daniel A.; Wallick, Michael N.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.
This software provides a new set of capabilities for the Mars Relay Operations Service (MaROS) in support of Strategic and Tactical relay, including a highly interactive relay request Web user interface, mission control over relay planning time periods, and mission management of allowed strategic vs. tactical request parameters. Together, these new capabilities expand the scope of the system to include all elements critical for Tactical relay operations. Planning of relay activities spans a time period that is split into two distinct phases. The first phase is called Strategic, which begins at the time that relay opportunities are identified and concludes at the point that the orbiter generates the flight sequences for onboard execution. Any relay request changes from this point on are called Tactical. Tactical requests, otherwise called Orbiter Relay State Changes (ORSC), are highly restricted in terms of what types of changes can be made, and the types of parameters that can be changed may differ from one orbiter to the next. For example, one orbiter may be able to delay the start of a relay request, while another may not. The legacy approach to ORSC management involves exchanges of e-mail with "requests for change" and "acknowledgement of approval," with no other tracking of changes outside of e-mail folders. MaROS Phases 1 and 2 provided the infrastructure for strategic relay for all supported missions. This new version, 3.0, introduces several capabilities that fully expand the scope of the system to include tactical relay. One new feature allows orbiter users to manage and "lock" Planning Periods, which allows the orbiter team to formalize the changeover from Strategic to Tactical operations. Another major feature allows users to interactively submit tactical request changes via a Web user interface. A third new feature allows orbiter missions to specify allowed tactical updates, which are automatically incorporated into the tactical change process.
Relay communications strategies for Mars exploration through 2020
Edwards, Charles D., Jr.; Arnold, B.; DePaula, R.; Kazz, G.; Lee, C.; Noreen, G.
In this paper we will examine NASA's strategy for relay communications support of missions planned for this decade, and discuss options for longer-term relay network evolution in support of second-decade missions.
Switched diversity strategies for dual-hop relaying systems
Gaaloul, Fakhreddine; Alouini, Mohamed-Slim; Radaydeh, Redha M.
This paper investigates the effect of different switched diversity configurations on the implementation complexity and achieved performance of dual-hop amplify-and-forward (AF) relaying networks. A low-complexity model of the relay station
Protective Relay Studies for the Nigerian National Electric 330 kV
Sep 1, 1985 ... protective relay schemes of the National Electric Power Authority. Some of the basic ... Nigerian special system characteristics, schemes to correct existing protection inadequacies ... relays buried in the transformer. A reach of ...
Dynamic Relaying in 3GPP LTE-Advanced Networks
Van Phan Vinh
Relaying is one of the proposed technologies for LTE-Advanced networks. In order to enable flexible and reliable relaying support, the currently adopted architectural structure of LTE networks has to be modified. In this paper, we extend the LTE architecture to enable dynamic relaying, while maintaining backward compatibility with LTE Release 8 user equipments, and without limiting the flexibility and reliability expected from relaying. With dynamic relaying, relays can be associated with base stations on a need basis rather than in a fixed manner which is based only on initial radio planning. Proposals are also given on how to further improve a relay enhanced LTE network by enabling multiple interfaces between the relay nodes and their controlling base stations, which can possibly be based on technologies different from LTE, so that load balancing can be realized. This load balancing can be either between different base stations or even between different networks.
Capacity gains of buffer-aided moving relays
This work investigates the gain due to reduced path loss achieved by deploying buffer-aided moving relays. In particular, the increase in gain due to moving relays is studied for dual-hop broadcast channels and the bidirectional relay channel. It is shown that the gains exploited in these channels through buffer-aided relaying can be enhanced by utilizing the fact that a moving relay can communicate with the terminal closest to it, store the data in the buffer, and then forward the data to the intended destination when it comes into close proximity with the destination. Numerical results show that for both considered channels the achievable rates are increased compared to the case of stationary relays. Numerical results also show that a more significant increase in performance is seen when the relay moves to and fro between the source and the destination.
Teyeb, Oumer Mohammed; Van Phan, Vinh; Redana, Simone
Relaying is one of the proposed technologies for LTE-Advanced networks. In order to enable a flexible and reliable relaying support, the currently adopted architectural structure of LTE networks has to be modified. In this paper, we extend the LTE architecture to enable dynamic relaying, while... maintaining backward compatibility with LTE Release 8 user equipments, and without limiting the flexibility and reliability expected from relaying. With dynamic relaying, relays can be associated with base stations on a need basis rather than in a fixed manner which is based only on initial radio planning... Proposals are also given on how to further improve a relay enhanced LTE network by enabling multiple interfaces between the relay nodes and their controlling base stations, which can possibly be based on technologies different from LTE, so that load balancing can be realized. This load balancing can...
Improper Signaling for Virtual Full-Duplex Relay Systems
Virtual full-duplex (VFD) is a powerful solution to compensate for the rate loss of half-duplex relaying without the need for full-duplex-capable nodes. Inter-relay interference (IRI) challenges the operation of VFD relaying systems. Recently, improper signaling has been employed at both relays of the VFD system to mitigate the IRI by imposing the same signal characteristics on both relays. To further boost the achievable rate performance, an asymmetric time-sharing VFD relaying system is adopted with different improper signals at the half-duplex relays. The joint tuning of the three design parameters improves the achievable rate performance at different ranges of IRI and different relay locations. Extensive simulation results are presented and analyzed to show the achievable rate gain of the proposed system and to understand the system behavior.
Gaafar, Mohamed; Amin, Osama; Schaefer, Rafael F.; Alouini, Mohamed-Slim
Zafar, Ammar; Shaqfeh, Mohammad; Alnuweiri, Hussein; Alouini, Mohamed-Slim
Cache Aided Decode-and-Forward Relaying Networks: From the Spatial View
Junjuan Xia
We investigate the cache technique from a spatial view and study its impact on relaying networks. In particular, we consider a dual-hop relaying network, where decode-and-forward (DF) relays can assist the data transmission from the source to the destination. In addition to traditional dual-hop relaying, we also consider the cache from the spatial view, where the source can prestore the data among the memories of the nodes around the destination. For the DF relaying networks without and with cache, we study the system performance by deriving analytical expressions for the outage probability and symbol error rate (SER). We also derive the asymptotic outage probability and SER in the high transmit power regime, from which we find that the system diversity order can be rapidly increased by using cache and the system performance can be significantly improved. Simulation and numerical results are provided to verify the proposed studies and show that system power resources can be efficiently saved by using the cache technique.
Broadcast Reserved Opportunity Assisted Diversity Relaying Scheme and Its Performance Evaluation
Xia Chen
Relay-based transmission can offer benefits in terms of coverage extension as well as throughput improvement compared to conventional direct transmission. In a relay enhanced cellular (REC) network, where multiple mobile terminals act as relaying nodes (RNs), multiuser diversity gain can be exploited. We propose an efficient relaying scheme, referred to as Broadcast Reserved Opportunity Assisted Diversity (BROAD), for REC networks. Unlike the conventional Induced Multiuser Diversity Relaying (IMDR) scheme, our scheme acquires channel quality information (CQI) by having the destined node (DN) send pilots on a reserved radio resource. The BROAD scheme can significantly decrease the signaling overhead among the mobile RNs while achieving the same multiuser diversity as the conventional IMDR scheme. In addition, an alternative version of the BROAD scheme, named the A-BROAD scheme, is proposed, in which the candidate RN(s) feed back partial or full CQI to the base station (BS) for further scheduling purposes. The A-BROAD scheme achieves a higher throughput than the BROAD scheme at the cost of extra signalling overhead. The theoretical analysis given in this paper demonstrates the feasibility of the schemes in terms of their multiuser diversity gains in a REC network.
Wireless Power Transfer in Cooperative DF Relaying Networks with Log-Normal Fading
Energy harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic, however, focuses on Rayleigh fading channels, which represent outdoor environments. Unlike these studies, in this paper we analyze the performance of wireless power transfer in two-hop decode-and-forward (DF) cooperative relaying systems in indoor channels characterized by log-normal fading. Three well-known EH protocols are considered in our evaluations: a) time switching relaying (TSR), b) power splitting relaying (PSR), and c) ideal relaying receiver (IRR). The performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions for the three systems under consideration. Results reveal that careful selection of the EH time and power splitting factors in the TSR- and PSR-based systems is important to optimize performance. It is also shown that the optimized PSR system has near-ideal performance and that increasing the source transmit power and/or the energy harvester efficiency can further improve performance.
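As a concrete reference for the TSR protocol mentioned above, here is a generic textbook-style sketch (the symbols and the DF rate model are my assumptions, not the paper's log-normal analysis): a fraction alpha of each block is spent harvesting energy, and the remainder is split equally between the two hops.

```python
import math

def tsr_df_rate(p_s, g1, g2, alpha, eta=0.7, noise=1e-3, T=1.0):
    """Time-switching relaying with a DF relay powered only by harvesting.

    p_s    : source transmit power
    g1, g2 : channel power gains of source->relay and relay->destination
    alpha  : fraction of the block used for energy harvesting (0 < alpha < 1)
    eta    : energy-harvester efficiency
    """
    e_harvested = eta * p_s * g1 * alpha * T              # energy at the relay
    p_r = e_harvested / ((1.0 - alpha) * T / 2.0)         # relay power, 2nd hop
    r1 = 0.5 * (1.0 - alpha) * math.log2(1.0 + p_s * g1 / noise)
    r2 = 0.5 * (1.0 - alpha) * math.log2(1.0 + p_r * g2 / noise)
    return min(r1, r2)                                    # end-to-end DF rate
```

Sweeping alpha over (0, 1) and averaging over log-normally distributed g1 and g2 reproduces the kind of EH-time optimization the abstract refers to.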
Multi-Destination Cognitive Radio Relay Network with SWIPT and Multiple Primary Receivers
Al-Habob, Ahmed A.
In this paper, we study the performance of the simultaneous wireless information and power transfer (SWIPT) technique in a multi-destination dual-hop underlay cognitive relay network with multiple primary receivers. Information transmission from the secondary source to the destinations is performed entirely via a decode-and-forward (DF) relay. The relay is assumed to have no embedded power source; it harvests energy from the source signal using a power splitting (PS) protocol and employs opportunistic scheduling to forward the information to the selected destination. We derive analytical expressions for the outage probability assuming Rayleigh fading channels and considering the energy harvesting efficiency at the relay, the source's maximum transmit power, and the primary receivers' interference constraints. The system performance is also studied at high signal-to-noise ratio (SNR) values, where approximate expressions for the outage probability are provided and analyzed in terms of diversity order and coding gain. Monte-Carlo simulations and some numerical examples are provided to validate the derived expressions and to illustrate the effect of various system parameters on the system performance. In contrast to their conventional counterparts, where multi-destination diversity is usually achieved, the results show that multi-destination cognitive radio relay networks with the SWIPT technique achieve a constant diversity order of one.
Relay protection features of frequency-adjustable electric drive
Kuprienko, V. V.
This article considers the relay protection features of high-voltage electric motors operating as part of a frequency-adjustable electric drive. The influence of frequency converters on the operational stability of various types of relay protection used on electric motors is noted. Circuit variants for connecting relay protection devices are suggested. The need to develop special relay protection devices for frequency-adjustable electric drives is substantiated.
Microprocessor protection relays: new prospects or new problems?
Gurevich, Vladimir
The internal architecture and principles of operation of microprocessor-based devices, including so-called "microprocessor protective relays," have little in common with devices called "electric relays." Yet microprocessor-based relay protection devices are gradually driving out traditional electromechanical and even electronic relay protection from virtually all fields of power and electrical engineering. The advantages of microprocessor-based protection over traditional means are far ...
Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.
Ricci, E; Di Domenico, S; Cianca, E; Rossi, T
Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step of any UWB radar imaging system, and currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective in achieving good localization accuracy and fewer false positives. The main contribution, however, is the proposal of an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.
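For context, the simplest artifact-removal baseline in UWB radar imaging (not the statistical algorithm proposed above) subtracts the across-channel average, since the early-time skin/skull reflection is nearly identical on every channel:

```python
import numpy as np

def average_subtraction(signals):
    """Baseline artifact removal for multistatic UWB radar traces.

    signals : (n_channels, n_samples) backscattered time-domain signals
    Returns the traces with the channel-average early-time artifact removed.
    """
    return signals - signals.mean(axis=0, keepdims=True)
```

The paper's contribution replaces this crude artifact estimate with a statistically derived one, which matters in brain imaging where, as the abstract notes, conventional removal algorithms fall short.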
Adaptive digital beamforming for a CDMA mobile communications payload
Munoz-Garcia, Samuel G.; Ruiz, Javier Benedicto
In recent years, Spread-Spectrum Code Division Multiple Access (CDMA) has become a very popular access scheme for mobile communications due to a variety of reasons: excellent performance in multipath environments, high scope for frequency reuse, graceful degradation near saturation, etc. In this way, a CDMA system can support simultaneous digital communication among a large community of relatively uncoordinated users sharing a given frequency band. Nevertheless, there are also important problems associated with the use of CDMA. First, in a conventional CDMA scheme, the signature sequences of asynchronous users are not orthogonal and, as the number of active users increases, the self-noise generated by the mutual interference between users considerably degrades the performance, particularly in the return link. Furthermore, when there is a large disparity in received powers - due to differences in slant range or atmospheric attenuation - the non-zero cross-correlation between the signals gives rise to the so-called near-far problem. This leads to an inefficient utilization of the satellite resources and, consequently, to a drastic reduction in capacity. Several techniques were proposed to overcome this problem, such as Synchronized CDMA - in which the signature sequences of the different users are quasi-orthogonal - and power control. At the expense of increased network complexity and user coordination, these techniques enable the system capacity to be restored by equitably sharing the satellite resources among the users. An alternative solution is presented based upon the use of time-reference adaptive digital beamforming on board the satellite. This technique enables a high number of independently steered beams to be generated from a single phased array antenna, which automatically track the desired user signal and null the unwanted interference source. In order to use a time-reference adaptive antenna in a communications system, the main challenge is to obtain a
49 CFR 236.52 - Relayed cut-section.
... MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES - Rules and Instructions: All Systems - Track Circuits. § 236.52 Relayed cut-section. Where relayed cut-section is used in... (49 CFR 236.52, Transportation, Vol. 4, 2010-10-01)
76 FR 72124 - Internet-Based Telecommunications Relay Service Numbering
... Docket No. 10-191; FCC 11-123] Internet-Based Telecommunications Relay Service Numbering. AGENCY: Federal..., the information collection associated with the Commission's Internet-Based Telecommunications Relay... Telecommunications Relay Service Numbering, CG Docket No. 03-123; WC Docket No. 05-196; WC Docket No. 10-191; FCC 11...
76 FR 65965 - Contributions to the Telecommunications Relay Services Fund
...] Contributions to the Telecommunications Relay Services Fund AGENCY: Federal Communications Commission. ACTION... Telecommunications Relay Services (TRS) Fund in a manner prescribed by regulation that is consistent with and... . SUPPLEMENTARY INFORMATION: This is a summary of the Commission's Contributions to the Telecommunications Relay...
Asymmetric Modulation Gains in Network Coded Relay Networks
Roetter, Daniel Enrique Lucani; Fitzek, Frank
Wireless relays have usually been considered in two ways. On the one hand, a physical layer approach focused on per-packet reliability and involving the relay on each packet transmission. On the other, recent approaches have relied on the judicious activation of the relay at the network level to ...
A low complexity algorithm for multiple relay selection in two-way relaying Cognitive Radio networks
In this paper, a multiple relay selection scheme for two-way relaying cognitive radio networks is investigated. We consider a cooperative Cognitive Radio (CR) system with a spectrum sharing scenario using the Amplify-and-Forward (AF) protocol, where licensed users and unlicensed users operate on the same frequency band. The main objective is to maximize the sum rate of the unlicensed users allowed to share the spectrum with the licensed users while respecting a tolerated interference threshold. A practical low-complexity heuristic approach is proposed to solve our formulated optimization problem. Selected numerical results show that the proposed algorithm reaches a performance close to that of the optimal multiple relay selection scheme with either discrete or continuous power distributions, while providing considerable savings in terms of computational complexity. In addition, these results show that our proposed scheme significantly outperforms the single relay selection scheme.
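A minimal greedy sketch of the kind of low-complexity heuristic described above (my own illustration; the paper's algorithm and its rate model are more involved): relays are added in order of their rate contribution for as long as the interference budget at the primary receiver permits.

```python
import numpy as np

def greedy_relay_selection(rate_gain, interference, i_threshold):
    """Select a subset of relays under a total interference cap.

    rate_gain    : (M,) approximate sum-rate contribution of each relay
    interference : (M,) interference each relay causes to the primary user
    Assumes contributions are roughly additive, a deliberate simplification.
    """
    chosen, total_i = [], 0.0
    for m in np.argsort(-np.asarray(rate_gain)):          # best rate first
        if total_i + interference[m] <= i_threshold:
            chosen.append(int(m))
            total_i += interference[m]
    return chosen
```

A greedy pass like this runs in O(M log M), versus the exponential search over relay subsets (and power levels) required by the optimal scheme the abstract benchmarks against.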
In this paper, we present an optimal scheme for power allocation and relay selection in a cognitive radio network where a pair of cognitive (or secondary) transceiver nodes communicate with each other assisted by a set of cognitive two-way relays. The secondary nodes share the spectrum with a licensed primary user (PU), and each node is assumed to be equipped with a single transmit/receive antenna. The interference to the PU resulting from the transmission from the cognitive nodes is kept below a specified limit. We propose joint relay selection and optimal power allocation among the secondary user (SU) nodes achieving maximum throughput under transmit power and PU interference constraints. A closed-form solution for optimal allocation of transmit power among the SU transceivers and the SU relay is presented. Furthermore, numerical simulations and comparisons are presented to illustrate the performance of the proposed scheme.
Benkhelifa, Fatma
In this paper, we consider a multiuser multiple-input multiple-output (MIMO) decode-and-forward (DF) relay broadcasting channel (BC) with a single source, multiple energy harvesting relays, and multiple destinations. Since the end-to-end sum rate maximization problem is intractable, we tackle a simplified problem where we maximize the sum of the harvested energy at the relays, employ the block diagonalization (BD) procedure at the source, and mitigate the interference between the relay-destination channels. The interference mitigation at the destinations is managed in two ways: either fix the interference covariance matrices at the destinations and update them at each iteration until convergence, or cancel the interference using an algorithm similar to the BD method. We provide numerical results to show the relevance of our proposed solution.
Modelling and Verification of Relay Interlocking Systems
Haxthausen, Anne Elisabeth; Bliguet, Marie Le; Kjær, Andreas
This paper describes how relay interlocking systems as used by the Danish railways can be formally modelled and verified. Such systems are documented by circuit diagrams describing their static layout. It is explained how to derive a state transition system model for the dynamic behaviour...
Relay Feedback Analysis for Double Integral Plants
Zhen Ye
Double integral plants under relay feedback are studied. Complete results on the uniqueness of solutions, existence, and stability of the limit cycles are established using the point transformation method. Analytical expressions are also given for determining the amplitude and period of a limit cycle from the plant parameters.
First Things First: Internet Relay Chat Openings.
Rintel, E. Sean; Mulholland, Joan; Pittam, Jeffery
Argues that Internet Relay Chat (IRC) research needs to systematically address links between interaction structures, technological mediation and the instantiation and development of interpersonal relationships. Finds that openings that occur directly following user's entries into public IRC channels are often ambiguous, can disrupt relationship…
In this work, we propose an iterative relay scheduling with hybrid ARQ (IRS-HARQ) scheme which realizes fast jump-in/successive relaying and subframe-based decoding under multiple user equipment (UE) relay environments applicable to next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed IRS-HARQ aims to increase the achievable data rate by iteratively scheduling a relatively better UE relay closer to the end user in a probabilistic sense, provided that the relay-to-end-user link is operated in an open-loop and transparent mode. The latter is due to the fact that not only are there no dedicated control channels between the UE relay and the end user, but also no new cell is created. Under this open-loop and transparent mode, our proposed protocol is implemented by partially exploiting the channel state information based on the overhearing mechanism of ACK/NACK for HARQ. Further, the iterative scheduling enables UE-to-UE direct communication in proximity, which offers spatial frequency reuse and energy saving.
Using a micromachined magnetostatic relay in commutating a DC motor
Tai, Yu-Chong (Inventor); Wright, John A. (Inventor); Lilienthal, Gerald (Inventor)
A DC motor is commutated by rotating a magnetic rotor to induce a magnetic field in at least one magnetostatic relay in the motor. Each relay is activated in response to the magnetic field to deliver power to at least one corresponding winding connected to the relay. In some cases, each relay delivers power first through a corresponding primary winding and then through a corresponding secondary winding to a common node. Specific examples include a four-pole, three-phase motor in which each relay is activated four times during one rotation of the magnetic rotor.
Microwave beamforming for non-invasive patient-specific hyperthermia treatment of pediatric brain cancer
Burfeindt, Matthew J; Zastrow, Earl; Hagness, Susan C; Van Veen, Barry D; Medow, Joshua E
We present a numerical study of an array-based microwave beamforming approach for non-invasive hyperthermia treatment of pediatric brain tumors. The transmit beamformer is designed to achieve localized heating - that is, to achieve constructive interference and selective absorption of the transmitted electromagnetic waves at the desired focus location in the brain while achieving destructive interference elsewhere. The design process takes into account patient-specific and target-specific propagation characteristics at 1 GHz. We evaluate the effectiveness of the beamforming approach using finite-difference time-domain simulations of two MRI-derived child head models from the Virtual Family (IT'IS Foundation). Microwave power deposition and the resulting steady-state thermal distribution are calculated for each of several randomly chosen focus locations. We also explore the robustness of the design to mismatch between the assumed and actual dielectric properties of the patient. Lastly, we demonstrate the ability of the beamformer to suppress hot spots caused by pockets of cerebrospinal fluid (CSF) in the brain. Our results show that microwave beamforming has the potential to create localized heating zones in the head models for focus locations that are not surrounded by large amounts of CSF. These promising results suggest that the technique warrants further investigation and development.
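The "constructive interference at the focus" idea above is, at its core, phase conjugation. A minimal sketch (my own, ignoring the study's patient-specific optimization and hot-spot suppression):

```python
import numpy as np

def focusing_weights(h_focus):
    """Phase-conjugate (time-reversal) transmit weights for one frequency.

    h_focus : (N,) complex channel from each antenna to the focal voxel,
              e.g. extracted from an FDTD simulation of the head model.
    Co-phases all element contributions so fields add in phase at the focus.
    """
    w = h_focus.conj()
    return w / np.linalg.norm(w)
```

The beamformer in the study goes further by also shaping destructive interference away from the focus, e.g. at CSF pockets, rather than only maximizing the focal field.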
Microcomb-Based True-Time-Delay Network for Microwave Beamforming With Arbitrary Beam Pattern Control
Xue, Xiaoxiao; Xuan, Yi; Bao, Chengying; Li, Shangyuan; Zheng, Xiaoping; Zhou, Bingkun; Qi, Minghao; Weiner, Andrew M.
Microwave phased array antennas (PAAs) are very attractive for defense applications and high-speed wireless communications for their abilities of fast beam scanning and complex beam pattern control. However, traditional PAAs based on phase shifters suffer from the beam-squint problem and have limited bandwidths. True-time-delay (TTD) beamforming based on low-loss photonic delay lines can solve this problem, but it is still quite challenging to build large-scale photonic TTD beamformers due to their high hardware complexity. In this paper, we demonstrate a photonic TTD beamforming network based on a miniature microresonator frequency comb (microcomb) source and dispersive time delay. A method incorporating optical phase modulation and programmable spectral shaping is proposed for positive and negative apodization weighting to achieve arbitrary microwave beam pattern control. The experimentally demonstrated TTD beamforming network can support a PAA with 21 elements. The microwave frequency range is $8\sim20$ GHz, and the beam scanning range is $\pm 60.2^\circ$. Detailed measurements of the microwave amplitudes and phases are performed. The beamforming performances of Gaussian beams, rectangular beams, and beam notch steering are evaluated through simulations by assuming a uniform radiating antenna array. The scheme can potentially support larger PAAs with hundreds of elements by increasing the number of comb lines with broadband microcomb generation.
An evaluation of the performance of two binaural beamformers in complex and dynamic multitalker environments.
Best, Virginia; Mejia, Jorge; Freeston, Katrina; van Hoesel, Richard J; Dillon, Harvey
Binaural beamformers are super-directional hearing aids created by combining microphone outputs from each side of the head. While they offer substantial improvements in SNR over conventional directional hearing aids, the benefits (and possible limitations) of these devices in realistic, complex listening situations have not yet been fully explored. In this study we evaluated the performance of two experimental binaural beamformers. Testing was carried out using a horizontal loudspeaker array. Background noise was created using recorded conversations. Performance measures included speech intelligibility, localization in noise, acceptable noise level, subjective ratings, and a novel dynamic speech intelligibility measure. Participants were 27 listeners with bilateral hearing loss, fitted with BTE prototypes that could be switched between conventional directional or binaural beamformer microphone modes. Relative to the conventional directional microphones, both binaural beamformer modes were generally superior for tasks involving fixed frontal targets, but not always for situations involving dynamic target locations. Binaural beamformers show promise for enhancing listening in complex situations when the location of the source of interest is predictable.
Improved Plane-Wave Ultrasound Beamforming by Incorporating Angular Weighting and Coherent Compounding in Fourier Domain.
Chen, Chuan; Hendriks, Gijs A G M; van Sloun, Ruud J G; Hansen, Hendrik H G; de Korte, Chris L
In this paper, a novel processing framework is introduced for Fourier-domain beamforming of plane-wave ultrasound data, which incorporates coherent compounding and angular weighting in the Fourier domain. Angular weighting implies spectral weighting by a 2-D steering-angle-dependent filtering template. The design of this filter is also optimized as part of this paper. Two widely used Fourier-domain plane-wave ultrasound beamforming methods, i.e., Lu's f-k and Stolt's f-k methods, were integrated in the framework. To enable coherent compounding in the Fourier domain for the Stolt's f-k method, the original Stolt's f-k method was modified to achieve alignment of the spectra for different steering angles in k-space. The performance of the framework was compared for both methods with and without angular weighting using experimentally obtained data sets (phantom and in vivo), and data sets (phantom) provided by the IEEE IUS 2016 plane-wave beamforming challenge. The addition of angular weighting enhanced the image contrast while preserving image resolution. This resulted in images of equal quality to those obtained by conventionally used delay-and-sum (DAS) beamforming with apodization and coherent compounding. Given the lower computational load of the proposed framework compared to DAS, it can therefore, to our knowledge, be concluded that it outperforms commonly used beamforming methods such as Stolt's f-k, Lu's f-k, and DAS.
Effects of line fiducial parameters and beamforming on ultrasound calibration.
Ameri, Golafsoun; Baxter, John S H; McLeod, A Jonathan; Peters, Terry M; Chen, Elvis C S
Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures.
Compressive MIMO Beamforming of Data Collected in a Refractive Environment
Wagner, Mark; Nannuru, Santosh; Gerstoft, Peter
The phenomenon of ducting is caused by abnormal atmospheric refractivity patterns and is known to allow electromagnetic waves to propagate over the horizon with unusually low propagation loss. It is unknown what effect ducting has on multiple input multiple output (MIMO) channels, particularly its effect on multipath propagation in MIMO channels. A high-accuracy angle-of-arrival and angle-of-departure estimation technique for MIMO communications, which we refer to as compressive MIMO beamforming, was tested on simulated data and then applied to experimental data taken from an over-the-horizon MIMO test bed located in a known ducting hot spot in Southern California. The multipath channel was estimated from the receiver data recorded over a period of 18 days, and an analysis was performed on the recorded data. The goal is to observe the evolution of the MIMO multipath channel as atmospheric ducts form and dissipate, to gain some understanding of the behavior of channels in a refractive environment. This work is motivated by the idea that some multipath characteristics of MIMO channels within atmospheric ducts could yield important information about the duct.
Rank-Constrained Beamforming for MIMO Cognitive Interference Channel
Duoying Zhang
This paper considers the spectrum-sharing multiple-input multiple-output (MIMO) cognitive interference channel, in which multiple primary users (PUs) coexist with multiple secondary users (SUs). An interference alignment (IA) approach is introduced that guarantees that secondary users access the licensed spectrum without causing harmful interference to the PUs. A rank-constrained beamforming design is proposed in which the rank of the interference and of the desired signals is considered. The standard interference metric for the primary link, that is, interference temperature, is investigated and redesigned. The work provides a further improvement that optimizes the dimension of the interference in the cognitive interference channel, instead of the power of the interference leakage. Due to the nonconvexity of the rank, the developed optimization problems are further approximated in convex form and are solved by choosing the transmitter precoder and receiver subspace iteratively. Numerical results show that the proposed designs can improve the achievable degrees of freedom (DoF) of the primary links and provide considerable sum rate for both secondary and primary transmissions under the rank constraints.
Imperfect generalized transmit beamforming with co-channel interference cancelation
The performance of a generalized single-stream transmit beamforming scheme employing receive co-channel interference-steering algorithms in slowly varying and flat fading channels is analyzed. The impact of imperfect prediction of channel state information (CSI) for the desired user's spatially uncorrelated transmit channels is considered. Both dominant interference cancelation and adaptive arbitrary interference cancelation algorithms for closely spaced receive antennas are used. The impact of outdated statistical ordering of the interferers' instantaneous powers on the effectiveness of dominant interference cancelation is investigated against the less complex adaptive arbitrary cancelation scheme. For the system models described above, new exact formulas for the statistics of the combined signal-to-interference-plus-noise ratio (SINR) are derived, from which results for conventional maximum ratio transmission (MRT) and best transmit channel selection schemes can be deduced as limiting cases. The results presented herein can be used to obtain quantitative measures for various performance metrics, and in addition to investigate the performance-complexity tradeoff for different multiple-antenna system models.
Relay Architectures for 3GPP LTE-Advanced
Peters, Steven W.
The Third Generation Partnership Project's Long Term Evolution-Advanced is considering relaying for cost-effective throughput enhancement and coverage extension. While analog repeaters have been used to enhance coverage in commercial cellular networks, the use of more sophisticated fixed relays is relatively new. The main challenge faced by relay deployments in cellular systems is overcoming the extra interference added by the presence of relays. Most prior work on relaying, however, does not consider interference. This paper analyzes the performance of several emerging half-duplex relay strategies in interference-limited cellular systems: one-way, two-way, and shared relays. The performance of each strategy as a function of location, sectoring, and frequency reuse is compared with localized base station coordination. One-way relaying is shown to provide modest gains over single-hop cellular networks in some regimes. Shared relaying is shown to approach the gains of local base station coordination at reduced complexity, while two-way relaying further reduces complexity but only works well when the relay is close to the handset. Frequency reuse of one, where each sector uses the same spectrum, is shown to have the highest network throughput. Simulations with realistic channel models provide performance comparisons that reveal the importance of interference mitigation in multihop cellular networks.
Improving source discrimination performance by using an optimized acoustic array and adaptive high-resolution CLEAN-SC beamforming
Luesutthiviboon, S.; Malgoezar, A.M.N.; Snellen, M.; Sijtsma, P.; Simons, D.G.
Beamforming performance can be improved in two ways: optimizing the location of microphones on the acoustic array and applying advanced beamforming algorithms. In this study, the effects of the two approaches are studied. An optimization method is developed to optimize the location of microphones...
Opportunistic Fixed Gain Bidirectional Relaying with Outdated CSI
Khan, Fahd Ahmed; Tourki, Kamel; Alouini, Mohamed-Slim; Qaraqe, Khalid A.
In a network with multiple relays, relay selection has been shown to be an effective scheme to achieve diversity as well as to improve the overall throughput. This paper studies the impact of using outdated channel state information for relay selection on the performance of a network where two sources communicate with each other via fixed-gain amplify-and-forward relays. For a Rayleigh faded channel, closed-form expressions for the outage probability, moment generating function and symbol error rate are derived. Simulation results are also presented to corroborate the derived analytical results. It is shown that adding relays does not improve the performance if the channel is substantially outdated. Furthermore, relay location is also taken into consideration and it is shown that the performance can be improved by placing the relay closer to the source whose channel is more outdated. © 2015 IEEE.
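To make the effect of outdated CSI concrete, here is a minimal Monte-Carlo sketch, not the paper's closed-form fixed-gain AF analysis: relays are selected on stale channel estimates that are correlated with the true channels through an assumed Gauss-Markov coefficient rho, and outage is measured on the true channel. All numeric values are illustrative.

```python
# Monte-Carlo sketch: relay selection on outdated CSI (assumed Gauss-Markov
# correlation model); as rho -> 0 the selection gain vanishes.
import numpy as np

rng = np.random.default_rng(1)
n_relays, n_trials, gamma_th = 4, 200_000, 1.0    # assumed values

def outage(rho):
    h_old = (rng.standard_normal((n_trials, n_relays))
             + 1j * rng.standard_normal((n_trials, n_relays))) / np.sqrt(2)
    err = (rng.standard_normal((n_trials, n_relays))
           + 1j * rng.standard_normal((n_trials, n_relays))) / np.sqrt(2)
    h_now = rho * h_old + np.sqrt(1 - rho**2) * err   # channel at transmit time
    pick = np.argmax(np.abs(h_old) ** 2, axis=1)      # select on outdated CSI
    g = np.abs(h_now[np.arange(n_trials), pick]) ** 2
    return np.mean(g < gamma_th)                      # empirical outage

for rho in (1.0, 0.9, 0.5, 0.0):
    print(f"rho={rho:3.1f}  outage={outage(rho):.3f}")
```

With rho = 0 the outage approaches that of a single Rayleigh link (about 0.63 here), mirroring the abstract's observation that adding relays does not help when the CSI is substantially outdated.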
Experimental performance assessment of the sub-band minimum variance beamformer for ultrasound imaging
Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom
Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides the broadband ultrasound signals into sub-bands (MVS) to conform with the narrow-band assumption of the original MV theory. This approach is investigated here using experimental Synthetic Aperture (SA) data from wire and cyst phantoms. A 7 MHz linear array transducer is used with the SARUS experimental ultrasound scanner for the data acquisition. The lateral resolution and the contrast obtained are evaluated and compared with those from the conventional Delay-and-Sum (DAS) beamformer and the MV temporal implementation (MVT). From the wire phantom the Full-Width-at-Half-Maximum (FWHM) measured at a depth...
Double-Stage Delay Multiply and Sum Beamforming Algorithm Applied to Ultrasound Medical Imaging.
Mozaffarzadeh, Moein; Sadeghi, Masume; Mahloojifar, Ali; Orooji, Mahdi
In ultrasound (US) imaging, delay and sum (DAS) is the most common beamformer, but it leads to low-quality images. Delay multiply and sum (DMAS) was introduced to address this problem. However, the images reconstructed using DMAS still suffer from high side-lobe levels and low noise suppression. Here, a novel beamforming algorithm is introduced based on expansion of the DMAS formula. We find that a DAS algebra appears inside the expansion, and we propose using DMAS in place of that DAS algebra. The introduced method, namely double-stage DMAS (DS-DMAS), is evaluated numerically and experimentally. The quantitative results indicate that DS-DMAS yields approximately 25% lower side-lobe levels compared with DMAS. Moreover, the introduced method leads to 23%, 22% and 43% improvement in signal-to-noise ratio, full width at half-maximum and contrast ratio, respectively, compared with the DMAS beamformer. Copyright © 2018. Published by Elsevier Inc.
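The contrast between the two combiners is easy to see numerically. The sketch below applies DAS and the standard signed square-root pairwise DMAS combination to a set of already delay-aligned channel samples for one image point; the aligned samples are assumed test data, and the double-stage reapplication of DMAS to the DAS terms in the expanded product (the paper's DS-DMAS) is not reproduced here.

```python
# DAS vs. standard DMAS combining of delay-aligned channel samples.
import numpy as np
from itertools import combinations

def das(s):
    return np.sum(s)                       # plain coherent sum

def dmas(s):
    total = 0.0
    for i, j in combinations(range(len(s)), 2):
        p = s[i] * s[j]
        total += np.sign(p) * np.sqrt(abs(p))   # signed geometric pairing
    return total

aligned = np.array([0.9, 1.1, 1.0, 0.95, 1.05])  # assumed aligned samples
print("DAS :", das(aligned))
print("DMAS:", dmas(aligned))
```

Because DMAS multiplies channel pairs, coherent (correlated) signals are reinforced relative to uncorrelated noise, which is the source of its side-lobe advantage.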
Exact Performance Analysis of Dual-Hop Semi-Blind AF Relaying over Arbitrary Nakagami-m Fading Channels
Xia, Minghua; Xing, Chengwen; Wu, Yik-Chung; Aissa, Sonia
Relay transmission is promising for future wireless systems due to its significant cooperative diversity gain. The performance of dual-hop semi-blind amplify-and-forward (AF) relaying systems was extensively investigated for transmissions over Rayleigh fading channels or Nakagami-m fading channels with integer fading parameter. For the general Nakagami-m fading with arbitrary m values, exact closed-form system performance analysis is more challenging. In this paper, we explicitly derive the moment generating function (MGF), probability density function (PDF) and moments of the end-to-end signal-to-noise ratio (SNR) over arbitrary Nakagami-m fading channels with a semi-blind AF relay. With these results, the system performance evaluation in terms of outage probability, average symbol error probability, ergodic capacity and diversity order is conducted. The analysis developed in this paper applies to any semi-blind AF relaying system with fixed relay gain, and two major strategies for computing the relay gain are compared in terms of system performance. All analytical results are corroborated by simulation results and they are shown to be efficient tools to evaluate system performance.
Asghari, Vahid Reza
We propose adopting a cooperative relaying technique in spectrum-sharing cognitive radio (CR) systems to more effectively and efficiently utilize available transmission resources, such as power, rate, and bandwidth, while adhering to the quality of service (QoS) requirements of the licensed (primary) users of the shared spectrum band. In particular, we first consider that the cognitive (secondary) user's communication is assisted by an intermediate relay that implements the decode-and-forward (DF) technique onto the secondary user's relayed signal to help with communication between the corresponding source and the destination nodes. In this context, we obtain first-order statistics pertaining to the first- and second-hop transmission channels, and then, we investigate the end-to-end performance of the proposed spectrum-sharing cooperative relaying system under resource constraints defined to assure that the primary QoS is unaffected. Specifically, we investigate the overall average bit error rate (BER), ergodic capacity, and outage probability of the secondary's communication subject to appropriate constraints on the interference power at the primary receivers. We then consider a general scenario where a cluster of relays is available between the secondary source and destination nodes. In this case, making use of the partial relay selection method, we generalize our results for the single-relay scheme and obtain the end-to-end performance of the cooperative spectrum-sharing system with a cluster of L available relays. Finally, we examine our theoretical results through simulations and comparisons, illustrating the overall performance of the proposed spectrum-sharing cooperative system and quantifying its advantages for different operating scenarios and conditions. © 2011 IEEE.
Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading
Since the electromagnetic spectrum resource becomes more and more scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and it improves slowly with increasing average SNR. Furthermore, a larger Nakagami-m fading parameter on interference channels deteriorates system performance slightly. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and they are shown to be efficient tools for exact evaluation of system performance.
Delay and Standard Deviation Beamforming to Enhance Specular Reflections in Ultrasound Imaging.
Bandaru, Raja Sekhar; Sornes, Anders Rasmus; Hermans, Jeroen; Samset, Eigil; D'hooge, Jan
Although interventional devices, such as needles, guide wires, and catheters, are best visualized by X-ray, real-time volumetric echography could offer an attractive alternative as it avoids ionizing radiation, provides good soft tissue contrast, and is mobile and relatively cheap. Unfortunately, as echography is traditionally used to image soft tissue and blood flow, the appearance of interventional devices in conventional ultrasound images remains relatively poor, which is a major obstacle toward ultrasound-guided interventions. The objective of this paper was therefore to enhance the appearance of interventional devices in ultrasound images. To that end, a modified ultrasound beamforming process using conventional focused transmit beams is proposed that exploits the properties of received signals containing specular reflections (as arising from these devices). This new beamforming approach, referred to as delay and standard deviation beamforming (DASD), was quantitatively tested using simulated as well as experimental data from a linear array transducer. Furthermore, the influence of different imaging settings (i.e., transmit focus, imaging depth, and scan angle) on the obtained image contrast was evaluated. The study showed that the image contrast of specular regions improved by 5-30 dB using DASD beamforming compared with traditional delay and sum (DAS) beamforming. The highest gain in contrast was observed when the interventional device was tilted away from being orthogonal to the transmit beam, which is a major limitation in standard DAS imaging. As such, the proposed beamforming methodology can offer improved visualization of interventional devices in the ultrasound image, with potential implications for ultrasound-guided interventions.
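The core idea, replacing the coherent sum with a spread statistic across the aperture, can be sketched in a few lines. The shapes, data, and the linear delay trend used to mimic an off-angle specular echo below are illustrative assumptions, not the paper's simulation setup.

```python
# Sketch of the DASD idea: per-pixel standard deviation across the receive
# aperture instead of the DAS sum. A tilted specular echo leaves a residual
# linear delay/amplitude trend across channels, which raises the std while
# summing destructively in DAS. Data are assumed test values.
import numpy as np

def das_image(aligned):            # aligned: (n_channels, n_pixels)
    return aligned.sum(axis=0)

def dasd_image(aligned):
    return aligned.std(axis=0)     # spread across the aperture

rng = np.random.default_rng(2)
aligned = rng.standard_normal((64, 1000))        # 64 channels, 1000 pixels
aligned[:, 500] += np.linspace(-2, 2, 64)        # specular-like trend at pixel 500
print("DAS  at pixel 500:", das_image(aligned)[500])
print("DASD at pixel 500:", dasd_image(aligned)[500])
```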
Relay entanglement and clusters of correlated spins
Doronin, S. I.; Zenchuk, A. I.
Considering a spin-1/2 chain, we suppose that the entanglement passes from a given pair of particles to another one, thus establishing a relay transfer of entanglement along the chain. Accordingly, we introduce the relay entanglement as the sum of all pairwise entanglements in a spin chain. To study the effects of remote pairwise entanglements in more detail, we use partial sums collecting the entanglements between spins separated by up to a certain number of nodes. The problem of entangled cluster formation is considered, and the geometric mean entanglement is introduced as a characteristic of quantum correlations in a cluster. Generally, the lifetime of a cluster decreases with an increase in its size.
On the input distribution and optimal beamforming for the MISO VLC wiretap channel
Arfaoui, Mohamed Amine; Rezki, Zouheir; Ghrayeb, Ali; Alouini, Mohamed-Slim
We investigate in this paper the achievable secrecy rate of the multiple-input single-output (MISO) visible light communication (VLC) Gaussian wiretap channel with a single user and a single eavesdropper. We consider the cases when the location of the eavesdropper is known or unknown to the transmitter. In the former case, we derive the optimal beamforming in closed form, subject to constrained inputs. In the latter case, we apply robust beamforming. Furthermore, we study the achievable secrecy rate when the input follows the truncated generalized normal (TGN) distribution. We present several examples which demonstrate the substantial improvements in the secrecy rates achieved by the proposed techniques.
TX-RX isolation method based on polarization diversity, spatial diversity and TX beamforming
Foroozanfard, Ehsan; Carvalho, Elisabeth De; Pedersen, Gert F.
In this paper, the feasibility of an antenna isolation technique based on null-steer beamforming, polarization diversity and spatial diversity is investigated. The proposed system consists of six patch antennas which are fed by a feeding network to obtain a null-steer beamformer. To achieve spatial diversity, antenna elements are located on two layers, facing in different directions. Moreover, the antenna elements in the two layers use different polarizations. The measured results of the antenna system show a high TX-RX isolation on the order of 70 dB, which demonstrates the feasibility of such a system...
An Optimal Beamforming Algorithm for Phased-Array Antennas Used in Multi-Beam Spaceborne Radiometers
Iupikov, O. A.; Ivashina, M. V.; Pontoppidan, K.
Strict requirements for future spaceborne ocean missions using multi-beam radiometers call for new antenna technologies, such as digital beamforming phased arrays. In this paper, we present an optimal beamforming algorithm for phased-array antenna systems designed to operate as focal plane arrays ... to an FPA feeding a torus reflector antenna (designed under contract with the European Space Agency) and tested for multiple beams. The results demonstrate improved performance in terms of the optimized beam characteristics, yielding much higher spatial and radiometric resolution as well as much...
Adjustable electronic load-alarm relay
Mason, C.H.; Sitton, R.S.
An improved electronic alarm relay for monitoring the current drawn by an ac motor or other electrical load is described. The circuit is designed to measure the load with high accuracy and to have excellent alarm repeatability. Chattering and arcing of the relay contacts are minimal. The operator can adjust the set point easily and can reset both the high and the low alarm points by means of one simple adjustment. The relay includes means for generating a signal voltage proportional to the motor current. In a preferred form of the invention, a first operational amplifier generates a first constant reference voltage which is higher than a preselected value of the signal voltage. A second operational amplifier generates a second constant reference voltage which is lower than the aforementioned preselected value of the signal voltage. A circuit comprising a first resistor serially connected to a second resistor is connected across the outputs of the first and second amplifiers, and the junction of the two resistors is connected to the inverting terminal of the second amplifier. Means are provided to compare the signal voltage with both the first and second reference voltages and to actuate an alarm if the signal voltage is higher than the first reference voltage or lower than the second reference voltage.
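The comparison logic described above is a classic window comparator: the alarm asserts whenever the load signal leaves a band between the two reference voltages. Here is a minimal logic-level sketch of that behaviour; the threshold values are illustrative assumptions, and the original is of course an analog circuit, not software.

```python
# Logic-level sketch of the window-comparator alarm described above.
def alarm(signal_v, v_high=4.0, v_low=2.0):
    """Return True when the load signal leaves the [v_low, v_high] window."""
    return signal_v > v_high or signal_v < v_low

for v in (1.5, 3.0, 4.5):          # assumed test points: low, in-window, high
    print(f"signal={v:.1f} V  alarm={alarm(v)}")
```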
Consumption Factor Optimization for Multihop Relaying over Nakagami-m Fading channels
Randrianantenaina, Itsikiantsoa
In this paper, the energy efficiency of multihop relaying over Nakagami-m fading channels is investigated. The "consumption factor", adopted as a metric to evaluate the energy efficiency, is derived for both amplify-and-forward and decode-and-forward relaying strategies. Then, based on the obtained expressions, we propose a power allocation strategy maximizing the consumption factor. In addition, a sub-optimal, low-complexity power allocation algorithm is proposed and analyzed, and the obtained power allocation scheme is compared in terms of energy efficiency to other power allocation schemes from the literature. Analytical and simulation results confirm the accuracy of our derivations and assess the performance gains of the proposed approach.
Transmit power optimization for green multihop relaying over Nakagami-m fading channels
In this paper, we investigate the optimal transmit power strategy to maximize the energy efficiency of a multihop relaying network. Considering the communication between a source and a destination through multiple amplify-and-forward relays, we first give the expression of the total instantaneous system energy consumption. Then, we define the energy efficiency in our context and obtain its expression in closed form when the communication is over Nakagami-m fading channels. The analysis yields the derivation of a global transmit power strategy where each individual node contributes to the end-to-end overall energy efficiency. Numerical results are presented to illustrate the analysis. Comparison with Monte Carlo simulation results confirms the accuracy of our derivations, and assesses the gains of the proposed power optimization strategy. © 2014 IEEE.
Randrianantenaina, Itsikiantsoa; Benjillali, Mustapha; Alouini, Mohamed-Slim
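The two abstracts above optimize transmit power along an AF multihop chain. As a concrete, much-simplified illustration, the sketch below computes the standard variable-gain AF end-to-end SNR and brute-force searches a power split under a total power budget; the channel gains, the grid, and the bits-per-joule efficiency metric are assumed stand-ins for the papers' consumption-factor formulation over Nakagami-m fading.

```python
# Brute-force power split for a 3-hop AF chain using the standard
# variable-gain end-to-end SNR, 1 / (prod(1 + 1/gamma_i) - 1).
# All numbers and the efficiency metric are illustrative assumptions.
import numpy as np
from itertools import product

g = np.array([1.2, 0.8, 1.5])       # assumed per-hop channel power gains
N0, P_total = 1.0, 6.0              # noise power, total power budget

def e2e_snr(p):
    gam = g * p / N0                              # per-hop SNRs
    return 1.0 / (np.prod(1.0 + 1.0 / gam) - 1.0)

def efficiency(p):                                # bits per joule (illustrative)
    return np.log2(1.0 + e2e_snr(p)) / np.sum(p)

best = max((np.array(p) for p in product(np.linspace(0.5, 5.0, 10), repeat=3)
            if sum(p) <= P_total), key=efficiency)
print("best power split:", np.round(best, 2),
      " efficiency:", round(float(efficiency(best)), 4))
```

Weaker hops attract more power under this metric, which is the qualitative behaviour the closed-form strategies in the papers formalize.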
Three-component ambient noise beamforming in the Parkfield area
Löer, Katrin; Riahi, Nima; Saenger, Erik H.
We apply a three-component beamforming algorithm to an ambient noise data set recorded at a seismic array to extract information about both isotropic and anisotropic surface wave velocities. In particular, we test the sensitivity of the method with respect to the array geometry as well as to seasonal variations in the distribution of noise sources. In the earth's crust, anisotropy is typically caused by oriented faults or fractures and can be altered when earthquakes or human activities cause these structures to change. Monitoring anisotropy changes thus provides time-dependent information on subsurface processes, provided they can be distinguished from other effects. We analyse ambient noise data at frequencies between 0.08 and 0.52 Hz recorded at a three-component array in the Parkfield area, California (US), between 2001 November and 2002 April. During this time, no major earthquakes were identified in the area and structural changes are thus not expected. We compute dispersion curves of Love and Rayleigh waves and estimate anisotropy parameters for Love waves. For Rayleigh waves, the azimuthal source coverage is too limited to perform anisotropy analysis. For Love waves, ambient noise sources are more widely distributed and we observe significant and stable surface wave anisotropy for frequencies between 0.2 and 0.4 Hz. Synthetic data experiments indicate that the array geometry introduces apparent anisotropy, especially when waves from multiple sources arrive simultaneously at the array. Both the magnitude and the pattern of apparent anisotropy, however, differ significantly from the anisotropy observed in Love wave data. Temporal variations of anisotropy parameters observed at frequencies below 0.2 Hz and above 0.4 Hz correlate with changes in the source distribution. Frequencies between 0.2 and 0.4 Hz, however, are less affected by these variations and provide relatively stable results over the period of study.
An assessment of fire vulnerability for aged electrical relays
Vigil, R.A.; Nowlen, S.P.
There has been some concern that, as nuclear power plants age, protective measures taken to control and minimize the impact of fire may become ineffective, or significantly less effective, and hence result in an increased fire risk. One objective of the Fire Vulnerability of Aged Electrical Components Program is to assess the effects of aging and service wear on the fire vulnerability of electrical equipment. An increased fire vulnerability of components may lead to an overall increase in fire risk to the plant. Because of their widespread use in various electrical safety systems, electromechanical relays were chosen to be the initial components for evaluation. This test program assessed the impact of operational and thermal aging on the vulnerability of these relays to fire-induced damage. Only the thermal effects of a fire were examined in this test program. The impact of smoke, corrosive materials, or fire suppression effects on relay performance was not addressed. The purpose of this test program was to assess whether the fire vulnerability of electrical relays increases with aging. The sequence followed for the test program was to: identify specific relay types, develop three fire scenarios, artificially age several relays, test the unaged and aged relays in the fire exposure scenarios, and compare the results. The relays tested were Agastat GPI, General Electric (GE) HMA, HGA, and HFA. At least two relays of each type were artificially aged and at least two relays of each type were new. Relays were operationally aged by cycling the relay under rated load for 2,000 operations. These relays were then thermally aged for 60 days with their coil energized.
Moments Based Framework for Performance Analysis of One-Way/Two-Way CSI-Assisted AF Relaying
When analyzing the system performance of conventional one-way relaying or advanced two-way relaying, these two techniques are always dealt with separately and, thus, their performance cannot be compared efficiently. Moreover, for ease of mathematical tractability, channels considered in such studies are generally assumed to be subject to Rayleigh fading or to be Nakagami-m channels with integer fading parameters, which is impractical in typical urban environments. In this paper, we propose a unified moments-based framework for general performance analysis of channel-state-information (CSI) assisted amplify-and-forward (AF) relaying systems. The framework is applicable to both one-way and two-way relaying over arbitrary Nakagami-m fading channels, and it includes previously reported results as special cases. Specifically, the mathematical framework is first developed under the umbrella of the weighted harmonic mean of two Gamma-distributed variables in conjunction with the theory of Padé approximants. Then, general expressions for the received signal-to-noise ratios of the users in one-way/two-way relaying systems and the corresponding moments, moment generating function, and cumulative distribution function are established. Subsequently, the mathematical framework is applied to analyze, compare, and gain insights into the system performance of one-way and two-way relaying techniques, in terms of outage probability, average symbol error probability, and achievable data rate. All analytical results are corroborated by simulation results as well as previously reported results whenever available, and they are shown to be efficient tools to evaluate and compare the system performance of one-way and two-way relaying.
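As a flavour of the Padé machinery named above, the sketch below builds a Padé approximant of an MGF from its moment sequence. A Gamma-distributed SNR is used as a stand-in because its exact MGF is known for checking; the paper's weighted-harmonic-mean statistics are far more involved.

```python
# Pade approximant of an MGF from its moments (Gamma SNR as a check case).
# Taylor coefficients of the MGF are moments[n] / n!.
import numpy as np
from math import gamma, factorial
from scipy.interpolate import pade

k, theta = 2.5, 1.0                       # assumed Gamma shape/scale
moments = [theta**n * gamma(k + n) / gamma(k) for n in range(7)]
taylor = [m / factorial(n) for n, m in enumerate(moments)]

p, q = pade(taylor, 3)                    # [3/3] Pade approximant
s = -0.4                                  # evaluate on the negative real axis
print("Pade MGF :", p(s) / q(s))
print("exact MGF:", (1 - theta * s) ** (-k))
```

The rational approximant tracks the exact MGF closely even from a handful of moments, which is what makes moments-based performance analysis tractable.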
User Multiplexing in Relay Enhanced LTE-Advanced Networks
Teyeb, Oumer Mohammed; Frederiksen, Frank; Redana, Simone
A key enhancement being considered for LTE-Advanced networks is radio relaying. This uses relay nodes that act as surrogate base stations for mobile users whose radio links with the base stations are not experiencing good enough conditions. In the downlink, the data destined for the relayed users may first have to be multiplexed by the base station, sent over the wireless backhaul link towards the relay node, and de-multiplexed and forwarded to the individual users by the relay node. The reverse process also has to be undertaken in the uplink. In this paper, we present a novel multiplexing scheme which is able to adapt the addressing and bitmapping of user identification to the actual number of users being served by the relay nodes, and thus greatly reduce the multiplexing overhead.
A performance analysis in AF full duplex relay selection network
Ngoc, Long Nguyen; Hong, Nhu Nguyen; Loan, Nguyen Thi Phuong; Kieu, Tam Nguyen; Voznak, Miroslav; Zdralek, Jaroslav
This paper studies relay selection in amplify-and-forward (AF) cooperative communication with full-duplex (FD) operation. Various relay selection schemes, assuming the availability of different instantaneous channel information, are investigated. We examine a maximal relay selection scheme that optimizes the instantaneous FD channel capacity and requires global channel state information (CSI), as well as schemes based on partial CSI. To allow easy comparison, exact outage probability expressions and asymptotic forms of these strategies, which give the diversity order, are derived. From these, we can see clearly how the number of relays, the noise factor, the transmission coefficient and the transmit power impact performance. Moreover, the optimal relay selection (ORS) model outperforms the partial relay selection (PRS) model.
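The gap between full-CSI and partial-CSI selection is easy to reproduce numerically. The Monte-Carlo sketch below uses an assumed full-duplex SINR model (first hop corrupted by residual loop self-interference, standard two-hop AF combining); the model and all numbers are illustrative stand-ins for the paper's analysis.

```python
# Monte-Carlo sketch: optimal vs. partial relay selection for FD AF relays
# with residual loop self-interference (assumed SINR model and parameters).
import numpy as np

rng = np.random.default_rng(3)
n_rel, n_trials, P, sigma_li = 3, 100_000, 10.0, 0.1   # assumed values

g1 = rng.exponential(1.0, (n_trials, n_rel)) * P        # S->R link SNRs
g2 = rng.exponential(1.0, (n_trials, n_rel)) * P        # R->D link SNRs
li = rng.exponential(sigma_li, (n_trials, n_rel)) * P   # loop interference

gam1 = g1 / (li + 1.0)                    # first-hop SINR under FD loop interference
gam_e2e = gam1 * g2 / (gam1 + g2 + 1.0)   # standard two-hop AF end-to-end SINR

best = np.argmax(gam_e2e, axis=1)         # ORS: needs global CSI
part = np.argmax(g1, axis=1)              # PRS: first-hop CSI only
rows = np.arange(n_trials)
for name, idx in (("optimal", best), ("partial", part)):
    cap = np.log2(1.0 + gam_e2e[rows, idx]).mean()
    print(f"{name} selection: mean capacity {cap:.2f} bit/s/Hz")
```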
Scheduling algorithm for data relay satellite optical communication based on artificial intelligent optimization
Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen
Optical satellite communication, with the advantages of broad bandwidth, large capacity and low power consumption, breaks the bottleneck of traditional microwave satellite communication. The formation of a space-based information system with high-performance optical inter-satellite communication, and the realization of global seamless coverage and mobile terminal access, are the necessary trends in the development of optical satellite communication. Considering the resources, missions and constraints of a data relay satellite optical communication system, a model of optical communication resource scheduling is established and a scheduling algorithm based on artificial-intelligence optimization is put forward. For multiple relay satellites, user satellites, optical antennas and missions with several priority weights, resources are scheduled via two operations: "Ascertain Current Mission Scheduling Time" and "Refresh Latter Mission Time-Window". The priority weight is used as a parameter of the fitness function, and the scheduling plan is optimized by a genetic algorithm. In a simulation scenario including 3 relay satellites with 6 optical antennas, 12 user satellites and 30 missions, the results reveal that the algorithm obtains satisfactory results in both efficiency and performance, and that the resource scheduling model and optimization algorithm are suitable for multi-relay-satellite, multi-user-satellite, multi-optical-antenna resource scheduling problems.
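As a flavour of the genetic-algorithm component, the sketch below evolves a mission ordering under priority weights and deadlines for a single antenna. The encoding (a permutation), the greedy decoder, and all mission data are illustrative assumptions, far simpler than the paper's multi-satellite, multi-antenna model.

```python
# Toy genetic algorithm for priority-weighted mission scheduling on one
# antenna: chromosomes are mission orderings, fitness is the total priority
# of missions that fit their (assumed) deadlines when scheduled in order.
import random

random.seed(4)
# (duration, deadline, priority) per mission -- assumed test data
missions = [(2, 6, 5), (1, 3, 3), (3, 9, 8), (2, 5, 2), (1, 10, 4), (2, 7, 6)]

def fitness(order):
    t, score = 0, 0
    for m in order:                      # decode: schedule in chromosome order
        dur, deadline, prio = missions[m]
        if t + dur <= deadline:          # mission fits its time window
            t += dur
            score += prio
    return score

def mutate(order):
    a, b = random.sample(range(len(order)), 2)
    child = order[:]
    child[a], child[b] = child[b], child[a]   # swap two missions
    return child

pop = [random.sample(range(len(missions)), len(missions)) for _ in range(30)]
for _ in range(200):                     # evolve: elitism + swap mutation
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

best = max(pop, key=fitness)
print("best order:", best, " weighted score:", fitness(best))
```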
A Novel Secure Transmission Scheme in MIMO Two-Way Relay Channels with Physical Layer Approach
Qiao Liu
The security issue has been considered one of the most pivotal aspects of the fifth-generation mobile network (5G), due to the increasing demand for security services as well as the growing occurrence of security threats. In this paper, instead of focusing on the security architecture in the upper layers, we investigate the secure transmission for a basic channel model in a heterogeneous network, that is, two-way relay channels. By exploiting the properties of the transmission medium in the physical layer, we propose a novel secure scheme for the aforementioned channel model. With precoding design, the proposed scheme is able to achieve high transmission efficiency as well as security. Two different approaches are introduced: an information theoretical approach and a physical layer encryption approach. We show that our scheme is secure under three different adversarial models: (1) an untrusted relay attack model, (2) a trusted relay with eavesdropper attack model, and (3) an untrusted relay with eavesdroppers attack model. We also derive the secrecy capacity of the two approaches under the three attacks. Finally, we conduct three simulations of our proposed scheme. The simulation results agree with the theoretical analysis, illustrating that our proposed scheme achieves better performance than the existing schemes.
SER Derivation and Power Optimization of a Two-Way MultiRelay Cooperative Communication System
Shakeel-Ur-Rehman Rehman
In this paper, we consider a Rayleigh-fading-based cooperative communication system with AaF (amplify-and-forward) relaying using multiple relays. We adopt a spectrally efficient two-way model of cooperative communication terminals and formulate a performance evaluation framework in terms of SER (symbol error rate). We not only consider the fading channel in this performance evaluation but also incorporate the effect of relay terminal location into our model, which does not require any CSI (channel state information) at the transmitting nodes. We propose a power allocation framework for these nodes and analytically derive SER performance results. We have numerically evaluated this framework for power optimization as well as for minimizing the required SER. Significant performance improvement compared with equal power sharing among the cooperating terminals is achieved using our proposed framework. It is shown that virtual cooperative antenna configurations are able to demonstrate up to 3 dB gain compared with co-located antenna configurations. Thus, incorporating relay location information in the performance evaluation results in significant power savings.
Cross-layer combining of information-guided transmission withnetwork coding relaying for multiuser cognitive radio systems
For a cognitive radio relaying network, we propose a cross-layer design by combining information-guided transmission at the physical layer and network coding at the network layer. With this design, a common relay is exploited to help the communications between multiple secondary source-destination pairs, which allows for a more efficient use of the radio resources, and moreover, generates less interference to primary licensees in the network. Considering the spectrum-sharing constraints on the relay and secondary sources, the achievable data rate of the proposed cross-layer design is derived and evaluated. Numerical results on average capacity and uniform capacity in the network under study substantiate the efficiency of our proposed design. © 2013 IEEE.
Joint User Scheduling and MU-MIMO Hybrid Beamforming Algorithm for mmWave FDMA Massive MIMO System
Jing Jiang
The large bandwidth and rich multipath in millimeter wave (mmWave) cellular systems assure the existence of frequency-selective channels, so an mmWave system must retain frequency division multiple access (FDMA) and user scheduling. But in a hybrid beamforming system, the analog beamforming is implemented with the same phase shifts across the entire frequency band, and the wideband phase shifts may not suit all users scheduled in the frequency resources. This paper proposes a joint user scheduling and multiuser hybrid beamforming algorithm for downlink massive multiple input multiple output (MIMO) orthogonal frequency division multiple access (OFDMA) systems. In the first step of user scheduling, the users with identical optimal beams form an OFDMA user group and multiplex the entire frequency resource. Then the base station (BS) allocates the frequency resources to each member of the OFDMA user group. An OFDMA user group can be regarded as a virtual user; thus it can support arbitrary MU-MIMO user selection and beamforming algorithms. Further, the analog beamforming vectors employ the best beam of each selected MU-MIMO user, and the digital beamforming is solved by weighted MMSE to acquire the best performance gain and mitigate the inter-user interference. Simulation results show that hybrid beamforming together with user scheduling can greatly improve the performance of mmWave OFDMA massive MU-MIMO systems.
An Integrated Real-Time Beamforming and Postfiltering System for Nonstationary Noise Environments
Gannot Sharon
We present a novel approach for real-time multichannel speech enhancement in environments of nonstationary noise and time-varying acoustical transfer functions (ATFs). The proposed system integrates adaptive beamforming, ATF identification, soft signal detection, and multichannel postfiltering. The noise canceller branch of the beamformer and the ATF identification are adaptively updated online, based on hypothesis test results. The noise canceller is updated only during stationary noise frames, and the ATF identification is carried out only when desired source components have been detected. The hypothesis testing is based on the nonstationarity of the signals and the transient power ratio between the beamformer primary output and its reference noise signals. Following the beamforming and the hypothesis testing, estimates for the signal presence probability and for the noise power spectral density are derived. Subsequently, an optimal spectral gain function that minimizes the mean square error of the log-spectral amplitude (LSA) is applied. Experimental results demonstrate the usefulness of the proposed system in nonstationary noise environments.
Artificial lateral-line system for imaging dipole sources using Beamforming techniques
Dagamseh, A.M.K.; Wiegerink, Remco J.; Lammerink, Theodorus S.J.; Krijnen, Gijsbertus J.M.
In nature, fish have the ability to localize prey, school, navigate, etc. using the lateral-line organ [1]. Here we present the use of biomimetic artificial hair-based flow sensors arranged as a lateral-line system, in combination with beamforming techniques, for dipole source localization in air.
Uplink transmit beamforming design for SINR maximization with full multiuser channel state information
Xi, Songnan; Zoltowski, Michael D.
Multiuser multiple-input multiple-output (MIMO) systems are considered in this paper. We continue our research on uplink transmit beamforming design for multiple users under the assumption that the full multiuser channel state information, i.e., the collection of the channel state information between each of the users and the base station, is known not only to the receiver but also to all the transmitters. We propose an algorithm for designing optimal beamforming weights in terms of maximizing the signal-to-interference-plus-noise ratio (SINR). Through statistical modeling, we decouple the originally intractable optimization problem and achieve a closed-form solution. As in our previous work, the minimum mean-squared error (MMSE) receiver with successive interference cancellation (SIC) is adopted for multiuser detection. The proposed scheme is compared with an existing jointly optimized transceiver design, referred to as the joint transceiver in this paper, and with our previously proposed eigen-beamforming algorithm. Simulation results demonstrate that our algorithm, with much less computational burden, accomplishes almost the same performance as the joint transceiver for spatially independent MIMO channels and even better performance for spatially correlated MIMO channels. It also consistently outperforms our previously proposed eigen-beamforming algorithm.
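For context, the classical closed-form result behind SINR-maximizing beamforming is the weight vector w proportional to R_in^{-1} h, where R_in is the interference-plus-noise covariance and h the desired channel. The sketch below evaluates it on assumed test data; the paper's transmit-side, multiuser derivation is considerably more elaborate.

```python
# Closed-form SINR-maximizing receive weights, w = R_in^{-1} h (up to scale),
# on assumed test channels; illustrative of the general principle only.
import numpy as np

rng = np.random.default_rng(5)
n = 4
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
J = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))  # interferers
R_in = J @ J.conj().T + 0.1 * np.eye(n)     # interference + noise covariance

w = np.linalg.solve(R_in, h)                # SINR-optimal direction
w /= np.linalg.norm(w)

sinr = np.abs(w.conj() @ h) ** 2 / np.real(w.conj() @ R_in @ w)
print("output SINR:", round(float(np.real(sinr)), 3))
```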
Enhancing the beamforming map of spherical arrays at low frequencies using acoustic holography
Tiana Roig, Elisabet; Torras Rosell, Antoni; Fernandez Grande, Efren
Recent studies have shown that the localization of acoustic sources based on circular arrays can be improved at low frequencies by combining beamforming with acoustic holography. This paper extends this technique to the three-dimensional case by making use of spherical arrays. The pressure captur...
High Resolution Ultrasound Imaging Using Adaptive Beamforming with Reduced Number of Active Elements
An adaptive beamforming method using a reduced number of active elements is proposed. By reducing the number of active sensor elements, an increased resolution can be obtained with the MV beamformer. This observation is directly opposite to the well-known relation between the spatial extent of the aperture and the achievable resolution. The investigations are based on Field II...
Simulation of a ring resonator-based optical beamformer system for phased array receive antennas
Tijmes, M.R.; Meijerink, Arjan; Roeloffzen, C.G.H.; Bentum, Marinus Jan
A new simulator tool is described that can be used in the field of RF photonics. It has been developed on the basis of a broadband, continuously tunable optical beamformer system for phased array receive antennas. The application that is considered in this paper is airborne satellite reception of...
Noise Quantification with Beamforming Deconvolution: Effects of Regularization and Boundary Conditions
Lylloff, Oliver Ackermann; Fernandez Grande, Efren
Delay-and-sum (DAS) beamforming can be described as a linear convolution of an unknown sound source distribution and the microphone array response to a point source, i.e., the point-spread function. Deconvolution tries to compensate for the influence of the array response and reveal the true source...
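The convolutional model stated above is easy to demonstrate in one dimension: a source map convolved with a point-spread function gives the DAS map, and an iterative nonnegative deconvolver can partially undo it. Richardson-Lucy iterations are used below purely as an illustrative deconvolver; the source map and PSF are assumed test data, and the paper's focus on regularization and boundary conditions is not reproduced.

```python
# 1-D sketch: DAS map as source-map * PSF, then Richardson-Lucy deconvolution.
import numpy as np

x = np.zeros(64); x[20] = 1.0; x[28] = 0.5       # assumed true source map
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
psf /= psf.sum()
b = np.convolve(x, psf, mode="same")             # simulated DAS beamforming map

est = np.ones_like(b)                            # Richardson-Lucy iterations
for _ in range(200):
    ratio = b / (np.convolve(est, psf, mode="same") + 1e-12)
    est *= np.convolve(ratio, psf[::-1], mode="same")

print("true peaks at", np.flatnonzero(x > 0), "-> recovered argmax:",
      int(np.argmax(est)))
```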
Tightness of Semidefinite Programming Relaxation to Robust Transmit Beamforming with SINR Constraints
Yanjun Wang
This paper considers a multiuser transmit beamforming problem under uncertain channel state information (CSI), subject to SINR constraints, in a downlink multiuser MISO system. A robust transmit beamforming formulation is proposed, which minimizes the transmission power subject to worst-case signal-to-interference-plus-noise ratio (SINR) constraints at the receivers. The challenge is that the worst-case SINR constraints correspond to an infinite number of nonconvex quadratic constraints. In this paper, a natural semidefinite programming (SDP) relaxation is proposed to solve the robust beamforming problem. The main contribution of this paper is to establish the tightness of the SDP relaxation under a proper assumption, meaning that the SDP relaxation definitely yields rank-one solutions under that assumption. The SDP relaxation then provides globally optimal solutions of the primal robust transmit beamforming problem under the stated assumption and norm-constrained CSI errors. Simulation results confirm the proposed theoretical results and also provide a counterexample whose solutions are not rank one. The existence of the counterexample shows that the guess that the solutions of the SDP relaxation must always be rank one is wrong, unless assumptions such as the one proposed in this paper hold.
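To make the relaxation concrete, here is a minimal cvxpy sketch of the semidefinite relaxation for downlink transmit beamforming with SINR constraints, in its perfect-CSI form; the paper's robust variant adds worst-case uncertainty sets around the channels. Channels, noise power, and SINR targets are assumed test values.

```python
# SDR for downlink transmit beamforming (perfect-CSI version): lift w_k to
# W_k = w_k w_k^H, drop the rank constraint, minimize total power subject to
# SINR constraints that are linear in the W_k. Requires: pip install cvxpy
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n_tx, n_users, sigma2, gamma = 4, 2, 1.0, 2.0     # assumed values
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

W = [cp.Variable((n_tx, n_tx), hermitian=True) for _ in range(n_users)]
cons = [w >> 0 for w in W]                         # positive semidefinite
for k in range(n_users):
    Hk = np.outer(H[k], H[k].conj())               # h_k h_k^H
    sig = cp.real(cp.trace(Hk @ W[k]))
    intf = sum(cp.real(cp.trace(Hk @ W[j])) for j in range(n_users) if j != k)
    cons.append(sig >= gamma * (intf + sigma2))    # per-user SINR constraint

prob = cp.Problem(cp.Minimize(sum(cp.real(cp.trace(w)) for w in W)), cons)
prob.solve()
print("min total power:", round(prob.value, 3))
print("rank of W_0:", np.linalg.matrix_rank(W[0].value, tol=1e-6))
```

When the returned W_k happen to be rank one, exact beamformers w_k can be read off as their principal eigenvectors, which is exactly the tightness question the paper studies.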
Application of a Beamforming Technique to the Measurement of Airfoil Leading Edge Noise
Thomas Geyer
The present paper describes the use of microphone array technology and beamforming algorithms for the measurement and analysis of noise generated by the interaction of a turbulent flow with the leading edge of an airfoil. Experiments were performed using a setup in an aeroacoustic wind tunnel, where the turbulent inflow is provided by different grids. In order to exactly localize the aeroacoustic noise sources and, moreover, to separate airfoil leading edge noise from grid-generated noise, the selected deconvolution beamforming algorithm is extended to be used on a fully three-dimensional source region. The results of this extended beamforming are three-dimensional mappings of noise source locations. Besides acoustic measurements, the investigation of airfoil leading edge noise requires the measurement of parameters describing the incident turbulence, such as the intensity and a characteristic length scale or time scale. The method used for the determination of these parameters in the present study is explained in detail. To demonstrate the applicability of the extended beamforming algorithm and the experimental setup as a whole, the noise generated at the leading edge of airfoils made of porous materials was measured and compared to that generated at the leading edge of a common nonporous airfoil.
Directional hearing aid using hybrid adaptive beamformer (HAB) and binaural ITE array
Shaw, Scott T.; Larow, Andy J.; Gibian, Gary L.; Sherlock, Laguinn P.; Schulein, Robert
A directional hearing aid algorithm called the Hybrid Adaptive Beamformer (HAB), developed for NIH/NIA, can be applied to many different microphone array configurations. In this project the HAB algorithm was applied to a new array employing in-the-ear microphones at each ear (HAB-ITE), to see if previous HAB performance could be achieved with a more cosmetically acceptable package. With diotic output, the average benefit in threshold SNR was 10.9 dB for three HoH and 11.7 dB for five normal-hearing subjects. These results are slightly better than previous results of equivalent tests with a 3-in. array. With an innovative binaural fitting, a small benefit beyond that provided by diotic adaptive beamforming was observed: 12.5 dB for HoH and 13.3 dB for normal-hearing subjects, a 1.6 dB improvement over the diotic presentation. Subjectively, the binaural fitting preserved binaural hearing abilities, giving the user a sense of space, and providing left-right localization. Thus the goal of creating an adaptive beamformer that simultaneously provides excellent noise reduction and binaural hearing was achieved. Further work remains before the HAB-ITE can be incorporated into a real product, optimizing binaural adaptive beamforming, and integrating the concept with other technologies to produce a viable product prototype. [Work supported by NIH/NIDCD.]
Relay Telecommunications for the Coming Decade of Mars Exploration
Edwards, C.; DePaula, R.
Over the past decade, an evolving network of relay-equipped orbiters has advanced our capabilities for Mars exploration. NASA's Mars Global Surveyor, 2001 Mars Odyssey, and Mars Reconnaissance Orbiter (MRO), as well as ESA's Mars Express Orbiter, have provided telecommunications relay services to the 2003 Mars Exploration Rovers, Spirit and Opportunity, and to the 2007 Phoenix Lander. Based on these successes, a roadmap for continued Mars relay services is in place for the coming decade. MRO and Odyssey will provide key relay support to the 2011 Mars Science Laboratory (MSL) mission, including capture of critical event telemetry during entry, descent, and landing, as well as support for command and telemetry during surface operations, utilizing new capabilities of the Electra relay payload on MRO and the Electra-Lite payload on MSL to allow significant increase in data return relative to earlier missions. Over the remainder of the decade a number of additional orbiter and lander missions are planned, representing new orbital relay service providers and new landed relay users. In this paper we will outline this Mars relay roadmap, quantifying relay performance over time, illustrating planned support scenarios, and identifying key challenges and technology infusion opportunities.
Cooperative relay-based multicasting for energy and delay minimization
Relay-based multicasting for the purpose of cooperative content distribution is studied. Optimized relay selection is performed with the objective of minimizing the energy consumption or the content distribution delay within a cluster of cooperating mobiles. Two schemes are investigated. The first consists of the BS sending the data only to the relay, and the second scheme considers the scenario of threshold-based multicasting by the BS, where a relay is selected to transmit the data to the mobiles that were not able to receive the multicast data. Both schemes show significant superiority compared to the non-cooperative scenarios, in terms of energy consumption and delay reduction. © 2012 IEEE.
Scalable DeNoise-and-Forward in Bidirectional Relay Networks
Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar
In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward (DNF). We call it Scalable DNF (S-DNF), and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay in order to save transmissions. To ensure decodability at the end-nodes, a priori information about the content of the combined packets must be available. This is gathered during the initial transmissions to the relay. The trade-off between decodability and the number of necessary transmissions is analysed...
Two-way cooperative AF relaying in spectrum-sharing systems: Enhancing cell-edge performance
In this contribution, a two-way cooperative amplify-and-forward (AF) relaying technique is integrated into spectrum-sharing wireless systems to improve the spectral efficiency of secondary users (SUs). In order to share the available spectrum resources originally dedicated to primary users (PUs), the transmit power of a SU is optimized with respect to the average tolerable interference power at primary receivers. By analyzing the outage probability and achievable data rate at the base station and at a cell-edge SU, our results reveal that the uplink performance is dominated by the average tolerable interference power at primary receivers, while the downlink always behaves like conventional one-way AF relaying and its performance is dominated by the average signal-to-noise ratio (SNR). These important findings provide fresh perspectives for system designers to improve the spectral efficiency of secondary users in next-generation broadband spectrum-sharing wireless systems. © 2012 IEEE.
Relay testing at Brookhaven National Laboratory
Bandyopadhyay, K.; Hofmayer, C.
Brookhaven National Laboratory (BNL) is conducting a seismic test program on relays. The purpose of the test program is to investigate the influence of various design, electrical and vibration parameters on seismic capacity levels. The first series of testing has been completed and was performed at Wyle Laboratories. The major part of the test program consisted of single-axis, single-frequency sine dwell tests. Random multiaxis, multifrequency tests were also performed. Highlights of the test results as well as a description of the testing methods are presented in this paper.
Robust Transceiver with Tomlinson-Harashima Precoding for Amplify-and-Forward MIMO Relaying Systems
Xing, Chengwen; Xia, Minghua; Gao, Feifei; Wu, Yik-Chung
A robust joint design of Tomlinson-Harashima precoding at the source, forwarding matrices at the relays, and a linear equalizer at the destination is proposed. With novel applications of elegant characteristics of multiplicative convexity and matrix-monotone functions, the optimal structure of the nonlinear transceiver is first derived. Based on the derived structure, the transceiver design problem reduces to a much simpler one with only scalar variables which can be efficiently solved. Finally, the performance advantage of the proposed robust design over a non-robust design is demonstrated by simulation results.
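For readers unfamiliar with Tomlinson-Harashima precoding, the basic operation is successive pre-subtraction of known interference at the transmitter with a modulo to bound the transmit power. The sketch below shows that core loop; the strictly lower-triangular feedback matrix, the symbol alphabet, and the modulo base are illustrative assumptions, and deriving the feedback and relay matrices is precisely the subject of the paper.

```python
# Core Tomlinson-Harashima precoding loop with an assumed feedback matrix.
import numpy as np

def thp_mod(v, tau=8.0):
    """Symmetric modulo onto [-tau/2, tau/2); tau = 2M for M-PAM (assumed)."""
    return v - tau * np.floor(v / tau + 0.5)

B = np.array([[0.0, 0.0, 0.0],        # assumed strictly lower-triangular
              [0.7, 0.0, 0.0],        # feedback (known interference)
              [0.3, 0.5, 0.0]])
s = np.array([1.0, -3.0, 1.0])        # 4-PAM data symbols

x = np.zeros_like(s)
for k in range(len(s)):               # successive pre-cancellation + modulo
    x[k] = thp_mod(s[k] - B[k, :k] @ x[:k])

y = x + B @ x                         # effective channel re-adds interference
print("recovered:", thp_mod(y))       # receiver modulo returns the symbols
```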
Performance of hybrid-ARQ with incremental redundancy over relay channels
Chelli, Ali; Alouini, Mohamed-Slim
In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ) with incremental redundancy (IR). The relay overhears...
A study of optimization problem for amplify-and-forward relaying over weibull fading channels
Ikki, Salama Said; Aissa, Sonia
This paper addresses the power allocation and relay positioning problems in amplify-and-forward cooperative networks operating in Weibull fading environments. We study adaptive power allocation (PA) with fixed relay location, optimal relay location...
An economically viable space power relay system
Bekey, Ivan; Boudreault, Richard
This paper describes and analyzes the economics of a power relay system that takes advantage of recent technological advances to implement a system that is economically viable. A series of power relay systems is described and analyzed which transport power ranging from 1,250 megawatts to 5,000 megawatts and distribute it to receiving sites at transcontinental distances. Two classes of systems are discussed: those with a single reflector delivering all the power to a single rectenna, and a second type which has multiple reflectors and distributes power to 10 rectenna sites, sharing it among them. It is shown that when offering electricity at prices competitive with those prevalent in developed cities in the US, a low IRR is inevitable and economic feasibility of a business is unlikely. However, when the target market is Japan, where the prevalent electricity prices are much greater, an IRR exceeding 65% is readily attainable. This is extremely attractive to potential investors, making capitalization of a venture likely. The paper shows that the capital investment required for the system can be less than $1 per installed watt, contributing less than $0.02/kW-hr to the cost of energy provision. Since selling prices in feasible regions range from $0.18 to over $0.30/kW-hr, these costs are but a small fraction of the operating expenses. Thus a very large IRR is possible for such a business.
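A quick back-of-the-envelope check of the quoted figures: amortizing capital over delivered energy gives the capital contribution to the cost per kWh. The plant lifetime and capacity factor below are assumptions, not the paper's financial model, which would also include financing and operations costs that push the figure toward the quoted $0.02/kWh bound.

```python
# Back-of-the-envelope capital cost per kWh, under assumed lifetime and
# utilization; financing and O&M (not modeled) raise the real figure.
capex_per_watt = 1.00          # $/W installed, the upper bound quoted above
lifetime_years = 30            # assumption
capacity_factor = 0.9          # assumption: fraction of hours at full power

hours = lifetime_years * 365 * 24 * capacity_factor
cost_per_kwh = capex_per_watt * 1000 / hours   # $ per kWh, capital only
print(f"amortized capital cost: ${cost_per_kwh:.3f}/kWh")   # ~ $0.004/kWh
```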
CERN Relay Race: sporty and colourful
Andy Butterworth, CERN Running Club
On Thursday 23 May, the 43rd CERN Relay Race took place, with 108 teams on the starting line, the largest participation ever! Â Â Â The DG was present at the start and said a few words to encourage the runners. At 12:15, the Solar Club and handbike racers, led by Jean-Yves Le Meur, were the first to set off. And as last year, the relay runners were accompanied by an enthusiastic group of Nordic walkers. The first team across the finish line was "Velo City", in a very fast time of 10'31". New this year was a prize category for the best fancy dress, which was won by Les Schtroumpfs from the BE Department. The challenge for the best represented department was won for the third year in a row by FP, but second and third were HR and IT, up from 6th and 9th places last year. To see all the pictures of the event, click here.
The CERN Relay Race: A Runaway Success!
24th May saw the traditional Relay Race take place at CERN, organised jointly by the Running Club and the CERN Staff Association. In 2018, the Relay Race lived up to expectations with a record number of participants, with no fewer than 848 entries across the different categories! In total 135 teams of 6 runners and 38 walkers completed the course on the Meyrin site in beautiful sunshine. Congratulations to all those who took part! Ghislain Roy, President of the Staff Association, fired the starting pistol for the first batch of runners, which included a team from the Directorate, with the Director General also taking part, demonstrating interest in this event at the highest level of the Organization. Thank you for this much-appreciated commitment! A number of very high-level runners also brought added excitement to the 2018 edition. The 1000-metre men's race was won by Marcin Patecki from the CERN Running Club in 2'40, just in front of Baptiste Fieux from the Berthie Sport team who came in at...
Record Participation in the Relay Race!
CERN has a more sporting spirit than ever before. This is not the result of any survey, but the impression you got as soon as you saw the 62 teams of six runners each speeding around the laboratory in the 32nd annual relay race. This year 11 more teams competed than in 2001. [Photo captions: First changeover: Hervé Cornet takes over from Camille Ruiz Llamas for The Shabbys, and Sebastian Dorthe from Daniel Matteazzi for Charmilles Technologies. Jérôme Bendotti (EP/TA1) just holding off the team from the WHO at the finish.] A total of 372 people ran together last Wednesday in this year's relay race, making for a record participation. It also seems that women are becoming more and more attracted by this competition, since this year there were eight ladies' teams, also a new record. The first team were The Shabbys in a time of 10 minutes 45 seconds, finishing almost before the second team had started its last 300-metre leg. The 6 runners in each team cover distances of 1000, 800, 800,...
77 FR 6949 - Tracking and Data Relay Satellite System (TDRSS) Rates for Non-U.S. Government Customers
... Tracking and Data Relay Satellite System (TDRSS) Rates for Non-U.S. Government Customers. AGENCY: National... customer flexibility, allowing more efficient use of the system. This notion was never implemented in the... commercial customers, as well as Arctic and Antarctic science programs. In this direct final rule, NASA is...
DERIVATION OF THE GRAVITATIONAL MULTI-LENS EQUATION FROM THE LINEAR APPROXIMATION OF EINSTEIN FIELD EQUATION
Kang, Sangjun
When a bright astronomical object (source) is gravitationally lensed by a foreground mass (lens), its images appear at different positions. The lens equation describes the relations between the locations of the lens, source, and images. The lens equation used to describe the lensing behavior caused by a lens system composed of multiple masses takes the form of a linear combination of the individual single lens equations. In this paper, we examine the validity of the linear nature of the multi-lens equation from the general relativistic point of view.
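For reference, a standard form of the single and multiple point-mass lens equations in angular coordinates is shown below; the notation is assumed, and the paper's exact conventions may differ.

```latex
% beta: source position, theta: image position, theta_E: Einstein radius,
% theta_i: position of lens i (notation assumed for illustration).
\begin{align}
  \vec{\beta} &= \vec{\theta}
     - \theta_E^2 \,\frac{\vec{\theta}}{|\vec{\theta}|^2}
     && \text{(single point-mass lens)} \\
  \vec{\beta} &= \vec{\theta}
     - \sum_i \theta_{E,i}^2 \,
       \frac{\vec{\theta}-\vec{\theta}_i}{|\vec{\theta}-\vec{\theta}_i|^2}
     && \text{(linear superposition over lenses)}
\end{align}
```

The second line is the "linear combination of the individual single lens equations" whose validity the paper tests against the linearized Einstein field equations.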
IMPLICATION OF STELLAR PROPER MOTION OBSERVATIONS ON RADIO EMISSION OF SAGITTARIUS A
Chang, Heon-Young; Choi, Chul-Sung
It is suggested that a star flying by in a hot accretion disk may cool the disk by Comptonization of the stellar emission. Such stellar cooling can be observed in the radio frequency regime, since synchrotron luminosity depends strongly on the electron temperature of the accretion flow. If a bright star orbiting the supermassive black hole cools the hot disk, one should expect a quasi-periodic modulation in radio, or even a possible anti-correlation of luminosities in radio and X-rays. Recently, the unprecedentedly accurate infrared imaging of Sagittarius A* for about ten years enables us to resolve stars around it and thus determine the orbital parameters of the currently closest star, S2. We explore the possibility of using this kind of observation to distinguish two quite different physical models for the central engine of Sagittarius A*: a hot accretion disk model and a jet model. We have attempted to estimate the observables using the observed parameters of the star S2. The relative difference in the electron temperature is a few parts in a thousand at the epoch when S2 is near the pericenter. The relative radio luminosity difference with and without the stellar cooling is also small, of order $10^{-4}$, even when S2 is near the pericenter. On the basis of our findings we tentatively conclude that even the currently closest pass of the star S2 is insufficient to meaningfully constrain the nature of Sagittarius A* and distinguish the two competing models. This implies that even though Bower et al. (2002) found no periodic radio flux variations in their data set from 1981 to 1998, which would naturally be expected from the presence of a hot disk, a hot disk model cannot be conclusively ruled out. This is simply because the energy bands they studied are too high to observe the effect of the star S2 even if it indeed interacts with the hot disk. In other words, even if there is a hot accretion disk, a star like S2 leaves imprints only in the frequency range $\nu \le 100$ MHz.
FORMATION AND EVOLUTION OF SELF-INTERACTING DARK MATTER HALOS
AHN KYUNGJIN;SHAPIRO PAUL R. 89
Observations of dark matter dominated dwarf and low surface brightness disk galaxies favor density profiles with a flat-density core, while cold dark matter (CDM) N-body simulations form halos with central cusps instead. This apparent discrepancy has motivated a re-examination of the microscopic nature of the dark matter in order to explain the observed halo profiles, including the suggestion that CDM has a non-gravitational self-interaction. We study the formation and evolution of self-interacting dark matter (SIDM) halos. We find analytical, fully cosmological similarity solutions for their dynamics, which take proper account of the collisional interaction of SIDM particles, based on a fluid approximation derived from the Boltzmann equation. The SIDM particles scatter each other elastically, which results in an effective thermal conductivity that heats the halo core and flattens its density profile. These similarity solutions are relevant to galactic and cluster halo formation in the CDM model. We assume that the local density maximum which serves as the progenitor of the halo has an initial mass profile ${\delta}M/M \propto M^{-\epsilon}$, as in the familiar secondary infall model. If $\epsilon = 1/6$, SIDM halos will evolve self-similarly, with a cold, supersonic infall which is terminated by a strong accretion shock. Different solutions arise for different values of the dimensionless collisionality parameter, $Q \equiv \sigma\rho_b r_s$, where $\sigma$ is the SIDM particle scattering cross section per unit mass, $\rho_b$ is the cosmic mean density, and $r_s$ is the shock radius. For all these solutions, a flat-density, isothermal core is present which grows in size as a fixed fraction of $r_s$. We find two different regimes for these solutions: 1) for $Q < Q_{th}\,({\simeq}\,7.35{\times}10^{-4})$, the core density decreases and core size increases as $Q$ increases; 2) for $Q > Q_{th}$, the core density increases and core size decreases as $Q$ increases. Our similarity solutions are in good agreement with previous results of N-body simulations of SIDM halos, which correspond to the low-$Q$ regime, for which SIDM halo profiles match the observed galactic rotation curves if $Q \sim [8.4{\times}10^{-4} - 4.9{\times}10^{-2}]\,Q_{th}$, or $\sigma \sim [0.56 - 5.6]\;cm^2\,g^{-1}$. These similarity solutions also show that, as $Q \to \infty$, the central density acquires a singular profile, in agreement with some earlier simulation results which approximated the effects of SIDM collisionality by considering an ordinary fluid without conductivity, i.e. the limit of mean free path ${\lambda}_{mfp} \to 0$. The intermediate regime where $Q \sim [18.6 - 231]\,Q_{th}$ or $\sigma \sim [1.2{\times}10^4 - 2.7{\times}10^4]\;cm^2\,g^{-1}$, for which we find flat-density cores comparable to those of the low-$Q$ solutions preferred to make SIDM halos match halo observations, has not previously been identified. Further study of this regime is warranted.
COSMOLOGICAL APPLICATIONS OF MULTIPLY IMAGED GRAVITATIONAL LENS SYSTEMS
PARK MYEONG-GU 97
We now have more than 70 multiple image gravitational lens systems. Since gravitational lensing occurs through gravitational distortions in cosmic space, cosmological information can be extracted from multiple image systems. Specifically, the Hubble constant can be determined from time delay measurements, the curvature of the universe can be measured from the distribution of image separations in lens systems, and limits on the matter density and cosmological constant can be set by the statistics of gravitational lens systems. Uncertainties, however, still exist at various steps, and the results should be taken with some caution. Larger systematic surveys and a better understanding of galaxy properties would definitely help.
CLUSTERS OF GALAXIES: SHOCK WAVES AND COSMIC RAYS
RYU DONGSU;KANG HYESUNG 105
Recent observations of galaxy clusters in radio and X-ray indicate that cosmic rays and magnetic fields may be energetically important in the intracluster medium. According to the estimates based on these observational studies, the combined pressure of these two components of the intracluster medium may range between $10\%$ and $100\%$ of the gas pressure, although their total energy is probably time dependent. Hence, these non-thermal components may have influenced the formation and evolution of cosmic structures, and may provide unique and vital diagnostic information through the various radiations emitted via their interactions with surrounding matter and cosmic background photons. We suggest that shock waves associated with cosmic structures, along with individual sources such as active galactic nuclei and radio galaxies, supply the cosmic rays and magnetic fields to the intracluster medium and to surrounding large scale structures. In order to study 1) the properties of cosmic shock waves emerging during the large scale structure formation of the universe, and 2) the dynamical influence of cosmic rays, which were ejected by AGN-like sources into the intracluster medium, on structure formation, we have performed two sets of N-body/hydrodynamic simulations of cosmic structure formation. In this contribution, we report the preliminary results of these simulations.
COSMIC RAY ACCELERATION AT COSMOLOGICAL SHOCKS: NUMERICAL SIMULATIONS OF CR MODIFIED PLANE-PARALLEL SHOCKS
KANG HYESUNG 111
In order to explore cosmic ray acceleration at cosmological shocks, we have performed numerical simulations of one-dimensional, plane-parallel, cosmic ray (CR) modified shocks with the newly developed CRASH (Cosmic Ray Amr SHock) numerical code. Based on the hypothesis that strong Alfven waves are self-generated by streaming CRs, the Bohm diffusion model for CRs is adopted. The code includes a plasma-physics-based 'injection' model that transfers a small proportion of the thermal proton flux through the shock into low energy CRs for acceleration there. We found that, for strong accretion shocks with Mach numbers greater than 10, CRs can absorb most of the shock kinetic energy, and the accretion shock speed is reduced by up to $20\%$ compared to pure gas dynamic shocks. Although the amount of kinetic energy passed through accretion shocks is small, since they propagate into the low density intergalactic medium, they might possibly provide acceleration sites for ultra-high energy cosmic rays of $E \le 10^{18}\;eV$. For internal/merger shocks with Mach numbers less than 3, however, the energy transfer to CRs is only about $10-20\%$, and so nonlinear feedback due to the CR pressure is insignificant. Considering that the intracluster medium (ICM) can be shocked repeatedly, however, the CRs generated by these weak shocks could be sufficient to explain the observed non-thermal signatures from clusters of galaxies.
LYMANα EMITTERS BEYOND REDSHIFT 5: THE DAWN OF GALAXY FORMATION
TANIGUCHI YOSHIAKI;SHIOYA YASUHIRO;AJIKI MASARU;FUJITA SHINOBU S.;NAGAO TOHRU;MURAYAMA TAKASHI 123
The 8m class telescopes of ground-based optical astronomy, together with help from the ultra-sharp eye of the Hubble Space Telescope, have enabled us to observe forming galaxies beyond redshift z = 5. In particular, more than twenty Ly$\alpha$-emitting galaxies have already been found at z > 5. These findings provide us with useful hints to investigate how galaxies formed and then evolved in the early universe. Further, detailed analysis of Ly$\alpha$ emission line profiles is useful in exploring the nature of the intergalactic medium, because the trailing edge of cosmic reionization could be close to $z \sim 6-7$, at which forming galaxies have been found recently. We also discuss the importance of superwinds from forming galaxies at high redshift, which bear on the intimate relationship between galaxies and the intergalactic medium. We then give a review of early cosmic star formation history based on recent progress in searching for Ly$\alpha$-emitting young galaxies beyond redshift 5.
SINGLY-PEAKED P-CYGNI TYPE LYα FROM STARBURST GALAXIES
AHN SANG-HYEON 145
P-Cygni type Ly$\alpha$ from starburst galaxies, either nearby galaxies or Lyman Break galaxies, is believed to be formed by galactic outflows such as galactic supershells or galactic superwinds. We develop a Monte Carlo code to calculate the Ly$\alpha$ line transfer in a galactic supershell which is expanding and formed of uniform and dusty neutral hydrogen gas. The escape of Ly$\alpha$ photons from the system is achieved by a number of back-scatterings, and a series of emission peaks are formed by these back-scatterings. When we observe P-Cygni type Ly$\alpha$ emission from star-forming galaxies, however, we usually see only a single emission peak; hence the secondary and tertiary emission humps must be destroyed. For this to happen, dust must extend spatially further into the inner cavity than the neutral supershell. We find that the kinematic information of the expanding supershell is preserved even in dusty media. We discuss the astrophysical applications of our results.
THE COSMIC EVOLUTION OF LUMINOUS INFRARED GALAXIES: STRONG INTERACTIONS/MERGERS OF GAS-RICH DISKS
SANDERS D. B. 149
Deep surveys at mid-infrared through submillimeter wavelengths indicate that a substantial fraction of the total luminosity output from galaxies at high redshift (z > 1) emerges at wavelengths of 30-300 ${\mu}m$. In addition, much of the star formation and AGN activity associated with galaxy building at these epochs appears to reside in a class of luminous infrared galaxies (LIGs), often so heavily enshrouded in dust that they appear as 'blank fields' in deep optical/UV surveys. Here we present an update on the state of our current knowledge of the cosmic evolution of LIGs from z = 0 to $z \sim 4$, based on the most recent data obtained from ongoing ground-based redshift surveys of sources detected in ISO and SCUBA deep fields. A scenario for the origin and evolution of LIGs in the local Universe (z < 0.3), based on results from multiwavelength observations of several large complete samples of luminous IRAS galaxies, is then discussed.
OPTICAL AND NEAR-INFRARED IMAGING OF THE IRAS 1-JY SAMPLE OF ULTRALUMINOUS INFRARED GALAXIES
KIM D.-C. 159
Optical (R) and near-infrared (K') images of the IRAS 1-Jy sample of 118 ultraluminous infrared galaxies have been studied. All but one object in the 1-Jy sample show signs of strong tidal interaction/merger. Most of them harbor a single disturbed nucleus and are therefore in the later stages of a merger event. Single-nucleus ULIGs show a broad distribution in host magnitudes, with significant overlap with those of quasars. The same statement applies to R - K' colors in ULIG and quasar hosts. An analysis of the surface brightness profiles of the host galaxies in single-nucleus sources reveals that about $35\%$ of the R and K' surface brightness profiles are well fit by an elliptical-like $R^{1/4}$-law, while only $2\%$ are well fit by an exponential disk. Another $38\%$ of the single-nucleus systems are fit equally well with an exponential or de Vaucouleurs profile. Elliptical-like hosts are most common among merger remnants with Seyfert 1 nuclei ($83\%$) and Seyfert 2 optical characteristics ($69\%$). The mean effective radius of these ULIGs is 4.80 $\pm$ 1.37 kpc at R and 3.48 $\pm$ 1.39 kpc at K'. These values are in excellent agreement with recent quasar measurements obtained in the H band with HST. The hosts of elliptical-like 1-Jy systems follow, with some scatter, the same ${\mu}_e - r_e$ relation, giving credence to the idea that some of these objects may eventually become elliptical galaxies if they get rid of their excess gas or transform this gas into stars.
STARBURST AND AGN CONNECTIONS AND MODELS
SCOVILLE NICK 167
There is accumulating evidence for a strong link between nuclear starbursts and AGN. Molecular gas in the central regions of galaxies plays a critical role in fueling nuclear starburst activity and feeding central AGN. The dense molecular ISM is accreted to the nuclear regions by stellar bars and galactic interactions. Here we describe recent observational results for the OB star forming regions in M51 and the nuclear starburst in Arp 220, both of which have approximately the same rate of star formation per unit mass of ISM. We suggest that the maximum efficiency for forming young stars is an Eddington-like limit imposed by the radiation pressure of newly formed stars acting on the interstellar dust. This limit corresponds to approximately 500 $L_{\odot}/M_{\odot}$ for optically thick regions in which the radiation has been degraded to the NIR. Interestingly, we note that some of the same considerations can be important in AGN, where the source of fuel is provided by stellar evolution mass-loss or ISM accretion. Most of the stellar mass-loss occurs from evolving red giant stars, and whether their mass-loss can be accreted to a central AGN or not depends on the radiative opacity of the mass-loss material. The latter depends on whether the dust survives or is sublimated (due to radiative heating). This, in turn, is determined by the AGN luminosity and the distance of the mass-loss stars from the AGN. Several AGN phenomena such as the broad emission and absorption lines may arise in this stellar mass-loss material. The same radiation pressure limit to the accretion may arise if the AGN fuel is from the ISM, since the ISM dust-to-gas ratio is the same as that of stellar mass-loss.
MASSIVE BLACK HOLE EVOLUTION IN RADIO-LOUD ACTIVE GALACTIC NUCLEI
FLETCHER ANDRE B. 177
Active galactic nuclei (AGNs) are distant, powerful sources of radiation over the entire electromagnetic spectrum, from radio waves to gamma-rays. There is much evidence that they are driven by gravitational accretion of stars, dust, and gas onto central massive black holes (MBHs) imprisoning anywhere from $\sim$1 to $\sim$10,000 million solar masses; such objects may naturally form in the centers of galaxies during their normal dynamical evolution. A small fraction of AGNs, of the radio-loud type (RLAGNs), are somehow able to generate powerful synchrotron-emitting structures (cores, jets, lobes) with sizes ranging from pc to Mpc. A brief summary of AGN observations and theories is given, with an emphasis on RLAGNs. Preliminary results from the imaging of 10000 extragalactic radio sources observed in the MIT-VLA snapshot survey, and from a new analytic theory of the time-variable power output from Kerr black hole magnetospheres, are presented. To better understand the complex physical processes within the central engines of AGNs, it is important to confront the observations with theories, from the viewpoint of analyzing the time-variable behaviours of AGNs, which have been recorded over both 'short' human ($10^0-10^9\;s$) and 'long' cosmic ($10^{13}-10^{17}\;s$) timescales. Some key ingredients of a basic mathematical formalism are outlined, which may help in building detailed Monte-Carlo models of evolving AGN populations; such numerical calculations should be potentially important tools for the useful interpretation of the large amounts of statistical data now publicly available for both AGNs and RLAGNs.
ON THE FORMATION OF GIANT ELLIPTICAL GALAXIES AND GLOBULAR CLUSTERS
LEE MYUNG GYOON 189
I review the current status of understanding when, how long, and how giant elliptical galaxies formed, focusing on their globular clusters. Several lines of observational evidence show that massive elliptical galaxies formed at z > 2 (> 10 Gyr ago). Giant elliptical galaxies mostly show a bimodal color distribution of globular clusters, indicating a factor of $\approx$ 20 metallicity difference between the two peaks. The red globular clusters (RGCs) are closely related to the stellar halo in color and spatial distribution, while the blue globular clusters (BGCs) are not. The ratio of the number of RGCs to that of BGCs varies from galaxy to galaxy. It is concluded that the BGCs might have formed 12-13 Gyr ago, while the RGCs and giant elliptical galaxies might have formed together 10-11 Gyr ago. It remains to explain the existence of a gap between the RGC formation epoch and the BGC formation epoch, and the rapid metallicity increase during the gap (${\Delta}t{\approx}$ 2 Gyr). If hierarchical merging can form a significant number of giant elliptical galaxies > 10 Gyr ago, several observational constraints from stars and globular clusters in elliptical galaxies can be explained.
CHANDRA X-RAY OBSERVATIONS OF EARLY TYPE GALAXIES
KIM DONG-WOO 213
We review recent observational results on early type galaxies obtained with high spatial resolution Chandra data. With its unprecedented spatial resolution, Chandra reveals many intriguing features in early type galaxies which were not identified with previous X-ray missions. In particular, various fine structures of the hot ISM in early type galaxies are detected, for example X-ray cavities which are spatially coincident with radio jets/lobes, indicating interaction between the hot ISM and radio jets. Also, point sources (mostly LMXBs) are individually resolved down to $L_X$ = a few ${\times}\,10^{37}\;erg\;s^{-1}$, and it is for the first time possible to unequivocally investigate their properties and their X-ray luminosity function (XLF). After correcting for incompleteness, the XLF of LMXBs is well reproduced by a single power law with a slope of $-1.0$ to $-1.5$, in contrast to the previous report of an XLF break at $L_{X,Edd} = 2{\times}10^{38}\;erg\;s^{-1}$ (i.e., the Eddington luminosity of a neutron star binary). Carefully considering both detected and undetected (hidden) populations of point sources, we further discuss the XLF of LMXBs and the metal abundance of the hot ISM and their impact on the properties of early type galaxies.
SECULAR EVOLUTION OF SPIRAL GALAXIES
ZHANG XIAOLEI 223
It is now a well established fact that galaxies undergo significant morphological transformation during their lifetimes, manifesting as an evolution along the Hubble sequence from the late to the early Hubble types. The physical processes commonly believed to be responsible for this observed evolutionary trend, i.e. major and minor mergers, as well as gas accretion under a barred potential, though of demonstrated applicability to selected types of galaxies, have on the whole failed to reproduce the most important statistical and internal properties of galaxies. The secular evolution mechanism reviewed in this paper has the potential to overcome most of the known difficulties of the existing theories and to provide a natural and coherent explanation of the properties of present day as well as high-redshift galaxies.
SECULAR EVOLUTION OF BARRED GALAXIES
ANN HONG BAE 241
Owing to several lines of observational evidence and theoretical predictions for the morphological evolution of galaxies, it is now widely accepted that galaxies do evolve from late types to early ones along the Hubble sequence. It is also well established that non-axisymmetric potentials of bar-like or oval mass distributions can change the morphology of galaxies significantly during the Hubble time. Here, we review the observational and theoretical grounds of the secular evolution driven by bar-like potentials, and present the results of SPH simulations for the response of gaseous disks to the imposed potentials, to explore the secular evolution in the central regions of barred galaxies.
THE ASTRO-F ALL SKY SURVEY
PEARSON CHRIS;LEE HYUNG MOK;TEAM ASTRO-F 249
ASTRO-F is the next generation Japanese infrared space mission of the Institute of Space and Astronautical Science. ASTRO-F will be dedicated to an All Sky Survey in the far-infrared in 4 bands from 50-200 microns, with 2 additional mid-infrared bands at 9 microns and 20 microns. This will be the first all sky survey in the infrared since the groundbreaking IRAS mission almost 20 years ago, and the first ever survey at 170 microns. The All Sky Survey should detect tens of millions of sources in the far-infrared bands, most of which will be dusty, luminous and ultra-luminous star-forming galaxies, with as many as half lying at redshifts greater than unity. In this contribution, the ASTRO-F mission and its objectives are reviewed and many of the mission expectations are discussed.
Henry Berthold Mann
Henry B. Mann, our friend and former colleague, passed away on February 1, 2000 in Tucson, Arizona. A mathematician of international fame, Mann, in a career of more than fifty years, made significant contributions to algebra, number theory, statistics, and combinatorics.
Henry Mann was born October 27, 1905, in Vienna to Oscar and Friedrike (Schönnhof) Mann. He received his Ph.D. degree in mathematics in 1935 from the University of Vienna where, as a student of Philipp Furtwängler, he wrote his dissertation in algebraic number theory. After a year of teaching school in Vienna and a couple of years spent in research and tutoring, he emigrated in 1938 to the United States.
In New York he earned his living for several years primarily by tutoring. He had by then developed an interest in mathematical statistics, particularly in the analysis of variance, and in the problem of designing experiments with a view to their statistical analysis. He later contributed to this subject in a number of research papers and in his book (1949) "Analysis and Design of Experiments."
One of Mann's most remarkable achievements was his discovery in 1941 of a proof of a celebrated conjecture of Schnirelmann and Landau in additive number theory. This conjecture had its origin in the work of L. Schnirelmann in the early 1930s. Let $A$, $B$, $C$ be sets of positive integers. Form $A^0$, $B^0$ by adjoining $0$ to $A$ and $B$ respectively. Let $A(n)$ be the number of positive integers in $A$ that are $\le n$. The greatest lower bound of the quotients $A(n)/n$ is called the density of $A$. Let $C^0$ consist of all integers of the form $a+b$ ($a\in A^0$, $b\in B^0$).
Let $\alpha$ be the density of $A$, $\beta$ the density of $B$, and $\gamma$ the density of $C$. It had been conjectured by E. Landau, I. Schur, and A. Khintchine that
$$\gamma \ge \alpha + \beta \quad \text{or} \quad \gamma = 1. \qquad (*)$$
Approximations to this inequality had been obtained by Landau in 1930, who showed that $\gamma \ge \alpha + \beta -\alpha \beta$, and by A. Brauer in 1941, who showed that $\gamma \ge (9/10)(\alpha + \beta)$. Schnirelmann had shown that
(1) $\gamma \ge \alpha+\beta - \alpha \beta$;
(2) $C$ contains all positive integers if $\alpha + \beta \ge 1$.
From these two rules Schnirelmann readily obtained the result that any set having positive density is a basis for the integers (that is, if $\alpha > 0$, then the sum of $A$ with itself sufficiently many times contains all positive integers). As an application of these ideas, Schnirelmann proved (for the first time) the existence of a value $k$ such that every integer greater than 1 is the sum of at most $k$ primes. This he did by showing that $P + P$, where $P$ is the set of primes together with 1, has positive density, hence is a basis of the integers.
Out of further study of these ideas by Schnirelmann and by E. Landau, there arose the conjecture that (1) and (2) may be replaced by the much stronger statement $(*)$: Either $\gamma \ge \alpha + \beta$ or $C$ contains all positive integers.
This conjecture, appealing in its apparent simplicity, soon attracted wide attention. Many distinguished mathematicians attempted to find a proof; indeed, partial results were obtained over the next decade by E. Landau, A. Khintchine, A. Besicovitch, I. Schur, and A. Brauer.
It was this conjecture that Mann succeeded in proving in 1941. His interest in the problem had been aroused through the lectures of A. Brauer at New York University. Actually, he proved the still sharper statement:
$$\frac{C(n)}{n}\;\geq\;\min_{\substack{0 < m \le n \\ m\not\in C}}\left(1,\;\frac{A(m)+B(m)}{m}\right).$$
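As a quick illustration (a minimal sketch in R; the sets A and B below are arbitrary examples of ours, not data from Mann's work), the inequality can be checked numerically for small finite sets:

```r
# Numerically check Mann's inequality for sample sets A and B of positive
# integers; C is the positive part of the sumset of A ∪ {0} and B ∪ {0}.
A <- c(1, 2, 3, 5, 7, 8)
B <- c(1, 4, 6, 9)
n <- 10

C <- sort(unique(as.vector(outer(c(0, A), c(0, B), `+`))))
C <- C[C > 0]

count <- function(S, m) sum(S <= m)   # counting function S(m)

lhs   <- count(C, n) / n
m_out <- setdiff(1:n, C)              # integers m <= n not in C
rhs   <- if (length(m_out) == 0) 1 else
  min(1, sapply(m_out, function(m) (count(A, m) + count(B, m)) / m))

lhs >= rhs                            # TRUE, as the theorem guarantees
```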
For his proof he was awarded the Cole Prize in Number Theory by the American Mathematical Society in 1946. The technique that Mann introduced in his proof, and its various modifications, have led to further important results in additive number theory and have also proved useful in the more general setting of additive problems in groups.
In 1942 Mann was the recipient of a Carnegie Fellowship for the study of statistics at Columbia University. At Columbia he had the opportunity of working with Abraham Wald in the department of economics, which at that time was headed by Harold Hotelling. He taught for a year (1943-1944) in the Army Specialized Training Program at Bard College; he spent a year (1944-1945) as research associate at Ohio State University, and six months as research associate at Brown University. In 1946 he returned to Ohio State to join the mathematics faculty where, as associate professor (1946-1948) and full professor (1948-1964), he was actively engaged in teaching and research for many years. After retiring from Ohio State, he held professorships at the University of Wisconsin Mathematics Research Center from 1964 to 1971, and at the University of Arizona from 1971 until his second retirement in 1975.
Mann's research interests in algebra and combinatorics covered a wide range. He had a special fondness, though, for algebraic number theory and Galois theory, and imparted his enthusiasm for these subjects to many students over the years. Besides his dozen or so papers that contribute directly to these subjects, several of his papers on difference sets and coding theory contain beautiful applications of theorems on algebraic numbers and Galois theory.
Mann married Anna Löffler on July 19, 1935, and had one son Michael.
Bibliography of Henry B. Mann
Ein Satz über Normalteiler, Anz. Österreich. Akad. Wiss. Math.-Naturwiss. Kl. (1935), Nr. 6, 49-50.
Über eine notwendige Bedingung für die Ordnung einfacher Gruppen, Anz. Österreich. Akad. Wiss. Math.-Naturwiss. Kl. (1935), Nr. 19, 209-210.
Untersuchungen über Wabenzellen bei allgemeiner Minkowskischer Metrik, Mh. Math. Phys. 42 (1935), 417-424.
Über die Erzeugung von Darstellungen von Gruppen durch Darstellungen von Untergruppen, Mh. Math. Phys. 46 (1937), 74-83.
A proof of the fundamental theorem on the density of sums of sets of positive integers. Ann. of Math. 43 (1942), 523-527.
On the choice of the number of class intervals in the application of the chi square test. Ann. Math. Stat. 13 (1942), 306-317 (with A. Wald).
The construction of orthogonal Latin squares, Ann. Math. Stat. 13 (1942), 418-423.
Quadratic forms with linear constraints, Amer. Math. Monthly 50 (1943), 430-433.
On stochastic limit and order relationships, Ann. Math. Stat. 14 (1943), 217-226 (with A. Wald).
On the statistical treatment of linear stochastic difference equations. Econometrica 11 (1943), 173-220 (with A. Wald).
On the construction of sets of orthogonal Latin squares, Ann. Math. Stat. 14 (1943), 401-414.
On orthogonal Latin squares, Bull. Amer. Math. Soc. 50 (1944), 249-257.
On certain systems which are almost groups, Bull. Amer. Math. Soc. 50 (1944), 879-881.
On a problem of estimation occurring in public opinion polls, Ann. Math. Stat. 16 (1945), 85-90. [A correction appears in Ann. Math. Stat. 17 (1946), 87-88.]
On a test for randomness based on signs of differences, Ann. Math. Stat. 16 (1945), 193-199.
Note on a paper by C. W. Cotterman and L. U. Snyder, Ann. Math. Stat. 16 (1945), 311-312.
Nonparametric tests against trend, Econometrica 13 (1945), 245-259.
Correction of G-M counter data, Phys. Rev. 68(1945), 40-43 (with J. D. Kurbatov).
A note on the correction of Geiger-Müller counter data, Quart. J. Mech. Appl. Math. 4 (1946), 307-309.
On a test of whether one of two random variables is stochastically larger than the other, Ann. Math. Stat. 18 (1947), 50-60 (with D. R. Whitney).
Integral extensions of a ring, Bull. Amer. Math. Soc. 55 (1949), 592-594 (with H. Chatland).
On the field of origin of an ideal, Canad. J. Math. 2 (1950), 16-21.
On the number of integers in the sum of two sets of positive integers, Pacific J. Math. 1(1951), 249-253.
On the realization of stochastic processes by probability distributions in function spaces, Sankhya 11(1951), 3-8.
The estimation of parameters in certain stochastic processes, Sankhya 11(1951), 97-106.
On simple difference sets, Sankhya 11(1951), 357-364 (with T. A. Evans).
On products of sets of group elements, Canad. J. Math. 4 (1952), 64-66.
Some theorems on difference sets, Canad. J. Math. 4 (1952), 222-226.
On the estimation of parameters determining the mean value function of a stochastic process, Sankhya 12 (1952), 117-120.
An addition theorem for sets of elements of Abelian groups, Proc. Amer. Math. Soc. 4 (1953), 423.
Systems of distinct representatives, Amer. Math. Monthly 60 (1953), 397-401 (with H. J. Ryser).
On the moments of stochastic integrals, Sankhya 12 (1953), 347-350 (with A. P. Calderón).
On integral closure, Canad. J. Math. 6 (1954), 471-473 (with H. S. Butts and M. Hall Jr.).
On an exceptional phenomenon in certain quadratic extensions. Canad. J. Math. 6 (1954), 474-476.
A generalization of a theorem of Ankeny and Rogers, Rend. Circ. Mat. Palermo 3(1954), 106-108.
A theory of estimation for the fundamental random process and the Ornstein Uhlenbeck process, Sankhya 13 (1954), 325-350.
On the efficiency of the least square estimates of parameters in the Ornstein Uhlenbeck process, Sankhya 13 (1954). 351-358 (with P. B. Moranda).
Corresponding residue systems in algebraic number fields, Pacific J. Math. 6 (1956), 211-224 (with H. S. Butts).
On integral bases, Proc. Amer. Math. Sac. 9 (1958), 167-172.
A note to the paper "On integral bases" by H. B. Mann, Proc. Amer. Math. Soc. 9 (1958), 173-174 (with V. Hanly).
Some applications of the Cauchy-Davenport theorem, Norske Vid. Selsk. Forh. (Trondheim) 32 (1959), 74-80 (with S. Chowla and E. G. Straus).
The algebra of a linear hypothesis. Ann. Math. Stat. 31(1960), 1-15.
A refinement of the fundamental theorem on the density of the sum of two sets of integers, Pacific J. Math. 10 (1960), 909-915.
Intrablock and interblock estimates, in "Contributions to Probability and Statistics," pp. 293-298. Stanford Univ. Press, Stanford, California, 1960 (with M. V. Menon).
On modular computation, Math. Comput. 15 (1961), 190-192.
An inequality suggested by the theory of statistical inference, Illinois J. Math. 6 (1962), 131-136.
On the number of information symbols in Bose-Chaudhuri codes, Information and Control 5 (1962), 153-162.
Main effects and interactions, Sankhya Ser. A 24 (1962), 185-202.
Balanced incomplete block designs and Abelian difference sets, Illinois J. Math. 8 (1964), 252-261.
On the casus irreducibilis, Amer. Math. Monthly 71(1964), 289-290.
Decomposition of sets of group elements, Pacific J. Math. 14 (1964), 547-558 (with W. B. Laffer).
On multipliers of difference sets, Canad. J. Math. 17 (1965), 541-542 (with R. L. McFarland).
Difference sets in elementary Abelian groups, Illinois J. Math. 9(1965), 212-219.
On linear relations between roots of unity, Mathematika 12 (1965), 107-117.
Recent advances in difference sets, Amer. Math. Monthly 74 (1967), 229-235.
On canonical bases of ideals, J. Combinatorial Theory 2 (1967), 71-76 (with K. Yamamoto).
Sums of sets in the elementary Abelian group of type (p,p), J. Combinatorial Theory 2 (1967), 275-284 (with J. E. Olson).
Two addition theorems, J. Combinatorial Theory 3 (1967), 233-235.
Properties of differential forms in n real variables. Pacific J. Math. 21(1967), 525-529 (with J. Mitchell and L. Schoenfeld). [A correction appears in Pacific J. Math. 23 (1967), 631.]
On the p-rank of the design matrix of a difference set, Information and Control 12 (1968), 474-488 (with F. J. MacWilliams).
On orthogonal m-pods on a cone, J. Combinatorial Theory 5 (1968), 302-307.
A new proof of the maximum principle for doubly-harmonic functions, Pacific J. Math. 27 (1968). 567-571 (with J. Mitchell and L. Schoenfeld).
On canonical bases for subgroups of an Abelian group. in "Combinatorial Mathematics and its Applications" (Proc. Conf., Univ. North Carolina, Chapel Hill, 1967), 38-54, Univ. of North Carolina Press, Chapel Hill, 1969.
A note on balanced incomplete block designs, Ann. Math. Stat. 40 (1969), 679-680.
On multipliers of difference sets, Illinois J. Math. 13(1969), 378-382 (with S. K. Zaremba).
On the difference between the geometric and the arithmetic mean of n quantities, Advances in Math. 5 (1970), 472-473 (with C. Loewner).
Linear equations over a commutative ring, J. Algebra 18(1971), 432-446 (with P. Camion and L. S. Levy).
Antisymmetric difference sets, J. Number Theory 4 (1972), 266-268 (with P. Camion).
Representations by kth powers in GF(q), J. Number Theory 4 (1972), 269-273 (with G. T. Diderrich).
A necessary and sufficient condition for primality and its source, J. Combinatorial Theory Ser. A. 13(1972), 131-134 (with D. Shanks).
Combinatorial problems in finite Abelian groups, in "A Survey of Combinatorial Theory" (Proc. Internat. Symp. Combinatorial Math. and Its Appl., Colorado State Univ., Fort Collins, Colo., 1971), pp. 95-100, North-Holland, Amsterdam, 1973 (with G. T. Diderrich).
On Hadamard difference sets, in "A Survey of Combinatorial Theory" (Proc. Internat. Symp. Combinatorial Math. and Its Appl., Colorado State Univ., Fort Collins, Colo., 1971), pp. 333-334, North-Holland, Amsterdam, 1973 (with R. L. McFarland).
Prüfer rings, J. Number Theory 5 (1973), 132-138 (with P. Camion and L. S. Levy).
Additive group theory- a progress report, Bull. Amer. Math. Soc. 79 (1973), 1069-1075.
The solution of equations by radicals, J. Algebra 29(1974), 551-554.
Lectures on error correcting codes, The University of Arizona Department of Mathematics Lecture Note Series, University of Arizona, Tucson, Ariz., 1974, iii+88 pp. (with D. K. Ray-Chaudhuri).
On normal radical extensions of the rationals, Linear and Multilinear Algebra 3 (1975), 73-80 (with W. Y. Vélez).
Prime ideal decomposition in F, Monatsh. Math. 81 (1976), 131-139 (with W. Y. Vélez).
A short proof of Fermat's theorem for n = 3, Math. Student 46 (1978), 103-104 (with W. A. Webb).
An addition theorem for the elementary abelian group of type (p,p), Monatsh. Math. 102(1986), 273-308 (with Y. F. Wou).
"Analysis and Design of Experiments," Dover, New York, 1949.
"Introduction to Algebraic Number Theory," Ohio State Univ. Press, Columbus, 1955.
"Addition Theorems: The Addition Theorems of Group Theory and Number Theory," Wiley (Interscience), New York, 1965.
Ph. D. Students of Henry B. Mann
Donald Ransom Whitney, Ohio State University, 1949
George Marsaglia, Ohio State University, 1951
Hubert Spence Butts, Jr., Ohio State University, 1953
Walter Wilson Hoy, Ohio State University, 1953
Chio-Shih Lin, Ohio State University, 1955
Leon Royce McCulloh, Ohio State University, 1959
Manavazhi Vijaya Krishna Menon, Ohio State University, 1959
Walter Ball Laffer, I, Ohio State University, 1963
George T. Diderrich, University of Wisconsin-Madison, 1972
William Yslas Vélez, University of Arizona, 1975
Ying Fou Wou, University of Arizona, 1980
How to find the slope of a line with two points?
We can calculate the slope of a line using two points that are part of the line. To do this, we form a fraction in which the numerator is the change in the y-coordinates of the points and the denominator is the change in their x-coordinates.
Here, we will learn about the formula that we can apply to calculate the slope using two points. Then, we will apply this formula to solve some problems.
Calculating the slope of a line using two points.
Formula for the slope using two points
Slopes of common lines
Examples with answers of slope of a line using two points
Slope of a line using two points – Practice problems
We can find the formula for the slope by using the coordinates of two points that are part of the line. The slope equals the change in y divided by the change in x. Therefore, if we have the points $latex A=(x_{1}, y_{1})$ and $latex B = (x_{2}, y_{2})$, the slope formula is:
Formula for the slope
$latex m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}$
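As a computational companion (a minimal sketch in R; the `slope` function name is our own, not part of the original article), the formula translates directly into code, with the vertical-line case handled explicitly:

```r
# Slope of the line through (x1, y1) and (x2, y2); returns NA when the
# line is vertical, since division by zero leaves the slope undefined.
slope <- function(x1, y1, x2, y2) {
  if (x2 == x1) return(NA_real_)  # vertical line: undefined slope
  (y2 - y1) / (x2 - x1)
}

slope(1, 3, 3, 7)   # 2, matching the first worked example below
slope(2, 1, 2, 8)   # NA: vertical line
```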
Using the slope formula, we can determine the slopes of some common lines for reference.
Slope of a horizontal line
A horizontal line has no inclination with respect to the x-axis, so its slope is equal to 0. The y coordinates of all points on a horizontal line are the same, so when using the slope formula, we have:
$latex m=\frac{0}{x_{2}-x_{1}}$
$latex m=0$
Slope of a vertical line
A vertical line has an undefined slope. All points on a vertical line have coordinates in x that are the same, so when applying the slope formula, we have:
$latex m=\frac{y_{2}-y_{1}}{0}$
We know that division by 0 is undefined.
Slope of parallel lines
For two or more lines to be parallel, their slopes must be equal. For example, suppose we have the lines $latex l_{1}$ and $latex l_{2}$ with slopes $latex m_{1}$ and $latex m_{2}$ respectively. If these lines are parallel, we must have:
$latex m_{1}=m_{2}$
Slope of perpendicular lines
The slopes of two perpendicular lines are the negative reciprocals of each other. For example, suppose we have the lines $latex l_{1}$ and $latex l_{2}$ with slopes $latex m_{1}$ and $latex m_{2}$ respectively. If these lines are perpendicular, we must have:
$latex m_{1}=-\frac{1}{m_{2}}$
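Both conditions are easy to check numerically (a small R sketch, reusing the hypothetical `slope` helper defined earlier):

```r
m1 <- slope(0, 0, 2, 4)    # line l1 through (0,0) and (2,4): slope 2
m2 <- slope(0, 1, 2, 5)    # line l2 through (0,1) and (2,5): slope 2
m3 <- slope(0, 0, 4, -2)   # line l3 through (0,0) and (4,-2): slope -1/2

m1 == m2         # TRUE: l1 and l2 are parallel
m1 == -1 / m3    # TRUE: l1 and l3 are perpendicular
```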
In the following examples, the slope formula is applied to the two given points to obtain the answer. Try to solve the problems yourself before looking at the solution.
We have a line that contains the points (1, 3) and (3, 7). What is its slope?
We have the two points:
$latex (x_{1}, y_{1})=(1, 3)$
$latex (x_{2}, y_{2})=(3, 7)$
We apply the slope formula with the two given points:
$latex m=\frac{7-3}{3-1}$
$latex m=\frac{4}{2}$
The slope of the line is 2.
What is the slope of a line that has the points (3, 2) and (8, 3)?
We have the following coordinates of the points:
$latex (x_{1}, y_{1})=(3, 2)$
$latex (x_{2}, y_{2})=(8, 3)$
We use these coordinates in the slope formula and we have:
$latex m=\frac{3-2}{8-3}$
$latex m=\frac{1}{5}$
The slope of the line is $latex \frac{1}{5}$.
The points (-1, 3) and (6, -4) are part of a line. What is its slope?
We have the following points:
$latex (x_{1}, y_{1})=(-1, 3)$
$latex (x_{2}, y_{2})=(6, -4)$
When we apply the slope formula with these coordinates, we have:
$latex m=\frac{-4-3}{6-(-1)}$
$latex m=\frac{-7}{7}$
$latex m=-1$
The slope of the line is $latex -1$.
What is the slope of a line that contains the points (-3, -2) and (3, -5)?
We can write the coordinates as follows:
$latex (x_{1}, y_{1})=(-3, -2)$
$latex (x_{2}, y_{2})=(3, -5)$
We use the slope formula with the given coordinates:
$latex m=\frac{-5-(-2)}{3-(-3)}$
$latex m=\frac{-3}{6}$
$latex m=-\frac{1}{2}$
The slope of the line is $latex -\frac{1}{2}$.
Solve the following practice problems using the slope formula with the given points. If you need help with this, you can look at the solved examples above.
Find the slope of a line that passes through the points (1, 2) and (5, 3).
$latex m=0.25$
$latex m=0.5$
If the points (-3, 1) and (2, 4) are part of a line, what is the slope?
What is the slope of a line that has the points (-2, 1) and (2, -3)?
$latex m=-0.5$
$latex m=-0.75$
Find the slope of a line that contains the points (-3, -2) and (1, -10).
Interested in learning more about the midpoint, slope, and distance on the plane? Take a look at these pages:
How to find the midpoint of a line segment?
Distance between two points – Formula and examples
Influence of socio-economic, demographic and climate factors on the regional distribution of dengue in the United States and Mexico
Matthew J. Watts1,
Panagiota Kotsila1,4,
P. Graham Mortyn1,5,
Victor Sarto i Monteys1,3 &
Cesira Urzi Brancati2
This study examines the impact of climate, socio-economic and demographic factors on the incidence of dengue in regions of the United States and Mexico. We select factors shown to predict dengue at a local level and test whether the association can be generalized to the regional or state level. In addition, we assess how different indicators perform compared to per capita gross domestic product (GDP), an indicator that is commonly used to predict the future distribution of dengue.
A unique spatial-temporal dataset was created by collating information from a variety of data sources to perform empirical analyses at the regional level. Relevant regions for the analysis were selected based on their receptivity and vulnerability to dengue. A conceptual framework was elaborated to guide variable selection. The relationship between the incidence of dengue and the climate, socio-economic and demographic factors was modelled via a Generalized Additive Model (GAM), which also accounted for the spatial and temporal auto-correlation.
The socio-economic indicator (representing household income, education of the labour force, life expectancy at birth, and housing overcrowding) and more extensive access to broadband are associated with a drop in the incidence of dengue; by contrast, population growth and inter-regional migration are associated with higher incidence, after taking climate into account. An ageing population is also a predictor of higher incidence, but the relationship is concave and flattens at high rates. The rate of active physicians is associated with higher incidence, most likely because of more accurate reporting. When focusing on Mexico only, results remain broadly similar; however, workforce education was a better predictor of a drop in the incidence of dengue than household income.
Two lessons can be drawn from this study: first, while higher GDP is generally associated with a drop in the incidence of dengue, a more granular analysis reveals that the crucial factors are a rise in education (with fewer jobs in the primary sector) and better access to information or technological infrastructure. Secondly, factors that were shown to have an impact on dengue at the local level are also good predictors at the regional level. These indices may help us better understand the factors responsible for the global distribution of dengue and, given a warming climate, may also help us to better predict vulnerable populations on a larger scale.
The dengue virus (DENV) is one of the most important mosquito-borne viral diseases in the world today. Two main arthropod vectors are responsible for transmission of dengue viruses: Aedes aegypti (commonly known as yellow fever mosquito) and Aedes albopictus (commonly known as tiger mosquito). A. aegypti mainly feeds on humans and is highly adapted to human habitations and urban areas; A. albopictus feeds on animals and humans and is more prevalent in rural and peri-urban environments. While A. albopictus is also responsible for dengue transmission among humans, it is a less likely vector than A. aegypti since it is adapted to a wider range of environments and has less restrictive feeding habits [1]. Both Aedes mosquitoes are highly adapted to breeding in aquatic habitats like ponds and lakes, but also micro habitats, such as tree-holes, rock crevices and even leaf axils [2]. The latter behaviour in recent times has benefited both species by allowing them to exploit a range of man-made aquatic breeding habitats, where water can accumulate, like urban gardens, vases in cemeteries, discarded bottles and plant pots; therefore, both species can survive in drier climates than expected, by exploiting artificial water sources.
Dengue is a disease caused by any one of four closely related viruses: DENV 1, DENV 2, DENV 3, or DENV 4. Currently, all four dengue serotypes are in circulation in the Americas and can co-circulate within a region; the actual distribution of each serotype is difficult to establish for a number of reasons, such as inadequate surveillance, under reporting, high numbers of asymptomatic carriers, and so on, as laid out by [3]. DENV causes an acute flu-like illness that affects people of all age groups. Those who recover from a dengue infection can expect lifelong immunity against that serotype and some partial, but temporary, cross-immunity to the other serotypes, although secondary infections by other serotypes increase the risk of developing severe dengue, which may cause lethal complications, and sometimes death [4].
There is currently no specific antiviral therapy for dengue fever; once the disease is contracted, there is no way to combat it other than relying on the host's immune response. Several vaccines are currently in development; however, given the current cost-effectiveness, efficacy, safety and estimated impact of vaccination, the WHO's present recommendation is to introduce it only in geographic settings (national or sub national) where the disease is particularly problematic [5].
Motivation for study
Climate change, specifically rising temperatures, is likely playing a crucial role in dengue transmission, potentially driving its expansion across the globe, as predicted by several studies [6,7,8,9,10]. Socio-economic conditions in a given location can be vital for a disease to persist once local transmission has occurred [11,12,13,14,15]; however, research in this domain, generally, does not account for socio-economic factors other than gross domestic product (GDP), which is a standard measure of the market value of all the final goods and services produced over a specific time period in a given location. Some studies have looked at the interaction between climate, socio-economic factors and demographics at a local level [16,17,18,19,20], focusing on factors specific to local areas, which means that their findings cannot be easily extrapolated to the macro level. To get better estimates of where dengue may spread, there is a need to understand how climate factors, socio-economic factors and demographic factors interact over a greater geographic scale to reveal common global patterns.
The original contribution of this article is that it selects factors shown to predict dengue at a local level and tests whether the association can be generalized to the regional or state level. In addition, we propose a more comprehensive set of socio-economic predictors of dengue transmission, to disentangle the role of GDP from other measures. Although a useful and parsimonious indicator, GDP is a very broad measure and it is not necessarily reflective of population health and well-being, distribution of wealth, discrimination and spending on public welfare [21]. More importantly, GDP alone may not be able to capture cross-regional differences. The dominant use of GDP as an indicator has been widely questioned [22,23,24,25]; for some time now researchers in human health geography, critical public health, and social epidemiology have requested more careful consideration of the contextual social and economic conditions that shape diseases at the local level [26, 27].
To this end, this study investigates regional differences in the incidence of dengue by evaluating the impact of socio-economic and demographic factors such as household income, regional rates of education, housing overcrowding, life expectancy, medical resources, migration flows, age structure of the population (the proportion of people under 14 and over 65), and population density.
The study focuses on the occurrence and distribution of dengue in Mexico and southern regions of the United States (US) where dengue has been reported, as some US regions share very similar environmental conditions but have distinct socio-economic conditions [12]. This study takes advantage of time series data between 2011 and 2019 and it is, therefore, able to exploit cross sectional variation between states, and variation over time for each state.
Dengue transmission is determined by interactions between host, vector and pathogen, and modulated by ecological, climatic and geographic factors, including socio-economic factors. Regions were selected for the empirical analysis if conditions were met in terms of their receptivity and vulnerability, based on principles laid out in the WHO's framework for malaria elimination [28].
Receptivity is defined as the ability of an ecosystem to allow transmission of a virus (dengue in this study). An ecosystem can be considered receptive if competent vectors, a suitable climate and a susceptible population are present; in other words, regions are selected if autochthonous virus transmission may occur because human populations and vector populations overlap/interact. Vulnerability occurs when either (1) a region was receptive and had regularly reported cases over the study period (endemic) or (2) bordered an endemic region and occasionally reported cases (likely due to spread or importation from neighbouring regions). We defined modulating factors as variables that influence the transmission dynamics of dengue such as host population size, host density, climate factors and medical interventions.
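In sketch form (an illustrative R snippet of ours; the data frame `regions` and its column names are assumptions, not objects from the paper), the selection rule above amounts to:

```r
# A region enters the analysis if it is receptive and either endemic
# itself or a neighbour of an endemic region (i.e. vulnerable).
regions$vulnerable <- regions$receptive &
  (regions$endemic | regions$borders_endemic)

study_set <- subset(regions, vulnerable)
```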
Since dengue is a vector borne disease, understanding the key ecological requirements of its vectors is crucial to assessing the receptivity of a region. As explained below, some of the main factors determining the receptivity of a region to dengue (due to the presence of its vectors) are: its physical environment (land use), the overlap with the human population, and its climate.
Both types of Aedes mosquitoes that transmit the dengue virus are ectothermic organisms and are highly sensitive to colder temperatures and extreme high temperatures. A. albopictus adults can survive in temperatures from 15 to 35 °C and A. aegypti from 10 to 35 °C [29], while their growth and development are severely inhibited in ambient temperatures below 13 °C or above 35 °C. A. albopictus eggs though, can go through diapause (suspended development) when exposed to extreme cold (down to − 10 °C). This adaptation allows them to inhabit environments with a wider annual temperature range, with more distinct seasonal changes than in tropical climates, where climate is more homogeneous. A. aegypti can endure a wider range of temperatures, but its survival at temperatures below 14–15 °C is limited to short periods, since its mobility is severely restricted and its ability to imbibe blood impeded. A. aegypti is also highly sensitive to fluctuations in temperatures. As for most mosquito species, availability of freshwater habitat, humidity and precipitation are highly indicative of their distribution in the environment.
To account for this, we selected a range of humidity and temperature variables for analysis which would capture mosquitoes' living requirements.
As direct measures of vulnerability we include spatial effects (neighbourhood structures) in our models in order to explicitly account for spillover effects with infected neighbouring regions (for a more detailed description see the methods section). Indirect measures of vulnerability can be derived from traditional patterns of travel and population flow in the area; indeed, well connected areas, in terms of trade and transport with considerable human movement, can benefit both mosquito species and dengue by facilitating their movement and spread [30,31,32].
Modulating factors
Modulating factors can either speed up or slow down transmission. The transmission cycle of dengue is complex, since there are several key interactions at play between the virus, host and vector. Density of both the vector and the host are fundamental factors in disease transmission, as contact between infected vectors and susceptible hosts is the source of new infections [33]. Mosquito breeding habitat can be increased by precipitation and flooding [34], temperature heavily influences mosquito hatching rate and development time [35, 36], and optimal temperature can shorten the extrinsic incubation period (EIP) [37]. While there are no datasets covering mosquito population abundance in all of our study regions, we selected meteorological variables that predict mosquito abundance and are therefore related to dengue transmission. Furthermore, there are several socio-economic risk factors for dengue, including home water storage (rather than receiving piped water), poor sanitation, and poor public services (e.g. litter not collected) [12, 38,39,40,41,42]. Such factors can create breeding habitat for mosquitoes and bring them into closer contact with humans, therefore increasing the risk of dengue. By contrast, use of mosquito nets, insect screens, and air-conditioning can limit the chance of being bitten. Similarly, knowledge of mosquito ecology can also help people make personal interventions and reduce the risk of being bitten [43]. Because there are no direct measures of home water storage or the use of mosquito nets, we use a range of socio-economic indicators as proxies, capturing a latent variable that would represent vector risk. The rationale is that people living in locations with better socio-economic conditions can avoid contact with mosquitoes and restrict virus transmission, either from the bottom up (e.g. personal interventions) or the top down (e.g. regional government pest control). However, it is important to note that factors associated with higher economic status can also bring humans into closer contact with mosquitoes, for example home owners with gardens, potted plants and ponds, or people with good access to recreational space where mosquitoes can breed [44]. In terms of post-infection factors that influence dengue transmission, access to health care, risk perception and access to information on dengue infection had positive effects on people's decision to seek medical help when presented with dengue infection symptoms [14, 43, 45]. To reflect this in the conceptual framework, we selected variables that proxy access to health care and variables that represent access to information and personal knowledge. Finally, younger people are more likely to be infected by dengue [46], so we selected variables that represent the age structure of the population.
In this study, we compiled a spatial-temporal data-set that reflects the conceptual framework. By predicting the distribution of A. albopictus and A. aegypti in Mexico and the United States, we could determine which regions were receptive, i.e. where there was an overlap between the vector distribution and the human population at risk. By combining these results with reported cases of dengue, we could determine which regions were vulnerable. We then went on to collect data on modulating factors of dengue transmission in vulnerable regions. Furthermore, our vector distribution maps allowed us to extract more accurate data on the host population at risk and on climatic factors that contribute to disease transmission.
Species distribution models to estimate regional susceptibility
Because the exact distribution of vectors is unknown, we estimated the likelihood that a vector would occur in a region conditional on a set of covariates. More specifically, we estimated the distribution of the Aedes mosquitoes using a generalized additive logistic regression, with point location occurrence data as our dependent variable, and annual temperature range, mean temperature of the coldest quarter, and precipitation during the driest quarter as covariates. Predictions were then used to select susceptible regions.
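A minimal sketch of such a model in R's mgcv package is shown below. The data frame `occ_data` and its column names are illustrative assumptions, not objects from the paper, and fitting a logistic model requires background (pseudo-absence) points alongside the presence records:

```r
library(mgcv)

# presence: 1 = Aedes occurrence point, 0 = background (pseudo-absence)
# point; the three bioclimatic covariates are extracted at each location.
sdm <- gam(presence ~ s(annual_temp_range) +
                      s(mean_temp_coldest_q) +
                      s(precip_driest_q),
           family = binomial, data = occ_data)

# Predicted probability of occurrence over a grid of climate values:
grid_data$p_occ <- predict(sdm, newdata = grid_data, type = "response")
```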
Point location occurrence data for A. aegypti and A. albopictus were obtained from a global geographic database of known occurrences between 1960 and 2014, compiled by members of the Institute of Biodiversity, Animal Health and Comparative Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow [47]. Point occurrence data represent the spatial geo-coordinates of locations at which a given individual organism was sampled or sighted. Many of the samples in this data-set consist of museum records or unpublished studies, including national entomological surveys. Since the data-set contained sparse information on the timing and frequency of each sample, we selected global observations from 1970 onward to capture the entire range of climatic conditions that each species can survive in, and to limit potential sample bias caused by the selection of localised seasonal collections. We also removed any duplicate observations, i.e. replicate coordinates.
Climate data were extracted using R's DISMO package at all point locations where mosquitoes occurred. Climate data for the species distribution prediction modelling were sourced from MERRAclim, a database compiled by members of the Department of Biology and Geology, Physics and Inorganic Chemistry, Rey Juan Carlos University [48]. This data-set was built using 2 m above surface air temperature (Kelvin degrees) and 2 m above surface specific humidity (kg of water/kg of air) satellite observations from NASA's Modern Era Retrospective Analysis for Research and Applications Reanalysis.
Figure 1 shows the results of the modelling and the Aedes sample locations. Tables providing summary statistics for the climate values at Aedes point locations can be found in Additional file 1. More specific information on the statistical methods and results from this analysis can also be found in Additional file 1.
Fig. 1 Aedes sample locations and SDM results. Top left: Aedes point locations. Top right: results of the Aedes aegypti SDM. Bottom left: results of the Aedes albopictus SDM. Bottom right: receptive regions/data extraction locations
Data extraction and methods to assess the impact of climate, demographic and socio-economic factors on dengue
The Global Administrative Unit Layers (GAUL) dataset [49], along with our Aedes distribution maps (Fig. 1, bottom right), was combined using R's sf package to create regional shapefiles that could spatially capture and process the human population and climate data for the main analysis. The GAUL dataset contains geographic information in the form of shapefiles that delineate within-country boundaries linked to a unique nomenclature. Countries are broken down into statistical subdivisions, e.g. ADM0 representing data at the country level (e.g. US) and ADM1 at the regional level (e.g. California).
Climate data for the main analysis, i.e. for measuring the impact of the climate variables on dengue transmission, were sourced from the Climate Prediction Center (CPC) of the National Centers for Environmental Prediction (NCEP), see [50]. These data represent a global summary of daily weather data. The CPC extracts surface synoptic weather observations from the Global Telecommunications System (GTS), which collects global data from a combination of weather station and satellite observations. Files were processed in R with the NetCDF, raster and dismo packages to create annual bioclimatic variables. The bioclimatic variables in this study were derived from daily maximum temperatures, daily minimum temperatures and total daily rainfall.
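One plausible way to derive annual bioclimatic variables from daily grids is sketched below with the raster and dismo packages: dismo::biovars expects monthly summaries, so the daily NetCDF layers are first aggregated to months. The file names and layer-name format are assumptions.

library(raster)  # reads NetCDF files via the ncdf4 package
library(dismo)

# Hypothetical daily grids for one year (one layer per day)
tmax_daily <- brick("cpc_tmax_2015.nc")
tmin_daily <- brick("cpc_tmin_2015.nc")
prec_daily <- brick("cpc_precip_2015.nc")

# Month index of each daily layer, assuming layer names like "X2015.01.31"
months <- as.integer(format(as.Date(names(tmax_daily), "X%Y.%m.%d"), "%m"))

# Monthly means for temperature, monthly totals for rainfall
tmax_m <- stackApply(tmax_daily, months, mean)
tmin_m <- stackApply(tmin_daily, months, mean)
prec_m <- stackApply(prec_daily, months, sum)

# The 19 annual bioclimatic variables (bio1..bio19), which include mean
# temperature of the coldest quarter and precipitation of the driest quarter
bio <- biovars(prec_m, tmin_m, tmax_m)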
Population count data used to estimate the number of persons at risk in a region were sourced from the Socioeconomic Data and Applications Center's Gridded Population of the World dataset [51]. This dataset estimates population counts for the years 2000, 2005, 2010, 2015 and 2020, consistent with national censuses and population registers. Data were extracted from areas where vector presence was predicted. R's zoo package was used to fill in values for missing years by linear interpolation between the census years. In this way, increases or decreases in the human population were controlled for in the final model.
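The interpolation step can be reproduced with zoo::na.approx, roughly as below; the panel layout, file name and column names are assumptions.

library(dplyr)
library(zoo)

# Hypothetical panel: one row per region-year, with population known only
# for the census years 2000, 2005, 2010, 2015 and 2020 and NA elsewhere
pop <- read.csv("population_panel.csv")

pop <- pop %>%
  group_by(region) %>%
  arrange(year, .by_group = TRUE) %>%
  mutate(population = na.approx(population, x = year, na.rm = FALSE)) %>%
  ungroup()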
All spatial data were aggregated to the state level.
Dengue case data
Dengue case data for Mexico 2011–2019 were obtained from the webpage of Mexico's Directorate General of Epidemiology, which provides reports on all positive serious and non-serious cases of dengue (https://www.gob.mx). All data were provided at the regional level (ADM1). Case data for the United States were extracted from https://www.cdc.gov/arbonet; since these data are provided at the county level (ADM2), we aggregated them to the state level (ADM1) to match the main dataset.
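The county-to-state aggregation is a grouped sum; a sketch with dplyr, with a hypothetical file name and column names:

library(dplyr)

# Hypothetical US county-level case data with columns state, county, year, cases
county_cases <- read.csv("arbonet_county_cases.csv")

state_cases <- county_cases %>%
  group_by(state, year) %>%
  summarise(cases = sum(cases, na.rm = TRUE), .groups = "drop")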
OECD socio-economic and demographic data
Socio-economic and demographic data were extracted from the OECD's Regional Statistics and Indicators Database [52]. This database provides comparable statistics and indicators presented as yearly time series. To capture factors determining the vulnerability of a region, we selected the variables "Inter-regional migration rate", "Population density growth" and "Gross domestic product (GDP)". For factors representing the socio-economic position of residents in a region, we selected "Household income", "Life expectancy at birth", and a measure of housing overcrowding, "Number of rooms per person". Furthermore, we selected "Secondary education", which also helps to capture areas with a higher proportion of manual labourers, e.g. agricultural workers or people working outdoors who may be more exposed to mosquitoes. We also selected "Perceived social network support", "Self-evaluation of life satisfaction" and "Perception of corruption" to capture additional features of a region, such as quality of life. Since these three variables give some indication of how people perceive their surroundings and quality of life, we assume that poorer scores will capture poor infrastructure, poor public services, lack of basic provisions and lack of beneficial government intervention. To represent access to healthcare we selected the "Active Physicians rate", and to represent access to information and personal knowledge we selected "Broadband access" (although knowledge is also captured by "Secondary education"). Finally, younger people are more likely to be infected by dengue [47], so we selected variables that represent the age structure of the population, i.e. "Percentage of Old Population Group (65+)" and "Percentage of Youth population group (0–14)". Missing values were filled based on values for previous or subsequent years, depending on their position in the dataset.
All data were joined by year of observation and region code, using R's dplyr package.
Table 1 provides summary statistics of all the collected data for the final models.
Table 1 Final dataset 2011–2019
Factor analysis—data processing for regional analysis
A preliminary correlation analysis (see Additional file 1: Figures S8, S9, diagnostics) revealed that some of the socio-economic variables are strongly correlated with each other and, if included together in a regression, would give rise to multi-collinearity issues. Multi-collinearity inflates standard errors, which can make variables appear statistically insignificant when they are in fact informative. To address this issue, following methods similar to [53], a maximum-likelihood factor analysis (with varimax rotation) was performed on the socio-economic variables.
Factor analysis is a method for investigating whether a number of variables of interest \(Y_1,Y_2,\ldots ,Y_n\) are linearly related to a smaller number of latent (i.e. not directly measured) factors \(F_1,F_2,\ldots ,F_k\). The basic concept of factor analysis is that multiple observed variables have similar patterns because they are all associated with a latent variable. The factors are constructed in such a way that they capture the maximum amount of the common variance (correlation) of the original items; the eigenvalue is a measure of how much of the variance of the observed variables a factor explains. The factor analysis can be formalized as follows:
$$\begin{aligned}Y_1&=\beta _{10}+\beta _{11}F_1+\beta _{12}F_2+\cdots +\beta _{1k}F_k+\epsilon _1\\Y_2&=\beta _{20}+\beta _{21}F_1+\beta _{22}F_2+\cdots +\beta _{2k}F_k+\epsilon _2\\&\;\;\vdots \\Y_n&=\beta _{n0}+\beta _{n1}F_1+\beta _{n2}F_2+\cdots +\beta _{nk}F_k+\epsilon _n\end{aligned}$$
Before performing the factor analysis, all variables had to be standardized to z-scores \((x-\mu )/\sigma \) to ensure that they were on the same scale. After performing the factor analysis, the predicted values for the factors for any individual region can be estimated. These predictions, known as factor scores, are weighted sums of the values of the observed items. Roughly, items with a stronger correlation with a factor component (i.e. those with larger loadings) will receive higher weights in the calculation of a score for that factor.
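In base R, stats::factanal performs maximum-likelihood factor analysis with a varimax rotation and can return regression-based factor scores; the sketch below assumes socio is a data frame holding only the socio-economic columns and that a single factor is retained.

# Hypothetical input: the socio-economic columns only
socio <- read.csv("oecd_socioeconomic.csv")

# Standardize to z-scores so all variables are on the same scale
socio_z <- scale(socio)

# Maximum-likelihood factor analysis, varimax rotation, one latent factor
fa <- factanal(socio_z, factors = 1, rotation = "varimax",
               scores = "regression")

fa$loadings               # weight of each observed variable on the factor
socio_index <- fa$scores  # factor scores: one composite value per region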
Quality of life index—data processing for regional analysis
We created a 'Quality of Life Index' by combining three variables from the OECD regional database: 'Self-evaluation of life satisfaction', 'Perceived social network support' and 'Perception of corruption'. The variables were standardised, harmonised and combined into a composite indicator capturing a latent quality-of-life measure, because each element on its own is unlikely to have a direct relationship with dengue.
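A minimal sketch of such a composite index follows; averaging the standardized components, and flipping the corruption score so that higher always means better, are our assumptions about the harmonisation step rather than details taken from the text.

# Hypothetical OECD regional table with the three components
oecd <- read.csv("oecd_regional.csv")

# Standardize each component to z-scores
z_satisf  <- scale(oecd$life_satisfaction)
z_support <- scale(oecd$social_network_support)
z_corrupt <- scale(oecd$perception_of_corruption)

# Flip corruption so higher values mean better conditions, then average
oecd$quality_of_life <- rowMeans(cbind(z_satisf, z_support, -z_corrupt))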
Generalized additive regression model to assess the impact of independent variables on dengue case data at the regional level
One of the main issues with our dataset is that it did not meet some basic assumptions for statistical inference; specifically, the data are not independent and identically distributed (iid) random variables. The dataset captured repeated measurements over the same regions, and observations were not independent because of spillover effects from neighbouring regions, so we needed an appropriate statistical design to control for both temporal and spatial pseudoreplication (lack of independence). We could deal with this in one of two ways: (1) use a generalized linear mixed model (GLMM) approach, relaxing the assumption of independence and estimating the spatial/temporal correlation between residuals, or (2) model the spatial and temporal dependence in the systematic part of the model [54]. We opted for a generalized additive model (GAM), using R's mgcv package, because of its versatility and its ability to fit complex models that converge even with low numbers of observations and can capture potentially complex non-linear relationships. One of the advantages of GAMs is that we do not need to determine the functional form of the relationship beforehand. In general, such models transform the mean response to an additive form in which the additive components are smooth functions (e.g. splines) of the covariates, with the functions themselves expressed as basis-function expansions. The spatial autocorrelation in the GAM was approximated by a Markov random field (MRF) smoother, defined by the geographic areas and their neighbourhood structure. We used R's spdep package to create a queen neighbours list (adjacency matrix) based on regions with contiguous boundaries, i.e. those sharing one or more boundary point; a minimal setup is sketched below. We used a medium-rank MRF, which represents roughly one coefficient for every two areas. The local Markov property assumes that a region is conditionally independent of all other regions that do not share a boundary with it. This feature allowed us to model the correlation between geographical neighbours and smooth over contiguous spatial areas, summarising the trend of the response variable as a function of the predictors; for further information see section 5.4.2 of [55]. To account for variation in the response variable over time not attributed to the other explanatory variables in our model, we used a saturated time effect for years, where a separate effect is estimated per time point.
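The sketch below shows one way to go from queen-contiguity neighbours to an MRF smooth in mgcv. The shapefile path, variable names and basis dimension k are hypothetical, and the region column in the model data must be a factor whose levels match the names of the neighbour list.

library(sf)
library(spdep)
library(mgcv)

# Hypothetical region polygons with an identifier column 'region'
regions <- st_read("gaul_adm1.shp")

# Queen neighbours: regions sharing one or more boundary point
nb <- poly2nb(regions, queen = TRUE)
names(nb) <- regions$region

# 'dengue' is the hypothetical region-year panel assembled above;
# dengue$region must be a factor with levels matching names(nb)
m <- gam(cases ~ s(socio_index) + s(temp_coldest_q) + s(precip_warmest_q) +
           factor(year) +                                     # saturated time effect
           s(region, bs = "mrf", k = 16, xt = list(nb = nb)), # medium-rank MRF
         family = tw(link = "log"),  # Tweedie family, discussed below
         data   = dengue)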
We first tried to fit our model using a Poisson distribution. However, the mean of our dependent variable (dengue cases by region and year) was lower than its variance, i.e. E(Y) < Var(Y), indicating that the data are over-dispersed. We also tried to fit our models using the negative binomial, quasi-Poisson and Tweedie distributions, all particularly suited to cases where the variance is much larger than the mean. After several tests, we concluded that the Tweedie distribution worked well with our data and allowed us to model the incidence rate. Analysis of model diagnostic tests did not reveal any major issues; in general, residuals appeared to be randomly distributed (see Additional file 1: S10–S19, diagnostics).
Tweedie distributions are defined as a subfamily of (reproductive) exponential dispersion models (ED) with a special mean-variance relationship.
A random variable \(Y\) is Tweedie distributed, \(TW_{p}(\mu , \sigma ^2)\), if \(Y \sim ED(\mu , \sigma ^2)\) with mean \(\mu = E(Y)\), positive dispersion parameter \(\sigma ^2\) and variance \(Var(Y) = \sigma ^2 \mu ^p\), where \(p\) is the Tweedie power parameter.
The empirical model can then be written as:
$$\begin{aligned} E(Y_{it}) = f_1(\text{X}_{it}) + f_n(\text{Year}_{t}) + f_m(\text{Region}_{i}) \end{aligned}$$
where \(f(.)\) stands for smooth functions; \(E(Y_{it})\) is the expected dengue incidence in region \(i\) at time \(t\), which we assume to be Tweedie distributed; \(X_{it}\) is a vector of socio-economic, demographic and climate variables; \(\text{Year}_{t}\) is a function of the time intercept; and \(\text{Region}_{i}\) represents the neighbourhood structure of region \(i\).
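In mgcv, the Tweedie family with an estimated power parameter is available as tw(); a sketch of the family comparison described above, reusing the hypothetical nb and dengue objects from the earlier sketch (quasi-Poisson is omitted from the comparison because it has no likelihood and hence no AIC):

library(mgcv)

f <- cases ~ s(socio_index) + factor(year) +
     s(region, bs = "mrf", xt = list(nb = nb))

m_pois <- gam(f, family = poisson,          data = dengue)
m_nb   <- gam(f, family = nb(),             data = dengue)  # negative binomial
m_tw   <- gam(f, family = tw(link = "log"), data = dengue)  # Tweedie

AIC(m_pois, m_nb, m_tw)  # the Tweedie fit was preferred in this study
gam.check(m_tw)          # residual diagnostics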
We ran two separate sets of analyses: one comparing regions in the US and Mexico, and another looking at Mexico only, to check for robustness.
Figures 2 and 3 provide a descriptive overview of the study regions, a characterisation of their environments and the reported disease incidence for those years. As can be observed, the majority of dengue cases are reported in tropical and subtropical climates.
Fig. 2 Köppen-Geiger climate classification in study regions (source: koeppen-geiger.vu-wien.ac.at)
Fig. 3 Crude incidence rates of dengue per 100,000 people
Tables 2 and 3 provide the results of the factor analysis, i.e. the weightings of our socio-economic indicators. Table 4 shows the results of the regression model comparing confirmed dengue cases in the US and Mexico for 2011–2019. Table 5 restricts the analysis to Mexico only, where we could exploit a better dataset in terms of case reporting and scale, and could explore the impact of the socio-economic variables individually, since there was less correlation among them across Mexican regions.
Table 2 Socio-economic factor analysis results US/MEX
Table 3 Socio-economic factor analysis results Mexico
Table 4 Final regression models US/MEX: EDF values are reported in place of coefficients, with DF in parentheses (not standard errors, because a chi-square test is used for the smooth terms)
Table 5 Final regression models Mexico: EDF values are reported in place of coefficients, with DF in parentheses (not standard errors, because a chi-square test is used for the smooth terms)
US/Mex analysis
Socio-economic and demographic indices Mexico/US
It was not possible to explore the individual impact of all of the variables in our dataset because of collinearity issues. Population density was found to be positively correlated with GDP and primary income, and "Percentage of Old Population Group (65+)" was negatively correlated with "Percentage of Youth Population Group (0–14)" (see Additional file 1: Figures S8, S9, diagnostics). For this reason, we performed a factor analysis to reduce the number of variables, as explained in more detail in the section on statistical methods. The Mexico/US factor analysis captured the variance of four highly correlated variables (share of labour force with at least secondary education, rooms per inhabitant, life expectancy at birth, and primary income of households) and yielded one composite indicator (see Table 2), which we included as a regressor. A priori, the socio-economic indicator is expected to have a negative association with dengue.
We built our statistical model in a stepwise fashion, using the Akaike Information Criterion (AIC) to compare the quality of the candidate specifications for our dataset. The first column of Table 4 (GDP Model) shows the association between regional GDP and dengue cases across the regions; the second column (SE Model) shows the association between regional dengue cases and the socio-economic indicator derived through factor analysis, plus other variables such as the active physicians rate, broadband access and the quality of life index. Column 3 (Dem Model) adds demographic variables, such as the inter-regional migration rate, population density growth and the percentage of older population (65+). Column 4 (Clim Model) adds the climate variables mean temperature of the coldest quarter and precipitation in the warmest quarter. The "full model" in column 5 shows the relationship between dengue incidence and all explanatory variables in our final model. Table 4 also summarises the relevant statistics (AIC, deviance, adjusted R squared and so on) used to compare the different specifications; the full model has the best fit (lowest AIC and highest adjusted R squared), followed by the one in which we control only for the climate variables (as well as the year and regional effects); the first model, controlling for GDP alone, has the highest AIC and a worse fit than the specification including the socio-economic indicators.
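The stepwise comparison can be reproduced by fitting nested specifications and comparing their AIC and adjusted R squared; a compressed sketch, again with hypothetical variable names and reusing nb and dengue from above:

m_gdp  <- gam(cases ~ s(gdp) + factor(year) +
                s(region, bs = "mrf", xt = list(nb = nb)),
              family = tw(), data = dengue)

# Add the socio-economic, demographic and climate blocks to reach the full model
m_full <- update(m_gdp, . ~ . + s(socio_index) + s(physicians) + s(broadband) +
                   s(qol_index) + s(migration) + s(dens_growth) + s(pop65) +
                   s(temp_coldest_q) + s(precip_warmest_q))

AIC(m_gdp, m_full)    # lower is better
summary(m_full)$r.sq  # adjusted R squared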
When controlling for demographic and climate variables, the impact of the socio-economic indicator remains statistically significant, as does the impact of temperature.
Please note that, as we are not estimating a standard regression model, the figures reported should not be read as coefficients but as degrees of freedom of the smooth terms. Given that we cannot interpret the coefficients to infer the sign and magnitude of each relationship, we visualise the relationships graphically. Figure 4 plots the partial effects, i.e. the relationship between a change in each of the covariates and a change in the fitted values in the full model. The first plot shows that the socio-economic index has a linear negative impact, although the relationship becomes weaker at very high scores; given the weight of each variable in the factor analysis, the results can be interpreted as follows: increases in the share of the labour force with at least secondary education, rooms per inhabitant, life expectancy at birth and primary income of households are associated with fewer dengue cases. Regions with better broadband access tend to be those with lower incidence rates of dengue; here the relationship is flat at low levels of broadband coverage (below 40 percent) and then turns negative and quadratic at higher levels of access. These results could suggest that residents are more likely to search for information on dengue prevention measures, consequently lowering transmission potential, or, when suffering from symptoms, may be more likely to seek medical advice, thereby breaking the transmission cycle; these results are consistent with findings by [56, 57]. This result could also be an indicator of more advanced and urbanized regions versus agricultural and less developed regions: dengue is reported to disproportionately affect those working in labour-intensive industries, such as agriculture or fishing [58, 59].
Fig. 4 Partial effects of explanatory variables: GAM Mex/US model
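Partial-effect plots of this kind come directly from mgcv's plot method; a minimal sketch, assuming m_full is the fitted full model from the sketches above:

# One panel per smooth term, on the scale of the linear predictor;
# shaded bands are approximate 95% confidence intervals
plot(m_full, pages = 1, shade = TRUE, seWithMean = TRUE)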
The variable representing active physician rates has a positive impact on the incidence of dengue, in that regions with more active physicians tend to have higher incidence; however, this is likely due to more accurate reporting. Here too the relationship is concave: positive up to a 3 percent rate and flat afterwards.
The impact of the demographic variables on the incidence of dengue also follows the expected sign, with the inter-regional migration rate and population density growth being associated with a linear increase in the incidence of dengue; the presence of an older population is associated with higher incidence of dengue up to a certain level (it peaks at around 14 percent) and then with a reduction, as can be seen from Fig. 4. One possible explanation is that a higher proportion of older people means a more vulnerable population, whereas very high rates are also associated with wealthier regions, which offsets the main impact of age. Figure 4 also shows that the impact of the mean temperature (°C) of the coldest quarter is almost linear. We can see that most cases occur in regions with particularly mild cold seasons. This is consistent with the literature: we would expect to see more cases of dengue in regions with tropical climates, where there is a distinct absence of a cold season during which low temperatures would kill mosquitoes off or force them to overwinter, effectively inhibiting disease transmission; mild conditions instead allow the virus and mosquitoes to persist throughout the year.
The relationship between rainfall and dengue incidence in the full model is slightly negative and significant; even though this finding may appear counter-intuitive, it is probably due to the fact that mosquito larvae can be washed away during intense rainfall [60]. Furthermore, both Aedes mosquitoes can survive in drier climates than expected by exploiting artificial water sources and man-made habitats, as already mentioned in the "Introduction" section.
Mex analysis
For our second analysis, focusing on the differential diffusion of dengue within Mexican regions, we were able to analyse the variables individually, since there is significantly less correlation between the socio-economic variables. However, we could not select "Population density" because of its correlation with "Primary income of households" and "GDP". "Percentage of old population Group (65+)" was negatively correlated with "Population density growth", so it was not included in the final model. Furthermore, "Percentage of population share (0–14)" was highly correlated with "Access to broadband" and "Workforce with secondary education" (and negatively correlated with population 65+), so we did not include it in the study. We again built our second statistical model in a stepwise fashion, using the lowest AIC to compare the quality of the candidate models for our dataset. Figure 5 and Table 5 present the results of this second analysis, focusing only on Mexican regions.
Fig. 5 Partial effects of explanatory variables: GAM Mexico model
Our findings for the second analysis are similar to the first: the most significant variables are "Share of households with internet access", "Active Physicians Rate (1000 pop)" and "Mean temperature (C) of coldest quarter". Our socio-economic indicator was a good predictor of dengue incidence, although when "GDP" was paired with individual variables from the factor analysis (except primary income) it helped to create a very useful model. The best-fitting model was our final specification using the socio-economic variables individually; however, primary income of households is not a reliable predictor of dengue: the concave relationship suggests that gains in economic activity may increase the spread of the virus (for instance through the movement of goods and people), but could also be correlated with higher reporting. One of the strongest predictors of dengue in our final specification is "Share of labour force with secondary education". As previously noted, this is consistent with findings by [58, 59] that dengue tends to affect more those working in labour-intensive industries, such as agriculture or fishing.
Discussion and conclusions
The study investigated the impact of socio-economic, demographic and climate variables on the distribution of dengue. Its original contribution is that it selected factors shown to predict dengue at a local level and tested whether the associations could be generalized to the regional or state level. In addition, it showed the potential for developing more sophisticated socio-economic indicators using regionally and internationally available data. The study identified which regions are most at risk by estimating where dengue vectors are likely to occur given their suitability to climate conditions, in terms of receptivity and vulnerability. By estimating the chance of a vector occurring in a region, we could then assess the impact of socio-economic, demographic and climate factors on the incidence of dengue. The results confirmed a strong association between our novel indices of socio-economic factors and dengue cases per region. Such results are consistent with the findings reported by [12, 14, 39,40,41,42, 45, 61]. Two main lessons can be drawn from this study. First, while higher GDP is generally associated with a drop in the incidence of dengue, a more granular analysis revealed that the crucial factors are a rise in education (with fewer jobs in the primary sector) and better access to information or technological infrastructure. For this reason, more sophisticated measures, beyond GDP, should be taken into account when building models that try to predict disease distribution. More granular socio-economic indicators can explain with greater accuracy the differences in the spread of disease in places with similar physical geography and ecological characteristics. In addition, public health authorities should be aware of the presence of non-linearities in the relationships between dengue and income. Second, factors that were shown to have an impact on dengue at the local level are also good predictors at the regional level. Given that data for these indicators are available at a sub-national scale for OECD countries and selected OECD non-member economies, these indices may help us better understand the factors responsible for the global distribution of dengue and, given a warming climate, may help us better predict vulnerable populations. Although the variables used in this study do not represent disease transmission mechanisms directly, understanding the relative impact of socio-economic, demographic and climate factors on disease outcomes can help risk assessors predict where diseases are likely to occur in the future, by identifying locations with vulnerabilities in public health systems and/or impoverished areas that tend to be susceptible to disease. Our findings are not only useful for public health, but also contribute to a wider scholarly debate on whether, and to what extent, economic growth (measured via GDP) can contribute to better health and well-being outcomes. Finally, it is important to note that, as with any analysis dealing with regional data, results should be interpreted with caution because of issues of scale and the uncertainty introduced by the aggregation procedure. Further studies seeking to test the robustness of the indicators examined here should try to source data at a more refined scale and test how these indicators generalise across different scales.
The R project folder, main spatial dataset and R code for the project are available from https://doi.org/10.5281/zenodo.887909.
GDP: Gross domestic product
Mex: Mexico
SDM: Species distribution model
ADM: Administrative division level (GAUL statistical subdivision)
Murray NEA, Quam MB, Wilder-Smith A. Epidemiology of dengue: past, present and future prospects. Clin Epidemiol. 2013;5:299. https://www.dovepress.com/getfile.php?fileID=17199.
Anosike JC, Nwoke BE, Okere AN, Oku EE, Asor JE, Emmy-Egbe IO, et al. Epidemiology of tree-hole breeding mosquitoes in the tropical rainforest of Imo state, south-east Nigeria. Ann Agric Environ Med. 2007;14:31–8.
Gomez-Dantes H, Ramsey Willoquet J. Dengue in the Americas: challenges for prevention and control. Cadernos De Saude Publica. 2009;25:S19–31. https://doi.org/10.1590/s0102-311x2009001300003.
WHO. Dengue and severe dengue. 2020. https://www.who.int/news-room/fact-sheets/detail/dengue-and-severe-dengue.
WHO. Immunization, vaccines and biologicals. 2017; 2018. http://www.who.int/immunization/research/development/dengue_vaccines/en/.
Butterworth MK, Morin CW, Comrie AC. An analysis of the potential impact of climate change on dengue transmission in the southeastern United States. Environ Health Perspect. 2017;125:579–85.
Ryan SJ, Carlson CJ, Mordecai EA, Johnson LR. Global expansion and redistribution of aedes-borne virus transmission risk with climate change. PLOS Neglect Trop Dis. 2019;13:e0007213. https://doi.org/10.1371/journal.pntd.0007213.
Messina JP, Brady OJ, Golding N, Kraemer MUG, Wint GRW, Ray SE, et al. The current and future global distribution and population at risk of dengue. Nat Microbiol. 2019;4:1508–15. https://doi.org/10.1038/s41564-019-0476-8.
Xu Z, Bambrick H, Frentiu FD, Devine G, Yakob L, Williams G, et al. Projecting the future of dengue under climate change scenarios: progress, uncertainties and research needs. PLOS Neglect Trop Dis. 2020;14:e0008118. https://doi.org/10.1371/journal.pntd.0008118.
Ebi KL, Nealon J. Dengue in a changing climate. Environ Res. 2016;151:115–23. https://doi.org/10.1016/j.envres.2016.07.026.
Bouzid M, Colon-Gonzalez FJ, Lung T, Lake IR, Hunter PR. Climate change and the emergence of vector-borne diseases in Europe: Case study of dengue fever. BMC Public Health. 2014. https://doi.org/10.1186/1471-2458-14-781.
Brunkard JM, Lopez JLR, Ramirez J, Cifuentes E, Rothenberg SJ, Hunsperger EA, et al. Dengue fever seroprevalence and risk factors, Texas-Mexico border, 2004. Emerg Infect Dis. 2007;13:1477–83. https://doi.org/10.3201/eid1310.061586.
Abelz A, Smith B, Fournier M, Betz T, Gaul L, Robles-Lopez JL, et al. Dengue hemorrhagic fever - US-Mexico border, 2005. MMWR Morb Mortal Wkly Rep. 2007;56:785–9.
Ramos EF. Hemoterapia e febre dengue [Hemotherapy and dengue fever]. Rev Bras Hematol Hemoter. 2008;30:64–6. https://doi.org/10.1590/s1516-84842008000100016.
Magori K, Drake JM. The population dynamics of vector-borne diseases. Book. 2013.
Vincenti-Gonzalez MF, Grillet ME, Velasco-Salas ZI, Lizarazo EF, Amarista MA, Sierra GM, et al. Spatial analysis of dengue seroprevalence and modeling of transmission risk factors in a dengue hyperendemic city of Venezuela. PLOS Neglect Trop Dis. 2017. https://doi.org/10.1371/journal.pntd.0005317.
Toan DTT, Hoat LN, Hu W, Wright P, Martens P. Risk factors associated with an outbreak of dengue fever/dengue haemorrhagic fever in Hanoi, Vietnam. Epidemiol Infect. 2015;143:1594–8. https://doi.org/10.1017/s0950268814002647.
Tipayamongkholgul M, Lisakulruk S. Socio-geographical factors in vulnerability to dengue in Thai villages: a spatial regression analysis. Geospat Health. 2011. https://doi.org/10.4081/gh.2011.171.
Teurlai M, Menkès CE, Cavarero V, Degallier N, Descloux E, Grangeon J-P, et al. Socio-economic and climate factors associated with dengue fever spatial heterogeneity: a worked example in new Caledonia. PLoS Neglect Trop Dis. 2015;9:e0004211. https://doi.org/10.1371/journal.pntd.0004211.
Akter R, Naish S, Hu W, Tong S. Socio-demographic, ecological factors and dengue infection trends in Australia. PLoS ONE. 2017;12:e0185551.
Robert C, Kubiszewski I, Giovannini E, Lovins H, McGlade J, Pickett K, et al. Time to leave GDP behind. Nature. 2014;505.
Stiglitz JE, Sen A, Fitoussi J-P. Mismeasuring our lives: Why GDP doesn't add up. New York: The New Press Book; 2010.
Bleys B. Beyond GDP: Classifying alternative measures for progress. Soc Indicat Res. 2012;109:355–76.
Van den Bergh JC. The GDP paradox. J Econ Psychol. 2009;30:117–35.
Costanza R, Kubiszewski I, Giovannini E, Lovins H, McGlade J, Pickett KE, et al. Development: time to leave GDP behind. Nat News. 2014;505:283.
Navarro V. Assessment of the world health report 2000. Lancet. 2000;356:1598–601.
Berkman LF, Kawachi I, Glymour MM. Social epidemiology. Oxford: Oxford University Press Book; 2014.
WHO. A framework for malaria elimination. Geneva: WHO; 2017.
Brady OJ, Johansson MA, Guerra CA, Bhatt S, Golding N, Pigott DM, et al. Modelling adult Aedes aegypti and Aedes albopictus survival at different temperatures in laboratory and field settings. Parasit Vectors. 2013;6:351. https://doi.org/10.1186/1756-3305-6-351.
Gubler DJ. Dengue, urbanization and globalization: the unholy trinity of the 21(st) century. Trop Med Health. 2011;39(4 Suppl):3–11. https://doi.org/10.2149/tmh.2011-S05.
Lana RM, da Costa Gomes MF, de Melo Lima TF, Honorio NA, Codeco CT. The introduction of dengue follows transportation infrastructure changes in the state of acre, brazil: A network-based analysis. PLOS Neglect Trop Dis. 2017. https://doi.org/10.1371/journal.pntd.0006070.
Lana RM, da Gomes MFC, de Lima TFM, Honório NA, Codeço CT. The introduction of dengue follows transportation infrastructure changes in the state of acre, brazil: A network-based analysis. PLOS Neglect Trop Dis. 2017;11:e0006070. https://doi.org/10.1371/journal.pntd.0006070.
Begon M. Ecological epidemiology. In: The Princeton guide to ecology. Princeton University Press; 2009. http://www.jstor.org/stable/j.ctt7t14n.
Moore CG, Cline BL, Ruiz-Tibén E, Lee D, Romney-Joseph H, Rivera-Correa E. Aedes aegypti in Puerto Rico: environmental determinants of larval abundance and relation to dengue virus transmission. Am J Trop Med Hyg. 1978;27:1225–31.
Mohammed A, Chadee DD. Effects of different temperature regimens on the development of Aedes aegypti (L.) (Diptera: Culicidae) mosquitoes. Acta Trop. 2011;119:38–43.
Tun-Lin W, Lenhart A, Nam VS, Rebollar-Tellez E, Morrison AC, Barbazan P, et al. Reducing costs and operational constraints of dengue vector control by targeting productive breeding places: A multi-country non-inferiority cluster randomized trial. Trop Med Int Health. 2009;14:1143–53. https://doi.org/10.1111/j.1365-3156.2009.02341.x.
Watts DM, Burke DS, Harrison BA, Whitmire RE, Nisalak A. Effect of temperature on the vector efficiency of Aedes aegypti for dengue 2 virus. Am J Trop Med Hygiene. 1987;36:143–52.
Thammapalo S, Chongsuwiwatwong V, McNeil D, Geater A. The climatic factors influencing the occurrence of dengue hemorrhagic fever in Thailand. Southeast Asian J Trop Med Public Health. 2005;36.
Stewart Ibarra AM, Ryan SJ, Beltrán E, Mejía R, Silva M, Muñoz Á. Dengue vector dynamics (Aedes aegypti) influenced by climate and social factors in Ecuador: implications for targeted control. PLOS ONE. 2013;8:e78263. https://doi.org/10.1371/journal.pone.0078263.
Thammapalo S, Chongsuwiwatwong V, Geater A, Lim A, Choomalee K. Socio-demographic and environmental factors associated with Aedes breeding places in Phuket, Thailand. Southeast Asian J Trop Med Public Health. 2005;36:426–33.
Qi X, Wang Y, Li Y, Meng Y, Chen Q, Ma J, et al. The effects of socioeconomic and environmental factors on the incidence of dengue fever in the pearl river delta, China, 2013. PLOS Neglect Trop Dis. 2015;9:e0004159. https://doi.org/10.1371/journal.pntd.0004159.
Clark GG. Dengue and dengue hemorrhagic fever in northern Mexico and south Texas: Do they really respect the border? Am J Trop Med Hyg. 2008;78:361–2.
Khun S, Manderson L. Health seeking and access to care for children with suspected dengue in Cambodia: an ethnographic study. BMC Public Health. 2007;7:262.
Unlu I, Farajollahi A, Strickman D, Fonseca DM. Crouching tiger, hidden trouble: urban sources of Aedes albopictus (Diptera: Culicidae) refractory to source-reduction. PLoS ONE. 2013;8:e77999. https://doi.org/10.1371/journal.pone.0077999.
Elsinga J, Lizarazo EF, Vincenti MF, Schmidt M, Velasco-Salas ZI, Arias L, et al. Health seeking behaviour and treatment intentions of dengue and fever: a household survey of children and adults in venezuela. PLOS Neglect Trop Dis. 2015;9:e0004237. https://doi.org/10.1371/journal.pntd.0004237.
ACAPS. ACAPS briefing note: Mexico - dengue fever (16 September 2019). 2019. https://reliefweb.int/report/mexico/acaps-briefing-note-mexico-dengue-fever-16-september-2019.
Kraemer MUG, Sinka ME, Duda KA, Mylne A, Shearer FM, Brady OJ, et al. The global compendium of Aedes aegypti and Ae. Albopictus occurrence. Sci Data. 2015;2:150035. https://doi.org/10.1038/sdata.2015.35.
C. Vega G, Pertierra LR, Olalla-Tárraga MÁ. MERRAclim, a high-resolution global dataset of remotely sensed bioclimatic variables for ecological modelling. Sci Data. 2017;4:170078. https://doi.org/10.1038/sdata.2017.78. https://www.nature.com/articles/sdata201778#supplementary-information.
FAO-UN. Global administrative unit layers (gaul). 2014. http://www.fao.org/geonetwork/srv/en/metadata.show?id=12691.
CPC/NCEP. National center for atmospheric research. 1987. http://rda.ucar.edu/datasets/ds512.0/.
SEDAC. Gridded population of the world, version 4 (gpwv4): Population count, revision 11. 2018. https://doi.org/10.7927/H4JW8BX5.
OECD. Regional statistics and indicators database. 2018. http://stats.oecd.org/Index.aspx?DataSetCode=REGION_DEMOGR.
GFC. Spatial data analysis and modeling with R. 2018. http://rspatial.org/index.html.
Aswi A, Cramb SM, Moraga P, Mengersen K. Bayesian spatial and spatio-temporal approaches to modelling dengue fever: a systematic review. Epidemiol Infect. 2018;147:1–14. https://doi.org/10.1017/S0950268818002807.
Wood SN. Generalized additive models: an introduction with R. Boca Raton: CRC Press; 2017.
Gluskin RT, Johansson MA, Santillana M, Brownstein JS. Evaluation of internet-based dengue query data: Google dengue trends. PLOS Neglect Trop Dis. 2014. https://doi.org/10.1371/journal.pntd.0002713.
Romero-Alvarez D, Parikh N, Osthus D, Martinez K, Generous N, del Valle S, et al. Google health trends performance reflecting dengue incidence for the Brazilian states. BMC Infect Dis. 2020;20:252. https://doi.org/10.1186/s12879-020-04957-0.
Nakano K. Future risk of dengue fever to workforce and industry through global supply chain. Mitig Adapt Strateg Glob Change. 2018;23:433–49. https://doi.org/10.1007/s11027-017-9741-4.
Jakobsen F, Nguyen-Tien T, Pham-Thanh L, Bui VN, Nguyen-Viet H, Tran-Hai S, et al. Urban livestock-keeping and dengue in urban and peri-urban Hanoi, Vietnam. PLOS Neglect Trop Dis. 2019;13:e0007774. https://doi.org/10.1371/journal.pntd.0007774.
Cabrera M, Taylor G. Modelling spatio-temporal data of dengue fever using generalized additive mixed models. Spat Spatio Temp Epidemiol. 2019;28:1–13. https://doi.org/10.1016/j.sste.2018.11.006.
Brunkard JM, Cifuentes E, Rothenberg SJ. Assessing the roles of temperature, precipitation, and enso in dengue re-emergence on the texas-mexico border region. Salud Publica De Mexico. 2008;50:227–34.
This research was funded by ICTA's Maria de Maeztu Unit of Excellence, awarded by the Spanish Ministry of Economy and Competitiveness. The award is the highest institutional recognition of scientific research in Spain. Thanks also to Patrizia Ziveri and Pedro Manuel Gonzalez Hernandez for supporting the project.
Institute of Environmental Science and Technology (ICTA), Autonomous University of Barcelona (UAB), Bellaterra, Spain
Matthew J. Watts, Panagiota Kotsila, P. Graham Mortyn & Victor Sarto i Monteys
Joint Research Centre (JRC-Seville), European Commission, Seville, Spain
Cesira Urzi Brancati
Servei de Sanitat Vegetal, DARP, Generalitat de Catalunya, Av. Meridiana, 38, 08018, Barcelona, Spain
Victor Sarto i Monteys
Barcelona Laboratory for Urban Environmental Justice and Sustainability (BCNEJ), Institute of Environmental Science and Technology (ICTA), Autonomous University of Barcelona (UAB), Bellaterra, Spain
Panagiota Kotsila
Department of Geography, Autonomous University of Barcelona (UAB), Bellaterra, Spain
P. Graham Mortyn
Matthew J. Watts
MW led the work and was responsible for the conceptualization of the project, data curation, data processing, formulation of the methodology, statistical analysis, modelling, writing the original draft and interpreting the results. PK made substantive contributions to the writing, to formulating some of the concepts and to the interpretation of the results. All contributors revised the manuscript and copy-edited the final submission version. All the authors were also involved in revising the manuscript critically for important intellectual content. CUB supervised the statistical analysis and made substantive contributions to formulating some of the statistical models. PK, PGM and VSM supervised the project and are responsible for formulating ideas for the umbrella project Impacts of Climate Change (CC) on Human Health (HH) at ICTA-UAB: Integrating socio-economic and policy studies with natural science studies to enhance consilience of climate policy science. All authors read and approved the final manuscript.
Correspondence to Matthew J. Watts.
All authors and co-authors have approved the publication.
No competing interests.
Species distribution methods and results to estimate receptivity + model diagnostics (all models).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Watts, M.J., Kotsila, P., Mortyn, P.G. et al. Influence of socio-economic, demographic and climate factors on the regional distribution of dengue in the United States and Mexico. Int J Health Geogr 19, 44 (2020). https://doi.org/10.1186/s12942-020-00241-1
Keywords: Climate change, Global warming, Mosquito-borne, Vector-borne diseases
Vector-borne-diseases | CommonCrawl |
second derivative of a circle
We can take the second, third, and more derivatives of a function if possible. It is most certainly not coincidental. To determine concavity, we need to find the second derivative f″(x). For an equation written in its parametric form, the first derivative is. Which tells us the slope of the function at any time t . The second derivative of a function \(y=f(x)\) is defined to be the derivative of the first derivative; that is, Solution: To illustrate the problem, let's draw the graph of a circle as follows We also use third-party cookies that help us analyze and understand how you use this website. Informations sur votre appareil et sur votre connexion Internet, y compris votre adresse IP, Navigation et recherche lors de l'utilisation des sites Web et applications Verizon Media. I'd like to add another article, one that takes a less formal route (I figured here was the best place.) Hopefully someone can … Parametric Derivatives. that the first derivative and second derivative of f at the given point are just constants. Vous pouvez modifier vos choix à tout moment dans vos paramètres de vie privée. So, all the terms of mathematics have a graphical representation. Second Derivative (Read about derivatives first if you don't already know what they are!). The volume of a circle would be V=pi*r^3/3 since A=pi*r^2 and V = anti-derivative[A(r)*dr]. Determine the first and second derivatives of parametric equations; ... On the left and right edges of the circle, the derivative is undefined, and on the top and bottom, the derivative equals zero. Learn how to find the derivative of an implicit function. Hopefully someone can point out a more efficient way to do this: x2 + y2 = r2. This website uses cookies to improve your experience while you navigate through the website. Algebra. When differentiated with respect to r, the derivative of πr2 is 2πr, which is the circumference of a circle. This vector is normal to the curve, its norm is the curvature κ ( s ) , and it is oriented toward the center of curvature. Now that we know the derivatives of sin(x) and cos(x), we can use them, together with the chain rule and product rule, to calculate the derivative of any trigonometric function. the derivative \(f'\left( x \right)\) is also a function in this interval. Take the first derivative using the power rule and the basic differentiation rules: \[y^\prime = 12{x^3} – 6{x^2} + 8x – 5.\]. First and Second Derivative of a Function. Thus, x 2 + y 2 = 25 , y 2 = 25 - x 2, and , where the positive square root represents the top semi-circle and the negative square root represents the bottom semi-circle. Explore animations of these functions with their derivatives here: Differentiation Interactive Applet - trigonometric functions. Second, this formula is entirely consistent with our understanding of circles. This shows a straight line. I spent a lot of time on the algebra and finally found out what's wrong. Grab a solid circle to move a "test point" along the f(x) graph or along the f '(x) graph. Listen, so ya know implicit derivatives? By adding all areas of the rectangles and multiplying this by four, we can approximate the area of the circle. This category only includes cookies that ensures basic functionalities and security features of the website. Of course, this always turns out to be zero, because the difference in the radius is zero since circles are only two dimensional; that is, the third dimension of a circle, when measured, is z = 0. 
The following problems range in difficulty from average to challenging. The second derivative would be the number of radians in a circle. y = ±sqrt [ r2 –x2 ] 2. The second derivative is shown with two tick marks like this: f''(x) Example: f(x) = x 3. Let \(z=f(x,y)\) be a function of two variables for which the first- and second-order partial derivatives are continuous on some disk containing the point \((x_0,y_0).\) To apply the second derivative test to find local extrema, use the following steps: The curvature of a circle is constant and is equal to the reciprocal of the radius. You cannot differentiate a geometric figure! This applet displays a function f(x), its derivative f '(x) and its second derivative f ''(x). Second Derivative. This second method illustrates the process of implicit differentiation. Select the second example from the drop down menu, showing the spiral r = θ.Move the th slider, which changes θ, and notice what happens to r.As θ increases, so does r, so the point moves farther from the origin as θ sweeps around. The evolute will have a cusp at the center of the circle. The first derivative of x is 1, and the second derivative is zero. $\begingroup$ Thank you, I've visited that article three times in the last couple years, it seems to be the definitive word on the matter. Its derivative is f'(x) = 3x 2; The derivative of 3x 2 is 6x, so the second derivative of f(x) is: f''(x) = 6x . • Note that the second derivative test is faster and easier way to use compared to first derivative test. Differentiate once more to find the second derivative: \[y^{\prime\prime} = 36{x^2} – 12x + 8.\], \[y^\prime = 10{x^4} + 12{x^3} – 12{x^2} + 2x.\], The second derivative is expressed in the form, \[y^{\prime\prime} = 40{x^3} + 36{x^2} – 24x + 2.\], The first derivative of the cotangent function is given by, \[{y^\prime = \left( {\cot x} \right)^\prime }={ – \frac{1}{{{{\sin }^2}x}}.}\]. The second derivative can also reveal the point of inflection. More Examples of Derivatives of Trigonometric Functions. Just as with derivatives of single-variable functions, we can call these second-order derivatives, third-order derivatives, and so on. I spent a lot of time on the algebra and finally found out what's wrong. I have a function f of x here, and I want to think about which of these curves could represent f prime of x, could represent the derivative of f of x. The Covariant Derivative in Electromagnetism We're talking blithely about derivatives, but it's not obvious how to define a derivative in the context of general relativity in such a way that taking a derivative results in well-behaved tensor. Select the second example from the drop down menu, showing the spiral r = θ.Move the th slider, which changes θ, and notice what happens to r.As θ increases, so does r, so the point moves farther from the origin as θ sweeps around. The standard rules of Calculus apply for vector derivatives. Let's look at the parent circle equation [math]x^2 + y^2 = 1[/math]. HTML5 app: First and second derivative of a function. This website uses cookies to improve your experience. The parametric equations are x(θ) = θcosθ and y(θ) = θsinθ, so the derivative is a more complicated result due to the product rule. Each of these partial derivatives is a function of two variables, so we can calculate partial derivatives of these functions. 
When a function's slope is zero at x, and the second derivative at x is: less than 0, it is a local maximum; greater than 0, it is a local minimum; equal to 0, then the test fails (there may be other ways of finding out though) Calculate the first derivative using the product rule: \[{y' = \left( {x\ln x} \right)' }={ x' \cdot \ln x + x \cdot {\left( {\ln x} \right)^\prime } }={ 1 \cdot \ln x + x \cdot \frac{1}{x} = \ln x + 1. So: Find the derivative of a function Parametric curves are defined using two separate functions, x(t) and y(t), each representing its respective coordinate and depending on a new parameter, t. First and Second Derivatives of a Circle. This applet displays a function f(x), its derivative f '(x) and its second derivative f ''(x). We used these Derivative Rules: The slope of a constant value (like 3) is 0 2pi radians is the same as 360 degrees. 4.5.4 Explain the concavity test for a function over an open interval. Differentiate it again using the power and chain rules: \[{y^{\prime\prime} = \left( { – \frac{1}{{{{\sin }^2}x}}} \right)^\prime }={ – \left( {{{\left( {\sin x} \right)}^{ – 2}}} \right)^\prime }={ \left( { – 1} \right) \cdot \left( { – 2} \right) \cdot {\left( {\sin x} \right)^{ – 3}} \cdot \left( {\sin x} \right)^\prime }={ \frac{2}{{{{\sin }^3}x}} \cdot \cos x }={ \frac{{2\cos x}}{{{{\sin }^3}x}}.}\]. * If we map these values of d2w/dz2 in the complex plane a = £+¿77, the mapping points will therefore fill out a region of this plane. These cookies will be stored in your browser only with your consent. We can take the second, third, and more derivatives of a function if possible. Grab open blue circles to modify the function f(x). In the previous example we took this: h = 3 + 14t − 5t 2. and came up with this derivative: h = 0 + 14 − 5(2t) = 14 − 10t. 4.5.3 Use concavity and inflection points to explain how the sign of the second derivative affects the shape of a function's graph. For the second strip, we get and solved for , we get . Nonetheless, the experience was extremely frustrating. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. Example. Order of Operations Factors & Primes Fractions Long Arithmetic Decimals Exponents & Radicals Ratios & Proportions Percent Modulo Mean, Median & Mode Scientific Notation Arithmetics. The third derivative of [latex]x[/latex] is defined to be the jerk, and the fourth derivative is defined to be the jounce. Assuming we want to find the derivative with respect to x, we can treat y as a constant (derivative of a constant is zero). We have seen curves defined using functions, such as y = f (x).We can define more complex curves that represent relationships between x and y that are not definable by a function using parametric equations. Well, to think about that, we just have to think about, well, what is a slope of the tangent line doing at each point of f of x and see if this corresponds to that slope, if the value of these functions correspond to that slope. Psst! In physics, when we have a position function \(\mathbf{r}\left( t \right)\), the first derivative is the velocity \(\mathbf{v}\left( t \right)\) and the second derivative is the acceleration \(\mathbf{a}\left( t \right)\) of the object: \[{\mathbf{a}\left( t \right) = \frac{{d\mathbf{v}}}{{dt}} }={ \mathbf{v}^\prime\left( t \right) = \frac{{{d^2}\mathbf{r}}}{{d{t^2}}} }={ \mathbf{r}^{\prime\prime}\left( t \right).}\]. 
In general, they are referred to as higher-order partial derivatives. I got somethin' ta tell ya. Only part of the line is showing, due to setting tmin = 0 and tmax = 1. d y d x = d y d t d x d t \frac{dy}{dx} = \frac{\hspace{2mm} \frac{dy}{dt}\hspace{2mm} }{\frac{dx}{dt}} d x d y = d t d x d t d y The x x x and y y y time derivatives oscillate while the derivative (slope) of the function itself oscillates as well. You can differentiate (both sides of) an equation but you have to specify with respect to what variable. }\) The tangent line to the circle at \((a,b)\) is perpendicular to the radius, and thus has slope \(m_t = -\frac{a}{b}\text{,}\) as shown on … If we consider the radius from the origin to the point \((a, b)\), the slope of this line segment is \(m_r = b a\). Differentiate again using the power and chain rules: \[{y^{\prime\prime} = \left( {\frac{1}{{\sqrt {{{\left( {1 – {x^2}} \right)}^3}} }}} \right)^\prime }={ \left( {{{\left( {1 – {x^2}} \right)}^{ – \frac{3}{2}}}} \right)^\prime }={ – \frac{3}{2}{\left( {1 – {x^2}} \right)^{ – \frac{5}{2}}} \cdot \left( { – 2x} \right) }={ \frac{{3x}}{{{{\left( {1 – {x^2}} \right)}^{\frac{5}{2}}}}} }={ \frac{{3x}}{{\sqrt {{{\left( {1 – {x^2}} \right)}^5}} }}.}\]. Simplify your answer.f(x) = (5x^4+ 3x^2)∗ln(x^2) check_circle Expert Answer. Pre Algebra. In particular, it can be used to determine the concavity and inflection points of a function as well as minimum and maximum points. describe in parametric form the equation of a circle centered at the origin with the radius \(R.\) In this case, the parameter \(t\) varies from \(0\) to \(2 \pi.\) Find an expression for the derivative of a parametrically defined function. Solution for Find the second derivative of the implicitly defined function x2+y2=R2 (canonical equation of a circle). It depends on what first derivative you're taking. Radius of curvature. Differentiating once more with respect to \(x,\) we find the second derivative: \[y^{\prime\prime} = {y^{\prime\prime}_{xx}} = \frac{{{\left( {{y'_x}} \right)}'_t}}{{{x'_t}}}.\]. Google Classroom Facebook Twitter. Without having taken a course on differential equations, it might not be obvious what the function \(x(t)\) could be. Click or tap a problem to see the solution. If we discuss derivatives, it actually means the rate of change of some variable with respect to another variable. 4.5.5 Explain the relationship between a function and its first and second derivatives. See Answer. Just as the first derivative is related to linear approximations, the second derivative is related to the best quadratic approximation for a function f. This is the quadratic function whose first and second derivatives are the same as those of f at a given point. The sign of the second derivative of curvature determines whether the curve has … The slope of the radius from the origin to the point \((a,b)\) is \(m_r = \frac{b}{a}\text{. Découvrez comment nous utilisons vos informations dans notre Politique relative à la vie privée et notre Politique relative aux cookies. If the curve is twice differentiable, that is, if the second derivatives of x and y exist, then the derivative of T(s) exists. As we all know, figures and patterns are at the base of mathematics. Learn how the second derivative of a function is used in order to find the function's inflection points. • Process of identifying static point of function f(a) by second derivative test. 
It is important to note that the derivative expression for explicit differentiation involves x only, while the derivative expression for implicit differentiation may involve BOTH x AND y. It also examines when the volume-area-circumference relationships apply, and generalizes them to 2D polygons and 3D polyhedra. You also have the option to opt-out of these cookies. Figure \(\PageIndex{4}\): Graph of the curve described by parametric equations in part c. Second, this formula is entirely consistent with our understanding of circles. Want to see this answer and more? A derivative can also be shown as dydx, and the second derivative shown as d 2 ydx 2. Just to illustrate this fact, I'll show you two examples. To find the derivative of a circle you must use implicit differentiation. Solution for Find the second derivative of the function. There's a trick, ya see. We will set the derivative and second derivative of the equation of the circle equal to these constants, respectively, and then solve for R. The first derivative of the equation of the circle is d … Pour autoriser Verizon Media et nos partenaires à traiter vos données personnelles, sélectionnez 'J'accepte' ou 'Gérer les paramètres' pour obtenir plus d'informations et pour gérer vos choix. We'll assume you're ok with this, but you can opt-out if you wish. 1928] SECOND DERIVATIVE OF A POLYGENIC FUNCTION 805 to the oo2 real elements of the second order existing at every point, d2w/dzz assumes oo2 values for every value of z. Second Derivative Test. *Response times vary by subject and question complexity. The second derivative would be the number of radians in a circle. Second-Degree Derivative of a Circle? The second derivatives satisfy the following linear relationships: \[{{\left( {u + v} \right)^{\prime\prime}} = {u^{\prime\prime}} + {v^{\prime\prime}},\;\;\;}\kern-0.3pt{{\left( {Cu} \right)^{\prime\prime}} = C{u^{\prime\prime}},\;\;}\kern-0.3pt{C = \text{const}. Nos partenaires et nous-mêmes stockerons et/ou utiliserons des informations concernant votre appareil, par l'intermédiaire de cookies et de technologies similaires, afin d'afficher des annonces et des contenus personnalisés, de mesurer les audiences et les contenus, d'obtenir des informations sur les audiences et à des fins de développement de produit. Equation 13.1.2 tells us that the second derivative of \(x(t)\) with respect to time must equal the negative of the \(x(t)\) function multiplied by a constant, \(k/m\). Want to see the step-by-step answer? Of course, this always turns out to be zero, because the difference in the radius is zero since circles are only two dimensional; that is, the third dimension of a circle, when measured, is z = 0. The second derivative can also reveal the point of inflection. f(x) = (x2 + 3x)/(x − 4) The point where a graph changes between concave up and concave down is called an inflection point, See Figure 2.. The standard rules of Calculus apply for vector derivatives. Well, Ima tell ya a little secret 'bout em. Archimedean Spiral. The first derivative is f′ (x) = 3x2 − 12x + 9, so the second derivative is f″(x) = 6x − 12. Archimedean Spiral. If the second derivative is positive/negative on one side of a point and the opposite sign on … As you would expect, dy/dxis constant, based on using the formulas above: The second derivatives of the metric are the ones that we expect to relate to the Ricci tensor \(R_{ab}\). Grab open blue circles to modify the function f(x). 
If we discuss derivatives, it actually means the rate of change of some variable with respect to another variable; you can take \(d/dx\), \(dx/dy\) or \(dy/dx\) — it depends on what first derivative you're taking. A derivative basically finds the slope of a function. Assume \(y\) is a function of \(x\); one way is to first write \(y\) explicitly as a function of \(x\):

\[y' = \left(\frac{x}{\sqrt{1 - x^2}}\right)' = \frac{x'\sqrt{1 - x^2} - x\left(\sqrt{1 - x^2}\right)'}{\left(\sqrt{1 - x^2}\right)^2} = \frac{\sqrt{1 - x^2} + \frac{x^2}{\sqrt{1 - x^2}}}{1 - x^2} = \frac{1 - x^2 + x^2}{\sqrt{\left(1 - x^2\right)^3}} = \frac{1}{\sqrt{\left(1 - x^2\right)^3}}.\]

A simpler example of the same kind: \(y'' = \left(\ln x + 1\right)' = \frac{1}{x} + 0 = \frac{1}{x}\).

Let the function \(y = f(x)\) have a finite derivative \(f'(x)\) in a certain interval \((a, b)\). Other applications of the second derivative are considered in the chapter Applications of the Derivative — for instance, determining concavity on intervals and finding points of inflection algebraically. By the second derivative test, if the second derivative of a function \(f\) at a stationary point \(x^*\) is smaller than zero, then the function is concave there and the point is said to be a local maximum; if the second derivative is positive/negative on one side of a point and the opposite sign on … [truncated in the source]. If the derivative of curvature \(\kappa'(t)\) is zero, then the osculating circle will have 3rd-order contact and the curve is said to have a vertex. A function \(f\) need not have a derivative — for example, if it is not continuous; similarly, even if \(f\) does have a derivative, it may not have a second derivative.

The standard rules of calculus apply for vector derivatives. Finding a vector derivative may sound a bit strange, but it's a convenient way of calculating quantities relevant to kinematics and dynamics problems (such as rigid body motion). For the Archimedean spiral, the parametric equations are \(x(\theta) = \theta\cos\theta\) and \(y(\theta) = \theta\sin\theta\), so the derivative is a more complicated result due to the product rule.

If the equation of a circle is \(x^2 + y^2 = r^2\), prove that the circumference of the circle is \(C = 2\pi r\). Relatedly, the "volume" of a circle would be \(V = \pi r^3/3\), since \(A = \pi r^2\) and \(V\) is the antiderivative of \(A(r)\); the same holds true for the derivative against radius of the volume of a sphere (the derivative is the formula for the surface area of the sphere, \(4\pi r^2\)). For a circle, an intuitive guess is that the tangent line turns around at constant rate (i.e., the first derivative changes at constant rate), which means that it is not dependent on the \(x\) and \(y\) coordinates.
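A sketch of one standard route to that circumference result (this derivation is supplied editorially and is consistent with the implicit derivative \(y' = -x/y\) above): on the upper semicircle \(y = \sqrt{r^2 - x^2}\),

\[\sqrt{1 + \left(y'\right)^2} = \sqrt{1 + \frac{x^2}{y^2}} = \frac{r}{\sqrt{r^2 - x^2}},\]

so the arc length of the full circle is

\[C = 2\int_{-r}^{r} \frac{r\,dx}{\sqrt{r^2 - x^2}} = 2r\left[\arcsin\frac{x}{r}\right]_{-r}^{r} = 2r\left(\frac{\pi}{2} + \frac{\pi}{2}\right) = 2\pi r.\]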
The curvature of a circle whose radius is 5 ft is \(\frac{1}{5}\) per foot (the radius of curvature is the reciprocal of the curvature). This means that the tangent line, in traversing the circle, turns at a rate of 1/5 radian per foot moved along the arc. (As one commenter put it, a title like "differentiation of a circle" by itself makes no sense — one differentiates a function or a relation, not a figure.) In the inscribed-rectangle argument, the area of the rectangles can then be calculated, noting that the same rectangle is present four times in the circle (once in each quarter of it). Substituting into the formula for general parametrizations gives exactly the same result as above, with \(x\) replaced by \(t\), if we use primes for derivatives with respect to the parameter \(t\). The "second derivative" is simply the derivative of the derivative of a function.

Figure 10.4.4 shows part of the curve traced out by the end of a string unwinding from a circle; the dotted lines represent the string at a few different times. Find parametric equations for this curve, using a circle of radius 1, and assuming that the string unwinds counter-clockwise and the end of the string is initially at \((1,0)\).
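A hedged solution sketch for that exercise (this is the standard textbook answer for the involute of the unit circle, supplied here rather than recovered from the page): after the string has unwound through angle \(t\), the point of tangency is \((\cos t, \sin t)\) and the straight, taut segment of length \(t\) points along the tangent there, giving

\[x(t) = \cos t + t\sin t, \qquad y(t) = \sin t - t\cos t, \qquad t \ge 0,\]

which starts at \((1, 0)\) when \(t = 0\).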
second derivative of a circle 2020 | CommonCrawl |
Official Journal of the Japan Wood Research Society
Micro-FTIR spectroscopy and partial least-squares regression for rapid determination of moisture content of nanogram-scaled heat-treated wood
Hanmeng Yuan1,
Shiyao Tang2,
Qiuyan Luo1,
Teng Xiao1,
Wenlei Wang1,
Qiang Ma1,
Xin Guo1 &
Yiqiang Wu2
Journal of Wood Science volume 66, Article number: 1 (2020)
Moisture sorption has a significant impact on the performance of heat-treated wood. In order to better characterize moisture sorption of heat-treated wood, a method for rapid determination of the moisture content (MC) of nanogram-scaled heat-treated wood is proposed in this work. During the moisture adsorption process, micro-Fourier transform infrared (FTIR) spectra of heat-treated wood were recorded. Spectral analysis was applied to these measured spectra, and moisture adsorption sites and spectral ranges affected by moisture sorption were identified. Meanwhile, the moisture contents (MCs) of heat-treated wood at various relative humidity (RH) levels were measured using a dynamic vapor sorption (DVS) setup. Based on these spectral ranges and MCs, a quantitative forecasting model was established using partial least-squares regression (PLSR). Furthermore, the developed forecasting model was applied to acquire the moisture sorption isotherm of heat-treated wood, and a very positive correlation between predicted and measured MCs was observed. It was confirmed that this method is effective for rapid detection of the MC of nanogram-scaled heat-treated wood, with the unique advantages of rapid analysis (on the order of seconds) and low sample consumption (on the order of nanograms).
Wood is a green and renewable building material, and it has been widely used in the construction industry, furniture production, and the pulp and paper industry [1,2,3]. Heat treatment is considered to be an effective technique for physical modification of wood, while moisture sorption has a significant impact on the performance of heat-treated wood [4,5,6,7]. Hence, deeper research on moisture sorption of heat-treated wood is extremely important.
Moisture sorption, an important property of heat-treated wood, has been studied from various aspects [8,9,10]. Moisture content (MC) is one of the key aspects, and it is mainly measured by gravimetric methods, especially dynamic vapor sorption (DVS). For example, Metsä-Kortelainen et al. [11] confirmed that heat-treated Scots pine (Pinus sylvestris) and Norway spruce (Picea abies) have lower MCs than untreated samples with dimensions of 22 × 65 × 150 mm3. Besides, Hill et al. [12] employed the DVS apparatus to acquire MCs of heat-treated Scots pine (Pinus sylvestris L.) in the relative humidity (RH) range from 0 to 95%. Further, using the DVS apparatus, the sorption isotherms of other heat-treated woods, including acacia (Acacia mangium) [13], sesendok (Endospermum malaccense) [14], Scots pine (Pinus sylvestris L.) [15,16,17], and Eucalyptus pellita [18], were determined. Although the results are promising, the DVS technique suffers from certain limitations, such as high sample consumption (milligram level) and long experiment time (minute level).
Many spectroscopic methods, such as near-infrared spectroscopy [19, 20], Fourier transform infrared (FTIR) spectroscopy [21,22,23], and Raman spectroscopy [24], have been employed to study moisture sorption of heat-treated wood at the molecular level. For example, Esteves et al. [25] demonstrated that near-infrared spectroscopy is able to predict the MC of heat-treated pine (Pinus pinaster) and eucalypt (Eucalyptus globulus). Boonstra et al. [26] took advantage of FTIR spectroscopy to study moisture sorption, and showed that the decrease in MC of heat-treated wood was attributed to the cross-linking of lignin and the reduction of OH groups. Guo et al. [24] examined moisture sorption using Raman spectroscopy. Among these spectroscopic methods, FTIR spectroscopy has been widely applied because it has many merits, for example high spectral quality [27, 28], fast data collection speed [29,30,31,32], high signal-to-noise ratio [33, 34], and high sensitivity for the detection of moisture [35]. Moreover, micro-FTIR spectroscopy is a superior analytical technique for investigating micron-sized samples [36]. Through the use of a light microscope, an infrared spectrophotometer, a mercury cadmium telluride detector, and an extensive on-line software library of organic chemical spectra, this technique is capable of identifying micron-scaled samples.
Considering that micro-FTIR spectroscopy has the ability to study micron-sized samples [37, 38], in this study we developed a method for rapid detection of the MC of nanogram-scaled heat-treated wood. First, we collected micro-FTIR spectra of nanogram-scaled heat-treated wood during the moisture adsorption process. Second, these collected spectra were used to determine moisture sorption sites and spectral ranges correlated with moisture sorption. Third, based on these determined spectral ranges and the measured MCs, a micro-FTIR forecasting model was generated. Finally, the developed forecasting model was applied to acquire the moisture sorption isotherm of nanogram-scaled heat-treated wood.
Wood specimens (100 × 30 × 10 mm in length, width, and thickness) were collected from the straight stem of Ginkgo biloba L. (Ginkgoaceae). Heat treatment was then applied to these wood specimens in an electric vacuum drying oven under a controlled condition of 180 ± 1 °C, and lasted 4 h. From these heat-treated wood specimens, transverse sections were prepared without embedding or any chemical treatment. The sections, 5 mm × 5 mm × 10 μm, were cut using a manual rotary microtome (Leica RM2135). Prior to the spectral measurement, the transverse section of the heat-treated wood specimen was dried at 102 ± 3 °C in the oven for 2 h.
Experimental setup for measurement of micro-FTIR spectroscopy
Figure 1a shows the experimental setup for measurement of micro-FTIR spectroscopy. The main component of the setup was a spectrometer (Nicolet IN 10), which included a microscope providing the function of selecting the observation area. During the spectral measurement, one observation area (30 μm by 30 μm) was randomly selected in the transverse section of the heat-treated wood specimen, in which a small quantity (~ 1 ng) of heat-treated wood was present. Micro-FTIR spectra in the wavenumber range from 720 to 4000 cm−1 were recorded with 4 cm−1 resolution; 32 scans were collected. Meanwhile, the spectral data were recorded at a constant temperature of 25 °C. Figure 1b shows the sample cell. One prepared transverse section was placed on the base of the sample cell, which was made of a ZnSe plate. This sample cell was then mounted on the stage of the spectrometer, and nitrogen gas with a specific RH was circulated through it.
Before the spectral measurements, a kinetic spectroscopy test was conducted to determine the equilibration (balance) time. Figure 2 shows typical changes of the set RH, the real RH, the temperature, and the peak height of the real-time spectrum with time. When the set RH was changed to a new value (such as 5 and 10%), a latency of 3–4 min appeared. During the latency period, the real RH approached the set RH and then stabilized. Meanwhile, after 15 min the real-time spectrum showed almost no further change (the height of the main peak at 3352 cm−1, associated with moisture sorption, was used to monitor spectral change). Based on these results, 60 min was set as the balance time.
Typical changes of the set RH, the real RH, temperature and peak height of real-time spectrum with time
Determination of MC using DVS apparatus
Moisture content was measured using a DVS apparatus (DVS AdvantagePlus). First, a heat-treated wood sample was placed on the sample tray, which hung from the microbalance situated in a thermostatically controlled cabinet. The apparatus recorded the sample mass at the set RH values (adsorption): 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90 and 95%, at a temperature of 25 °C, and in the reverse sequence for the desorption isotherm. Every set RH was maintained until the sample mass of the heat-treated wood specimen changed by less than 0.002% per minute over 10 min.
Moisture content was calculated using the following formula:
$${\text{MC}} = \frac{{m - m_{\text{d}} }}{{m_{\text{d}} }} \times 100,$$
where md is the dry sample mass and m is the real-time sample mass.
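As a minimal illustration of this formula (the masses below are hypothetical, not data from the study), in Python:

import math  # not required here, but kept for consistency with later sketches

def moisture_content(m, m_d):
    # MC in percent from the real-time mass m and the dry mass m_d
    return (m - m_d) / m_d * 100.0

print(moisture_content(11.36, 10.00))  # hypothetical masses -> 13.6 (% MC)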
Figure 3 shows typical changes of MC and RH with time. When the set RH was changed to the next value (for example, 5 and 10%), a time delay of 4–10 min appeared. During the latency period, the real RH approached the set RH and then stabilized. The MC therefore increased to a steady value at the set RH, which was recorded as the reference value. It should be noted that three replicates were run, and each MC collected as a reference value was the mean of the three replicates.
Typical changes of MC and RH vs. time
Spectral data processing
Acquiring difference spectrum
To further analyze the moisture sorption of heat-treated wood qualitatively and quantitatively, a difference spectrum technique was introduced.
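A minimal sketch of how such difference spectra can be computed (assuming each spectrum is stored as an absorbance array on a common wavenumber grid; the array names are hypothetical):

import numpy as np

def difference_spectrum(spectrum_at_mc, spectrum_dry):
    # Subtract the dry-state (0% MC) spectrum so that only
    # moisture-induced spectral changes remain.
    return np.asarray(spectrum_at_mc) - np.asarray(spectrum_dry)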
Establishment of the forecasting model based on micro-FTIR spectra
A micro-FTIR forecasting model was generated using the TQ Analyst™ qualitative and quantitative analysis software, one of the OMNIC software suites. In the description table, quantitative analysis was set to "partial least squares". In the pathlength table, the pathlength type was set to "constant". In the components table, the component was set to "moisture content", whose maximum and minimum values were both entered; these two values were acquired using the DVS apparatus. In the standards table, sample spectra collected at 35 different relative humidities were introduced. Four-fifths of the sample spectra, measured at 28 different relative humidities with six replicates (i.e., 168 spectra), were assigned to the calibration set, and the remaining sample spectra, acquired at seven different relative humidities with six replicates (i.e., 42 spectra), were allocated to the validation set. In the spectra table, smoothing was disabled and multipoint baseline correction was programmed. In the regions table, the spectral range was adjusted according to the qualitative description of heat-treated wood moisture sorption. When these parameters were enabled, the TQ Analyst™ software program was run, and a forecasting model was established. Model performance was estimated using the coefficient of determination (R2), the root-mean-square error of cross-validation (RMSECV), and the root-mean-square error of prediction (RMSEP).
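The study used the TQ Analyst™ software; the sketch below reproduces the same calibration/validation logic with scikit-learn's PLSRegression purely for illustration (the 168/42 split follows the text, the random arrays are stand-ins for real spectra, and the number of latent variables is an assumption):

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((210, 500))        # stand-in absorbances in the selected spectral ranges
y = rng.random(210) * 15.0        # stand-in reference MCs (0-15 %) measured by DVS

X_cal, y_cal = X[:168], y[:168]   # calibration set (four-fifths of the spectra)
X_val, y_val = X[168:], y[168:]   # validation set (the remaining fifth)

pls = PLSRegression(n_components=5)   # latent-variable count is an assumption
pls.fit(X_cal, y_cal)

rmsep = np.sqrt(mean_squared_error(y_val, pls.predict(X_val).ravel()))
print(f"RMSEP = {rmsep:.3f} % MC")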
Qualitatively analyzing moisture sorption in heat-treated wood
Figure 4 shows the heat-treated wood spectra collected during the moisture adsorption process. The development of the micro-FTIR spectra over a range of MC from 0 to 15.0% can be seen in this figure. At the MC of 15.0%, the main band at 3358 cm−1, assigned to the O–H stretching vibration, increased, showing that OH groups are moisture adsorption sites of heat-treated wood. The band at 1736 cm−1 was assigned to the C=O stretching vibration of carboxylic acid, the 1600 cm−1 band belonged to the aromatic skeletal vibration plus the C=O stretching vibration, and the band at 1158 cm−1 was attributed to the glucosidic C–O–C vibration. For comparison, the heat-treated wood spectrum measured at 0% MC is also displayed, in which these three bands appeared at 1739, 1604, and 1160 cm−1. As RH increased, the positions of these three bands exhibited continuous red shifts. These peak shifts indicated that the carbonyl and C–O groups are also moisture adsorption sites of heat-treated wood. Moreover, it was confirmed that two micro-FTIR spectral ranges correlated with moisture sorption were 3700–3100 and 1780–1700 cm−1.
Heat-treated wood spectra collected during the moisture adsorption process
Figure 5 presents the difference spectra at various MC levels during the moisture adsorption process. The broad envelope in the range of 3700–2800 cm−1, containing many component bands, was observed to rise as a result of moisture sorption. Accordingly, the first spectral range correlated with moisture sorption was precisely identified as 3700–2800 cm−1. Meanwhile, the 1755 cm−1 band, belonging to the free carbonyl group, decreased, while the 1725 cm−1 band, assigned to the hydrogen-bonded carbonyl group, showed the reverse trend; the band around 1642 cm−1 was assigned to the H–O–H bending vibration. Therefore, the second spectral range was precisely identified as 1770–1580 cm−1. Moreover, with an increase of RH, the 1171 cm−1 band was shown to decrease, while the 1142 cm−1 band was found to rise. This further suggested that the third spectral range correlated with moisture sorption could be precisely confirmed as 1180–1140 cm−1.
Difference spectra collected in the MC range from 1.5 to 15.0%
Further, the variation in peak height with MC for three peaks impacted by water sorption is shown in Fig. 6. Clearly, the variation differed among the three peaks, suggesting that water molecules were absorbed at all of these sorption sites. However, none of these three peaks alone could predict the moisture sorption isotherm. Therefore, a method for determination of the MC of heat-treated wood is urgently needed.
The variation in the peak height for three peaks impacted by moisture sorption vs. MC. Quadrate dot: the 3602 cm−1 band. Circular dot: the 3149 cm−1 band. Triangle dot: the 1642 cm−1 band
Quantitative detection of MC of heat-treated wood
As shown previously, DVS has provided vast numbers of moisture sorption isotherms [39,40,41]. Hence, this technique was used to collect MCs as measured values. Figure 7 shows the experimental sorption isotherm. The isotherm curve presents a typical sigmoidal shape commonly observed for other lignocellulosic materials [42].
a Change in moisture content of heat-treated wood with the varying RH levels over the time profile in the isotherm run. b Equilibrium moisture content of heat-treated wood over the full set RH range in the adsorption process
In order to establish a method for rapid determination of MC, the forecasting model based on micro-FTIR spectroscopy had to be determined first. As mentioned earlier, the spectral range is an important parameter of the forecasting model. The three spectral ranges of 3700–2800, 1770–1580, and 1180–1140 cm−1 correlated with moisture sorption were proposed as Case A. Further, widened and narrowed spectral ranges were introduced as Case B and Case C, respectively (Case B: 3700–2800, 2800–2700, 1770–1580, and 1180–1140 cm−1; Case C: 3700–3000, 1770–1580, and 1180–1140 cm−1). In all three cases, the micro-FTIR forecasting model was generated and the corresponding parameters such as RMSECV, RMSEP, and R2 were acquired in the TQ Analyst™ software (as shown in Table 1). The forecasting model established in Case A had the highest forecast accuracy, as it possessed the highest value of R2 as well as the lowest values of RMSEP and RMSECV. Moreover, this model made use of the whole of the spectral ranges correlated with moisture sorption, and changing the spectral range (increasing or decreasing it) decreased the accuracy of the forecasting models established in Cases B and C.
Table 1 PLSR quality parameters for cross- and test set-validation for the proposed three cases
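For reference, the usual definitions of these quality parameters (standard chemometrics formulas, stated here editorially rather than quoted from the paper) are

\[\text{RMSEP} = \sqrt{\frac{1}{n_{\text{val}}}\sum_{i=1}^{n_{\text{val}}}\left(\hat{y}_i - y_i\right)^2}, \qquad \text{RMSECV} = \sqrt{\frac{1}{n_{\text{cal}}}\sum_{i=1}^{n_{\text{cal}}}\left(\hat{y}_{(i)} - y_i\right)^2},\]

where \(\hat{y}_{(i)}\) denotes the cross-validated prediction for sample \(i\) obtained with that sample left out of the calibration.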
Based on the established forecasting model, the MCs of heat-treated wood were predicted. The values measured using the DVS setup are also displayed in Fig. 8. During the moisture adsorption process, the predicted MCs were very close to the measured values (relative error lower than 3%). The results indicated that this method for rapid detection of the MC of nanogram-scaled heat-treated wood using micro-FTIR spectroscopy and partial least-squares regression is effective and efficient. Compared to traditional DVS, it has the unique advantages of rapid analysis (on the order of seconds) and low sample consumption (on the order of nanograms).
Moisture sorption isotherm estimated by micro-FTIR forecasting model and that measured by DVS approach. Solid quadrate spot: MCs predicted using micro-FTIR forecasting model. Hollow circular spot: MCs measured using DVS setup
A method for rapid determination of the moisture content of nanogram-scaled heat-treated wood was proposed here. Micro-FTIR spectra were measured during the moisture adsorption process. Analysis of these spectra confirmed that hydroxyl and carbonyl groups are moisture sorption sites of heat-treated wood. Moreover, three spectral ranges related to moisture sorption, namely 3700–2800, 1770–1580, and 1180–1140 cm−1, were identified. Based on these three spectral ranges and the reference values, a quantitative forecasting model was built using PLSR. Further, the developed forecasting model was applied to acquire the moisture sorption isotherm of heat-treated wood, in which a very positive correlation between the forecasts and the recorded values was observed. It was confirmed that this method for rapid detection of moisture content in nanogram-scaled heat-treated wood is effective, with the unique advantages of rapid analysis (on the order of seconds) and low sample consumption (on the order of nanograms).
The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.
FTIR: Fourier transform infrared
MCs: moisture contents
DVS: dynamic vapor sorption
PLSR: partial least-squares regression
RMSECV: root-mean-square error of cross-validation
RMSEP: root-mean-square error of prediction
Wålinder MEP, Gardner DJ (1999) Factors influencing contact angle measurements on wood particles by column wicking. J Adhes Sci Technol 13:1363–1374
Obataya E, Norimoto M, Gril J (1998) The effects of adsorbed water on dynamic mechanical properties of wood. Polymer 39:3059–3064
Maeda H, Fukada E (1987) Effect of bound water on piezoelectric, dielectric, and elastic properties of wood. J Appl Polym Sci 33:1187–1198
Rekola J, Aho AJ, Gunn J, Matinlinna J, Hirvonen J, Viitaniemi P, Vallittu PK (2009) The effect of heat treatment of wood on osteoconductivity. Acta Biomater 5:1596–1604
Kartal SN, Hwang WJ, Imamura Y (2007) Water absorption of boron-treated and heat-modified wood. J Wood Sci 53:454–457
Salmen L, Possler H, Stevanic JS, Stanzltschegg SE (2008) Analysis of thermally treated wood samples using dynamic FT-IR-spectroscopy. Holzforschung 62:676–678
Pelit H, Budakçı M, Sönmez A, Burdurlu E (2015) Surface roughness and brightness of scots pine (Pinus sylvestris) applied with water-based varnish after densification and heat treatment. J Wood Sci 61:586–594
Temiz A, Terziev N, Jacobsen B, Eikenes M (2010) Weathering, water absorption, and durability of silicon, acetylated, and heat-treated wood. J Appl Polym Sci 102:4506–4513
Huang X, Kocaefe D, Kocaefe Y, Boluk Y, Pichette A (2012) Changes in wettability of heat-treated wood due to artificial weathering. Wood Sci Technol 46:1215–1237
Wang Y, Iida I, Minato K (2007) Mechanical properties of wood in an unstable state due to temperature changes, and analysis of the relevant mechanism IV: effect of chemical components on destabilization of wood. J Wood Sci 53:381–387
Metsä-Kortelainen S, Antikainen T, Viitaniemi P (2006) The water absorption of sapwood and heartwood of Scots pine and Norway spruce heat-treated at 170 °C, 190 °C, 210 °C and 230 °C. Eur J Wood Wood Prod 64:192–197
Hill CAS, Ramsay J, Keating B, Laine K, Rautkari L, Hughes M, Constant B (2012) The water vapour sorption properties of thermally modified and densified wood. J Mater Sci 47:3191–3197
Willems W (2014) The water vapor sorption mechanism and its hysteresis in wood: the water/void mixture postulate. Wood Sci Technol 48:499–518
Jalaludin Z, Hill CAS, YanJun X, Samsi HW, Husain H, Awang K, Curling SF (2010) Analysis of the water vapour sorption isotherms of thermally modified acacia and sesendok. Wood Mater Sci Eng 5:194–203
Hosseinpourpia R, Adamopoulos S, Holstein N, Mai C (2017) Dynamic vapour sorption and water-related properties of thermally modified Scots pine (Pinus sylvestris L.) wood pre-treated with proton acid. Polym Degrad Stabil 138:161–168
Kymäläinen M, Mlouka SB, Belt T, Merk V, Liljeström V, Hänninen T, Uimonen T, Kostiainen M, Rautkari L (2018) Chemical, water vapour sorption and ultrastructural analysis of Scots pine wood thermally modified in high-pressure reactor under saturated steam. J Mater Sci 53:3027–3037
Hosseinpourpia R, Adamopoulos S, Mai C (2017) Effects of acid pre-treatments on the swelling and vapor sorption of thermally modified scots pine (Pinus sylvestris L.) wood. BioResources 13:331–345
Sun B, Wang Z, Liu J (2017) Changes of chemical properties and the water vapour sorption of Eucalyptus pellita wood thermally modified in vacuum. J Wood Sci 63:133–139
Mitsui K, Inagaki T, Tsuchikawa S (2008) Monitoring of hydroxyl groups in wood during heat treatment using NIR spectroscopy. Biomacromolecules 9:286–288
Sandak A, Sandak J, Allegretti O (2015) Quality control of vacuum thermally modified wood with near infrared spectroscopy. Vacuum 114:44–48
Akgül M, Gümüşkaya E, Korkut S (2007) Crystalline structure of heat-treated Scots pine [Pinus sylvestris L.] and Uludağ fir [Abies nordmanniana (Stev.) subsp. Bornmuelleriana (Mattf.)] wood. Wood Sci Technol 41:281–289
Özgenç Ö, Durmaz S, Boyaci IH, Eksi-Kocak H (2017) Determination of chemical changes in heat-treated wood using ATR-FTIR and FT Raman spectrometry. Spectrochim Acta A 171:395–400
Kotilainen RA, Toivanen TJ, Alén RJ (2000) FTIR monitoring of chemical changes in softwood during heating. J Wood Chem Technol 20:307–320
Guo X, Wu Y, Yan N (2016) Characterizing spatial distribution of the adsorbed water in wood cell wall of Ginkgo biloba L. by u-FTIR and confocal Raman spectroscopy. Holzforschung 71:415–423
Esteves B, Pereira H (2008) Quality assessment of heat-treated wood by NIR spectroscopy. Holz als Roh- und Werkstoff 66:323–332
Boonstra MJ, Tjeerdsma B (2006) Chemical analysis of heat treated softwoods. Holz als Roh- und Werkstoff 64:204–211
Gerwert K, Hess B, Michel H, Buchanan S (1988) FTIR studies on crystals of photosynthetic reaction centers. FEBS Lett 232:303–307
Zhang J, Zhang X, Zhang F, Yu S (2017) Solid-film sampling method for the determination of protein secondary structure by Fourier transform infrared spectroscopy. Anal Bioanal Chem 409:4459–4465
Jangir DK, Charak S, Mehrotra R, Kundu S (2011) FTIR and circular dichroism spectroscopic study of interaction of 5-fluorouracil with DNA. J Photochem Photobiol B 105:143–148
Bunaciu AA, Aboul-Enein HY, Fleschin S (2012) FTIR spectrophotometric methods used for antioxidant activity assay in medicinal plants. Appl Spectrosc Rev 47:245–255
Amir RM, Anjum FM, Khan MI, Khan MR, Pasha I (2013) Application of Fourier transform infrared (FTIR) spectroscopy for the identification of wheat varieties. J Food Sci Technol 50:1018–1023
Ahmad I, Ullah J, Ishaq M, Khan H, Gul K, Siddiqui S, Ahmad W (2015) Monitoring of oxidation behavior in mineral base oil additized with biomass derived antioxidants using FT-IR spectroscopy. RSC Adv 5:101089–101100
González-Gaitano G, Isasi JR (2001) Analysis of the rotational structure of CO2 by FTIR spectroscopy. Chem Educ 6:362–364
Heidi N, Nils Kristian A, Young JF, Bertram HC, Pedersen ME, Stine G, Gjermund V, Achim K (2011) Monitoring cellular responses upon fatty acid exposure by Fourier transform infrared spectroscopy and Raman spectroscopy. Analyst 136:1649–1658
Célino A, Goncalves O, Jacquemin F, Fréour S (2014) Qualitative and quantitative assessment of water sorption in natural fibres using ATR-FTIR spectroscopy. Carbohydr Polym 101:163–170
Yong L, Zhiwei Y, Yury D, Gassman PL, Hai W, Alexander L (2008) Hygroscopic behavior of substrate-deposited particles studied by micro-FT-IR spectroscopy and complementary methods of particle analysis. Anal Chem 80:633
Guo X, Liu L, Wu J, Fan J, Wu Y (2018) Qualitatively and quantitatively characterizing water adsorption of a cellulose nanofiber film using micro-FTIR spectroscopy. RSC Adv 8:4214–4220
He X, Leng C, Pang S, Zhang Y (2017) Kinetics study of heterogeneous reactions of ozone with unsaturated fatty acid single droplets using micro-FTIR spectroscopy. RSC Adv 7:3204–3213
Argyropoulos D, Alex R, Kohler R, Muller J (2012) Moisture sorption isotherms and isosteric heat of sorption of leaves and stems of lemon balm (Melissa officinalis L.) established by dynamic vapor sorption. LWT Food Sci Technol 47:324–331
Garbalinska H, Bochenek M, Malorny W, Von Werder J (2017) Comparative analysis of the dynamic vapor sorption (DVS) technique and the traditional method for sorption isotherms determination—exemplified at autoclaved aerated concrete samples of four density classes. Cement Concr Res 91:97–105
Fang L, Xiong X, Wang X, Hong C, Mo X (2016) Effects of surface modification methods on mechanical and interfacial properties of high-density polyethylene-bonded wood veneer composites. J Wood Sci 63:65–73
Avramidis S (1989) Evaluation of "three-variable" models for the prediction of equilibrium moisture content in wood. Wood Sci Technol 23:251–257
The authors are grateful for the financial support from National Natural Science Foundation of China (Grant Numbers 31890771, 31670563 and 31500475), Special projects of scientific and technological innovation in Hunan forestry (Grant No. XLK201982), Natural Science Foundation of Hunan Province, China (Grant Number 2019JJ50981), Research Foundation of Education Bureau of Hunan Province, China (Grant Number 18A166), and Scientific Innovation Fund for Post-graduates of Central South University of Forestry and Technology (Grant Number CX20192073).
This study was supported by National Natural Science Foundation of China (Grant Numbers 31890771, 31670563 and 31500475), Special projects of scientific and technological innovation in Hunan forestry (Grant No. XLK201982), Natural Science Foundation of Hunan Province, China (Grant Number 2019JJ50981), Research Foundation of Education Bureau of Hunan Province, China (Grant Number 18A166), and Scientific Innovation Fund for Post-graduates of Central South University of Forestry and Technology (Grant Number CX20192073).
College of Science, Central South University of Forestry and Technology, Changsha, 410004, China
Hanmeng Yuan, Qiuyan Luo, Teng Xiao, Wenlei Wang, Qiang Ma & Xin Guo
College of Material Science and Engineering, Central South University of Forestry and Technology, Changsha, 410004, China
Shiyao Tang & Yiqiang Wu
Hanmeng Yuan
Shiyao Tang
Qiuyan Luo
Teng Xiao
Wenlei Wang
Xin Guo
Yiqiang Wu
All the authors have contributed to the manuscript and take all responsibilities for the entire content of the manuscript. All authors read and approved the final manuscript.
Correspondence to Xin Guo or Yiqiang Wu.
Yuan, H., Tang, S., Luo, Q. et al. Micro-FTIR spectroscopy and partial least-squares regression for rapid determination of moisture content of nanogram-scaled heat-treated wood. J Wood Sci 66, 1 (2020). https://doi.org/10.1186/s10086-020-1848-7
Moisture sorption
Micro-FTIR spectroscopy
Partial least-squares regression | CommonCrawl |
Computation of the ideal class group of certain complex quartic fields
Author: Richard B. Lakein
Journal: Math. Comp. 28 (1974), 839-846
MSC: Primary 12A50
DOI: https://doi.org/10.1090/S0025-5718-1974-0374090-1
Abstract: The ideal class group of quartic fields $K = F(\sqrt{\mu})$, where $F = \mathbf{Q}(i)$, is calculated by a method adapted from the method of cycles of reduced ideals for real quadratic fields. The class number is found in this way for 5000 fields $K = F(\sqrt{\pi})$, $\pi \equiv \pm 1 \bmod 4$, $\pi$ a prime of $F$. A tabulation of the distribution of class numbers shows a striking similarity to that for real quadratic fields with prime discriminant. Also, two fields were found with noncyclic ideal class group $C(3) \times C(3)$.
Article copyright: © Copyright 1974 American Mathematical Society | CommonCrawl |
American Institute of Mathematical Sciences
July 2019, 39(7): 3749-3765. doi: 10.3934/dcds.2019152
Large-time regular solutions to the modified quasi-geostrophic equation in Besov spaces
Wen Tan, Bo-Qing Dong and Zhi-Min Chen
School of Mathematics and Statistics, Shenzhen University, Shenzhen 518052, China
* Corresponding author: Zhi-Min Chen
Received January 2018 Published April 2019
This paper is devoted to the study of the modified quasi-geostrophic equation

$ \partial_t\theta + u\cdot\nabla\theta + \nu\Lambda^\alpha\theta = 0 \ \ \mbox{ with } \ \ u = \Lambda^\beta\mathcal{R}^\perp\theta $

in $ \mathbb{R}^2 $. By the Littlewood-Paley theory, we obtain the local well-posedness and the smoothing effect of the equation in critical Besov spaces. These results are applied to show the global existence of regular solutions for the critical case $ \beta = \alpha - 1 $ and the existence of regular solutions for large time $ t > T $ with respect to the supercritical case $ \beta > \alpha - 1 $ in Besov spaces. Earlier results for the equation in Hilbert spaces $ H^s $ are improved.
Keywords: Modified quasi-geostrophic equations, Besov spaces, local well-posedness, smoothing effect, large-time global regular solutions.
Mathematics Subject Classification: 35Q35, 76D03.
Citation: Wen Tan, Bo-Qing Dong, Zhi-Min Chen. Large-time regular solutions to the modified quasi-geostrophic equation in Besov spaces. Discrete & Continuous Dynamical Systems - A, 2019, 39 (7) : 3749-3765. doi: 10.3934/dcds.2019152
Optimizing ultraviolet B radiation exposure to prevent vitamin D deficiency among pregnant women in the tropical zone: report from cohort study on vitamin D status and its impact during pregnancy in Indonesia
Raden Tina Dewi Judistiani ORCID: orcid.org/0000-0002-3265-07081,2,
Sefita Aryuti Nirmala1,2,
Meilia Rahmawati3,
Reni Ghrahani2,4,5,
Yessika Adelwin Natalia1,
Adhi Kristianto Sugianli5,6,
Agnes Rengga Indrati2,5,6,
Oki Suwarsa2,5,7 &
Budi Setiabudiawan ORCID: orcid.org/0000-0002-4842-24512,4,5
Vitamin D deficiency during pregnancy carries a potential threat to fetal well-being. Natural conversion of vitamin D in the skin can be facilitated by direct ultraviolet B (UVB) radiation, but the effect is reduced by using umbrellas, wearing covering clothes, or applying sunblock cream. Muslim women wear hijab, which allows only the face and hands to be seen. With an increasing proportion of Muslim women wearing hijab, and the lack of vitamin D fortification and low fish consumption in Indonesia, vitamin D deficiency poses a problem among pregnant women. This study aimed at finding the best timing and duration of UVB exposure which can be suggested to prevent vitamin D deficiency among pregnant women, whether wearing hijab or not.
This study recruited 304 pregnant women in the first trimester, 75–76 women from each of 4 cities of the most populated province, West Java, Indonesia, which represents 70–80% of pregnancies per year. Three-day records of the duration, time, and type of outdoor activity and the clothing worn by the women were collected. UVB radiation intensity data were obtained. Calculations of the body surface area exposed to direct UVB radiation and of the UVB radiation intensity were performed. Measurement of the vitamin D level in sera was done in the same week.
The median maternal serum vitamin D level was 13.6 ng/mL, and the mean exposed area was around 0.48 m2, or 18.59% of the total body surface area. Radiation intensity reached its peak between 10.00 and 13.00, but the mean duration of exposure to UVB during this window was lower than expected. A significant correlation was found between the maternal serum vitamin D level and the exposed body surface area (r = 0.36, p < 0.002), the percentage of exposed body surface (r = 0.39, p < 0.001), and the radiation intensity (r = 0.15, p = 0.029). Further analysis showed that the duration of exposure to UVB should be longer for pregnant women wearing hijab than for women without hijab.
This study suggested that the best timing to get UVB exposure is between 10.00 and 13.00, with a longer duration of continuous exposure per day for women wearing hijab (64.5 vs 37.5 min).
Vitamin D deficiency has been recognized as a global public health problem, and vitamin D plays a wide role in health and disease prevention [1]. Previously, it was presumed that vitamin D deficiency would be more common in temperate climate regions such as North America and Europe [2]. However, vitamin D deficiency is also common in countries around the equator line or in the tropical zone, such as South Asia and Southeast Asia [3, 4]. Some countries have a vitamin D deficiency prevalence of more than 40% among the adult population. The prevalence is even higher in pregnant women, affecting more than 60% of them [1]. This may be due to an imbalance of supply and demand during pregnancy.
Vitamin D, a lipophilic hormone, is present in two forms: natural ergocalciferol (vitamin D2), mainly derived from plant sources through radiation of ergosterol produced by yeasts, and cholecalciferol (vitamin D3), mainly produced in the skin through conversion by ultraviolet B (UVB) radiation. Other sources come from animal products such as fatty fish, mushrooms, egg yolks, liver, and dairy products [5, 6]. UVB radiation is an important factor in converting 7-dehydrocholesterol in the skin into pre-vitamin D, which is isomerized by body heat into vitamin D3 (cholecalciferol) and then transported by the blood to the liver, where it is converted to 25-hydroxyvitamin D (25-OH Vit D) [7].
Increased calcium and vitamin D requirements during pregnancy increase the risk of vitamin D deficiency in pregnant women [8]. It has been reported that low maternal vitamin D increases the risk of adverse pregnancy outcomes such as preeclampsia, gestational diabetes mellitus, preterm birth, and small-for-gestational-age babies [9,10,11,12]. The use of supplementation or food fortification has been recommended. However, a systematic review reported that concurrent use of vitamin D and calcium supplementation increased the risk of preterm birth [13]. Achieving an optimal level of vitamin D through adequate exposure to sunlight is therefore considered safer.
Optimal sunlight exposure in different regions can be influenced by many factors: environmental factors such as solar zenith angle, clouds, ozone, surface reflection, and altitude; and human factors such as age, skin pigmentation, duration of exposure, use of sunscreen, type of clothing, total body surface area exposed to sun, and body mass index [7, 14]. Moreover, in many Asian and Middle Eastern countries, cultural and religious practices highly influence daily exposure to sunlight [15, 16]. Sun-seeking behaviour is also uncommon in tropical Asian populations, due to the warm climate most of the year and the cultural view that fair skin is associated with beauty [3]. Unfortunately, reports regarding vitamin D status in the Indonesian population are scarce. A study conducted by Setiati et al. in 2008 reported that the prevalence of vitamin D deficiency among Indonesian elderly women aged 60 years and older in nursing care was around 35.1%. Most deficiency cases occurred in subjects who went outdoors only once a week, wore a veil, and were exposed to sun for around 30–60 min a week [17]. However, there has not been any study investigating the association between UVB radiation exposure and vitamin D level among pregnant women in Indonesia. The aim of this study was to explore the effects of exposure to UVB in daily activity and of maternal clothing style on the vitamin D level in the first trimester.
This study was conducted at the beginning of the Cohort Study on Vitamin D Status and Its Impact During Pregnancy in Indonesia, carried out from July 2016. West Java Province was chosen as it has the largest population of pregnant women; it is located between 5°50′–7°50′ S and 104°48′–108°48′ E [18]. Women were recruited from Bandung, Sukabumi, Waled, and Cimahi to represent different geographical areas of the province. The midwives offered the pregnant women participation in this study at their first encounter and explained the whole procedure with complete information. Pregnant women were met at their clinic, at health centres, or at Posyandu, a voluntary cadre-led post which is held once a month. Interested candidates were referred to the appointed hospital for ultrasound examination by the attending obstetricians. Pregnant women were recruited if they (1) were residents of the city, (2) were at a gestational age between 10 and 14 weeks as confirmed by ultrasonography, and (3) had a normal singleton pregnancy.
Every eligible woman gave her consent to participate in the study and to allow publication of its results.
The number of samples needed for this study was 97, based on the formula for a correlation study as follows:
$$ n=\left\{\frac{Z_{\alpha}+Z_{\beta}}{0.5\,\ln\left[\left(1+r\right)/\left(1-r\right)\right]}\right\}^{2}+3 $$
where n = sample size, Zα = 1.64, Zβ = 1.28, and r = 0.3.
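For illustration, the calculation can be reproduced in a few lines of Python; this is a minimal sketch using only the values quoted above, and the exact result depends on the quantiles and rounding employed.

```python
import math

def sample_size_correlation(z_alpha, z_beta, r):
    """Sample size needed to detect a correlation r (Fisher z-transform formula)."""
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher's z-transform of r
    return ((z_alpha + z_beta) / fisher_z) ** 2 + 3

# Values quoted in the text: Z_alpha = 1.64, Z_beta = 1.28, r = 0.3
print(sample_size_correlation(1.64, 1.28, 0.3))
# ~92 with these inputs; the reported 97 presumably reflects slightly
# different quantiles or rounding
```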
We suspected that the prevalence of vitamin D deficiency was high in Indonesia. In order to increase the chances of including pregnant women with normal vitamin D levels and to reduce bias, we recruited more than three times the number of subjects needed: 75 from each city.
Interviews to obtain demographic data and obstetric history were conducted by trained midwives during recruitment. The participating women were trained to record their daily activities involving direct exposure to sunlight over the following 3 days. The data consisted of the date, duration, and time of day of each activity, and the clothing worn during it. The records were checked and collected by the midwives.
Total body surface area (TBSA) in square meters was calculated using the Mosteller formula [19], as shown below.
$$ \mathrm{TBSA}\ \left({\mathrm{m}}^2\right)=\sqrt{\frac{\mathrm{height}\ \left(\mathrm{cm}\right)\times \mathrm{weight}\ \left(\mathrm{kg}\right)}{3600}} $$
We also calculated the percentage of body area exposed to sunlight using the combined Mosteller and Wallace formulas [20]. An example calculation based on the type of clothing and the practice of wearing hijab is shown in Table 1.
Table 1 Calculations of total body surface area exposed to ultra violet B radiation: Combination of Mosteller and Wallace Formula
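As an illustration of how the two formulas combine, the sketch below computes an exposed area; the height, weight, and rule-of-nines partition are illustrative assumptions, not values taken from Table 1.

```python
import math

def tbsa_mosteller(height_cm, weight_kg):
    """Total body surface area (m^2) by the Mosteller formula."""
    return math.sqrt(height_cm * weight_kg / 3600)

def exposed_area_m2(height_cm, weight_kg, exposed_fraction):
    """Body area exposed to sun, given the uncovered fraction of the surface."""
    return tbsa_mosteller(height_cm, weight_kg) * exposed_fraction

# Wallace "rule of nines": half of the head (4.5%) plus hands and forearms
# (taken here as half of each 9% arm) gives 4.5 + 2 * 4.5 = 13.5%, the figure
# quoted in the Discussion for women wearing hijab
fraction = 0.045 + 2 * 0.045
print(exposed_area_m2(155, 55, fraction))  # ~0.21 m^2 for a 155 cm, 55 kg woman
```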
To obtain data on individual vitamin D levels, 10 cc of blood was drawn from the median cubital vein. Approximately 5 cc was used for this study, and the serum was separated. The remaining blood was used for a complete blood count and hepatitis screening, as in routine antenatal screening. The sera for vitamin D analysis were transferred in a cool box to Dr. Hasan Sadikin Hospital in Bandung. All sera were stored at −20°C to maintain their quality while awaiting collection of at least 80 samples, which could take a month or two. After thawing, vitamin D levels were measured using an enzyme-linked immunosorbent assay (ELISA) with the VIDAS® 25 OH Vitamin D Total kit from bioMérieux SA. The minimum reading for detection was 8.1 ng/mL; any lower results were recorded as 8 ng/mL.
In the interpretation of the results, the women were classified by vitamin D status as deficient (< 20 ng/mL), insufficient (21–29 ng/mL), or normal (≥ 30 ng/mL), as described by the Endocrine Society [21].
Hourly UV radiation intensity data were obtained with a Vantage Pro 2® station at the office of the Indonesian National Institute of Aeronautics and Space in Bandung. According to the Indonesian Government standard, only one station is available per province. Records matching the recruitment period of the study participants were obtained from September 2016 until January 2017. To establish the dose of UV radiation, the intensity in watt/m2 was converted into minimal erythema dose (MED) per hour. One MED is defined as the amount of UVB radiation that will produce minimal erythema (redness caused by engorgement of capillaries) of an individual's skin within a few hours following exposure [22].
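The unit conversion itself is straightforward; the sketch below assumes 1 MED ≈ 69.7 mJ/cm2 (697 J/m2) for Fitzpatrick skin types III–IV, the threshold discussed later in the paper.

```python
MED_J_PER_M2 = 697.0  # 1 MED for skin types III-IV, an assumption taken from the Discussion

def watt_m2_to_med_per_hour(intensity_w_m2):
    """Convert UVB irradiance (W/m^2) into an erythemal dose rate (MED/hour)."""
    joules_per_hour_per_m2 = intensity_w_m2 * 3600  # energy accumulated in one hour
    return joules_per_hour_per_m2 / MED_J_PER_M2

# Example: ~0.076 W/m^2 corresponds to the study's median of ~0.39 MED/hour
print(watt_m2_to_med_per_hour(0.076))
```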
The main analysis was performed using IBM SPSS Statistics for Windows version 24 (IBM Corp., Armonk, NY, USA). Descriptive statistics were presented as frequencies and proportions for categorical variables, and as medians and interquartile ranges for continuous variables. Spearman's rank correlation was used to assess the associations among total body surface area exposed to sun, UV intensity, and maternal serum vitamin D level. We then used multinomial logistic regression to analyze the associations between factors influencing UVB exposure and serum vitamin D categories. The normal vitamin D level was the base category to which the other two categories were compared. Age, parity, pre-pregnancy BMI, education, and the use of sunscreen were included a priori in the adjusted model as potential confounders.
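For readers who wish to reproduce this kind of model outside SPSS, a minimal sketch in Python follows; the file and column names are hypothetical, and the outcome is coded so that the normal group is the base category.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names; vitd_cat coded 0 = normal (base category),
# 1 = insufficient, 2 = deficient
df = pd.read_csv("cohort.csv")
X = sm.add_constant(df[["pct_body_exposed", "uvb_med_per_hour", "age",
                        "parity", "prepreg_bmi", "education", "sunscreen_use"]])
model = sm.MNLogit(df["vitd_cat"], X).fit()
print(model.summary())
print(np.exp(model.params))  # odds ratios relative to the normal base category
```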
The community midwives approached 345 pregnant women at the community level; these women were referred to hospital for further screening. Three hundred and four women fulfilled our inclusion criteria on ultrasound, but 5 women withdrew their participation prior to blood sampling. Further exclusions and their reasons are shown in Fig. 1.
Selection of study participants
The final sample comprised 204 subjects. Seventy-four women (36.3%) did not wear hijab. The mean age of the women was 28.4 years and the mean body mass index (BMI) was 23.7 kg/m2. The mean total duration of outdoor activity was 69.75 min between 06.00 and 18.00 each day; each activity lasted between 5 and 10 min, and a subtotal of 29.1 min took place between 10.00 and 13.00. The types of outdoor activity were similar in all groups: walking to grocery stores, drying clothes in the sun, watering the garden, or dropping off and picking up children from school. No sport activity was recorded.
Their characteristics are shown in Table 2.
Table 2 Characteristics of Pregnant Women
Owing to the detection limit of the ELISA assay, every value below 8.1 ng/mL was recorded as 8 ng/mL. Maternal vitamin D ranged from 8.0 to 39.0 ng/mL, with a mean (SD) of 14.7 (6.5) ng/mL and a median (interquartile range, IQR) of 13.6 (10) ng/mL. We found that 42 women (20.6%) had very low vitamin D levels (< 8.1 ng/mL).
Based on the daily maternal clothing data, we calculated the TBSA exposed to sunlight using the Mosteller formula; the median TBSA among our study participants was around 0.48 m2 (IQR = 0.46). Using the simplified Wallace formula, we found that the median percentage of exposed body area was 18.59% (IQR = 19.5).
Data on daily ultraviolet radiation from the sun during the months of observation are presented in Table 3. In general, radiation intensity increased gradually from 10.00 to reach its peak at 13.00, became slightly lower between 13.00 and 15.00, and then declined gradually after 15.00. The lowest intensities were recorded between 06.00–07.00 and between 17.00–18.00. Table 3 also shows that January 2017 had the lowest UVB radiation. After converting the radiation intensity from watt/m2 to MED per hour, we found a median UVB intensity of around 0.39 MED/hour (IQR = 0.43).
Table 3 Average hourly UVB intensity in September 2016 – March 2017
Spearman's rank correlation showed significant correlations of maternal serum vitamin D level with TBSA, percentage of body area exposed to sun, and UVB intensity, as shown in Table 4.
Table 4 Correlation of total body surface area, percentage of body area exposed, and ultra violet B intensity, with maternal serum vitamin D level
Univariable multinomial logistic regression detected decreased odds of vitamin D deficiency with increasing TBSA (OR (95% CI) = 1.89E-107 (2.24E-204 – 1.6E-10), p = 0.03) and with increasing percentage of body area exposed to sun (OR (95% CI) = 0.93 (0.87–0.99), p = 0.02). However, these associations were no longer significant after adjustment for potential confounders, as shown in Table 5.
Table 5 Associations among factors influencing UV radiation and maternal serum vitamin D level
This study demonstrated that the majority (80.4%) of pregnant women in West Java had vitamin D deficiency. Among the baseline characteristics, only educational level differed significantly between women with vitamin D deficiency and women with insufficient or normal vitamin D levels.
More than 70% of pregnant women in this study did not use sunblock. Although sunblock use has been promoted to prevent cutaneous carcinogenesis, a recent study in Belgium reported that a sunblock with a sun protection factor (SPF) of 50+ significantly decreased cutaneous vitamin D production following a single UVB exposure, independent of TBSA, with minimal effect on circulating 25-OH Vit D [23]. Regulation of serum vitamin D level is a complex process, and many factors influence the bioavailability of circulating vitamin D [2, 24]. The high proportion of vitamin D deficiency despite rare sunblock use in our study suggests that other endogenous or exogenous processes influenced vitamin D levels.
Most pregnant women in this study were of optimal reproductive age, i.e., between 20 and 34 years old. No significant difference in vitamin D level was found among age groups, most likely because of the relatively young and narrow age range (16–43 years). The effect of age on vitamin D level is more prominent in elderly people, who have a thinner dermal layer and consequently a reduced capacity to synthesize vitamin D [25].
Overweight and obese individuals are prone to vitamin D deficiency. Vitamin D is fat soluble, and fat deposition throughout the body disturbs the transport and conversion of provitamin D3 into previtamin D3; overweight and obese individuals therefore have a reduced capacity for vitamin D synthesis [26, 27]. Since more than 50% of our study participants had a normal pre-pregnancy BMI, no significant difference was observed.
This study found a significant difference in education level between the vitamin D deficient group and the insufficient-normal group. Lower educational level has been associated with vitamin D deficiency in Saudi Arabia and Poland [28, 29]. A similar pattern was found in this study, since more than half of the participants (116 women, 56.86%) had an education level of middle school or lower. Educational level would influence dietary patterns and other aspects of daily lifestyle related to individual vitamin D status [30,31,32]. The free supplement provided to pregnant women by the government did not contain vitamin D. Very few subjects (9 women) stated that they had consumed a vitamin D-containing supplement, for less than a month, and they were still deficient in vitamin D. Previous reports from our cohort had shown that the proportion of anemia increased by trimester among women with cholecalciferol deficiency, and that blood cholecalciferol level was associated with better fetal growth as indicated by biparietal diameter and abdominal circumference [18, 33]. It is very likely that changes in lifestyle, namely exposing adequate skin to the sun at the appropriate time and for an adequate duration to enhance vitamin D conversion, may improve fetal growth.
The median maternal serum vitamin D in this report was lower than in our previous report (13.6 vs 15.34 ng/mL), far below the 20 ng/mL cutoff [18]. Commercially available vitamin D supplements in Indonesia are expensive, costing up to 10 times the original price in the exporting countries, which makes them less affordable for most pregnant women.
Significant associations of vitamin D level with biparietal diameter and abdominal circumference were consistent after adjustment for maternal age, pre-pregnancy body mass index, parity, serum ferritin level, and hemoglobin level [33].
These results are important for the Indonesian government as grounds for revising recommendations and improving health promotion programs. As Indonesia is a tropical country, sunlight with abundant ultraviolet B should be available all year long, although caution is warranted since this study also showed that UVB radiation reached its lowest point in January 2017. Adequate UVB for vitamin D conversion also depends on the duration of exposure and the TBSA exposed. Based on the UVB intensity data, the best time for sun exposure was from 10.00 until 13.00. However, most subjects tended to do fewer outdoor activities during that period, as also reported in Pakistan and Italy [34, 35]. In countries with large Muslim populations, the religious practice of wearing hijab is common, and it has been demonstrated to be an independent factor for vitamin D deficiency in the Middle East and South Asia [36,37,38,39]. A similar finding emerged in our study, as shown previously in Table 1.
The UVB radiation data retrieved from the local institute of aeronautics and space were recorded in watt/m2. However, human skin sensitivity to sun exposure varies with skin pigmentation [40]. Based on the Fitzpatrick classification, the majority (80–90%) of the Indonesian population is classified as type III or IV with melano-competent features [41]. A previous study at Hasan Sadikin Hospital in Bandung, Indonesia, reported that for skin types III and IV the UVB radiation dose required to achieve 1 MED was around 69.7 mJ/cm2, similar to findings in India, even though the most common skin type reported in India was type V, which is darker than types III and IV [42].
Holick et al. reported that sun exposure of the face, arms, and hands could achieve an adequate dose of UVB radiation [43]. When converted into a percentage of body area exposed to sun based on the combined Mosteller and Wallace formulas (Table 1), the minimum area needing UVB exposure was around 22.5%. More than half of the pregnant women in this study did not achieve adequate UVB exposure, since the median body area exposed to sun was only 18.59%.
This study found a median UVB intensity of 0.39 MED/hour, so achieving 1 MED would require approximately 2.5 h of daily sun exposure. According to Holick, however, the minimum duration for adequate vitamin D conversion is only 25% of the time needed to reach 1 MED [43]; thus, the minimum duration of sun exposure was around 37.5 min per day. This amount of exposure would not suffice for women wearing hijab, whose exposed body area was only 13.5% or less. Therefore, women with hijab should increase their exposure time to at least 64.5 min per day.
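The arithmetic behind these recommendations can be made explicit; the sketch below applies Holick's 25%-of-1-MED rule, and the scaling of exposure time with exposed body area is our assumption, since the paper does not state the exact calculation behind the 64.5 min figure.

```python
def min_exposure_minutes(med_per_hour, fraction_of_med=0.25):
    """Minutes of sun needed for adequate vitamin D conversion (Holick's rule)."""
    hours_for_one_med = 1.0 / med_per_hour
    return 60 * hours_for_one_med * fraction_of_med

base = min_exposure_minutes(0.39)  # ~38 min at the median dose rate (~37.5 in the text)
print(base)
# Assumed inverse scaling with exposed body area (22.5% benchmark vs 13.5% with hijab):
print(base * (22.5 / 13.5))  # ~64 min, close to the 64.5 min quoted in the text
```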
This study found significant correlations between the extent and percentage of body surface area exposed to sun, UVB intensity, and maternal serum vitamin D level.
Multinomial regression analysis did not support these associations, which may indicate that larger samples are needed to identify the most influential factors for vitamin D levels among these pregnant women.
Several other limitations could have influenced the results of this study. First, vitamin D levels below 8.1 ng/mL could not be detected because of the limitations of the ELISA method. Second, UVB intensity data were only available from Bandung, as the agency places a station only in each provincial capital; in this study, these data were used as an approximation for the other cities in West Java. On the other hand, the strength of this study lies in its population-based design and multiple locations representing several geographical areas of West Java, north to south, west to east, and urban/rural. We were able to produce information on the urgency of promoting outdoor activity during the optimum hours for utilizing the sun's energy to prevent vitamin D deficiency among pregnant women; however, a randomized clinical trial is still needed to assess its effectiveness.
This study found that vitamin D deficiency was prevalent among pregnant women in West Java, Indonesia. There were significant correlations of maternal vitamin D level with TBSA, percentage of body area exposed to sun, and UVB intensity. The best time for achieving UVB exposure was between 10.00 and 13.00 daily, and outdoor activity within this period should be encouraged. Pregnant women without hijab can be advised to have continuous exposure for approximately 37.5 min per day, while for women with hijab the advisable duration is around 64.5 min per day. A carefully designed clinical trial may be proposed to establish whether these findings can be incorporated into maternity education and health promotion to prevent vitamin D deficiency.
BMI: Body mass index
ELISA: Enzyme-linked immunosorbent assay
IQR: Interquartile range
MED: Minimal erythemal dose
SPF: Sun protection factor
TBSA: Total body surface area
UVB: Ultraviolet B
Palacios C, Gonzalez L. Is vitamin D deficiency a major global public health problem? J Steroid Biochem Mol Biol. 2014;144 Pt A:138–45.
Holick MF. Vitamin D Deficiency. N Engl J Med. 2007;357(3):266–81.
Mithal A, Wahl DA, Bonjour JP, Burckhardt P, Dawson-Hughes B, Eisman JA, El-Hajj Fuleihan G, Josse RG, Lips P, Morales-Torres J. Global vitamin D status and determinants of hypovitaminosis D. Osteoporos Int. 2009;20(11):1807–20.
Man RE, Li LJ, Cheng CY, Wong TY, Lamoureux E, Sabanayagam C. Prevalence and Determinants of Suboptimal Vitamin D Levels in a Multiethnic Asian Population. Nutrients. 2017;9(3).
Contractor P, Gandhi A, Solanki G, Shah PA, Shrivastav PS. Determination of ergocalciferol in human plasma after Diels-Alder derivatization by LC–MS/MS and its application to a bioequivalence study. Journal of Pharmaceutical Analysis. 2017;7(6):417–22.
Carmeliet G, Dermauw V, Bouillon R. Vitamin D signaling in calcium and bone homeostasis: a delicate balance. Best Pract Res Clin Endocrinol Metab. 2015;29(4):621–31.
Engelsen O. The relationship between ultraviolet radiation exposure and vitamin D status. Nutrients. 2010;2(5):482–95.
Bowyer L, Catling-Paull C, Diamond T, Homer C, Davis G, Craig ME. Vitamin D, PTH and calcium levels in pregnant women and their neonates. Clin Endocrinol (Oxf). 2009;70(3):372–7.
Wei SQ, Qi HP, Luo ZC, Fraser WD. Maternal vitamin D status and adverse pregnancy outcomes: a systematic review and meta-analysis. J Matern Fetal Neonatal Med. 2013;26(9):889–99.
Miliku K, Vinkhuyzen A, Blanken LM, McGrath JJ, Eyles DW, Burne TH, Hofman A, Tiemeier H, Steegers EA, Gaillard R, et al. Maternal vitamin D concentrations during pregnancy, fetal growth patterns, and risks of adverse birth outcomes. Am J Clin Nutr. 2016;103(6):1514–22.
Boyle VT, Thorstensen EB, Mourath D, Jones MB, McCowan LM, Kenny LC, Baker PN. The relationship between 25-hydroxyvitamin D concentration in early pregnancy and pregnancy outcomes in a large, prospective cohort. Br J Nutr. 2016;116(8):1409–15.
Eggemoen AR, Jenum AK, Mdala I, Knutsen KV, Lagerlov P, Sletner L. Vitamin D levels during pregnancy and associations with birth weight and body composition of the newborn: a longitudinal multiethnic population-based study. Br J Nutr. 2017;117(7):985–93.
De-Regil LM, Palacios C, Lombardo LK, Pena-Rosas JP. Vitamin D supplementation for women during pregnancy. Cochrane Database Syst Rev. 2016;(1):Cd008873.
Wacker M, Holick MF. Sunlight and Vitamin D: A global perspective for health. Dermato-endocrinology. 2013;5(1):51–108.
Nichols E, Khatib I, Aburto N, Sullivan K, Scanlon K, Wirth J, Serdula M. Vitamin D status and determinants of deficiency among non-pregnant Jordanian women of reproductive age. Eur J Clin Nutr. 2012;66(6):751–6.
Granlund L, Ramnemark A, Andersson C, Lindkvist M, Fharm E, Norberg M. Prevalence of vitamin D deficiency and its association with nutrition, travelling and clothing habits in an immigrant population in Northern Sweden. Eur J Clin Nutr. 2016;70(3):373–9.
Setiati S. Vitamin D status among Indonesian elderly women living in institutionalized care units. Acta Med Indones. 2008;40(2):78–83.
Judistiani RTD, Gumilang L, Nirmala SA, Irianti S, Wirhana D, Permana I, Sofjan L, Duhita H, Tambunan LA, Gurnadi JI, et al. Association of Colecalciferol, Ferritin, and Anemia among Pregnant Women: Result from Cohort Study on Vitamin D Status and Its Impact during Pregnancy and Childhood in Indonesia. Anemia. 2018;2018(6):1–6.
Mosteller RD. Simplified calculation of body-surface area. N Engl J Med. 1987;317(17):1098.
Wallace AB. The exposure treatment of burns. Lancet. 1951;1(6653):501–4.
Holick MF, Binkley NC, Bischoff-Ferrari HA, Gordon CM, Hanley DA, Heaney RP, Murad MH, Weaver CM. Evaluation, Treatment, and Prevention of Vitamin D Deficiency: an Endocrine Society Clinical Practice Guideline. J Clin Endocrinol Metabol. 2011;96(7):1911–30.
Heckman CJ, Chandler R, Kloss JD, Benson A, Rooney D, Munshi T, Darlow SD, Perlis C, Manne SL, Oslin DW. Minimal Erythema Dose (MED) testing. J Vis Exp. 2013;(75):50175.
Libon F, Courtois J, Le Goff C, Lukas P, Fabregat-Cabello N, Seidel L, Cavalier E, Nikkels AF. Sunscreens block cutaneous vitamin D production with only a minimal effect on circulating 25-hydroxyvitamin D. Arch Osteoporos. 2017;12(1):66.
Holick MF. Vitamin D status : measurement, interpretation, and clinical application. Ann Epidemiol. 2009;19(2):73–8.
MacLaughlin J, Holick MF. Aging decreases the capacity of human skin to produce vitamin D3. J Clin Investig. 1985;76(4):1536–8.
Contreras-Manzano A, Villalpando S, Robledo-Perez R. Vitamin D status by sociodemographic factors and body mass index in Mexican women at reproductive age. Salud Publica Mex. 2017;59(5):518–25.
Delle Monache S, Di Fulvio P, Iannetti E, Valerii L, Capone L, Nespoli MG, Bologna M, Angelucci A. Body mass index represents a good predictor of vitamin D status in women independently from age. Clin Nutr. 2019;38(2):829–34.
Al-Musharaf S, Fouda AM, Turkestani ZI, Al-Ajlan A, Sabico S, Alnaami MA, Wani K, Hussain DS, Alraqebah B, Al-Serehi A, et al. Vitamin D Deficiency Prevalence and Predictors in Early Pregnancy among Arab Women. Nutrients. 2018;10(4):1–10.
Wyskida M, Owczarek A, Szybalska A, Brzozowska A, Szczerbowska I, Wieczorowska-Tobis K, Puzianowska-Kuźnicka M, Franek E, Mossakowska M, Grodzicki T, et al. Socio-economic determinants of vitamin D deficiency in the older Polish population: results from the PolSenior study. Public Health Nutr. 2018;21(11):1995–2003.
Al-Faris AN. High Prevalence of Vitamin D Deficiency among Pregnant Saudi Women. Nutrients. 2016;8(2).
Shiraishi M, Haruna M, Matsuzaki M, Murayama R. Demographic and lifestyle factors associated with vitamin D status in pregnant Japanese women. J Nutr Sci Vitaminol. 2014;60(6):420–8.
Ganmaa D, Holick MF, Rich-Edwards JW, Frazier LA, Davaalkham D, Ninjin B, Janes C, Hoover RN, Troisi R. Vitamin D deficiency in reproductive age Mongolian women: A cross sectional study. J Steroid Biochem Mol Biol. 2014;139:1–6.
Judistiani RTD, Madjid TH, Irianti S, Natalia YA, Indrati AR, Ghozali M, Sribudiani Y, Yuniati T, Abdulah R, Setiabudiawan B. Association of first trimester maternal vitamin D, ferritin and hemoglobin level with third trimester fetal biometry: result from cohort study on vitamin D status and its impact during pregnancy and childhood in Indonesia. BMC Pregnancy Childbirth. 2019;19(1):112.
Roomi MA, Farooq A, Ullah E, Lone KP. Hypovitaminosis D and its association with lifestyle factors. Pak J Med Sci. 2015;31(5):1236–40.
Colao A, Muscogiuri G, Rubino M, Vuolo L, Pivonello C, Sabatino P, Pizzo M, Campanile G, Fittipaldi R, Lombardi G, et al. Hypovitaminosis D in adolescents living in the land of sun is correlated with incorrect life style: a survey study in Campania region. Endocrine. 2015;49(2):521–7.
Gannage-Yared MH, Maalouf G, Khalife S, Challita S, Yaghi Y, Ziade N, Chalfoun A, Norquist J, Chandler J. Prevalence and predictors of vitamin D inadequacy amongst Lebanese osteoporotic women. Br J Nutr. 2009;101(4):487–91.
Buyukuslu N, Esin K, Hizli H, Sunal N, Yigit P, Garipagaoglu M. Clothing preference affects vitamin D status of young women. Nutr Res. 2014;34(8):688–93.
Nimri LF. Vitamin D status of female UAE college students and associated risk factors. J Public Health (Oxf). 2018;40(3):e284–90.
Bawaskar PH, Bawaskar HS, Bawaskar PH, Pakhare AP. Profile of Vitamin D in patients attending at general hospital Mahad India. Indian J Endocrinol Metab. 2017;21(1):125–30.
D'Orazio J, Jarrett S, Amaro-Ortiz A, Scott T. UV radiation and the skin. Int J Mol Sci. 2013;14(6):12222–48.
Kochevar IE, Taylor CR, Krutmann J. Disorders due to ultraviolet radiation. In: Goldsmith LA, Katz SI, Gilchrest BA, Paller AS, Leffell DJ, Wolff K, editors. Fitzpatrick's Dermatology in General Medicine. New York: The McGraw-Hill Companies, Inc; 2012.
Mehta R, Shenoi S, Balachandran C, Pai S. Minimal erythema response (MED) to solar simulated irradiation in normal Indian skin. Indian J Dermatol Venereol Leprol. 2004;70(5):277–9.
Holick MF. Vitamin D: importance in the prevention of cancers, type 1 diabetes, heart disease, and osteoporosis. Am J Clin Nutr. 2004;79(3):362–71.
We express our gratitude for the cooperation of staff at the Bandung, Cimahi, Waled and Sukabumi Health Office and Primary Health Care Centers, as well as Rumah Sakit Cibabat, Rumah Sakit Al Mulk Sukabumi, Rumah Sakit Umum Daerah Waled Cirebon and Rumah Sakit dr Hasan Sadikin Bandung. We thank all our field researchers, Bunga Mars, Putri Anisa Faiziah, Devi Agustini, Dina Andiani and Sri Devi for their efforts in this study.
Funding for this study was obtained from Academic Leadership Grant from Universitas Padjadjaran number 2476/UN6.C/2018 and partial contribution from BP3IPTEK – The research and development office for science and technology of the West Java Province Government. Recipient of both funds was Budi Setiabudiawan.
Funders had no role in the design, analyses, or interpretation of the study.
Public Health Department, Faculty of Medicine Universitas Padjadjaran, Jalan Eijkman 38, Bandung, Jawa Barat, 40161, Indonesia
Raden Tina Dewi Judistiani, Sefita Aryuti Nirmala & Yessika Adelwin Natalia
Centre of Immunology Studies, Faculty of Medicine Universitas Padjadjaran, Bandung, Indonesia
Reni Ghrahani, Agnes Rengga Indrati, Oki Suwarsa & Budi Setiabudiawan
Master in Midwifery Program, Faculty of Medicine Universitas Padjadjaran, Bandung, Indonesia
Meilia Rahmawati
Department of Child Health, Faculty of Medicine Universitas Padjadjaran, Bandung, Indonesia
Reni Ghrahani
dr Hasan Sadikin Hospital, Bandung, Indonesia
Adhi Kristianto Sugianli
Clinical Pathology Department, Faculty of Medicine Universitas Padjadjaran, Bandung, Indonesia
Adhi Kristianto Sugianli & Agnes Rengga Indrati
Department of Dermatovenereology, Faculty of Medicine Universitas Padjadjaran, Bandung, Indonesia
Oki Suwarsa
RTDJ was the primary investigator, who developed the idea and research design and wrote the manuscript. MR, RG, and SAN contributed to grant proposal writing and data collection. YAN contributed to data analysis and manuscript writing. AKS and ARI contributed to blood sample collection, transfer, laboratory examination, and interpretation. OS and BS contributed to grant development, discussion, and revision. All authors read and approved the final manuscript.
Correspondence to Raden Tina Dewi Judistiani.
Ethical approval was given by the Health Research Ethics Committee of the Faculty of Medicine, Universitas Padjadjaran, number 34/UN6.C1.3.2/KEPK/PN/2016.
All study subjects had given written informed consent to participate in this study.
All women understood that the whole procedures were done for research purposes and therefore oral consent for publication was obtained during recruitment.
Additional file 1:
Data collection form. (DOCX 14 kb)
Judistiani, R.T.D., Nirmala, S.A., Rahmawati, M. et al. Optimizing ultraviolet B radiation exposure to prevent vitamin D deficiency among pregnant women in the tropical zone: report from cohort study on vitamin D status and its impact during pregnancy in Indonesia. BMC Pregnancy Childbirth 19, 209 (2019) doi:10.1186/s12884-019-2306-7
Is it possible to use a balloon to float so high in the atmosphere that you can be gravitationally pulled towards a satellite?
A recent joke on the comedy panel show 8 out of 10 cats prompted this question. I'm pretty sure the answer's no, but hopefully someone can surprise me.
If you put a person in a balloon, such that the balloon ascended to the upper levels of the atmosphere, is it theoretically possible that an orbiting satellite's (i.e. a moon's) gravity would become strong enough to start pulling you towards it, taking over as the lifting force from your buoyancy?
Clearly this wouldn't work on Earth, as there's no atmosphere between the Earth and the moon, but would it be possible to have a satellite share an atmosphere with its planet such that this would be a possibility, or would any shared atmosphere cause too much drag to allow for the existence of any satellite?
If it were possible, would it also be possible to take a balloon up to the satellite's surface, or would the moon's gravity ensure that its atmosphere was too dense near the surface for a landing to be possible, thus leaving the balloonist suspended in equilibrium? Could you jump up from the balloon towards the moon (i.e., jumping away from the balloon in order to lose the buoyancy it provided)?
http://www.channel4.com/programmes/8-out-of-10-cats/4od#3430968
forces newtonian-gravity atmospheric-science earth
JohnLBevan
No, a shared atmosphere between body and moon is not possible.
For a natural satellite to remain in place, its orbit must be very stable, because such satellites exist for billions of years. Even the tiniest bit of atmosphere (a few molecules) would cause a tiny drag. However, drag adds up, so over a long enough period even a heavy object (such as a moon) would lose orbital energy to drag and ultimately spiral in and collide with the body it orbits.
A balloon needs quite a significant atmosphere to be usable. Present balloons can reach altitudes of 30–35 km. Since atmospheric density drops off exponentially with elevation, balloons would have to become gigantic to reach even a little higher. Reaching an elevation where the atmosphere has negligible density is impossible in a balloon.
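To put rough numbers on that (a back-of-envelope sketch, assuming an isothermal scale height $H \approx 7.5\ \mathrm{km}$ and sea-level density $\rho_0 \approx 1.2\ \mathrm{kg/m^3}$): the minimum envelope volume needed to lift a payload of mass $m$ is

$$V_{\min} \approx \frac{m}{\rho(z)}, \qquad \rho(z) \approx \rho_0\, e^{-z/H}.$$

At $z = 60\ \mathrm{km}$, $\rho \approx 1.2 \times e^{-8} \approx 4\times10^{-4}\ \mathrm{kg/m^3}$, so even a $100\ \mathrm{kg}$ payload needs $V \gtrsim 2.5\times10^{5}\ \mathrm{m^3}$, and that ignores the mass of the envelope itself.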
One can however, in theory, try to go as high as possible with a balloon, and then use other methods (such as rockets) from there, thus bypassing the densest part of the atmosphere and save a lot of fuel.
Edit: one more way to look at it: if a satellite had enough gravitational pull to pull up an observer in a balloon, it would certainly pull up the atmosphere too; the satellite would therefore be inside the atmosphere, which is impossible. Hence a satellite can never have enough gravitational pull to lift an observer who is inside the atmosphere.
$\begingroup$ Thanks @Gerrit; matches my suspicions - I was hoping there may be some workaround for the drag issue such as the planet (& its atmosphere) rotating in sync with the moon so as to negate any drag, but that seems pretty unlikely. $\endgroup$ – JohnLBevan Nov 2 '12 at 22:04
$\begingroup$ @JohnLBevan If the planet was spinning that quickly, it would not be a planet - rather it would be a rapidly-disintegrating saw-blade. $\endgroup$ – wizzwizz4 May 20 '17 at 19:56
$\begingroup$ And yet... it seems like Pluto and Charon share an atmosphere, though a very thin one: newscientist.com/article/… , and Robert L. Forward believed that two planets of close to equal mass could be close enough to share an atmosphere, what he coined a Rocheworld: en.wikipedia.org/wiki/Rocheworld. So if you adjust some of your physics here I believe it might be possible. $\endgroup$ – Len Feb 8 '18 at 15:21
$\begingroup$ @Len That would be a tiny exosphere, orders of magnitude too tiny for a balloon to travel. $\endgroup$ – gerrit Feb 8 '18 at 22:30
When the balloon is rising, it's doing so because it is less dense than the surrounding air and there is a net gravitational pull "down." The air is pulled more than the balloon - hence the buoyant force. If the balloon is to be found falling "up" toward the satellite, then surely the air around it would be falling even faster in that direction, since it is by hypothesis more dense.
The best you could hope for is for the two transitions \begin{align} \text{balloon less dense than air} & \to \text{balloon more dense than air} \\ \text{gravity dominated by planet} & \to \text{gravity dominated by satellite} \end{align} to occur at the same altitude, and for your inertia to carry you from one to the other. However, this kind of precarious situation wouldn't last very long.
Spinning conditions affect structure and properties of Nephila spider silk
Part of a collection:
MRS Bulletin Impact Section
Robert J. Young1, Chris Holland2, Zhengzhong Shao3 & Fritz Vollrath4
MRS Bulletin volume 46, pages 915–924 (2021)
Raman spectroscopy is used to elucidate the effect of spinning conditions upon the structure and mechanical properties of silk spun by Nephila spiders from the major ampullate gland. Silk fibers produced under natural spinning conditions, with spinning rates between 2 and 20 mm s−1, differed in microstructure and mechanical properties from fibers produced either more slowly or more rapidly. The data support the "uniform strain" hypothesis that the reinforcing units in spider silk fibers are subjected to the same strain as the fiber, optimizing toughness. In contrast, in synthetic high-performance polymer fibers, both the reinforcing units and the fiber experience uniform stress, which maximizes stiffness. The comparison of Nephila major and minor ampullate silks opens an intriguing window into dragline silk evolution and provides the first evidence of significant differences between the two silks, offering possibilities for further testing of hypotheses concerning the uniform strain versus uniform stress models.
It is well established that the microstructure and mechanical properties of engineering materials are controlled by the conditions employed to both synthesize and process them. Herein, we demonstrate that the situation is similar for a natural material, namely spider silk. We show that for a spider that normally produces silk at a reeling speed of between 2 and 20 mm s−1, silk produced at speeds outside this natural processing window has a different microstructure that leads to inferior tensile properties. Moreover, we also show that the silk has a generic microstructure that is optimized to respond mechanically to deformation such that the crystals in the fibers are deformed under conditions of uniform strain. This is different from high-performance synthetic polymer fibers where the microstructure is optimized such that crystals within the fibers are subjected to uniform stress.
Graphic abstract
Spiders make diverse and extensive use of silks, typically synthesizing and spinning silks from up to six different types of glands and associated spinnerets.1 Each of these fibers has a specific purpose, probably linked to dedicated sets of amino acid compositions, processing conditions, and resulting mechanical properties.2,3,4,5,6
Most studies investigating structure–property relationships in spider silks concentrate on dragline silks secreted from the pair of major ampullate (MaA) glands, generally because the large size of the gland and the thickness of the filaments make this the easiest silk to handle and investigate in detail.4,7,8,9,10,11,12,13,14,15,16 Moreover, this silk displays the rare combination of strength and elasticity (toughness) and of torsional memory that makes it such an interesting material.17,18,19,20 In addition to the MaA fibers, a spider often uses accessory minor ampullate (MiA) fibers, produced in the pair of minor ampullate glands, which seem to serve the purpose of reinforcing not only the dragline fibers but also the threads of the frame and radials of the orb web.21 It is sometimes assumed that the minor ampullate silks are smaller in diameter because they act as support threads rather than as the principal safety line that must carry the whole weight of the spider.22 Both MaA and MiA silks were investigated in this study, with the focus on elucidating the response of the Raman spectra of the different filaments under controlled deformation.
Raman spectroscopy has been employed in a number of studies of silks, both spider23 and silkworm.24,25,26,27,28 It has been used both as a characterization technique and as a means to study differences in secondary conformation between films, powders, and fibers,29,30,31,32,33,34 the denaturation process35,36, and the effect of solvent on fibers.37 Clearly, it is an important tool when studying structural differences.36,38
The assignment of Raman bands for spider silk has followed from the results of studies on silkworm silks. The main difference with spider silk is seen in the C–C backbone vibration found at approximately 1085 cm−1 in silkworm silk, but in the MaA spider silk spectra, it appears consistently at approximately 1095 cm−1.39,40 Using the analogy of model polypeptides and silkworm silk, it is not unreasonable to assign this 1095 cm−1 band to the β-sheet portion of the C–C backbone although there may be contributions from other secondary structures.23
Quantitative Raman techniques have been used to estimate the antiparallel β-sheet content of spider silk at 22 ± 5%.31 Less β-sheet structure was found in all spider dragline silks than in the Bombyx mori silkworm silk studied by Shao et al.,23 a difference between the two taxa that is in line with current x-ray diffraction data.41
Raman polarization techniques have been employed to study the orientation of crystallites along the fiber axis. The amide I C=O vibration is oriented normal to the fiber axis; therefore, the signal is strongest when scattered perpendicular to the chain axis. The amide III C–N vibration is oriented along the axis, and the signal is strongest when excited parallel to the fiber axis. These studies have confirmed that the β-sheet and a small amount of α-helix are highly oriented along the fiber axis, whereas the disordered phase is not preferentially aligned.31
Young et al.39,42 have followed the deformation of both B. mori and Nephila edulis silk fibers using Raman spectroscopy, including a study that examined their behavior under cyclic loading. They found that the 1095 cm−1 peak in spider silk and the 1085 cm−1 band in silkworm silk, assigned to C–C backbone vibrations,43,44,45 each behave in a similar manner to bands in other high-performance polymer fibers. They also found the amide III band at approximately 1230 cm−1 to shift linearly with stress.39,40,42 These observations led Young et al.39,42 to suggest that it is the polypeptide backbone that takes the strain when the silk is loaded. They also suggested tentatively, based upon their limited data, that the deformation behavior might be consistent with a uniform stress series model. More recently, Kremer et al.46,47,48 demonstrated that the deformation of spider silk can also be followed using Fourier transform infrared (FTIR) spectroscopy. This can only be done on bundles of fibers rather than on the individual filaments employed for Raman spectroscopy, but analogous shifts of IR bands with stress are obtained. They also found that the shift of the 964 cm−1 IR band with stress was approximately linear, but nevertheless suggested that a pure uniform stress series model would be an oversimplification for this material.
Micrographs of the major and minor ampullate silks of the N. senegalensis spider reeled under different conditions (Figure 1) are shown in Figure 2.
(a) Nephila senegalensis spider. (b) Schematic diagram of the silk-reeling equipment.
Micrographs of N. senegalensis dragline thread. (a) Light microscopy of section of a typical dragline composite thread as reeled with the thicker major and the thinner minor ampullate filaments after gently breathing onto it to demonstrate how humidity causes contraction (partial super-contraction) of the major filaments, but not the minor filaments, which are slack and buckle outward. In both cases, the threads were reeled at the natural walking and frame/radial drawing speeds of 25 mms−1. (b) Scanning electron micrograph (SEM) of a section of such a thread showing the significant size differences. Lower panels are SEM micrographs of sections of N. senegalensis major ampullate filament reeled slowly (c) at 0.5 mms−1 and fast (d) at 128.6 mms−1.
Reeling speed has a marked effect upon filament diameter, and our data agree with the results of an in-depth study of the interaction of spinning speed and temperature.49 Indeed, the sample reeled at 128.6 mms−1 is almost half the diameter of the silks reeled at the natural reeling speeds, which range between approximately 2 mms−1 and 20 mms−1 [Figure S1 in the Supplementary Material (SM)]. Not surprisingly, the high reeling speed created small fluctuations in filament diameter, presumably because such high-speed conditions are too fast for the spider to control the diameter.49 Importantly, varying the reeling speed allows us to probe the effects of pultrusion conditions, as demonstrated earlier by x-ray scattering,50 birefringence,51 and DMTA.52 Here, we use Raman spectroscopy to probe the system further.
While the spinning conditions affect the gross morphology of a filament, they also affect the fine morphology of the silk on the molecular level,50 which manifests itself in the mechanical properties of the fibers that we can measure directly (Table I).
Table I Mechanical properties of MaA filaments of N. senegalensis major ampullate silk reeled at different speeds.
Figure 3 shows a typical stress–strain curve for N. senegalensis major and minor ampullate silk filaments. The force–elongation response is almost linear and nearly identical for both silks over the first 2% of the stress–strain curves, which gives the initial Young's modulus. While both silks display similar modulus values, the stress at break of the MiA silk is lower than that of the MaA silk, although the strain to failure is higher for the MiA silk (Figure 3a). Thus, both silks have similar values of toughness (i.e., the ability to absorb energy, given by the area under each stress–strain curve), albeit through rather different fundamental processes. Although both types of fiber have a similar toughness, the MiA silk appears to undergo a more distinct yield process, presumably to optimize its mechanical functionality in the dragline and web. The difference in toughening mechanisms for the two fibers is probably due to differences in chemical structure (i.e., amino acid composition): the crystalline poly-alanine interactions that contribute to the overall strength of the MaA silk are interrupted in the MiA silk by serine spacer regions, while MiA silks also have an increased glycine content.53,54
Representative stress–strain curves obtained for N. senegalensis major (black) and minor (red) ampullate silks collected at the natural reeling speed from one representative animal. (a) Stress and strain at break, with average data for five samples of each type of silk: initial modulus (GPa, mean ± SD): MaA: 8.4 ± 1.7, MiA: 8.2 ± 0.6; tensile strength (GPa): MaA: 0.7 ± 0.1, MiA: 0.4 ± 0.1; load at break (N): MaA: 0.130 ± 0.003, MiA: 0.004 ± 0.001; strain at break (%): MaA: 19.6 ± 1.7, MiA: 30.5 ± 4.3. (b) Loading/unloading/reloading curves for representative filaments of the two silks, demonstrating significant underlying differences between them.
Of special interest here is the distinct and very different behavior of the two silks under repeated stressing and straining shown in Figure 3b [i.e., extension to a strain of 0.17, well below the breaking point, followed by relaxation and then by another extension]. The energy absorption (the areas enclosed in the loops) is lower for the MiA silk, but Figure 3a shows that this fiber can be deformed to higher elongations at lower stresses. Hence, both silk fibers have similar potential for energy absorption.
Raman spectra of the N. senegalensis dragline silk were obtained using near-infrared, helium/neon, and argon-ion lasers. The Ar+ Raman spectra shown in Figure S2 in the SM display a large fluorescent background that would require an excessive amount of data manipulation before the results were ready to be analyzed. Such extended treatment of spectra before analysis can introduce unnecessary errors into the intensity and peak position data.
Although the He/Ne laser reduces fluorescence and, therefore, the amount of data manipulation prior to final analysis, the spectrum requires 45 × 60 s acquisitions, and therefore 45 min in total, to collect in a usable form, as shown in Figure S2a in the SM. The near-IR laser was found to be the most efficient excitation to use because of the shorter collection time (60 s and 20 accumulations) and the lack of data manipulation required. This reduces the risk of damage to the fiber under the laser light and gives a well-resolved spectrum, cutting down on errors that may be introduced by smoothing spectra and adjusting baselines. Using near-IR excitation also means that the peaks investigated in this study are the most prominent peaks in the silk spectra: the 1095 cm−1 peak is better defined than with the He/Ne laser, as are the 1670 cm−1 and 970 cm−1 peaks.
Raman analysis of spider silks is still a developing field23,28,31,36,37,39,42,55,56 and the full assignment and comparison with other fibrous proteins attempted here will aid further investigations into these superior fibrous materials. Figure S2b in the SM shows the near-IR Raman spectrum from 800 to 1800 cm−1. The wavenumbers of the peaks have been labeled and Table S1 in the SM shows the vibrational assignments made for these peaks.
Figure 4a shows a comparison between the silk filaments spun from the major and minor ampullate glands, where we were able to detect clear differences. To allow for the significant diameter differences, we normalized the intensities to the 1450 cm−1 peak; the smaller-diameter MiA filaments scatter more weakly, resulting in a "noisier" signal. The most noticeable real difference in the spectra was in the 830/855 cm−1 doublet: the minor ampullate spectrum had a much larger 855 cm−1 peak, indicating a weaker hydrogen-bonding environment in this silk, which may be responsible for the lower yield stress of the MiA silk (Figure 3). This is confirmed by a decrease in the relative intensities of the 1095 cm−1 and 1230 cm−1 peaks. It may be relevant here that the amino acid sequence of Nephila's minor ampullate silk contains spacer regions, which may account for part of the weaker hydrogen-bonding environment.53
Raman spectra of N. senegalensis dragline silk. (a) Spectra of N. senegalensis major and minor ampullate silk reeled at natural walking speed of 23 mms−1. (b) Normalized intensity spectra of filaments reeled at 0.5 mms−1, 1.9 mms−1, 10.7 mms−1, 23.1 mms−1, and 128.6 mms−1.
A similar issue with signal strength was found for the MaA filaments reeled at extreme speeds (Figure 4b), where there can be a diameter difference of as much as 5 μm (Table I). The particularly small diameter of the fastest-reeled sample made it very difficult to focus the 2-μm laser spot onto a 3-μm fiber, with obvious effects on the signal-to-noise ratio and spectral intensity. Overall, however, the main features of the spectra were similar for all the MaA silk filaments processed at different spinning speeds, showing that the chemical structure of the silk is controlled by the type of gland used rather than by the spinning conditions.
To probe the molecular reconfigurations of Nephila major ampullate silk during deformation, we examined the shifts of the 970 cm−1, 1095 cm−1, 1230 cm−1, and 1400 cm−1 Raman bands (Figure S3a–d in the SM) and found that the strain-induced band shifts were indeed nonlinear, confirming previous studies.39,42 The stress-induced band shifts of the 1095 cm−1 and 1230 cm−1 peaks were reported previously for N. edulis39,40,42 to be approximately linear up to a stress of about 0.8 GPa; there was no report in those studies of the mechanical behavior of the MaA silk at larger stresses. The loading technique used in the present investigation and the higher strength of the N. senegalensis major ampullate MaA silk have allowed Raman spectra to be obtained at stresses up to 1.8 GPa. We found that the band shifts of the 1095 cm−1 and 1230 cm−1 peaks, along with those of the 970 cm−1 and 1400 cm−1 peaks, are approximately linear up to around 0.4 GPa, but at larger stresses the rate of band shift with stress decreases, as shown in Figure 5. The behavior of the 1095 cm−1 and 1230 cm−1 bands was found to be similar. It should also be noted that, for these fibers, the Raman bands show a broadening that has not been quantified in detail in this study but that indicates local stress distributions within the molecules while the fiber is being stressed.42,57
Raman band shifts in response to stressing and straining N. senegalensis major ampullate silk filaments reeled at the natural walking speed of 23 mms−1, showing band shifts at two key positions in two stress regimes. (a) 970 cm−1 Raman band at low stress, (b) 970 cm−1 band at high stress, (c) 1400 cm−1 Raman band at low stress, and (d) 1400 cm−1 band at high stress. (Note that there is some variation in the zero-stress values as a result of calibration variations.) The lines in the figures are second-order fits to the data to guide the eye.
The linear band shift rates up to 300 MPa from Figure 5 and those of the 1095 cm−1 and 1230 cm−1 bands are listed in Table S2 in the SM and they are consistent with previous reports.39,40,42 In particular, the stress-induced shifts of the 970 cm−1 and 1400 cm−1 Raman bands are new observations.
The initial stress-induced band shift rates of the MaA samples reeled at different speeds are shown in Table S3 in the SM. The overall shapes of the curves are similar to the stress-induced band shifts for the natural-reeling-speed samples: all are linear up to approximately 300 MPa and then become nonlinear. The stress-induced band shift rates are comparable for the 1.9 mms−1, 10.7 mms−1, and 23.1 mms−1 reeling speeds (within the normal spinning range of Nephila spiders49), whereas larger values are found for all Raman bands at the highest and lowest reeling speeds of 128.6 mms−1 and 0.5 mms−1. This implies that, although the MaA filaments have the same chemical structure (Figure 4b), their microstructures and mechanical properties depend upon the processing conditions.47,48
In order to model and understand the deformation behavior of the spider silks studied, a number of different observations relevant to this present study need to be highlighted.
The Raman band shifts are nonlinear with strain.
The Raman band shifts are linear with stress up to about 300 MPa and then become nonlinear.
The Raman bands broaden with stress.
The Raman band shift rates per unit stress depend upon reeling speed and are higher for the highest and lowest reeling speeds.
It is well established that high-performance fibers such as PPTA43,44,45,54 and poly(ethylene terephthalate) (PET)57 have microstructures in which the stress-bearing units are arranged in series (see Figure S4 in the SM). In this situation, during deformation, the stress on the different units in the microstructure of the fiber is the same as the overall fiber stress. This is generally known as the uniform stress series model, which will be considered first.
The analysis of the deformation mechanism in spider silk using Raman spectroscopy has been considered by Brookes et al.58 The starting point of their analysis is that the change in Raman wavenumber, Δν, that occurs during the deformation of high-performance polymer fibers is due to chain stretching and is proportional to the stress on the crystalline reinforcing units, σr. This is a well-established relationship that has been demonstrated for a number of different types of fibers.45 A similar relationship with stress is also found for the IR bands of spider silk.47 Therefore, for an increment of stress,
$$ {\text{d}}\Delta \nu \propto {\text{d}}\sigma_{{\text{r}}} .$$
Since, for the uniform stress model, the stress is uniform throughout the microstructure, σr equals the fiber stress, σf. Hence, dividing by an increment of fiber strain, εf, Equation (1) becomes
$$ \frac{{{\text{d}}\Delta \nu }}{{{\text{d}}\upvarepsilon_{{\text{f}}} }} \propto \frac{{{\text{d}}\sigma_{{\text{f}}} }}{{{\text{d}}\upvarepsilon_{{\text{f}}} }} = E_{{\text{f}}}, $$
where Ef is the fiber's Young's modulus. Figure S5 in the SM shows literature data on the dependence of dΔν/dεf upon fiber modulus for the Raman bands at around 1610 cm−1. These bands result from the stretching of the p-phenylene groups in PPTA and PET fibers, and the data follow the prediction of Equation (2). A consequence of this uniform stress series model is that the Raman band shift per unit stress, dΔν/dσf, for the 1610 cm−1 Raman band in PPTA and PET fibers is constant at around −4.0 cm−1/GPa and independent of their Young's modulus, microstructure, and processing conditions.45
The Raman band shift data for the spider silk fibers in Table S3 in the SM show that the value of dΔν/dσf for each Raman band varies with reeling speed (i.e., with processing conditions). As pointed out by Brookes et al.,58 this implies that the uniform stress series model is not appropriate for the MaA spider silk. Other investigators came to a similar conclusion from the analysis of the deformation of silk fibers with simultaneous x-ray diffraction:59 they found that the crystal modulus of B. mori fibers varied with the degree of crystallinity. Moreover, the classic model of Termonia60 for the mechanical behavior of silk is not a uniform stress model.
In view of these discrepancies, Brookes et al.58 suggested the use of an alternative model, the uniform strain parallel model, shown in Figure 6a, for which the strain on the reinforcing units, εr, is the same as the overall fiber strain, εf. The assumption of uniform strain leads to
$$ \frac{{\sigma_{{\text{r}}} }}{{E_{{\text{r}}} }} = \frac{{\sigma_{{\text{f}}} }}{{E_{{\text{f}}} }}, $$
where Er is the Young's modulus of the reinforcing units. Combining with the general relationship in Equation (1) gives the following relation:
$$ \frac{{{\text{d}}\Delta \nu }}{{{\text{d}}\sigma_{{\text{f}}} }} \propto \frac{{E_{{\text{r}}} }}{{E_{{\text{f}}} }}. $$
(a) The hypothetical microstructure of spider silk consistent with the uniform strain model.60 (b) Band shift rates per unit stress for spider silks reeled at different speeds, which give different microstructures and mechanical properties.60 The line is a fit of the data to Equation (4).
It can be assumed that the modulus of the reinforcement in the fibers, Er, is constant, and Ef can be taken as the initial slope of the stress–strain curves of the silk fibers (Table I). Hence, the uniform strain model predicts that dΔν/dσf should be proportional to the reciprocal of the fiber modulus, 1/Ef. Figure 6b shows a plot of dΔν/dσf as a function of 1/Ef using the data for the 1095 cm−1 Raman band from Tables S2 and S3 in the SM. This band was chosen because it has been assigned to the C–C backbone in the β-sheets, which are thought to be the main reinforcing units in the silk.60
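The proportionality test in Figure 6b amounts to a one-parameter fit through the origin. A minimal sketch is given below; the numbers are placeholders for illustration, not the measured values from Tables S2 and S3.

```python
import numpy as np

# Placeholder data for five reeling speeds: band-shift rates (cm^-1/GPa)
# and fiber moduli (GPa); not the measured values
shift_rate = np.array([-2.1, -2.3, -2.2, -3.0, -3.4])
E_f = np.array([10.0, 9.5, 9.8, 7.0, 6.0])

# Uniform strain model, Eq. (4): dDnu/dsigma_f proportional to 1/E_f;
# zero-intercept least-squares slope
x = 1.0 / E_f
k = np.sum(x * shift_rate) / np.sum(x * x)
print(f"slope = {k:.1f} cm^-1 (proportional to the reinforcement modulus E_r)")

# The uniform stress series model would instead predict a constant shift rate,
# independent of E_f, so a systematic trend with 1/E_f argues against it
print("spread of shift rates:", shift_rate.std())
```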
The main difference between the model in Figure 6a and the uniform stress model in Figure S4 is that the reinforcing units are not lined up in series. This arrangement of the reinforcing units explains why the stress–strain curves of the spider silk are quite different from those of high-performance polymer fibers: the silk has a lower Young's modulus but is much more extensible, leading to outstanding levels of toughness.61 It also explains why crystal modulus values determined from simultaneous x-ray diffraction and deformation experiments on different types of silk do not agree well with computed values,59,62 unlike similar experiments on PPTA.63 Our previous studies of stress-induced Raman band shifts in silk39,42 had suggested that the uniform stress model might be applicable; this was based upon limited data and the observation that the shifts were more linear when plotted against stress than against strain, as has been found with PET fibers.57 The present study has demonstrated that it is necessary to use a range of spider silk fibers processed in different ways (e.g., by varying reeling speed), such that they have different microstructures and mechanical properties, before the behavior can be fully modeled.64,65
It should be pointed out that this modeling is based only upon the analysis of the elastic deformation of the material, but it also provides information upon how the microstructure responds to higher levels of deformation. It is possible to speculate upon how the microstructures will affect the overall toughness of the silk fibers by considering what will happen at high levels of overall strain. In the case of the uniform strain situation (Figure 6a), the strain in the crystals and amorphous regions will be similar. In contrast, in the case of the uniform stress series model (Figure S4), the flexible amorphous chains between the crystals will experience high levels of stress, leading to high local strains and failure at low levels of overall fiber strain. Hence, a fiber microstructure following the uniform stress model, such as PPTA, will lead to fibers with high levels of stiffness, but low strains to failure. In contrast, a microstructure following the uniform strain model, such as spider silk, will lead to fibers with a lower initial stiffness, but higher levels of strain-to-failure and so higher toughness.
We have demonstrated that the mechanical properties of spider silk fibers can only be fully understood by analyzing the behavior of silk processed under different conditions. We found that the stress-induced Raman band shifts enabled the behavior to be interpreted in terms of a uniform strain model, which leads to fibers with high levels of toughness. The derived model is consistent with earlier observations upon silk fibers using x-ray diffraction. The structure–property relationships in the silk can be contrasted with those of high-performance polymer fibers, which can be analyzed in terms of a uniform stress model; this leads to fibers with higher levels of Young's modulus than spider silk, but much lower levels of toughness. Clearly, spiders have evolved to produce fibers with mechanical properties optimized for applications in their environments, where toughness is an essential requirement.
Spider silk
N. senegalensis spiders (Figure 1a) were reared in controlled conditions in a greenhouse. Webs were sprayed with water every few days, and spiders were fed with flies, Musca domestica for adults and Drosophila for juveniles. The silk samples were obtained from the spiders by natural spinning at around 10 mm s−1 and by forced reeling at speeds of 0.5 mm s−1, 1.9 mm s−1, 10.7 mm s−1, 23.1 mm s−1, and 128 mm s−1 from fully awake N. senegalensis spiders using the procedures and apparatus described next.
The spider was restrained using a circular pad and cling film and the spinnerets viewed using an Olympus SZ40 optical microscope with 150 HL universal flexilux light source. The major and minor ampullate silk was collected from the spinneret using tweezers and then separated. The silk to be reeled was then taped to the bobbin and the motor started with the other fibers taped out of the way to avoid mixing of the samples as shown in Figure 1b. All samples were kept at 50 ± 5% humidity and 23 ± 1°C for at least 7 days prior to testing.
Scanning electron microscopy
A Philips field emission gun scanning electron microscope (FEG-SEM) XL30 system operated at 2 kV was used in conjunction with a PC running the standard Philips microscope control software to obtain images of all samples. Specimens of spider silk were prepared by laying single fibers onto an adhesive carbon tab on an aluminum SEM specimen stub. These were then coated with a thin layer of carbon using an Edwards E306A system to avoid charging in the microscope. The SEM and software were calibrated using a standard calibration specimen grid and then used to measure the diameters of spider monofilaments. This enabled the stress values to be calculated when studying fiber deformation. Average diameters were calculated using measurements taken from 10 different fibers of each type, with the diameter of each fiber being measured at 10 different points along its length. The fibers were assumed to have circular cross sections in the determination of fiber cross-sectional area for the calculations of stress.
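A minimal sketch of this stress calculation, assuming the circular cross section stated above, is given below; the force and diameter are example numbers, not measurements from this study.

```python
import math

def fiber_stress_GPa(force_N: float, diameter_um: float) -> float:
    """Engineering stress for a monofilament assumed circular in cross section."""
    area_m2 = math.pi * (diameter_um * 1e-6) ** 2 / 4.0
    return force_N / area_m2 / 1e9

# e.g., a 4-um-diameter fiber carrying 5 mN of force:
print(f"{fiber_stress_GPa(0.005, 4.0):.2f} GPa")  # ~0.40 GPa
```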
Individual fibers were mounted across cardboard windows using slow-setting Araldite epoxy resin. These were left to set for 7 days at room temperature and kept in an atmosphere controlled at 23 ± 1°C and 50 ± 5% relative humidity for at least 3 days before testing. The cards had fiber gauge lengths of 20 mm, 50 mm, or 100 mm. The fiber-mounted window was placed in the grips of the Instron 1121 universal testing machine, and the card on either side of the fiber was carefully cut using a burner to separate the ends. A cross-head speed of 2 mm/min was used for the 20-mm gauge length samples, 5 mm/min for the 50-mm samples, and 10 mm/min for the 100-mm samples. A full-scale load of 10 N was used, and a standard 1-N weight was used to calibrate the instrument prior to and during testing. A minimum of 20 samples per gauge length was tested in controlled conditions of 23 ± 1°C and 50 ± 5% relative humidity. Stress values for each individual fiber were calculated using diameters measured with the calibrated SEM.
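One detail worth noting: the crosshead speeds scale with the gauge lengths, so the nominal strain rate is the same for all three specimen sizes. This rationale is our inference from the numbers above, not a statement in the text; the quick check below makes it explicit.

```python
# Consistency check: crosshead speed / gauge length = nominal strain rate.
for speed_mm_min, gauge_mm in [(2, 20), (5, 50), (10, 100)]:
    rate = speed_mm_min / gauge_mm
    print(f"{gauge_mm} mm gauge at {speed_mm_min} mm/min -> {rate:.2f} per minute")
# All three combinations give a nominal strain rate of 0.10 per minute.
```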
A Renishaw 1000 Raman microprobe system was used to study the samples. Low-power (< 25 mW) near-infrared, helium-neon, and argon-ion laser light sources of wavelengths 785, 633, and 514 nm were employed. The near-infrared laser was found to be the most efficient for the analysis of the spider silk. This lower-energy laser reduces the fluorescence/luminescence often seen with natural fiber samples and therefore helps to obtain a better-defined spectrum. The specimens were viewed using an Olympus BH-2 optical microscope, with the laser focused to a spot of approximately 2 μm on the surface of the sample by means of a 50× microscope objective lens with a numerical aperture of 0.65. Exposure times of 60 to 120 s, depending on the luminescence and strength of signal from each sample at full power, were used to obtain well-defined Raman spectra of the samples.
Spectroscopy and deformation
Spectra were obtained using single-fiber specimens on fiber cards prepared as described previously.37 These were mounted onto a single-fiber stress rig that was connected to a transducer for reading the applied load in grams. The fiber was fixed between the two aluminum blocks using cyanoacrylate adhesive, and the card was burned as in the tensile testing procedure to leave a fixed gauge length sample. The fibers were deformed stepwise up to failure by moving the block with the attached micrometer, accurate to within ± 0.005 mm. Strain was calculated from the change in fiber length divided by the original gauge length. Spectra were obtained from each fiber using the conditions described previously, with the same exposure time and accumulation number used for each stress value of a particular sample. Stress values were calculated using calibrated SEM diameter measurements and cross-sectional areas calculated for that specific fiber or portion of fiber.
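The bookkeeping for each deformation step can be sketched as follows: the transducer load in grams is converted to newtons before the stress is computed, and strain is the micrometer displacement over the original gauge length. All numbers are illustrative, and the gram-to-newton factor assumes standard gravity.

```python
import math

G_TO_N = 9.80665e-3  # newtons per gram-force, assuming standard gravity

def step_stress_strain(load_g, delta_L_mm, gauge_mm, diameter_um):
    """Stress (GPa) and strain for one deformation step of a circular fiber."""
    force_N = load_g * G_TO_N
    area_m2 = math.pi * (diameter_um * 1e-6) ** 2 / 4.0
    stress_GPa = force_N / area_m2 / 1e9
    strain = delta_L_mm / gauge_mm
    return stress_GPa, strain

# e.g., 1.5 g on the transducer, 0.5 mm displacement, 20 mm gauge, 4 um fiber:
stress, strain = step_stress_strain(1.5, 0.5, 20.0, 4.0)
print(f"stress = {stress:.2f} GPa, strain = {strain:.3f}")
```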
We note that this uniform strain parallel model for silk fibers is mathematically similar to the uniform strain Voigt model for a composite with long, uniaxially aligned fibers (rule of mixtures). However, a long-fiber-reinforced composite and silk fibers have quite different microstructures, as shown in Figure 6a, and the only similarity is that the two components in each model are both subject to uniform strain.
F. Vollrath, D. Porter, C. Holland, The science of silks. MRS Bull. 38, 73 (2013)
F. Vollrath, D. Porter, C. Holland, There are many more lessons still to be learned from spider silks. Soft Matter 7, 9595 (2011)
C. Guo, C. Li, X. Mu, D.L. Kaplan, Engineering silk materials: From natural spinning to artificial processing. Appl. Phys. Rev. 7, 011313 (2020)
S.J. Blamires, T.A. Blackledge, I.-M. Tso, Physicochemical property variation in spider silk: Ecology, evolution, and synthetic production. Annu. Rev. Entomol. 62, 443 (2017)
D. Ebrahimi, O. Tokareva, N.G. Rim, J.Y. Wong, D.L. Kaplan, M.J. Buehler, Silk–Its mysteries, how it is made, and how it is used. ACS Biomater. Sci. Eng. 1, 864 (2015)
C. Holland, K. Numata, J. Rnjak-Kovacina, F.P. Seib, The biomedical use of silk: Past, present, future. Adv. Healthc. Mater. 8, 1800465 (2019)
A.D. Malay, K. Arakawa, K. Numata, Analysis of repetitive amino acid motifs reveals the essential features of spider dragline silk proteins. PLoS ONE 12, e0183397 (2017)
J. Bauer, T. Scheibel, Conformational stability and interplay of helical N- and C-terminal domains with implications on major ampullate spidroin assembly. Biomacromolecules 18, 835 (2017)
J. Bauer, D. Schaal, L. Eisoldt, K. Schweimer, S. Schwarzinger, T. Scheibel, Acidic residues control the dimerization of the N-terminal domain of black widow spiders' major ampullate spidroin 1. Sci. Rep. 6, 34442 (2016)
S.J. Blamires, C.-P. Liao, C.-K. Chang, Y.-C. Chuang, C.-L. Wu, T.A. Blackledge, H.-S. Sheu, I.-M. Tso, Mechanical performance of spider silk is robust to nutrient-mediated changes in protein composition. Biomacromolecules 16, 1218 (2015)
Y. Liu, A. Sponner, D. Porter, F. Vollrath, Proline and processing of spider silks. Biomacromolecules 9, 116 (2008)
H.C. Craig, D. Piorkowski, S. Nakagawa, M.M. Kasumovic, S.J. Blamires, Meta-analysis reveals materiomic relationships in major ampullate silk across the spider phylogeny. J. R. Soc. Interface 17, 20200471 (2020)
M. Wojcieszak, G. Gouadec, A. Percot, P. Colomban, Micromechanics of fresh and 30-year-old Nephila inaurata madagascariensis dragline silk. J. Mater. Sci. 52, 11759 (2017)
R. Madurga, G.R. Plaza, T.A. Blackledge, G.V. Guinea, M. Elices, J. Pérez-Rigueiro, Material properties of evolutionary diverse spider silks described by variation in a single structural parameter. Sci. Rep. 6, 18991 (2016)
M. Marhabaie, T.C. Leeper, T.A. Blackledge, Protein composition correlates with the mechanical properties of spider (Argiope trifasciata) dragline silk. Biomacromolecules 15, 20 (2014)
S.J. Blamires, C.-L. Wu, T.A. Blackledge, I.-M. Tso, Post-secretion processing influences spider silk performance. J. R. Soc. Interface 9, 2479 (2012)
D. Porter, J. Guan, F. Vollrath, Spider silk: Super material or thin fibre? Adv. Mater. 25, 1275 (2013)
A. Koeppel, C. Holland, Progress and trends in artificial silk spinning: A systematic review. ACS Biomater. Sci. Eng. 3, 226 (2017)
D. Liu, A. Tarakanova, C.S. Hsu, M. Yu, S. Zheng, L. Yu, J. Liu, Y. He, D.J. Dunstan, M.J. Buehler, Spider dragline silk as torsional actuator driven by humidity. Sci. Adv. 5, eaau9183 (2019)
O. Emile, A. Le Floch, F. Vollrath, Shape memory in spider draglines. Nature 440, 621 (2006)
B. Mortimer, A spider's vibration landscape: Adaptations to promote vibrational information transfer in orb webs. Integr. Comp. Biol. 59, 1636 (2019)
E.K. Tillinghast, M.A. Townley, "Silk Glands of the Araneid Spiders," in Silk Polymers, ACS Symposium Series 544, D. Kaplan, W.W. Adams, B. Farmer, C. Viney, Eds. (American Chemical Society, Washington, DC, 1994), p. 29
Z. Shao, F. Vollrath, J. Sirichaisit, R.J. Young, Analysis of spider silk in native and supercontracted states using Raman spectroscopy. Polymer 40, 2493 (1999)
P. Monti, G. Freddi, A. Bertouzza, N. Kasai, M. Tsukada, Raman spectral studies of silk fibroin. J. Raman Spectrosc. 29, 297 (1995)
P. Monti, P. Taddei, G. Freddi, T. Asakura, M. Tsukada, Raman spectroscopic characterization of Bombyx mori silk fibroin: Raman spectrum of silk I. J. Raman Spectrosc. 32, 103 (2001)
H.G.M. Edwards, D.W. Farwell, Raman spectral studies of silk. J. Raman Spectrosc. 26, 901 (1995)
M.-E. Rousseau, T. Lefèvre, L. Beaulieu, T. Asakura, M. Pézolet, Study of protein conformation and orientation in silkworm and spider silk fibers using Raman microspectroscopy. Biomacromolecules 5, 2247 (2004)
T. Lefèvre, M.-E. Rousseau, M. Pézolet, Protein secondary structure and orientation in silk as revealed by Raman spectromicroscopy. Biophys. J. 92, 2885 (2007)
M. Tsukada, G. Freddi, P. Monti, A. Bertoluzza, N. Kasai, Structure and molecular conformation of tussah silk fibroin films: Effect of methanol. J. Polym. Sci. B 33, 1995 (1995)
P. Monti, G. Freddi, A. Bertoluzza, N. Kasai, M. Tsukada, Raman spectroscopic studies of silk fibroin from Bombyx mori. J. Raman Spectrosc. 29, 297 (1998)
D.B. Gillespie, C. Viney, P. Yager, "Raman Spectroscopic Analysis of the Secondary Structure of Spider Silk Fiber," in Silk Polymers, ACS Symposium Series 544, D. Kaplan, W.W. Adams, B. Farmer, C. Viney, Eds. (American Chemical Society, Washington, DC, 1994), p. 155
P. Colomban, H.M. Dinh, J. Riand, L.C. Prinsloo, B. Mauchamp, Nanomechanics of single silkworm and spider fibres: A Raman and micromechanical in situ study of the conformation change with stress. J. Raman Spectrosc. 39, 1749 (2008)
M. Preghenella, G. Pezzotti, C. Migliaresi, Comparative Raman spectroscopic analysis of orientation in fibers and regenerated films of Bombyx mori silk fibroin. J. Raman Spectrosc. 38, 522 (2007)
M.E. Rousseau, L. Beaulieu, T. Lefèvre, J. Paradis, T. Asakura, M. Pézolet, Characterization by Raman microspectroscopy of the strain-induced conformational transition in fibroin fibers from the silkworm Samia cynthia ricini. Biomacromolecules 7, 2512 (2006)
S. Zheng, G. Li, W. Yao, T. Yu, Raman spectroscopic investigation of the denaturation process of silk fibroin. Appl. Spectrosc. 43, 1269 (1989)
T. Lefèvre, F. Paquet-Mercier, J.-F. Rioux-Dubé, M. Pézolet, Structure of silk by Raman spectromicroscopy: From the spinning glands to the fibers. Biopolymers 97, 322 (2012)
Z. Shao, R.J. Young, F. Vollrath, The effect of solvents on spider silk studied by mechanical testing and single fibre Raman spectroscopy. Int. J. Biol. Macromol. 24, 295 (1999)
K.A. Trabbic, P. Yager, Comparative structural characterization of naturally and synthetically spun fibers of Bombyx mori fibroin. Macromolecules 31, 462 (1998)
J. Sirichaisit, R.J. Young, F. Vollrath, Molecular deformation in spider dragline silk subjected to stress. Polymer 41, 1223 (2000)
J. Sirichaisit, UMIST, Manchester (2000).
Y. Takahashi, "Crystal Structure of Silk of Bombyx mori," in Silk Polymers, ACS Symposium Series 544, D. Kaplan, W.W. Adams, B. Farmer, C. Viney, Eds. (American Chemical Society, Washington, DC, 1994), p. 169
J. Sirichaisit, R.J. Young, F. Vollrath, V. Brookes, Analysis of structure/property relationships in silkworm (Bombyx mori) and spider dragline (Nephila edulis) silks using Raman spectroscopy. Biomacromolecules 4, 387 (2003)
R.J. Young, D. Lu, R.J. Day, Raman spectroscopy of Kevlar fibers during deformation: caveat emptor. Polym. Int. 24, 71 (1991)
R.J. Young, D. Lu, R.J. Day, W.F. Knoff, H.A. Davis, Relationship between structure and mechanical properties for aramid fibers. J. Mater. Sci. 27, 5431 (1992)
R.J. Young, Monitoring deformation processes in high-performance fibres using Raman spectroscopy. J. Text. Inst. 86, 360 (1995)
A.M. Anton, W. Kossack, C. Gutsche, R. Figuli (Ene), P. Papadopoulos, J. Ebad-Allah, C. Kuntscher, F. Kremer, Pressure-dependent FTIR-spectroscopy on the counterbalance between external and internal constraints in spider silk of Nephila pilipes. Macromolecules 46, 4919 (2013)
R. Ene, P. Papadopoulos, F. Kremer, Combined structural model of spider dragline silk. Soft Matter 5, 4568 (2009)
P. Papadopoulos, J. Sölter, F. Kremer, Hierarchies in the structural organization of spider silk—A quantitative model. Colloid Polym. Sci. 287, 231 (2008)
F. Vollrath, B. Madsen, Z.Z. Shao, The effect of spinning conditions on the mechanics of a spider's dragline silk. Proc. R. Soc. Lond. Ser. B 268, 2339 (2001)
C. Riekel, B. Madsen, D. Knight, F. Vollrath, X-ray diffraction on spider silk during controlled extrusion under a synchrotron radiation x-ray beam. Biomacromolecules 1, 622 (2000)
C. Holland, K. O'Neil, F. Vollrath, C. Dicko, Distinct structural and optical regimes in natural silk spinning. Biopolymers 97, 368 (2012)
B. Mortimer, J. Guan, C. Holland, D. Porter, F. Vollrath, Linking naturally and unnaturally spun silks through the forced reeling of Bombyx mori. Acta Biomater. 11, 247 (2015)
M.A. Colgin, R.V. Lewis, Spider minor ampullate silk proteins contain new repetitive sequences and highly conserved non-silk-like "spacer regions." Protein Sci. 7, 667 (1998)
C.Y. Hayashi, N.H. Shipley, R.V. Lewis, Hypotheses that correlate the sequence, structure, and mechanical properties of spider silk proteins. Int. J. Biol. Macromol. 24, 271 (1999)
J. Dionne, T. Lefèvre, P. Bilodeau, M. Lamarre, M. Auger, A quantitative analysis of the supercontraction-induced molecular disorientation of major ampullate spider silk. Phys. Chem. Chem. Phys. 19, 31487 (2017)
M. Gauthier, J. Leclerc, T. Lefèvre, S.M. Gagné, M. Auger, Effect of pH on the structure of the recombinant C-terminal domain of Nephila clavipes dragline silk protein. Biomacromolecules 15, 4447 (2014)
W.Y. Yeh, R.J. Young, Molecular deformation processes in aromatic high modulus polymer fibres. Polymer 40, 857 (1999)
V.L. Brookes, R.J. Young, F. Vollrath, Deformation micromechanics of spider silk. J. Mater. Sci. 43, 3728 (2008)
A. Sinsawat, S. Putthanarat, Y. Magoshi, R. Pachter, R.K. Eby, X-ray diffraction and computational studies of the modulus of silk (Bombyx mori). Polymer 43, 1323 (2002)
Y. Termonia, Molecular modeling of spider silk elasticity. Macromolecules 27, 7378 (1994)
F. Vollrath, D. Porter, Spider silk as a model biomaterial. Appl. Phys. A 82, 205 (2005)
T. Nishino, K. Nakamae, Elastic modulus of the crystalline regions of Tussah silk. Polymer 33, 1328 (1992)
M.A. Montes-Moran, R.J. Davies, C. Riekel, R.J. Young, Deformation studies of single rigid-rod polymer-based fibres, Part 1, Determination of crystal modulus. Polymer 43, 5219 (2002)
F. Fraternali, N. Stehling, A. Amendola, B.A. Tiban Anrango, C. Holland, C. Rodenburg, Tensegrity modelling and the high toughness of spider dragline silk. Nanomaterials 10, 1510 (2020)
D. López Barreiro, J. Yeo, A. Tarakanova, F.J. Martin-Martinez, M.J. Buehler, Multiscale modeling of silk and silk-based biomaterials—A review. Macromol. Biosci. 19, 1800253 (2019)
The authors are grateful to V.L. Brookes for undertaking some of the experimental investigations reported in this study. We thank the UK Research and Innovation Council, the Royal Society, the Natural Science Foundation of China, the European Research Council, the EU-Horizon 2020 Program and the Air Force Office of Scientific Research for funding our research and collaborations over the years, including this study.
National Graphene Institute, and Department of Materials, The University of Manchester, Manchester, UK
Robert J. Young
Department of Materials Science and Engineering, The University of Sheffield, Sheffield, UK
Chris Holland
Department of Macromolecular Science, and The Lab of Advanced Materials, Fudan University, Shanghai, China
Zhengzhong Shao
Department of Zoology, University of Oxford, Oxford, UK
Fritz Vollrath
Correspondence to Fritz Vollrath.
Supplementary file 1 (DOCX 342 kb)
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Young, R.J., Holland, C., Shao, Z. et al. Spinning conditions affect structure and properties of Nephila spider silk. MRS Bulletin 46, 915–924 (2021). https://doi.org/10.1557/s43577-021-00194-1
Accepted: 13 September 2021
Stress/strain relationship | CommonCrawl |
A randomized, controlled field study to assess the efficacy and safety of lotilaner (Credelio™) in controlling fleas in client-owned cats in Europe
Daniela Cavalleri ORCID: orcid.org/0000-0001-8489-21461,
Martin Murphy1,
Wolfgang Seewald1 &
Steve Nanchen1
Lotilaner is a new isoxazoline developed as an oral ectoparasiticide for cats and dogs. Its safety, rapid onset of action, and sustained speed of flea and tick kill for a minimum of one month after administration were demonstrated in a number of laboratory studies in cats.
This study was performed to demonstrate the efficacy and safety of lotilaner flavored chewable tablets for cats (Credelio™, Elanco) in controlling fleas under field conditions in European countries.
Seventeen veterinary practices in France and Spain, located in high flea prevalence regions, participated in the study. Households with a maximum of three cats and two dogs were randomized 2:1 to a lotilaner (minimum dose rate 6 mg/kg) or a topical fipronil/(S)-methoprene combination (Frontline Combo® Spot-on Cats, Merial) group (administered according to label). In each household, efficacy against fleas and flea allergy dermatitis (FAD) signs were assessed in one primary cat (bearing a minimum of five fleas on Day 0), while safety was evaluated in all cats. There were 121 households included in the lotilaner group and 61 in the fipronil/(S)-methoprene group. Treatments were administered by the cats' owners on Day 0. Flea counts and FAD assessments were made on Days 0, 14, and 28. Efficacy calculations were based on geometric mean percent reductions of live flea counts versus baseline pre-treatment counts.
Lotilaner efficacy was 97.2 and 98.1% on Days 14 and 28, respectively. Corresponding efficacy for fipronil/(S)-methoprene was 48.3 and 46.4%. Lotilaner was superior to fipronil/(S)-methoprene at all post-Day 0 assessments and over the whole study period (P < 0.0001). At every post-administration evaluation, at least 81% of lotilaner-treated cats were flea-free as opposed to 25% in the fipronil/(S)-methoprene group. Lotilaner improved or eliminated clinical signs of FAD, including pruritus. Both products were well tolerated.
Under field conditions in Europe, lotilaner flavored chewable tablets for cats displayed an efficacy against fleas higher than 97%; clinical signs of FAD were improved or eliminated. Lotilaner tablets were safe and provided superior flea control to fipronil/(S)-methoprene.
The isoxazolines are the newest class of parasiticide compounds marketed for companion animals. These agents differ from historical parasiticides, e.g. topically administered compounds, in having a new mode of action [1]. Lotilaner, a pure enantiomer of the isoxazoline class, is the newest compound approved for the treatment of flea and tick infestations in dogs (Credelio™ chewable tablets for dogs; Elanco Europe Ltd., Greenfield, IN, USA) [2]. This broad-spectrum parasiticide is a potent inhibitor of the gamma-aminobutyric acid-gated chloride channels, resulting in rapid death of ticks and fleas after oral administration to dogs [3,4,5].
Other isoxazolines previously approved for the treatment of flea and tick infestations in dogs since 2014 are afoxolaner, fluralaner and sarolaner. These compounds are available as oral and topical (fluralaner only) formulations. Fluralaner is the first isoxazoline that was approved in cats, formulated as a solution for topical application (Bravecto® spot-on solution for cats; Merck Animal Health, Madison, NJ, USA) [6]. Currently there is no isoxazoline-containing ectoparasiticide product for oral administration available for the treatment of fleas and tick infestations in cats.
During market research conducted as part of the development of lotilaner for cats (unpublished data), pet owners expressed specific, negative emotions related to the administration of topical spot-on products to cats and the disruption that occurs in the bond between the owner and their cat when topical products are applied. Many of these owners responded positively to the idea of an easy-to-give, flavoured, oral tick and flea option for cats. A small, flavoured, cat-friendly oral tablet would, therefore, be a welcome novel product, filling the gap in tick and flea control in cats.
In a number of pivotal laboratory studies, the safety and efficacy of lotilaner flavored chewable tablets for cats (Credelio™, Elanco) against fleas (Ctenocephalides felis) and ticks (Ixodes ricinus) for 1 month, following oral administration at the minimum dose rate of 6.0 mg/kg, were demonstrated [7, 8].
A pivotal tolerance study in 8-week-old kittens had shown lotilaner tablets to be safe at doses up to 130 mg lotilaner/kg (actual high-dose levels of 131.24 mg/kg for males and 131.30 mg/kg for females) for monthly treatment over 8 months [9].
In this study, the authors evaluated the efficacy and safety of lotilaner administered once, at the dose rates intended for the marketed product (6.0 to 22.9 mg/kg body weight), to cats naturally infested with fleas under field conditions in Europe. A fipronil/(S)-methoprene combination (Frontline Combo® Spot-on Cats, Merial, Lyon, France) was used as the positive control. The effect of the product on clinical signs associated with flea allergy dermatitis (FAD) was also evaluated.
This assessor-blinded, randomized, positive-controlled, non-inferiority, multicentre field trial was conducted according to the study authorizations issued by the Agencia Española de Medicamentos y Productos Sanitarios (Spanish regulatory authorities) and the Agence Française de Sécurité Sanitaire des aliments (AFSSA) (French regulatory authorities), and in compliance with the applicable regulatory guidelines, which were current at the time the study was performed [10,11,12,13,14,15].
Seventeen veterinary practices in France and Spain participated in the study. Sites were selected in areas with a known high prevalence of fleas. Households with a maximum of three cats and two dogs were eligible to participate, provided that cats and dogs did not regularly or frequently contact each other or share resting places, for the whole duration of the study.
Cats aged ≥ 8 weeks and weighing ≥ 1 kg were eligible for enrolment. At least one cat from each household (primary cat) had to be found to be infested with ≥ 5 fleas prior to treatment. All cats were required to be clinically healthy or with conditions judged not to interfere with the study by the study veterinarian. Inclusion of cats showing signs of FAD was encouraged.
Cats with known hypersensitivity to the active ingredients and/or excipients of the investigational veterinary product (Credelio™: lotilaner chewable tablets for cats, Elanco, Greenfield, IN, USA) or the control product (Frontline Combo® Spot-on Cat, Merial, Duluth, Georgia) were not eligible for inclusion in the study. Cats were further excluded for pre-treatment with other ectoparasiticide compounds, pregnancy or lactation, and planned routine surgical procedures (until the cat had fully recovered from the intervention and no influence on the study procedures was expected). Other exclusion criteria were plans to use the animal for breeding within 4 months of treatment, convalescence from any serious condition, and pre-existing medical and/or surgical conditions other than flea infestation and FAD (unless such conditions did not interfere with the suitability for the study treatments, or were mild or chronic, stable, and under control, in the judgment of the examining veterinarian). During the study, animals could be withdrawn due to concomitant disease, death or euthanasia, or serious adverse events (SAEs) not compatible with the study. Early withdrawal could also result from non-compliance with the protocol, owner decision, or pre-termination of the study as decided by the sponsor.
All animals stayed with their owners throughout the study. The participating households were not permitted to use any environmental treatments to control flea infestations during this period. All animals were provided with food and water per the owners' usual practices.
Randomisation and treatment
At each site, cats were randomised per household in the sequence of inclusion according to the random treatment allocation plan. All cats from the same household were randomized to the same treatment. The random treatment allocation plan was created using a block design and a 2:1 ratio (lotilaner:fipronil/(S)-methoprene). The target number of enrolled subjects for efficacy analysis (primary cats) was 180, divided 2:1 between lotilaner-treated subjects and fipronil/(S)-methoprene-treated subjects. In each household, there could be one primary cat only; any other cats (up to two) in the same household were supplementary cats, treated with the same product as the primary cat but only assessed for safety.
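A minimal sketch of a blocked 2:1 allocation of the kind described is given below: each block of three households carries two lotilaner and one fipronil/(S)-methoprene assignment in random order. The block size of three is an assumption for illustration; the study's actual plan-generation procedure is not detailed here.

```python
import random

def allocation_sequence(n_households, seed=1):
    """Blocked 2:1 randomization; every block of 3 holds 2 lotilaner : 1 control."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < n_households:
        block = ["lotilaner", "lotilaner", "fipronil/(S)-methoprene"]
        rng.shuffle(block)  # randomize the order within each block
        seq.extend(block)
    return seq[:n_households]

print(allocation_sequence(6))
```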
Treatment was administered once, on Day 0 of the study by the animals' owners. All animals in Group 1 received Credelio™ and all animals in Group 2 received Frontline Combo® Spot-on Cat. Credelio™ was administered orally within 30 min following feeding. The tablets (strengths: 12 or 48 mg lotilaner) were administered based on each cat's individual body weight to achieve a minimum dose rate of 6.0 mg/kg and a maximum of 22.9 mg/kg. Frontline Combo® Spot-on Cat (fipronil 50 mg/(S)-methoprene 60 mg) was administered topically per the manufacturer's product label, applied as a single 0.5 ml pipette regardless of body weight. Dogs (maximum of two per household) and other animals in the household posing a risk of flea transmission to cats were to be treated with a suitable oral ectoparasiticide efficacious against fleas.
Study assessments
This study evaluated the efficacy against fleas and safety of lotilaner chewable tablets compared with Frontline Combo® Spot-on Cats, both administered once, to cats naturally infested with fleas. The effect of the product on clinical signs associated with FAD was also evaluated. All efficacy analyses were performed for the primary cats whereas safety analyses were performed for all cats enrolled in the study.
The primary efficacy criterion was the average efficacy of lotilaner compared with fipronil/(S)-methoprene over the entire treatment period, based on flea counts at each visit compared with baseline flea counts, averaged over all visits, in a non-inferiority test. The secondary efficacy criteria were the efficacy of lotilaner compared with the control product at each visit, again based on the comparison between post-treatment and baseline flea counts, and the assessment of FAD signs for primary cats with FAD on Day 0. All efficacy analyses were performed at 14 (± 2) and 28 (± 2) days post-treatment.
A full body flea count was performed for each cat with a flea comb per the procedure defined in the protocol. Each cat was combed for at least 10 min, and combing continued for another 5 min after the last flea was found. In the event that more than 100 fleas were counted and the counting was not finished, the total number of fleas was recorded as > 100. All cats (primary and secondary) were assessed for safety based on health observations for 28 (± 2) days post-treatment. In addition, primary cats with clinical signs of FAD were assessed for signs of FAD on Days 0, 14 (± 2), and 28 (± 2). FAD signs (alopecia, crusts, erythema, hyperpigmentation, miliary dermatitis, eosinophilic granuloma, eosinophilic plaque, eosinophilic ulcer, papules, pruritus and scales) were classified as absent, mild, moderate, or severe and assigned a score from 0 (absent) to 3 (severe) by the Investigator. For the sign "pruritus", the scoring was done as follows: absent, no scratching; mild, occasionally scratching; moderate, frequently scratching and/or biting itself; and severe, intense scratching/biting itself. Animals were observed for AEs (adverse events) for the whole duration of the study.
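The scoring just described lends itself to a simple tally. The sketch below assumes (our assumption, not stated explicitly in the protocol text) that the total FAD score is the sum of the per-sign grades, 0 (absent) through 3 (severe).

```python
FAD_SIGNS = [
    "alopecia", "crusts", "erythema", "hyperpigmentation",
    "miliary dermatitis", "eosinophilic granuloma", "eosinophilic plaque",
    "eosinophilic ulcer", "papules", "pruritus", "scales",
]

def total_fad_score(grades):
    """Sum per-sign grades (0 = absent, 1 = mild, 2 = moderate, 3 = severe)."""
    return sum(grades.get(sign, 0) for sign in FAD_SIGNS)

print(total_fad_score({"pruritus": 2, "crusts": 1, "erythema": 1}))  # -> 4
```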
The environmental pressure of flea infestation at the sites where the trial was conducted was also evaluated throughout the study, based on the estimated overall number of animals (cats and dogs) presented at the veterinary practice or clinic and diagnosed with a flea infestation, as well as the estimated number of products supplied for flea prophylaxis and/or treatment, in the last 7 days prior to each study visit of a cat.
All study animals were divided into the following three analysis sets: intent-to-treat (ITT) efficacy population, comprising all subjects that were randomized to a treatment and that presented with ≥ 5 fleas at inclusion (one cat per household, primary cat); per protocol (PP) efficacy population, comprising subjects (primary cat) without major protocol deviations; safety population, comprising all subjects that were randomized to a treatment and received one dose of lotilaner or the fipronil/(S)-methoprene (primary and supplementary cats).
The Clinsight® Electronic Data Capture System was used for data collection. All calculations were performed using SAS® version 9.2 (SAS Institute Inc., Cary, NC, USA). The statistical hypotheses were tested on a 2-sided level of significance of 0.05. P-values ≤ 0.05 were considered significant.
For the demographics and related variables, such as sex, age, body weight, breed, hair length, and the time the animal spends indoors/outdoors, summary statistics and/or frequencies were calculated, and the two groups were compared with a non-parametric test (Kruskal-Wallis, Mann-Whitney, or Fisher's exact test, depending on the parameter).
Efficacy endpoints were assessed in the two efficacy populations (ITT and PP). Percent efficacy was defined in relation to baseline values, i.e.
$$ \%\text{Efficacy} = 100 \times \frac{\text{Flea count on Day 0} - \text{Flea count on actual day}}{\text{Flea count on Day 0}} $$
Flea counts recorded as "higher than 100" were assigned a nominal value of 101 for the purposes of statistical analysis. Flea counts and reductions of flea counts versus baseline were analysed statistically. Summary statistics including arithmetic and geometric means, minimum, maximum, and median were provided for all parameters of interest. Treatment groups were compared by analysis of covariance (ANCOVA) methods, on the original scale or after log-transformation. To avoid taking the log of zero, one (1) was added to all flea counts before log-transformation. In the ANCOVA, the number of cats per household was used as a covariate. Non-inferiority was claimed when the 2-sided 95% confidence interval (CI) for the ratio of flea counts for lotilaner to those for Frontline Combo® Spot-on Cats was within the interval [0, 1/0.80] = [0, 1.25]. This indicated (with 97.5% confidence) that flea counts with lotilaner were not higher than flea counts with Frontline Combo® Spot-on Cats, up to a non-inferiority margin of 20%.
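A sketch of the efficacy calculation on the geometric-mean scale follows, using the two conventions stated above: counts recorded as ">100" are recoded to 101 before analysis, and 1 is added to every count before log-transformation. The counts below are placeholders, not study data, and the ANCOVA and non-inferiority machinery is omitted.

```python
import numpy as np

def geometric_mean_count(counts):
    """Geometric mean of flea counts, using the +1 offset applied before logs."""
    counts = np.asarray(counts, dtype=float)
    return np.exp(np.mean(np.log(counts + 1.0))) - 1.0

def pct_efficacy(day0_counts, day_t_counts):
    g0 = geometric_mean_count(day0_counts)
    gt = geometric_mean_count(day_t_counts)
    return 100.0 * (g0 - gt) / g0

day0 = [12, 101, 7, 30, 9]   # ">100" already recoded to 101
day28 = [0, 3, 0, 1, 0]
print(f"{pct_efficacy(day0, day28):.1f}% reduction vs baseline")
```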
Safety endpoints were assessed in the safety population on Days 0, 14 (± 2 days; primary cats only) and Day 28 (± 2 days; all animals). The cats were observed for AEs, SAEs, and changes in body weight. Summary statistics including arithmetic and geometric means, minimum, maximum, and median were calculated for all parameters of interest. Treatment groups were compared by analysis of variance (ANOVA) methods; body weight data were log-transformed in order to improve normality. Adverse events were counted in each group and classified using the VeDDRA coding system. The relationship with the product administration was assessed according to the ABON classification (A, probable; B, possible; O, unclassified/unknown; N, unlikely/unrelated) both by the examining veterinarian and the sponsor representative.
French translation of the Abstract is available in Additional file 1.
A total of 320 cats (182 primary and 138 secondary), from 182 households, were randomised to either treatment at 17 veterinary practices in France and Spain. The majority of primary cats (n = 83; 46%) belonged to households where only one cat was included in the study, followed by households with two cats enrolled (n = 60; 33%), and by households with three cats (n = 39; 21%).
Efficacy evaluation was performed on primary cats only, in the ITT and PP populations. The ITT population comprised all primary cats included in the study (n = 182; 121 cats in the lotilaner group and 61 in the control group). The PP population comprised 178 primary cats (120 and 58 in the lotilaner and control groups, respectively), as four animals had deviations that prevented their inclusion in the PP analysis. One cat was excluded for one visit only (Day 14). All 320 cats (primary and secondary) were analysed for safety, comprising 217 cats in the lotilaner group and 103 cats in the control group.
The efficacy results obtained in primary cats were almost identical for the ITT population (n = 182 cats) and the PP population (n = 178 cats); therefore, only efficacy results of the ITT population are presented here.
Both treatment groups from the ITT population were homogeneous for all variables analysed prior to treatment administration: sex (Z = 0.254, P = 0.8741); age (Z = 0.452, P = 0.6510); body weight (Z = 0.267, P = 0.7896); breed (χ2 = 12.30, df = 7, P = 0.0911); hair length (Z = 0.991, P = 0.3216); lifestyle (mostly indoors, mostly outdoors, indoors and outdoors; χ2 = 2.66, df = 2, P = 0.2650); number of cats in the household (Z = 0.900, P = 0.3680); and flea counts (t(178) = 0.50, P = 0.6159) (Table 1). Results for the safety population were similar, except for the breed variable (χ2 = 15.34, df = 7, P = 0.0319), with more European cats in the lotilaner group (23%) compared with the fipronil/(S)-methoprene group (13%). Seven different pure breeds of cats were included in the ITT population, of which the most common were European (n = 38; 21%), Persian (n = 6; 3%), and Siamese (n = 4; 2%). All cats enrolled in the study were successfully dosed by their owners.
Table 1 Demographics and baseline characteristics of the enrolled animals (ITT population)
One cat from each treatment group was prematurely withdrawn from the study: in the lotilaner-treated group, a supplementary cat died on Day 23 after being run over by a car; in the fipronil/(S)-methoprene-treated group, a primary cat died on Day 3 following presentation with clinical signs of dehydration and severe dyspnoea.
Flea efficacy assessment
The average arithmetic (± standard deviation, SD) and geometric mean flea counts over the study period were, respectively, 0.41 and 0.19 in the lotilaner-treated group and 8.87 and 3.59 in the fipronil/(S)-methoprene-treated group. The arithmetic and geometric mean flea counts over time are displayed in Table 2. The geometric mean flea counts over time are also shown in Fig. 1.
Table 2 Flea count data for each treatment group
Fig. 1 Geometric mean flea counts of lotilaner- and fipronil/(S)-methoprene-treated cats at each assessment time-point. The difference between groups was significant: t(176) ≥ 11.5, P < 0.0001
The overall geometric mean percentage flea reduction for the study period was 97.7% in cats treated with lotilaner compared with a reduction of 47.4% for cats treated with fipronil/(S)-methoprene. Percentage flea reductions for each assessment time-point are presented in Fig. 2.
Fig. 2 Geometric mean percent flea reduction of lotilaner- and fipronil/(S)-methoprene-treated cats at each assessment time-point. The difference between groups was significant: P < 0.0001 (t(176) = 7.96 and t(176) = 8.13 on Days 14 and 28, respectively)
ANCOVA analysis of post-treatment flea counts and percentage reductions in flea counts, including 95% confidence intervals (CIs), showed significant reductions in the lotilaner-treated cats on Days 14 and 28 and over the whole study (P < 0.0001) compared with the control-product-treated animals. Analysis of the CIs for flea counts revealed not only that non-inferiority to fipronil/(S)-methoprene could be shown for lotilaner (i.e. the upper confidence limit was below 1.25), but also that superiority could be demonstrated at all time-points and for the entire study period (P < 0.0001).
In the lotilaner group, 81.0% and 81.8% of cats were flea-free on Days 14 and 28, respectively. In the fipronil/(S)-methoprene group, 25.0% of cats were flea-free at the same time-points (Table 3).
Table 3 Number and percentage of flea-free (cured) cats at each time-point
FAD assessment
Assessment of FAD signs for primary cats with FAD on Day 0 was performed on ten cats in the lotilaner-treated group and six cats in the fipronil/(S)-methoprene-treated group. Baseline analysis of clinical signs of FAD prior to start of treatment administration did not reveal any statistically significant differences between treatment groups, thus confirming that they were balanced at the beginning of the study. All clinical signs associated with FAD could be evaluated during the study except eosinophilic granuloma, which was not observed in any of the study animals evaluated for FAD.
In the lotilaner group, there was a significant decrease in the mean total FAD score on Days 14 and 28 (Wilcoxon paired-sample test: S = 22.5, P = 0.0039 for Day 14; S = 27.5, P = 0.0020 for Day 28); the score declined from 5.2 on Day 0 to 1.8 by Day 14 and 1.3 at the end of the study. In the fipronil/(S)-methoprene group, the mean total FAD score decreased from 6.8 on Day 0 to 6.3 and 4.8 on Days 14 and 28, respectively; this decrease was not statistically significant (S = 4.5 and 6.5, P = 0.41 and 0.25 on Days 14 and 28, respectively), although, given the low number of animals, statistical significance could not be definitively assessed (Fig. 3).
Fig. 3 FAD mean scores of lotilaner- and fipronil/(S)-methoprene-treated cats at each assessment time-point. Statistically significant difference from baseline: *S ≥ 22.5, P ≤ 0.0039
Pruritus mean scores followed the same pattern as mean total FAD scores, decreasing significantly in the lotilaner group from 1.8 on Day 0 to 0.6 and 0.4 on Days 14 and 28, respectively (S = 22.5, P = 0.0039 for Day 14; S = 27.5, P = 0.0020 for Day 28). In the fipronil/(S)-methoprene group, the decrease from 1.8 (Day 0) to 1.5 (Days 14 and 28) was not significant (S = 2.5 and 1.5, P = 0.6250 and 0.7500, respectively) (Fig. 4). Statistically significant differences were also observed between the lotilaner and control groups in the pruritus score (t(12) = 2.50 and 3.71, P = 0.0281 and P = 0.0034 on Days 14 and 28, respectively) and in the total FAD score averaged over the entire study duration (t(12) = 3.11, P = 0.0091), with lower scores in the lotilaner group.
Fig. 4 Pruritus mean scores of lotilaner- and fipronil/(S)-methoprene-treated cats at each assessment time-point. Statistically significant difference from baseline: *S ≥ 22.5, P ≤ 0.0039
Safety was evaluated in 320 cats (182 primary and 138 secondary) enrolled in the study and included 217 cats that were treated with lotilaner and 103 cats treated with fipronil/(S)-methoprene.
Fifteen out of the 217 cats in the lotilaner-treated group (6.91%) and five of 103 cats (4.85%) in the fipronil/(S)-methoprene-treated group were affected by non-serious, mild adverse events.
Four animals had SAEs during the study (three in the lotilaner group and one in the control group: 1.4% and 1.0%, respectively). Signs included abdominal pain, digestive tract stenosis and obstruction, urinary tract obstruction, dyspnoea, pyothorax, dehydration, lethargy and death. Two cats died during the study: one cat in the lotilaner group was run over by a car, and one cat in the fipronil/(S)-methoprene group was diagnosed with pyothorax. Two other cats in the lotilaner group presented, one with urinary tract obstruction and the other with a foreign body in the gastrointestinal tract, both requiring surgical intervention. These cats made a full recovery post-intervention and completed the study. None of the SAEs was assessed as being related to the study treatment.
Fisher's exact test showed that the number of cats affected by adverse events or serious adverse events was not significantly different between the two groups for each of the signs (Z ≤ 2.05, P ≥ 0.1029 and Z ≤ 1.44, P ≥ 0.3219, respectively).
The average body weight of the lotilaner-treated cats was 3.95 kg (SD 1.59, range 1.00–10.50 kg) and of the fipronil/(S)-methoprene-treated cats was 3.89 kg (SD 1.57, range 1.00–8.00 kg) at baseline (Day 0). There were no significant differences between treatment groups in body weight on Day 0 (t(160) = 0.12, P = 0.9064) or in body weight and body weight gain on Days 14 and 28 (t(176) = 1.76, P = 0.0798 for Day 14; t(154) = 0.23, P = 0.8177 for Day 28); see Table 4.
Table 4 Mean body weight and body weight changes over time
Environmental pressure
Data on environmental pressure in the week before a scheduled visit were recorded on Days 0, 14 (± 2) and 28 (± 2). Over all study sites, the estimated number of animals (cats and dogs) diagnosed with a flea infestation during the 7 days prior to each case visit ranged between eight (week of 26 October 2015) and 32 cases (week of 20 July 2015), while the estimated average number of products supplied at the clinic for flea prophylaxis and/or treatment over the same period ranged between 21 (week of 26 October 2015) and 97 (week of 20 July 2015).
Both the lotilaner and fipronil/(S)-methoprene groups demonstrated post-treatment reductions in flea counts. Results showed that cats treated with lotilaner had significantly lower flea counts on Days 14 and 28 and over the entire study (P < 0.0001) compared with animals treated with fipronil/(S)-methoprene. Credelio™ was shown to be superior to Frontline Combo® Spot-on (P < 0.0001) at both time points and on average.
Adverse events affected 6.91% of cats treated with Credelio™ and 4.85% of cats treated with Frontline Combo® Spot-on; the difference was not statistically significant. In addition, no significant differences in body weight change were observed between the two groups of cats.
The choice of the two different regions in which the study was performed ensured assessment of the product's efficacy under different climatic and geographic conditions and with a high environmental infestation pressure, in compliance with the European guidelines.
The sub-optimal comparison of an orally administered product (Credelio™) against a topically applied treatment (Frontline Combo® Spot-on) was driven by the lack of an available oral product for cats active against both fleas and ticks. The study described in this publication was designed to evaluate efficacy against fleas only, but the geographical regions in which it was conducted were known to have a high prevalence of ticks. Although oral products with efficacy against fleas on cats were available, the sponsor chose not to use them, since doing so would have exposed the cats to the risk of vector-borne disease transmission from infected ticks. Lotilaner tablets had proven efficacious against the main European cat tick (Ixodes ricinus) in three pivotal laboratory studies [7], while efficacy against all ticks of relevance in Europe (I. ricinus, I. hexagonus, Dermacentor reticulatus and Rhipicephalus sanguineus) was demonstrated in a large field study performed in three different European countries [16].
Since a topical isoxazoline for cats was not available at the time the study was performed, the applicant chose one of the most commonly used cat parasiticides as the comparator.
The choice of the comparator product dictated the minimum body weight of the cats for inclusion (1 kg). In the pivotal target animal safety studies, lotilaner was shown to be safe for cats as light as 0.5 kg [9], but since the control product label specified a higher minimum body weight, a minimum body weight of 1 kg at inclusion was selected in order to maintain blinding and prevent the introduction of bias.
Flea counts and analysis of the demographics and related variables showed that the Credelio™ and Frontline Combo® Spot-on populations were homogeneous at baseline, with the exception of the cat breeds, with a higher percentage of cats of European breed in the Credelio™ group. This was considered of no relevance since the breed per se has no impact on the performance of an ectoparasiticide product. The only related variable potentially confounding study results might have been a higher number of cats with long hair in one of the groups, but the comparison of hair length showed that the two treatment groups were not different for this variable, at baseline.
The evaluation of the efficacy against fleas was performed without consideration of the flea species, since Ctenocephalides felis is recognised to be the most prevalent species in cats in Europe [17]. For the other relevant European flea species (Ctenocephalides canis), a previous in vitro study, in which the susceptibilities of European strains of C. felis and of C. canis to lotilaner were compared in a contact test (unpublished data), had demonstrated an equivalent or higher susceptibility of C. canis when compared to C. felis. The efficacy of lotilaner against C. canis was confirmed in a dose confirmation laboratory study and in a European field study in dogs (unpublished data and [18], respectively). Both studies were pivotal, well controlled, randomized, blinded and performed in compliance with GCP (good clinical practice) standards.
Since there were only ten cats in the lotilaner group and six in the fipronil/(S)-methoprene group showing signs of FAD at baseline, the study had limited power for the non-parametric comparison to baseline in the latter group when evaluating improvement in the clinical signs of FAD. A similar consideration applies to the non-parametric comparison to baseline in the lotilaner group, with a maximum of five cats showing each sign at baseline, except for pruritus, crusts, and the total FAD score, for which nine to ten animals were affected at baseline. Still, from the analysis within the lotilaner group alone, it can be concluded that FAD signs improved substantially over the course of the study.
Administration compliance was 100% in the Credelio™ group, showing that the tablets were easy for pet owners to administer and well accepted by the cats.
Lotilaner chewable tablets for cats (Credelio™) at the recommended minimum dose rate of 6 mg/kg body weight as a single oral administration in fed state, were shown to be efficacious and safe when administered in the field to client-owned cats. Lotilaner was non-inferior to the approved positive control (Frontline Combo® Spot-on Cat, fipronil/(S)-methoprene) in the treatment of natural flea infestations for 28 ± 2 days on cats presented as veterinary patients in France and Spain. Moreover, Credelio™ was superior to Frontline Combo® Spot-on on both assessment days (14, 28) and for the entire study period (P < 0.0001). Analysis of clinical signs of FAD showed that animals treated with lotilaner had significantly lower levels of pruritus, crusts and the total FAD score compared with Frontline Combo Spot-on Cat for the entire study duration. Both products were well tolerated.
AE: adverse event
ANCOVA: analysis of covariance
ANOVA: analysis of variance
CI: confidence interval
FAD: flea allergy dermatitis
GCP: good clinical practice
ITT: intent-to-treat
PP: per protocol
SAE: serious adverse event
VeDDRA: Veterinary Dictionary for Drug Regulatory Activities
Weber T, Selzer PM. Isoxazolines: a novel chemotype highly effective on ectoparasites. ChemMedChem. 2016;11:270–6.
European Medicines Agency, Committee for Medicinal Products for Veterinary Use. Credelio Summary of opinion. London; 2017. http://www.ema.europa.eu/docs/en_GB/document_library/Summary_of_opinion_-_Initial_authorisation/veterinary/004247/WC500221827.pdf. Accessed 30 May 2018.
European Medicines Agency, Committee for Medicinal Products for Veterinary Use, Credelio European Product Information. 2017. http://www.ema.europa.eu/docs/en_GB/document_library/EPAR_-_Product_Information/veterinary/004247/WC500227518.pdf. Accessed 30 May 2018.
Garcia-Reynaga P, Zhao C, Sarpong R, Casida JE. New GABA/glutamate receptor target for [(3)H]isoxazoline insecticide. Chem Res Toxicol. 2013;26:514–6.
Rufener L, Danelli V, Bertrand D, Sager H. The novel isoxazoline ectoparasiticide lotilaner (Credelio™): a non-competitive antagonist specific to invertebrates γ-aminobutyric acid-gated chloride channels (GABACls). Parasit Vectors. 2017;10:530.
European Medicines Agency, Committee for Medicinal Products for Veterinary Use, Bravecto® European Product Information. http://www.ema.europa.eu/docs/en_GB/document_library/EPAR_-_Product_Information/veterinary/002526/WC500163859.pdf. Accessed 30 May 2018.
Cavalleri D, Murphy M, Seewald W, Drake J, Nanchen S. Laboratory evaluation of the efficacy and speed of kill of lotilaner (Credelio™) against Ixodes ricinus ticks on cats. Parasit Vectors. 2018 (In press).
Cavalleri D, Murphy M, Seewald W, Nanchen S. Laboratory evaluation of the efficacy and speed of kill of lotilaner (Credelio™) against Ctenocephalides felis on cats. Parasit Vectors. 2018 (In press).
Kuntz EA, Kammanadiminti S. Safety of lotilaner flavoured chewable tablets (Credelio™) after oral administration in cats. Parasit Vectors. 2018 (In press).
Directive 2001/82/EC of the European Parliament and of the Council of 6 November 2001, as amended, on the Community code relating to veterinary medicinal products. https://ec.europa.eu/health/sites/health/files/files/eudralex/vol-5/dir_2001_82_cons2009/dir_2001_82_cons2009_en.pdf. Accessed 06 June 2018.
International Cooperation on Harmonization of Technical Requirements for Registration of Veterinary Medicinal Products,VICH GL9, Good Clinical Practice, July 2000. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/10/WC500004343.pdf. Accessed 06 June 2018.
European Medicines Agency, Committee for Medicinal Products for Veterinary Use, EMA/CVMP/EWP/81976/2010: Guideline on statistical principles for clinical trials for veterinary medicinal products (pharmaceuticals), 01 August 2012. http://www.ema.europa.eu/ema/index.jsp?curl=pages/regulation/general/general_content_000381.jsp&mid=WC0b01ac058002ddc2. Accessed 27 June 2018.
The rules governing medicinal products in the European Union, volume VII: 7AE17a, Guidelines for the testing of veterinary medicinal products: Demonstration of efficacy of ectoparasiticides. 1994. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/10/WC500004662.pdf. Accessed 30 May 2018.
European Medicines Agency, Committee for Medicinal Products for Veterinary Use, EMEA/CVMP/EWP/005/2000-Rev.2: Guideline for the testing and evaluation of the efficacy of antiparasitic substances for the treatment and prevention of tick and flea infestation in dogs and cats, 12 Nov 2007. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/10/WC500004596.pdf. Accessed 27 June 2018.
Marchiondo AA, Holdsworth PA, Fourie LJ, Rugg D, Hellmann K, Snyder DE, Dryden MW. World Association for the Advancement of Veterinary Parasitology (W.A.A.V.P.) second edition: guidelines for evaluating the efficacy of parasiticides for the treatment, prevention and control of flea and tick infestations on dogs and cats. Vet Parasitol. 2013;194:84–97.
Cavalleri D, Murphy M, Seewald W, Nanchen S. A randomized, controlled field study to assess the efficacy and safety of lotilaner (Credelio™) in controlling ticks in client-owned cats in Europe. Parasit Vectors. 2018 (In press).
Gálvez R, Musella V, Descalzo MA, Montoya A, Checa R, Marino V, et al. Modelling the current distribution and predicted spread of the flea species Ctenocephalides felis infesting outdoor dogs in Spain. Parasit Vectors. 2017;10:428.
Cavalleri D, Murphy M, Seewald W, Drake J, Nanchen S. A randomised, blinded, controlled field study to assess the efficacy and safety of lotilaner tablets (Credelio™) in controlling fleas in client-owned dogs in European countries. Parasit Vectors. 2017;10:526.
The authors would like to thank the participating veterinarians, the cat owners who agreed to enrol their cats in the study and Geetanjali Tonpe of Eli Lilly Services Private Limited, Bangalore, India for assistance with the manuscript.
The study was funded by Elanco.
Due to commercial confidentiality of the research, data not included in the manuscript can only be made available to bona fide researchers, subject to a non-disclosure agreement.
Elanco Animal Health, Mattenstrasse 24a, 4058, Basel, Switzerland
Daniela Cavalleri, Martin Murphy, Wolfgang Seewald & Steve Nanchen
All authors participated in the design and completion of the studies and were involved in the drafting of the manuscript. All authors read and approved the final manuscript.
Correspondence to Daniela Cavalleri.
This clinical field study was conducted in compliance with local and national regulatory requirements in France and Spain and consistent with the principles of Good Clinical Practice: the VICH guideline on good clinical practice (GCP; VICH GL 9) and Directive 2001/82/EC as amended; the Rules Governing Medicinal Products in the European Union, Volume VIIA: Guidelines for the testing of veterinary medicinal products: Demonstration of Efficacy of Ectoparasiticides, 7AE17a, page 215–222; EMEA/CVMP/EWP/005/2000-Rev.2: Guideline for the testing and evaluation of the efficacy of antiparasitic substances for the treatment and prevention of tick and flea infestation in dogs and cats, 12 Nov 2007; and with the guidelines of the World Association for the Advancement of Veterinary Parasitology. Participating cat owners were required to sign an informed consent form for their cat(s) to participate in the study after details of the study design and products under investigation had been explained.
DC, WS, MM and SN are employees of Elanco Animal Health.
French version: Please see Additional file 1 (https://doi.org/10.1186/s13071-018-2971-9) for the French translation of the Abstract of this article.
French translation of the Abstract. (PDF 18 kb)
Cavalleri, D., Murphy, M., Seewald, W. et al. A randomized, controlled field study to assess the efficacy and safety of lotilaner (Credelio™) in controlling fleas in client-owned cats in Europe. Parasites Vectors 11, 410 (2018) doi:10.1186/s13071-018-2971-9
Lotilaner
Credelio
Fipronil/(S)-methoprene
Credelio for cats
Inference following multiple imputation for generalized additive models: an investigation of the median p-value rule with applications to the Pulmonary Hypertension Association Registry and Colorado COVID-19 hospitalization data
Matthew A. Bolt, Samantha MaWhinney, Jack W. Pattee, Kristine M. Erlandson (ORCID: orcid.org/0000-0003-0808-6729), David B. Badesch & Ryan A. Peterson (ORCID: orcid.org/0000-0002-4650-5798)
Missing data prove troublesome in data analysis; at best they reduce a study's statistical power and at worst they induce bias in parameter estimates. Multiple imputation via chained equations is a popular technique for dealing with missing data. However, techniques for combining and pooling results from fitted generalized additive models (GAMs) after multiple imputation have not been well explored.
We simulated missing data under MCAR, MAR, and MNAR frameworks and utilized random forest and predictive mean matching imputation to investigate a variety of rules for combining GAMs after multiple imputation with binary and normally distributed outcomes. We compared multiple pooling procedures including the "D2" method, the Cauchy combination test, and the median p-value (MPV) rule. The MPV rule involves simply computing and reporting the median p-value across all imputations. Other ad hoc methods such as a mean p-value rule and a single imputation method are investigated. The viability of these methods in pooling results from B-splines is also examined for normal outcomes. An application of these various pooling techniques is then performed on two case studies, one which examines the effect of elevation on a six-minute walk distance (a normal outcome) for patients with pulmonary arterial hypertension, and the other which examines risk factors for intubation in hospitalized COVID-19 patients (a dichotomous outcome).
In comparison to the results from generalized additive models fit on full datasets, the median p-value rule performs as well as if not better than the other methods examined. In situations where the alternative hypothesis is true, the Cauchy combination test appears overpowered and alternative methods appear underpowered, while the median p-value rule yields results similar to those from analyses of complete data.
For pooling results after fitting GAMs to multiply imputed datasets, the median p-value is a simple yet useful approach which balances both power to detect important associations and control of Type I errors.
Before the onset of data collection in many studies, sample size calculations are conducted to ensure that the study is adequately powered to find a meaningful clinical effect size if one exists. In all but the rarest circumstances, some data cannot be collected as planned, which leaves the analyst with a dataset that is either underpowered, biased in some manner, or both. Typically, missing data are categorized into three common patterns. Datasets with missingness completely at random (MCAR) occur when the probability of a data point being missing is completely independent of its value or the value of other variables. Missingness at random (MAR) instead refers to when the probability of a value being missing is dependent on other covariates in the dataset, but not dependent on the true value of the variable. The most difficult missing data pattern, missingness not at random (MNAR), occurs when the probability of missingness of a variable is dependent on its value [1].
Since missing data are commonplace, several methods exist which address their ensuing problems. The most straightforward approach to dealing with missing data is simply to use only rows of the data that are not missing any values. This approach, commonly referred to as the "complete case" or "list-wise deletion" approach, is useful when the amount of missing data is small (some recommend no more than 5% of the data be missing), but inappropriate when there is substantial missing data [2,3,4]. In the latter situation, a complete case approach is likely to be both underpowered and biased. A popular approach to mitigate these issues, multiple imputation (MI), involves replicating the analytic dataset m times and then in each replicated dataset filling in missing values with a stochastic "reasonable guess" (imputation) of what the true value might be. Due to the stochastic component, each of the m datasets is filled in with slightly different guesses. A variety of imputation methods exist, two of which will be analyzed in this paper: predictive mean matching (PMM) and random forest (RF) imputation [5]. Multiple imputation is often paired with chained equations, also known as the fully conditional specification, which specifies the multivariate imputation model one variable at a time. Starting with an initial simple imputation (e.g., mean imputation), multivariate imputation by chained equations uses a sequence of iterations from the conditionally-specified models to generate imputed values that reflect relationships in the data. This technique can be generalized well to both continuous and categorical data [6].
PMM imputation is particularly useful for continuous variables and is conducted by estimating a value for a missing data cell with multiple regression based on other columns that are not missing data. For a given "modeled" column, observations with similar predicted values (of the same column) are placed together into small groups regardless of whether any data were initially missing. Then, the missing observations are randomly assigned the true values of the rows from their group which were not missing the modeled column. This process allows realistic values to be used in imputations that certainly exist on the domain of the data as well as maintain some variability [5, 7]. RF imputation is a powerful non-parametric method which involves building a random forest for each variable with missing values and using the results of that random forest to impute data. Typically, a sample of non-missing data will be sampled with replacement to create a classification or regression tree. This process will be repeated multiple times on bootstrap resamples, creating a variety of trees, which results in a bootstrapped random forest based on observed data. With this random forest, one can randomly select observed values from the terminal nodes of each of the trees in the forest to replace the missing data. Multiple random samples from these terminal nodes can create multiple imputed datasets [8]. Random forests can conveniently handle both categorical and continuous data and will often perform well in the presence of interacting or non-linear relationships.
Regardless of the exact imputation method, once all m copies of the original dataset have had their missing data stochastically filled in with imputed data, statistical analyses are performed on each imputed dataset to produce m results pertaining to the research question at hand. The final step of the MI process is to then pool the m results together and treat the combined outcome as the result [9]. When the statistical analysis of the imputed datasets involves an estimate or test statistic that is normally distributed, the pooling of results can be accomplished using a process called Rubin's rules. To obtain the combined estimate, one can simply take the mean of the estimates over the m analyses. Obtaining the combined variance involves a calculation with the variance of the estimate across imputations with the average variances of the estimate within each imputation. Then, one can draw final conclusions with a Wald test based on the mean estimate and the combined variance [9, 10]. However, this pooling process is designed to work only with estimates that are normally distributed; when presented with estimates or test statistics that are non-normally distributed, one must use alternative methods to pool results from m multiply imputed datasets.
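For reference, the standard Rubin's rules for a scalar, approximately normal estimate \({\hat{Q}}_{i}\) with within-imputation variance \({U}_{i}\) from imputation \(i=1,\dots,m\) are:
$$\bar{Q}=\frac{1}{m}\sum_{i=1}^{m}{\hat{Q}}_{i},\qquad \bar{U}=\frac{1}{m}\sum_{i=1}^{m}{U}_{i},\qquad B=\frac{1}{m-1}\sum_{i=1}^{m}{\left({\hat{Q}}_{i}-\bar{Q}\right)}^{2}$$
$$T=\bar{U}+\left(1+\frac{1}{m}\right)B,$$
with the Wald test based on \(\bar{Q}/\sqrt{T}\) (together with a degrees-of-freedom adjustment described in [9, 10]).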
One such instance of complex models that do not have normally distributed estimates or test statistics are generalized additive models (GAMs). GAMs are useful for their ability to fit a line through data with varying "curvy-ness" in a more efficient and elegant manner than other traditional polynomial functions. Although most often employed for purposes of prediction and description, GAMs will sometimes be used for hypothesis testing and inference; our later applications concern situations where a flexible model is desired alongside hypothesis tests. Several fitting procedures exist to estimate the components involved in GAMs, most of which have a penalty term which can optimize model fit while protecting against overfitting. However, with such flexibility comes the cost of no longer having interpretable slope estimates or normally distributed test statistics. Thus, GAM parameters have no meaningful interpretation and cannot be combined with straightforward pooling methods. In general, beyond plotting a GAM, the only way to examine the importance of a GAM association numerically is to examine its effective degrees of freedom along with an approximate F statistic. These approximations are detailed in Wood [11, 12] and can provide a p-value which, if small, suggests that the relationship between a predictor in the GAM and the outcome is not a perfectly horizontal line. Wood's approximations for these F statistics and effective degrees of freedom (and their corresponding p-values) are implemented in the R package mgcv [11]. Therefore, rather than attempting to apply normal pooling rules to these non-normal statistics, we suggest that additional methods of pooling are necessary when using GAMs together with multiple imputation.
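As a minimal illustration of where these p-values come from in practice (a sketch with hypothetical data and variable names, not code from this study):

```r
library(mgcv)

# `dat` is a hypothetical data frame with outcome y and covariates x1, x2
fit <- gam(y ~ s(x1, k = 10) + s(x2, k = 10), data = dat, method = "ML")

# edf, reference df, approximate F statistic, and p-value for each smooth
summary(fit)$s.table
```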
We set out to examine several methods of pooling GAMs after MI based on their F statistics, effective degrees of freedom (edfs), and p-values. A simple combination method to pool GAMs after MI, and the focus of this study, is to take the median of the within-imputation GAM p-values as the pooled p-value measuring the strength of evidence for the association. Eekhout, van de Wiel, and Heymans [13] aptly named this method the median p-value (MPV) rule, and they investigated its utility in determining the significance of categorical predictor variables in logistic regression models after MI. They demonstrated that in null models, the Type I error rate of results based on the MPV rule is only slightly inflated. In situations where the alternative hypothesis is true, the MPV rule performs equal to or better than other conventional methods. Several other methods for combining GAMs after MI are applicable as well; the most complex of which is the D2 method outlined by van Buuren [5] and originally proposed by Rubin [14], which involves combining test statistics in such a way to test against a "pooled" F distribution. Another method, the Cauchy combination test proposed by Liu and Xie [15], involves transforming then summing p-values into a joint Cauchy-distributed test statistic which can then be compared against the Cauchy cumulative distribution function. The Cauchy combination test was introduced in the context of genomic data, and to our knowledge has not been applied in the context of MI. Finally, we will investigate some alternative ad-hoc approaches described in the following section.
Our primary motivation in this work is to demonstrate the viability (or lack thereof) of the MPV rule in situations where MI and GAMs are used in conjunction. Such an exceptionally straightforward pooling method, if empirically valid, would be a welcome and cogent solution to this complex problem. In this work, we compare the empirical performance of the MPV in terms of power and type I error control relative to a suite of possible alternatives, using simulations and case studies. Our simulation studies were adapted from Friedman [16] to fit GAMs on multiply imputed data to variables which have varying degrees of "true" signal (including one variable with a true signal of zero). We additionally vary the imputation methods (PMM and RF), the missingness mechanisms (MCAR, MAR, and MNAR), and the outcome type (continuous/dichotomous). We then apply our proposed MPV rule and its competitors in two case studies, one examining the effect of home elevation (i.e. meters above sea level) on a six-minute walk distance for patients with pulmonary arterial hypertension (PAH), and the other investigating the extent to which C-reactive protein is associated with the risk of mechanical ventilation for patients infected with the novel Coronavirus disease (COVID-19).
Proposed multiple imputation pooling methods
In this section, we describe the various methods we examined in this study. The first, D2, involves pooling F-statistics from the m GAM fits and then comparing a pooled version of the F-statistic to an F-distribution whose numerator and denominator degrees of freedom are based upon the variance of F-statistics across imputations, the effective degrees of freedom from each GAM, and the number of imputations. Further details regarding this method can be found in van Buuren [5]. This D2 method is the most complex of the approaches evaluated in this study, and while understandable, it is not immediately intuitive. Some functionality is provided in R for D2 in the "mice" package [17], but the existing implementation is not applicable in the GAM context without custom programming. Furthermore, a component of the D2 method involves dividing the average test statistic across imputations by the number of parameters in the model; however, the number of parameters in GAMs is not fixed between models that are run on different imputations, so the correct denominator to use in the equation is unclear. Therefore, we evaluate two versions of the D2 method, one in which the average test statistic across imputations is divided by the average number of parameters across imputations (D2), and the other which is the average, across imputations, of each test statistic divided by the number of parameters in that imputation's model (Alt. D2), as shown below:
$$D2:\frac{mean(test\;statistics)}{mean(number\;of\;parameters)}$$
$$Alt.\ D2:mean{\left(\frac{test\ statistics}{number\ of\ parameters}\right)},$$
where the means are taken across the m imputations.
The second method we investigated is the Cauchy combination test, which deals only with p-values (as do all of the remaining combination methods considered). The Cauchy combination test is conducted by summarizing p-values from all imputations using the formula below into t0, then comparing this statistic against a standard Cauchy distribution:
$${t}_{0}= \sum_{i=1}^{m}\mathit{tan}\{(.5-{p}_{i})\pi\}$$
$$p\text{-value}= \frac{1}{2}-(\mathit{arctan}\ {t}_{0})/\pi$$
This method requires fewer calculations and is easier to implement than the D2 method. The Cauchy combination method was originally conceived to assess significance among large numbers of tests in genome-wide association studies and was designed to be most accurate for small p-values. The fatter tails of the Cauchy distribution reduce the effects of correlation between p-values of similar tests, and in our case, tests from multiply imputed datasets are likely to be correlated [15].
Finally, our proposed solution, the MPV rule, is simply to take the median p-value across the m GAM results. Ultimately, in this context, the analyst needs a singular summary measure, and the median of all of the p-values is a concrete (if ad-hoc) method that has been shown to work in other situations [13]. The median p-value across all GAMs captures the central tendency of all individual p-values in a robust manner. We also explore the empirical properties of several other ad-hoc methods, including the mean p-value rule (identical to the MPV but with a different measure of central tendency), a single imputation approach (simply using results from a single imputed dataset, avoiding any pooling procedures) and a complete case (list-wise deletion) approach.
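A minimal sketch of the p-value-based rules, applied to a numeric vector `p` holding the m within-imputation p-values for one covariate (the Cauchy statistic below follows the unweighted sum displayed above; Liu and Xie's original statistic weights the summands so the weights sum to one):

```r
pool_mpv   <- function(p) median(p)  # median p-value rule
pool_meanp <- function(p) mean(p)    # mean p-value rule

pool_cauchy <- function(p) {
  t0 <- sum(tan((0.5 - p) * pi))  # transform each p-value, then sum
  0.5 - atan(t0) / pi             # tail probability of a standard Cauchy
}
```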
Data for the simulation was generated using a model adapted from Friedman ([16], p. 37):
$$f\left(x\right)=10\mathit{sin}({x}_{1}{x}_{2}\pi )+20{({x}_{3}-.5)}^{2}+10{x}_{4}+5{x}_{5}+0{x}_{6}+\varepsilon$$
$$\varepsilon \sim N\left(0,9\right)$$
$${x}_1,\dots,{x}_6\sim unif\left(0,1\right)$$
We chose this true generating model due to its reasonable number of variables, its variety of beta parameter sizes, and its diversity of functional forms (trigonometric, polynomial, and linear). The GAM approach should feasibly be able to approximately capture the relationship between each covariate (except for x6) and the response, while results pertaining to x6 can be considered as evaluating the type I error rate (since the true relationship between x6 and y is a horizontal line, i.e., the null is true). Each covariate was generated from an independent uniform(0, 1) distribution. While the independence assumption is somewhat unrealistic and will limit imputation quality, it allows us to examine the power of each covariate's relationship with the outcome more precisely. A similar model was utilized in simulations with a binary outcome; a standardized version of this formula produced a series of normally distributed random variables which were then transformed into a probability through a logit link and fed into a binomial process. This results in a binary outcome for which the log-odds of success are related to each covariate according to the same functional form as above.
Given a missingness mechanism and outcome type, each simulation study was carried out using the steps below (also outlined in a flow chart in Fig. 1):
1. The Friedman generating model was used to simulate S = 10,000 datasets.
2. GAMs were fit to all full datasets wherein each covariate was modeled with its own smoothed term to produce our primary "full-data" (i.e. "gold-standard") benchmarks. Because GAMs can vary in their number of basis functions, we specified that each fit be limited to a maximum of 10 basis functions to maintain model consistency across imputations and simulations. Note that for binary outcomes, GAMs were fit using a binomial-family model with a logit link.
3. Next, using the mice package [17], we simulated missingness within each dataset at a rate of 35% under the prespecified missing data pattern. That is, after simulating missingness, roughly 65% of rows in a dataset had no missing data, while the other 35% were missing data for at least one variable. Description and verification of the procedure to simulate these missing data patterns is outlined in Schouten, Lugtig, and Vink [18].
4. As a second benchmark, GAMs were fit using list-wise deletion (which reduced the sample to 65% of its original size) to allow for comparison of methods to the complete case approach (arguably, the simplest possible approach to missing data).
5. Each dataset was then multiply imputed using chained equations with m = 25. Default options were utilized under both random forest and predictive mean matching approaches. Imputation quality is described by Figure A7 in Additional file 1: Appendix.
6. We ran similar GAMs (as in steps 2 and 4) on each of the m imputed datasets.
7. We implemented the D2, Cauchy combination, MPV, and mean p-value rules to pool results on the GAMs from step 6 (a minimal code sketch of steps 5 to 7 follows this list).
8. Pooled p-values were compared to a significance level of 0.05 to compute power to detect the associations for x1 through x5 and to calculate Type I error rates for x6. These were then compared to both the full-data results (gold-standard) and complete case results.
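A minimal sketch of steps 5 to 7 for one such dataset (`dat_miss` is a hypothetical copy of `dat` with values deleted; swapping method = "pmm" for "rf" gives the random forest version):

```r
library(mice)
library(mgcv)

imp <- mice(dat_miss, m = 25, method = "pmm", printFlag = FALSE)

pvals <- sapply(seq_len(imp$m), function(i) {
  d <- complete(imp, i)  # the i-th completed dataset
  fit <- gam(y ~ s(x.1, k = 10) + s(x.2, k = 10) + s(x.3, k = 10) +
               s(x.4, k = 10) + s(x.5, k = 10) + s(x.6, k = 10),
             data = d, method = "ML")
  summary(fit)$s.table[, "p-value"]
})

apply(pvals, 1, median)  # MPV-pooled p-value for each of the six smooths
```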
Simulation study flow chart
These steps were repeated for normal and binary outcome data and under MCAR, MAR, and MNAR missing data patterns, which resulted in six total simulation studies. Attention is primarily paid in this paper to MAR data, but results under the MCAR and MNAR framework are also presented in Additional file 1: Appendix.
Preliminary analysis revealed that the default GAM fitting option, generalized cross-validation (GCV), often produced results with a much higher type I error rate than the anticipated 5%. A short comparison of model fitting techniques revealed that between maximum likelihood (ML), restricted maximum likelihood (REML), and GCV, the ML method produced type I error rates closest to, although still slightly higher than, the expected rate of 5%. This finding is consistent with writings from Wood [11], where GCV is found to produce less accurate results than ML, and from Wood [19, 12], where it is argued that GCV is more at risk than ML or REML of global optimization failure, which in turn under-penalizes over-fitting and leads to a higher Type I error rate. Therefore, all models in the simulation study utilized ML for GAM fitting; however, all three fitting methods are examined and compared in our first application. Finally, we repeated the simulation studies with cubic B-splines in lieu of GAMs to evaluate and compare the performance of the MPV using an alternative semi-parametric spline model for normal outcomes.
Results for each of the pooling methods, as well as the complete case analysis and full data analysis, are shown in Fig. 2 (normal outcome) and Fig. 3 (binary outcome).
Proportion of tests that rejected the null hypothesis (normal outcome, MAR, GAM)
Proportion of tests that rejected the null hypothesis (binary outcome, MAR, GAM)
In Fig. 2, we find the pooling methods have a somewhat consistent order in regard to proportion of tests rejected at p < 0.05 across imputation methods and variables, with the proportion of rejections being highest for the Cauchy combination method, followed by the MPV rule, a single imputation approach, a mean p-value rule, and then the two D2 methods. The exceptions to this ordering occur for the third and sixth covariates, for which single imputation rejects a larger proportion of tests than the MPV rule.
The Cauchy combination rule seems to be the highest-powered approach for pooling; in all situations it has the highest proportions of tests with p < 0.05, sometimes suspiciously returning more significant findings than the "gold standard" models run on the full data. However, for the x6 variable which has a null effect, the Cauchy combination rule significantly over-rejects tests compared to other methods. Curiously, almost all methods over-rejected tests (i.e., rejected tests at a rate higher than 5%) for the x6 variable, including the full data analysis and complete case analysis. In terms of power, the MPV rule performs only slightly worse than the full data analysis for variables x1 through x5 and performs much better than the complete case analysis in most settings. On the other hand, both D2 methods perform rather poorly (e.g., low power) in most situations compared to the other approaches, although they still typically perform better than the complete case analysis. The D2 methods additionally have low Type I error rates (rates that are much lower than those for the full data or complete case approaches).
Findings for the binary outcome data under a MAR framework are very similar, as shown in Fig. 3. Additionally, findings regarding the capability of these pooling methods were similar in simulations conducted under MCAR and MNAR frameworks (see Additional file 1: appendix). A point of peculiarity is that in both the normal and dichotomous outcome simulations, the Type I error rate in the perfect, full-data case deviates further than anticipated from 5%. We conjecture that the p-value distribution even in the full-data case is biased due to the inherent variability in the smoothing parameter selection that is not being accounted for when testing the null hypothesis. However, as mentioned previously, these GAMs were fit with an ML approach which in preliminary analyses had lower Type I error rates than REML or GCV. So, although the Type I error rate is not as low as one might hope for in the perfect, full-data case, we have utilized what we believe to be currently the most conservative method of all fitting options.
A natural follow-up question that arises for the MPV is: how many imputed datasets are necessary to achieve satisfactory performance? To shed light on this, we illustrate the proportion of tests with p < 0.05 using the MPV and several choices of the number of imputed datasets in Fig. 4. Generally, the performance of the MPV rule improved with an increased number of imputations; as the number of imputed datasets increased, power increased for x1, x2, and x4, and remained constant for x5, while the type I error rate decreased toward the full-data level. Improvements were greatest for low numbers of imputations and tapered off after 10 imputations. Strangely, this was not the case for x3, where the MPV rule's power decreased with additional imputations. This is because both PMM and RF have difficulties capturing the nature of the x3 relationship to Y (see Additional file 1: Appendix Figure A7), particularly PMM, since the x3 relationship is U-shaped, so any linear approximation to it is biased toward the null.
MPV rule performance by number of imputations (normal outcome, MAR, GAM)
PMM imputation seemed to yield results more consistent with the full data approach than RF imputation, with the exceptions of x3 and the null variable x6, where the type I error rate was higher for PMM imputed data. As stated above, the better performance of RF over PMM for x3 is likely due to RF imputation's ability to better identify and model non-linear relationships.
In addition to GAMs, the effectiveness of the MPV in comparison to other methods was also examined on B-spline models. B-spline models, like GAMs, are a technique for fitting a smooth curve through data, where the curviness of the line is possible due to the combination of "knots" being placed along the x-axis and constrained polynomial terms. In our investigation, we examined cubic B-spline regression models with 10 degrees of freedom with knots placed at seven internal equally-spaced quantiles of the covariate. B-splines were examined only with normal data, and inference was conducted using likelihood ratio tests (LRT). Figure 5 presents results for B-spline models on normal data with a baseline MAR missing data structure. The alternative D2 method was not used in this analysis, as the number of degrees of freedom in the case of B-splines is more tractable.
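A sketch of the per-imputation test for a single covariate's spline term (`d` is one hypothetical completed dataset; cubic `bs()` with df = 10 places seven internal knots at quantiles; only two covariates are shown for brevity):

```r
library(splines)

full    <- lm(y ~ bs(x.1, df = 10) + bs(x.2, df = 10), data = d)
reduced <- lm(y ~ bs(x.2, df = 10), data = d)  # drop the x.1 spline term

lr <- 2 * (logLik(full) - logLik(reduced))           # likelihood ratio statistic
pchisq(as.numeric(lr), df = 10, lower.tail = FALSE)  # 10 spline df dropped
```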
Proportion of tests that rejected the null hypothesis (normal outcome, MAR, B-spline)
Again, Fig. 5 shows that the MPV performs comparably well in terms of power for covariates x1 through x5. Now, we also see a satisfactory performance of MPV for the null case (x6), in which the MPV rule is closest to the expected Type I error rate while remaining slightly conservative. Further B-spline results are shown in the Additional file 1: Appendix for MCAR and MNAR missing data patterns. Figure 6 shows that the performance of the MPV generally improves or stays steady with additional imputations when used with LRTs of B-spline models, with the same exceptions as GAMs with the x3 variable.
MPV rule performance by number of imputations (normal outcome, MAR, B-splines)
It is clear we observe bias away from the null in both B-spline and GAM settings. To investigate this, we plot the p-values for the null effect of x6 from the GAM results in Fig. 7. We see that not only are PMM and RF p-values for null effects non-uniform (with a heavy lean towards smaller p-values), but the complete case and full data situations likewise are left-heavy. Although we found that the type I error rate was better controlled for B-splines than for GAMs, similar histograms indicate that the B-splines still show evidence of bias away from the null after imputation (Fig. 8). However, the full data and complete case rejections of the null are near-uniform with rejection rates almost exactly at 5%, in contrast to the higher rates seen with GAMs. In conclusion, we have found that default imputation methods (PMM and RF) bias results slightly against the null, but also that GAM p-values are biased against the null regardless of imputation technique, as demonstrated by the inflated type I error rates even in the full data analysis.
Distribution of p-values for null effect with GAMs
Distribution of p-values for null effect with B-splines
Normal outcome application: six-minute walk distance based on elevation in PAH patients
A recent paper [20] examined the effect of home elevation from sea-level (based on ZIP code) on distance walked during six minutes in patients presenting with pulmonary arterial hypertension. The six-minute walk distance (6-MWD) is an important clinical metric used for evaluating progression of disease. Several variables in this dataset were missing non-negligible amounts of data and the study authors used multiple imputation via chained equations with m = 25 imputations to address the missing data problem, using predictive mean matching as their imputation method for continuous variables and logistic or multinomial regression for categorical variables.
A central model of interest from this study was a GAM in which 6-MWD was modeled by various demographic and clinical covariates, as well as a smoothed term for continuous elevation. Elevation was provided based on patient home address. It was in this context that we apply the various rules of GAM combination across multiple imputations to evaluate how these methods perform on real-world data. Figure 9 shows the relevant covariate curves of 25 GAMs fit with ML to data from each of the m imputations. Although standard errors of the curves are not plotted for the sake of clarity, it is visibly evident how some curves might suggest a stronger relationship between elevation and distance walked than others.
Effects of smoothed elevation on 6-MWD for each imputation
This heterogeneity across imputations is further demonstrated in Fig. 10, which plots all p-values from each imputation's GAM as well as the results from all investigated pooling methods. The three panels represent three GAM fitting methods: ML, REML, and GCV, the last of which is the default GAM fitting method. The motivation for conducting GAMs with all three fitting methods was derived from preliminary concerns about inflated Type I error rates with the GCV approach.
Imputed and combined p-values across fit methods
For all model fitting options, the MPV approach yielded a p-value as anticipated – in the middle of all single imputation models' p-values. The Cauchy combination method gave smaller p-values than the other combination methods, while the D2 method and alternative D2 method produced p-values on the high end, if not outside the range, of the single imputation p-values. In this application, the selected pooling method could clearly impact the results of the study if the authors leaned heavily on the alpha = 0.05 criteria for statistical significance of p-values.
Binary outcome application: Intubation risk by C-reactive protein level
Our second application uses data from Peterson [21] and Windham et al. [22], which modeled the risk of intubation for hospitalized patients infected with COVID-19 based on patient characteristics including C-reactive protein (CRP) levels, age, body mass index (BMI), race, sex, lactate dehydrogenase, and additional clinical measures. Limited clinical insight into risk factors for COVID-19 complications early in the pandemic necessitated flexible modeling without restrictive parametric assumptions. Therefore, a GAM with a logit link was chosen with a smoothed term for CRP level to maximize its predictive ability on intubation risk.
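A hypothetical sketch of such a model on one completed dataset (variable names are illustrative, not the study's exact columns):

```r
library(mgcv)

# `intubated` is assumed to be coded 0/1
fit <- gam(intubated ~ s(crp) + age + bmi + sex + race,
           family = binomial(link = "logit"), data = d, method = "ML")
summary(fit)$s.table  # approximate test for the smoothed CRP term
```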
Missing data were non-negligible and dependent on observed patterns in the data. Therefore, we assumed a MAR framework and employed multiple imputation via chained equations with m = 25 imputations for analysis. As before, we examine the behavior of the MPV rule in this analysis in comparison to other pooling methods. ML was used in this application as the fitting method. Figure 11 shows the distribution of p-values after MI. The rank order of the combined p-values was similar between predictive mean matching and random forest methods. Again, we see that the Cauchy combination rule results in the smallest p-value, while the D2 and mean p-value rules have larger p-values. Unlike the prior application however, all methods lead to the same inferential decision (using a 5% significance level), so the choice of pooling method is less consequential. It is worth noting that a listwise deletion approach yields a p-value of 0.098, much higher than all other methods. Even though there were only 8 missing values for CRP out of the original 158 participants, only 51 participants had complete data for all covariates, so listwise deletion cuts the sample to only 32% of its original size. Hence, multiple imputation was necessary in this example to fully leverage the available data and to maximize power/precision.
Imputed and combined p-values for CRP and intubation relationship
When missing data arise, MI has proven to be a useful approach to minimizing the loss of power and bias that often accompany missing data. When using flexible modeling techniques on MI datasets such as GAMs, pooling methods are not always straightforward, especially when test statistics or parameters are not normal and cannot be combined with Rubin's rules. We have demonstrated that the MPV is a relatively valid, valuable, and straightforward way to pool results in comparison to other methods for estimation with GAMs or B-splines after MI. This result was found under MCAR, MAR, and MNAR missing data frameworks. This finding aligns with similar results from Eekhout, van de Wiel, and Heymans [13] in their application of the MPV to multi-categorical variables in logistic regression. Certainly, other methods have also demonstrated utility; if avoiding a Type II error significantly outweighs the cost of making a Type I error, then the Cauchy combination test is recommended. If the analyst is concerned with avoidance of Type I errors, then one of the D2 approaches or the mean p-value rule might prove useful. Aside from these cases, we conclude that true to its namesake, the MPV rule strikes an excellent middle ground by balancing decent power with a moderately controlled rate of false positives. Not only is the MPV an empirically helpful tool, but its simplicity facilitates its implementation in any software package.
The performance of D2 was somewhat poor in our simulation studies, but this is in line with what other empirical studies have found. In his description of the D2 method, van Buuren [5] describes that, in comparison to other combination methods, it is often underpowered since it utilizes less of the information that the data provides compared to other methods. However, in other simulations, the D2 method has performed too liberally, particularly with large sample sizes, large amounts of missing data, and few imputations [5]. Thus, the performance of D2 remains unpredictable.
There was a non-negligible amount of bias towards the alternative hypothesis in GAMs, even for models fit to the full dataset using maximum likelihood. This bias in the full-data models was attenuated using the B-spline LRT approach. However, since default options were used in the multiple imputation PMM and RF models, the imputation models were not correctly specified, which induced additional bias in both B-spline and GAM approaches when performed on imputed datasets (as seen in Figure A7 of the Additional file 1: Appendix). This observation indicates that a more flexible imputation model (e.g. one which utilizes splines) or a better-tuned imputation model would likely improve the performance of all methods by attenuating this remaining source of bias; this is a promising avenue for future research and additional simulation studies.
In short, we have shown the MPV rule can be a useful analytic tool for pooling GAM or B-spline results after multiple imputation for statisticians who need a straightforward, accurate, and easily implemented pooling approach when dealing with missing data in MCAR, MAR, and MNAR situations. Not only does the MPV have adequate or superior power for detecting a variety of linear and non-linear relationships compared to its alternatives, but it also does a reasonable job of controlling the Type I error rate, which seems to improve with the quality of the imputation model. While arguably the simplest method is to use the complete-case analysis and avoid the imputation procedure altogether, we have demonstrated that the MPV rule maintains simplicity while also leveraging all observed data to improve power and precision. Further research into the effectiveness of the MPV rule in additional settings such as nonparametric analyses, exact tests, and penalized regression may continue to expand upon its utility in multiple imputation.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
CRP:
C-reactive protein
GAM:
Generalized additive model
GCV:
Generalized cross validation
LRT:
Likelihood ratio test
ML:
Maximum likelihood
MPV:
Median p-value rule
PAH:
Pulmonary arterial hypertension
PMM:
Predictive mean matching (imputation method)
REML:
Restricted maximum likelihood
RF:
Random forest (imputation method)
6-MWD:
Six-minute walk distance
Mack C, Su Z, Westreich D. Types of Missing Data. In: Managing Missing Data in Patient Registries: Addendum to Registries for Evaluating Patient Outcomes: A User's Guide, Third Edition [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US). https://www.ncbi.nlm.nih.gov/books/NBK493614/. Accessed 20 April 2021.
Harrell F. Regression Modeling Strategies. Switzerland: Springer International Publishing; 2015.
Graham JW. Missing data analysis: making it work in the real world. Annu Rev Psychol. 2009;60:549–76. https://doi.org/10.1146/annurev.psych.58.110405.085530 PMID: 18652544.
Schafer JL. Multiple imputation: a primer. Stat Methods Med Res. 1999;8:3–15. https://doi.org/10.1177/096228029900800102 PMID: 10347857.
van Buuren S. Flexible imputation of missing data. Taylor & Francis Group: CRC Press; 2018.
Azur MJ, Stuart EA, Frangakis C, et al. Multiple imputation by chained equations: what is it and how does it work? Int J Methods Psychiatr Res. 2011;20:40–9. https://doi.org/10.1002/mpr.329.
Allison P. Imputation by predictive mean matching: promise & peril. Statistical Horizons. https://statisticalhorizons.com/predictive-mean-matching. Accessed 15 April 2020.
Bartlett J. Methodology for multiple imputation for missing data in electronic health record data. International Biometric Conference. http://thestatsgeek.com/wp-content/uploads/2014/09/RandomForestImpBiometricsConf.pdf. Accessed 15 April 2020.
Rubin D. Multiple Imputation After 18 Years. J Am Stat Assoc. 1996;91(434):473–89. https://doi.org/10.2307/2291635.
Heymans M, Eekhout I. Applied Missing Data Analysis with SPSS and (R) Studio. Amsterdam, Netherlands. 2019. https://bookdown.org/mwheymans/bookmi/
Wood SN. On p-values for smooth components of an extended generalized additive model. Biometrika. 2013;100(1):221–8. https://doi.org/10.1093/biomet/ass048.
Wood SN. Generalized Additive Models: An Introduction with R (2nd edition). New York: Chapman and Hall/CRC; 2017.
Eekhout I, van de Wiel MA, Heymans MW. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis. BMC Med Res Methodol. 2017;17:129. https://doi.org/10.1186/s12874-017-0404-7.
Rubin D. Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons; 1987.
Liu Y, Xie J. Cauchy combination test: a powerful test with analytic p-value calculation under arbitrary dependency structures. J Am Stat Assoc. 2020;115(529):393–402. https://doi.org/10.1080/01621459.2018.1554485.
Friedman J. Multivariate Adaptive Regression Splines. The Annals of Statistics. 1991;19(1):1–67 (http://www.jstor.org/stable/2241837).
van Buuren S, Groothuis-Oudshoorn K. mice: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software. 2011;45(3):1–67. https://www.jstatsoft.org/v45/i03/. Accessed 19 May 2021.
Schouten RM, Lugtig P, Vink G. Generating missing values for simulation purposes: a multivariate amputation procedure. J Stat Comput Simul. 2018;88(15):2909–30. https://doi.org/10.1080/00949655.2018.1491577.
Wood SN. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. J Royal Stat Soc (B). 2011;73(1):3–36.
Fakhri S, Hannon K, Moulden K, Peterson R, Hountras P, Bull T, et al. Residence at moderately high altitude and its relationship with WHO Group 1 pulmonary arterial hypertension symptom severity and clinical characteristics: the Pulmonary Hypertension Association Registry. Pulmonary Circulation. 2020. https://doi.org/10.1177/2045894020964342.
Peterson R. A Simple Aggregation Rule for Penalized Regression Coefficients after Multiple Imputation. J Data Sci. 2021;19(1):1–14. https://doi.org/10.6339/21-JDS995.
Windham et al. The Predictive Potential of Elevated Serum Inflammatory Markers in Determining the Need for Intubation in CoVID-19 Patients. J Crit Care Med. 2022;8(1):14–22. https://doi.org/10.2478/jccm-2021-0035.
The Pulmonary Hypertension Association Registry (PHAR) is supported by Pulmonary Hypertension Care Centers, Inc., a supporting organization of the Pulmonary Hypertension Association. The authors thank the other investigators, the staff, and particularly participants of the PHAR for their valuable contributions. A full list of participating PHAR sites and institutions can be found at www.PHAssociation.org/PHAR. The authors would also like to thank the CCTSI for additional support with data and REDcap management as well as the first author's colleagues, Samantha and MLE, for their insightful and supportive discussion.
The data acquisition for this project's applied analyses was supported by R01 AG054366-05S1 (NIA) and NIH/NCATS Colorado CTSA Grant Number UL1 TR002535.
Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado-Denver Anschutz Medical Campus, 13001 E. 17th Pl, Aurora, CO, USA
Matthew A. Bolt, Samantha MaWhinney, Jack W. Pattee & Ryan A. Peterson
School of Medicine, University of Colorado-Denver Anschutz Medical Campus, Aurora, CO, USA
Kristine M. Erlandson & David B. Badesch
Matthew A. Bolt
Samantha MaWhinney
Jack W. Pattee
Kristine M. Erlandson
David B. Badesch
Ryan A. Peterson
RP developed the research question, RP and MB developed the research plan, and MB performed statistical analysis and drafted initial versions of the manuscript. SM and JP contributed to the project's direction and conceptualization. KE and DB contributed toward data and results interpretation for the applications. All authors read and approved the manuscript.
Correspondence to Ryan A. Peterson.
The data collection process for the COVID-19 application was reviewed and approved by the Colorado Multiple Institutional Review Board as exempt research. The University of Pennsylvania institutional review board approved PHAR protocols and study-related activities for the six-minute walk distance application (Federal Wide Assurance number FWA00004028). Informed consent was obtained from each patient prior to enrollment. Data was de-identified before use. All methods were carried out in accordance with relevant guidelines and regulations.
Additional file 1:
Appendix.
Bolt, M.A., MaWhinney, S., Pattee, J.W. et al. Inference following multiple imputation for generalized additive models: an investigation of the median p-value rule with applications to the Pulmonary Hypertension Association Registry and Colorado COVID-19 hospitalization data. BMC Med Res Methodol 22, 148 (2022). https://doi.org/10.1186/s12874-022-01613-w
Pooling results
Simulation study
Cauchy combination test
Why were 18th century mathematicians interested in extending the factorial to non-integers?
As far as I understand, the Gamma function was developed as a way of calculating "the" factorial of a non-integer number. Why did this problem interest 18th century mathematicians? Was it just a puzzle, or did they have some specific application in mind?
mathematics 18th-century
Jack M
The problem can be seen as part of more general attempts to extend the domain of operations defined initially for naturals (integers, or rationals), only. A prominent natural example would be taking an $n$-th power (another would be the binomial coefficients, and related Newtonian series). The problem is, thus, not an isolated one, but rather part of a general and natural project of that time.
An application that directly motivated the invention of the Gamma function was Goldbach trying to find something like a closed form for $\sum_{k=1}^n k!$.
This answer is based on the information given in Section 1 of:
Gronau, D. Why is the gamma function so as it is. Teaching Mathematics and Computer Science, 1, 43-53 (2003).
The reference mentioned there to support this point of view specifically is :
J. Christoph Scriba, "Von Pascals Dreieck zu Eulers Gamma-Funktion, Zur Entwicklung der Methodik der Interpolation", in: Mathematical Perspectives, Essays on Mathematics and Its Historical Development, (Joseph W. Dauben, ed.), Academic Press, New York, 1981
I could not read this reference, but even only the title (my translation) "From Pascal's triangle to Euler's Gamma function, On the development of the method of interpolation." supports it.
The MO-question Who invented the gamma function? contains related information.
Euler's motivation
R. Hilfer on pg. 18 of "Threefold Introduction to Fractional Derivatives" states, "Derivatives of non-integer (fractional) order motivated Euler to introduce the Gamma function ...." Euler introduced in the same reference given by Hilfer essentially
$$\displaystyle\frac{d^{\beta}}{dx^\beta}\frac{x^{\alpha}}{\alpha!}=\frac{x^{\alpha-\beta}}{(\alpha-\beta)!}$$
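For instance, taking $\alpha = 1$ and $\beta = \tfrac{1}{2}$ in this rule gives the half-derivative of $x$, which already requires a factorial of a non-integer:
$$\frac{d^{1/2}}{dx^{1/2}}x=\frac{x^{1/2}}{(1/2)!}=\frac{\sqrt{x}}{\Gamma(3/2)}=\frac{2\sqrt{x}}{\sqrt{\pi}},$$
so making sense of $(1/2)!=\Gamma(3/2)=\sqrt{\pi}/2$ is precisely the interpolation problem the Gamma function solves.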
Related MSE-Q&A.
Tom Copeland
See also "Construction and physical application of the fractional calculus" by N. Wheeler – Tom Copeland Jun 9 '15 at 18:00
See also "Fractional derivatives and special functions" by Lavoie, Osler, and Trembley. – Tom Copeland Jan 1 '16 at 17:07
The reasons are the same as extending the binomial formula to non-positive exponents (Newton binomial). It involves generalized binomial coefficients which cannot be expressed in factorials. Binomial formula with non-integer exponent arises, for example in the computation of the arc length of an ellipse, and in other natural problems.
The first part of the survey by J. Lagarias, Euler's constant, in Bull AMS, 50 (2013) 527-628 (freely available online) contains a detailed analysis of Euler's work, and in particular addresses the need to interpolate various functions of integers with analytic functions.
Alexandre Eremenko
Interpolation became a big thing after Wallis's Algebra, where he first interpolated various sequences just for the fun of it. Since Wallis's "proofs" were hardly more than a sequence of educated guesses, Goldbach and Euler used Wallis's work as a collection of open problems.
Later, Euler would use the same methods for extending functions such as the zeta function to negative numbers and could even deduce (conjecturally) the functional equation of zeta(s) as well as for the Dirichlet L-function for the Dirichlet character $(-1/p)$ with conductor 4.
Image credit: Australian Electoral Commission.
The mathematics of voting
Expert reviewers
Australian Electoral Commission
All voting systems are underpinned by mathematics
In Australia, a candidate needs a majority of the votes to win a seat (as opposed to the 'first past the post' system, where whoever has the greatest number of votes wins)
There are 150 Members elected to the House of Representatives, from electorates of around 94,000 voters
To elect the House of Representatives, voters use a 'preferential system' to vote for several candidates in their order of preference, rather than just a single candidate. Preferences are allocated until a candidate has a majority of votes
There are eight Senate electorates—the six states and two Territories. Regardless of their population numbers, each state elects 12 Senators, and each Territory elects two.
To elect Senators, voters again use a preferential system, but because more than one candidate is elected at a time, a complex quota system is used to determine the winning candidates
There are almost as many different voting systems in the world as there are elected assemblies. The one thing they all have in common is their reliance on mathematics to calculate the results.
In Australia, we elect a group of people, a Parliament, to make decisions and laws on behalf of the whole country. We elect a new Parliament approximately every 3 years. At this time each adult citizen votes for the candidates in their electorate who they think will make the best decisions on their behalf. Candidates are then elected according to votes received, in accordance with the Australian electoral system.
While there are many different electoral systems around the world, all are based on mathematical criteria of fairness. However, electoral systems are generally chosen or refined for political rather than mathematical reasons.
The system used in Australia was based partly on the British Westminster system and partly on the American system. Over time some distinctly Australian characteristics have developed, such as the principle of 'one person, one vote', and a candidate needing to receive a majority of votes to be elected.
Many countries still use the 'first past the post' system, where the candidate with the highest number of votes wins. The problem with this system is that the person with the most votes may still have less than half of all the votes.
For example, let's say that Anna, Brett, Christine and Daniel are candidates in an electorate where 100 votes are cast as follows: Anna 36, Brett 30, Christine 23, and Daniel 11.
You can see that even though Anna got the most votes (36), there were almost twice as many people who didn't vote for her (64).
An option used in some countries is to have several rounds of elections—the candidates with the least number of votes drop out, and the whole election is held again with fewer candidates, until someone gets more than half of all votes.
Rather than have several rounds of elections, which requires asking voters to vote more than once, it is mathematically possible to achieve the same result by asking voters to mark their preferences for all candidates. This is called a preferential system and it is used in Australia to elect our Parliament.
In a preferential voting system you vote for each candidate in order of preference, rather than voting for just one candidate. This tells us every voter's first preference, second preference, third preference, and so on. After first votes are allocated, the candidate with the least number of votes drops out and their votes are given to the others according to preferences. This process is repeated until someone gets more than half the votes, as the following example shows.
Anna, Brett, Christine, and Daniel are again candidates in an election with 100 voters. The count starts with the first preferences, cast as follows.
Anna 36
Brett 30
Christine 23
Daniel 11
Daniel, with the least number of votes, drops out. His 11 votes are now given to their second preference: 3 votes had Anna at number 2, 6 had Brett at number 2, and 2 had Christine at number 2.
Reassigned votes:
Anna: 36 + 3 = 39
Brett: 30 + 6 = 36
Christine: 23 + 2 = 25
Because no-one has more than half the number of votes, another candidate is dropped. This time Christine has the least number of votes so her votes are distributed according to their second preferences: 8 to Anna, 13 to Brett, and 2 to Daniel. But Daniel has already dropped out, so those last 2 go to the third preference, one each to Anna and Brett. The two votes that previously went from Daniel to Christine are now given to their third preference, this time both go to Brett. So now we have:
Anna: 36 + 3 + 8 + 1 + 0 = 48
Brett: 30 + 6 + 13 + 1 + 2 = 52
Total: 66 + 9 + 21 + 2 + 2 = 100
Brett now has more than half the total vote so he gets elected.
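In computational terms, this count is a simple elimination loop. A minimal sketch in R (the ballots below are hypothetical and tiny; a real count also needs formal rules for breaking ties):

```r
# Preferential (instant-runoff) count: each ballot is a character vector of
# candidate names in the voter's order of preference, most preferred first.
irv_winner <- function(ballots) {
  repeat {
    firsts <- vapply(ballots, function(b) b[1], character(1))
    tally <- table(firsts)
    if (max(tally) > length(ballots) / 2) {
      return(names(tally)[which.max(tally)])  # a candidate has a majority
    }
    loser <- names(tally)[which.min(tally)]   # fewest current first preferences
    # Strike the eliminated candidate from every ballot and recount
    ballots <- lapply(ballots, function(b) b[b != loser])
  }
}

ballots <- list(c("Anna", "Brett"), c("Anna", "Brett"), c("Brett", "Anna"),
                c("Christine", "Brett", "Anna"), c("Brett", "Christine", "Anna"))
irv_winner(ballots)  # returns "Brett"
```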
Electing the Australian Parliament
The Australian Parliament has two chambers—the House of Representatives and the Senate.
Australia has 150 electorates that each elect a single member to the House of Representatives, using the preferential voting system.
Around 150,000 people live in each House of Representatives electorate, with an average of 94,000 voters. Electorates have similar numbers of voters because each electorate gets exactly one vote in Parliament. It means that all Australians have roughly equal representation. But it does mean that more than half of all seats come from New South Wales and Victoria because more people live there.
There are only eight electorates for the Senate—the six States and two Territories. These electorates are much bigger than those for the House of Representatives but each State elects twelve Senators and each Territory elects two. Notice that New South Wales, with 5,126,651 voters, has as many senators as Tasmania with 375,024 voters—less than one-tenth as many as New South Wales (figures from September 2016).
One of the reasons for creating the Senate was to give people in the less populated States more of a say in Parliament.
The Senate also uses preferential voting, but because more than one person is elected from each electorate the situation is more complex. Usually, a large number of candidates stand for Senate seats, and placing them all in order of preference can take a voter a long time. For example, the ballot paper for the 2013 Senate election had so many candidates that it was over one metre long, and voters were handed magnifying glasses to help them read it!
When voting for the Senate, you can place a vote 'above the line' as a shortcut, so you don't need to number all your preferences. Image source: Australian Electoral Commission.
The ballot paper listing the Senate candidates for a particular state is divided by a horizontal line into two sections, and traditionally voters had the option to vote 'above the line' or 'below the line'. Parties and groups (the Labor Party, the Greens etc.) are listed above the line, and a voter only had to mark a single box. Below the line, all individual candidates are listed and, until the electoral changes of 2016, voters were required to allocate a preference to every candidate; failure to do so rendered the vote invalid.
Unsurprisingly, not many voters chose to number all the boxes below the line to make their preferences clear, less than 5 per cent in fact. As a shortcut, the different political groups would work out their own preferences prior to the election. Known as 'Group Voting Tickets', these were pre-assigned sequences of preferences: each party or group would negotiate with the other parties and candidates, making deals as to who got their preferences and how they were allocated. For example, if you voted above the line for the Liberal Party of Australia, your vote was distributed according to the wishes or preferences of that party.
Group Voting Tickets were criticised due to the fact that most voters had no way of knowing what deals their selected party had made or where their vote actually ended up. Although this information was available online, it was confusing and complex.
The situation came to a head at the 2013 election, when 'preference whispering' and backroom deals resulted in Motoring Enthusiast Party member Ricky Muir being elected with less than 0.51 per cent of the overall vote. His win was based solely on the distribution of preferences. His success, and that of several other 'micro-party' candidates elected via preferences, resulted in the decision being made that the system had to change.
Electoral reform for the Senate was introduced in 2016. These changes abolished Group Voting Tickets and instead replaced them with optional preferential voting. This means voters can now select multiple boxes above the line to assign their preferences for parties, or they can vote for individual candidates below the line, but now do not have to fill out every box, as they did previously (a welcome change considering there are often upwards of 70 boxes below the line). These changes are expected to result in fewer 'informal' (invalid) votes and should enable voters to have more of a say in where their vote ends up.
It was also decided that party logos would be included on the ballot paper to help voters distinguish between parties with similar-sounding names (such as the Liberal Party of Australia and the Liberal Democratic Party).
Another complication is that State senators usually sit through two Parliaments: six of a State's twelve Senate seats are filled at one election and the other six at the next. (There are rare situations when all twelve Senate seats are up for election.) The Territories elect their two Senators at each election.
The number of votes needed to win a Senate seat is called a quota. The value of the quota is determined by how many senators are being elected:
the quota needed when there are two senators being elected is one vote more than one-third of all votes.
the quota needed when there are six senators being elected is one vote more than one-seventh of all votes.
the quota needed when there are twelve senators being elected is one vote more than one-thirteenth of all votes.
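These three statements are instances of a single rule (the Droop quota). A minimal sketch, assuming the vote total and seat count are plain integers:

```python
def senate_quota(formal_votes: int, seats: int) -> int:
    """One vote more than 1/(seats + 1) of all formal votes, fractions discarded."""
    return formal_votes // (seats + 1) + 1

print(senate_quota(200_987, 2))  # 66996 -- matches the ACT example below
```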
Counting the Senate vote
Let's have a look at how votes and preferences are allocated. Emiko, Gerard, Harry and Ingrid are candidates for two Senate seats in the Australian Capital Territory, in an election where there were 200,987 formal votes. First, the quota that a candidate requires to be elected is calculated.
$$\text{quota} = \left\lfloor 200,987 \div 3 \right\rfloor + 1 = 66,996$$
So, a candidate needs to get 66,996 votes to be elected.
The first step in counting is the allocation of the first preferences.
Ingrid 18,237
Emiko 83,498 (quota reached)
Harry 44,471
Gerard 54,781
Emiko has reached the quota and is elected.
Emiko received 16,502 votes more than the required quota of 66,996; these are surplus votes. Her surplus votes will now be transferred to the voters' next preferred candidates. However, the voters who gave Emiko their first preference will not all list the same candidate as their second preference, and we have to take this into account when we allocate Emiko's surplus. The process is to transfer all of her votes, but at a reduced value, known as the transfer value: the number of surplus votes from the candidate whose votes are being transferred, divided by the total number of that candidate's votes.
The transfer value is calculated as a decimal proportion of the candidate's total vote:
$$\text{transfer value} = \text{number of surplus votes} \div \text{total number of votes}$$
Emiko's transfer value is:
$$16,502 ÷ 83,498 = 0.19763347$$
The result is calculated to eight decimal places, with no rounding.
All of Emiko's 83,498 votes are then transferred to each voter's second preference, multiplied by the transfer value of 0.19763347.
55,009 of Emiko's second preferences go to Ingrid.
$$55,009 \times 0.19763347 = 10,871 \text{ votes}$$
After the transfer value is applied, Ingrid gets 10,871 extra votes. The exact result of this calculation was 10,871.61, but only full votes can be transferred, so this figure is rounded down to the nearest whole number.
Harry gets 24,977 of Emiko's second preferences.
$$24,977 \times 0.19763347 = 4,936 \text{ votes}$$
Gerard gets 3,512 of Emiko's second preferences.
$$3,512 \times 0.19763347 = 694 \text{ votes}$$
So, now we have:
Transfer votes from Emiko's surplus:
Gerard 54,781 + 694 = 55,475
Harry 44,471 + 4,936 = 49,407
Ingrid 18,237 + 10,871 = 29,108
The number of transfer votes allocated after application of the transfer value matches Emiko's surplus of 16,502, less one vote lost to rounding down:
$$694 + 4,936 + 10,871 = 16,501$$
Application of the transfer value divides all of Emiko's first preference votes into the portion required for her to reach the quota, and the portion that is available to be transferred to the next preferred candidate.
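In code, this arithmetic amounts to a division truncated to eight decimal places, followed by rounding each transferred parcel down to whole votes. A minimal sketch reproducing the numbers of this example (the helper name is ours):

```python
import math

def transfer_value(surplus: int, total: int) -> float:
    """Surplus divided by total votes, truncated (not rounded) to 8 decimal places."""
    return math.floor(surplus / total * 10**8) / 10**8

tv = transfer_value(16_502, 83_498)  # 0.19763347
second_preferences = {"Ingrid": 55_009, "Harry": 24_977, "Gerard": 3_512}
for candidate, count in second_preferences.items():
    print(candidate, math.floor(count * tv))  # 10871, 4936, 694
```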
At this stage, none of the three remaining candidates have reached the quota of 66,996 votes. And even though Ingrid received most of Emiko's surplus, at this point she will be excluded because she has the least number of votes.
Ingrid's 18,237 first preference votes are now distributed as follows:
Votes with Gerard or Harry as second preference go straight to them.
Votes with Emiko as second preference go to the voter's third preference, which must be either Gerard or Harry.
Gerard receives 1,223 of these preference votes and Harry 17,014 (the people that voted for Ingrid were much less likely to preference Gerard!).
The situation now is:
Votes after transfer of Emiko's surplus, plus Ingrid's first-preference votes redistributed by second preference:
Gerard 55,475 + 1,223 = 56,698
Harry 49,407 + 17,014 = 66,421
Now we need to consider the 55,009 transfer votes that Ingrid received from Emiko's surplus. These votes are transferred to either Gerard or Harry, whichever candidate is given the highest preference. They are transferred at the same reduced transfer value at which Ingrid received them.
Of Emiko's surplus votes received by Ingrid, 16,879 of them preferenced Gerard, so he gets:
$$16,879 \times 0.19763347 = 3,335 \text{ extra votes}$$
Harry was preferenced in 38,130 of these votes, so he gets:
$$38,130 \times 0.19763347 = 7,535 \text{ extra votes}$$
Again, the total number of votes transferred matches the reduced value of the votes Ingrid received from Emiko's surplus (10,871), less one vote lost to rounding down:
$$3,335 + 7,535 = 10,870$$
The final picture is:
Votes after transfer of Emiko's surplus and Ingrid's exclusion, plus transfer votes from Ingrid's share of Emiko's surplus:
Gerard 56,698 + 3,335 = 60,033
Harry 66,421 + 7,535 = 73,956 (quota reached)
Harry reaches the quota and is elected to the second seat. Congratulations Harry!
Force on a Current Carrying Conductor in a Magnetic Field
When a current carrying conductor is placed at right angles to a magnetic field, it is found that a force acts on the conductor in a direction perpendicular to the direction of both the magnetic field and the current.
Consider a straight conductor carrying a current of I amperes. Let the magnetic flux density be B, the effective length of the conductor l, and θ the angle which the conductor makes with the direction of the magnetic field.
It has been found by experiments that the magnitude of the force (F) acting on the conductor is directly proportional to −
Magnetic flux density (B),
Current through the conductor (I), and
Sine of the angle θ i.e. sinθ.
$$F \propto BIl\sin\theta$$
$$\Rightarrow F = kBIl\sin\theta$$
where k is the constant of proportionality; its value is unity in SI units. Thus,
$$F = BIl\sin\theta \quad \ldots (1)$$
Case 1 − When θ = 0° or 180°, then sin θ = 0; hence,
$$F = 0 \quad \ldots (2)$$
Case 2 − When θ = 90°, then sin θ = 1; hence,
$$F = BIl \quad \ldots (3) \quad \text{(i.e., } F \text{ is maximum)}$$
Numerical Example
A straight wire 0.5 m long carries a current of 150 A and lies at an angle of 60° to a uniform magnetic field of 2.5 Wb/m². Find the mechanical force on the conductor when (a) it lies in the given position, (b) it lies at right angles to the magnetic field.
When conductor lies at 60° to the magnetic field
$$F = BIl\sin\theta = 2.5 \times 150 \times 0.5 \times \sin 60^\circ = 162.38\ \text{N}$$
When conductor lies at right angles to the magnetic field
$$F = BIl = 2.5 \times 150 \times 0.5 = 187.5\ \text{N}$$
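Both cases are easy to check numerically. A minimal sketch using Eq. (1) (the function name is illustrative):

```python
import math

def conductor_force(B, I, l, theta_deg):
    """F = B*I*l*sin(theta): force in newtons on a straight conductor of length
    l (m) carrying current I (A) at angle theta to a field of flux density B (T)."""
    return B * I * l * math.sin(math.radians(theta_deg))

print(conductor_force(2.5, 150, 0.5, 60))  # ~162.38 N
print(conductor_force(2.5, 150, 0.5, 90))  # 187.5 N
```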
Sea stars generate downforce to stay attached to surfaces
Mark Hermes & Mitul Luhar
Intertidal sea stars often function in environments with extreme hydrodynamic loads that can compromise their ability to remain attached to surfaces. While behavioral responses such as burrowing into sand or sheltering in rock crevices can help minimize hydrodynamic loads, previous work shows that sea stars also alter body shape in response to flow conditions. This morphological plasticity suggests that sea star body shape may play an important hydrodynamic role. In this study, we measured the fluid forces acting on surface-mounted sea star and spherical dome models in water channel tests. All sea star models created downforce, i.e., the fluid pushed the body towards the surface. In contrast, the spherical dome generated lift. We also used Particle Image Velocimetry (PIV) to measure the midplane flow field around the models. Control volume analyses based on the PIV data show that downforce arises because the sea star bodies serve as ramps that divert fluid away from the surface. These observations are further rationalized using force predictions and flow visualizations from numerical simulations. The discovery of downforce generation could explain why sea stars are shaped as they are: the pentaradial geometry aids attachment to surfaces in the presence of high hydrodynamic loads.
Intertidal sea stars often function in environments with extreme hydrodynamic loads1 that can compromise their ability to remain attached to and move on surfaces2. While behavioral responses such as burrowing into sand or sheltering in rock crevices can help minimize hydrodynamic loads, previous work shows that sea stars also alter body shape in response to flow conditions3. This morphological plasticity suggests that sea star body shape and size may play an important hydrodynamic role. Specifically, Hayne and Palmer3 demonstrated that the arms of the purple sea star (Pisaster ochraceous) narrow and lose mass when transplanted to a more wave-exposed environment. The authors hypothesized that this transformation is a functional response to wave intensity: that by changing shape they are minimizing drag or related hydrodynamic forcing. Further, Computational Fluid Dynamics (CFD) simulations of wave-exposed sea star models showed a decrease in both lift and drag coefficients compared to models of sheltered sea stars, suggesting that the observed shape adaptation conferred a hydrodynamic benefit.
In this paper, we evaluate the hydrodynamics associated with sea star body shapes in turbulent flow conditions via laboratory experiments and CFD simulations. Though natural sea stars can exhibit significant diversity in surface texture, number of arms, and other morphological properties, we limit our study to smooth pentaradial shapes to isolate the effect of arm aspect ratio. Specifically, we consider models of comparable shape and size to adult purple sea stars. Sea stars are known to increase dramatically in size from the early juvenile to the adult stage4,5. The evolution in the hydrodynamic response during this development is outside the scope of the present study. In addition to providing insight into previous biological observations, the present effort has potential applications in fields ranging from flow control to shape optimization for vehicles.
Existing research investigating flow over surface-mounted objects has focused on characterizing the effect of surface fouling, designing drag-reducing structures, and studying airflow around buildings6,7,8,9. A variety of geometries have been considered, including cubes10,11, circular cylinders12, hemispheres12,13,14, pyramids15,16, cones17,18,19, and triangular cylinders20,21. Most of these studies provide drag and drag-coefficient estimates. Measurements of lift are rarer, in part because lift forces may be less important for the applications described. However, lift is a useful performance metric for cases in which surface attachment is a concern.
Perhaps the closest shapes to the sea star that have been studied are pyramids and triangular cylinders. Measurements made by Ikhwan15 indicate that pyramids generate lift forces that increase with increasing aspect ratio. On the other hand, Iungo and Buresti20 show that triangular cylinders generate downforce. This downforce is shown to originate from an upward deflection of the wake behind the cylinders, and its magnitude increases with increasing steepness of the triangular cross-section.
The experiments performed in this paper similarly show downforce generation for sea star models that is associated with an upward deflection of fluid flow around the body. However, downforce is not observed in experiments with spherical domes of comparable size to the sea star models. These observations, together with results from complementary CFD simulations of flow over cones, pyramids, and triangular cylinders, suggest that the radially symmetric sea star geometry generates a hydrodynamic response similar to (nearly) two-dimensional triangular cylinders.
We mounted 3-D printed sea star and spherical dome models to a load cell in a water channel to study the effect of morphology on the mean drag and lift forces (\(F_d\), \(F_L\)) generated by these objects across a range of flow speeds (U). The effect of sea star morphology was studied by varying the arm aspect ratio AR over the range of values reported for Pisaster ochraceous3; here, AR is defined as the ratio between the arm length measured from the distal tip to the central axis and the arm width measured at the base intersection. We also measured the effect of body orientation with respect to the flow, \(\Theta \), for the pentaradial sea star models using a servo motor positioning system. We supplemented these force measurements with PIV-based flow visualization and control volume analyses. To provide additional qualitative and quantitative insight into the experimental observations, we also pursued CFD simulations for a limited range of geometries. Details regarding model design, experimental apparatus, and analysis methods can be found in Sect. 4.
Effect of aspect ratio and orientation on drag and lift
Mean (a) drag force, (b) lift force, (c) drag coefficient, and (d) lift coefficient values for sea star and spherical dome models shown as functions of freestream velocity (a,b) and Reynolds number (c,d). The sea star models have aspect ratios \(AR = 4.0\) (red symbols), 2.5 (green symbols), and 1.5 (blue symbols).
Figure 1 compares the mean drag and lift generated by three sea star models of varying aspect ratio against the drag and lift generated by a spherical dome of similar height and base diameter as the sea star models. For these measurements, the sea star models were oriented at \(\Theta = 0^\circ \), i.e., with one limb pointing into the oncoming flow. As expected, the magnitude of the drag and lift forces generated by the models increases with increasing freestream velocity. However, the corresponding drag and lift coefficients (\(C_d\), \(C_L\); see Eqs. (2) and (3)) show more limited variation with Reynolds number (Re, defined using base diameter; see Eq.(4)). Perhaps the most striking feature of the results presented in Fig. 1 is that all sea star models generate downforce (i.e., \(F_L<0\) and \(C_L < 0\)) while the spherical dome model generates positive lift forces that are much higher in magnitude. Lift coefficient values for all three sea star models are similar within uncertainty at the lowest Reynolds number. However, measurements made at higher Reynolds numbers suggest that the \(AR = 1.5\) sea star model (blue symbols) generates the least downforce and has the lowest \(|C_L|\). Importantly, though the pentaradial sea star models generate downforce, they also incur a drag penalty compared to the spherical dome. The drag coefficients for the sea star models (\(C_d > 0.9\)) are significantly higher than those for the spherical dome (\(C_d < 0.6\)). Further, the drag coefficients for the sea star models increase with increasing aspect ratio.
Drag and lift coefficient values for sea star models for varying orientation angles at flow speed \(U \approx 0.46\) ms\(^{-1}\). Given the pentaradial symmetry of the sea star models, \(C_d\) and \(C_L\) for \(\Theta = 36\,^\circ \) to \(\Theta = 72\,^\circ \) can be estimated by mirroring the data shown in this figure.
Figure 2 shows drag and lift coefficients for all three sea star models as a function of the orientation angle \(\Theta \) for the highest Reynolds number case shown in Fig. 1. Consistent with the results from Fig. 1, there is a monotonic increase in \(C_d\) as a function of aspect ratio. However, the drag coefficient values show no consistent trend with respect to orientation. Lift coefficients for all three geometries similarly show no clear trend with respect to orientation, though there is a consistent increase in \(C_L\) with \(\Theta \) for the model with \(AR = 2.5\) (green symbols). In general, measured \(C_L\) values for the models with \(AR = 4.0\) (red symbols) and \(AR = 2.5\) (green symbols) are significantly lower than the values measured for \(AR=1.5\) (blue symbols). Together, these observations indicate that the drag and lift forces generated by the sea star models are relatively insensitive to orientation, and confirm that \(C_d\) and \(|C_L|\) increase with increasing AR.
PIV flow visualization and control volume analysis
Mean flow visualization from experiments (a–c) and CFD simulations (d–f) for: \(AR = 4.0\) sea star model (a,d); \(AR = 1.5\) sea star model (b,e); and spherical dome (c,f). The experiments show results for \(U = 0.47 \pm 0.01\) ms\(^{-1}\) while the simulations were performed for \(U = 0.35\) ms\(^{-1}\). Panels (a–c) show the vector field estimated from PIV while panels (d–f) show contours of the mean velocity in the streamwise direction. All panels show the flow field at the central (or median) plane of the models. The sea star models are oriented at \(\Theta = 0^\circ \). Figure created using Ansys Fluent 2019 R2 https://www.ansys.com/.
To provide further insight into the force measurements shown in Fig. 1, we pursued PIV experiments at freestream velocity \(U = 0.47 \pm 0.01\) ms\(^{-1}\). Though the fluid forces acting on the objects arise from three-dimensional flow fields, the planar mean flow visualizations shown in Fig. 3a–c provide a partial physical explanation for the observed trends in drag and lift. Specifically, the sharp apex creates a distinct separation point for the flow over the sea star models. In contrast, the flow over the spherical dome has a separation point much further down the body, resulting in a smaller wake region compared to the sea star models. Further, the wake behind the high aspect ratio sea star model is larger than the wake behind the low aspect ratio sea star model. These observations are qualitatively consistent with the drag force measurements shown in Fig. 1a: the sea star models generate more drag than the spherical dome, and the drag force generated increases with increasing aspect ratio. Importantly, the wakes behind the sea star models clearly show an upward redirection of the flow beyond the apex. An upward redirection of the flow is not observed for the spherical dome due to the delayed separation. These observations provide a qualitative explanation for the lift trends observed in Fig. 1b. The downforce experienced by the sea star models is a consequence of the upward redirection of fluid momentum in the wake.
Table 1 Mean values for lift (\(F_L{^\prime }\)) and drag (\(F_d{^\prime }\)) per unit length estimated from control volume analyses of planar vector fields obtained from PIV.
We also used a control volume approach (described in Sect. 4) to estimate drag and lift forces per unit length (\(F_d{^\prime }\), \(F_L{^\prime }\)) from the planar PIV measurements. Table 1 lists the mean values for \(F_d{^\prime }\) and \(F_L{^\prime }\) obtained after averaging over all PIV frames.
Consistent with the load cell measurements of lift shown in Fig. 1b, \(F_L{^\prime }\) is positive for the spherical dome and negative for the two sea star models. In addition, the magnitude of the estimated lift per unit length (\(|F_L{^\prime }|\)) for the spherical dome is nearly twice that for the sea star models. Both sea star models have similar \(F_L{^\prime }\) values though the \(AR=4.0\) model is estimated to generate slightly higher downforce per unit length. Similarly, the estimates for \(F_d{^\prime }\) are also consistent with the drag measurements shown in Fig. 1a. The high aspect ratio sea star model has higher \(F_d{^\prime }\) than the low aspect ratio model, and the low aspect ratio star has higher \(F_d{^\prime }\) than the spherical dome.
CFD simulation results
To supplement the experiments, we pursued CFD simulations in ANSYS Fluent for a subset of the sea star models, the spherical dome, and several related geometries. The additional shapes (hemisphere, cone, pyramid, triangular prism; see Fig. 5) were created to have the same frontal area and height as the sea star. For the triangular prism, the streamwise length was set to be similar to the base width of the sea star model, \(L = 19\) cm, such that the prism cross-section was identical to the midplane cross-section of the sea star model shown in Fig. 3a. These shapes were tested in water flow with an inlet speed of \(U = 0.35\) ms\(^{-1}\). Experimental data for the sea star and spherical dome models shown in Fig. 1 were linearly interpolated for comparison at this flow speed.
As shown in Fig. 4, the drag and lift coefficients computed from the simulations agree, within uncertainty, with the values obtained in experiments for the \(AR = 4.0\) sea star and spherical dome. Further, as shown in Fig. 3, the mean flow fields and wake structures obtained in the simulations are in good qualitative agreement with the PIV results. For instance, the vertical extent of the wake region is largest for the \(AR = 4.0\) sea star and smallest for the spherical dome. These observations give us confidence that the CFD simulations can reasonably reproduce the flow physics observed in the real world experiments.
(a) Drag and (b) lift coefficients of models for experiments and simulations in a flow with speed 0.35 ms\(^{-1}\).
The simulation results in Fig. 4b also suggest that downforce is not obtained for the pyramid or cone shapes, despite these objects having a sharp apex similar to the sea star models. Downforce is only observed for the triangular prism, and this downforce carries a significant drag penalty. These observations can be explained by considering the pathline visualizations shown in Fig. 5. For the downforce-producing shapes, the pathline visualizations show that the streamwise vorticity is negative (blue) on the left and positive (red) on the right side of the shapes. This results in a significant upwelling of fluid from the central region of the wake into the freestream. The momentum transport associated with this upwelling flow explains the high drag and negative lift generated by the sea star models and the triangular prism. Pathlines near the apex for the cone and pyramid shapes also show evidence of a similar arrangement in streamwise vorticity. However, pathlines at the base of the pyramid and cone shapes show a reversal in sign for the streamwise vorticity: the streamwise vorticity is positive (red) on the left and negative (blue) on the right, similar to the flow field observed around the spherical dome. This is indicative of a downwelling flow that transports high momentum fluid from the freestream into the wake and explains the lower drag and positive lift forces experienced by spherical dome, cone, and pyramid shapes. Note that the pathlines and vorticity distributions around the base of the lift-producing spherical dome, cone, and pyramid shapes in Fig. 5 are consistent with the horseshoe vortex systems typically observed in flows around surface mounted bodies12,22,23. Visualizations for the sea star bodies do not show evidence of this horseshoe vortex system.
Pathlines of flow over a spherical dome, cone, pyramid, \(AR=1.5\) sea star, \(AR=4.0\) sea star, and triangular prism. The pathlines are colored based on the local streamwise vorticity. Objects are placed in order of descending lift force. Figure created using Ansys Fluent 2019 R2 https://www.ansys.com/.
The ability to stay attached to surfaces plays an important role in sea star locomotion and survival. The results presented in this study show that pentaradial sea star body shapes generate downforce independent of the incoming flow direction. This downforce could help sea stars avoid hydrodynamic dislodgement.
Hayne and Palmer3 showed that Pisaster ochraceous sea stars exhibit significant morphological plasticity in response to hydrodynamic conditions. Specifically, observations made in different environments showed a linear relationship between mean wave speed and sea star aspect ratio. Transplant studies confirmed this trend: sea stars moved into higher energy environments showed an increase in body aspect ratio. One hypothesis proposed to explain this correlation was that the change in body shape may enable sea stars to better resist hydrodynamic forces. Our results show that an increased aspect ratio produces a larger downforce, but this comes at the expense of a larger drag force. In the present study, sea star height and frontal area were maintained constant across the different aspect ratios tested. In contrast, the observations made by Hayne and Palmer3 indicate that the increase in sea star aspect ratio is also accompanied by a decrease in height, i.e., sea stars exhibit higher aspect ratios and reduced height in more energetic environments. It is possible that the high aspect ratio body shape generates greater downforce while the reduction in height limits the drag penalty.
Sea stars may also prioritize downforce maximization over drag minimization. Per Martinez24, the following condition, derived from a simple balance of moments, can be used to evaluate the possibility of animal detachment in steady flows:
$$\begin{aligned} \frac{F_d\, h}{(W-B-F_L)(L/2)} > 1. \end{aligned}$$
Here \(F_d\) is the drag force, h is the height of the center of mass, W is the weight of the organism, B is buoyancy, \(F_L\) is the lift force, and L is the base length defined in Sect. 4. For a sea star of comparable size to the models tested here, \(L \approx 19\) cm and \(h \approx 5\) cm, the net vertical force, \(W-B-F_L\), has approximately double the effect of the horizontal drag force, \(F_d\). Thus, the higher downforce may still be beneficial for sea stars staying attached to surfaces despite the drag penalty incurred.
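To make the moment balance concrete, here is a minimal sketch of the detachment condition. The lengths follow the text (L = 19 cm, h = 5 cm), while the force and weight values in the example are hypothetical placeholders chosen only to illustrate the effect of the sign of \(F_L\):

```python
def dislodged(F_d, F_L, W, B, h=0.05, L=0.19):
    """Detachment condition from the moment balance: True when the overturning
    moment exceeds the restoring moment. Forces in N, lengths in m; F_L < 0
    denotes downforce."""
    return (F_d * h) / ((W - B - F_L) * (L / 2)) > 1

# Hypothetical loads: downforce keeps the ratio below 1, lift pushes it above.
print(dislodged(F_d=2.0, F_L=-1.2, W=3.0, B=1.0))  # False
print(dislodged(F_d=2.0, F_L=+1.2, W=3.0, B=1.0))  # True
```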
The preceding discussion suggests that the pentaradial body shape and morphological plasticity exhibited by sea stars enable them to better resist hydrodynamic loads. Sea urchins are often found in the same environment as sea stars and have a comparable biological adhesion mechanism25,26,27. Yet it is unlikely that the spiny spheroidal geometry typical of sea urchins leads to downforce generation. A characterization of the hydrodynamic forces acting on sea urchin body shapes may provide additional insight into how these organisms remain attached to surfaces in energetic flow conditions.
Although the ability to generate downforce has been observed previously for bilaterally symmetric aquatic organisms such as clams28 and crabs24, the orientation-independent nature of the downforce observed in this study is unique to the pentaradial sea star geometry. Bilaterally symmetric organisms must either be passively aligned in the flow to generate downforce, as is the case for clams while swash-riding28, or actively posture in the flow, as is the case with crabs in certain flow conditions24. In other words, bilaterally symmetric organisms require some degree of passive or active reorientation to produce downforce for different flow directions. On the other hand, sea star body shapes produce downforce independent of orientation relative to the flow. Since downforce is not produced by radially symmetric spherical domes and cones, we suggest that the pentaradial geometry of sea star bodies is unique in that it generates a hydrodynamic response that is similar to a (nearly) two-dimensional triangular prism but also insensitive to the incoming flow direction. Our observations suggest that the downforce generated by the sea star bodies and the triangular prism arises from the upwelling flow created along the centerline, with the breakdown of the horseshoe vortex system around the base also playing a role.
We recognize that the present work has some important limitations. Specifically, we only consider a limited range of (smooth) sea star morphologies in steady flow. Intertidal sea stars exhibit significant diversity in body shape, size, and surface texture29. Moreover, the intertidal zone is likely to be dominated by wave-driven unsteady flows in which inertial effects (e.g., added mass) can also play a role1. Nevertheless, this study presents the first evidence for downforce generation with pentaradial sea star body shapes. This orientation-independent downforce could have important implications for sea star locomotion and survival.
Flow facility and experiment setup
All experiments were performed in a large-scale free surface water channel facility in the Fluid-Structure Interactions laboratory at USC. This facility has a glass-walled test section of length 7.6 m, width 0.9 m, and depth 0.6 m, and is capable of generating flows with freestream velocities up to \(U \approx 0.6\) ms\(^{-1}\). As shown in Fig. 6b, 3D-printed sea star and spherical dome models were mounted towards the leading edge of a flat plate setup in the water channel. The models were mounted 3 cm from the leading edge of the plate to limit boundary layer development and positioned 5 mm from the plate surface. This distance was chosen to approximate the height of the tube feet (or podia) below the sea stars. Additional force measurements conducted with the models placed 25 mm from the plate surface showed very similar trends to the results presented in Sect. 2.1. A positioning system was used to precisely control model orientation \(\Theta \) with respect to the flow and vertical distance with respect to the smooth plate surface. An Arduino and a high torque Hitec servo motor controlled the rotation system. An Actuonix linear servo motor with 100 mm stroke controlled the vertical positioning. The models were tested in flows with freestream velocity ranging from roughly \(U \approx 0.24\) ms\(^{-1}\) to \(U \approx 0.47\) ms\(^{-1}\). A Laser Doppler Velocimeter (MSE miniLDV) placed 3 m downstream from the end of the flat plate setup was used to monitor the flow speed.
(a) Schematic showing the sea star and spherical dome models tested in the experiments. (b) Schematic of load cell-model attachment assembly, including: (1) servo for controlling orientation angle \(\Theta \), (2) bearing and load cell coupling mount, (3) ATI Gamma load cell, (4) linear servo for vertical positioning.
The models tested in the experiments were designed using SolidWorks and manufactured from polylactic acid (PLA) using a Prusa i3 3D-printer. Hydrodynamic forces on the models were measured using an ATI Gamma 6-axis load cell. A PIV system comprising a 5 W 532 nm continuous wave laser and a Phantom VEO high-speed camera was used for flow visualization and control volume analyses. All measurements were made with the models placed below the plate to eliminate free surface effects. A special fairing was designed to isolate the positioning system above the flat plate from the flow, thereby ensuring that the forces measured originated from the models alone. Additional details are provided in the subsections below.
3D-printed models
The sea star models were created in SolidWorks by circular patterning an arm of specified aspect ratio around the central axis. Each arm profile is formed from a conic line of curvature \(\gamma = 0.75\), length 10 cm and apex height 5 cm, as shown in Fig. 6. Three different sea star models were created by varying the arm width at the base, resulting in models with aspect ratios \(AR = 4.0\), \(AR = 2.5\), and \(AR = 1.5\). Here, the aspect ratio is defined as the ratio between the arm length from distal tip to the central axis (10 cm) and the width at the base arm intersections.
The spherical dome used in this study is the top slice of a sphere with radius 11 cm. The slicing plane was placed at \(30^\circ \) from the bisecting plane such that the base diameter for the dome was comparable to the frontal width of the sea star models, \(L = 19\) cm, and the height of the dome was comparable to the apex height of the sea star, 5.4 cm. Frontal and planform areas for the models are shown in Table 2. All models were designed with a cylindrical clamp at the base that connected with the positioning system.
Table 2 Lengths and areas for experimental models. For the sea star models, \(L = 19\) cm corresponds to the frontal width when one of the arms is oriented into the flow, \(\Theta = 0^\circ \).
Load cell measurements
The hydrodynamic drag and lift forces acting on the models were measured using an ATI Gamma 6-axis load cell capable of 0.00625 N resolution in lateral forces and 0.0125 N resolution in vertical forces. Data from the load cell were logged to a PC using a 16-bit data acquisition system (National Instruments NI PCIe-6321). The sampling rate was set to 5 kHz based on load cell manufacturer specifications. For each configuration, force data were collected for 60 s, yielding 300,000 samples. Prior to each measurement made in flow, a zero reading was collected to eliminate the effects of model weight, buoyancy, and load cell drift error from the measured hydrodynamic forces.
Following standard convention, the measured drag and lift forces were converted into drag and lift coefficients and expressed as a function of the Reynolds number. These dimensionless parameters were calculated using the following relations:
$$\begin{aligned} C_d&= \frac{2F_d}{\rho \ A_{f} \ U^2}, \end{aligned}$$
$$\begin{aligned} C_L&= \frac{2F_L}{\rho \ A_{p} \ U^2}, \end{aligned}$$
$$\begin{aligned} Re&= \frac{U \ L}{\nu }, \end{aligned}$$
where \(\rho \) is fluid density, U is freestream velocity, \(A_f\) is model frontal area, \(A_p\) is model planform area, L is a characteristic length, and \(\nu \) is the kinematic viscosity. The fluid density and kinematic viscosity were set to values expected for water at ambient temperature \(20\,^\circ \)C. Note that the drag coefficient is calculated using frontal area while the lift coefficient is calculated using the planform area.
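As a minimal sketch of Eqs. (2)–(4), with standard property values for water at 20 °C (the function name and the specific numbers for \(\rho \) and \(\nu \) are our assumptions, not values quoted by the study):

```python
RHO = 998.2    # water density at 20 C (kg/m^3), a standard tabulated value
NU = 1.004e-6  # kinematic viscosity at 20 C (m^2/s)

def nondimensionalize(F_d, F_L, U, A_f, A_p, L=0.19):
    """Drag/lift coefficients and Reynolds number per Eqs. (2)-(4)."""
    C_d = 2.0 * F_d / (RHO * A_f * U**2)  # drag normalized by frontal area
    C_L = 2.0 * F_L / (RHO * A_p * U**2)  # lift normalized by planform area
    Re = U * L / NU                       # Reynolds number on base length L
    return C_d, C_L, Re
```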
PIV and control volume analysis
The two-dimensional, two-component PIV system comprised a 5 W 532 nm continuous laser and a Phantom VEO 410 L high-speed camera fitted with a 50 mm f/1.4 Nikon lens. The camera recorded images at a rate of 400 Hz and the spatial resolution for the experiments was 0.23 mm per pixel. Because the laser sheet was not wide enough to illuminate both the fore and aft sections of the model, we fixed the position of the camera and moved the laser to obtain two sets of images that were then spliced together at the center of the object. Because of this splicing, we only report mean velocities and force estimates obtained after averaging over 997 frame-pairs. We used PIVLAB30 for background correction and subsequent PIV analyses. We used a multi-pass fast Fourier transform algorithm for the PIV analysis with a final interrogation window of size \(32 \times 32\) pixels and 50% overlap.
The vector field data obtained from PIV were used to estimate lift and drag forces per unit length using a control volume approach. Specifically, for a fixed control volume and steady state conditions, the hydrodynamic force (\({\mathbf {F}}\)) can be estimated using the following relation
$$\begin{aligned} \int _{S} \rho ({\mathbf {u}} \ \cdot \ {\mathbf {n}}) {\mathbf {u}} \ dS = {\mathbf {F}}, \end{aligned}$$
in which S is the control surface bounding the control volume that encompasses the body, \({\mathbf {u}}\) is the velocity vector, \({\mathbf {n}}\) is the outward normal for the control surface, and \({\mathbf {F}}\) is the force imparted on fluid by the body. Though we do not have access to the full three-dimensional flow field from PIV, we can estimate drag and lift forces per unit length acting on the body (\(F_d{^\prime }\), \(F_L{^\prime }\)) by considering a planar approximation to Eq. (5). Separating the momentum conservation relation into the streamwise (x) and wall-normal (y) directions, \(F_d{^\prime }\) and \(F_L{^\prime }\) can be estimated using:
$$\begin{aligned} \rho \left( \int _{H} u_{in}^2 dy - \int _W v_{top}u_{top} dx - \int _H u_{out}^2 dy \right) = F_d{^\prime } \end{aligned}$$
$$\begin{aligned} \rho \left( \int _H u_{in}v_{in}\, dy - \int _W v_{top}^2\, dx - \int _H u_{out}v_{out}\, dy \right) = F_L{^\prime } \end{aligned}$$
where \(u_{in}\) and \(v_{in}\) are the streamwise and wall-normal velocities at the upstream (inlet) of the control area, \(u_{out}\) and \(v_{out}\) are the velocities at the downstream (outlet), and \(u_{top}\) and \(v_{top}\) are the velocities at the upper bounding surface. The height of the control area is H and the width is W.
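Numerically, the two relations above reduce to one-dimensional quadratures over the measured velocity profiles. A minimal sketch, assuming the PIV profiles along the inlet, outlet, and top of the control area have already been extracted as NumPy arrays:

```python
import numpy as np

def forces_per_unit_length(y, x, u_in, v_in, u_out, v_out, u_top, v_top,
                           rho=998.2):
    """Drag and lift per unit length (N/m) by trapezoidal integration of the
    planar momentum fluxes; F_L < 0 indicates downforce."""
    F_d = rho * (np.trapz(u_in**2, y)
                 - np.trapz(v_top * u_top, x)
                 - np.trapz(u_out**2, y))
    F_L = rho * (np.trapz(u_in * v_in, y)
                 - np.trapz(v_top**2, x)
                 - np.trapz(u_out * v_out, y))
    return F_d, F_L
```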
CFD simulation setup
ANSYS Fluent was used to simulate the flow field over the sea star and spherical dome models tested in the experiments, as well as several related geometries. These simulations were carried out using a coupled pressure-velocity method with a built-in steady-state \(k-\epsilon \) turbulence model. The coefficients of the turbulence model were set to the default settings. For all models, the outer flow mesh was a coarse hex-dominant grid. For the near-field flow around the object and for three body-lengths downstream from the rear edge of the models, a fine tetrahedral mesh was used. This also ensured that the sharp geometries involved near the apex of the sea star, pyramid, cone, and triangular prism geometries were adequately resolved. Convergence tests indicated that meshes with roughly 800,000 elements provided a reasonable balance between accuracy and speed. For example, doubling the number of mesh elements beyond this value led to changes of \(\le 4\%\) in the computed drag and lift forces for the highest aspect ratio sea star model.
For all the simulations, the working fluid was assumed to be water at 20 \(^\circ \)C. The models were set 5 mm from the bounding floor in the simulation and 3 cm from the edge of the plate, consistent with the experiment setup shown in Fig. 6. The inlet velocity was set at \(U = 0.35\) ms\(^{-1}\), which is roughly the midpoint of the velocity range tested in the experiments.
Helmuth, B. & Denny, M. W. Predicting wave exposure in the rocky intertidal zone: do bigger waves always lead to larger forces?. Limnol. Oceanogr. 48, 1338–1345 (2003).
Santos, R., Gorb, S., Jamar, V. & Flammang, P. Adhesion of echinoderm tube feet to rough surfaces. J. Exp. Biol. 208, 2555–2567 (2005).
Hayne, K. J. & Palmer, A. R. Intertidal sea stars (pisaster ochraceus) alter body shape in response to wave action. J. Exp. Biol. 216, 1717–1725 (2013).
Orton, J. & Fraser, J. Rate of growth of the common starfish, asterias rubens. Nature 126, 567–567 (1930).
Feder, H. M. Growth and predation by the ochre sea star, pisaster ochraceus (brandt), in monterey bay, california. Ophelia 8, 161–185 (1970).
Schlichting, H. Experimental investigation of the problem of surface roughness. NACA Tech. Memor. 823, 2 (1937).
Jimenez, J. Turbulent flows over rough walls. Annu. Rev. Fluid Mech. 36, 173–196 (2004).
Hoerner, S. F. Fluid-dynamic drag. Hoerner Fluid Dyn. (1965).
Hoerner, S. Fluid-dynamic lift. Hoerner Fluid Dyn. (1985).
Schofield, W. & Logan, E. Turbulent shear flow over surface mounted obstacles. J. Fluids Eng. 112, 376–385 (1990).
da Silva, B. L., Chakravarty, R., Sumner, D. & Bergstrom, D. J. Aerodynamic forces and three-dimensional flow structures in the mean wake of a surface-mounted finite-height square prism. Int. J. Heat Fluid Flow 83, 108569 (2020).
Savory, E. & Toy, N. Hemisphere and hemisphere-cylinders in turbulent boundary layers. J. Wind Eng. Ind. Aerodyn. 23, 345–364 (1986).
Taniguchi, S., Sakamoto, H., Kiya, M. & Arie, M. Time-averaged aerodynamic forces acting on a hemisphere immersed in a turbulent boundary. J. Wind Eng. Ind. Aerodyn. 9, 257–273 (1982).
Wood, J. N., De Nayer, G., Schmidt, S. & Breuer, M. Experimental investigation and large-eddy simulation of the turbulent flow past a smooth and rigid hemisphere. Flow Turbul. Combust. 97, 79–119 (2016).
Ikhwan, M. & Ruck, B. Wind load coefficients for pyramidal buildings. Proc. 12. GALA-Tagung Lasermethoden in der Stromungmesstechnik, B. Ruck, A. Leder, D. Dopheide (Ed.), Karlsruhe, Deutschland (2004).
Martinuzzi, R. & AbuOmar, M. Study of the flow around surface-mounted pyramids. Exp. Fluids 34, 379–389 (2003).
Vosper, S. Three-dimensional numerical simulations of strongly stratified flow past conical orography. J. Atmos. Sci. 57, 3716–3739 (2000).
Okamoto, T., Yagita, M. & Kataoka, S.-I. Flow past cone placed on flat plate. Bull. JSME 20, 329–336 (1977).
Gaster, M. Vortex shedding from slender cones at low reynolds numbers. J. Fluid Mech. 38, 565–576 (1969).
Iungo, G. V. & Buresti, G. Experimental investigation on the aerodynamic loads and wake flow features of low aspect-ratio triangular prisms at different wind directions. J. Fluids Struct. 25, 1119–1135 (2009).
Heist, D. & Gouldin, F. Turbulent flow normal to a triangular cylinder. J. Fluid Mech. 331, 107–125 (1997).
Baker, C. The laminar horseshoe vortex. J. Fluid Mech. 95, 347–367 (1979).
Baker, C. The turbulent horseshoe vortex. J. Wind Eng. Ind. Aerodyn. 6, 9–23 (1980).
Martinez, M. M., Full, R. & Koehl, M. Underwater punting by an intertidal crab: a novel gait revealed by the kinematics of pedestrian locomotion in air versus water. J. Exp. Biol. 201, 2609–2623 (1998).
Hennebert, E. et al. Sea star tenacity mediated by a protein that fragments, then aggregates. Proc. Nat. Acad. Sci. 111, 6317–6322 (2014).
Lengerer, B. et al. Interspecies comparison of sea star adhesive proteins. Philos. Trans. R. Soc. B 374, 20190195 (2019).
Pjeta, R. et al. Integrative transcriptome and proteome analysis of the tube foot and adhesive secretions of the sea urchin paracentrotus lividus. Int. J. Mol. Sci. 21, 946 (2020).
Ellers, O. Form and motion of donax variabilis in flow. Biol. Bull. 189, 138–147 (1995).
Paine, R. T. Food web complexity and species diversity. Am. Nat. 100, 65–75 (1966).
Thielicke, W. & Stamhuis, E. PIVlab – towards user-friendly, affordable and accurate digital particle image velocimetry in MATLAB. J. Open Res. Softw. 2, 1 (2014).
This work was supported by the US Office of Naval Research under grant number N00014-17-1-2062 (Program Manager : Dr. Thomas McKenna). The authors would also like to thank Mike Tolley, Eva Kanso, Matt McHenry, and Michael Ishida for the useful discussions related to this work.
University of Southern California, Aerospace and Mechanical Engineering, Los Angeles, 90089, USA
Mark Hermes & Mitul Luhar
M.H. and M.L. conceived the experiments, M.H. conducted the experiments, and M.H. and M.L. analyzed the results. All authors reviewed the manuscript.
Correspondence to Mark Hermes.
Hermes, M., Luhar, M. Sea stars generate downforce to stay attached to surfaces. Sci Rep 11, 4513 (2021). https://doi.org/10.1038/s41598-021-83961-z
SI unit of magnetic field
SI Unit of Magnetic Field. We can define the magnetic field in several ways corresponding to the effect it has on its surroundings, as a result of which we have the B-field and the H-field (magnetic field denoted by the symbol B or H). The B-field refers to the force the field exerts on a moving charged particle. A magnetic field is a vector field that describes the magnetic influence on moving electric charges, electric currents, and magnetized materials. A charge that is moving in a magnetic field experiences a force perpendicular to its own velocity and to the magnetic field. The effects of magnetic fields are commonly seen in permanent magnets, which pull on magnetic materials such as iron.
Tesla is the SI unit of magnetic field: 1 tesla = 1 weber per square metre. Gauss is the CGS unit: 1 gauss = 1 maxwell per square centimetre. Weber and maxwell are the corresponding MKS and CGS units for magnetic flux. Magnetic field strengths are often measured in gauss (the SI unit is the tesla); the strength of the Earth's magnetic field, measured at its core, is 25 gauss. In SI units, B is measured in teslas (symbol: T) and correspondingly Φ_B (magnetic flux) is measured in webers (symbol: Wb), so that a flux density of 1 Wb/m² is 1 tesla. The SI unit tesla is equivalent to (newton·second)/(coulomb·metre).
Electric and magnetic SI units. Analogies can be found between the electrical circuit and the magnetic circuit: by analogy with ohmic resistance, a magnetic resistance (reluctance) is defined in a magnetic circuit, and in an electrical circuit the voltage is the driver of the electric current. Magnetic Flux Density Units. The SI unit of magnetic flux density is the tesla (T). One tesla (1 T) is defined as the flux density generating one newton of force per ampere of current per metre of conductor: T = N·A⁻¹·m⁻¹ = kg·s⁻²·A⁻¹. Certain other non-SI units, like the gauss (G), are still used. Magnetic field strength, by contrast, is defined as the magnetomotive force per unit length, and its SI unit of measurement is the ampere per metre (usually spoken as 'ampere-turn per metre').
Unit of Magnetic Field - SI Unit and Other Common Units
Definition. A particle carrying a charge of one coulomb and moving perpendicularly through a magnetic field of one tesla, at a speed of one metre per second, experiences a force with magnitude one newton, according to the Lorentz force law. As an SI derived unit, the tesla can also be expressed as
$$1\,\text{T} = 1\,\frac{\text{V}\cdot\text{s}}{\text{m}^2} = 1\,\frac{\text{N}}{\text{A}\cdot\text{m}} = 1\,\frac{\text{J}}{\text{A}\cdot\text{m}^2} = 1\,\frac{\text{Wb}}{\text{m}^2} = 1\,\frac{\text{kg}}{\text{A}\cdot\text{s}^2}$$
(the last equivalent is in SI base units).
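That definition translates directly into a one-line calculation. A minimal sketch (the function name is ours):

```python
def lorentz_force(q_coulomb, v_mps, B_tesla):
    """Magnitude of the force on a charge moving perpendicular to B: F = q*v*B."""
    return q_coulomb * v_mps * B_tesla

print(lorentz_force(1.0, 1.0, 1.0))  # 1.0 N -- one coulomb at 1 m/s in a 1 T field
print(1.0 * 10_000)                  # 1 T expressed in gauss
```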
Dimensions of Magnetic Field. From F = BIl sin θ, the dimensional formula of the magnetic field is [M T⁻² A⁻¹].
In SI units, B (the magnetic field) is measured in teslas (symbol: T) and correspondingly Φ (magnetic flux) is measured in webers (symbol: Wb), so that a flux density of 1 Wb/m² is 1 tesla; the tesla is equivalent to (newton·second)/(coulomb·metre). The CGS unit of magnetic field strength is the oersted, and the SI unit is the ampere per metre. Magnetisation defines the material's response: it is the magnetic moment per unit volume of material. Flux density (magnetic induction) describes the resulting field in the material, which is a combination of an applied field and the magnetization. The SI unit for magnetic flux is the weber; the number of webers is a measure of the total number of field lines that cross a given area. Magnetic fields may be represented mathematically by quantities called vectors, which have direction as well as magnitude.
KCET 1991: The SI unit of magnetic field is (A) gauss (B) oersted (C) tesla (D) pascal. Answer: (C) tesla.
The magnetic field intensity at any point in a magnetic field is defined as the force experienced by a unit north pole of one weber strength placed at that point. The total magnetic lines of force, i.e. the magnetic flux, crossing a unit area in a plane at right angles to the direction of the flux is called the magnetic flux density.
The Standard International (SI) unit used to measure magnetic fields is the tesla, while smaller magnetic fields are measured in gauss (1 tesla = 10,000 gauss). The unit $\text{A m}^2$ is also a correct SI unit for magnetic moment, though, unless the concept of current in a coil needs to be emphasized in a particular context, it is perhaps better to stick to $\text{N m T}^{-1}$. In terms of the basic SI units (length, mass, time, current), the tesla is kg·s⁻²·A⁻¹. A fairly recent value for the Bohr magneton in SI units is 9.274 009 15 × 10⁻²⁴ J/T.
A compass needle in a magnetic field: because compass needles align with the magnetic field, the magnetic field at each point must be tangent to a circle around the wire. Drawn as field vectors, the field is weaker (shorter vectors) at greater distances from the wire.
SI and CGS Units in Electromagnetism (Jim Napolitano, January 7, 2010): these notes are meant to accompany a course in electromagnetic theory. The constant µ₀ turns out to describe the magnetic properties of the vacuum; it shows up, for example, in the inductance of a loop of wire surrounding empty space.
Fundamental unit of magnetic flux: the fundamental unit of magnetic flux is the volt-second. The force acting per unit length, per unit current, on a wire placed perpendicular (at right angles) to the magnetic field is the magnetic flux density (B). The tesla (T), equal to kg·s⁻²·A⁻¹, is the SI unit of magnetic flux density; the corresponding SI unit of magnetic flux itself is the weber.
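A minimal sketch of flux density as flux per unit area (1 Wb through 1 m² gives 1 T); the helper name and sample numbers are illustrative:

    # B = phi / A, in teslas (Wb/m^2).
    def flux_density_tesla(flux_webers, area_m2):
        return flux_webers / area_m2

    print(flux_density_tesla(1.0, 1.0))    # 1.0 T
    print(flux_density_tesla(0.002, 0.5))  # 0.004 T = 4 mT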
The SI unit of the magnetic field strength H is the ampere per metre (A/m); in CGS it is measured in oersteds (Oe). In a vacuum, if the magnetizing field strength is 1 Oe, then the magnetic flux density is 1 Gs.
Question: the SI unit of magnetic field, the tesla, is equivalent to: a) J/(A·m²); b) V·s/m²; c) 10 kG; d) all of the above; e) none of these. The answer is (d), all of the above: J/(A·m²) = Wb/m² = V·s/m² = 1 T, and 1 T = 10 kG.
When placed in a magnetic field, magnetic dipoles align with their axes parallel to the field lines, as can be seen when iron filings are in the presence of a magnet. The magnetic field is measured in teslas (SI units) or gauss (CGS units).
The magnetic field is the region around a magnet where magnetic force is experienced. These fields form concentric circles around a current-carrying conductor, and the closer the concentric circle, the greater the force experienced. (By Coulomb's law, not Faraday's, the force F between two charged objects A and B, whether attraction or repulsion, is inversely proportional to the square of the distance between them.) The strength of a magnetic field, or magnetic flux density B, can be measured by the force per unit current per unit length acting on a current-carrying conductor placed perpendicular to the lines of a uniform magnetic field. The SI unit of magnetic flux density B is the tesla (T), equal to 1 N·A⁻¹·m⁻¹; the tesla is the name of the SI unit for the field strength, such as that created around a current-carrying wire. Magnetic flux density is a vector field, identified by the symbol B, with SI units of tesla (T); magnetic fields are an intrinsic property of some materials, most notably permanent magnets. As a worked example, consider a point charge of 5.7 µC moving at 4.5 × 10⁵ m/s in a magnetic field of strength 3.2 mT; the resulting force is computed in the sketch below.
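A worked check of that point-charge example (assuming the velocity is perpendicular to the field, since the original diagram is not reproduced here):

    q = 5.7e-6   # charge, C
    v = 4.5e5    # speed, m/s
    B = 3.2e-3   # field, T
    F = q * v * B
    print(f"F = {F:.2e} N")  # ~ 8.21e-03 N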
For the full Lorentz force equation, F = q(E + v × B), you do not need to know the meaning at A-level; F, E, v and B are all underlined, meaning that they are all vectors, quantities that have direction. For a solenoid, the field lines in the interior are approximately parallel to each other and uniformly distributed. A note on notation: the SI unit for magnetic field strength H is A/m; if you wish to use units of T, refer either to the magnetic flux density B or to the magnetic field strength written as µ₀H, and use the centre dot to separate compound units, e.g., A·m². The term v × B is known as a 'cross product', which accounts for charges moving through a magnetic field at angles other than perpendicular; a sketch of the vector form follows.
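A minimal sketch of the vector form F = q(v × B) using NumPy's cross product; the charge and field values below are illustrative, not from the text:

    import numpy as np

    q = 1.6e-19                      # proton charge, C
    v = np.array([2.0e5, 1.0e5, 0])  # velocity, m/s (illustrative)
    B = np.array([0, 0, 0.5])        # field along z, T

    F = q * np.cross(v, B)
    print(F)  # force in N; note F is perpendicular to both v and B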
Magnetic fields also have their own energy and momentum, with an energy density proportional to the square of the field intensity. The magnetic field is measured in teslas (SI) or gauss (CGS units). Corresponding SI and CGS units:

    Quantity                 SI unit        CGS unit       Conversion
    Magnetic flux φ (= BA)   weber (Wb)     maxwell (Mx)   1 Wb = 10⁸ Mx
    Flux density B           tesla (T)      gauss (G)      1 T = 10⁴ G
    Field strength H         A/m            oersted (Oe)   1 Oe = 10³/(4π) A/m

The weber is the SI unit of magnetic flux (the magnetic field strength multiplied by the area through which the field passes), named after the German physicist Wilhelm Eduard Weber.
What is the SI unit for the strength of a magnetic field? Most scientists use the SI (International System) of units. A magnetic field is a vector, which means it has magnitude and direction. If electric current flows in a straight line, the right-hand rule shows the direction in which the invisible magnetic field lines circle the wire: imagine gripping the wire with your right hand, with your thumb pointing in the direction of the current; the magnetic field then travels in the direction of the fingers around the wire. Magnetic field intensity, also known as the magnetizing force, is measured in ampere-turns per metre (A-t/m). Of primary concern, however, is the magnetomotive force needed to establish a certain flux density B in a unit length of the magnetic circuit.
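The field-magnitude counterpart of that right-hand-rule picture is the long-straight-wire result B = µ₀I/(2πr); a minimal sketch, with illustrative numbers:

    import math

    MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m (pre-2019 exact value)

    # Field magnitude at distance r from a long straight wire; the
    # right-hand rule gives the direction (circles around the wire).
    def wire_field_tesla(current_amps, distance_m):
        return MU_0 * current_amps / (2 * math.pi * distance_m)

    print(wire_field_tesla(10.0, 0.05))  # 10 A at 5 cm -> 4.0e-05 T = 0.4 G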
The SI unit for magnetic fields is the tesla, T. Everyday environmental field values are usually quoted in microteslas (µT = 10⁻⁶ T); even a value of 150 µT, still very small, would apply only if you were to sit right next to a wire all day, which is unrealistic. The SI unit for the magnitude of the magnetic field strength is the tesla (T), which is equivalent to one newton per ampere-metre; sometimes the smaller unit gauss (10⁻⁴ T) is used instead. When the expression for the magnetic force is combined with that for the electric force, the combined expression is known as the Lorentz force.
Magnetic field: the magnetic field is the space around a magnet or current-carrying conductor in which magnetic effects can be experienced. It is a vector quantity, and its SI unit is the tesla (T), or Wb·m⁻².
The SI derived units for derived quantities are obtained from their defining equations and the seven SI base units. Note that the symbol 1, for quantities of dimension 1 such as mass fraction, is generally omitted.
The International System unit of field intensity for magnetic fields is the tesla (T). One tesla (1 T) is defined as the field intensity generating one newton (N) of force per ampere (A) of current per metre of conductor: T = N × A⁻¹ × m⁻¹ = kg × s⁻² × A⁻¹. Certain non-SI units, like the gauss (G), are still occasionally used. There is, though, a fundamental difference between the systems, because the SI includes the ampere as a base unit, which CGS does not. This also explains why the dimensions of magnetic induction in the SI are different from those of magnetizing force; people with knowledge of the B-field and H-field only in the SI tend to see them as physically different. In the CGS electromagnetic system, the unit magnetic pole is defined as the strength of a magnetic pole such that two of them, of the same sign, repel each other with a force of 1 dyne when placed 1 centimetre apart in a vacuum; the concept is used in defining various units in that system. What is the SI unit for the magnetic field? The tesla (T). Three conditions are required for a particle to experience a magnetic force when placed in a magnetic field: the particle must be moving, the particle must be charged, and its velocity must have a component perpendicular to the field. Magnetic field strength is defined by a vector field which has a direction and a magnitude; the number of magnetic flux lines passing through a unit area perpendicular to the magnetic field is called the flux density B.
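A small sketch illustrating those three conditions; in each case the magnetic force comes out zero (the values are illustrative):

    import numpy as np

    def magnetic_force(q, v, B):
        return q * np.cross(v, B)

    B = np.array([0.0, 0.0, 1.0])                         # 1 T along z
    print(magnetic_force(1e-6, np.zeros(3), B))           # at rest       -> [0 0 0]
    print(magnetic_force(0.0, np.array([1e3, 0, 0]), B))  # no charge     -> [0 0 0]
    print(magnetic_force(1e-6, np.array([0, 0, 1e3]), B)) # v parallel B  -> [0 0 0]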
Magnetic flux density: the International System (SI) unit of magnetic flux density is the tesla (T). A magnetic field of one tesla is relatively strong, which is why magnetic fields are also expressed in millitesla (mT) and microtesla (µT): 1 T = 1 000 mT = 1 000 000 µT. The expression below can be taken as the working definition of the magnetic field at a point in space; the magnitude of the magnetic force $\mathbf{F}_B$ is given by

$$|\mathbf{F}_B| = |q|\,v\,B\sin\theta \tag{8.2.2}$$

The SI unit of magnetic field is the tesla: 1 tesla = 1 T = 1 N/(C·m/s) = 1 N/(A·m). Another commonly used non-SI unit for B is the gauss (G), with 1 G = 10⁻⁴ T. In simple words, the magnetic field B times the area perpendicular to it is called the magnetic flux, denoted Φ, Φ_m, or Φ_B; it is the amount of magnetic field, or magnetic lines of force, passing through a surface such as a conducting area, space, or air. The SI unit of magnetic flux is the weber (Wb). The older (pre-1980) paleomagnetic and rock-magnetic literature is primarily in CGS units; because SI are now the units of choice, we begin with current loops. Consider a loop of radius r and current i, roughly equivalent to an atom with orbiting electrons: a magnetic field H will be produced at the center of the loop (the loop-center expression is sketched below).
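The passage above is truncated before giving the loop-center expression; assuming the standard result B = µ₀I/(2r) at the center of a circular loop, a minimal sketch:

    import math

    MU_0 = 4 * math.pi * 1e-7  # H/m

    def loop_center_field_tesla(current_amps, radius_m):
        return MU_0 * current_amps / (2 * radius_m)

    print(loop_center_field_tesla(1.0, 0.01))  # 1 A, 1 cm radius -> ~6.3e-05 T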
The SI unit for measuring magnetic field strength is the tesla (T). The Earth's magnetic field strength changes from place to place, but it is on the order of microteslas. Magnets used in MRI machines in hospitals tend to produce magnetic fields of a few teslas, and the strongest magnetic field we have managed to create is about 90 T.
In SI units, the integral form of the original Ampere's circuital law is a line integral of the magnetic field around some closed curve C (arbitrary but must be closed). The curve C in turn bounds both a surface S through which the electric current passes through (again arbitrary but not closed—since no three-dimensional volume is enclosed by S), and encloses the current
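A quick numerical sanity check of that integral form for a long straight wire: the circulation of B around a circle of radius r should equal µ₀ times the enclosed current (the numbers are illustrative):

    import math

    MU_0 = 4 * math.pi * 1e-7
    I, r, N = 5.0, 0.1, 10_000           # current (A), path radius (m), steps

    B = MU_0 * I / (2 * math.pi * r)     # |B| on the circular path
    circulation = sum(B * (2 * math.pi * r / N) for _ in range(N))
    print(circulation, MU_0 * I)         # both ~ 6.283e-06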
Radial magnetic field: on a pulley magnet, the radial magnetic field has poles running in the same direction as the rotation of the conveyor or drum, with the flow of the material. Magnetically susceptible material is attracted to the points of highest magnetic intensity (the poles) and held until it is dragged out of the field.
In actuality, the magnetic field can be represented by either: H, the magnetic field intensity (A/m in MKS-SI; oersted in CGS), or B, the magnetic induction (tesla in MKS-SI; gauss in CGS), with 1 A/m = 4π × 10⁻³ oersted and 1 tesla = 10⁴ gauss. Magnetic induction B originates from all currents, both at the microscopic (atomic) and the macroscopic level, and is considered the number of lines of force per unit area.
The field can be greatly strengthened by the addition of an iron core; such cores are typical in electromagnets. In the expression for the solenoid's magnetic field, B = µ₀nI, n = N/L is the number of turns per unit length, sometimes called the turns density. The magnetic field B is proportional to the current I in the coil.
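A minimal sketch of that solenoid expression (the turn count, length, and current below are illustrative):

    import math

    MU_0 = 4 * math.pi * 1e-7  # H/m

    def solenoid_field_tesla(turns, length_m, current_amps):
        n = turns / length_m            # turns per unit length, 1/m
        return MU_0 * n * current_amps

    print(solenoid_field_tesla(500, 0.25, 2.0))  # ~5.0e-03 T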
The SI unit of magnetic field is the tesla (T). The tesla is the larger unit; the smaller unit is the gauss (G), with 1 T = 10⁴ G. The unit was named in honour of the Serbian-American engineer and inventor Nikola Tesla.
The SI unit of magnetic field is the tesla, which is equivalent to N/(A·m). As an example, for a proton moving at 2.0 × 10⁷ m/s in a magnetic field of 30 mT, the maximum magnetic force is F = qvB = (1.6 × 10⁻¹⁹ C)(2.0 × 10⁷ m/s)(0.030 T) ≈ 9.6 × 10⁻¹⁴ N.
The SI unit of magnetic dipole moment: options (a) A·m⁻¹, (b) A·m², (c) T·m·A⁻¹, (d) m·A⁻². Correct answer: A·m². Explanation: the SI unit of magnetic dipole moment is the ampere square metre, A·m².
Magnetic field strength, also called magnetic field intensity, auxiliary magnetic field H, or magnetizing field, is a measure of the intensity and direction of a magnetic field. It is a vector value, called the H-field. The SI unit of the magnetic field strength is the ampere per metre (A/m); in CGS it is measured in oersteds (Oe). More generally, the magnetic field can be defined in several specific ways in relation to the effect it has on its environment.
Magnetic field units: magnetic fields are created or produced when electric charge or current moves within the vicinity of the magnet. One gauss is defined as one maxwell per square centimetre. Units for magnetic properties, converting from Gaussian and CGS-emu to SI:

    Φ  magnetic flux                      1 Mx → 10⁻⁸ Wb = 10⁻⁸ V·s
    B  magnetic flux density (induction)  1 G  → 10⁻⁴ T = 10⁻⁴ Wb/m²
    H  magnetic field strength            1 Oe → 10³/(4π) A/m
    m  magnetic moment                    1 erg/G = 1 emu → 10⁻³ A·m² = 10⁻³ J/T

The gauss (G, Gs) is a unit of magnetic flux density commonly used in the CGS unit system; it has dimension M·T⁻²·I⁻¹ (where M is mass, T is time, and I is electric current) and can be converted to the corresponding standard SI unit, the tesla, by multiplying its value by a factor of 0.0001.
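Those conversion factors lend themselves to a small lookup table; a sketch in Python, with illustrative key names (not from any standard library):

    import math

    # Multiplicative Gaussian/cgs-emu -> SI factors from the table above:
    # value_SI = value_cgs * CGS_TO_SI[quantity]
    CGS_TO_SI = {
        "flux_Mx_to_Wb":       1e-8,                 # maxwell -> weber
        "flux_density_G_to_T": 1e-4,                 # gauss -> tesla
        "field_Oe_to_A_per_m": 1e3 / (4 * math.pi),  # oersted -> A/m
        "moment_emu_to_A_m2":  1e-3,                 # erg/G (emu) -> A·m^2
    }

    print(0.5 * CGS_TO_SI["flux_density_G_to_T"])  # 0.5 G  -> 5e-05 T
    print(1.0 * CGS_TO_SI["field_Oe_to_A_per_m"])  # 1 Oe   -> ~79.58 A/m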
There is the magnetic field H and the magnetic induction B, which are related (in vacuum and in SI units) through B = µ₀H, where µ₀ is the magnetic constant. The force between the poles of two magnets is the same irrespective of the system of units used to express their pole strengths. A magnetic field is a state of space, described mathematically with a direction and a magnitude, in which electric currents and magnetic materials influence each other; together with the electric field it creates the electromagnetic field. The space in the surroundings of a magnet or a current-carrying conductor, in which its magnetic influence can be experienced, is called the magnetic field; its SI unit is the tesla (T). Magnetic field intensity is denoted by H and is also known as the intensity of the magnetizing field or the magnetizing force; if you occasionally need to design a wound component but do not deal with magnetic fields daily, the many terms used in a core's data sheet can be confusing. (As an applied example, a column magnetic separator exploits magnetic agglomeration of strongly magnetic minerals such as magnetite in a low-intensity magnetic field, and is primarily used as a cleaner to produce a sufficiently high-grade magnetic concentrate.)
Yet every moving electric charge has a magnetic field, so the orbiting electrons of atoms produce a magnetic field; there is a magnetic field associated with power lines; and hard discs and speakers rely on magnetic fields to function. Key SI units of magnetism include the tesla (T) for magnetic flux density and the weber (Wb) for magnetic flux. Exercise: in SI units, the electric field in an electromagnetic wave is described by E_y = 120 sin(1.40 × 10⁷ x − ωt); find (a) the amplitude of the corresponding magnetic field oscillations, (b) the wavelength λ, and (c) the frequency f (worked below).
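A worked version of that exercise, assuming the standard plane-wave relations B₀ = E₀/c, λ = 2π/k, and f = c/λ (only E₀ and k are taken from the given expression):

    import math

    c  = 2.998e8   # speed of light, m/s
    E0 = 120.0     # electric field amplitude, V/m
    k  = 1.40e7    # wavenumber, rad/m

    B0  = E0 / c           # ~4.0e-07 T = 0.40 µT
    lam = 2 * math.pi / k  # ~4.49e-07 m = 0.449 µm
    f   = c / lam          # ~6.68e14 Hz
    print(B0, lam, f)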
Moving electric charges produce magnetic fields, and magnetic fields exert forces on other moving charges. The force a magnetic field exerts on a charge q moving with velocity v is called the magnetic Lorentz force, given by F = qv × B (the SI unit of B is N·s/(C·m) = T). The force F is perpendicular to the direction of the magnetic field B.
Oersteds are used to measure the $\mathbf H$ field in CGS units. Teslas are used to measure the $\mathbf B$ field in SI units. In the SI system, the two fields are related via $\mathbf B=\mu_0(\mathbf H+\mathbf M)$ where $\mu_0$ is the vacuum permeability and $\mathbf M$ is the magnetization (volumetric density of magnetic dipole moment)
One way to think about it: H is determined by the external driving source (e.g. a current-carrying wire or a magnet) and does not vary with the material, while the value of B depends on the material, i.e. on how many magnetic field lines the material allows to pass through it. Hence µ₀ acts as a conversion factor relating B to the total applied magnetic field H.
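A minimal sketch of the SI relation quoted earlier, B = µ₀(H + M); in vacuum M = 0 and B = µ₀H, which is the sense in which µ₀ acts as a conversion factor (the magnetization value below is illustrative):

    import math

    MU_0 = 4 * math.pi * 1e-7  # H/m

    def b_from_h(h_A_per_m, magnetization_A_per_m=0.0):
        return MU_0 * (h_A_per_m + magnetization_A_per_m)

    print(b_from_h(1000.0))         # vacuum: H = 1000 A/m -> ~1.26e-03 T
    print(b_from_h(1000.0, 8.0e5))  # magnetized material  -> ~1.01 T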
The SI unit of magnetic flux density, in other words the strength of the magnetic field, is the tesla (T); the gauss (G) is the CGS unit of magnetic flux density. (For comparison with other SI derived units: the henry (H) is the SI unit of electrical inductance, and the becquerel (Bq) is the SI unit of radioactivity, 1 Bq being the activity of a radioactive material in which 1 nucleus decays in 1 second.) Magnetic intensity is a quantity used in describing magnetic phenomena in terms of their magnetic fields; the strength of the magnetic field at a point can be given in terms of the vector quantity called magnetic intensity (H). The SI unit of magnetization is A/m and its dimensions are [A L⁻¹].
Nowadays, International System (SI) units are used as a global measurement system.
The tesla (symbol T) is the derived SI unit of magnetic flux density, which represents the strength of a magnetic field. One tesla represents one weber per square meter. The equivalent, and superseded, cgs unit is the gauss (G); one tesla equals exactly 10,000 gauss. Most current medical magnetic resonance imaging (MRI) units utilize 1.5 T or 3 T field strengths
Notes: although susceptibility is dimensionless, it differs by a factor of 4π between the two systems. The defining equations above require Earth magnetic field values to be given in oersted (CGS) or A/m (SI). However, in the geophysical literature, Earth magnetic field values are commonly given in gammas (CGS) or nanotesla (SI), which is what GM-SYS expects.
SI base units SI units & symbols SI / metric prefixes Unit definitions SI (metric) / Imperial conversion There are many abbreviations used to denote different measurements and quantities. The chances are that any scientific measurement or quantity will be measured using SI Units - the International System of Units SI Derived Units / Abgeleitete SI-Einheiten English / German Frequency / Frequenz hertz: Hz = 1/s Force / Kraft newton: N = m kg/s 2 Pressure, stress / Druck, mechanische Spannung pascal: Pa = N/m 2 = kg/m s 2 Energy, work, quantity of heat / Energie, Arbeit, Wärmemenge joule: J = N m = m 2 kg/s 2 Power, radiant flux / Leistung watt: W = J/s = m 2 kg/s 3 Quantity of electricity, electric. The strength of the magnetic field is expressed in units of Tesla (T) or microtesla (µT). Another unit, which is commonly used is the Gauss (G) or milligauss (mG), where 1 G is equivalent to 10-4 T (or 1 mG = 0.1µT). There are a range of different instruments that can measure magnetic field strength The SI unit for flux density, or induction, is the tesla (T). This property is also referred to as the B field. In the equations from our Surface Fields article, we use a B to denote this term.. Unlike Magnetic Flux above, the Flux Density defines some size for the loop of wire in that example The SI unit of Magnetic Field is Tesla (T). 0 ; About Us; Blog; Terms & Conditions; Our Result
The gauss is the unit of magnetic flux density B in the system of Gaussian units, equal to Mx/cm² or g·Bi⁻¹·s⁻², while the oersted is the unit of the H-field. One tesla (T) corresponds to 10⁴ gauss, and one ampere (A) per metre corresponds to 4π × 10⁻³ oersted. The units for the magnetic flux Φ, which is the integral of the magnetic B-field over an area, are the weber (Wb) in the SI and the maxwell (Mx) in the Gaussian system. Electricity is ubiquitous in daily life, and electrical metrology covers a wide range of quantities: voltage, current, resistance, capacitance, inductance, power, electric field strength, magnetic field strength, antenna factors, radio-frequency scattering parameters, and others. In classical physics, the magnetic field of a dipole is calculated as the limit of either a current loop or a pair of charges as the source shrinks to a point while keeping the magnetic moment m constant. For the current loop, this limit is most easily derived for the vector potential; outside of the source region, this potential is (in SI units)

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{\mathbf{m}\times\hat{\mathbf{r}}}{r^{2}} = \frac{\mu_0}{4\pi}\,\frac{\mathbf{m}\times\mathbf{r}}{r^{3}}$$

A magnetic field is a condition in the space around a magnet or electric current in which there is a detectable magnetic force, and where two magnetic poles are present. A frame of reference is a coordinate system or set of axes within which to measure the position, orientation, and other properties of objects in it.
The SI unit for magnetic field strength H is the ampere-turn per metre (A/m); the CGS unit is the oersted (Oe), with 1 Oe ≈ 80 A/m. Force on a current-carrying conductor in a magnetic field: consider a conductor of length ℓ, carrying current I, placed perpendicular to a magnetic field of flux density B; the magnitude of the force is then given as F = BIℓ.
The magnetic constant µ₀ characterizes a magnetic field in a classical vacuum. Until 20 May 2019, the magnetic constant had the exact defined value µ₀ = 4π × 10⁻⁷ H/m; since the 2019 redefinition of the SI it is a measured quantity.
Magnetic field is expressed in SI units as the tesla (T), which is also called a weber per square metre. The direction of F is found from the right-hand rule, shown in Figure 1. [Figure 1: Using the right-hand rule to find the direction of magnetic force on a moving charge.]
Its SI unit is A·m. Magnetic field lines: these are imaginary lines which give a pictorial representation of the magnetic field inside and around the magnet. Their properties are as follows: the lines form continuous closed loops, and the tangent to a field line gives the direction of the field at that point.
The magnetic constant µ₀ (also known as the vacuum permeability or permeability of free space) is a universal physical constant, an electromagnetic property of classical vacuum, relating mechanical and electromagnetic units of measurement. In the International System of Units (SI), its value is exactly expressed by µ₀ = 4π × 10⁻⁷ N/A² = 4π × 10⁻⁷ henry/metre (H/m), or approximately 1.2566 × 10⁻⁶ H/m.
Magnetic flux is a measure of the quantity of magnetism: the total number of magnetic lines of force passing through a specified area in a magnetic field. Historically, it was recognized that the field measured at the surface could not come from magnetic sources external to the earth, but rather had to be caused by sources within the earth; geophysical exploration using measurements of the earth's magnetic field was employed earlier than any other geophysical technique, with von Werde locating deposits of ore by mapping variations in the magnetic field in 1843. To derive the unit of B from F = qvB: divide both sides by coulombs and by metres per second; dividing by coulombs gives newtons per coulomb, and dividing by metres per second is the same as multiplying by seconds per metre. So the magnetic field, in SI terms, is defined in units of newton-seconds per coulomb-metre.
Conversion tables between the centimetre-gram-second (CGS) and metre-kilogram-second (SI) unit systems can be found in Purcell (1985) and Jackson (1999). Magnetic fields permeate space and are strongest near a permanent magnet or electromagnet. The SI unit for B is the tesla (1 T = 1 V·s/m²). The tesla is a fairly large unit of magnetic field, so we often list magnetic field strengths in gauss (1 G = 10⁻⁴ T); the magnetic field of the earth is about one-half gauss in strength. The SI unit for flux density is the tesla (T), defined so that if a magnetic flux of one weber passes normally through an area of one square metre, the magnetic flux density B is one tesla. Example: calculate the flux density in a ferromagnetic material with a cross-sectional area of 0.01 m² containing 100 lines. Solution: taking one "line" as one maxwell (10⁻⁸ Wb), B = (100 × 10⁻⁸ Wb)/(0.01 m²) = 10⁻⁴ T, i.e. 1 G.
An electromagnetic unit (EMU) is any of various units used in the centimetre-gram-second system of units to describe electric and magnetic field strengths and electric current. Magnetic flux density is the amount of magnetic flux per unit area of a section that is perpendicular to the direction of flux; it is also sometimes known as magnetic induction, or simply the magnetic field, and can be thought of as the density of the magnetic field lines: the closer together they are, the higher the magnetic flux density. In the proton magnetometer, a magnetic field that is not parallel to the Earth's field is applied to a proton-rich fluid, causing the protons to partly align with this artificial field; when the controlled field is removed, the protons tend to return to their original direction in the earth's magnetic field by precessing around the Earth's field at a frequency that depends on the intensity of the Earth's field.
The "noble experiment" in Tampa: A study of prohibition in urban America.
Alduino, Frank William., Florida State University
Prohibition sprang forth from the Progressive Era--the widespread reform movement that swept across the United States at the turn of the century. Responding to the dramatic changes in American society since the end of the Civil War, the Progressive movement encompassed a wide array of individuals and groups advocating a far-reaching program of economic, political, and social reform. For over forty years temperance zealots strived to impose their values on the whole of American society, particularly on the rapidly expanding immigrant population. These alien newcomers epitomized the transformation of the country from rural to urban, from agricultural to industrial. Rapidly-expanding urban centers were often the battleground between prohibitionists and supporters of the whiskey traffic. European immigrants, retaining their traditional values, gravitated to metropolitan areas such as Boston, New York, and Chicago. With the opening of the cigar industry in the mid-1880s, Tampa, Florida also began attracting large numbers of immigrants. Because of its pluralistic composition, the city might serve as a microcosm of the national struggle between the "wet" and "dry" forces. Using newspapers, oral interviews, and other primary materials, this study traces the various aspects of the prohibition movement in the city of Tampa. In addition, it details other peripheral areas associated with the advent of the Eighteenth Amendment, including the drug and alien trades. Finally, this study examines the lengthy efforts to repeal the "Noble Experiment" and return legalized drinking back to Tampa.
THE "OLD SUMPTER HERO": A BIOGRAPHY OF MAJOR-GENERAL ABNER DOUBLEDAY.
RAMSEY, DAVID MORGAN., The Florida State University
Abner Doubleday was an unusual and often a controversial person. Born into a family staunchly supporting Andrew Jackson, Doubleday reflected the determined Unionist position of the strong-willed president. Abner's attitude towards the Union was later vividly demonstrated at Fort Sumter. A mediocre career at West Point illustrated Doubleday's lack of desire to excel although he possessed the ability to do so. The controversy over the origin of baseball, although Doubleday was never directly involved in the question, was the first of several controversies with which Abner Doubleday's name is associated. Doubleday never seemed satisfied with his early life. In his papers he continually referred to people, prominent in later years, which he knew. While serving in the Mexican War, Doubleday continually felt the need to relate the dangerous situations in which he was placed. He seemed to want to demonstrate his personal responsibilities, which while actually meager, he viewed as of supreme importance. Doubleday apparently wanted to be a famous, bold cavalier, but realized he failed to accomplish his objective and stressed his "noble" deeds. Doubleday loved large cities and the benefits they offered a person. He liked being in the right social circles and enjoyed the "good life." By 1852, while serving as a commissioner for the Senate, Doubleday had come to despise Mexico and the Mexicans. By 1858, while serving in Florida, he disliked the inconveniences of chasing "savages." With secession in 1860 Doubleday no longer liked Charlestonians, later extending his revulsion to all Confederates. With the crisis at Sumter in 1861 Doubleday was greatly troubled; the affront to the United States government was almost more than he could bear. With the outbreak of the war, Doubleday was more than willing to fight the rebels. A dependable, if unspectacular soldier, Doubleday served well during the Civil War. While no one accused him of original thinking militarily, his men always fought well. Gettysburg was Doubleday's finest hour but became his final hour in the Civil War when he could not countenance serving under a junior officer. It seems strange that Doubleday served in the Freedmen's Bureau, since his superior was none other than his old enemy from Gettysburg, O.O. Howard. Doubleday's service in California brought the controversy over the origin of the cable car. Retirement from the army in 1873 brought out several new qualities in Abner Doubleday: he wrote books, read French and Spanish literature, became interested in the occult, and became a believer in theosophy. Doubleday was a colorful figure in nineteenth century America. He was associated with several significant events in the growth of the nation. Doubleday represented, possibly to an extreme, the attitude of many American Unionists and supporters of Manifest Destiny. His commitment to a united nation is similar to Lincoln's attitude; Doubleday not only vocalized this sentiment, but, like Lincoln, was prepared to fight for his belief. Abner Doubleday was an intense American. He desired a strong, powerful United States and opposed those not supporting such a course.
THE "SACRED HARP" SINGING GROUP AS AN INSTANCE OF NON-FORMAL EDUCATION.
MITCHELL, HENRY CHESTERFIELD., The Florida State University
The "talk" of returning women graduate students: An ethnographic study of reality construction.
McKenna, Alexis Yvonne., Florida State University
This study looked at women's internal experience of graduate school. In particular, it focused on the experience of women returning full-time to graduate school after an extended time-out for careers and/or family. The questions examined were: (1) how do returning women "name and frame" their experience? (2) what, if any, is the relationship between the way the women "name and frame" their experience and their response to it? and (3) what role does the researcher-as-interviewer play in the construction of the data? Data were collected through a series of three ethnographic interviews with 12 returning women, ranging in age from 28 to 50. Two of the twelve women were single, two were widowed, seven were divorced and one was divorced and remarried. Eight of the women had children. Analysis of the data showed that returning women, as a group, "named and framed" their experience in terms of change. Some women wanted to change self-image or self-concept while others wanted to acquire a new set of skills or credentials. Individually, the women "named and framed" their experiences in terms of an internalized "meaning-making map" acquired in the family of origin but modified through adult experiences. This "map" told them who they were and what kind of a life they could have. It gave their "talk" and behavior a consistency that could be recognized; it could make life easier or harder. A woman who felt she must "prove" herself, for example, found graduate school more difficult than a woman who wanted to "work smart." The researcher-as-interviewer influenced the construction of data through her presence as well as through the kinds of questions she asked. The women understood and gave meaning to their experiences through the process of explaining them to the interviewer. The insights gained through this process of "shared talk" influenced future action and decisions.
THE 'PRESENT ETERNITE' OF 'TROILUS AND CRISEYDE.'.
LORRAH, JEAN., The Florida State University
THE (CARBON-12,BERYLLIUM-8) AND (CARBON-12,CARBON-12) REACTIONS ON EVEN CALCIUM ISOTOPES.
MORGAN, GORDON REESE., The Florida State University
THE (CARBON-12,BERYLLIUM-8) AND (OXYGEN-16,BERYLLIUM-8) REACTIONS ON CARBON-12, OXYGEN-16, AND SILICON-28 NUCLEI.
ARTZ, JERRY LEE., The Florida State University
(oxygen-16 + thorium-232) incomplete fusion followed by fission at 140 MeV.
Gavathas, Evangelos P., Florida State University
Cross sections for incomplete fusion followed by fission have been measured for the reaction ($^{16}$O + $^{232}$Th) at 140 MeV. In plane and out of plane measurements were made of cross sections for beamlike fragments in coincidence with fission fragments. The beamlike fragments were detected with the Florida State large acceptance Bragg curve spectrometer. The detector was position sensitive in the polar direction. The beamlike particles observed in coincidence with fission fragments were He, Li, Be, B, C, N and O. Fission fragments were detected by three surface barrier detectors using time of flight for particle identification. The reaction cross section due to incomplete fusion is 747 $\pm$ 112 mB, or 42% of the total fission cross section. The strongest incomplete fusion channels were the helium and carbon channels. The average transferred angular momentum for each incomplete fusion channel was calculated using the $Q_{opt}$ model of Wilczynski, and the angular correlation was calculated using the saddle point transition state model. The K distribution was determined from the Rotating Liquid Drop model. The theoretical angular distributions were fitted to the experimental angular distributions with the angular momentum J and the dealignment factor $\alpha_{o}$ as free parameters. The fitted parameter J was in excellent agreement with the $Q_{opt}$ model predictions. The conclusions of this study are that the incomplete fusion cross section is a large part of the total cross section, and that the saddle point transition state model adequately describes the observed angular correlations for fission following incomplete fusion.
125-Iodine: a probe in radiobiology.
Warters, Raymond Leon
THE 1928 PRESIDENTIAL ELECTION IN FLORIDA.
HUGHES, MELVIN EDWARD, JR., The Florida State University
THE 1964 WISCONSIN PRESIDENTIAL PRIMARY: GEORGE C. WALLACE.
WINDLER, CHARLES WILLIAM, JR., Florida State University
In 1963, Alabama Governor George C. Wallace defied a court order by Attorney General Nicholas Katzenbach to integrate the University of Alabama. This incident turned the governor into a national celebrity and led to a number of speaking engagements across the country. During one of these engagements, Wallace indicated an interest in entering certain presidential primaries in the North in order to campaign against the pending national civil rights legislation. The Wisconsin Democratic presidential primary was the first of these races. Since President Lyndon Johnson had the Democratic presidential nomination for the asking, little attention was given to the Wallace candidacy. Governor John Reynolds was selected to run against Wallace as the Democratic favorite-son candidate, and the Republicans chose Representative John Byrnes as their favorite-son candidate. When the votes were cast on April 7, the entire nation was surprised at the large number of votes obtained by Wallace. Upon examination of the conditions and events prior to and during the presidential primary campaign, the following factors apparently contributed to the surprising showing of Governor Wallace: (1) an open primary system existed in Wisconsin that allowed a large Republican cross-over vote for Wallace; (2) the Republican favorite-son candidate had no opponent; (3) the Democratic party was divided over their favorite-son candidate, one of the most unpopular governors in the political history of Wisconsin; (4) Wallace's opponents waged a personal defamation campaign based on Wallace's reputation as a racist, to which Wallace did not respond; and (5) some white residents of Wisconsin were afraid of the increasing civil rights demands of the black population. These factors served to gain support and sympathy for the Wallace candidacy and to focus national attention on the Alabama governor as he conducted subsequent campaigns in Maryland and Indiana.
The 1988 World Bank policy study on education in sub-Saharan Africa revisited: A value-critical policy inquiry.
Ota, Cleaver Chakawuya., Florida State University
The spirit and logic of the 1988 World Bank report resides in the trilogy that is its subtitle: adjustment, revitalization and expansion. In the context of ongoing austerity in Africa, it is strongly asserted that a fundamental restructuring of education is necessary to improve efficiency, effectiveness and equity in education. Controversial adjustment reforms proposed include measures that will substantially shift the burden of educational finance from government to students, parents, and other parties. Such measures include cost recovery and the reduction of teachers' salaries, among other things. If and only if adjustment measures have been implemented and begun to take hold, then revitalization and selective expansion may be undertaken. Revitalization and selective expansion will reportedly improve quality and access in education. They include the provision of a minimum package of textbooks and other instructional materials and expansion of primary education to provide universal access. The purpose of this study was to investigate and critically evaluate the knowledge base that undergirds the World Bank study and the technical and political feasibility of the proposed reforms. A multi-methodological research strategy including critical public policy analysis and value-critical policy inquiry was employed. The main findings of this study are that the data used in the Bank study are unreliable, the knowledge base narrow, the arguments underlying the policy framework of the report unpersuasive and controversial, and the agenda for action internally inconsistent. These criticisms should not detract from the immense value and importance of the document, in that it is the first document that critically looks at education in the crisis-beleaguered continent.
50 MeV lithium-6 scattering from carbon-12, oxygen-16, and beryllium-9 and the calibration of the tensor-polarized lithium-6 beam.
Trcka, Darryl Eugene., Florida State University
The experimental work reported consists of (1) the measurements of the angular distributions for the scattering of $^{6}$Li from the targets $^{9}$Be, $^{12}$C, and $^{16}$O at a lithium bombarding energy of 50 MeV, and (2) the measurement of the tensor polarization of the FSU polarized $^{6}$Li source. 50 MeV data were taken for elastic and inelastic scattering to the $2^+$ (4.44 MeV), $0^+$ (7.65 MeV), and $3^-$ (9.64 MeV) states in $^{12}$C, the $5/2^-$ (2.43 MeV) state in $^{9}$Be, and the unresolved $0^+/3^-$ (6.05/6.13 MeV) and $2^{+}/1^{-}$ (6.92/7.12 MeV) states in $^{16}$O. The measurement of the tensor polarization of the FSU $^{6}$Li source allowed the absolute polarization efficiency of the source-accelerator system to be determined. The analytical work reported consists of a determination of the energy dependence of the optical potential parameters for $^{6}$Li + $^{12}$C scattering over the energy range from 11 MeV to 210 MeV. This has been attempted previously and the results have not been successful. A large body of data for $^{6}$Li + $^{12}$C allows more severe constraints than in previous studies. The inclusion of an angular momentum-dependent imaginary potential provides a good description of the elastic scattering data, and the parameters determined in this study are smoothly varying with energy using Woods-Saxon form factors for the real and imaginary potentials. Inelastic scattering to the $2^+$ (4.44 MeV), $0^+$ (7.65 MeV), and $3^-$ (9.64 MeV) states in $^{12}$C is described well using the constructed energy dependent potentials in DWBA calculations. Analyses using the double folded real potential and a Woods-Saxon imaginary potential were performed on the same $^{6}$Li + $^{12}$C scattering data from 11 MeV to 210 MeV. The scattering data for 50 MeV $^{6}$Li scattering from the targets $^{16}$O and $^{9}$Be are described using optical potentials and DWBA calculations. Less information is obtained from these analyses because data do not exist at this time over a wide enough energy range to provide a constraint on the interaction potentials.
A COMPARATIVE EVALUATION OF FACULTY AND STUDENT PARAPROFESSIONAL ACADEMIC ADVISEMENT PROGRAMS AT THE FLORIDA STATE UNIVERSITY.
MAC ALEESE, ROBERT WILLIAM., Florida State University
A comparison of two distinctive preparations for quantitative items in the Scholastic Aptitude Test.
Kelly, Frances Smith., Florida State University
The SAT is a major milestone for many high school juniors and seniors. Scoring as high as possible is of utmost concern for college bound students because SAT scores often determine the college or university they may attend and the scholarships they may receive. As a result, those who can financially afford to take prep courses for the SAT do. Over the past forty years research studies have found that SAT preparation increases test scores. These previous studies have been concerned only with increasing test scores. To date, no study has investigated if one method of preparation produces higher gains than another, nor has any study identified those students for whom preparation is most beneficial. A comparison of methods among existing studies is impossible because most reports do not include the methods or materials used. The contents of most SAT preparatory books deal primarily with a review of the mathematical concepts involved. However, an inspection of several SAT items reveals that the SAT tests more than mere rote calculations and algebraic manipulations--it tests "understanding," "application," and "nonroutine" methods of problem solving. Therefore, the present study was proposed to examine and assess the effectiveness of two methods of student preparation for the SAT-M: the first method of preparation explored content review, solving each item in a rigid traditional manner, and the second method of preparation examined the use of flexible problem solving strategies to answer the items rather than using routine mathematical manipulations. Sixty-two juniors and seniors participated in the study. The results of the study showed that the students taught test-taking strategies scored significantly better than the control group. However, this strategies group did not score significantly better than the group who was taught content. The content group did not score significantly better than the control group. This indicates that students could benefit from instruction in flexible, nonroutine methods of solving SAT-M items efficiently.
A CRITICAL EDITION OF THE FIRST TWO MONTHS OF W. B. YEATS'S AUTOMATIC SCRIPT (IRELAND).
ADAMS, STEVE LAMAR., Florida State University
William Butler Yeats's involvement in the esoteric and the occult has attracted considerable interest in the past decade, but much remains unknown about his philosophical development during the period of his life when he was engaged in the most profound spiritual or psychical investigation or experiment of his brilliant career, an experiment which gave birth to A Vision. Often described as the most important work in the canon to the understanding of his art and thought if not his life, this ambitious work represents Yeats's attempt to explain the basic psychological polarities of the human personality, the course of Western civilization, and the evolution and movement of the soul after death. The cogency and gravity of the experiment or investigation which produced a book of these epic proportions cannot be underestimated; indeed, the contents of this well-recorded experiment may well be the most significant body of unexplored Yeats material. The fundamental aim of this study, which includes only the first crucial months of the Automatic Script, is to present to the scholarly world for the first time a transcript of the often obscure, often complex body of materials that led directly to Yeats's most profound work of art. In order to place this manuscript in its proper biographical and critical context, explanatory notes have been included, explicating the essential features of the experiment (i.e., the recording of dates, the authors of questions and responses, the placement of diagrams and notes by George and Yeats, the physical state of the manuscript, etc.) and unraveling or spelling out the numerous references to Yeats's primary works, those appearing prior to as well as those growing directly out of the Automatic Script; special attention has been focused on those materials which were eventually embodied in the 1925 version of A Vision. An editorial introduction preceding the transcript demonstrates how this momentous experiment was the logical extension of a series of psychical investigations and, in much broader terms, the culmination of a spiritual odyssey that Yeats had begun almost as early as the days of his youth.
A critical edition of W. B. Yeats's automatic script, 11 March-30 December 1918.
Frieling, Barbara Johnston., Florida State University
Professor George Mills Harper writes in his recent book The Making of Yeats's 'A Vision': A Study of the Automatic Script that, despite his copious quotations from these unpublished manuscripts, "nothing but the whole will satisfy the truly involved reader." Perhaps the most comprehensive occult papers preserved in the history of psychical research, the 3627 existing pages of the Automatic Script are of extreme interest to Yeats scholars, not only as the source for A Vision but also as documentation of the creative collaboration between Yeats and his new wife George during the 450 sittings held between 5 Nov 1917 and 28 Mar 1920. This critical edition provides the complete text of that portion of the Automatic Script written during the Yeatses' first visit to Ireland following their marriage. (Under the direction of Professor Harper, Steve L. Adams edited the first two months of the Script as a doctoral dissertation in 1982, and Sandra Sprayberry is preparing the portion of the Script written between 2 Jan 1919 and 28 Mar 1920.) Included in this dissertation are an editorial introduction describing the methods used by the Yeatses in the automatic writing and its subsequent "codification"; the relationship of the Script to Yeats's 1918 poetry and plays; and the synthesis of his life-long involvement in the occult that Yeats achieves in the two versions of A Vision. Extensive endnotes relate the Automatic Script to Yeats's Card File and Vision notebooks as well as to his poetry, plays, and the two versions. Of special note are the emergence of the tower as a major symbol when the Yeatses first occupied Thoor Ballylee, and their growing conviction that their expected child would be the Irish Avatar. The 1918 Script demonstrates clearly that George Yeats was an equal partner in the amazing collaboration that produced A Vision and that provided her husband with metaphors for his later poetry.
A Female Education (original writing).
Foster, Patricia Ann
A Glorious Work: The American Missionary Association and Black North Carolinians, 1863-1880.
Jones, Maxine Deloris
The American Missionary Association played an important role in the slaves' transition to freedmen. This study examines the work of the AMA with black North Carolinians during the Civil War and Reconstruction. Life for Yankee teachers in the South is described, along with their motives for coming, the various tasks they performed and the Southern reaction to their presence and labors. Attention is given to the relief, religious and missionary activities of the Association, but the emphasis is on education. Freedmen's desire and eagerness to learn, black academic progress, curriculum, obstacles and discipline are discussed in chapters II, III, and IV. The role of black teachers in the AMA and the contributions of native blacks to the education movement are also delineated. In addition, the AMA's relationship with and its labors in the black community, and its work with the state's poor whites, are analyzed, adding valuable new information to Freedmen's Aid literature.
Development and fabrication of disease resistance protein in recombinant Escherichia coli
Sefli Sri Wahyu Effendi, Shih-I Tan, Chien-Hsiang Chang, Chun-Yen Chen, Jo-Shu Chang & I-Son Ng (ORCID: 0000-0003-1659-5814)
Cyanobacteria such as Spirulina produce C-phycocyanin (CPC), a water-soluble protein-associated pigment that is extensively used in the food and pharmaceutical industries. Other therapeutic proteins might exist in microalgal cells, about which there is limited knowledge. Such proteins/peptides with antibiotic properties are crucial due to the emergence of multi-drug-resistant pathogens. In addition, the native expression levels of such disease resistance proteins are low, hindering further investigation. Thus, screening and overexpression of such novel proteins is urgent and important. In this study, a protein identified as a putative disease resistance protein (DRP) in a Spirulina product mixture has been explored for the first time. To improve protein expression, DRP was cloned into the pET system, co-transformed with the pRARE plasmid for codon optimization, and significantly overexpressed in E. coli BL21(DE3) under induction with isopropyl-β-d-1-thiogalactopyranoside (IPTG). Furthermore, soluble DRP exhibited intense antimicrobial activity against predominant pathogens, with an inhibition zone of 1.59 to 1.74 cm obtained for E. coli. At a concentration of 4 mg/mL, DRP significantly elevated the growth of L. rhamnosus ZY up to twofold, suggesting probable prebiotic activity. Moreover, DRP showed potential as an effective antioxidant, and its scavenging ability for ROS was in the order hydroxyl > DPPH > superoxide radicals. In summary, a putative disease resistance protein (DRP) was identified, sequenced, cloned and overexpressed in E. coli as a functional protein. The expressed DRP showed potential antimicrobial and antioxidant properties, with promising therapeutic applications.
Microalgae, including the diatoms of Bacillariophyta, the green alga Chlorella sp. and the blue-green algae (cyanobacteria), serve as a natural carbon sink and are known as a sustainable feedstock for biodiesel and biofuel production. The protein-rich microalgal biomass is also known for the co-production of a number of high-value products, viz. carbohydrates, bioplastic polymers, cosmetics, and food additives (Li et al. 2018; Allen et al. 2018). Microalgae and cyanobacteria are naturally protein-rich (Teuling et al. 2019), and C-phycocyanin (CPC) is the dominant phycobiliprotein commonly seen in cyanobacteria (Eriksen 2008). CPC has been explored in pharmaceuticals as an antibacterial, anticancer and antioxidant agent and in health supplements and vitamins, mainly due to the increasing demand for alternative antimicrobial agents to counteract the rising antibiotic resistance in pathogens (Singh et al. 2011; Waghmare et al. 2016). The presence of carotenoids and chlorophylls alongside CPC is the major bottleneck in the CPC purification process; thus, the development of a pigment extraction cascade without any loss of essential proteins is vital (Marzorati et al. 2020). Extraction cascades have also been successfully applied to isolate fatty acids, including high amounts of PUFAs, from the spent biomass after CPC extraction (Imbimbo et al. 2019). Furthermore, microalgae and cyanobacteria might contain other therapeutic proteins/peptides with applications as novel drugs. However, the discovery, extraction and purification of such novel algal proteins in a sustainable way remains a critical issue.
Tandem mass spectrometry serves as a powerful tool for identifying proteins and studying the relationship between protein function and cellular behavior. Discovering proteins related to a specific function is essential for advancement in emerging technologies such as synthetic biology, which helps in solving global issues (Coon et al. 2005). For instance, the presence of asiaticoside in an ethyl acetate extract from the medicinal plant Centella asiatica Urban was confirmed by LC-MS (Gupta et al. 2018). The advent of high-throughput tandem mass spectrometry invigorated proteomics, the classification of the protein complement expressed by the genome of an organism (Wolters et al. 2001). The most common application of proteomics is in the medical sector, for therapeutics and for diagnosis by identifying novel disease biomarkers. Recently, Marchand and colleagues developed a non-natural amino acid system in E. coli that was integrated with proteomics (Marchand et al. 2019). Moreover, proteomics plays a vital role in revealing metabolism under different physiological stimulation and in discovering new proteins for unique applications. For example, a robust multiple copper oxidase from the electrogen Proteus hauseri ZMD44 was discovered and overexpressed in E. coli for application in gold recovery, because the organism exhibited tolerance to copper ions by automatically overexpressing the multiple copper oxidase (Ng et al. 2016; Tan et al. 2017).
Recombinant technology has become an essential tool for high-level production of heterologous proteins (Rosano et al. 2014). The choice of the host cell and its protein-synthesizing machinery is decisive in outlining the whole process. As a model organism, E. coli has been routinely used to produce heterologous proteins (Schlegel et al. 2017). The advantages of using E. coli over other organisms are well known: its doubling time in glucose-salts media is about 20 min, it easily reaches high cell density, and stationary phase can be attained in a few hours (Sezonov et al. 2007). Previous studies have reported that CPC from cyanobacteria was successfully expressed in E. coli (Zhao et al. 2006; Guan et al. 2007; Yu et al. 2016).
In this study, a putative disease resistance protein (DRP) was screened from a Spirulina product mixture and identified by MS/MS for the first time. Afterward, the whole DNA sequence of DRP was synthesized, cloned into pET21a(+) and expressed in E. coli. The optimization of DRP production involved codon-usage optimization and the effect of temperature. Furthermore, the novel functionalities of DRP were explored with regard to antimicrobial activity, prebiotic-promoting activity and antioxidant activity.
Protein identification from the natural product
The natural product, initially extracted from a Spirulina species, was purchased from Febico Bio-Tec, Taiwan. The powder was dissolved in deionized water to a concentration of 10 g/L, and the solution was centrifuged to collect the supernatant. The protein concentration of the supernatant was measured by Bradford assay (Bio-Rad, USA) with bovine serum albumin (BSA) as the protein standard, and was then adjusted to 1 g/L. The final solution was subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) to analyze the protein pattern, which was visualized by staining with Coomassie blue R-250. The targeted protein bands were sent for tandem MS (MS/MS) analysis.
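The Bradford quantification used above reduces to fitting a linear standard curve to the BSA readings and inverting it for the unknown. The sketch below shows that arithmetic; it is a minimal illustration with invented absorbance values and a hypothetical helper name, not data or code from the study.

```python
import numpy as np

# BSA standard curve: known concentrations [mg/mL] vs. A595 readings (toy values)
std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
std_a595 = np.array([0.00, 0.11, 0.22, 0.34, 0.44, 0.55])

slope, intercept = np.polyfit(std_conc, std_a595, 1)   # linear fit: A = m*c + b

def protein_conc(a595, dilution=1.0):
    """Protein concentration [mg/mL] of an unknown from its A595 reading,
    corrected for any dilution made before the assay."""
    return (a595 - intercept) / slope * dilution

print(protein_conc(0.30, dilution=10.0))   # e.g. a 10x-diluted supernatant
```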
Synthesis and cloning of DRP and co-transformation with pRARE plasmid
After identifying the amino acid sequence by MS/MS analysis, the DNA sequence of DRP was deduced by reverse translation using Vector NTI (Life Technologies, USA) as shown in Additional file 1: Figure S1, and then the entire gene sequence was synthesized by IDT (Coralville, USA). The DRP fragment was amplified by polymerase chain reaction and cloned into pET21a(+) plasmid at the restriction sites NdeI and XhoI (NEB, USA) (Additional file 1: Figure S2). The strains, plasmids, and primers used in this study are listed in Table 1.
Table 1 Strains, plasmids and primers used in this study
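Reverse translation, as performed above with Vector NTI, amounts to mapping each amino acid back to one codon from a chosen codon-usage table. The snippet below is a minimal sketch of the idea; the codon assignments are one common E. coli-biased choice, not the table Vector NTI actually uses, and handling of degenerate or ambiguous codons is omitted.

```python
# One illustrative codon per amino acid (an assumed E. coli-biased table,
# not the one used by Vector NTI).
PREFERRED_CODON = {
    "A": "GCG", "R": "CGT", "N": "AAC", "D": "GAT", "C": "TGC",
    "Q": "CAG", "E": "GAA", "G": "GGC", "H": "CAT", "I": "ATT",
    "L": "CTG", "K": "AAA", "M": "ATG", "F": "TTT", "P": "CCG",
    "S": "AGC", "T": "ACC", "W": "TGG", "Y": "TAT", "V": "GTG",
}

def reverse_translate(peptide):
    """Back-translate a protein sequence into one possible coding DNA sequence."""
    return "".join(PREFERRED_CODON[aa] for aa in peptide.upper())

print(reverse_translate("MKLV"))   # -> ATGAAACTGGTG
```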
Culture conditions and overexpression of recombinant DRP
The disease resistance protein (DRP) was cloned and expressed in E. coli BL21(DE3). First, recombinant colonies were grown on LB plates (1.5% tryptone, 1.5% NaCl and 0.5% yeast extract) with antibiotics (50 mg/L ampicillin for pET21a(+)-DRP and 12.5 mg/L chloramphenicol for pRARE) at 37 °C for 12 h. Next, a single colony was inoculated in LB medium with the appropriate antibiotics for pre-culture at 37 °C for 12 h with shaking at 200 rpm. The cells were then diluted 1:100 in LB medium with antibiotics and cultured for about 3 h. Growth was monitored by measuring the biomass as optical density at 600 nm (OD600) using a spectrophotometer (SpectraMax 340, Molecular Devices, USA). When the OD600 reached 0.6 to 0.8, the cells were induced by the addition of 0.1 mM IPTG and further incubated at 25 or 30 °C for up to 12 h. Finally, the cells were harvested by centrifugation at 12,000×g for 10 min and washed twice with deionized water. The OD was then adjusted to an appropriate value, and the cells were disrupted using a One-Shot high-pressure crusher to obtain soluble DRP. The whole-cell proteins and the soluble DRP were analyzed by SDS-PAGE.
Antibacterial activity
Antibacterial activity was determined by the agar well diffusion method with plates seeded with pathogen strains (Rani et al. 2018). Aeromonas hydrophila, Bacillus cereus, Escherichia coli, and Staphylococcus aureus were inoculated into 20 mL of LB agar at 2% (v/v). For testing the antibacterial activity, various concentrations of soluble DRP were absorbed onto filter paper (diameter of approximately 0.25 cm) placed on the cooled agar plate. The size of the bacterial inhibition zone for each concentration was measured after culturing at 37 °C for 6 h. The positive control was CPC from the natural product purchased from Febico Bio-Tec, Taiwan. All experiments were performed independently in duplicate.
Prebiotic activity
The prebiotic activity test was carried out by culturing 5% (v/v) of Lactobacillus rhamnosus ZY in 20 mL of MRS medium in a 100-mL Erlenmeyer flask (Lai et al. 2019). Each culture flask was supplemented with different amounts of protein from wild-type BL21(DE3) and from the recombinant strain expressing DRP. The cells were cultured at 37 °C with shaking at 200 rpm for 12 h. After that, samples were taken and diluted 10^7-fold with deionized water before plating on MRS agar. The colony-forming units (CFU) of the probiotic were counted after incubation at 37 °C for 24 h.
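The viable-count arithmetic behind this assay is straightforward: the plate count is scaled back up by the dilution factor and divided by the plated volume. The helper below is a hypothetical sketch; the colony count and volumes are toy numbers chosen only to mirror the 6.2 × 10^8 CFU figure reported in the Results.

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Viable count: plate colonies scaled by dilution and plated volume."""
    return colonies * dilution_factor / plated_volume_ml

# e.g. 62 colonies from 1 mL of a 10^7-fold dilution
print(cfu_per_ml(62, 1e7, 1.0))   # -> 6.2e8 CFU/mL
```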
Antioxidant activity
DRP samples were prepared at different protein concentrations (0.1 to 4 mg/mL). Phycocyanin (CPC), one of the functional proteins, was extracted from the commercial product (Febico Bio-Tec, Taiwan) and diluted to 1 mg/mL as a control solution.
DPPH radical scavenging assay
The α,α-diphenyl-β-picrylhydrazyl (DPPH) radical scavenging assay was performed according to a previous study (Xia et al. 2011) with some modifications. First, 0.039 g of DPPH was added to 1 mL of anhydrous alcohol, and the DPPH solution was diluted 100 times with anhydrous alcohol. Then, 100 μL of DPPH solution was mixed with 100 μL of protein sample in a centrifuge tube. Samples were incubated in darkness at 25 °C for 30 min. After the reaction was complete, the precipitate was removed by centrifugation at 12,000×g for 1 min. A control was measured by replacing the protein sample with water. The reaction mixture (Ai) and control (A0) samples were transferred into a 96-well microplate and the absorbance was measured at a wavelength of 517 nm. The DPPH scavenging rate was determined using the following equation:
$$ \mathrm{DPPH\ scavenging\ rate}\ (\%) = \left[ 1 - \frac{A_i - A_0}{A_i} \right] \times 100\% $$
OH− radical scavenging assay
The hydroxyl radical scavenging assay was conducted according to the Fenton reaction (Sies et al. 1993; Wang et al. 2017; Huang et al. 2019) with some modifications. First, 0.085 g of FeSO4·7H2O was added to 50 mL of H2O2 to prepare a 6 mM FeSO4(aq) solution. Then, 200 μL of FeSO4(aq), 100 μL of sodium salicylate solution (10 mM), and 20 μL of protein sample were added to a centrifuge tube, mixed well and incubated for 30 min. The hydroxyl radical reacts with salicylic acid to form 2,3-dihydroxybenzoic acid, which absorbs at 510 nm. After the reaction was complete, the precipitate was removed by centrifugation at 2000×g for 5 min. The reaction mixture was transferred into a 96-well microplate and the absorbance Ai was measured. The background absorbance Aj was measured by replacing the protein sample with water, and the blank absorbance A0 was measured by replacing the sodium salicylate solution with water. The following formula was used to calculate the scavenging rate of the hydroxyl radical (OH−):
$$ \mathrm{OH^-\ scavenging\ rate}\ (\%) = \left[ 1 - \frac{A_i - A_0}{A_j} \right] \times 100\% $$
O2− radical scavenging assay
The superoxide radical scavenging assay was performed as reported previously (Patel et al. 2018) with some modifications. First, a 10 mM pyrogallol (C6H6O3) solution in 10 mM HCl and a 50 mM Tris-HCl buffer at pH 8.2 were prepared. Then, 500 μL of Tris-HCl buffer, 470 μL of distilled water, and 10 μL of protein sample were added to a centrifuge tube, mixed well and incubated for 20 min at 25 °C. Next, 200 μL of the mixture was added to 20 μL of 10 mM pyrogallol solution preheated to 25 °C. The reaction mixture was transferred to a 96-well microplate and the kinetic absorbance (Ai) was measured at a wavelength of 325 nm for 2 min. The oxidation rate of pyrogallol, ΔA, was estimated as the change in absorbance per minute in the linear range. The auto-oxidation rate of pyrogallol, ΔA0, was estimated in the same way by replacing the protein sample with pyrogallol solution. The following formula was used to calculate the scavenging rate of the superoxide anion (O2−):
$$ \mathrm{O_2^-\ scavenging\ rate}\ (\%) = \left( 1 - \frac{\Delta A}{\Delta A_0} \right) \times 100\% $$
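Under the three definitions above, the scavenging rates reduce to ratios of absorbance readings and, for the superoxide assay, of kinetic slopes. The helpers below implement the formulas exactly as written; the function names and example readings are invented for illustration, and the slopes are taken as linear fits over the 2-min kinetic trace.

```python
import numpy as np

def dpph_scavenging(a_i, a_0):
    """DPPH scavenging rate (%): a_i = reaction mixture, a_0 = water control."""
    return (1.0 - (a_i - a_0) / a_i) * 100.0

def hydroxyl_scavenging(a_i, a_0, a_j):
    """OH- scavenging rate (%): a_i = reaction mixture, a_0 = blank without
    sodium salicylate, a_j = background without the protein sample."""
    return (1.0 - (a_i - a_0) / a_j) * 100.0

def superoxide_scavenging(t_min, abs_sample, abs_auto):
    """O2- scavenging rate (%) from two kinetic A325 traces: the oxidation
    rates (dA/dt) are estimated as linear-fit slopes over the traces."""
    d_a = np.polyfit(t_min, abs_sample, 1)[0]    # slope with protein sample
    d_a0 = np.polyfit(t_min, abs_auto, 1)[0]     # pyrogallol auto-oxidation
    return (1.0 - d_a / d_a0) * 100.0

t = np.linspace(0.0, 2.0, 13)                    # minutes, toy kinetic trace
print(dpph_scavenging(0.80, 0.55))               # toy readings throughout
print(hydroxyl_scavenging(0.62, 0.10, 0.95))
print(superoxide_scavenging(t, 0.30 + 0.04 * t, 0.30 + 0.05 * t))
```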
Identification of disease resistance protein from the natural product
The native proteins from the commercial product were analyzed by SDS-PAGE (Fig. 1). The results showed a dominant band at a molecular weight of 17 kDa (NP-17), which was identified as the CPC beta subunit with 30% sequence coverage by tandem MS analysis (Table 2). It is reasonable that the dominant band was CPC, because Spirulina species are known as a rich source of bioactive products, one of which is CPC (Demay et al. 2019). Besides, CPC has several functional properties, such as antioxidative, anti-inflammatory, anticancer and immune-enhancement activities, liver and kidney protection, and other pharmacological effects (Jiang et al. 2017).
Fig. 1 Identification of protein expression in the natural product from Spirulina
Table 2 MASCOT analysis of protein identification and full sequence of target protein
Another protein, at a molecular weight of 30 kDa (NP-30), was observed and further identified as a putative disease resistance protein (DRP) (Table 2). Interestingly, a homologous protein of DRP is present in a higher plant, Dichanthelium oligosanthes, according to the OEL30137 sequence in the GenBank database. We thus considered that DRP was present in the natural product purchased from Febico Bio-Tec. On the other hand, the coverage of DRP from the MS/MS result was only 5%, because it was present in trace amounts in the natural product and had never been reported previously. Consequently, the whole DNA sequence of DRP was synthesized artificially and cloned into the pET21a(+) plasmid for overexpression in E. coli, and the functional properties of the overexpressed DRP were further explored.
Over-expression of recombinant DRP in E. coli
The DRP gene acquired by DNA synthesis was amplified by PCR (Fig. 2a) and cloned into the pET21a(+) vector. Recombinant gene expression was driven by the T7/lac promoter and carried out in E. coli BL21(DE3). Protein expression induced by IPTG was evaluated by SDS-PAGE analysis; the recombinant protein was not overexpressed under the experimental conditions with the single plasmid (Fig. 2b).
Fig. 2 a Gel electrophoresis analysis of the DRP sequence by colony PCR. N1 and N2 are negative controls using E. coli and the pET21a(+) plasmid, respectively, in the PCR reaction. P indicates the positive control with the synthesized DRP DNA as template, while numbers 1-4 represent individual colonies of pET21a-DRP in E. coli. M is the DNA molecular weight marker. The targeted size is 832 bp. b SDS-PAGE analysis of DRP expressed alone and co-expressed with the pRARE vector for rare-codon optimization. S and WC represent proteins from the soluble fraction and the whole cell, respectively. The red arrows indicate the size of DRP
Next, to enhance the expression level of DRP in E. coli, the pRARE plasmid was co-transformed with pET21a-DRP as a dual-plasmid system, and the cells were cultured at 37 °C for 12 h. The pRARE plasmid improves heterologous protein expression because it encodes the tRNAs for rare codons, thereby optimizing codon usage (Liu et al. 2006). As expected, the recombinant protein was successfully expressed at approximately 30 kDa; however, most DRP aggregated into inclusion bodies (Fig. 2b). To express DRP as a soluble protein, the post-induction temperature was reduced from 37 °C to 30 °C or 25 °C, because inclusion bodies form through improper folding or conformation of the overexpressed protein at high temperature, and induction at low temperature can assist proper folding. This is a common phenomenon when using E. coli to produce heterologous proteins (Yu et al. 2016). As a result (Fig. 3), protein expression at the lower temperatures was maintained at the same level as at 37 °C, while the soluble fraction was significantly enhanced, with the highest amount of soluble DRP obtained at 25 °C. Owing to the limited knowledge of DRP from the commercial Spirulina extract, it is hard to evaluate the effect of post-translational modification of DRP at this juncture; the DRP expressed in E. coli carries no post-translational modifications. Thus, we proceeded to investigate the functional properties of DRP.
Fig. 3 Optimization of DRP protein expression in E. coli. Lane 1: M; lane 2: no IPTG induction; lanes 3 to 7: with 0.1 mM IPTG. M, X, W25, W30, C25, C30, and WT denote the marker, the non-induced control, whole cells cultured at 25 °C, whole cells cultured at 30 °C, crude enzyme from cells cultured at 25 °C, crude enzyme from cells cultured at 30 °C, and wild-type BL21(DE3), respectively. DRP is indicated by the red arrow
Antibacterial activity of DRP
The recombinant DRP present in the cell-free extract was screened for antimicrobial activity against relevant pathogens: the Gram-positive S. aureus and B. cereus, and the Gram-negative E. coli and A. hydrophila (Additional file 1: Table S1, Fig. 4). Among the pathogen strains, B. cereus was the most sensitive to DRP expressed at 30 °C and 25 °C, with clearance zones of 0.99-1.91 cm and 1.01-1.86 cm, respectively. Interestingly, although S. aureus is also Gram-positive, it was more resistant and showed the smallest clearance zone (0.83-1.11 cm); its resistance towards CPC was similar. In contrast, other studies have reported that S. aureus is highly sensitive, compared with Gram-negative bacteria, to the exopolysaccharide of S. thermophilus GST-6 (Zhang et al. 2016). We consider that the antibacterial compound contained in soluble DRP differs somewhat from the one reported by Zhang et al., thus affecting the inhibition capability against different pathogen strains. Furthermore, 0.25 mg/mL of DRP expressed at 25 °C and 30 °C showed clearance zones of 1.59 cm and 1.74 cm for E. coli, which is a highly virulent pathogen in nature (Silhavy et al. 2010; Najdenski et al. 2013). The extracted commercial CPC showed the strongest inhibition of E. coli (1.74 cm), and the exopolysaccharides from the cyanobacterium Nostoc commune (Quan et al. 2015) were also highly active against E. coli. A high proportion of strains producing such antibacterial compounds might be associated with an ecological role, probably displaying defensive measures to maintain their niche or allowing the invasion of strains into established microbial communities (Gillor et al. 2008). On the other hand, both Spirulina CPC and the recombinant DRP showed substantial increases in inhibition zones with increasing protein concentration. The antibacterial activity of DRP at 25 °C and 30 °C is in the order B. cereus > E. coli > A. hydrophila > S. aureus.
Fig. 4 Effect of added protein on antibacterial activity against Aeromonas hydrophila (red bars), Bacillus cereus (green bars), Escherichia coli (grey bars), and Staphylococcus aureus (white bars). C-PC is the positive control, while DRP_25 and DRP_30 represent samples incubated at 25 °C and 30 °C, respectively. All protein amounts are 0.25 mg
Prebiotic activity of DRP
Figure 5 demonstrates the effect of different protein supplements on the growth of the probiotic strain L. rhamnosus ZY, evaluated from the CFU of L. rhamnosus ZY on MRS agar plates. After 24 h, the CFU of L. rhamnosus ZY without protein addition was 6.2 × 10^8. It is interesting to note that low levels of DRP supplementation successfully promoted prebiotic activity, even more than CPC at the same concentration (0.5 to 2 mg). The highest CFU values were attained by adding 4 mg/mL of CPC and of soluble DRP, which elevated the cell count up to twofold (17.5 × 10^8) and 1.5-fold (16.0 × 10^8), respectively. As with the antibacterial activity, the prebiotic activity was concentration-dependent. By contrast, the addition of WT-BL21(DE3) cell-free extract appeared to be inhibitory, since the CFU declined as the amount of WT-BL21(DE3) soluble extract increased. Furthermore, the prebiotic activity of DRP was higher than that of the proteins obtained from Chlorella vulgaris FSP-E and Chlorella sorokiniana, since both of those algal proteins attained only 7.8 to 8.7 × 10^8 CFU even at a high protein concentration (Lai et al. 2019).
Fig. 5 Effect of the amount of protein added from BL21(DE3) (white bars), DRP expressed at 25 °C (black bars), and DRP expressed at 30 °C (grey bars) on the prebiotic activity of L. rhamnosus ZY
Antioxidant activity of DRP
The antioxidant potential of DRP was assessed by evaluating its scavenging capacity for DPPH, hydroxyl, and superoxide radicals over a wide range of protein concentrations. DPPH is a stable nitrogen-centered free radical commonly used in free-radical scavenging assays (Sánchez-Moreno et al. 2002; Mahmoudi et al. 2020). Stable DPPH exhibits a violet color, and when the DPPH radical accepts an electron from an antioxidant compound, it turns yellow; the degree of discoloration indicates the scavenging potential of the antioxidant extract. The decrease in absorbance most likely reflects the reduction of DPPH radicals by antioxidant molecules in the protein extract, such as phenolic compounds. The results in Fig. 6a show that the antioxidant activity of DRP expressed at 30 °C was higher than that at 25 °C. The activity of both samples improved with increasing DRP concentration, with 4 mg/mL showing the highest rates, 32.7% and 29.6%, respectively. The increase in scavenging activity of the protein extract might be due to the presence of antioxidant compounds, which are good electron donors (Easwar and Viswanatha 2020).
Fig. 6 Functional tests based on (a) DPPH, (b) hydroxyl, and (c) superoxide radical scavenging activity of DRP protein expressed at 25 °C (black bars) and 30 °C (grey bars)
The hydroxyl radical is an extremely reactive free radical formed in biological systems. It has been implicated as a highly damaging species in free-radical pathology, capable of damaging almost every molecule found in living cells; it can react with nucleotides in DNA and cause strand breakage (Kaur et al. 2019). In this study, soluble DRP effectively scavenged the reactive hydroxyl radical, with antioxidant activity reaching 56.4% and 54.0% for expression at 25 °C and 30 °C, respectively (Fig. 6b). The hydroxyl radical scavenging activity of soluble DRP was higher than in a previous study that used a Spirulina extract (Zayadi et al. 2020).
The superoxide radical scavenging results are shown in Fig. 6c. Among the various concentrations of soluble DRP, only the high concentrations (2 and 4 mg/mL) scavenged superoxide radicals, at about 0.7-6.2%, while soluble DRP at 0.1 and 0.5 mg/mL had no superoxide radical scavenging activity. According to a previous study, antioxidant activity against the superoxide radical might be supported by CPC (Santiago-Morales et al. 2018).
Comparing the antioxidant activities of DRP and of CPC from the commercial product as a control (Table 3), the extracted CPC revealed higher antioxidant potential than crude DRP in all assessments. CPC and DRP displayed their highest activities against hydroxyl radicals, at 90.3% and 56.4%, respectively. The difference between CPC and DRP in DPPH scavenging activity was not significant, whereas the superoxide scavenging activity of CPC was tenfold higher than that of DRP, even though the protein concentration of DRP was about 4 times higher than that of CPC. The results suggest that the scavenging capability was truly influenced by pure phycocyanin, even at a lower amount. The antioxidant potential of crude protein from E. coli BL21(DE3) was also assessed; however, no antioxidant activity was detected (data not shown).
Table 3 Comparison of the antioxidant activities of different protein sources
Furthermore, earlier studies on various species have reported high antioxidant activity. Zhang and colleagues evaluated the antioxidant activity of Lactobacillus plantarum C88 (2013) and of microalgal strains (2019), including Chlorococcum sp., Scenedesmus sp., and C. pyrenoidosa FACHB-9. The microalgal strains displayed higher scavenging ability than L. plantarum C88 even at a fourfold lower concentration, scavenging about 36.5-58% of DPPH and 63.1-77.5% of OH− radicals. It is worth noting that most of the studies cited used a purified product; however, to the best of our knowledge, purification of the recombinant DRP is laborious and expensive. Recently, researchers have applied several techniques to modify the biological activities of polysaccharides and CPC and to enhance their antioxidant ability. A Box-Behnken design was used to improve the antioxidant potential of the microalga P. versicolor NCC466, which showed antioxidant activities of 88.7% for OH− and 87.4% for O2− (Gammoudi et al. 2019). Yu et al. (2016) scavenged the three free radicals (OH−, DPPH, and O2−) with Synechocystis PCC6803 by combining Plackett-Burman and Box-Behnken designs, reaching scavenging rates of up to 78, 83, and 64%, respectively.
For a cost-effective process and for screening a novel antioxidant candidate, this study characterized the antioxidant activity of DRP using cell-free extract protein, which reduces process costs by avoiding intricate purification techniques. The crude extract with DRP showed its highest antioxidant activity against hydroxyl radicals. Nevertheless, purification of the protein is still needed in further work to guarantee the safety of DRP use. Another viable strategy is to over-express DRP in the probiotic E. coli Nissle 1917, which produces neither hemolysin nor other toxins.
We have demonstrated a process flow for the discovery of a novel protein from a natural product and its subsequent identification as a disease resistance protein (DRP). The overexpression of DRP in recombinant E. coli was facilitated by the pRARE plasmid for codon optimization. The crude extract containing DRP displayed potent inhibitory effects against common bacterial pathogens and showed prebiotic activity on the growth of L. rhamnosus ZY cells. DRP also showed strong antioxidant activity against hydroxyl radicals. These results reveal that DRP is a powerful candidate for nullifying free radicals.
The authors approve the availability of the data and materials for publication of the manuscript.
Allen J, Unlu S, Demirel Y, Black P, Riekhof W (2018) Integration of biology, ecology and engineering for sustainable algal-based biofuel and bioproduct biorefinery. Bioresour Bioprocess 5(1):47
Coon JJ, Syka JE, Shabanowitz J, Hunt DF (2005) Tandem mass spectrometry for peptide and protein sequence analysis. Biotechniques 38(4):519–523
Demay J, Bernard C, Reinhardt A, Marie B (2019) Natural products from Cyanobacteria: focus on beneficial activities. Mar Drugs 17(6):320
Easwar RD, Viswanatha CK (2020) Changes in the antioxidant intensities of seven different soybean (Glycine max (L.) Merr.) cultivars during drought. J Food Biochem 44(2):e13118
Eriksen NT (2008) Production of phycocyanin—a pigment with applications in biology, biotechnology, foods and medicine. Appl Microbiol Biotechnol 80(1):1–4
Gammoudi S, Athmouni K, Nasri A, Diwani N, Grati I, Belhaj D, Bouaziz-Ketata H, Fki L, El Feki A, Ayadi H (2019) Optimization, isolation, characterization and hepatoprotective effect of a novel pigment-protein complex (phycocyanin) producing microalga: Phormidium versicolor NCC-466 using response surface methodology. Int J Biol Macromol 137:647–656
Gillor O, Etzion A, Riley MA (2008) The dual role of bacteriocins as anti-and probiotics. Appl Microbiol Biotechnol 81(4):591–606
Guan X, Qin S, Su Z, Zhao F, Ge B, Li F, Tang X (2007) Combinational biosynthesis of a fluorescent cyanobacterial holo-α-phycocyanin in Escherichia coli by using one expression vector. Appl Biochem Biotechnol 142(1):52–59
Gupta S, Bhatt P, Chaturvedi P (2018) Determination and quantification of asiaticoside in endophytic fungus from Centella asiatica (L.) Urban. World J Microbiol Biotechnol 34(8):111
Huang G, Lin Y, Zhang L, Yan Z, Wang Y, Liu Y (2019) Synthesis of sulfur-selenium doped carbon quantum dots for biological imaging and scavenging reactive oxygen species. Sci Rep 9(1):1–9
Imbimbo P, Romanucci V, Pollio A, Fontanarosa C, Amoresano A, Zarrelli A, Olivieri G, Monti DM (2019) A cascade extraction of active phycocyanin and fatty acids from Galdieria phlegrea. Appl Microbiol Biotechnol 103(23–24):9455–9464
Jiang L, Wang Y, Yin Q, Liu G, Liu H, Huang Y, Li B (2017) Phycocyanin: a potential drug for cancer treatment. J Cancer 8(17):3416
Kaur P, Purewal SS, Sandhu KS, Kaur M (2019) DNA damage protection: an excellent application of bioactive compounds. Bioresour Bioprocess 6(1):2
Lai YC, Chang CH, Chen CY, Chang JS, Ng IS (2019) Towards protein production and application by using Chlorella species as circular economy. Bioresour Technol 289:121625
Li SY, Ng IS, Chen PT, Chiang CJ, Chao YP (2018) Biorefining of protein waste for production of sustainable fuels and chemicals. Biotechnol Biofuels 11(1):1–5
Liu Z, Zhen Z, Zuo Z, Wu Y, Liu A, Yi Q, Li W (2006) Probing the catalytic center of porcine aminoacylase 1 by site-directed mutagenesis, homology modeling and substrate docking. J Biochem 139:421–430
Mahmoudi R, Aghaei S, Salehpour Z, Mousavizadeh A, Khoramrooz SS, Taheripour SM, Christiansen G, Baneshi M, Karimi B, Bardania H (2020) Antibacterial and antioxidant properties of phyto-synthesized silver nanoparticles using Lavandula stoechas extract. Appl Organomet Chem 34(2):e5394
Marchand JA, Neugebauer ME, Ing MC, Lin CI, Pelton JG, Chang MC (2019) Discovery of a pathway for terminal-alkyne amino acid biosynthesis. Nature 567(7748):420–424
Marzorati S, Schievano A, Idà A, Verotta L (2020) Carotenoids, chlorophylls and phycocyanin from Spirulina: supercritical CO 2 and water extraction methods for added value products cascade. Green Chem 22(1):187–196
Najdenski HM, Gigova LG, Iliev II, Pilarski PS, Lukavský J, Tsvetkova IV, Ninova MS, Kussovski VK (2013) Antibacterial and antifungal activities of selected microalgae and cyanobacteria. Int J Food Sci Technol 48(7):1533–1540
Ng IS, Ye C, Li Y, Chen BY (2016) Insights into copper effect on Proteus hauseri through proteomic and metabolic analyses. J Biosci Bioeng 121(2):178–185
Patel HM, Rastogi RP, Trivedi U, Madamwar D (2018) Structural characterization and antioxidant potential of phycocyanin from the cyanobacterium Geitlerinema sp. H8DM. Algal Res 32:372–383
Quan Y, Yang S, Wan J, Su T, Zhang J, Wang Z (2015) Optimization for the extraction of polysaccharides from Nostoc commune and its antioxidant and antibacterial activities. J Taiwan Inst Chem Eng 52:14–21
Rani RP, Anandharaj M, Ravindran AD (2018) Characterization of a novel exopolysaccharide produced by Lactobacillus gasseri FR4 and demonstration of its in vitro biological properties. Int J Biol Macromol 109:772–783
Rosano GL, Ceccarelli EA (2014) Recombinant protein expression in Escherichia coli: advances and challenges. Front Microbiol 5:172
Sánchez-Moreno C (2002) Methods used to evaluate the free radical scavenging activity in foods and biological systems. Food Sci Technol Int 8(3):121–137
Santiago-Morales IS, Trujillo-Valle L, Márquez-Rocha FJ, Hernández JF (2018) Tocopherols, phycocyanin and superoxide dismutase from microalgae: as potential food antioxidants. Appl Food Biotechnol 5(1):19–27
Schlegel S, Genevaux P, de Gier JW (2017) Isolating Escherichia coli strains for recombinant protein production. Cell Mol Life Sci 74(5):891–908
Sezonov G, Joseleau-Petit D, d'Ari R (2007) Escherichia coli physiology in Luria-Bertani broth. J Bacteriol 189(23):8746–8749
Sies H (1993) Strategies of antioxidant defense. Eur J Biochem 215(2):213–219
Silhavy TJ, Kahne D, Walker S (2010) The bacterial cell envelope. Cold Spring Harb Perspec Biol 2(5):a000414
Singh RK, Tiwari SP, Rai AK, Mohapatra TM (2011) Cyanobacteria: an emerging source for drug discovery. J Antibiot 64(6):401–412
Tan SI, Ng IS, Yu YJ (2017) Heterologous expression of an acidophilic multicopper oxidase in Escherichia coli and its applications in biorecovery of gold. Bioresour Bioprocess 4(1):1–10
Tegel H, Tourle S, Ottosson J, Persson A (2010) Increased levels of recombinant human proteins with the Escherichia coli strain Rosetta (DE3). Protein Expr Purif 69(2):159–167
Teuling E, Schrama JW, Gruppen H, Wierenga PA (2019) Characterizing emulsion properties of microalgal and cyanobacterial protein isolates. Algal Res 39:101471
Waghmare AG, Salve MK, LeBlanc JG, Arya SS (2016) Concentration and characterization of microalgae proteins from Chlorella pyrenoidosa. Bioresour Bioprocess 3(1):16
Wang Y, Feng C, Yan L (2017) Enhancement of emerging contaminants removal using fenton reaction driven by H2O2− producing microbial fuel cell. Chem Eng J 307:679–686
Wolters DA, Washburn MP, Yates JR (2001) An automated multidimensional protein identification technology for shotgun proteomics. Anal Chem 73(23):5683–5690
Xia F, Fan J, Zhu M, Tong H (2011) Antioxidant effects of a water-soluble proteoglycan isolated from the fruiting bodies of Pleurotus ostreatus. J Taiwan Inst Chem Eng 42(3):402–407
Yu P, Li P, Chen X, Chao X (2016) Combinatorial biosynthesis of Synechocystis PCC6803 phycocyanin holo-α-subunit (CpcA) in Escherichia coli and its activities. Appl Microbiol Biotechnol 100(12):5375–5388
Zayadi RA, Bakar FA (2020) Comparative study on stability, antioxidant and catalytic activities of bio-stabilized colloidal gold nanoparticles using microalgae and cyanobacteria. J Environ Chem Eng 10:103843
Zhang L, Liu C, Li D, Zhao Y, Zhang X, Zeng X, Yang Z, Li S (2013) Antioxidant activity of an exopolysaccharide isolated from Lactobacillus plantarum C88. Int J Biol Macromol 54:270–275
Zhang J, Liu L, Ren Y, Chen F (2019) Characterization of exopolysaccharides produced by microalgae with antitumor activity on human colon cancer cells. Int J Biol Macromol 128:761–767
Zhao KH, Su P, Li J, Tu JM, Zhou M, Bubenzer C, Scheer H (2006) Chromophore attachment to phycobiliprotein β-subunits phycocyanobilin: Cysteine-β84 phycobiliprotein lyase activity of CpeS-like protein from Anabaena sp. PCC7120. J Biol Chem 281(13):8573–8581
The authors are grateful for the financial support received from the Ministry of Science and Technology (MOST 108-2218-E-006-006 and MOST 109-2218-E-006-015) in Taiwan.
Sefli Sri Wahyu Effendi and Shih-I Tan contributed equally to this work
Department of Chemical Engineering, National Cheng Kung University, Tainan, 701, Taiwan
Sefli Sri Wahyu Effendi, Shih-I Tan, Chien-Hsiang Chang, Jo-Shu Chang & I-Son Ng
University Center for Bioscience and Biotechnology, National Cheng Kung University, Tainan, Taiwan
Chun-Yen Chen
Department of Chemical and Materials Engineering, College of Engineering, Tunghai University, Taichung, Taiwan
Jo-Shu Chang
Research Center for Energy Technology and Strategy, National Cheng Kung University, Tainan, Taiwan
Sefli Sri Wahyu Effendi
Shih-I Tan
Chien-Hsiang Chang
I-Son Ng
ISN and SIT designed the experiments and analyzed the data; SSWE and SIT performed all of the experiments; ISN, SSWE and SIT wrote the manuscript; CHC, JSC and CYC gave conceptual suggestions. All authors read and approved the final manuscript.
Correspondence to I-Son Ng.
All the authors have read and agreed to the ethical requirements for publishing the manuscript.
Figure S1. The deduced DNA sequence of DRP. Figure S2. Gel electrophoresis analysis of the construction of pET21a-DRP by colony PCR. N1 and N2 are negative controls prepared by adding a W3110 colony and the pET21a plasmid, respectively, to the PCR reaction. P indicates the positive control with the synthesized DRP DNA added as template. Numbers 1 to 4 represent individual colonies. The targeted size is 832 bp. Table S1. Inhibition zone results of DRP protein against different pathogen strains.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Effendi, S.S.W., Tan, SI., Chang, CH. et al. Development and fabrication of disease resistance protein in recombinant Escherichia coli. Bioresour. Bioprocess. 7, 57 (2020). https://doi.org/10.1186/s40643-020-00343-5
Disease resistance protein
Recombinant technology
Rare codon
*ADD_MASS
coid, entype, enid, $m_{add}$, distribution

entype options: N, G, P, PS

$m_{add}$: Added mass or added mass per unit area (constant, CURVE or FUNCTION; options: constant, fcn)

distribution: Mass distribution type

0 $\rightarrow$ node mass weighted distribution ($m_{add}$ defines total mass)

1 $\rightarrow$ area weighted distribution ($m_{add}$ defines total mass)

2 $\rightarrow$ area weighted distribution ($m_{add}$ defines mass per unit area)

3 $\rightarrow$ directional area weighted distribution ($m_{add}$ defines mass per unit area)
This command is used to add mass to a node or geometrical region.
Directional distribution
The option distribution=3 can be used to model added mass effects for bodies submerged in a liquid. The added node mass is then a vector $\mathbf{v}_m = \left[ v_{m,x} \; v_{m,y} \; v_{m,z} \right]^T$:
$\displaystyle{ \mathbf{v}_m = \left\{ \begin{array}{c} v_{m,x} \\ v_{m,y} \\ v_{m,z} \end{array} \right\} = m_{add} \cdot A \cdot \hat{\mathbf{n}} = m_{add} \cdot A \cdot \left\{ \begin{array}{c} n_x \\ n_y \\ n_z \end{array} \right\}}$
where $A$ is the surface area represented by the node and $\hat{\mathbf{n}}$ is the surface normal vector at the node. That is, the added mass effect is assumed to be proportional to the area (of the body) projected in the acceleration direction. The node acceleration $\mathbf{a}$ at a given node force $\mathbf{F} = \left[ F_x \; F_y \; F_z \right]^T$ becomes:
$\displaystyle{ \mathbf{a} = \left\{ \begin{array}{c} a_x \\ a_y \\ a_z \end{array} \right\} = \left\{ \begin{array}{c} F_x / (m + v_{m,x}) \\ F_y / (m + v_{m,y}) \\ F_z / (m + v_{m,z}) \end{array} \right\} = \left\{ \begin{array}{c} F_x / (m + m_{add} A n_x) \\ F_y / (m + m_{add} A n_y) \\ F_z / (m + m_{add} A n_z) \end{array} \right\} }$
where $m$ is the physical node mass. Note that it remains for the user to define $m_{add}$. As a rule of thumb it should be proportional to the smallest projected side length of the body (in the acceleration direction) multiplied by the liquid density.
One can take a rigid square plate with side length $L$ submerged in a liquid with density $\rho$ as an example. According to potential flow theory, $m_{add}$ should be (Patton K.T., Tables of Hydrodynamic Mass Factors for Translational Motion, ASME, 1965):
$\displaystyle{ m_{add} = 0.3758 \rho L }$
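As a quick numeric illustration of the directional formulas above, the sketch below evaluates the added-mass vector and the resulting component-wise acceleration for a single node on a submerged plate, using the potential-flow rule of thumb for $m_{add}$. The numbers and the Python helper are illustrative only; they are not part of the solver's input format.

```python
import numpy as np

rho, L = 1000.0, 0.5               # liquid density [kg/m^3], plate side [m]
m_add = 0.3758 * rho * L           # potential-flow added mass per unit area

m = 2.0                            # physical node mass [kg]
A = 0.01                           # surface area represented by the node [m^2]
n_hat = np.array([0.0, 0.0, 1.0])  # outward surface normal at the node

v_m = m_add * A * n_hat            # directional added-mass vector [kg]
F = np.array([0.0, 0.0, 10.0])     # nodal force [N]

a = F / (m + v_m)                  # component-wise acceleration [m/s^2]
print(v_m, a)
```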
Node mass
The following command adds a point mass of 0.13 kg to a node with ID=1001.
"point mass"
1, N, 1001, 0.13
Time dependent mass
A mass of 2.5 kg is distributed over a surface and then removed after 0.01 s. The referenced function 345 returns H(0.01 - t) * 2.5, where H is presumably a Heaviside step, so the added mass is 2.5 kg while t < 0.01 s and zero afterwards.
"distributed mass"
1, G, 55, fcn(345), 1
0.023, 0.45, 0.01
H(0.01-t) * 2.5
Livestock Entomology ID/Knowledge Quiz 7-10
Which is the most important tapeworm species in Sheep and Goats?
Moniezia expansa
_______________ causes the Sheep and Goat disease Scrapie?
A prion
Goat lice are not host specific and will attack cattle, horses, dogs, and cats?
False
The ____________ deposits living larvae in or around the nostrils of the goat in spring and summer months.
Female nose bot fly, Oestrus ovis
Which arthropod pest causes feeding punctures known as "cockle"?
Sheep Ked
Which pest transmits Swinepox?
Haematopinus suis
Swine Influenza has high morbidity-low mortality?
True
How does a human get infected with trichinosis?
Consuming undercooked pork
Which mite is responsible for swine mange?
Sarcoptes scabei suis
The lifecycle of Hog Lice can be completed in how many days?
The two most effective methods for controlling ticks on cattle are the combination of pasture rotation and ______________.
Insecticide application
Which tick is responsible for "gotch ear" in cattle?
Gulf Coast Tick
Ten or more female Lone Star Ticks can impact weight gain and performance of cattle?
Which one of the following is not a concern for Dairy Cattle?
Chigger
For house flies, what is the action guideline with a spot card?
100 spots per week
What is not a problem with the use of insecticides for CAFO's?
Smell of insecticides affects cattle behavior
__________________ is the passage of a parasite directly to subsequent life stages or generations within vector populations.
Vertical transmission
Which type of transmission would include an arthropod vector?
Indirect transmission
Which stage of the cattle movement system has the least opportunity for direct transmission, but the highest probability of arthropod disease transmission?
A ____________ supports parasite development, remains infected for long periods, and serves as a source of vector infection, but usually does not develop acute disease.
Reservoir host
Ticks can be biological vectors for two parasites in horses that cause equine piroplasmosis, which are Theileria equi and ___________________?
Babesia caballi
Of these fly pests, which one does not typically feed as an adult on horses?
House fly
West Nile Encephalitis has been reported previously in Oklahoma?
True
What causes Sweet Itch, which is also known as summer eczema?
Culicoides
Which of the following is the best way to control stable flies on horse operations?
Removal of larval habitat, such as hay
Which is not a symptom of Eastern Equine Encephalitis in horses?
Sluggish movement
Healthy animals cannot tolerate fly infestations any better than sick or convalescent animals?
False
Which of the stages below for Bot flies is incorrect?
Larvae then migrate to the lungs and remain there for up to 10 months
Which of viruses below are not a horse Alphavirus?
Brazilian Equine Encephalitis
For African Horse Sickness, what is the most fatal form?
Pulmonary (peracute)
Vector-borne, Direct, and ____________ are the types of transmission discussed related to poultry pathogens.
What is the most important external pest of poultry?
Northern Fowl Mite
The Common Red Chicken Mite feeds in daylight hours.
False
What is the problem with managing mites in poultry facilities?
All of the above:
No product labeled to kill mites
Re-treatment in 7 days needed
Pesticide levels in eggs
Blood loss
What biological pathogen has been shown to kill 90% of Darkling Beetle larvae?
Beauveria bassiana
Darkling Beetles are more of a problem in commercial production than other poultry systems?
Match each species to its common name.
Ornithonyssus sylviarum: Northern Fowl Mite
Dermanyssus gallinae: Red Chicken Mite
Knemidocoptes mutans: Scaly Leg Mite
Argas miniatus: Fowl Tick
Echidnophaga gallinacea: Sticktight Flea
Cimex lectularius: Bedbug
Menacanthus stramineus: Chicken Body Louse
Alphitobius diaperinus: Darkling Beetle
Musca domestica: House fly
This pest is commonly mistaken for a tick but is actually a fly.
Sheep Ked
There are two types of lice that affect goats: biting and sucking. Which type are these?
This pest can be prevented by clipping the wool from the crutch area.
Wool maggots
This pest commonly infests hogs and is large enough to be seen with the naked eye.
Hog lice
Which of the following is the correct identification of this biting fly that commonly causes Sweet Itch or Summer Eczema in horses?
Culicoides (Biting Midges)
Which of the following is the correct name for the pest pictured, which causes cantharidin poisoning in horses?
Blister Beetles
Which mite is considered the #1 external parasite of poultry?
Northern Fowl Mite
Shown below is structural damage in a poultry facility. Which pest causes this?
Darkling Beetles
Who am I? I infest broiler houses and like to hang out around the nesting boxes?
Which is the most commonly found external parasite of backyard poultry?
Chicken Body Louse
arXiv:2111.05841 (cs)
[Submitted on 10 Nov 2021 (v1), last revised 3 Nov 2022 (this version, v2)]
Title: Physics-enhanced deep surrogates for PDEs
Authors: Raphaël Pestourie, Youssef Mroueh, Chris Rackauckas, Payel Das, Steven G. Johnson
Abstract: We present a ''physics-enhanced deep-surrogate'' (''PEDS'') approach towards developing fast surrogate models for complex physical systems described by partial differential equations (PDEs) and similar models. Specifically, a unique combination of a low-fidelity, explainable physics simulator and a neural network generator is proposed, which is trained end-to-end to globally match the output of an expensive high-fidelity numerical solver. We consider low-fidelity models derived from coarser discretizations and/or by simplifying the physical equations, which are several orders of magnitude faster than a high-fidelity ''brute-force'' PDE solver. The neural network generates an approximate input, which is adaptively mixed with a downsampled guess and fed into the low-fidelity simulator. In this way, by incorporating the limited physical knowledge from the differentiable low-fidelity model ''layer'', we ensure that the conservation laws and symmetries governing the system are respected by the design of our hybrid system. Experiments on three test problems -- diffusion, reaction-diffusion, and electromagnetic scattering models -- show that a PEDS surrogate can be up to 3$\times$ more accurate than a ''black-box'' neural network with limited data ($\approx 10^3$ training points), and reduces the data needed by at least a factor of 100 for a target error of $5\%$, comparable to fabrication uncertainty. PEDS even appears to learn with a steeper asymptotic power law than black-box surrogates. In summary, PEDS provides a general, data-driven strategy to bridge the gap between a vast array of simplified physical models and their corresponding brute-force numerical solvers, offering accuracy, speed, data efficiency, as well as physical insights into the process.
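The architecture sketched in the abstract (generator network, input mixing, differentiable low-fidelity solver) can be illustrated on a toy 1D diffusion problem. The NumPy sketch below shows only the forward pass, with invented names and dimensions; it is not the authors' code, and in practice the pipeline would live in an autodiff framework so the generator and mixing weight can be trained end-to-end against a high-fidelity solver.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FINE, N_COARSE = 256, 16          # fine-resolution input, coarse solver grid

def downsample(x, n):
    """Block-average a fine 1D field onto n coarse cells."""
    return x.reshape(n, -1).mean(axis=1)

def coarse_solver(kappa, f=1.0):
    """Low-fidelity model: finite-difference solve of -(kappa u')' = f on
    [0, 1] with u(0) = u(1) = 0; kappa holds the coarse face conductivities."""
    n = kappa.size
    h = 1.0 / n
    m = n - 1                                   # interior grid nodes
    A = np.zeros((m, m))
    for i in range(m):
        A[i, i] = kappa[i] + kappa[i + 1]
        if i > 0:
            A[i, i - 1] = -kappa[i]
        if i < m - 1:
            A[i, i + 1] = -kappa[i + 1]
    return np.linalg.solve(A / h**2, np.full(m, f))

W1 = 0.1 * rng.standard_normal((64, N_FINE))    # untrained toy MLP weights
b1 = np.zeros(64)
W2 = 0.1 * rng.standard_normal((N_COARSE, 64))
b2 = np.zeros(N_COARSE)

def generator(geom):
    """Neural generator: proposes a positive coarse conductivity field
    from the fine-resolution geometry (softplus output)."""
    hidden = np.tanh(W1 @ geom + b1)
    return np.logaddexp(0.0, W2 @ hidden + b2)

def peds_forward(geom, w=0.5):
    """PEDS forward pass: blend the generated coarse input with a plain
    downsampling of the geometry, then run the low-fidelity solver."""
    kappa = w * generator(geom) + (1.0 - w) * downsample(geom, N_COARSE)
    return coarse_solver(kappa)

geom = 1.0 + rng.random(N_FINE)     # toy fine-resolution conductivity field
print(peds_forward(geom).shape)     # solution at the interior coarse nodes
```

Blending the generated input with a plain downsampled geometry gives the generator a physically sensible baseline to correct, which is consistent with the abstract's description of mixing with a "downsampled guess".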
Subjects: Machine Learning (cs.LG); Applied Physics (physics.app-ph)
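To make the forward pass described in the abstract concrete, here is a schematic sketch in plain NumPy. Every function, name, and shape below is a hypothetical stand-in (the paper's actual generator, solvers, and mixing scheme are not reproduced here); it only illustrates the generate-mix-solve structure.

```python
import numpy as np

def neural_generator(x, weights):
    """Hypothetical stand-in for the NN mapping parameters to a coarse field."""
    return np.tanh(weights @ x)

def downsample(x, factor=4):
    """Downsampled guess obtained by block-averaging the raw input."""
    return x.reshape(-1, factor).mean(axis=1)

def low_fidelity_solver(coarse_field):
    """Placeholder for a cheap, differentiable coarse-grid PDE solve."""
    return np.cumsum(coarse_field)  # not a real solver, just a stand-in

def peds_forward(x, weights, w_mix=0.5):
    # Adaptive mixing of the generated field with the downsampled guess,
    # then a pass through the low-fidelity physics 'layer'.
    mixed = w_mix * neural_generator(x, weights) + (1 - w_mix) * downsample(x)
    return low_fidelity_solver(mixed)

x = np.linspace(0.0, 1.0, 16)     # fine-grid input parameters
weights = 0.1 * np.ones((4, 16))  # toy generator weights
print(peds_forward(x, weights))   # coarse surrogate output
```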
Karyotype diversity and 2C DNA content in species of the Caesalpinia group
Polliana Silva Rodrigues1,
Margarete Magalhães Souza1,
Cláusio Antônio Ferreira Melo1,
Telma Nair Santana Pereira2 &
Ronan Xavier Corrêa1
The Leguminosae family is the third-largest family of angiosperms, and Caesalpinioideae is its second-largest subfamily. The Caesalpinia group within this subfamily contains a great number of species (approximately 205); this, together with the species' phenotypic plasticity and the similarities in their morphological descriptors, makes it a complex group for taxonomic and phylogenetic studies. The objective of the present work was to evaluate the karyotypic diversity and the 2C DNA content variation in 10 species of the Caesalpinia group, representing six genera: Paubrasilia, Caesalpinia, Cenostigma, Poincianella, Erythrostemon and Libidibia. The GC-rich heterochromatin and 45S rDNA sites (which are used as chromosome markers) were located to evaluate the karyotype diversity in the clade. The variation in the 2C DNA content was determined through flow cytometry.
The fluorochrome banding indicated that the chromomycin A3+/4′,6-diamidino-2-phenylindole− blocks were exclusively in the terminal regions of the chromosomes, coinciding with 45S rDNA sites in all analyzed species. Physical mapping of the species (through fluorescence in situ hybridization) revealed variation in the size of the hybridization signals and in the number and distribution of the 45S rDNA sites. All hybridization sites were in the terminal regions of the chromosomes. In addition, all species had a hybridization site in the fourth chromosome pair. The 2C DNA content ranged from 1.54 pg in Erythrostemon calycina to 2.82 pg in the Paubrasilia echinata large-leaf variant. The Pa. echinata small-leaf variant was separated from the other leaf variants through Scott-Knott clustering.
The chromosome diversity and the variation in the 2C DNA content reinforce the view that the current taxonomy and clustering of the analyzed taxa require more genera than were previously proposed. This indicates that taxonomic, phylogenetic and cytoevolutionary inferences for the complex Caesalpinia group must be made through integrative evaluation.
Leguminosae is the third-largest family among angiosperms [1], and Caesalpinioideae, its second-largest subfamily, is represented by about 170 genera, many of them with complex and confused taxonomies. This subfamily's phenotypic plasticity is a challenge for taxonomies that are based on morphology [2]. The group commonly occurs in Brazil, which is home to about 790 described species [3, 4]. The Caesalpinia group within the Caesalpinioideae subfamily is a pantropical clade of about 205 species [5], including important Brazilian species that are threatened with extinction [6].
Taxonomic and phylogenetic changes have been made for some species and genera of Leguminosae, including the clustering of taxa in a new generic system for the Caesalpinia group [5]. The major problem in the taxonomy and phylogenetic classification of the Caesalpinia clade relates to morphological similarities, as there is little variation for some descriptors [7]. Solving this problem requires a broad analytical approach to taxon characterization [8, 9], which has been helpful for the systematic distribution and taxonomy of the Caesalpinia group [10, 11].
The banding obtained from the application of chromomycin A3 (CMA3) and 4′,6-diamidino-2-phenylindole (DAPI) fluorochromes, and from the localization of 45S rDNA sites using the fluorescence in situ hybridization (FISH) technique, has been used to identify specific sites; the positions and sizes of such tags may be useful as cytological markers. These data allow us to define the location and quantity of the chromosome markers that are commonly observed in a group of species, as well as the specific chromosome pattern of the markers for each species [12].
Flow cytometry has been used in biosystematics analyses, mainly to provide results regarding nuclear DNA content and, consequently, the level of ploidy. This allows for better species detection and delimitation, which is helpful in the study of a particular genus's phylogenetic relationships and evolutionary characteristics [13].
Previous studies in which fluorochrome staining was applied to the Caesalpinioideae subfamily revealed inter- and intraspecific differences. The heterochromatic blocks observed (CMA3+/DAPI−) were distributed in regions proximal to the nucleolar organizer regions, although CMA3−/DAPI+ blocks have also been reported [14]. The DNA content indicated the existence of intra- and interspecific variability in some genera within Fabaceae [15,16,17]. However, these analyses included only one DNA-content analysis for a species of the genus Caesalpinia (Caesalpinia crista) [17].
This study aimed to evaluate the karyotype diversity in 10 species (representing six genera) of the pantropical Caesalpinia clade, using the location of GC-rich heterochromatin and the number and position of 45S rDNA sites. In addition, 2C DNA content was quantified using flow cytometry.
Botanical material and pretreatment
Seeds of 10 species of the Caesalpinia group were collected from several locations in the state of Bahia in Brazil (Table 1). The seeds were randomly collected, with the aim of obtaining as many species from the state as possible. After field collection, the seeds were treated with Captan (Fersol®) fungicide and germinated on moistened filter paper in a humid chamber at room temperature. Root tips of approximately 3 mm in length were collected shortly after germination and pretreated with an anti-mitotic solution of 0.002 M 8-hydroxyquinoline for 6 h; the root tips were then washed twice in distilled water, dried on filter paper, fixed in Carnoy I (3:1 glacial acetic acid to absolute ethanol, v/v) [18] for 2 h at room temperature, and maintained at − 20 °C until the time of use. After the radicles were collected, the seedlings were planted in 2 kg bags with organic soil and monitored in a greenhouse; the resulting mother plants were used for the cytogenetic characterization.
Table 1 Estimates of nuclear genome size (2C DNA content) for Caesalpinia group
Preparation of slides and banding with CMA3 and DAPI fluorochromes
For the localization of GC- and AT-specific base regions, the fluorochromes CMA3 and DAPI were used in a double-staining process, with distamycin A solution added to the cytological preparation. This protocol followed the one proposed by Guerra and Souza [19], with some modifications. The slides were prepared through enzymatic digestion with 2% cellulase and 20% pectinase for 1 h; this was followed by maceration in a drop of 45% acetic acid and then by freezing in liquid nitrogen to remove the cover slip. The slides containing the cytological preparations were aged for 3 days at room temperature, after which 0.25 mg/mL of CMA3 was applied for 1 h; this was followed by washing with distilled water and air-drying. Next, 0.1 mg/mL distamycin A was added for 30 min, followed by another round of washing with distilled water and air-drying. Finally, DAPI was added for 30 min, followed by a last round of washing in distilled water and air-drying. The slides were assembled with 20 × 20 mm cover slips in 1:1 glycerol/McIlvaine medium (v/v) plus 2.5 mM MgCl2. After the double staining, the slides were aged for another three days before analysis with epifluorescence microscopy.
Fluorescent in situ hybridization
The application of FISH was performed following the protocol developed by Souza et al. [20], with some modifications, such as eliminating the pretreatment of the slides and adding digestion with pepsin (to reduce interference from the cytoplasm and cell walls). The cytological preparations were digested with RNAse (100 μg/mL) and washed twice in 2xSSC (salt, sodium citrate) for 5 min. Next, 50 μL HCl (10 mM) was added and incubated for 5 min at room temperature. After removal of the cover slip, 50 μL of pepsin solution was added (0.75 μL of pepsin and 49.25 μL HCl, 10 mM), and the slide was kept in a humid chamber at 37 °C for 20 min. Next, the following steps were carried out: two washes with 2xSSC (5 min each); incubation in 4% paraformaldehyde for 10 min; two washes with 2xSSC (5 min each); and dehydration in an alcoholic series (70% ethanol and 95% ethanol; 5 min each). The slide was then air-dried for at least 30 min. The hybridization mixture was composed of 100% formamide (7.5 μL), 50% dextran (3.0 μL), 20xSSC (1.5 μL), 10% sodium dodecyl sulfate (0.2 μL) and the 45S probe (2.8 μL). This mix was heated in a thermocycler at 75 °C for 10 min, transferred to ice for at least 2 min and then placed on a slide, which was then denatured in a thermocycler at 75 °C for 10 min and placed in a humid chamber at 37 °C overnight. For the post-hybridization baths, the slide was washed with 2xSSC at room temperature, followed by two washes in 2xSSC at 42 °C (5 min each), two washes in 0.1xSSC at 42 °C (5 min each), two washes in 2xSSC at 42 °C (5 min each) and one wash in 4xSSC/0.2% Tween 20 at room temperature (5 min). For detection, 50 μL of 5% BSA was applied to the slide for 10 min at room temperature; an antibody solution containing 0.7 μL of avidin-fluorescein isothiocyanate and 19.3 μL of 5% BSA was then added to the slide, which was kept in a dark, humid chamber for 1 h at 37 °C. Three washes were performed in 4xSSC/0.2% Tween 20 at room temperature, while still in the dark. Excess 4xSSC/0.2% Tween 20 was removed by rinsing the slide in 2xSSC, and slide assembly was completed with 15 μL of DAPI-conjugated Vectashield® (Vector® Laboratories). The slides were refrigerated in a dark container for at least 24 h. Slide analysis was performed using an Olympus® BX41 fluorescence microscope; the images were captured with a DP25 digital camera and DP2-BSW software from Olympus®. Image overlay and figure assembly were completed using Adobe Photoshop® software.
Analysis of the 2C DNA
Five plants of each analyzed species were used in the analysis of the 2C DNA; five leaves per species (one per plant) were sampled for the analysis. The species Zea mays cv. Kukurice (with 2C = 5.43 pg of DNA) and Glycine max L. (with 2C = 2.50 pg of DNA) [21] were used as internal standards to estimate the species' genome sizes. Zea mays was used as the internal standard for the DNA content of all species except Poincianella pluviosa, for which Glycine max was used as the standard. Suspensions of intact nuclei were prepared using the CyStain PI Absolute P kit (Partec®). About 17 mg of leaf tissue from the target species and 20 mg of leaf tissue from the standard were minced simultaneously on a slide in a Petri dish with 1 mL of extraction buffer. The suspension was filtered through a 50 μm nylon mesh screen. Then, 2 mL of solution containing RNAse and propidium iodide was added, and the material was incubated in a light-protected vessel for at least 30 min at room temperature. The evaluation of the 2C nuclear DNA was conducted using a Partec® PAII flow cytometer. The gain parameter was adjusted so that the G1 peak for the nuclei of the target species was positioned over channel 50. At least 10,000 nuclei were analyzed for each sample. The fluorescence intensity of the nuclei, after staining with propidium iodide, was analyzed at rates of 20-50 nuclei/s. The positions of the peaks, their areas and their coefficients of variation were obtained from the cytometer. The size of the nuclear genome was calculated according to Dolezel [22]:
$$ 2\mathrm{C}\ \mathrm{DNA}=\frac{\mathrm{Mean}\ G_{0}/G_{1}\ \mathrm{peak\ for}\ Caesalpinia\ \mathrm{sample}}{\mathrm{Mean}\ G_{0}/G_{1}\ \mathrm{peak\ for\ standard}}\times 2\mathrm{C}\ \mathrm{DNA\ of\ standard}\ \left(\mathrm{pg}\right) $$
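As a worked illustration of this formula (with hypothetical peak channels; only the 2C value of the Zea mays standard, 5.43 pg, comes from the text):

```python
# Hypothetical G0/G1 peak positions from a flow-cytometry histogram.
sample_peak = 50.0      # target species peak, positioned over channel 50
standard_peak = 96.3    # assumed channel of the Zea mays standard peak
standard_2C = 5.43      # 2C DNA of the Zea mays internal standard, pg

sample_2C = (sample_peak / standard_peak) * standard_2C
print(f"2C DNA = {sample_2C:.2f} pg")  # -> 2.82 pg with these assumed peaks
```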
Analysis of variance (ANOVA) was used to evaluate significant differences in the flow cytometry data, using a completely randomized design with five repetitions for each species. Additionally, the 2C DNA means were clustered using the Scott-Knott test. The ANOVA and the mean clustering were done using Sisvar software [23]. The editing of the Partec® flow cytometer histograms was carried out using Corel Draw® X7 software.
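The authors ran the ANOVA in Sisvar; as a rough analogue, a one-way ANOVA on hypothetical 2C readings (pg, five replicates per species) can be sketched with SciPy. Scott-Knott clustering has no standard SciPy routine, so only the ANOVA step is shown:

```python
from scipy import stats

# Hypothetical 2C DNA readings (pg), five replicates for three species.
species_a = [2.82, 2.80, 2.83, 2.81, 2.84]
species_b = [1.54, 1.55, 1.53, 1.56, 1.54]
species_c = [1.87, 1.88, 1.86, 1.89, 1.87]

f_stat, p_value = stats.f_oneway(species_a, species_b, species_c)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```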
Localization of GC-rich heterochromatin
The location of GC-rich heterochromatin using base-specific fluorochromes revealed CMA3+/DAPI− terminal blocks in all analyzed species. However, CMA3+ pericentromeric blocks were also observed in metaphase chromosomes of Libidibia ferrea and Po. microphylla (Figs. 1 and 2). In general, the terminal heterochromatic blocks were of distinct sizes. CMA3+/DAPI− terminal blocks were observed in two chromosome pairs of Cenostigma macrophyllum, Po. pluviosa, Caesalpinia pulcherrima and Pa. echinata. Three chromosome pairs with CMA3+/DAPI− terminal blocks were observed in Po. bracteosa, Po. laxiflora, Po. microphylla and L. ferrea. Erythrostemon calycina had seven chromosomes with CMA3+/DAPI− terminal blocks. The highest number of CMA3+/DAPI− terminal blocks was observed in Po. pyramidalis, which had four chromosome pairs with GC-rich heterochromatic blocks.
Application of fluorochromes in the Caesalpinia group. The fluorochromes DAPI (a, d, g, j) and CMA3 (b, e, h, k), as well as FISH (c, f, i, l) with a probe for 45S rDNA, on metaphase chromosomes. a - c Cenostigma macrophyllum, (d - f) Erythrostemon calycina, (g - i) Poincianella pluviosa and (j - l) Libidibia ferrea. White arrows indicate CMA3+/DAPI− blocks, orange arrows indicate CMA3+ blocks, and red arrows indicate 45S rDNA sites; bar = 10 μm
Application of fluorochromes in the Caesalpinia group. The fluorochromes DAPI (a, d, g, j, m, p) and CMA3 (b, e, h, k, n, q), as well as FISH (c, f, i, l, o, r) with a probe for 45S rDNA, on metaphase chromosomes. a - c Poincianella bracteosa, (d - f) Po. laxiflora, (g - i) Po. microphylla, (j - l) Po. pyramidalis, (m - o) Caesalpinia pulcherrima and (p - r) Paubrasilia echinata (SV). White arrows indicate CMA3+/DAPI− blocks, orange arrows indicate CMA3+ blocks, and red arrows indicate 45S rDNA sites
Location of 45S rDNA sites
The application of FISH revealed terminal 45S rDNA hybridization sites on three, four or five chromosome pairs, depending on the species (Figs. 1 and 2). The karyotypes of C. macrophyllum, Po. pluviosa and L. ferrea each had three chromosome pairs with 45S rDNA hybridization sites (Fig. 1). The species Po. bracteosa, Po. laxiflora, Po. microphylla, Ca. pulcherrima and Po. pyramidalis each had four chromosome pairs with 45S rDNA sites (Fig. 2). Five chromosome pairs with hybridization sites for 45S rDNA were observed in E. calycina (Fig. 1f) and Pa. echinata (Fig. 2r).
All species demonstrated 45S rDNA hybridization sites in the fourth chromosome pair, and only Ca. pulcherrima did not show a marking on the seventh chromosome pair. The 45S rDNA hybridization sites in the eighth and tenth chromosome pairs were limited to Po. microphylla and Po. pyramidalis, respectively. The most frequent markers were located in the second chromosome pair (E. calycina, L. ferrea and Po. pyramidalis), in the fifth chromosome pair (E. calycina, Po. bracteosa and Po. microphylla) and in the eleventh chromosome pair (E. calycina, Pa. echinata and Ca. pulcherrima).
The histograms in Fig. 3 show the number of nuclei in the examined samples as a function of fluorescence intensity. The Pa. echinata large-leaf variant (LV) was the taxon with the highest 2C DNA value (2.82 pg). The Pa. echinata LV was then used as the reference for high DNA content; compared to this reference value, the DNA content of the other species was lower by 46.3% for L. ferrea, 45.4% for E. calycina, 42.2% for Ca. pulcherrima, 35.1% for C. macrophyllum, 33.7% for Po. microphylla, 33.3% for Po. pluviosa, 32.6% for Po. laxiflora, 31.9% for Po. bracteosa and Po. pyramidalis, 2.1% for the Pa. echinata small-leaf variant (SV), and 0.4% for the Pa. echinata medium-leaf variant (MV).
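The percentage reductions above can be turned back into absolute 2C values against the 2.82 pg reference; a short check (values rounded as in the text):

```python
# Back-calculating 2C values (pg) from the reductions relative to
# the Pa. echinata LV reference of 2.82 pg quoted above.
reference_2C = 2.82
reductions = {
    "L. ferrea": 0.463, "E. calycina": 0.454, "Ca. pulcherrima": 0.422,
    "Po. pyramidalis": 0.319, "Pa. echinata SV": 0.021,
}
for species, r in reductions.items():
    print(f"{species}: {reference_2C * (1 - r):.2f} pg")
# e.g. E. calycina -> 1.54 pg and Ca. pulcherrima -> 1.63 pg,
# matching the values reported elsewhere in the text.
```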
Histograms with 2C DNA content for species of the Caesalpinia group. Internal standards used: Glycine max (L.) Merr. and Zea mays L.
The ANOVA for the 2C DNA of the species revealed a highly significant difference, with a low coefficient of variation: 1.45% (Table 2). The Scott-Knott test clustered the taxa into six groups based on the average 2C values, with a minimum significant difference of 0.0647 (Table 1). Species from the genus Poincianella were arranged into group C. Two species remained in isolated groups: E. calycina and C. macrophyllum. The species L. ferrea and Ca. pulcherrima were placed in group E.
Table 2 Summary of the ANOVA for the characteristic 2C DNA content among the analyzed species
An ANOVA was carried out to estimate the variation in DNA content for only the three morphotypes of Pa. echinata, demonstrating the existence of a significant difference at p < 0.05 (Table 3). The Scott-Knott test with only the morphotypes of Pa. echinata showed a separation between the morphotypes, as the SV type was isolated in group B with a statistically significant difference in relation to the MV and LV types in group A.
Table 3 Summary of the ANOVA for 2C DNA content among the three morphological leaf variants of Paubrasilia echinata
Fluorochrome staining with CMA3 and DAPI has previously been used to locate AT- and GC-rich heterochromatin in groups related to the Caesalpinia group. Studies in the Senna and Chamaecrista genera revealed the presence of CMA3+/DAPI− blocks and small CMA3−/DAPI+ terminal or subterminal blocks [14]. In Copaifera, only CMA3+/DAPI− blocks were observed [24], similar to the terminal pattern observed in the present study, in which the analyzed species showed heterochromatin blocks that were rich in GC and poor in AT. These CMA3+/DAPI− bands were observed only in the terminal regions of chromosomes and occurred on two to four chromosome pairs in the analyzed species. Additionally, CMA3+ blocks were observed in the metaphase chromosomes of L. ferrea and Po. microphylla, indicating either the existence of GC-rich pericentromeric sequences in related genera (suggesting a trait shared with a common ancestor) or changes related to the composition of centromeric satellite DNA, which is qualitatively rich in GC in these species.
CMA3−/DAPI+ fluorochrome blocks have already been observed in Senna obtusifolia (L.) H.S. Irwin & Barneby and in one population of Chamaecrista nictitans Moench [14]. However, these bands were not observed in the present study. The absence of this type of heterochromatin may be a typical characteristic of species in the analyzed genera Poincianella, Libidibia, Erythrostemon, Paubrasilia, Caesalpinia and Cenostigma; even so, karyotype characterization of more populations could reveal further intraspecific variation in cytogenetic markers. In the Leguminosae family, various patterns of AT-rich and GC-rich heterochromatin have also been observed for the Mimosa [25] and Erythrina [26] genera. These variations reinforce the need to characterize heterochromatin in other species so as to understand the distribution pattern and evolution of this differentially stained class of DNA within the Caesalpinia group.
In the present study, CMA3+/DAPI− terminal blocks coincided with certain 45S rDNA hybridization sites, reinforcing the interpretation that these CMA3+ blocks correspond to 45S rDNA, which is rich in GC bases [27,28,29]. However, the application of base-specific fluorochromes did not reveal rDNA sites with few repeats, as such small heterochromatic blocks made detection and photographic documentation under the epifluorescence microscope unviable [30].
Molecular cytogenetics techniques have been widely used to localize specific DNA sequences in situ [31]. Genetically related species tend to have karyotypes with similar characteristics in terms of sequence localization, which is useful in studies of plant systematics, taxonomy and evolution; such data mainly contribute to groupings of species or cytotypes that share common characteristics, suggesting the ancestral or derived status of a given cytological marker that is shared among a group of plants [32,33,34,35,36,37,38].
The hybridization sites of 45S rDNA probes in species of the Caesalpinia group showed variations in both the number and the location of these sequences. Differences in the number of such sequences generally occur due to chromosomal rearrangements such as translocations, inversions, duplications and deletions [39], whereas variations in the signal intensity of hybridization are observed between sites with different numbers of rDNA repeats. Changes in these sites' distribution patterns can reflect levels of speciation, and may assist in determining how evolution has occurred within a group of taxonomically complex plants [31, 40]. Lower quantitative variation was observed in the Poincianella genus, suggesting greater stability in the number of 45S rDNA sites (a total of eight). Conservation in the location of the rDNA genes (as revealed using FISH) was observed for species from the genus Trifolium (Leguminosae: Papilionoideae), which may indicate that some Leguminosae have great stability in this region [41]. This stability, in both the number and the location of a chromosomal marker, is a useful characteristic for species identification and delimitation through karyotype analysis.
The species in this study showed significant variations in 2C DNA and, consequently, in genome size across the evaluated species and genera. In addition, the low coefficient of variation among the replicates indicates the precision of the flow cytometry measurements. The variation in the amount of DNA across species can be attributed to the loss or gain of DNA sequences, which usually consist of repetitive DNA; this may occur through the accumulation and/or loss of repeating monomers under micro- and macro-environmental influences during the species' evolution [42, 43]. This suggests that such losses or additions to the genome become stabilized during microevolution and selection [43].
In this work, only Ca. pulcherrima had a lower estimated amount of 2C DNA (1.63 pg) than its previous estimate (1.80 pg) [16]. This may be due to the variation in chromosome number between the two analyzed populations of Ca. pulcherrima, as the population evaluated in this study presented 2n = 24, whereas the population evaluated by Ohri et al. [16] presented 2n = 28. The estimated nuclear DNA content for diploid Ca. crista (2n = 24) indicated a notably high value of 0.707 pg per chromosome, corresponding to 17.67 pg for the 2C DNA. This value, which is much higher than the values obtained in our study, can be attributed to the different cytophotometric methods, as the DNA value of Allium cepa L. was used as the size standard [17]. The 2C value reported for Ca. crista was also considerably higher than those that Ohri et al. [16] found for species in the Caesalpinioideae. Analyzing the DNA content and chromosomal differences observed for taxa from the Caesalpinia genus, together with the results from the literature, requires an interdisciplinary approach to clarify the species' taxonomy, delimitation and clustering.
Many species in the Caesalpinia group exhibit a high degree of phenotypic plasticity, especially in foliage and leaflets. This has resulted in multiple names for some species, with each leaf-size variant treated as a distinct entity, thus creating taxonomic problems [1, 5, 7, 9]. This can be observed in Pa. echinata, which was previously placed in the Caesalpinia genus and which has three morphotypes that were previously characterized using chloroplast DNA sequences [44]. The three morphotypes (leaf-size variants) presented small variations in 2C DNA, with values of 2.76, 2.81 and 2.82 pg for the SV, MV and LV types, respectively. The Scott-Knott test separated these morphotypes into two groups, one with only Pa. echinata SV and one composed of Pa. echinata MV and LV; this shows that, although the variations among Pa. echinata leaf-size morphotypes are not large, they are sufficient to separate the SV morphotype from the other two variants, with this type's lower DNA content acting as a differentiating feature.
In legume species, a positive correlation has been observed between leaf size and nuclear DNA content [45]. This relationship was also observed in this study, wherein the variants with relatively large leaves (MV and LV) had more DNA than the SV variant. Diversification of genome size thus accompanies speciation and, along with phenotypic changes in quantitative descriptors, constitutes an adaptive response such as those observed in polyploid plants [46]. Plants' 2C DNA can therefore be used to estimate the taxonomic differentiation between species, as seen here for the variants of Pa. echinata.
The data obtained in our studies corroborate the new classification of the species that were initially placed in the Caesalpinia group [5, 9,10,11], showing that the species whose 2C DNA values clustered between 1.87 and 1.92 pg in this study should indeed be grouped in the Poincianella genus. Among the species analyzed in the present study, the only representative of the Erythrostemon genus was E. calycina, which had the lowest amount of 2C DNA (1.54 pg); together with Pa. echinata, it was also one of only two species to present five pairs of chromosomes bearing 45S rDNA sites, and the combination of these characteristics distinguishes this genus. In previous analyses, similarities in chromosome morphology were reported for six of the species studied herein, and karyotype formulas showed the predominance of metacentric chromosomes [47].
Only one species of the Cenostigma genus was evaluated, C. macrophyllum; this species was initially collected under the belief that it belonged to the Caesalpinia genus. The similarities between species of these two genera have also been noted previously, as the species Cenostigma sclerophyllum Tul. was later described as a synonym of Caesalpinia marginata Tul. [8, 48]. The analyses of C. macrophyllum showed that it was the only species to present just six CMA3+/DAPI− bands, indicating that its amount of GC-rich heterochromatin was lower than that of the other species evaluated herein; this could be a feature exclusive to the Cenostigma genus.
In this work, the distribution pattern of heterochromatin, the physical location of 45S rDNA regions and the amount of DNA were all useful in corroborating studies of systematics and evolution in Caesalpinia-group species. Although the species evaluated herein represent only a small fraction of the diversity already described as belonging to the Caesalpinia group, and although some species were relocated within new genera, it was possible to observe a distinctive pattern of cytogenetic characteristics in the genus currently specified as Poincianella. This shows that karyotypic analysis and the quantification of 2C DNA are valid methods to support taxonomic and biosystematics studies.
The quantitative variation in GC-rich heterochromatin among species in the Caesalpinia group reflects not only the variable number of satellite-related rDNA sites but also the existence of chromosomes with GC-rich pericentromeric repetitive DNA in L. ferrea and Po. microphylla. The intra- and interspecific variations in the size of the GC-rich chromosomal blocks related to rDNA and satellites were confirmed, relative to the location of these regions, using the FISH technique with a probe for 45S rDNA. This suggests that fluorochrome staining alone is not suitable for identifying the number of 45S rDNA loci in species of the Caesalpinia group, a limitation attributable to the small number of repeats of the 45S rDNA genes on some chromosomes, which makes variation difficult to observe with the CMA3 fluorochrome. The 2C DNA may not even be related to morphological leaf size in Paubrasilia echinata, so this relationship should be evaluated again in other populations to increase the number of analyzed plants. On the other hand, the 2C DNA revealed the full range of variation in this trait within the Caesalpinia group. Taken together, the data indicate that the current taxonomy adopted for the Caesalpinia group is supported by the large chromosomal and genome-size variations, which allow species to be clustered into specific genera. This information shows the group from a point of view that differs from the old taxonomy, which placed all species analyzed herein in just one genus.
2C DNA:
2C DNA content
2n:
Diploid number
ANOVA:
Analysis of variance
CMA3:
Chromomycin A3
DAPI:
4′-6-diamidino-2-phenylindole
FISH:
Fluorescence in situ hybridization
LV:
Large-leaf variant
MV:
Medium-leaf variant
rDNA:
Ribosomal DNA
SSC:
Salt sodium citrate
SV:
Small-leaf variant
Lewis GP, Schrire B, Mackinder B, Lock M. Legumes of the world. Kew: Roy Bot Gard; 2005. p. 577.
Herendeen PS. Structural evolution in the Caesalpinioideae (Leguminosae). In: Herendeen PS, Bruneau A, editors. Advances in Legume Systematics, part 9. Kew: Roy Bot Gard, 2000. p. 45–64.
Barroso GM, Peixoto AL, Costa GC, Ichasso CLF, Guimarães EF, Lima HC. Sistemática de angiospermas do Brasil, vol. 337. Viçosa: UFV; 1984.
Bortoluzzi RLC, Biondo E, Miotto STS, Schifino-Witmann MT. Abordagens taxonômicas e citogenéticas em Leguminosae-Caesalpinioideae na região sul do Brasil. Revista Brasileira de Biociências. 2007;5:339–41.
Gagnon E, Bruneau A, Hughes CE, Queiroz LP, Lewis GP. A new generic system for the pantropical Caesalpinia group (Leguminosae). PhytoKeys. 2016;71:1–160.
IUCN. IUCN red list of threatened species; 2013.
Polhill R, Vidal J. Caesalpinieae. In: Polhill R, Raven PH, editors. Advances in legume systematics, part 1. Richmond: Roy Bot Gard; 1981. p. 81–95.
Lewis GP. Caesalpinia, a revision of the Poincianella – Erythrostemon group. Kew: Roy Bot Gard; 1998.
Gagnon E, Lewis GP, Sotuyo JS, Hughes CE, Bruneau AA. Molecular phylogeny of Caesalpinia sensu lato: increased sampling reveals new insights and more genera than expected. S Afr J Bot. 2013;89:111–27.
Queiroz LP. Leguminosas da Caatinga. Feira de Santana: Universidade Estadual de Feira de Santana. Associação Plantas do Nordeste; 2009.
Queiroz LP. New combinations in Libidibia (DC.) Schltdl. And Poincianella Britton & Rose (Leguminosae, Caesalpinioideae). Neodiversity. 2010;5:11–2.
Melo CAF, Martins MIG, Oliveira MBM, Benko-Iseppon AMR, Carvalho R. Karyotype analysis for diploid and polyploid species of the Solanum L. Plant Syst Evol. 2011;293:227–35.
Kron P, Suda J, Husband BC. Applications of flow cytometry to evolutionary and population biology. Annu Rev Ecol Evol Syst. 2007;38:847–76.
Souza MGC, Benko-Iseppon AM. Cytogenetics and chromosome banding patterns in Caesalpinioideae and Papilionioideae species of Pará, Amazonas, Brazil. Bot J Linn Soc. 2004;144:181–91.
Héla EFO, Naghmouchi S, Walker DJ, Correal E, Boussaïd M, Khouja ML. Variability in the pod and seed parameters and nuclear DNA content of Tunisian populations of Ceratonia siliqua L. Caryologia. 2008;61(4):354–62.
Ohri D, Kumar A, Pal M. Correlations between 2C DNA values and habit in Cassia (Leguminosae: Caesalpinioideae). Plant Syst Evol. 1986;153:223–7.
Jena S, Sahoo P, Mohanty S, Das AB. Identification of RAPD markers, in situ DNA content and structural chromosomal diversity in some legumes of the mangrove flora of Orissa. Genetica. 2004;122:217–26.
Johansen DA. Plant Microtechnique. New York: McGraw-Hill Book Company; 1940.
Guerra M, Souza MJ. Como observar cromossomos: Um guia de práticas em citogenética vegetal, animal e humana. Ribeirão Preto: Editora Funpec; 2002.
Souza MM, Urdampilleta JD, Forni-Martins ER. Improvements in cytological preparations for fluorescent in situ hybridization in Passiflora. Genet Mol Res. 2010;9(4):2148–55.
Dolezel J, Greilhuber J, Lucretti S, Meister A, Lysak MA, Nardi L, Obermayer R. Plant genome size estimation by flow cytometry: inter-laboratory comparison. Ann Bot. 1998;82:17–26.
Dolezel J, Gohde W. Sex determination in dioecious plants Melandrium album and M. rubrum using high-resolution flow cytometry. Cytometry. 1995;19:103–6.
Ferreira DF. Programa Sisvar. Software 5.0: UFLA; 2003.
Gaeta ML, Yuyama PM, Sartori D, Fungaro MHP, Vanzela ALL. Occurrence and chromosome distribution of retroelements and NUPT sequences in Copaifera langsdorffii Desf. (Caesalpinioideae). Chromosom Res. 2010;18:515–24.
Sousa SM, Reis AC, Viccini LF. Polyploidy, B chromosomes, and heterochromatin characterization of Mimosa caesalpiniifolia Benth. (Fabaceae-Mimosoideae). Tree Genet Genomes. 2013;9:613–9.
Silva SC, Martins MIG, Santos RC, Peñaloza APS, Melo-Filho PA, Benko-Iseppon A, Valls JFM, Carvalho R. Karyological features and banding patterns in Arachis species belonging to the Heteranthae section. Plant Syst Evol. 2010;285:201–7.
Schweizer D. Fluorescent chromosome banding in plants: application, mechanisms, and implications for chromosome structure. In: Proceedings of the Fourth John Innes Symposium; 1979. p. 61–72.
Sumner AT. Chromosome banding. London: Unwin and Hyman; 1990.
Robledo G, Seijo G. Species relationships among the wild B genome of Arachis species (section Arachis) based on FISH mapping of rDNA loci and heterochromatin detection: a new proposal for genome arrangement. Theor Appl Genet. 2010;121(6):1033–46.
Melo CAF, Souza MM, Abreu PP, Viana AJC. Karyomorphology and GC-rich heterochromatin pattern in Passiflora (Passifloraceae) wild species from Decaloba and Passiflora subgenera. Flora. 2014;209:620–31.
Schwarzacher T, Leitch AR, Heslop-Harrison JS. DNA:DNA in situ hybridization and methods for light microscopy. In: Harris N, Oparka KJ, editors. Plant cell biology: a practical approach. Oxford: Oxford University Press; 1994. p. 127–55.
Maluszynska J, Heslop-Harrison JS. Physical mapping of rDNA loci in Brassica species. Genome. 1993;36:774–81.
Galasso I, Pignone D, Frediani M, Maggiani M. Chromatin characterization by banding techniques, in situ hybridization, and nuclear DNA content in Cicer L. (Leguminosae). Genome. 1996;39:258–65.
Thomas HM, Harper JA, Meredith MR, Morgan WG, Thomas ID, Timms E, King IP. Comparison of ribosomal DNA sites in Lolium species by fluorescent in situ hybridization. Chromosom Res. 1996;4:486–90.
Zhang D, Sang T. Physical mapping of ribosomal RNA genes in peonies (Paeonia, Paeoniaceae) by fluorescent in situ hybridization: implications for concerted evolution. Am J Bot. 1999;85:735–40.
Shan F, Yan G, Plummer JA. Karyotype evolution in the genus Boronia (Rutaceae). Bot J Linn Soc. 2003;142:309–20.
Sede SM, Fortunato RH, Poggio L. Chromosome evaluation of southern South American species of Camptosema and allied genera (Diocleinae-Phaseolae-Papilionoideae-Leguminosae). Bot J Linn Soc. 2006;52:235–43.
Robledo G, Seijo G. Characterization of the Arachis (Leguminosae) D genome using fluorescence in situ hybridization (FISH) chromosome markers and total genome DNA hybridization. Genet Mol Biol. 2008;31(3):717–24.
Moscone EA, Klein F, Lambrou M, Fuchs J, Schweizer D. Quantitative karyotyping and dual-color FISH mapping of 5S and 18S-25S rDNA probes in the cultivated Phaseolus species (Leguminosae). Genome. 1999;42(6):1224–33.
Heslop-Harrison JS. Comparative genome organization in plants: from sequence and markers to chromatin and chromosomes. Plant Cell. 2000;12:617–35.
Ansari HA, Ellison NW, Reader SM, Badaeva ED, Friebe B, Miller TE, Williams WM. Molecular cytogenetics of 5S and 18S-26S rDNA loci in white clover (Trifolium repens L.) and related species. Ann Bot. 1999;83:199–206.
Price HJ. Nuclear DNA content variation within angiosperm species. Evolution Trends. 1998;1(2):53–60.
Mohanty S, Das AB. Interspecific genetic variation in 15 species of Cassia L. evidenced by chromosome and 4C nuclear DNA analysis. J Biol Sci. 2006;6(4):664–70.
Juchum FS. Phylogenetic relationships among morphotypes of Caesalpinia echinata Lam. (Caesalpinioideae: Leguminosae) evidenced by trnL intron sequences. Naturwissenschaften. 2008;95:1085–91.
Chung J, Lee JH, Arumuganathan K, Graef GL, Specht JE. Relationships between nuclear DNA content and seed and leaf size in soybean. Theor Appl Genet. 1998;96:1064–8.
Sugiyama S-I. Polyploidy and cellular mechanisms changing leaf size: comparison of diploid and autotetraploid populations in two species of Lolium. Ann Bot. 2005;96:931–8.
Rodrigues PS, Souza MM, Correa RX. Karyomorphology and karyotype asymmetry in the South American Caesalpinia species (Leguminosae and Caesalpinioideae). Genet Mol Res. 2014;13:8278–93.
Warwick MC, Lewis GP. A revision of Cenostigma (Leguminosae - Caesalpinioideae - Caesalpinieae), a genus endemic to Brazil. Kew Bull. 2009;64:135–46.
The authors would like to thank José Lima Paixão, a technician who assisted in the collection and identification of the botanical material, as well as the collaborators at the herbarium of Universidade Estadual de Feira de Santana for confirming the identification of the Cenostigma species.
This research received financial support from Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) (Grant number 473393/2007-7) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for the collection of germplasm and for the acquisition of laboratory supplies for the analysis. The cytogenetics techniques were performed at the Plant Breeding Laboratory at Universidade Estadual de Santa Cruz, which provided the financial support for the equipment and physical structures used in this study.
All data sets that support this article's conclusions are included in the article.
Departamento de Ciências Biológicas, Centro de Biotecnologia e Genética, Universidade Estadual de Santa Cruz, Rodovia Jorge Amado, km 16, CEP, Ilhéus, BA, 45662-900, Brazil
Polliana Silva Rodrigues, Margarete Magalhães Souza, Cláusio Antônio Ferreira Melo & Ronan Xavier Corrêa
Centro de Ciências e Tecnologias Agropecuárias, Laboratório de Melhoramento Genético Vegetal, Universidade Estadual do Norte Fluminense, Campos dos Goytacazes, Brazil
Telma Nair Santana Pereira
Polliana Silva Rodrigues
Margarete Magalhães Souza
Cláusio Antônio Ferreira Melo
Ronan Xavier Corrêa
PSR performed the cytogenetic laboratory procedures and prepared the manuscript text. RXC helped write and review the text, contributed to the discussion of the relationships among the plants, and obtained financial support. CAFM helped with the cytogenetic procedures, analyzed the results and reviewed the text. TNSP helped with the flow cytometry procedure and the data interpretation. MMS participated in the molecular cytogenetic analysis and helped with the data interpretation and the protocol adjustment. All authors have read and approved the manuscript and consent to its publication.
Correspondence to Ronan Xavier Corrêa.
Permission for seed collection was not necessary because the collections were made in public areas. We also found some species near roads and in rural areas. Taxonomists at the herbarium museums of Universidade Estadual de Santa Cruz (Prof. Dr. André Márcio Araújo Amorim) and Universidade Estadual de Feira de Santana (Prof. Dr. Luciano Paganucci de Queiroz) certified all species in Bahia, Brazil.
Rodrigues, P.S., Souza, M.M., Melo, C.A.F. et al. Karyotype diversity and 2C DNA content in species of the Caesalpinia group. BMC Genet 19, 25 (2018). https://doi.org/10.1186/s12863-018-0610-2
Caesalpinioideae
CMA3 +/DAPI−
Pau-Brasil
The GLEAM 4-Jy (G4Jy) Sample: I. Definition and the catalogue
Sarah V. White, Thomas M. O Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, Bi-Qing For, B. M. Gaensler, Melanie Johnston-Hollitt, André Offringa, Lister Staveley-Smith
Journal: Publications of the Astronomical Society of Australia / Volume 37 / 2020
Published online by Cambridge University Press: 01 June 2020, e018
The Murchison Widefield Array (MWA) has observed the entire southern sky (Declination, $\delta< 30^{\circ}$ ) at low radio frequencies, over the range 72–231MHz. These observations constitute the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we use the extragalactic catalogue (EGC) (Galactic latitude, $|b| >10^{\circ}$ ) to define the GLEAM 4-Jy (G4Jy) Sample. This is a complete sample of the 'brightest' radio sources ( $S_{\textrm{151\,MHz}}>4\,\text{Jy}$ ), the majority of which are active galactic nuclei with powerful radio jets. Crucially, low-frequency observations allow the selection of such sources in an orientation-independent way (i.e. minimising the bias caused by Doppler boosting, inherent in high-frequency surveys). We then use higher-resolution radio images, and information at other wavelengths, to morphologically classify the brightest components in GLEAM. We also conduct cross-checks against the literature and perform internal matching, in order to improve sample completeness (which is estimated to be $>95.5$ %). This results in a catalogue of 1863 sources, making the G4Jy Sample over 10 times larger than that of the revised Third Cambridge Catalogue of Radio Sources (3CRR; $S_{\textrm{178\,MHz}}>10.9\,\text{Jy}$ ). Of these G4Jy sources, 78 are resolved by the MWA (Phase-I) synthesised beam ( $\sim2$ arcmin at 200MHz), and we label 67% of the sample as 'single', 26% as 'double', 4% as 'triple', and 3% as having 'complex' morphology at $\sim1\,\text{GHz}$ (45 arcsec resolution). We characterise the spectral behaviour of these objects in the radio and find that the median spectral index is $\alpha=-0.740 \pm 0.012$ between 151 and 843MHz, and $\alpha=-0.786 \pm 0.006$ between 151MHz and 1400MHz (assuming a power-law description, $S_{\nu} \propto \nu^{\alpha}$ ), compared to $\alpha=-0.829 \pm 0.006$ within the GLEAM band. Alongside this, our value-added catalogue provides mid-infrared source associations (subject to 6" resolution at 3.4 $\mu$ m) for the radio emission, as identified through visual inspection and thorough checks against the literature. As such, the G4Jy Sample can be used as a reliable training set for cross-identification via machine-learning algorithms. We also estimate the angular size of the sources, based on their associated components at $\sim1\,\text{GHz}$ , and perform a flux density comparison for 67 G4Jy sources that overlap with 3CRR. Analysis of multi-wavelength data, and spectral curvature between 72MHz and 20GHz, will be presented in subsequent papers, and details for accessing all G4Jy overlays are provided at https://github.com/svw26/G4Jy.
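As a quick check on the quoted power-law convention $S_{\nu} \propto \nu^{\alpha}$, the spectral index between two frequencies follows from the flux-density ratio; the flux values below are hypothetical, chosen only to reproduce the quoted median:

```python
import math

# alpha = log(S2/S1) / log(nu2/nu1) under S_nu ∝ nu^alpha.
S1, nu1 = 4.00, 151e6     # Jy at 151 MHz (the G4Jy selection threshold)
S2, nu2 = 0.77, 1400e6    # hypothetical Jy at 1400 MHz

alpha = math.log(S2 / S1) / math.log(nu2 / nu1)
print(f"alpha = {alpha:.3f}")   # -> approximately -0.740
```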
Interactive effects of dietary vitamin K3 and Bacillus subtilis PB6 on the growth performance and tibia quality of broiler chickens with sex separate rearing
S. Guo, J. Xv, Y. Li, Y. Bi, Y. Hou, B. Ding
Journal: animal / Volume 14 / Issue 8 / August 2020
Published online by Cambridge University Press: 14 February 2020, pp. 1610-1618
Both vitamin K and probiotics can promote the bone health of poultry and mammals. The present study was conducted to investigate the interactive effects between vitamin K3 (VK3) and Bacillus subtilis PB6 on the growth performance and tibia quality of broiler chickens with sex separate rearing. In a 3 × 2 × 2 factorial arrangement, 720 one-day-old broiler chicks (Arbor Acres) were assigned to 12 groups with three levels of dietary VK3 (0, 0.5 and 4.0 mg/kg), with or without probiotic supplementation (500 g/t) and with sex separation (male and female). Each group included 3 replicates with 20 birds per replicate. During day 1 to 21, 0.5 and 4.0 mg/kg of VK3 increased average daily gain (ADG) of all birds and average daily feed intake of male birds (P < 0.05). During day 22 to 42, probiotic supplementation increased the ADG of birds (P < 0.05). Probiotic addition increased the weight, length, diameter and strength of tibia in all birds, and 0.5 and 4.0 mg/kg of VK3 increased the tibial breaking strength of male birds at day 21 (P < 0.05). Vitamin K3 and probiotic synergistically increased tibial breaking strength at day 42 and ash content at day 21 (P < 0.05). Three factors exhibited interactive effects on the chemical composition of tibia at day 42, and female birds fed 4 mg/kg of VK3 and probiotic had the highest contents of ash, calcium and phosphorus (P < 0.05). Bacillus subtilis PB6 increased the serum phosphorus level of male birds at day 21 and serum calcium level of female ones at day 42 (P < 0.05). At day 21, in the probiotic-supplemented birds, serum osteocalcin (OCN) and bone-specific alkaline phosphatase (BALP) were increased by 0 and 4.0 mg/kg of VK3, respectively (P < 0.05). Probiotic increased serum OCN and cooperated with VK3 to increase the serum BALP at day 42 (P < 0.05). Vitamin K3 and probiotic synergistically down-regulated the mRNA expression of Runt-related transcription factor 2 and OCN at day 21 (P < 0.05). Vitamin K3 down-regulated the alkaline phosphatase (liver/bone/kidney) expression in male birds at day 21 and 42, but probiotic up-regulated the expression of these genes at day 42 (P < 0.05). In conclusion, VK3 and B. subtilis PB6 promoted the growth performance of broilers during starter and grower phases, respectively. They synergistically improved the physical and chemical traits of tibias, especially in grower phase, by modulating calcium and phosphorus metabolism as well as osteogenic gene expression.
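The factorial bookkeeping in the design above can be verified in a couple of lines:

```python
# 3 VK3 levels x 2 probiotic treatments x 2 sexes = 12 groups;
# each group has 3 replicates of 20 birds, giving 720 chicks in total.
groups = 3 * 2 * 2
total_birds = groups * 3 * 20
print(groups, total_birds)   # 12 720
```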
UHP metamorphism recorded by coesite-bearing metapelite in the East Kunlun Orogen (NW China)
Hengzhe Bi, Shuguang Song, Liming Yang, Mark B. Allen, Shengsheng Qi, Li Su
Journal: Geological Magazine / Volume 157 / Issue 2 / February 2020
The East Kunlun Orogen (EKO) is the NW part of the Central China Orogenic Belt, which records the evolutionary history of the Proto- and Palaeo-Tethys Oceans from the Cambrian to the Triassic. An Early Palaeozoic eclogite belt has been recognized in recent years, which extends discontinuously for ∼500 km as three eclogite-bearing terranes. In this study, we report an integrated study of zircon grains from mica-schists accompanying the eclogites, in terms of mineral inclusions, U–Pb age systematics and P–T conditions. The presence of coesite is identified, as inclusions within the metamorphic domain of zircons, which provides unambiguous evidence for subducted terrigenous clastic rocks of the Proto-Tethys Ocean exhumed from coesite-forming depths. U–Pb dating of the metamorphic zircons yields a concordia age of 426.5 ± 0.88 Ma, which is likely to be the time of ultrahigh-pressure metamorphism in the Kehete terrane. P–T calculations suggest that metapelite may have experienced a clockwise P–T path with peak P/T conditions of 685 ± 41 °C and >28 kbar, and equilibrated at 482–566 °C and 5.6–8.9 kbar during subsequent exhumation. The high-pressure – ultrahigh-pressure (HP-UHP) metamorphic belt within the EKO may have formed by collision between the Qaidam Block and the South Kunlun Block, as a consequence of the closure of the Proto-Tethys Ocean.
Crises and opportunities at the energy-water interface
Eva Karatairi, Seth B. Darling
Journal: MRS Bulletin / Volume 43 / Issue 6 / June 2018
Calibration and Stokes Imaging with Full Embedded Element Primary Beam Model for the Murchison Widefield Array
Murchison Widefield Array
M. Sokolowski, T. Colegate, A. T. Sutinjo, D. Ung, R. Wayth, N. Hurley-Walker, E. Lenc, B. Pindor, J. Morgan, D. L. Kaplan, M. E. Bell, J. R. Callingham, K. S. Dwarakanath, Bi-Qing For, B. M. Gaensler, P. J. Hancock, L. Hindson, M. Johnston-Hollitt, A. D. Kapińska, B. McKinley, A. R. Offringa, P. Procopio, L. Staveley-Smith, C. Wu, Q. Zheng
Published online by Cambridge University Press: 27 November 2017, e062
The Murchison Widefield Array (MWA), located in Western Australia, is one of the low-frequency precursors of the international Square Kilometre Array (SKA) project. In addition to pursuing its own ambitious science programme, it is also a testbed for a wide range of future SKA activities, from hardware and software to data analysis. The key science programmes for the MWA and SKA require very high dynamic ranges, which challenges calibration and imaging systems. Correct calibration of the instrument and accurate measurements of source flux densities and polarisations require precise characterisation of the telescope's primary beam. Recent results from the MWA GaLactic and Extragalactic All-sky MWA (GLEAM) survey show that the previously implemented Average Embedded Element (AEE) model still leaves residual polarisation errors of up to 10–20% in Stokes Q. We present a new simulation-based Full Embedded Element (FEE) model which is the most rigorous realisation yet of the MWA's primary beam model. It enables efficient calculation of the MWA beam response in arbitrary directions without the necessity of spatial interpolation. In the new model, every dipole in the MWA tile (4 × 4 bow-tie dipoles) is simulated separately, taking into account all mutual coupling, ground screen, and soil effects, and therefore accounts for the different properties of the individual dipoles within a tile. We have applied the FEE beam model to GLEAM observations at 200–231 MHz and used false Stokes parameter leakage as a metric to compare the models. We have determined that the FEE model reduced the magnitude and declination-dependent behaviour of false polarisation in Stokes Q and V while retaining low levels of false polarisation in Stokes U.
A High-Resolution Foreground Model for the MWA EoR1 Field: Model and Implications for EoR Power Spectrum Analysis
P. Procopio, R. B. Wayth, J. Line, C. M. Trott, H. T. Intema, D. A. Mitchell, B. Pindor, J. Riding, S. J. Tingay, M. E. Bell, J. R. Callingham, K. S. Dwarakanath, Bi-Qing For, B. M. Gaensler, P. J. Hancock, L. Hindson, N. Hurley-Walker, M. Johnston-Hollitt, A. D. Kapińska, E. Lenc, B. McKinley, J. Morgan, A. Offringa, L. Staveley-Smith, Chen Wu, Q. Zheng
Published online by Cambridge University Press: 10 August 2017, e033
The current generation of experiments aiming to detect the neutral hydrogen signal from the Epoch of Reionisation (EoR) is likely to be limited by systematic effects associated with removing foreground sources from target fields. In this paper, we develop a model for the compact foreground sources in one of the target fields of the MWA's EoR key science experiment: the 'EoR1' field. The model is based on both the MWA's GLEAM survey and GMRT 150 MHz data from the TGSS survey, the latter providing higher angular resolution and better astrometric accuracy for compact sources than is available from the MWA alone. The model contains 5049 sources, some of which have complicated morphology in MWA data, Fornax A being the most complex. The higher resolution data show that 13% of sources that appear point-like to the MWA have complicated morphology such as double and quad structure, with a typical separation of 33 arcsec. We derive an analytic expression for the error introduced into the EoR two-dimensional power spectrum due to peeling close double sources as single point sources and show that for the measured source properties, the error in the power spectrum is confined to high $k_{\perp}$ modes that do not affect the overall result for the large-scale cosmological signal of interest. The brightest 10 mis-modelled sources in the field contribute 90% of the power bias in the data, suggesting that it is most critical to improve the models of the brightest sources. With this hybrid model, we reprocess data from the EoR1 field and show a maximum of 8% improved calibration accuracy and a factor of two reduction in residual power in k-space from peeling these sources. Implications for future EoR experiments including the SKA are discussed in relation to the improvements obtained.
Diamond Heteroepitaxial Lateral Overgrowth
Y-H Tang, B. Bi, B. Golding
Journal: MRS Online Proceedings Library Archive / Volume 1734 / 2015
Published online by Cambridge University Press: 24 February 2015, mrsf14-1734-r05-04
A method of diamond heteroepitaxial lateral overgrowth is demonstrated which utilizes a photolithographic metal mask to pattern a thin (001) epitaxial diamond surface. Significant structural improvement was found, with a threading dislocation density reduced by two orders of magnitude at the top surface of a thick overgrown diamond layer. In the initial stage of overgrowth, a reduction of diamond Raman linewidth in the overgrown area was also realized. Thermally-induced stress and internal stress were determined by Raman spectroscopy of adhering and delaminated diamond films. The internal stress is found to decrease as sample thickness increases.
Genetic diversity in African yam bean accessions based on AFLP markers: towards a platform for germplasm improvement and utilization
B. D. Adewale, I. Vroh-Bi, D. J. Dumet, S. Nnadi, O. B. Kehinde, D. K. Ojo, A. E. Adegbite, J. Franco
Journal: Plant Genetic Resources / Volume 13 / Issue 2 / August 2015
Accurate knowledge of the intra-specific diversity of underutilized crop species is a prerequisite for their genetic improvement and utilization. The diversity of 77 accessions of African yam bean (AYB, Sphenostylis stenocarpa) was assessed by amplified fragment length polymorphism (AFLP) markers. A set of EcoRI/MseI primer pairs was selected, and 227 AFLP bands were generated, of which 59 (26%) were polymorphic in the 77 accessions of AYB. The most efficient primer combination for polymorphism detection was E-ACT/M-CAG with a polymorphic efficiency of 85.5%, while the least efficient was E-AGC/M-CAG with a polymorphic efficiency of 80.6%. The Jaccard genetic distance among the accessions of AYB ranged between 0.048 and 0.842 with a mean of 0.444. TSs98 and TSs104B were found to be the most similar accessions with a genetic similarity of 0.952. The neighbour-joining dendrogram grouped the 77 accessions of AYB into four distinct clusters comprising 8, 20, 21 and 28 accessions. The major clustering of the accessions was not related to their geographical origin. Cluster I was found to be the most diverse. The mean fixation index (0.203) and the mean expected heterozygosity (0.284) revealed a broad genetic base of the AYB accessions. The same germplasm set was previously evaluated for several agro-morphological traits. As the collection of additional AYB germplasm continues, the phenotypic profile, the clustering of the accessions and the AFLP primer combinations from this study can be used to augment breeding programmes.
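The Jaccard similarity underlying these distances is simple to compute from binary AFLP band profiles; the profiles below are hypothetical:

```python
def jaccard_similarity(a, b):
    """Shared bands divided by bands present in either accession."""
    both = sum(x and y for x, y in zip(a, b))
    either = sum(x or y for x, y in zip(a, b))
    return both / either if either else 0.0

acc_1 = [1, 1, 0, 1, 0, 1, 1, 0]   # hypothetical band profile
acc_2 = [1, 1, 0, 1, 0, 1, 0, 0]
s = jaccard_similarity(acc_1, acc_2)
print(f"similarity = {s:.3f}, distance = {1 - s:.3f}")
```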
By M. A. Allison, D. M. Alongi, N. Bi, T. S. Bianchi, G. Billen, N. Blair, D. Bombar, A. Borges, S. Bouillon, W. P. Broussard III, W.-J. Cai, J. Callens, S. Chakraborty, C. T. Arthur Chen, N. Chen, D. R. Corbett, M. Dai, J. W. Day, J. W. Dippner, S. Duan, C. Duarte, T. I. Eglinton, G. Erkens, C. France-Lanord, J. Gaillardet, V. Galy, J. Gan, J. Garnier, M. Goñi, S. L. Goodbred, K. Gundersen, L. Guo, D. Nhu Hai, A. Han, P. J. Harrison, C. Hein, P. J. Hernes, R. D. Hetland, R. M. Holmes, T. J. Hsu, G. Hunsinger, A. Kolker, S. A. Kuehl, H. S. Kung, Z. Lai, N. Ngoc Lam, E. L. Leithold, P. Liu, S. E. Lohrenz, N. Loick-Wilde, R. Macdonald, B. A. McKee, E. Meselhe, H. Middelkoop, S. Mitra, W. Moufaddal, M. C. Murrell, C. A. Nittrouer, A. S. Ogston, P. Passy, M. van der Perk, A. Ramanathan, P. A. Raymond, A. I. Robertson, B. E. Rosenheim, G. P. Shaffer, A. M. Shiller, M. Silvestre, R. G. M. Spencer, R. G. Striegl, A. Stubbins, S. E. Tank, V. Thieu, J. M. Visser, M. Voss, J. P. Walsh, H. Wang, W. R. Woerner, Y. Wu, J. Xu, Z. Yang, K. Yin, Z. Yin, G. L. Zhang, J. Zhang, Z. Y. Zhu, A. R. Zimmerman
Edited by Thomas S. Bianchi, Texas A & M University, Mead A. Allison, University of Texas, Austin, Wei-Jun Cai, University of Delaware
Book: Biogeochemical Dynamics at Major River-Coastal Interfaces
Print publication: 28 October 2013, pp ix-xii
Examining Atomistic Defect-Boundary Interactions Induced by Ion Irradiation using Aberration Corrected Transmission Electron Microscopy
J.A. Aguiar, M. Chi, P. Kotula, Z. Bi, O. Anderoglu, J.K. Baldwin, J.A. Valdez, A. Misra, B. Uberuaga
Journal: Microscopy and Microanalysis / Volume 19 / Issue S2 / August 2013
Published online by Cambridge University Press: 09 October 2013, pp. 1982-1983
Extended abstract of a paper presented at Microscopy and Microanalysis 2013 in Indianapolis, Indiana, USA, August 4 – August 8, 2013.
Genetic diversity assessment of extra-early maturing yellow maize inbreds and hybrid performance in Striga-infested and Striga-free environments
I. C. AKAOGU, B. BADU-APRAKU, V. O. ADETIMIRIN, I. VROH-BI, M. OYEKUNLE, R. O. AKINWALE
Journal: The Journal of Agricultural Science / Volume 151 / Issue 4 / August 2013
Maize (Zea mays L.), a major staple food crop in West and Central Africa (WCA), is adapted to all agro-ecologies in the sub-region. Its production in the sub-region is greatly constrained by infestation of Striga hermonthica (Del.) Benth. The performance and stability of the extra-early maturing hybrids, which are particularly adapted to areas with short growing seasons, were assessed under Striga-infested and Striga-free conditions. A total of 120 extra-early hybrids and an open-pollinated variety (OPV) 2008 Syn EE-Y DT STR used as a control were evaluated at two locations each under Striga-infested (Mokwa and Abuja) and Striga-free (Ikenne and Mokwa) conditions in 2010/11. The Striga-resistant hybrids were characterized by higher grain yield, shorter anthesis–silking interval (ASI), better ear aspect, higher numbers of ears per plant (EPP), lower Striga damage rating, and lower number of emerged Striga plants at 8 and 10 weeks after planting (WAP) compared with the susceptible inbreds. Mean grain yield ranged from 0·71 to 3·18 t/ha under Striga infestation and from 1·19 to 3·94 t/ha under Striga-free conditions. The highest yielding hybrid, TZEEI 83×TZEEI 79, out-yielded the OPV control by 157% under Striga infestation. The hybrids TZEEI 83×TZEEI 79 and TZEEI 67×TZEEI 63 were the highest yielding under both Striga-infested and Striga-free conditions. The genotype main effect plus genotype×environment interaction (GGE) biplot analysis identified TZEEI 88×TZEEI 79 and TZEEI 81×TZEEI 95 as the ideal hybrids across research environments. Twenty-three pairs of simple sequence repeat (SSR) markers were used to assess the genetic diversity among the inbred lines. The correlations between the SSR-based genetic distance (GD) estimates of parental lines and the means observed in F1 hybrids under Striga infestation and optimum growing conditions were not significant for grain yield and other traits except ASI under optimum conditions. Grain yield of inbreds was not significantly correlated with that of F1 hybrids. However, a significant correlation existed between F1 hybrid grain yield and heterosis under Striga infestation (r=0·72, P<0·01). These hybrids have the potential for increasing maize production in Striga endemic areas in WCA.
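The genetic-distance and heterosis correlations reported above are ordinary Pearson correlations with significance tests. A minimal Python sketch on invented values (the distances and yields below are simulated, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)

# Simulated parental genetic distances and F1 hybrid grain yields for
# 23 hypothetical crosses (values invented purely for illustration).
gd = rng.uniform(0.2, 0.8, 23)
yield_f1 = 2.0 + 1.5 * gd + rng.normal(0, 0.4, 23)

r, p = pearsonr(gd, yield_f1)
print(f"r = {r:.2f}, P = {p:.3f}")  # "significant" if P < 0.05
```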
A comparison of the clinical characteristics of women with recurrent major depression with and without suicidal symptomatology
B. Bi, X. Xiao, H. Zhang, J. Gao, M. Tao, H. Niu, Y. Wang, Q. Wang, C. Chen, N. Sun, K. Li, J. Fu, Z. Gan, W. Sang, G. Zhang, L. Yang, T. Tian, Q. Li, Q. Yang, L. Sun, Ying Li, H. Rong, C. Guan, X. Zhao, D. Ye, Y. Zhang, Z. Ma, H. Li, K. He, J. Chen, Y. Cai, C. Zhou, Y. Luo, S. Wang, S. Gao, J. Liu, L. Guo, J. Guan, Z. Kang, D. Di, Yajuan Li, S. Shi, Yihan Li, Y. Chen, J. Flint, K. Kendler, Y. Liu
Journal: Psychological Medicine / Volume 42 / Issue 12 / December 2012
The relationship between recurrent major depression (MD) in women and suicidality is complex. We investigated the extent to which patients who suffered with various forms of suicidal symptomatology can be distinguished from those subjects without such symptoms.
We examined the clinical features of the worst episode in 1970 Han Chinese women with recurrent DSM-IV MD between the ages of 30 and 60 years from across China. Student's t tests, and logistic and multiple logistic regression models were used to determine the association between suicidality and other clinical features of MD.
Suicidal symptomatology is significantly associated with a more severe form of MD, as indexed by both the number of episodes and number of MD symptoms. Patients reporting suicidal thoughts, plans or attempts experienced a significantly greater number of stressful life events. The depressive symptom most strongly associated with lifetime suicide attempt was feelings of worthlessness (odds ratio 4.25, 95% confidence interval 2.9–6.3). Excessive guilt, diminished concentration and impaired decision-making were also significantly associated with a suicide attempt.
This study contributes to the existing literature on risk factors for suicidal symptomatology in depressed women. Identifying specific depressive symptoms and co-morbid psychiatric disorders may help improve the clinical assessment of suicide risk in depressed patients. These findings could be helpful in identifying those who need more intense treatment strategies in order to prevent suicide.
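Odds ratios with confidence intervals of the kind reported above fall straight out of a fitted multiple logistic regression: exponentiate each coefficient and the endpoints of its confidence interval. The sketch below is an illustration on simulated data using Python's statsmodels (not necessarily the software used in the study); the variable names and effect sizes are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated predictors: a binary symptom indicator and a count of
# stressful life events (names are illustrative, not the study's data).
worthlessness = rng.integers(0, 2, n)
life_events = rng.poisson(2.0, n)

# Simulated binary outcome with a built-in positive symptom effect.
logit_p = -2.0 + 1.4 * worthlessness + 0.2 * life_events
attempt = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

# Multiple logistic regression; the intercept must be added explicitly.
X = sm.add_constant(np.column_stack([worthlessness, life_events]))
fit = sm.Logit(attempt, X).fit(disp=False)

# Odds ratios and 95% CIs: exponentiate the coefficients and the
# endpoints of their confidence intervals.
print(np.exp(fit.params))      # const, worthlessness, life_events
print(np.exp(fit.conf_int()))  # lower/upper bounds on the OR scale
```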
Prevalence of haemorrhagic fever with renal syndrome in mainland China: analysis of National Surveillance Data, 2004–2009
X. LIU, B. JIANG, P. BI, W. YANG, Q. LIU
The monthly and annual incidence of haemorrhagic fever with renal syndrome (HFRS) in China for 2004–2009 was analysed in conjunction with associated geographical and demographic data. We applied the seasonal autoregressive integrated moving average (SARIMA) model to fit and forecast monthly HFRS incidence in China. HFRS was endemic in most regions of China except Hainan Province. There was a high risk of infection for male farmers aged 30–50 years. The fitted SARIMA(0,1,1)(0,1,1)12 model had a root-mean-square-error criterion of 0·0133, indicating that accurate forecasts were possible. These findings have practical applications for more effective HFRS control and prevention. The SARIMA model may also have applications as a decision support tool in HFRS control and risk-management planning programmes.
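A SARIMA(0,1,1)(0,1,1)12 model of the kind fitted here can be estimated in a few lines with Python's statsmodels; the sketch below runs on simulated monthly counts, so the series and numbers are purely illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)

# Simulated monthly incidence with a 12-month seasonal cycle (6 years).
months = pd.date_range("2004-01-01", periods=72, freq="MS")
t = np.arange(72)
y = pd.Series(5 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 72),
              index=months)

# SARIMA(0,1,1)(0,1,1)12: regular plus seasonal (lag-12) differencing,
# with one non-seasonal and one seasonal moving-average term.
fit = SARIMAX(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)

print(fit.forecast(steps=12))  # forecast the next 12 months
```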
Study of Crystallinity in μc-Si:H Films Deposited by Cat-CVD for Thin Film Solar Cell Applications
Cheng-Hang Hsu, Yi-Peng Hsu, Fang-Hong Yao, Yen-Tang Huang, Chuang-Chuang Tsai, Hsiao-Wen Zan, Chien-Chung Bi, Chun-Hsiung Lu, Chih-Hung Yeh
Published online by Cambridge University Press: 01 February 2011, 1245-A21-05
The crystallinity of hydrogenated microcrystalline silicon (μc-Si:H) films is known to greatly influence solar cell efficiency, and hydrogen plays a critical role in controlling the crystallinity. Instead of employing conventional plasma deposition techniques, this work focused on using catalytic chemical vapor deposition (Cat-CVD) to study the effect of hydrogen dilution and the filament-to-substrate distance on the crystallinity, deposition rate, microstructure factor and electrical properties of the μc-Si:H film. We found that the substrate material and structure can affect the crystallinity of the μc-Si:H film and the incubation effect. Comparing bare glass, TCO-coated glass, a-Si:H-coated glass and μc-Si:H-coated glass, the microcrystalline phase grows fastest on a μc-Si:H surface and slowest on an a-Si:H surface. Surprisingly, the template effect lasted for more than a thousand atomic layers of silicon.
Fabrication and Investigation of the Metal-Ferroelectric-Semiconductor Structure with Pb(Zr0.53Ti0.47)O3 on AlxGa1-xN/GaN Heterostructures
B. Shen, W. P. Li, X. S. Wang, F. Yan, R. Zhang, Z. X. Bi, Y. Shi, Z. G. Liu, Y. D. Zheng, T. Someya, Y. Arakawa
Journal: MRS Online Proceedings Library Archive / Volume 693 / 2001
Published online by Cambridge University Press: 21 March 2011, I11.41.1
An AlxGa1-xN/GaN-based metal-ferroelectric-semiconductor (MFS) structure is developed by depositing a Pb(Zr0.53Ti0.47)O3 film on a modulation-doped Al0.22Ga0.78N/GaN heterostructure. In high-frequency capacitance-voltage (C-V) measurements, the sheet concentration of the two-dimensional electron gas at the Al0.22Ga0.78N/GaN interface in the MFS structure decreases from 1.56 × 10^13 cm^-2 to 5.6 × 10^12 cm^-2 under the –10 V applied bias. A ferroelectric C-V window of 0.2 V in width near –10 V bias is observed, indicating that the AlxGa1-xN/GaN MFS structure can achieve memory performance without the reversal of the ferroelectric polarization. The results indicate that AlxGa1-xN/GaN heterostructures are promising semiconductor channel candidates for MFS field effect transistors.
Preparation of ZnAl2O4/Al2O3 complex substrates and growth of GaN films
Z.X. Bi, R. Zhang, W. P. Li, X.S. Wang, S.L. Gu, B. Shen, Y. Shi, Z.G. Liu, Y.D. Zheng
Published online by Cambridge University Press: 21 March 2011, I6.39.1
ZnAl2O4/Al2O3 complex substrates were prepared through the solid-phase reaction between a ZnO film and an Al2O3 substrate. GaN films were then grown directly on this new kind of substrate using light-radiation heating low-pressure metalorganic chemical vapor deposition (LRH-LP-MOCVD) without any nitride buffer layer. The structure and surface morphology of the ZnAl2O4/Al2O3 substrates and GaN epilayers were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The results show that as the thickness of the ZnAl2O4 layer is increased, the film changes from a (111)-oriented single crystal to a polycrystal, and the surface morphology transforms from uniform islands to a bulgy-line structure, so that GaN films grown on ZnAl2O4/Al2O3 substrates vary from c-axis-oriented single crystal to polycrystal.
Effect of annealing on fluorescence of Ce3+-doped silica prepared by sol-gel process
H. J. Bi, W. P. Cai, H. Z. Shi, L. D. Zhang, B. D. Yao
Journal: Journal of Materials Research / Volume 15 / Issue 11 / November 2000
We prepared Ce3+-doped silica by the sol-gel method and studied the effect of annealing on the fluorescence of these samples. Different fluorescence was observed for samples annealed at different temperatures, changing gradually from solution-like fluorescence to fluorescence similar to that observed in Ce3+-doped silica prepared by chemical vapor deposition. It was found that the emission intensity first decreased with increasing annealing temperature up to 500 °C, and then increased with the temperature ranging from 500 to 950 °C. Meanwhile, the emission peak showed a large red shift and an obvious broadening. These changes were attributed to the annealing-induced structural evolution in silica: Ce3+ ions changed from coordinating with water and terminal OH groups to being embedded in the silica network.
A Study of Low-Temperature Grown GaP by Gas-Source Molecular Beam Epitaxy
W. G. Bi, X. B. Mei, K. L. Kavanagh, C. W. Tu, E. A. Stach, R. Hull
Published online by Cambridge University Press: 10 February 2011, 293
We report the effects of growth conditions on the strain and crystalline quality of low-temperature (LT) grown GaP films by gas-source molecular beam epitaxy. At temperatures below 160 °C, polycrystalline GaP films are always obtained, regardless of the PH3 flow rate used, while at temperatures above 160 °C, the material quality is affected by the PH3 flow rate. Contrary to compressively strained LT GaAs, high-resolution X-ray rocking curve measurements indicate a tensile strain in the LT GaP films, which is considered to be due to PGa antisite defects. The strain is found to be affected by the PH3 flow rate, the growth temperature, and post-growth annealing. Contrary to LT GaAs, no P precipitates are observed in cross-sectional transmission electron microscopy.
Enhanced Photoluminescence from Erbium-Doped GaP Microdisk Resonator
D. Y. Chu, X. Z. Wang, W. G. Bi, R. P. Espindola, S. L. Wu, B. W. Wessels, C. W. Tu, S. T. Ho
The fabrication and optical properties of an erbium-doped gallium phosphide microdisk resonator pumped by a Ti-sapphire laser at 980 nm were investigated. Enhanced Er3+ intra-4f-shell photoluminescence was observed in the microdisk resonator compared to a thin film, and is attributed to a microcavity effect. At low pumping power intensity, the photoluminescence from erbium-doped gallium phosphide microdisks is an order of magnitude more intense than that from a thin film sample. | CommonCrawl |
Sterility and Suggestion: Minor Psychotherapy in the Soviet Union, 1956–1985. Aleksandra Brokman - 2018 - History of the Human Sciences 31 (4):83-106.
This article explores the concept of minor or general psychotherapy championed by physicians seeking to popularise psychotherapy in the post-Stalin Soviet Union. Understood as a set of skills and principles meant to guide behaviour towards and around patients, this form of psychotherapy was portrayed as indispensable for physicians of all specialities as well as for all personnel of medical institutions. This article shows how, as a result of Soviet teaching on the power of suggestion to influence human organisms, every interaction with patients was conceptualised as a form of psychotherapy, leading to the embrace of placebo as a legitimate form of therapy, and to the blurring of the boundary between therapy and other activities in the clinic. The principles of minor psychotherapy reveal a concept of psychotherapy that is much wider, and rooted in different priorities, than the dominant understanding of this type of treatment found in Western Europe and North America. This article addresses the ethical principles implicit in the Soviet perspective, demonstrating that despite fighting against the uncaring and dismissive attitude of other physicians, Soviet psychotherapists remained rooted in the paternalistic tradition. Finally, it traces the efforts to establish minor psychotherapy as standard practice in medical institutions, which, like many other plans and ambitions of Soviet psychotherapists, were constrained by a lack of resources in the healthcare system.
Aleksandra Koyrégo analiza paradoksów Zenona z Elei [Alexandre Koyré's Analysis of the Paradoxes of Zeno of Elea]. Aleksandra Schoen-Żmijowa - 2002 - Zagadnienia Filozoficzne w Nauce 31.
The Default Position: Optimizing Pediatric Participation in Medical Decision Making. Aleksandra E. Olszewski & Sara F. Goldkind - 2018 - American Journal of Bioethics 18 (3):4-9.
Inclusion of children in medical decision making, to the extent of their ability and interest in doing so, should be the default position, ensuring that children are routinely given a voice. However, optimizing the involvement of children in their health care decisions remains challenging for clinicians. Missing from the literature is a stepwise approach to assessing when and how a child should be included in medical decision making. We propose a systematic approach for doing so, and we apply this approach in a discussion of two challenging clinical cases. The approach is informed by a literature review, and is anchored by case studies of teenagers' refusal of clinical care, regulatory requirements for research assent, and the accepted approach to involving cognitively impaired adults in medical decisions.
Ireneusz Ziemiński, Śmierć, Nieśmiertelność, Sens Życia. Egzystencjalny Wymiar Filozofii Ludwiga Wittgensteina [Death, Immortality, the Meaning of Life. The Existential Dimension of Ludwig Wittgenstein's Philosophy] by Aleksandra Derra. Aleksandra Derra - 2008 - Forum Philosophicum: International Journal for Philosophy 13 (2):379-385.
Piotr Sikora, Słowa i Zbawienie. Dyskurs Religijny w Perspektywie Filozofii Hilarego Putnama [Words and Salvation. Religious Discourse in the Perspective of Hilary Putnam's Philosophy] by Aleksandra Derra. Aleksandra Derra - 2007 - Forum Philosophicum: International Journal for Philosophy 12 (2):458-464.
"I Would Kill the Director and Teachers in the School" Cyberbullying of Hunters in Poland.Aleksandra Matulewska & Dariusz J. Gwiazdowicz - 2020 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 34 (4):985-1010.details
The aim of the paper is to focus on cyberbullying :33–42, 2012) affecting the community of hunters in Poland. The investigation reveals that linguistic aggression pervades more and more spheres of our lives and the Internet, which gives anonymity and physical distance, is the main forum of cyberbullying. The researchers investigate the material gathered from websites such as "Ludzie przeciw myśliwym" [Humans against hunters], hunting-related blogs and Facebook sites devoted to hunting and related to persons who are known to be (...) hunters. The problem of the stereotypical perception of hunting is also raised. The issues of prejudice, stereotyping and lack of knowledge result in the possibility of inciting people to cyberbully others. People brought up in cities, far away from nature, are easily convinced to attack other groups which they perceive as deviant. The verbal aggression deeply rooted in stereotypes and prejudice based on limited knowledge of nature, overly idealistic and naïve worldviews becomes more and more widespread. Therefore, the authors intend to provide some insight into the problem of cyberbullying of hunters in Poland in order to find the patterns of that activity from socio-semiotic perspective analyzing verbal signs and symbols used to justify that sort of behaviour as well as socio-linguistic perspective concerning the usage of emotion-loaded language. Additionally psycho-linguistic issues will be touched upon as well as the problem of shaping the image of hunters by media will be discussed. (shrink)
Third Space of Legal Translation: Between Protean Meanings, Legal Cultures and Communication Stratification. Aleksandra Matulewska & Anne Wagner - 2020 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 34 (5):1245-1260.
Legal translation is a complex transfer of a text formulated in a source language into a target language which needs to take into account a wide array of factors to ensure the equality of parties to the process of interlingual communication. It is an autonomous realm of cross-cultural events within which the system-bound legal concepts/notions deeply rooted in the language, history and societal evolution of one country are transformed and integrated into the language of another, and as a result, stratified over the course of time. That aspect of legal translation is called the Third Space (The post-colonial studies reader, Routledge, New York, pp. 206–209, 1995). The authors investigate some aspects of the Third Space, including Protean meanings and diverging legal cultures which are constantly remodeled, cultural codes, and communication stereotypes, as well as communication problems stemming from the stratification of communication in legal settings. The research methods applied include the semiotic analysis of legal translation strategies and potential loss of meaning.
Legal Languages – A Diachronic Perspective. Aleksandra Matulewska - 2018 - Studies in Logic, Grammar and Rhetoric 53 (1):195-212.
The aim of the article is to discuss the legal language transformations from a diachronic perspective taking into account the following factors: spatial and temporal, linguistic norm changes, political, social, and globalization as well as EU-induced. Spatial and temporal factors include legal relations influenced by climate and the cycles of nature. Linguistic factors include spelling reforms and grammatical changes each language undergoes, for example, as a result of usage. As far as the law is concerned, normative changes can be observed when laws are amended. Other factors such as customs, usage, etc. cannot be neglected when discussing the language of the law. Analogously political correctness and usage can be observed in gender sensitive language and the introduction of such terms as chairperson instead of chairman. Social factors should not be overlooked. As a result of social changes, numerous terms have been introduced to legal lexicons in many countries starting with same-sex unions or same-sex-marriages. The so-called political correctness enforces some language changes and leads to the introduction of new terms and at the same time the abandonment of others. Consequently, some terms cease to be used and consequently become archaic. The aim of the article is to focus on diachronic changes in legal languages and present the communication problems resulting from them from intra- and inter-lingual perspectives.
What Is Art Good For? The Socio-Epistemic Value of Art. Aleksandra Sherman & Clair Morrissey - 2017 - Frontiers in Human Neuroscience 11.
Scientists, humanists, and art lovers alike value art not just for its beauty, but also for its social and epistemic importance; that is, for its communicative nature, its capacity to increase one's self-knowledge and encourage personal growth, and its ability to challenge our schemas and preconceptions. However, empirical research tends to discount the importance of such social and epistemic outcomes of art engagement, instead focusing on individuals' preferences, judgments of beauty, pleasure, or other emotional appraisals as the primary outcomes of art appreciation. Here, we argue that a systematic neuroscientific study of art appreciation must move beyond understanding aesthetics alone, and toward investigating the social importance of art appreciation. We make our argument for such a shift in focus first, by situating art appreciation as an active social practice. We follow by reviewing the available psychological and cognitive neuroscientific evidence that art appreciation cultivates socio-epistemic skills such as self- and other-understanding, and discuss philosophical frameworks which suggest a more comprehensive empirical investigation. Finally, we argue that focusing on the socio-epistemic values of art engagement highlights the important role art plays in our lives. Empirical research on art appreciation can thus be used to show that engagement with art has specific social and personal value, the cultivation of which is important to us as individuals, and as communities.
Automatic Proof Generation in an Axiomatic System for $\mathsf{CPL}$ by Means of the Method of Socratic Proofs. Aleksandra Grzelak & Dorota Leszczyńska-Jasion - 2018 - Logic Journal of the IGPL 26 (1):109-148.
Resistance to Change in the Corporate Elite: Female Directors' Appointments Onto Nordic Boards. Aleksandra Gregorič, Lars Oxelheim, Trond Randøy & Steen Thomsen - 2017 - Journal of Business Ethics 141 (2):267-287.
In this empirical study, we investigate the variation in firms' response to institutional pressure for gender-balanced boards, focusing specifically on the preservation of prevailing practices of director selection and its impact on the representation of women on the board of directors. Using 8 years of data from publicly listed Nordic corporations, we show societal pressure to be one of the determinants of female directorship. Moreover, in some corporations, the director selection process may work to maintain "a traditional type of board". In such boards, demographic diversity among male members appears to be associated with a lower share of female directors, although we cannot establish whether this reflects discrimination or a desire to maintain critical competencies. With this paper we add to the theoretical understanding of the factors underlying female board appointments by adopting an institutional theory lens to study female board representation. Viewing the demands for gender-balanced boards in terms of societal pressure for the de-institutionalization of the prevailing norms and practices, we highlight preferences for maintaining established practices as a potentially important barrier to institutional change. On these grounds, we conjecture on the relationship between the gender diversity of boards and other diversity dimensions. We suggest that a board room gender quota be supplemented by policies to ensure the transparency of board changes, in order to prevent the crowding out of other diversity dimensions.
Race and Power at the Bedside: Counter Storytelling in Clinical Ethics Consultation. Aleksandra E. Olszewski, Maya Scott, Arika Patneaude, Elliott M. Weiss & Aaron Wightman - 2021 - American Journal of Bioethics 21 (2):77-79.
Counter storytelling, used in critical race theory and narrative ethics, is a tool used to contradict and expose the oppression in a dominant narrative, by focusing attention on the stories of the...
In Quest of Sufficient Equivalence. Polish and English Insolvency Terminology in Translation. A Comparative Study. Aleksandra Matulewska - 2014 - Studies in Logic, Grammar and Rhetoric 38 (1):167-188.
The paper deals with the problem of translating selected insolvency terminology from Polish into English and from English into Polish. The research corpora encompassed the Insolvency Act 1986 as amended and Ustawa z dnia 28 lutego 2003. Prawo upadłościowe i naprawcze [the Act on Polish Insolvency and Rehabilitation Law of 28th February 2003 as amended]. The research methods included: the comparison of parallel texts, the method of axiomatisation of the legal linguistic reality, the terminological analysis of the corpus material, the concept of adjusting the target text to the communicative needs and requirements of the community of recipients, and the techniques of providing equivalents for non-equivalent terminology. The research hypothesis has been so formulated that the parametrisation of legal reality may assist in finding more adequate equivalents and determine differences in meaning of compared source and target language terms, which in turn facilitates the choice of a more adequate technique of providing equivalents for non-equivalent or partially equivalent legal terminology meeting the communicative needs of translation recipients. The research results revealed that insolvency terminology is highly system-bound and available equivalents may often be misleading for the community of target text recipients.
Socially Induced Changes in Legal Terminology. Aleksandra Matulewska - 2017 - Studies in Logic, Grammar and Rhetoric 49 (1):153-173.
The author intends to present evolutionary and revolutionary changes in legal terminology. Legal terminology changes as a result of language usage, technological development, political and social changes and even economic reasons. The following research methods have been applied: the terminological analysis of the research material and the analysis of pertinent literature. The research material included legislation from the United Kingdom, the United States of America, Canada and Australia. The author focuses on terminological changes resulting from social transformations. Selected terms and their transformation in respect to meaning and form are elaborated on in the paper. Finally, the author concludes that translation of such terminology should aim at communicative precision, and that many such terms may be false friends in interlingual communication.
Living with Zygmunt Bauman, Before and After. Aleksandra Kania - 2018 - Thesis Eleven 149 (1):86-90.
This paper offers a memoir of living with Zygmunt Bauman. It begins with the early encounter of Bauman and Aleksandra Kania in Warsaw in 1954, where both were Masters students working with the humanist Marxist Adam Schaff. Kania and Bauman followed their separate life paths for decades, though they were both postwar communists and reconstructionists. Much later, the loss of their partners led to union, in Leeds and across the globe in travel. This is a story of friendship and mutual enthusiasms, then intimacy between two working sociologists. There are also some apparent differences, as between the Lark and the Owl, or between Phosphorous and Hesperus. Life together leads especially to Italy, and to Pope Francis. This is a reflection on what Bauman called the art of life.
Iustitia Ut Caritas Sapientis: The Relationship Between Love and Justice in G.W. Leibniz's Philosophy of Right. Aleksandra Horowska - 2017 - Roczniki Filozoficzne 65 (2):185-204.
The purpose of this paper is to present and analyse one of the most intriguing and unique elements of Leibniz's philosophy of right: the relationship between love and justice, mainly based on selected excerpts from the Elementa Iuris Naturalis and the preface to the Codex Iuris Gentium Diplomaticus. The author presents the characteristics of this close connection and tries to answer the question about the reasons for this relationship, referring to the metaphysical assumptions and principles of Leibniz's philosophy. With respect to the latter, the author also explains the significance of the connection between love and justice in Leibniz's philosophy of right as a part of his whole philosophical system.
Cyberbullying in Polish Debate on the Białowieża National Forest. Aleksandra Matulewska, Joanna Kic-Drgas & Paula Trzaskawka - 2020 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 34 (4):1011-1039.
Social media platforms have conquered almost all fields of human life; their impact as opinion creating tools is undisputable. They not only offer a place for people to exchange experiences, but are also a virtual space where people fight with words in defence of their beliefs. This second function has made social media a rich source for linguistic analysis, providing material for the most current social, political, and economic issues. The main aim of this paper is to contribute to reducing the identified gap in the literature on hate speech and consequential cyberbullying from the linguistic perspective and provide conclusions on elements of hate speech through the analysis of statements relating to the cut-out of the Białowieża National Forest. The examples were excerpted from the Polish social media websites of activists representing two opponent groups. This paper consists of three parts. The first part provides an overview of the literature related to hate speech, cyberbullying, their definitions, roles, and the possibilities of analysis. In this part, the background of the discussed polemic is also highlighted. The second part of the paper presents and discusses the results of the conducted research. After having examined some of the social media platforms used by the groups representing different attitudes to the described conflict, we have identified linguistic patterns within aggressive and vulgar statements expressed both directly and indirectly. Therefore, our analysis concentrates on categorisation of characteristic elements of hate statements. In the third part of the paper, we present conclusions referring to the results of the analysis.
Solidarity in Healthcare – the Challenge of Dementia. Aleksandra Małgorzata Głos - 2016 - Diametros 49:1-26.
Dementia will soon be ranked as the world's largest economy. At present, it ranks between 16th and 18th place, alongside countries such as Indonesia, the Netherlands, and Turkey. Dementia is not only a financial challenge, but also a philosophical one. It provokes a paradigm shift in the traditional view of healthcare and expands the classic concepts of human personhood and autonomy. A promising response to these challenges is the idea of cooperative solidarity. Cooperative solidarity, contrary to its 'humanitarian' version, promotes spontaneous teamwork and individual initiative. It obliges us not only to help 'the suffering, the troubled and the disadvantaged', but above all to support those who already do so for spontaneous moral or affective reasons. In the field of dementia study, solidary initiatives are described within the framework of supportive care.
Immediate Transfer of Synesthesia to a Novel Inducer. Aleksandra Mroczko, Thomas Metzinger, Wolf Singer & Danko Nikolić - 2009 - Journal of Vision 9 (12):1-8.
Moral Enhancement and Climate Change: Might It Work? Aleksandra Kulawska & Michael Hauskeller - 2018 - Royal Institute of Philosophy Supplement 83:371-388.
Climate change is one of the most urgent global problems that we face today. The causes are well understood and many solutions have been proposed; however, so far none have been successful. Ingmar Persson and Julian Savulescu have argued that this is because our moral psychology is ill-equipped to deal with global problems such as this. They propose that in order to successfully mitigate climate change we should morally enhance ourselves. In this chapter we look at their proposal to see whether moral enhancement is indeed a viable solution to the climate crisis, and conclude that due to various theoretical and practical problems it most likely is not.
Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism. Aleksandra Swiderska & Dennis Küster - 2020 - Cognitive Science 44 (7).
Load Theory of Selective Attention and Cognitive Control. Nilli Lavie, Aleksandra Hirst, Jan W. de Fockert & Essi Viding - 2004 - Journal of Experimental Psychology: General 133 (3):339-354.
Legal and LSP Linguistics and Translation: Asian Languages' Perspectives. Aleksandra Matulewska - 2019 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 32 (1):1-11.
This essay opens the Special Issue of the International Journal for the Semiotics of Law dedicated to Asian Languages, entitled "Legal and LSP Linguistics and Translation: Asian Languages' Perspectives". It focuses on revealing the principal issues discussed in the volume, by positioning the contributors' works into the general theoretical semiotic perspectives which shape legal languages, legal translation and public discourse over languages spoken in Asia. This volume of the International Journal for the Semiotics of Law is composed of nine articles which may be grouped into four categories of problems. The first group in general refers to problems connected with legal communication both from interlingual and intralingual perspectives. Thus it encompasses four papers dealing with legal translation as well as communication in legal and political settings :1–16, 2018; Mannoni in Int J Semiot Law 32, 2018; Koptseva and Sitnikova in Int J Semiot Law 32:1–28, 2018; Alwazna in Int J Semiot Law 32:1–20, 2018). The second theme focuses on legal interpretation problems in Hong Kong :1–22, 2017) and is an important contribution due to the fact that the right to the interpreter and to communication in a language one understands in court proceedings is one of human rights nowadays and as the real life cases indicate is one of the rights which may be easily abused and no one apart from the victim and the interpreter actually may realise that that human right is not properly observed. Furthermore, the consequences of such abuse may have dire consequences for legal communication participants. The next paper, constituting a separate, third theme, is devoted to teaching legal translation and developing legal translators' competences from the very beginning :1–8, 2018). The last category encompasses three papers devoted to the semiotic analysis of words and images aimed at achieving a specific persuasive result or proper understanding of similar but not identical concepts which may frequently be considered universal despite vital differences resulting from different historical, social or political evolution of societies and states :1–9, 2018; Abbas and Kadim in Int J Semiot Law 32:1–20, 2018; Haider and Olimy in Int J Semiot Law 32:1–32, 2018).
Risk-Taking and Impulsivity: The Role of Mood States and Interoception. Aleksandra M. Herman, Hugo D. Critchley & Theodora Duka - 2018 - Frontiers in Psychology 9.
Writing as Distributed Sociomaterial Practice – a Case Study. Aleksandra Kołtun - 2020 - Avant: Trends in Interdisciplinary Studies 11 (2).
Intrinsic Motivation Predicting Performance Satisfaction in Athletes: Further Psychometric Evaluations of the Sport Motivation Scale-6. Aleksandra Luszczynska, Aleksandra Adamiec, Karolina Zarychta, Karolina Horodyska & Jan Blecharz - 2015 - Polish Psychological Bulletin 46 (2):309-319.
The study investigated psychometric properties of the Sport Motivation Scale-6, assessing intrinsic regulation, four extrinsic regulation constructs, and amotivation among athletes competing at a regional and national level. In particular, we tested the factorial structure of the SMS-6, its short-term stability, and the associations of SMS-6 constructs with self-efficacy, self-esteem, motivational climate, and satisfaction with sport performance. Participants were 197 athletes, representing team and individual disciplines. The measurement was repeated at the three-week follow-up. Results yielded support for the six first-order factor structure. More autonomous forms of motivation were related to higher levels of self-efficacy, performance satisfaction, and task-oriented motivational climate in sport organizations. Sequential multiple mediation analysis showed that the association between general self-efficacy and performance satisfaction at follow-up was mediated by introjected regulation and personal-barrier self-efficacy.
Using Memrise in Legal English Teaching. Aleksandra Łuczak - 2017 - Studies in Logic, Grammar and Rhetoric 49 (1):141-152.
Memrise is an educational tool available both online and for mobile devices. Memrise uses flashcards and mnemonic techniques to aid in teaching foreign languages and memorizing information from other subjects, e.g. geography, law or mathematics. Memrise courses are created by its users through the process of crowdsourcing; therefore they are tailored to the individual needs of the users and may focus on the specific content of a particular coursebook or classes. The paper will attempt to present possibilities of using Memrise in teaching and learning legal English vocabulary during a tertiary course leading to the TOLES certificate examination. The paper will look at various types of exercises which facilitate memorizing vocabulary, learning collocations and prepositional phrases, and developing the skill of paraphrasing and defining legal terms of art in plain English. Application of the crowdsourcing method enables the learners to participate in the process of the course creation and constitutes for them a supplementary, out-of-class exposure to the target language. The second part of the paper will discuss the results of the research conducted by the author among her law students. The aim of the research was to investigate the students' opinions about Memrise as a tool which might facilitate individual learning of the specialist language, as well as to assess whether Memrise may influence the test results achieved by the students during the legal English course. The paper will contrastively analyse the progress test results achieved by the students who have used Memrise to revise and recycle language material and those who have chosen traditional methods of learning. The research also attempted to address the question whether the students who had been contributors to the content of Memrise courses had performed better in tests than those who had only been users.
Consequences of Beauty: Effects of Rater Sex and Sexual Orientation on the Visual Exploration and Evaluation of Attractiveness in Real World Scenes. Aleksandra Mitrovic, Pablo P. L. Tinio & Helmut Leder - 2016 - Frontiers in Human Neuroscience 10.
Biological Movement Increases Acceptance of Humanoid Robots as Human Partners in Motor Interaction. Aleksandra Kupferberg, Stefan Glasauer, Markus Huber, Markus Rickert, Alois Knoll & Thomas Brandt - 2011 - AI and Society 26 (4):339-345.
The automatic tendency to anthropomorphize our interaction partners and make use of experience acquired in earlier interaction scenarios leads to the suggestion that social interaction with humanoid robots is more pleasant and intuitive than that with industrial robots. An objective method applied to evaluate the quality of human–robot interaction is based on the phenomenon of motor interference (MI). It claims that a face-to-face observation of a different (incongruent) movement of another individual leads to a higher variance in one's own movement trajectory. In social interaction, MI is a consequence of the tendency to imitate the movement of other individuals and goes along with mutual rapport, sense of togetherness, and sympathy. Although MI occurs while observing a human agent, it disappears in case of an industrial robot moving with piecewise constant velocity. Using a robot with human-like appearance, a recent study revealed that its movements led to MI, only if they were based on human prerecording (biological velocity), but not on constant (artificial) velocity profile. However, it remained unclear, which aspects of the human prerecorded movement triggered MI: biological velocity profile or variability in movement trajectory. To investigate this issue, we applied a quasi-biological minimum-jerk velocity profile (excluding variability in the movement trajectory as an influencing factor of MI) to motion of a humanoid robot, which was observed by subjects performing congruent or incongruent arm movements. The increase in variability in subjects' movements occurred both for the observation of a human agent and for the robot performing incongruent movements, suggesting that an artificial human-like movement velocity profile is sufficient to facilitate the perception of humanoid robots as interaction partners.
Mixed Psychological Changes Following Mastectomy: Unique Predictors and Heterogeneity of Post-Traumatic Growth and Post-Traumatic Depreciation. Aleksandra Kroemeke, Kamilla Bargiel-Matusiewicz & Magdalena Kalamarz - 2017 - Frontiers in Psychology 8.
Effects of Self-Concept Differentiation on Sense of Identity: The Divided Self Revisited Again. Aleksandra Pilarska - 2017 - Polish Psychological Bulletin 48 (2):255-263.
This article describes research on the associations between self-concept structure and sense of personal identity. Particular emphasis was given to the feature of self-concept differentiation (SCD). Notably, it was examined whether the effects of SCD on such aspects of self-experience as sense of having inner contents, sense of uniqueness, sense of one's own boundaries, sense of coherence, sense of continuity in time, and sense of self-worth depend on individuals' epistemic motivation, and more specifically their joint need for cognition, reflection, and integrative self-knowledge scores. Cluster analysis revealed three distinct profiles of epistemic motivation: disengaged, engaged and struggling, and engaged and integrating group. Subsequent analysis showed, first, that the three groups differed in SCD and sense of identity, with the epistemically disengaged group having the highest levels of SCD, and the epistemically engaged and integrating group having consistently the strongest sense of identity. Second, and more importantly, it showed that SCD was negatively related to overall sense of identity, and, in particular, senses of having inner contents, coherence and continuity in time, but only among individuals in the epistemically engaged and struggling group.
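Profile analyses of this kind are often implemented as k-means clustering on standardized scale scores. The sketch below is a generic illustration on simulated scores, assuming a k-means approach; it is not a reconstruction of the study's actual procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Simulated scores on three motivation scales (stand-ins for need for
# cognition, reflection, integrative self-knowledge) for 300 respondents.
scores = rng.normal(size=(300, 3))

# Standardize the scales, then request three clusters to match the
# three profiles reported in the study.
z = StandardScaler().fit_transform(scores)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(z)

print(np.round(km.cluster_centers_, 2))  # profile centroids in z-units
print(np.bincount(km.labels_))           # cluster sizes
```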
Safe but Lonely? Loneliness, Anxiety, and Depression Symptoms and COVID-19. Łukasz Okruszek, Aleksandra Aniszewska-Stańczuk, Aleksandra Piejka, Marcelina Wiśniewska & Karolina Żurek - 2020 - Frontiers in Psychology 11.
Background: The COVID-19 pandemic has led governments worldwide to implement unprecedented response strategies. While crucial to limiting the spread of the virus, "social distancing" may lead to severe psychological consequences, especially in lonely individuals. Methods: We used cross-sectional and longitudinal designs to investigate the links between loneliness, anxiety and depression symptoms (ADS), and COVID-19 risk perception and affective response in young adults who implemented social distancing during the first 2 weeks of the state of epidemic threat in Poland. Results: Loneliness was correlated with ADS and with affective response to COVID-19's threat to health. However, increased worry about social isolation and heightened risk perception for financial problems were observed in lonelier individuals. The cross-lagged influence of the initial affective response to COVID-19 on subsequent levels of loneliness was also found. Conclusion: The reciprocal connections between loneliness and COVID-19 response may be of crucial importance for ADS during the COVID-19 crisis.
From the Sequence of the Sun-Goddess (bhānavīkrama) to Time-Consumption (kālagrāsa): Some Notes on the Development of the Śākta Doctrine of the Twelve Kālīs. Aleksandra Wenta - 2021 - Journal of Indian Philosophy 49 (5):725-757.
The doctrine of the twelve Kālīs is one of the earliest developments of the Śākta tradition of the Kālīkula/Kālīkrama/Mahānaya and it is well known in the later exegetical works of Abhinavagupta, Kṣemarāja, and Maheśvarānanda. Although the twelve Kālīs have been treated to some extent in secondary literature, a systematic study of the development and reception of this doctrine has not been undertaken yet. This is mainly due to the fact that most of the Kālīkula scriptures are available in manuscript form, and methodical analysis of their contents remains a desideratum. In this article, I intend to examine selected tantric scriptures teaching the doctrine of the twelve Kālīs, focusing on the development of the constituent elements of this doctrine, as they appear in different tantric sources. This article traces the origins of the twelve Kālīs to the esoteric teaching of the Sun-Goddess, linked to the tradition of the Skeleton of Kālī. It will argue that in the subsequent phase of the doctrine's development the solar context gradually diminished and an emphasis on the twelve goddesses' function as the destroyers of time became more and more pronounced. This tendency, in turn, influenced the codification of the twelve Kālīs as the fully-fledged doctrine of time-consumption, popular in the Trika and the Trika-inspired Krama sources.
Capturing Socially Motivated Linguistic Change: How the Use of Gender-Fair Language Affects Support for Social Initiatives in Austria and Poland. Magdalena M. Formanowicz, Aleksandra Cisłak, Lisa K. Horvath & Sabine Sczesny - 2015 - Frontiers in Psychology 6.
Pandemica Panoptica: Biopolitical Management of Viral Spread in the Age of Covid-19. Anne Wagner, Aleksandra Matulewska & Sarah Marusek - forthcoming - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique:1-37.
The current pandemic period has triggered a series of changes in society, at both individual and collective behavioral levels. These changes were perceived as either positive or negative by the impacted bodies, leading to both social change and positive interactions in a tense context. In this paper, the authors will deal with Pandemica Panoptica, subjugation infiltrating all levels of society, and the approach adopted by several countries in trying to find countermeasures to combat the virus' proliferation. Our research scope began at the onset of the pandemic and ended in early January 2021.
Semantic Mechanisms May Be Responsible for Developing Synesthesia. Aleksandra Mroczko-Wąsowicz & Danko Nikolić - 2014 - Frontiers in Human Neuroscience 8:1-13.
Bioethical Dilemmas of Assisted Reproduction in the Opinions of Polish Women in Infertility Treatment: A Research Report. Aleksandra Dembińska - 2012 - Journal of Medical Ethics 38 (12):731-734.
Infertility treatment is replete with bioethical dilemmas regarding the limits of available medical therapies. Poland has no legal acts regulating the ethical problems associated with infertility treatment, and work on such legislation has been in progress for a long time, arousing very intense emotions in Polish society. The purpose of the present study was to find out what Polish women undergoing infertility treatment think about the most disputable and controversial bioethical problems of assisted reproduction. An Attitudes towards Bioethical Problems of Infertility Scale was constructed specifically for this study. Items were taken from the Bioethics Bills currently under discussion in the Polish Parliament (Sejm). 312 women were enrolled in the study. Women experiencing infertility favoured more liberal legislation. Participants disagreed, for example, with the following regulations: prohibition of embryo freezing, prohibition of preimplantation genetic diagnosis of embryos, age limits for women using in vitro fertilisation and prohibition of in vitro fertilisation for single women. The opinions of patients undergoing infertility treatment are an important voice in the Polish debate on the Bioethics Bills.
That West Meant to Be Declining. Zygmunt Bauman & Aleksandra Kania - 2018 - Thesis Eleven 149 (1):91-99.
This conversation between Zygmunt Bauman and Aleksandra Kania picks up on the themes of crisis, interregnum and the decline of the West. Decline of the West is first of all decline of western civilization. This easily leads to panic about the end of the world; what it really indicates is the limits and constraints of a world system based on nation-states. Spengler and Elias are introduced as interlocutors, in order to open these issues, and those of capitalism, socialism and caesarism. Trump here appears as a wilfully decisionist leader. Populism plays its part, but illiberalism now overpowers neoliberalism. Bauman and Kania engage in this text as interlocutors; this is a record of their own dialogue, and a reminder of its possibilities.
Odpominanie prawdy u Platona [Recollecting Truth in Plato]. Aleksandra Burek - 2004 - Przegląd Filozoficzny - Nowa Seria 49 (1):39-52.
Coping After Myocardial Infarction. The Mediational Effects of Positive and Negative Emotions. Aleksandra Kroemeke & Ewa Gruszczyńska - 2009 - Polish Psychological Bulletin 40 (1):38-45.
The aim of the study was to examine mediational effects of positive and negative emotions on the relationship between cognitive appraisal and coping after myocardial infarction. Subjects were 163 patients assessed a few days after their first MI episode for cognitive appraisal using the Situation Appraisal Questionnaire developed by Wrześniewski and based on the Lazarus theory. The participants' current emotional state and coping strategies were evaluated with Polish versions of the PANAS and CISS-S, respectively. The data were analyzed using the bootstrapping procedure. Resultant models turned out to be similar for threat and loss appraisal, where PEs mediated task-oriented coping, while NEs were found to mediate emotion-oriented coping. A different relationship was found for challenge. Due to a significant intercorrelation among appraisals, mediational models for threat and loss were re-analyzed when controlling for challenge. Nevertheless, even if a situation is perceived as highly stressful, both positive and negative emotions can emerge, resulting in strategies that serve different functions to meet external and internal demands.
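The bootstrapping procedure mentioned above, applied to mediation, amounts to resampling cases with replacement, re-estimating the a (predictor-to-mediator) and b (mediator-to-outcome) paths, and checking whether the confidence interval of the indirect effect a*b excludes zero. A minimal sketch on simulated data (variables and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 163  # matches the study's sample size; the data below are simulated

# Simulated appraisal (x), emotion (m) and coping (y) with a built-in
# indirect path x -> m -> y.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)

def indirect_effect(xb, mb, yb):
    # a path: slope of m on x; b path: slope of y on m controlling for x.
    a = np.polyfit(xb, mb, 1)[0]
    X = np.column_stack([np.ones_like(xb), xb, mb])
    b = np.linalg.lstsq(X, yb, rcond=None)[0][2]
    return a * b

boots = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)  # one shared resampling index per draw
    boots[i] = indirect_effect(x[idx], m[idx], y[idx])

# Percentile 95% CI; mediation is supported if the interval excludes 0.
print(np.percentile(boots, [2.5, 97.5]))
```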
An ICQ Message Board Session as Discourse: A Case Study. Aleksandra Górska - 2007 - Lodz Papers in Pragmatics 3:179-193.
Even though the scope of literature on online communication is expanding fast, very little attention seems to be paid to instant messengers, programmes providing for one-to-one communication in real time. This is quite surprising, since such programmes create conditions closest to face-to-face communication. The similarities and differences between computer-mediated and face-to-face interaction should be most apparent in instant-messenger-mediated communication. The present paper focuses on this type of internet communication. The data sample is a transcript of an online conversation that took place within one day. It is analysed within the framework of Conversation Analysis with regard to turn-taking and the occurrence of discourse markers. Also, attention is paid to the use of minimal responses. Although, as might be expected, face-to-face and computer-mediated interaction share many features with respect to the above criteria, a few interesting differences arise.
Synesthesia, Sensory-Motor Contingency, and Semantic Emulation: How Swimming Style-Color Synesthesia Challenges the Traditional View of Synesthesia. Aleksandra Mroczko-Wąsowicz & Markus Werning - 2012 - Frontiers in Psychology 3 (279):1-12.
Synesthesia is a phenomenon in which an additional nonstandard perceptual experience occurs consistently in response to ordinary stimulation applied to the same or another modality. Recent studies suggest an important role of semantic representations in the induction of synesthesia. In the present proposal we try to link the empirically grounded theory of sensory-motor contingency and mirror system based embodied simulation to newly discovered cases of swimming-style color synesthesia. In the latter color experiences are evoked only by showing the synesthetes a picture of a swimming person or asking them to think about a given swimming style. Neural mechanisms of mirror systems seem to be involved here. It has been shown that for mirror-sensory synesthesia, such as mirror-touch or mirror-pain synesthesia, concurrent experiences are caused by the overactivity in the mirror neuron system responding to the specific observation. The comparison of different forms of synesthesia has the potential of challenging conventional thinking on this phenomenon and providing a more general, sensory-motor account of synesthesia encompassing cases driven by semantic or emulational rather than pure sensory or motor representations.
The Significance of Executive Functions for the Trait of Self-Control: A Psychometric Study. Edward Nęcka, Aleksandra Gruszka, Jarosław Orzechowski, Michał Nowak & Natalia Wójcik - 2018 - Frontiers in Psychology 9.
What Can Sensorimotor Enactivism Learn From Studies on Phenomenal Adaptation in Atypical Perceptual Conditions? – A Commentary on Rick Grush and Colleagues. Aleksandra Mroczko-Wąsowicz - 2015 - Open MIND.
Memory, Metamemory, and Social Cues: Between Conformity and Resistance. Katarzyna Zawadzka, Aleksandra Krogulska, Roberta Button, Philip A. Higham & Maciej Hanczakowski - 2016 - Journal of Experimental Psychology: General 145 (2):181-199.
Psychometric Properties of the Polish Version of the Trait Emotional Intelligence Questionnaire-Short Form. Agata Wytykowska, Aleksandra Jasielska & Dorota Szczygieł - 2015 - Polish Psychological Bulletin 46 (3):447-459.
The study was aimed at validating the Polish version of the Trait Emotional Intelligence Questionnaire-Short Form. Our findings confirm the reliability and validity of the scale. With respect to reliability, internal consistency coefficients of the TEIQue-SF were comparable to those obtained using the original English version. The evidence of the validity of the TEIQue-SF came from the pattern of relations with the other self-report measure of EI, personality measures, as well as affective and social correlates. We demonstrated that the TEIQue-SF (...) score correlated positively with scores on the Emotional Intelligence Questionnaire. The TEIQue-SF score correlated negatively with Neuroticism and positively with Extraversion, Openness, Agreeableness, and Conscientiousness. In addition, scores on the TEIQue-SF were related to dispositional affect, i.e., correlated positively with positive affectivity and negatively with negative affectivity. The TEIQue-SF score correlated positively with social competencies as measured with the Social Competencies Questionnaire. We also found that trait EI, as measured with the TEIQue-SF, was positively related to the richness of one's supportive social network and this relationship remained statistically significant even after controlling for Big Five variance. We also demonstrated that scoring on the TEIQue-SF was positively related to satisfaction with life and negatively related to perceived stress and these relationships remained significant, even after controlling for positive and negative affectivity. Taken together, these findings suggest that the Polish version of the TEIQue-SF is a reliable and valid measure that inherits the network of associations both from the original version of the TEIQue-SF and the full form of the Polish TEIQue.
Lethal Laws and Lethal Education: A Case Study of Soviet Genocide Against Polish Foresters and Five Decades of Infodemic. Dariusz J. Gwiazdowicz & Aleksandra Matulewska - forthcoming - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique:1-30.
Genocide as a part of a nation or ethnic group extermination process is not a well-defined concept. Its meaning is understood intuitively. When law intervenes, the issue of defining the term comes back. Nevertheless, the Polish nation has been recognized as subjected to genocide activities during the Second World War by Nazi Germany and the Soviet Union. The paper focuses mainly on the genocide against one group of Poles, namely foresters. The martyrologic evidence proves that foresters were an (...) occupation group which for a variety of reasons suffered most. The research carried out in this respect by the National Forests in Poland has revealed that over 20% of the pre-war staff of the institution actually lost their lives. They were killed by the German and Soviet occupiers, as well as the Ukrainian Insurgent Army. In the eastern parts of Poland, foresters and their families were deported and sent to various labour and extermination camps, e.g. in Siberia. The aim of the paper is to present the scale of the genocide, with the main emphasis on the genocide against foresters. The definition of genocide provided by Lemkin, who coined the term, is a starting point for the analysis. The main research methods included the analysis of pertinent literature and source materials as well as the semiotic analysis of the circumstances of the genocide. The thesis put forward by the authors is that foresters, due to their education, practical skills, professional experience, and insider knowledge, turned out to be a group especially vulnerable and subjected to extermination on purpose.
Positivity and Job Burnout in Emergency Personnel: Examining Linear and Curvilinear Relationship. Ewa Gruszczyńska & Beata Aleksandra Basińska - 2017 - Polish Psychological Bulletin 48 (2):212-219.
The aim of this study was to examine whether the relationship between the ratio of job-related positive to negative emotions and job burnout is best described as linear or curvilinear. Participants were 89 police officers and 86 firefighters. The positivity ratio was evaluated using the Job-related Affective Wellbeing Scale. Exhaustion and disengagement, two components of job burnout, were measured using the Oldenburg Burnout Inventory. The results of regression analysis revealed that curvilinear relationships between the positivity ratio and two components of (...) job burnout appeared to better fit the data than linear relationships. The relationship between the positivity ratio and exhaustion was curvilinear with a curve point at around 2.1. A similar curvilinear relationship, but with a lower curve point, i.e., around 1.8, was observed for disengagement. It seems that beyond certain values there may be hidden costs of maintaining positive emotions at work. Also, the unequal curve points for subscales suggest that different dimensions of work-related functioning are variously prone to such costs.
Autonomy Revisited. Heta Aleksandra Gylling - 2004 - Cambridge Quarterly of Healthcare Ethics 13 (1):41-46.
One of the core issues in medical ethics has been and still is autonomy, people's right to make their own self-regarding choices in situations where more than one option is available. Depending on the case, these choices may be influenced by personal life history, one's ethical and other values, and one's future expectancies. A professional soccer player may risk an operation, which for a less athletic individual would represent an unnecessary risk that might jeopardize her ability to even walk. Saying (...) no to painkillers may sound irrational to those who do not see anything ennobling in avoidable suffering, and preferring homeopathic medicine to more evidence-based medicine may lead others to seriously doubt the logic of one's thinking. But although these situations may be difficult, they seldom lead to an impasse. Even if serious value conflicts emerge in these patient–medical personnel encounters, they can be overcome by the fact that, in Western countries, honoring patients' autonomy has been widely accepted as part of medical professionalism.
Is Category Theory useful for learning functional programming?
I'm learning Haskell and I'm fascinated by the language. However I have no serious math or CS background. But I am an experienced software programmer.
I want to learn category theory so I can become better at Haskell.
Which topics in category theory should I learn to provide a good basis for understanding Haskell?
programming-languages functional-programming category-theory
migrated from cstheory.stackexchange.com Aug 3 '12 at 19:48
This question came from our site for theoretical computer scientists and researchers in related fields.
Also see relating category theory to programming language theory. – Kaveh Aug 13 '12 at 5:25
I appreciate that you distinguish programming and cs. – jmite Oct 31 '13 at 1:31
"Learning Category Theory to become better in Haskell" is a bit like "Learning physics to become better in tennis" – user26756 Oct 21 '15 at 13:08
In a previous answer in the Theoretical Computer Science site, I said that category theory is the "foundation" for type theory. Here, I would like to say something stronger. Category theory is type theory. Conversely, type theory is category theory. Let me expand on these points.
Category theory is type theory
In any typed formal language, and even in normal mathematics using informal notation, we end up declaring functions with types $f : A \to B$. Implicit in writing that is the idea that $A$ and $B$ are some things called "types" and $f$ is a "function" from one type to another. Category theory is the algebraic theory of such "types" and "functions". (Officially, category theory calls them "objects" and "morphisms" so as to avoid treading on the set-theoretic toes of the traditionalists, but increasingly I see category theorists throwing such caution to the wind and using the more intuitive terms: "type" and "function". But, be prepared for protests from the traditionalists when you do so.)
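As a taste of what this algebra looks like in Haskell itself (a minimal sketch; this is essentially the Category class that ships in base's Control.Category):

```haskell
import Prelude hiding (id, (.))

-- A category has identities and an associative composition;
-- nothing here says the objects must be sets.
class Category cat where
  id  :: cat a a
  (.) :: cat b c -> cat a b -> cat a c

-- Ordinary Haskell functions form the prototypical category:
instance Category (->) where
  id x      = x
  (g . f) x = g (f x)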
We have all been brought up on set theory from high school onwards. So, we are used to thinking of types such as $A$ and $B$ as sets, and functions such as $f$ as set-theoretic mappings. If you never thought of them that way, you are in good shape. You have escaped set-theoretic brain-washing. Category theory says that there are many kinds of types and many kinds of functions. So, the idea of types as sets is limiting. Instead, category theory axiomatizes types and functions in an algebraic way. Basically, that is what category theory is. A theory of types and functions. It does get quite sophisticated, involving high levels of abstraction. But, if you can learn it, you will acquire a deep understanding of types and functions.
Type theory is category theory
By "type theory," I mean any kind of typed formal language, based on rigid rules of term-formation which make sure that everything type checks. It turns out that, whenever we work in such a language, we are working in a category-theoretic structure. Even if we use set-theoretic notations and think set-theoretically, still we end up writing stuff that makes sense categorically. That is an amazing fact.
Historically, Dana Scott may have been the first to realize this. He worked on producing semantic models of programming languages based on typed (and untyped) lambda calculus. The traditional set-theoretic models were inadequate for this purpose, because programming languages involve unrestricted recursion which set theory lacks. Scott invented a series of semantic models that captured programming phenomena, and came to the realization that typed lambda calculus exactly represented a class of categories called cartesian closed categories. There are plenty of cartesian closed categories that are not "set-theoretic". But typed lambda calculus applies to all of them equally. Scott wrote a nice essay called "Relating theories of lambda calculus" explaining what is going on, parts of which seem to be available on the web. The original article was published in a volume called "To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism", Academic Press, 1980. Berry and Curien came to the same realization, probably independently. They defined a categorical abstract machine (CAM) to use these ideas in implementing functional languages, and the language they implemented was called "CAML" which is the underlying framework of Microsoft's F#.
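To make the cartesian closed structure concrete in Haskell terms, here is a small illustrative sketch (curry and uncurry already exist in the Prelude; these are hand-rolled copies):

```haskell
-- In a cartesian closed category, Hom(A x B, C) is isomorphic to
-- Hom(A, C^B). In Haskell, pairs are the product and the function
-- type is the exponential, and the isomorphism is witnessed by:
curry' :: ((a, b) -> c) -> a -> b -> c
curry' f a b = f (a, b)

uncurry' :: (a -> b -> c) -> (a, b) -> c
uncurry' g (a, b) = g a b
```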
Standard type constructors like $\times$, $\to$, $List$ etc. are functors. That means that they not only map types to types, but also functions between types to functions between types. Polymorphic functions preserve all such functions resulting from functor actions. Category theory was invented in the 1940s by Eilenberg and Mac Lane precisely to formalize the concept of polymorphic functions. They called them "natural transformations", "natural" because they are the only ones that you can write in a type-correct way using type variables. So, one might say that category theory was invented precisely to formalize polymorphic programming languages, even before programming languages came into being!
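A small Haskell illustration of this point (my own example, not from the original answer): the list and Maybe type constructors are functors, and a polymorphic function between them is a natural transformation:

```haskell
-- fmap lifts f :: a -> b to a function between the mapped types,
-- for both the list functor and the Maybe functor.
-- safeHead is polymorphic, i.e. a natural transformation
-- from the list functor to the Maybe functor:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality square, checkable for any f :: a -> b and xs :: [a]:
--   fmap f (safeHead xs) == safeHead (fmap f xs)
```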
A set-theoretic traditionalist has no knowledge of the functors and natural transformations that are going on under the surface when he uses set-theoretic notations. But, as long as he is using the type system faithfully, he is really doing categorical constructions without being aware of them.
All said and done, category theory is the quintessential mathematical theory of types and functions. So, all programmers can benefit from learning a bit of category theory, especially functional programmers. Unfortunately, there do not seem to be any text books on category theory targeted at programmers specifically. The "category theory for computer science" books are typically targeted at theoretical computer science students/researchers. The book by Benjamin Pierce, Basic category theory for computer scientists is perhaps the most readable of them.
However, there are plenty of resources on the web, which are targeted at programmers. The Haskellwiki page can be a good starting point. At the Midlands Graduate School, we have lectures on category theory (among others). Graham Hutton's course was pegged as a "beginner" course, and mine was pegged as an "advanced" course. But both of them cover essentially the same content, going to different depths. University of Chalmers has a nice resource page on books and lecture notes from around the world. The enthusiastic blog site of "sigfpe" also provides a lot of good intuitions from a programmer's point of view.
The basic topics you would want to learn are:
definition of categories, and some examples of categories
functors, and examples of them
natural transformations, and examples of them
definitions of products, coproducts and exponents (function spaces), initial and terminal objects.
adjunctions
monads, algebras and Kleisli categories
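As a rough, Haskell-flavoured dictionary for several of these topics (an illustrative sketch, not part of any of the lecture notes mentioned below):

```haskell
import Data.Void (Void, absurd)

-- initial object  ~ Void        (no values; unique map out)
-- terminal object ~ ()          (one value; unique map in)
-- product         ~ (a, b)      with projections fst and snd
-- coproduct       ~ Either a b  with injections Left and Right
-- exponential     ~ a -> b      (the function space itself)

fromInitial :: Void -> a
fromInitial = absurd

toTerminal :: a -> ()
toTerminal _ = ()
```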
My own lecture notes in the Midlands Graduate School covers all these topics except for the last one (monads). There are plenty of other resources available for monads these days. So that is not a big loss.
The more mathematics you know, the easier it would be to learn category theory. Because category theory is a general theory of mathematical structures, it is helpful to know some examples to appreciate what the definitions mean. (When I learnt category theory, I had to make up my own examples using my knowledge of programming language semantics, because the standard text books only had mathematical examples, which I didn't know anything about.) Then came the brilliant book by Lambek and Scott called "Introduction to Higher Order Categorical Logic", which related category theory to type systems (what they call "logic"). It is now possible to understand category theory just by relating it to type systems even without knowing a lot of examples. A lot of the resources I mentioned above use this approach to explain category theory.
Uday Reddy
@UdayReddy I strongly disagree with your identification of category theory with type theory. Modern type theory is substantially about types for concurrent processes, e.g. the tradition of session types. To the best of my knowledge there is no categorical understanding of such typing systems. – Martin Berger Jan 9 '13 at 3:34
@MartinBerger I think your interpretation of "type theory" is a bit narrow. However, I agree that a proper type-theoretic and category-theoretic understanding of session types is currently a good research challenge, one that I intend to spend time on. – Uday Reddy Jan 9 '13 at 7:07
@MartinBerger. To see how category theory applies to richer notions of computation, I invite you to look at how it has been applied to the theory of imperative programming and to games semantics (which again can encode imperative computations quite well). So, I don't believe that functional programming has a monopoly on category theory. – Uday Reddy Jan 10 '13 at 16:04
@nicolas, fibrations are a way to do indexed categories, which model dependent types. Fibrations can also be viewed as a very general form of program logic, where $f : P \to Q$ means that $f$ maps $P$-satisfying values to $Q$-satisfying values. – Uday Reddy Mar 17 '17 at 17:59
"Unfortunately, there do not seem to be any text books on category theory targeted at programmers specifically." Such a "text book" now more-or-less exists in Bartosz Milewski's Category Theory for Programmers. Bartosz has also created an accompanying lecture series. – alx9r Jan 10 at 15:46
I'm going to try and keep it short and sweet. There is an informal correspondence between Haskell programs and certain classes of categories, which can be made more formal with some work. This correspondence is known as the Curry-Howard-Lambek correspondence and relates:
Haskell types with objects of the category
Terms of type $A\rightarrow B$ with morphisms $f\colon A\rightarrow B$ (note the similar notations)
Algebraic datatypes with initial algebras (initial objects in a category of algebras)
Type constructors with functors
The list goes on and on, but one crucial point is that you can define things like monads and algebras in category theory and come up with notions that are both useful to mathematicians but also pervasive in the practice of Haskell programming.
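For instance, here is a minimal Haskell sketch (my own illustration) of "algebraic datatypes as initial algebras":

```haskell
-- Nat is the initial algebra of the functor F x = 1 + x, and
-- foldNat is the unique map (catamorphism) from it into any other
-- F-algebra, given by a zero case z and a successor case s.
data Nat = Zero | Succ Nat

foldNat :: r -> (r -> r) -> Nat -> r
foldNat z _ Zero     = z
foldNat z s (Succ n) = s (foldNat z s n)

-- Example F-algebra on Int:
toInt :: Nat -> Int
toInt = foldNat 0 (+ 1)
```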
I'm not sure which book to recommend, as I haven't found a completely satisfactory introductory book on categories for computer scientists. You can try Categories, Types and Structures by Asperti and Longo. The idea is to learn basic definitions up to adjunctions, and then maybe try and read some of the excellent blogs out there to try and understand these concepts.
cody
$\begingroup$ "come up with notions that are both useful to mathematicians but also pervasive in the practice of Haskell programming" -- can you give an example, or would that require too much prior knowledge? $\endgroup$ – Raphael♦ Aug 11 '12 at 11:08
$\begingroup$ @Raphael: Monads. Arrows. Algebras. Coalgebras. $\endgroup$ – Dave Clarke Aug 11 '12 at 12:10
$\begingroup$ Functors, duality, the Kleisli category, the Yoneda lemma... $\endgroup$ – cody Aug 12 '12 at 9:25
$\begingroup$ Cartesion closed categories. Currying. $\endgroup$ – Dave Clarke Aug 12 '12 at 18:04
$\begingroup$ "An introduction to Category Theory for Software Engineers", cs.toronto.edu/~sme/presentations/cat101.pdf $\endgroup$ – Vladimir Alexiev Oct 3 '14 at 10:18
Echoing @AJed's advice, I recommend turning your statement ("I want to learn category theory so I can become better at Haskell") on its head: learn Haskell, building on your programming intuition. Once you are an FP guru, it might be easier to pick up category theory (if you still care).
Category theory is simple for somebody with broad mathematical education (groups, rings, modules, vector spaces, topology etc). Lacking this background, category theory is nearly impenetrable. The beauty of category theory is that it unifies a lot of seemingly unrelated things (e.g. left adjoints of forgetful functors include free groups, universal enveloping algebras, Stone-Cech compactifications, abelianisations of groups, ...), and so reduces complexity. But if you are not familiar with the multiple examples that category theory unifies, category theory is just an additional layer of complexity that makes your life harder.
In my experience, learning is easier by building on things one already knows. As a software developer, you know a lot about programming, and Haskell programming is not that different from other programming, so my recommendation is to approach Haskell from a pragmatic programming point of view, ignoring category theory. The bit of category theory that is in Haskell, e.g. some support for monads, is much easier for a programmer to grasp without taking a detour via category theory. After all, monads are merely generalised composition (and you will have used monads in your programming practice already -- albeit without knowing you did), and Haskell doesn't really support monads for real, as it does not enforce the monadic laws.
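A small sketch of that "generalised composition" point in Haskell (illustrative; the names safeRecip, safeSqrt, and sqrtRecip are invented for the example):

```haskell
import Control.Monad ((>=>))

-- Kleisli arrows a -> m b compose with (>=>) just as ordinary
-- functions compose with (.).
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- sqrt of the reciprocal, failing if either step fails:
sqrtRecip :: Double -> Maybe Double
sqrtRecip = safeRecip >=> safeSqrt

-- The monad laws say exactly that (>=>) is associative with return
-- as its identity; GHC does not check this, so it is a proof
-- obligation left to the programmer.
```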
Martin Berger
No, to be honest Haskell really is that different from most other programming languages, to the point that getting past preconceived notions is often the biggest challenge. Experienced software developers seem to have more trouble than people who've never programmed before. – C. A. McCann Jan 9 '13 at 14:31
@C.A.McCann I agree that some experienced programmers appear to have a hard time moving from e.g. Java or C# to Haskell, but I don't think it's because there's something fundamentally different about Haskell. I think it's in part because it appears to be different. The idea that you need to learn category theory in order to appreciate Haskell has probably prevented quite a few experienced software developers from achieving Haskell mastery. (Cf. why F# doesn't have monads.) I certainly find it hard to think of many Haskell features that don't also have resemblances in other languages. – Martin Berger Jan 9 '13 at 14:54
Knowing Category Theory might help a bit, but not all that much, and learning it is certainly much harder than learning Haskell. There are pretty fundamental differences compared to most languages (purity, non-strict evaluation, the type system), and removing all the CT terms doesn't make these any more familiar. On the other hand, learning Haskell motivates some people to learn some CT, because the ideas borrowed are useful. F#'s limited type system and avoidance of a perfectly good existing term are flaws, not features. – C. A. McCann Jan 9 '13 at 15:02
I don't know of any language other than Scala with a type system really comparable to Haskell's. From empirical observation, purity is not immediately grasped, and non-strict evaluation (which you skipped over) is even harder. Finally, I am a working programmer and I dispute that anyone in the field is going to be intimidated by a name. The software development industry is full of opaque jargon already. Also, F#'s type system cannot express monads directly--computation expressions are not first class, which limits their use significantly. – C. A. McCann Jan 9 '13 at 16:28
CBN is also conceptually easy, for example by analogy with thunking, a concept that most working programmers will have used before. Purity is something that every working programmer understands. Haskell is used in undergraduate education in the UK. When my students ask me how to get into functional programming, I often recommend learning Haskell first, but students are intimidated by its reputation, as was the originator of the question. I believe the main reason for this is Haskell's association with category theory. – Martin Berger Jan 10 '13 at 0:02
A short answer: no [but this is only an opinion]
Don't go to Category Theory or any other theoretical domain to become good in Haskell. Learn functional programming techniques, such as tail recursion, map, reduce, and others. Read as much code as you can. Implement as many ideas as you can. If you have issues, read and read.
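For instance, two of the techniques mentioned above in miniature (an illustrative sketch):

```haskell
-- map and reduce (fold), the bread-and-butter FP techniques:
squares :: [Int] -> [Int]
squares = map (\x -> x * x)

sumSquares :: [Int] -> Int
sumSquares = foldr (+) 0 . squares
```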
If you want a good theoretical reference to learn Haskell and other functional programming paradigms then have a look at: An Introduction to Functional Programming Through Lambda Calculus, Greg Michaelson (available online). ... There are other similar books.
AJed
I raise an eyebrow at this, because "tail recursion" is usually not important to programming in Haskell due to laziness. Nevertheless, "learn by doing" is almost always good advice. – Dan Burton Oct 30 '13 at 16:57
@DanBurton .. interesting observation. Let's say then, instead of Haskell, learn Erlang or Scheme :). [I am not an expert in Haskell, I just picked it because it sounds cool] – AJed Oct 30 '13 at 22:38
Here is a (long) blog post that motivates how category theory ideas are relevant to practical programming: http://cdsmith.wordpress.com/2012/04/18/why-do-monads-matter/
Sampo Smolander
Category theory is a very sophisticated branch of mathematics, and mastering it will unify much of what you have already learned by making it instances of the same abstract objects. In that sense it is very useful and very intuitive. But it is vast and broad, and you will find yourself amid so many new concepts that you won't know which ones suit your needs and which ones to skip. A purposeful approach requires choosing among concepts; otherwise, mastering it inevitably takes a long time, and it is not really a self-study domain.
By the way, I suggest a very good starting point for your purpose here.
shvahabi
This doesn't really answer the question: is it useful for learning functional programming? Which topics in category theory are useful for Haskell? – David Richerby Mar 6 '15 at 22:50
Inter and intra cultural variations of millet (Pennisetum glaucum (L.) R. Br) uses in Niger (West Africa)
Hamadou Moussa (ORCID: orcid.org/0000-0002-4722-5560)1,
Valentin Kindomihou2,
Thierry D. Houehanou3,
Idrissa Soumana4,
Oumarou Souleymane5 &
Mahamadou Chaibou6
The Correction to this article has been published in Journal of Ethnobiology and Ethnomedicine 2019 15:47
An ethnobotanical study was conducted in the eight regions of Niger to identify local variation in knowledge of millet (Pennisetum glaucum (L.) R. Br) uses. In fact, the level of individual knowledge can be affected by many factors such as gender, age, ethnicity, occupation, and religious and cultural beliefs. This study documented indigenous knowledge of millet uses in Niger and aimed specifically to (i) identify the different types of millet organ uses and (ii) assess the variation of local knowledge of millet uses according to ethnicity, occupation, and age.
The data were collected in 32 major millet-producing villages in Niger through individual semi-structured interviews and focus group discussions. A total of 508 individuals from 5 ethnic groups were interviewed. The assessment of knowledge was performed by calculating five ethnobotanical indices: the number of reported uses by parts of the plant (RU), the use-value of the parts of the plant (PPV), the specific use-value (SU), the intraspecific use-value (IUV), and the relative frequency of citation (FRC). Data were analyzed using descriptive, univariate, and multivariate statistical analyses.
The results indicated a significant variation in uses across ethnic groups (H = 38.14, P = 0.000) and socio-occupational categories (H = 6.80, P = 0.033). The Hausa, Kanuri, and Zarma-Sonhrai ethnic groups and the farmers were the largest users of the species. Dietary (51.40%) and forage (40.35%) were the most reported uses. The most commonly used parts of the plant were the stubble (74.92%) and the grains (73.68%).
The study showed the importance of P. glaucum in the daily life of local people. It also confirmed the uneven distribution of indigenous knowledge of millet uses in Niger due to social factors. The challenge now is how to incorporate these social differences in knowledge of millet uses with a view to the sustainable management and conservation of local genetic resources of millet. Finally, this work could be an important decision-making tool for future millet valorization.
Millet (Pennisetum glaucum (L.) R. Br) is a staple food crop in arid and semi-arid areas of Asia and Africa and remains one of the main sources of energy, protein, vitamins, and minerals for millions of the poorest people in these regions. This cereal is generally grown for its grains, used in human and animal diets, and for its stubble, used as fodder and silage [1]. In addition to the dietary and forage uses of millet, different parts of the plant are commonly used for multiple services, including the treatment of various human and animal diseases [2, 3], soil fertilization, and handicrafts [4, 5]. Furthermore, as a result of climate change and population pressure, millet is increasingly being exploited as a forage or dual-purpose crop (grain and fodder) in order to ensure the food security of livestock [6, 7]. This new trend towards valuing millet as animal feed is not without consequences for the food security of local human populations. Therefore, an ethnobotanical study appears to be a good approach in this area to understand the uses as well as the sociocultural and economic perceptions of local populations regarding this crop [8,9,10].
Ethnobotany is a science related to several disciplines such as biodiversity conservation, conservation genetics, ethno-pharmacology, food technology, and ecology [11]. An ethnobotanical assessment of millet is therefore indispensable for its valorization, sustainable management, and conservation. This study documented indigenous knowledge of millet uses by ethnic groups in Niger. Past ethnobotanical studies in the West African Sahel have focused on wild woody and herbaceous plant species [12, 13]. This study, by contrast, focused on a crop, millet, given its importance as a major cereal for humans and as an additional source of forage for animals in Niger. Little work has been conducted on the ethnobotanical uses of millet despite its being considered the staple food crop of local populations in the arid and semi-arid areas of the world [14,15,16]. The objectives of our study were to document the endogenous knowledge of millet uses in Niger and to assess the effects of ethnicity, occupation, and age on botanical knowledge. Indeed, indigenous knowledge is often unevenly distributed across such groups [13]. Moreover, the level of individual knowledge of native plant species can be affected by many factors such as sex, age, ethnicity, occupation, religious and cultural beliefs, and the abundance and usefulness of the species [13, 17]. In addition, research conducted in the West African Sahel reported that the Fulani, Kel Tamashek, Bellah, and Maure groups were the major livestock-rearing groups, while the farmers were mainly from the Bambara, Hausa, Djerma, Gourmantche, Mossi, and Soninke [13, 18]. Nowadays, professional specialization according to ethnic criteria is becoming increasingly blurred in the region [13, 18]. Nevertheless, pastoral groups usually know more about livestock than farmer groups and vice versa. Robert et al. [19] also reported that producers' choice of millet varieties is generally based on agro-morphological, phenological, or organoleptic characteristics. Furthermore, the preservation of the cultural identity of a community requires knowledge to be passed on from generation to generation [13]. Age therefore has an impact on the knowledge of plants within ethnic groups [13, 20]. In this study, we tested three hypotheses. First, ethnicity affects knowledge about the uses of millet organs, so that traditionally farming ethnic groups (Zarma-Sonhrai, Hausa, Kanuri, Gurmantche) tend to know the uses of millet better than traditionally pastoral groups (Fulani, Tuareg, Tubu). Secondly, the socio-professional category also influences knowledge of the uses of millet organs, so that farmers tend to know the uses of millet organs better than herders. And thirdly, there is a positive correlation between knowledge of millet organ use and age, that is, older people are more familiar with millet uses than younger people.
Area of the study
This study was carried out in the eight regions of the Republic of Niger (Fig. 1). Niger is located in West Africa, between latitudes 11° 37′ and 23° 23′ N and longitudes 0° and 16° E, about 700 km north of the Gulf of Guinea, 1,900 km east of the Atlantic coast, and approximately 1,200 km south of the Mediterranean Sea [21]. It covers an area of 1,267,000 km2 and is divided into 8 regions (Fig. 1), 36 provinces, and 265 municipalities (52 urban and 213 rural). Niger is inhabited by eight ethnic groups that are mainly situated in the following regions: Hausa: Maradi, Tahoua, Zinder, and Dosso regions; Zarma-Sonhrai: regions of Tillabéri, Dosso, and Niamey; Tuareg: regions of Agadez and Tahoua; Fulani: regions of Niamey, Dosso, Maradi, Tahoua, Diffa, Tillabéri, and Zinder; Kanuri: regions of Diffa and Zinder; Tubu: regions of Diffa and Zinder; Arabs: regions of Tahoua, Diffa, Agadez, and Zinder; Gurmantche: Tillabéri region [22].
Location of the surveyed villages
The estimated population of Niger is 19,865,068 inhabitants. It is a relatively young population with about 58.4% under 18 years old [22]. Niger's economy is mainly based on farming, trade, and handicrafts. The main cultivated species are cereals (millet, sorghum, rice, maize, fonio) and cash crops (cowpea, nutgrass, groundnut, sesame, sorrel, tiger nut, and cotton) [23].
Livestock is one of Niger's most important assets. The national herd, estimated at 14,467,087 TLU (tropical livestock units; UBT in French) in 2012, is composed of cattle, sheep, goats, camels, horses, and donkeys [24]. The population of Niger is mostly rural (almost 83.8%) and its income derives mainly from the exploitation of natural resources [25]. In almost all regions, farming is the main contributor to household incomes [26].
The terrain is characterized by a large peneplain with an average altitude of 500 m with depressions and elevated points especially in the northern part.
The altitude increases from the south to the north, where the mountainous areas (Aïr, Termit) exceed 900 m. The soil textures range from sandy to clay-sandy, poor in nutrients and organic matter. About 80% of arable soils are dune soils, and 15–20% are moderately clayey and hydromorphic soils [24]. The climate is of a semi-arid tropical type, characterized by two seasons: a dry season from October to May and a rainy season from June to September. During the dry season, the average temperature fluctuates between 18.1 and 33.1 °C. During the rainy season, it varies between 28.1 and 31.7 °C [25].
Sampling and data collection
The data were collected in 32 major millet-producing villages in Niger from January to February 2016. The collection was performed via individual semi-structured interviews and focus group discussions (groups of two to 15 people) in locations selected through stratified sampling. Three levels of stratification were used: socio-cultural or ethnic groups (first level), the main millet-producing provinces (second level), and villages (third level). A total of 32 villages were surveyed on the use of millet. Participants in the surveys were randomly selected based on the methods of Uprety et al. [27]. Interviews were conducted in the most commonly spoken local languages in Niger (Hausa and Zarma), but translators intervened when the interlocutor did not speak either of the two languages. These surveys were supplemented by the collection of seeds from local farmers when they were available.
The assessment of knowledge was conducted by computing ethnobotanical indices of the plant as defined by Gomez-Beloz [28] and used in species-specific studies [9, 29, 30]. A total of five ethnobotanical indices were computed: the reported use (RU), the plant part value (PPV), the specific reported use (SU), the intraspecific use-value (IUV), and the relative citation frequency (FRC).
The reported use (RU) is the total number of uses reported for the plant. It is represented by the number of uses reported for each plant part:
$$ \mathrm{RU}=\sum_{i=1}^{n}{\mathrm{RU}}_{\mathrm{plant\ part}} $$
The plant part value (PPV) is equal to the ratio between the total number of total uses reported for each plant part and the total number of the reported uses for the plant:
$$ \mathrm{PPV}={\mathrm{RU}}_{\mathrm{plant\ part}}/\mathrm{RU} $$
The most often used parts of the species by the respondents from an ethnic group are those having high values of PPV.
The specific reported use (SU) is the use as described by the respondents. It refers to the number of times a specific reported use is mentioned by the respondents from an ethnic group:
$$ \mathrm{SU}=\sum_{i=0}^{n}{c}_i $$
The intraspecific use-value (IUV) is the ratio of the specific reported use to the reported use for the plant part. It helps to identify for a specific plant part, the most reported specific uses by the respondents from an ethnic group:
$$ \mathrm{IUV}={\mathrm{SU}}_{\mathrm{plant\ part}}/{\mathrm{RU}}_{\mathrm{plant\ part}} $$
The relative frequency of citation (FRC) for an organ (or use) was adapted from the formula of Ladoh-Yemeda et al. [31] and is calculated as follows:
$$ \mathrm{FRC}=\frac{N_c}{N_e}\times 100 $$
where \(N_c\) is the number of times a given organ (or use) was cited for a specified purpose and \(N_e\) is the number of respondents having the (social) factor in question.
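To make the computation concrete, here is a purely hypothetical worked example (the figures are invented for illustration and are not survey data): if a given organ were cited by 381 of 508 respondents, then
$$ \mathrm{FRC}=\frac{381}{508}\times 100 = 75.0\% $$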
The Kruskal-Wallis test [32] was performed to test the dependence of the relative frequency of citations on ethnic group, age class, and occupation. The three social factors were combined by defining 36 sub-groups. The relative frequency matrices of the specific uses of the P. glaucum parts were then subjected to a principal component analysis (PCA) with the constituted sub-groups, using the R software [33]. In addition, for the interpretation of a given point (social factor or specific use) on an axis of the PCA, two criteria were retained [34, 35]:
A good contribution (CTR) such as CTR ≥ 100/n (n = number of individuals/variables);
A good quality of representation (COS2) on the axis such as COS2 ≥ 0.3.
Socio-economic profiles of respondents
A total of 508 individuals across 5 ethnic groups were surveyed (Table 1). Respondents were grouped by ethnic group, age group, and socio-occupational category. Hence, six sub-groups were defined for each ethnic group: young (Je), adult (Ad), old (Vx), farmers (Ag), farmer-pastoralists (Aél), and Fact (Fonctionnaires-artisans-commerçants-transporteurs in French; civil servants-craftsmen-traders-transporters in English). Similarly, three sub-groups were defined for each socio-occupational category: young (Je), adult (Ad), and old (Vx). Thus, 39 sub-groups (5 ethnic groups × 6 sub-groups + 3 socio-occupational categories × 3 sub-groups) were expected, but due to the absence of certain sub-groups, only 36 were taken into account (Tables 1, 2, and 3).
Table 1 Ethnic and age groups samples
Table 2 Ethnic group and socio-occupational category samples
Table 3 Age group and socio-occupational category samples
Types of use
Multiple parts of P. glaucum were used for various purposes by the different ethnic groups in Niger. Seven types of use were recorded (Fig. 2): dietary, therapeutic, technological, socio-cultural, domestic, religious, and forage. Dietary use was the most reported (51.40%), followed by forage use (40.35%), while therapeutic use was the least cited (1.69%).
Relative citation frequencies of the millet different uses
All the millet parts were used, from the leaves to the roots, with 10 different parts reported (Fig. 3). The most reported parts were the stubble and the grains, with relative citation frequencies of 74.92% and 73.68% respectively. The axillary buds and the flowers were the least cited parts, with relative citation frequencies of 0.11% and 0.08% respectively.
Relative citation frequencies of the different parts used of the millet
Millet use variation based on social factors
A significant difference existed between the Hausa, Kanuri, Fulani (Peulh), Tuareg, and Zarma-Sonhrai (H = 38.14, P = 0.000). A significant difference was also observed between farmers, agro-herders, and Fact (H = 6.80, P = 0.033) in terms of the use of P. glaucum parts. However, there was no significant difference between adults, young people, and the old (H = 2.82, P = 0.244) in the use of P. glaucum parts.
Use variation based on ethnic and age groups
The PCA showed that the first three axes explained 62.8% of the variation observed among the various forms of use of the species (Fig. 4). The specific uses of P. glaucum were known by all ethnic groups. Nevertheless, the relative citation frequencies of P. glaucum uses varied significantly from one sub-group to another based on the combined factor "ethnic group-age group" (H = 37.86, P = 0.001). Axis 1 contrasted adult and old Hausa with adult Zarma-Sonhrai. The first group was known for the use of panicles as donations to parents (0.062 ≤ IUV ≤ 0.115), the consumption of grains processed into traditional foods such as labdourou (dônou that has not been baked), and the use of panicles to "accompany" primiparous women on maternity leave.
Factorial map of the PCA describing the relationships between the specific uses of millet and the age-ethnic group factor. Alivo = travel food; Allfe = fire lighter; Attbo = Attache boot; Boira = refreshing drink; CenAb = ash for watering cattle; Cenpa = ash for wound dressing; CenSa = ash sauce; CenSo = ash for soumbala; ChaCe = ash stubble for cooking; ChaCo = stubble as compost; ChaFe = stubble as fertilizer; ChaFo = forage stubble; Clô = closing; Colin = guest snack; Com = fuel; Conbr = manufacture brick; Conre = ash for meal conservation; Con = construction; Deg = Dégué; Déspa = parcel desalinization; Dîmco = customary tithe; Disal = food discrimination according to sex; Don = Donu; Enc = Enclos; EpaPa = thickener paste; FabOr = manufacture oreillets; FarDo = flour doum; FeuFo = fodder sheets; Filcu = culinary filter; Forfe = fortifying for breastfeeding woman; Gal = galette; GluAl = glumes feed cattle; GluCa = glumes carbonization wood charcoal; GluCo = glumes compost; GluFe = fertilizer glumes; GluPo = glumes pottery; GraAl = grains livestock feed; GraAu = grains aumone; GraBi = bita grains; GraBo = grains for porridge; GraCa = grains for engagement gifts; GraCh = grains for charity; GraCo = couscous grains; GraDo = Grains for late harvest donation ; GraEn = grains for social assistance; GraMa = grains for gift to marabouts; GraZa = grains for zakkat; Gre = grenier; Han = hangar; Jeuma = provision for bride; Grains; Lab = labdourou; Bed = beds; May = house; Malar =clay mixing; PanAl = panicle for livestock feed; Panfr = fresh pan for grillade; Pansè = dry pan for grilling; PanAu = Panicles for Alms; PanCh = panicles for charity; PanDo =panicles for donation in late harvest; PanEn = panicles for social assistance; PanMa = panicles for donation to marabouts; PanPa = panicles for donation to parents; PanPr = panicles for provision for primiparous women; PanZa = panicles for zakkat; Pât = paste; Por = portal; Pou = henhouse; Prife = grains for provision for primiparous woman; RacAl = spoiled for livestock feed; Rack = spit in ash for cooking; Sal = sala; Savmé = medical soap; Savno = black soap; Sék = Sékos; Sôk = Sôkou; SonAb = sound for livestock watering; SonAl = sound for livestock feed; SonBo = sound for boiled; SonCo = sound for couscous; SosKo = Sosso Komandi; Sou = souroundou; SubNa = ash as a substitute for natron; Was = Wassalé; Zor = Zori. HsAd = hausa adult; HsJe = hausa youth; HsVx = hausa old; KnAd = adult kanuri; KnJe = young kanuri; KnVx = old kanuri; PlhAd = adult fulani; PlhJe = young fulani; PlhVx = old fulani; TrgAd = adult Tuareg; TrgJe = young tuareg; TrVx = old tuareg; ZmAd = adult zarma-sonhrai; ZmJe = zarma-sonhrai young; ZmVx = old zarma-sonhrai
The use of ash (from stubble) in the manufacture of black soap was also noticed among old and adult Hausa. The Zarma-Sonhrai adults were characterized by the use of stubble ash in livestock watering (IUV = 0.775) or as medical soap, and by the processing of grains into special dishes such as sosso komandi (natron porridge), bita (a kind of very thick porridge), or souroundou (the equivalent of a millet-type rice dish). Also worth noting among Zarma-Sonhrai adults was the use of glumes in pottery, in the composting of agricultural residues, in the carbonization of wood into charcoal, in the mixing of building clay (IUV = 0.382), and in manufacturing pillows. The Zarma-Sonhrai adults additionally offered panicles to religious leaders and used the bran (IUV = 0.796) and zori (the liquid from the washing of milled cereal grains) in animal feeding. Axis 2 compared elderly Fulani and elderly Zarma-Sonhrai. Elderly Fulani mainly used the stubble and the millet bran: the stubble was used for fencing, while the bran was used in animal watering (IUV = 0.3). The elderly Zarma-Sonhrai used all the parts of millet. Thus, the grains were processed into fortifying diets for lactating women or into wassalé (a kind of semolina grilled with butter or oil). Elderly Zarma-Sonhrai also used the grain as gifts to religious leaders to pay off the zakat (alms given at the end of the month of Ramadan) or socially as a way of mutual aid (IUV = 0.108). Elderly Zarma-Sonhrai also used the stubble in building houses (IUV = 0.130) or to make cooking fires. The use of stubble ashes was mentioned by this category of people to give a special taste to sauces, to accelerate cooking, or to make soumbala (a mustard made from sorrel grains). The elderly Zarma-Sonhrai finally made use of stubble as fodder and of panicles to accompany primiparous women on maternity leave in their families during the usual 40 days. Axis 3 contrasted elderly Fulani with adult Kanuri and young and adult Tuareg. Four millet parts, the grains, the panicles, the stubble, and the bran, were used by Fulani adults. The stubble was used to make fire, beds, or medicine from its ashes (medical soap and sticking-plaster), while the bran was used as a drink for animals (IUV = 0.745). The panicles were used to give out zakat (compulsory alms given at the end of the harvest), for alms, or as gifts to relatives. The grains were processed into a local beverage, a mixture of millet flour balls dissolved in milk or yogurt. This mixture is very popular with all ethnic groups in Niger, and its name varies from one group to another: it is called dônou by the Zarma-Sonhrai, furah by the Hausa, chobbal by the Fulani, and tidda by the Tuareg. Kanuri adults and young and adult Tuareg were characterized by the exclusive use of millet grains in human and animal diets. The grains were essentially processed into porridge or paste during socio-cultural ceremonies, or used as simple fodder, alone or mixed with other animal feeds.
Use variation based on the occupation and ethnic group
The PCA revealed that the first three axes explained 67.44% of the variance observed between the different types of use of the species (Fig. 5). The specific uses of P. glaucum were known by all the socio-occupational categories. Nevertheless, the relative citation frequencies of P. glaucum uses varied significantly from one sub-group to another according to the combined factor "ethnic group-occupational category" (H = 42.92, P = 0.000). Axis 1 singled out Hausa farmers, who were characterized by the use of stubble as fertilizer in fields, the use of millet bran to thicken paste, the processing of millet grain into local foods such as chokkou (or sokou), dèguè, and sâlâ (a variant of millet cake) or into presents given to relatives and neighbors, as well as the giving of panicles as simple presents or as a way of mutual aid in the society.
Factorial maps of the PCA describing the relationships between the specific uses of millet and the occupation-ethnic group factor. Note: HsAg = hausa farmer; HsAel = hausa agro-herder; HsFact = hausa others; KnAg = kanuri farmer; KnAel = kanuri agro-herder; KnFact = kanuri others; PlhAg = fulani farmer; PlhAel = fulani agro-herder; PlhFact = fulani others; TrgAg = tuareg farmer; ZmAg = zarma-sonhrai farmer; ZmAel = zarma-sonhrai agro-herder; ZmFact = zarma-sonhrai others
Axis 2 isolated the Zarma-Sonhrai characterized by the use of grain-derived foods, panicles, stubble (ash substituting natron), bran (refreshing drink), and rachis (cattle feed (IUV = 1)). Axis 3 contrasted the Fulani farmers with the Hausa farmers and herders and Kanuri Fact. The Fulani farmers used the grains as labdourou, panicles as cattle feed or to give out zakat and stubble as fence. The Hausa farmers and herders and Kanuri Fact used the glumes as cattle feed (0.5 ≤ IUV ≤ 0.6) or to make bricks (0.2 ≤ IUV ≤ 0.5). The Hausa farmers and herders and Kanuri Fact also used the stubble for the construction of beds and houses, the grains to make porridge and actions of solidarity. Finally, they ate the fresh panicles grilled on embers.
Use variation based on occupation and age group
The PCA indicated that the first three axes explained 62.21% of the total variance observed between the different types of use of the species (Fig. 6).
Factorial maps of the PCA describing the relationships between the specific uses of millet and the occupation-age factor. Note: AgAd = adult farmer; AgJe = young farmer; AgVx = old farmer; AelAd = agro-herder adult; AelJe = young Agro-herder; AelVx = Agro-herder old; FactAd = other adults; FactJe = other young people; FactVx = other old people
All age groups knew the specific uses of P. glaucum. Nevertheless, the relative citation frequencies of P. glaucum uses varied significantly from one subgroup to another along the combined factor "age group-socio-occupational category" (H = 24.83, P = 0.001). Axis 1 isolated the group of farmers (all ages), who essentially used five millet parts. The grains were processed into various foods: snacks for visitors, donations as mutual aid (0.107 ≤ IUV ≤ 0.141), gifts to brides, and dishes such as souroundou or dèguè (made with millet couscous and yogurt or curdled milk). The bran was processed into a refreshing drink, used as a dough thickener, or made into black couscous. The stubble was used to light fires or incinerated for ash used in cooking, as sticking-plaster for wounds, as an ingredient in sauces, or for making medical soap. The panicles were used as cattle feed and in the payment of the customary tithe to landowners. Finally, panicles were valued by farmers as animal feed. Axis 2 compared adult herders and adult farmers to the old Fact. The adult herders and adult farmers were more interested in using millet grains to give alms. The old Fact were more interested in the use of leaves as fodder (IUV = 1), of stubble in various constructions, as fuel, or as fodder, and of grains as porridge or couscous. Axis 3 isolated the young Fact; this group was interested in the use of stubble in domestic work (making thatch) and of panicles in charitable actions. This group was, however, by no means uninterested in dishes made from processed millet grains.
This study revealed that the level of use of the millet parts varies depending on ethnicity and occupation. Previous studies conducted on cassava varietal diversity in the northwest Amazon area in Brazil also reported a strong correlation between varietal diversity and cultural identity within local ethnic groups [36]. Results showed that the ethnic groups that most use the P. glaucum organs in Niger were the Hausa and the Zarma-Sonhrai. The Nigerien ethnic groups such as Zarma-Sonhrai, Hausa, Kanuri, and Gourmantche are essentially farmers; consequently, farming is their main activity. On the other hand, ethnic groups such as the Fulani, Tuareg, Tubu, and Arabs are devoted almost exclusively to breeding and are therefore traditionally considered "pastoralists" or "nomads" [37]. Therefore, it is quite normal that the Zarma-Sonhrai and the Hausa appeared as the greatest users of P. glaucum organs in this study. These results confirm the differentiation of knowledge along ethnic lines in our study. These results are very close to those of Jika et al. [38], who found that millet has a higher symbolic value in the rural communities of Zarma-Sonhrai, Hausa, and Kanuri, which represents a strong social barrier to the dissemination of seeds between these ethnolinguistic groups [38]. Furthermore, similar studies conducted on other species with important socio-economic value at a regional scale confirm these observations [29, 30]. Ethnicity, therefore, remains one of the major factors of difference in the use and knowledge of plants among communities [20].
A significant difference in the level of knowledge of millet organ uses was revealed among socio-occupational categories. In fact, farmers knew more about growing and conserving the plant because of their close dependence on it as a food crop and for its other related uses. This explains the particularity of farmers in the abundant and diversified uses of the different organs of P. glaucum, in contrast with herders and agro-pastoralists, who use it little and for specific purposes. These results confirm the assumption that knowledge depends on the socio-professional category. They corroborate the findings of Jika et al. [38], who found a strong attachment among Sahelian farmers to their own local varieties of millet in the western Lake Chad area. According to the same authors, the attachment of farmers to certain millet varieties can be linked not only to symbolic and aesthetic considerations but also to the way in which these varieties match the different expected uses [38]. In addition, Robert et al. [39] reported from southern Niger the farmers' preference for growing their own local varieties because of their adaptation to their cropping systems. For example, seed of local varieties acquired from outside sources (NGOs; markets) is mainly consumed but rarely sown [39]. Moreover, our results revealed that the elders of the Fact group showed a particular interest in the use of leaves and stubble of P. glaucum as fodder. This behavior is explained primarily by their status, which allows them to pursue other income-generating activities such as cattle breeding. Most of these actors live in urban areas, where animal feeding costs are the highest [40]. But it turns out that millet-based forage is one of the most economically accessible forages for farmers [41]. This could well justify the special interest shown by the old Fact in millet fodder.
No significant difference was observed in the use of the organs of P. glaucum according to age. Our third hypothesis is therefore not completely verified in this study. Nevertheless, significant differences were observed in the uses of P. glaucum parts when the age factor was associated with other factors such as the ethnicity or occupation of the respondent. In other words, there was no variation in the use of the organs of P. glaucum between young people, adults, and the old when the age factor was taken alone. However, variations in the use of P. glaucum organs were observed when the analysis was performed with combined factors: age-ethnicity and age-occupation. Thus, we observed that young Tuareg, young farmers, and young Fact also used P. glaucum organs. Young farmers' knowledge of millet use is naturally inherited from their parents. Indeed, some authors support the idea that knowledge is transmitted from one generation to another within the same ethnic group [20]. As far as the young Fact are concerned, their knowledge of the use of millet resulted from their greater consumption of new millet products coming from agri-food industry technologies. Indeed, it is nowadays easy to find on supermarket shelves in urban centers of Niger various local millet grain-based products, i.e., dèguè, lumps, oilcakes, and enriched powder, developed by local farmer organizations. Similarly, millet fodder is processed into animal feed products with the advent of new grinding and chopping machines in the Sahel [5]. These results are confirmed by Kébenzikato et al. [20], who found that people over 75 years old had a greater knowledge of the uses of Adansonia digitata in Togo. Likewise, Ayantunde et al. [13] showed that the age group above 50 years old knew more than that between 25 and 50 years old.
This study highlighted 10 different used parts of P. glaucum, which were identified and used differently by five ethnic communities in Niger. The uses of the grains and panicles of this cereal are very common, and these products are consumed by all surveyed ethnic groups. The Hausa, Kanuri and Zarma-Sonhrai ethnic groups, together with farmers, are the largest users of the species, while the elderly Fact group made the greatest use of millet stubble and leaves as fodder. This ethnobotanical survey, based on individual interviews and focus groups, revealed the importance of P. glaucum in the life of local people. A method that solicits the memory of respondents can obviously introduce bias related to the personal assessment of the respondent; however, it is widely used in ethnobotany and usually gives rather conclusive results. Results from this study confirmed the uneven distribution of indigenous knowledge of millet uses in Niger due to social factors. The challenge is how to incorporate these social differences in knowledge of millet uses into the sustainable management and conservation of local millet genetic resources. As the uses of millet organs are poorly documented in Niger, this study provides a broad overview of the uses made of millet organs according to ethnic groups and socio-professional categories. This work could therefore be an important decision-making tool for future studies on millet valorization as a forage or dual-purpose crop. Moreover, the study gives some insights into the importance of biocultural diversity conservation in Niger: because knowledge varies among ethnic groups, the culture of those groups must receive due consideration in conservation efforts.
The datasets used and/or analyzed in the current study are available from the corresponding author on reasonable request.
Please note that following publication of the original article [1], Figs. 4, 5 and 6 in the article have been updated to remove oblique lines that were erroneously rendered in the figures.
FRC: Relative citation frequency
IUV: Intraspecific use value
PCA: Principal component analysis
PPV: Plant part value
RU: Reported use
SRU: Specific reported use
Dahlberg J, Berenji J, Sikora V, Latković D. Assessing sorghum [Sorghum bicolor (L) Moench] germplasm for new traits: food, fuels and unique uses. Maydica. 2010;56(2):56–1750.
Tamboura H, Kaboré H, Yaméogo SM. Ethnomédecine vétérinaire et pharmacopée traditionnelle dans le plateau central du Burkina Faso: cas de la province du Passoré. Biotechnologie, Agronomie, Société et Environnement. 1998;2(3):181–91.
Barkiyou M. Contribution à l'étude de l'effet thérapeutique du mil à chandelle «Pennisetum glaucum L.» dans la fragilité osseuse chez le rat wistar [Thèse de Doctorat]. Maroc: Université Mohammed V-Rabat; 2017.
Amadou I, Gounga ME, Le G-W. Millets: Nutritional composition, some health benefits and processing-A review. Emirates Journal of Food and Agriculture. 2013;25(7):501–8.
Moussa H, Soumana I, Chaïbou M, Souleymane O, Kindomihou VK. Potentialités fourragères du mil (Pennisetum glaucum (L.) R. Br): Revue de littérature. Journal of Animal & Plant Sciences. 2017;34(2):5424–47.
Vall E, Andrieu N, Dugué P, Richard D, Tou Z, Diallo MA. Evolutions des pratiques agropastorales et changements climatiques en zone soudano-sahélienne d'Afrique de l'Ouest: proposition d'un modèle conceptuel de l'interaction climat-écosystèmes de production agropastoraux. Niamey: Atelier sous régional : « changements climatiques et interactions élevage environnement en Afrique de l'Ouest », 11-15 février 2008; 2008. p. 15.
Hiernaux P, Diawara M, Gangneron F. Quelle accessibilité aux ressources pastorales du Sahel. Afrique Contemporaine. 2014;1:21–35.
Dossou M, Houessou G, Lougbégnon O, Tenté A, Codjia J. Etude ethnobotanique des ressources forestières ligneuses de la forêt marécageuse d'Agonvè et terroirs connexes au Bénin. Tropicultura. 2012;30(1):41–8.
Wédjangnon A, Houètchégnon T, Ouinsavi C. Caractéristiques ethnobotaniques et importance socio-culturelle de Mansonia altissima A. Chev. au Bénin, Afrique de l'Ouest. Journal of Animal & Plant Sciences. 2016;29(3):4678–90.
Houehanou TD, Assogbadjo AE, Kakaï RG, Houinato M, Sinsin B. Valuation of local preferred uses and traditional ecological knowledge in relation to three multipurpose tree species in Benin (West Africa). Forest Policy and Economics. 2011;13(7):554–62.
Houehanou D, Assogbadjo A, Chadare F, Zanvo S, Sinsin B. Approches méthodologiques synthétiques des études d'éthnobotaniques quantitatives en milieu tropical. Annales des Sciences Agronomiques. 2016:187–205.
Sow M, Anderson J. Perceptions and classification of woodland by Malinké villagers near Bamako, Mali. UNASYLVA-FAO 1996:22-27.
Ayantunde AA, Briejer M, Hiernaux P, Udo HM, Tabo R. Botanical knowledge and its differentiation by age, gender and ethnicity in Southwestern Niger. Human Ecology. 2008;36(6):881–9.
Elfadil M, Abdelbagi M, Adam M, Ismael M, Parzies H, Haussmann B. Patterns of pearl millet genotype-by-environment interaction for yield performance and grain iron (Fe) and zinc (Zn) concentrations in Sudan. Field Crops Research. 2014;166:82–91.
Pucher A, Høgh-Jensen H, Gondah J, Hash CT, Haussmann BI. Micronutrient density and stability in West African pearl millet-potential for biofortification. Crop Science. 2014;54(4):1709–20.
Loumerem M, Van Damme P, Kourchani T, Reheul D, Behaeghe T. Etudes des composantes du rendement et la qualité nutritionnelle du fourrage de quelques lignées de mil (Pennisetum glaucum (L.) R. Br). des zones arides en Tunisie. Afrika Focus. 2016;29(1):68–84.
Salako KV, Moreira F, Gbedomon RC, Tovissodé F, Assogbadjo AE, Kakaï RLG. Traditional knowledge and cultural importance of Borassus aethiopum Mart. in Benin: interacting effects of socio-demographic attributes and multi-scale abundance. Journal of Ethnobiology and Ethnomedicine. 2018;14(1):36.
Turner MD, Hiernaux P. The use of herders' accounts to map livestock activities across agropastoral landscapes in Semi-Arid Africa. Landscape Ecology. 2002;17(5):367–85.
Robert T, Luxereau A, Joly H, Diarra M, Benoit L, Dussert Y, et al. Frontières des hommes et échanges des plantes cultivées. Les Cahiers d'Outre-Mer Revue de Géographie de Bordeaux. 2014;67(265):19–42.
Kébenzikato AB, Wala K, Atakpama W, Dimobé K, Dourma M, Woégan AY, et al. Connaissances ethnobotaniques du baobab (Adansonia digitata L.) au Togo. Biotechnologie, Agronomie, Société et Environnement. 2015;19(3):247–61.
Sidikou HA, Saidou A, Ingay I, Sabou I, Aladou S. Etude de bilan de la mise en œuvre de la politique foncière rurale au Niger. 2nd ed. République du Niger: Esquisse de feuille de route pour une politique foncière rurale au Niger (version finale); 2013. p. 15.
INS. Institut National de la Statistique du Niger (INS). République du Niger: Tableau de bord social; 2016. p. 117.
SDR. Stratégie de développement rural. Le secteur rural, principal moteur de la croissance économique. 2003.
HCI3N. Haut Commissariat à l'Initiative 3N : « les Nigériens Nourrissent les Nigériens ». Plan d'investissemement 2012-2015. République du Niger. 2012. p. 80. http://extwprlegs1.fao.org/docs/pdf/ner145888.pdf. Accessed 15 Jan 2019.
PANA. Programme d'Action National pour l'Adaptation aux changements climatiques (PANA). République du Niger. 2006. p. 90. https://unfccc.int/resource/docs/napa/ner01f.pdf. Accessed 15 Jan 2019.
Maman I. Etude intégrée de la Résilience des Systèmes Sociaux de la Limite Nord des Cultures Pluviales Dans le Département de Goudoumaria Face au Changement Climatique [Thèse de Doctorat]. Niger: Université Abdou Moumouni de Niamey; 2013.
Uprety Y, Poudel RC, Shrestha KK, Rajbhandary S, Tiwari NN, Shrestha UB, et al. Diversity of use and local knowledge of wild edible plant resources in Nepal. Journal of Ethnobiology and Ethnomedicine. 2012;8(1):16.
Gomez-Beloz A. Plant use knowledge of the Winikina Warao: the case for questionnaires in ethnobotany. Economic Botany. 2002;56(3):231–41.
Atakpama W, Batawila K, Gnamkoulaba A, Akpagana K. Quantitative approach of Sterculia setigera Del.(Sterculiaceae) ethnobotanical uses among rural communities in Togo (West Africa). Ethnobotany Research and Applications. 2015;14:63–80.
Rabiou H, Bationo BA, Adjanou K, Kokutse AD, Mahamane A, Kokou K. Perception paysanne et importance socioculturelle et ethnobotanique de Pterocarpus erinaceus au Burkina Faso et au Niger. Afrique SCIENCE. 2017;13(5):43–60.
Ladoh-Yemeda C, Vandi T, Dibong S, Mpondo EM, Wansi J, Betti J, et al. Étude ethnobotanique des plantes médicinales commercialisées dans les marchés de la ville de Douala, Cameroun. Journal of Applied Biosciences. 2016;99(1):9450–66.
Höft M, Barik S, Lykke A. Quantitative ethnobotany. Applications of multivariate and statistical analyses in ethnobotany. People and Plants working paper. 1999;6:1–49.
R Development Core Team. R: A language and environment for statistical computing. 3.15 ed. Vienna: R Foundation for Statistical Computing; 2018.
Dalalyan A. Statistique Numérique et Analyse des Données. Ecole des Ponts, Paris Tech 2011. Available from: certis.enpc.fr/~dalalyan/Download/Poly2.pdf.
Roche A. Analyse de données – Partie II : Mise en œuvre de l'ACP [Cours d'Enseignement]: Ceremade; 2018. Available from: https://www.ceremade.dauphine.fr/~roche/Enseignement/ADD/ADD_Cours2.pdf.
Emperaire L, Peroni N. Traditional management of agrobiodiversity in Brazil: a case study of manioc. Human Ecology. 2007;35(6):761–8.
Rhissa Z. Revue du secteur de l'élevage au Niger. République du Niger: Ministère de l'Elevage, des Pêches et des Industrie Animales; 2010.
Jika AN, Dussert Y, Raimond C, Garine E, Luxereau A, Takvorian N, et al. Unexpected pattern of pearl millet genetic diversity among ethno-linguistic groups in the Lake Chad Basin. Heredity. 2017;118(5):491.
Robert T, Mariac C, Allinne C, Ali K, Beidari Y, Bezançon G, et al. Gestion des semences et dynamiques des introgressions entre variétés cultivées et entre formes domestiques et spontanées des mils (Pennisetum glaucum spp. glaucum) au Sud-Niger. 2005.
Ali L, Van Den Bossche P, Thys E. Enjeux et contraintes de l'élevage urbain et périurbain des petits ruminants à Maradi au Niger: quel avenir? Revue Élev Méd vét Pays trop. 2003;56(1-2):73–82.
Abdou MM, Issa S, Gomma AD, Sawadogo GJ. Analyse technico-économique des Aliments densifiés sur les performances de croissances des boucs roux de Maradi au Niger. International Journal of Biological and Chemical Sciences. 2017;11(1):280–92.
The authors acknowledge local authorities, technical services, populations, and producers for their help during data collection.
This research was supported by the West African Agriculture Productivity Program (WAAPP-Niger) through a doctoral scholarship awarded to HM.
Institut National de la Recherche Agronomique du Niger, BP 429, Niamey, Niger
Hamadou Moussa
Laboratoire d'Ecologie Appliquée, Facultés des Sciences Agronomiques, Université d'Abomey-Calavi, 01 BP 526, Cotonou, Benin
Valentin Kindomihou
Laboratoire d'Ecologie, de Botanique et de Biologie Végétale, Faculté d'Agronomie, Université de Parakou, 03 BP 125, Parakou, Benin
Thierry D. Houehanou
Idrissa Soumana
Oumarou Souleymane
Département des Productions Animales, Faculté d'Agronomie, Université Abdou Moumouni de Niamey, BP 10 960, Niamey, Niger
Mahamadou Chaibou
HM conceived the work with advice from TDH, VK, and MC. HM collected the data. HM processed the data with contributions from TDH, OS, and IS. HM drafted the manuscript with contributions from TDH. VK, TDH, IS, and OS corrected the manuscript. All authors read and approved the final manuscript.
Correspondence to Hamadou Moussa.
A verbal agreement was obtained from traditional and local authorities, and the population at large prior to administering the questionnaires. The presentation of the study objectives made this easier.
Moussa, H., Kindomihou, V., Houehanou, T.D. et al. Inter and intra cultural variations of millet (Pennisetum glaucum (L.) R. Br) uses in Niger (West Africa). J Ethnobiology Ethnomedicine 15, 37 (2019) doi:10.1186/s13002-019-0321-4
Accepted: 28 July 2019
Pennisetum glaucum | CommonCrawl |
Accused of her rape, field hand Genus Jackson is lynched and dragged behind a truck down the Twelve-Mile Straight, the road to the nearby town. A story of prejudice in 1930s America — I know that sounds like an awful thing to like, but prejudice was at the heart of this story and Eleanor Henderson handled it beautifully.
At the very start of the book, Genus Jackson was lynched because he was believed to have fathered a child with a white woman. The lynching was brutal, with many people in the community involved. Prejudice almost became its own character throughout her novel. Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread Jesup — Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread loved this character. Elma was a girl with the brains to do more. Sadly, as the daughter of a sharecropper she had few options. After finding herself pregnant, Elma was abandoned by Freddie, and left on the Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread with a father Odysseus The King Of Ithaca Analysis both feared and loved.
Elma Jesup — I loved this character. Elma was a girl with the brains to do more. Sadly, as the daughter of a sharecropper she had few options. After finding herself pregnant, Elma was abandoned by Freddie, and left on the farm with a father she both feared and loved.
It was in no way a conventional family, and in fact it was a family where Nursing Career Essay ran rampant. The Jesup family was dominated by Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread who taught the others John Lewis Marketing Strategy fear him even as he loved them.
It was twisted. I felt Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread the book moved around in Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread too often, and sometimes without any real need to go back, stages of psychosexual development leads me to…. Too Many Characters with Too Many Backstories — I could have done without the backstories of many of the characters, and Advance Practice Nurses: A Case Study fact, some of the characters themselves.
Of course, I want to know as much as possible about the central characters, but grew tired What Is The Significance Of Prohibition In The 1920s hearing about MSTT And Lemuel Case Summary who played a lesser Short Summary: The African Lion. I felt there could have been a little more editing to make a Nursing Career Essay, and The Turn Of The Screw Character Analysis better, page book.
I thought that The Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread Straight was a good book, even a very good book, but for me it tried a little too hard. In those slower parts, I found myself drifting. However, had I been in a calmer place in my life, I may have felt differently. Grade: B. Note: I received a copy of this 2.3 Bone Detectives from the publisher via Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread in exchange for my honest review.
Disclosure : There are Amazon Associate Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread included within this post. Too Racism In The Good Earth characters with too many backstories was the lynchpin of why I had trouble with this one. Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread that the backstories went on for so long. Yes, I did like the others better. It left me broken. Twelve-Mile would have been better about pages Essay On Royal Intermarriage Your email address will not be published.
Notify me via e-mail if anyone answers my comment. This Frankenstein And Paracelsus Comparison uses Akismet to reduce spam. Learn how your comment data is processed. September 20, This post may Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread Amazon links. As an Amazon Andrew Jackson Executive Power I earn from qualifying purchases. Comparing The Neasmiths At The Twelve Mile And Ten Mile Bread Too many characters with too many backstories was the lynchpin of why I had trouble with this one.
Edits were needed. Leave a Reply Cancel reply Your email address will not be published. | CommonCrawl |
BMC Complementary Medicine and Therapies
Croton gratissimus leaf extracts inhibit cancer cell growth by inducing caspase 3/7 activation with additional anti-inflammatory and antioxidant activities
Emmanuel Mfotie Njoya ORCID: orcid.org/0000-0003-1163-72021,2,
Jacobus N. Eloff1 &
Lyndy J. McGaw1
BMC Complementary and Alternative Medicine volume 18, Article number: 305 (2018)
Croton species (Euphorbiaceae) are distributed in different parts of the world, and are used in traditional medicine to treat various ailments including cancer, inflammation, parasitic infections and oxidative stress related diseases. The present study aimed to evaluate the antioxidant, anti-inflammatory and cytotoxic properties of different extracts from three Croton species.
Acetone, ethanol and water leaf extracts from C. gratissimus, C. pseudopulchellus, and C. sylvaticus were tested for their free radical scavenging activity. Anti-inflammatory activity was determined via the nitric oxide (NO) inhibitory assay on lipopolysaccharide (LPS)-stimulated RAW 264.7 macrophages and via 15-lipoxygenase inhibition, assessed using the ferrous oxidation-xylenol orange (FOX) method. The cytotoxicity of the extracts was determined on four cancer cell lines (A549, Caco-2, HeLa, MCF-7) and on non-cancerous African green monkey (Vero) kidney cells using the tetrazolium-based colorimetric (MTT) assay. The potential mechanism of action of the active extracts was explored by quantifying caspase-3/-7 activity with the Caspase-Glo® 3/7 assay kit (Promega).
The acetone and ethanol leaf extracts of C. pseudopulchellus and C. sylvaticus were highly cytotoxic to the non-cancerous cells, with LC50 varying between 7.86 and 48.19 μg/mL. In contrast, the acetone and ethanol extracts of C. gratissimus were less cytotoxic to non-cancerous cells and more selective, with LC50 varying between 152.30 and 462.88 μg/mL and selectivity index (SI) values ranging between 1.56 and 11.64. Regarding the anti-inflammatory activity, the acetone leaf extract of C. pseudopulchellus had the highest NO inhibitory potency with an IC50 of 34.64 μg/mL, while the ethanol leaf extract of the same plant was very active against 15-lipoxygenase with an IC50 of 0.57 μg/mL. A weak linear correlation (r < 0.5) was found between the phytochemical contents, antioxidant, anti-inflammatory and cytotoxic activities of the active extracts. These extracts differentially induced the activation of caspase-3 and -7 in all four cancer cell lines, with the highest induction (1.83-fold change) obtained on HeLa cells with the acetone leaf extract of C. gratissimus.
Based on their selective toxicity and their good antioxidant and anti-inflammatory activities, the acetone and ethanol leaf extracts of C. gratissimus represent promising alternative sources of compounds against cancer and other oxidative stress-related diseases.
Oxidative stress results from an imbalance between the production of free radicals and the ability of the body to counteract or detoxify their harmful effects through neutralization by antioxidants [1]. The free radical theory of aging developed by Denham Harman is based on the concept that damage accumulates throughout the entire lifespan and causes age-dependent disorders including diabetes, atherosclerosis, neurodegenerative diseases and cancer [2, 3]. Cancer development is characterized by redox imbalance with a shift towards oxidative conditions. Free radicals can bind through electron pairing to macromolecules such as proteins, phospholipids and DNA in normal cells, causing protein and DNA damage along with lipid peroxidation [1]. Consequently, the accumulation of these cellular disorders may cause mutations and various disturbances in cell metabolism, which can result in deregulated cell growth and, finally, carcinoma [4]. Antioxidants are helpful in reducing and preventing damage caused by free radicals because of their ability to donate electrons, which neutralizes the radicals without generating new ones. This property has led to the hypothesis that antioxidants, by decreasing the level of free radicals, might lessen the radical damage that causes chronic diseases, aging and cancer. Antioxidant phytochemicals found in vegetables, fruits and medicinal plants have been reported to be responsible for health benefits such as the prevention and treatment of chronic diseases caused by oxidative stress [5]. Many antioxidant phytochemicals have been associated with anti-cancer activities, including curcumin from turmeric, genistein from soybean, tea polyphenols from green tea, resveratrol from grapes, sulforaphane from broccoli, isothiocyanates from cruciferous vegetables, silymarin from milk thistle, diallyl sulfide from garlic, lycopene from tomato, rosmarinic acid from rosemary, apigenin from parsley, and gingerol from ginger [6].
Over the last two decades, it has become clear that oxidative stress can lead to chronic inflammation, which in turn may mediate most chronic diseases including cancer. Chronic inflammation is usually associated with an increased risk of several human cancers [7]. Indeed, the relationship between inflammation and cancer has been suggested by epidemiological and experimental data and confirmed by the finding that anti-inflammatory therapies are also effective in cancer prevention and treatment [8, 9].
The genus Croton belongs to the family Euphorbiaceae, and is a diverse and complex group of plants ranging from herbs and shrubs to trees. Croton species can be found in different parts of the world, and some of the most popular uses include treatment of cancer, constipation, diabetes, digestive problems, dysentery, external wounds, intestinal worms, pain, ulcers and weight loss [10]. Croton sylvaticus Hochst. is a fast-growing and decorative tree, which is widely used in the management of inflammatory conditions, infections and oxidative stress-related diseases. In Tanzania and Kenya, the decoction of the leaves and root bark of C. sylvaticus is used in traditional medicine against tuberculosis (TB) and inflammation, as a purgative, as a wash for body swelling caused by kwashiorkor or tuberculosis, and for the treatment of malaria [11]. Previous reports showed the acetylcholinesterase inhibitory activity of the ethyl acetate leaf extract of C. sylvaticus and isolated compounds [12]. Other compounds isolated from this plant have antiplasmodial activity [13], and low to high toxicity observed in the brine shrimp larval lethality test [11]. Croton gratissimus Burch. (synonym C. zambesicus Müll.Arg.) is native to tropical west and central Africa, and is used to treat fever, dysentery and convulsions [14]. The leaf decoction is used in Benin as an anti-hypertensive and anti-microbial agent (against urinary infections) and to treat malaria-linked fever [15]. Cembranolides isolated from leaf extracts of Croton gratissimus have moderate activity against ovarian cancer cell lines and Plasmodium falciparum [16, 17]. Croton pseudopulchellus Pax, originating from southern Africa, is widely distributed in tropical East and West Africa. This Croton species is used in southern and central parts of South Africa against TB symptoms such as coughs, fever and blood in sputum [18]. Based on their diverse uses in traditional medicine against various diseases in which excess production of free radicals or inflammation is implicated, the present study aims to evaluate the antioxidant, anti-inflammatory and cytotoxic properties of three Croton species extracted using different solvents.
Plant material and extraction
Fresh leaves of the three Croton species were collected at the Lowveld Botanical Gardens, Nelspruit, Mpumalanga (South Africa) in January 2016. The plant materials were dried at room temperature in a well-ventilated room for two weeks. The dried materials were ground to fine powder and stored in honey jars in the dark until use. Herbarium specimens for each of the plant species were prepared, and identification was made by Mrs. Elsa van Wyk and Ms. Magda Nel of the HGWJ Schweickerdt Herbarium (PRU), University of Pretoria. The identification numbers of plant species are presented in Table 1. Powder (100 g) from each plant was extracted by maceration in 1000 mL of different solvents (water, acetone and ethanol). The mixtures were covered and left overnight at room temperature. Each mixture was filtered through Whatman No.1 filter paper into pre-weighed honey jars and the filtrates obtained from acetone and ethanol extraction were concentrated under reduced pressure using a rotary evaporator at 40 °C to obtain a residue which constituted the crude extract. The water filtrate was dried in a ventilated oven at 50–55 °C until complete evaporation of water. The extraction process was repeated three times with fresh solvent. The honey jars containing the crude extracts were weighed again to determine the percentage yield of the crude extracts (Table 1). The dried extracts were stored in a cold room (4 °C) until use.
Table 1 Herbarium specimen identification and yield of crude extracts from the three Croton species
Phytochemical analysis
Total phenolic content
The total phenolic content (TPC) of the different extracts was determined using the Folin-Ciocalteu method adapted to a 96-well microplate as described by Zhang et al. [19]. The reaction mixture was prepared by adding, in order, 20 μL of each extract (5 mg/mL in DMSO), 100 μL of Folin-Ciocalteu reagent (1 mL of Folin-Ciocalteu reagent in 9 mL of distilled water), and 80 μL of 7.5% Na2CO3 solution in deionized water. The mixture was then incubated in the dark at room temperature (25 °C) for 30 min, and the absorbance was read at 765 nm on a microplate reader (Epoch, BioTek). The total phenolic content was estimated from a gallic acid (GA) calibration curve (10–100 mg/L; y = 0.6886x + 0.0884; R² = 0.9901), and results were expressed as milligrams of gallic acid equivalent (GAE) per gram of extract.
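As a worked illustration, the short sketch below inverts the gallic acid calibration curve above (y = 0.6886x + 0.0884) to convert a sample absorbance into a gallic acid concentration, and then into mg GAE per gram of extract. The reaction volume, extract mass and function names are illustrative assumptions for this sketch, not values taken from the protocol.

```python
# Minimal sketch: total phenolic content from the gallic acid calibration
# curve y = 0.6886x + 0.0884 (A765 vs. concentration in mg/L, R^2 = 0.9901).
# The reaction volume and extract mass per well are assumed for illustration.

SLOPE, INTERCEPT = 0.6886, 0.0884

def gallic_acid_conc(absorbance: float) -> float:
    """Invert the calibration curve to get the gallic acid concentration (mg/L)."""
    return (absorbance - INTERCEPT) / SLOPE

def tpc_mg_gae_per_g(absorbance: float,
                     reaction_volume_l: float = 200e-6,  # assumed 200 uL per well
                     extract_mass_g: float = 1e-4) -> float:  # assumed 0.1 mg extract
    """Total phenolic content in mg GAE per g of extract (assumed units)."""
    mg_gae_in_well = gallic_acid_conc(absorbance) * reaction_volume_l
    return mg_gae_in_well / extract_mass_g

if __name__ == "__main__":
    for a765 in (0.25, 0.45, 0.65):
        print(f"A765 = {a765:.2f} -> {tpc_mg_gae_per_g(a765):.2f} mg GAE/g")
```

The same inversion applies to the quercetin calibration curve used for the total flavonoid content below, with the curve parameters swapped.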
Total flavonoid content
The total flavonoid content (TFC) of different extracts was determined using the aluminium chloride spectrophotometric method based on the formation of aluminium-flavonoid complexes [20]. The reaction mixture was prepared by mixing 2 mL of each extract (0.3 mg in 1 mL of methanol), 0.1 mL of aluminium chloride hexahydrate solution (10% aqueous AlCl3 solution), 0.1 mL of 1 M potassium acetate and 2.8 mL of deionized water. The mixture was shaken and incubated at room temperature (25 °C) for 10 min, and 200 μL of each mixture was transferred to 96-well microplate. The absorbance was measured at 415 nm using a microplate reader (Epoch, BioTek). A calibration curve was plotted from the absorbance of quercetin (0.005–0.1 mg/mL; y = 9.0545x – 0.0142; R2 = 0.9999), and the total flavonoid content was expressed as milligram of quercetin equivalent (QE) per gram of extract.
Antioxidant assays
The 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay
The technique described by Brand-Williams et al. [21] with some modifications was applied for the determination of the DPPH scavenging capacity of extracts. Briefly, the extracts (40 μL) were serially diluted with methanol on a 96-well plate, followed by the addition of the DPPH solution (160 μL) prepared at 25 μg/mL. The mixture was incubated at room temperature in the dark for 30 min and the absorbance was measured at 517 nm using a microplate reader (Epoch, BioTek). Ascorbic acid and trolox were used as positive controls, methanol plus DPPH as negative control, and sample without DPPH as blank. The DPPH scavenging capacity was calculated at each concentration according to the formula (1) below:
$$ \mathrm{Scavenging\ capacity}\ (\%)=\frac{\mathrm{Absorbance}\ (\mathrm{control})-\mathrm{Absorbance}\ (\mathrm{sample})}{\mathrm{Absorbance}\ (\mathrm{control})}\times 100 \qquad (1) $$
The inhibitory concentration (IC50) was determined by plotting a non-linear curve of percentage DPPH scavenging capacity against the logarithm of different concentrations of the extract.
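The non-linear IC50 estimation described here is typically done with a four-parameter logistic fit on log-transformed concentrations. The sketch below shows one minimal way to do this with SciPy; the dilution series and response values are invented for illustration and do not come from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, bottom, top, log_ic50, hill):
    """Four-parameter logistic: % inhibition as a function of log10(concentration)."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ic50 - log_c) * hill))

# Hypothetical serial dilution (ug/mL) and % scavenging responses.
conc = np.array([3.125, 6.25, 12.5, 25.0, 50.0, 100.0, 200.0])
inhibition = np.array([8.0, 15.0, 28.0, 45.0, 63.0, 78.0, 86.0])

params, _ = curve_fit(four_pl, np.log10(conc), inhibition,
                      p0=[0.0, 100.0, np.log10(30.0), 1.0], maxfev=10000)
ic50 = 10 ** params[2]
print(f"Estimated IC50 ~ {ic50:.1f} ug/mL")
```

The same fit applies unchanged to the ABTS, NO and 15-LOX data below, since all report percentage inhibition against a serial dilution.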
The 2,2′-azino-bis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assay
The method described by Re et al. [22] with some modifications was used for the determination of the ABTS radical scavenging capacity of the extracts. Firstly, the reaction solution was prepared by mixing a solution of ABTS (7 mM) with a solution of potassium persulfate (2.45 mM) at room temperature for 12 to 16 h. The optical density of the reaction solution containing the ABTS radical produced was calibrated to 0.70 ± 0.02 at 734 nm before use. Secondly, the extracts (40 μL) were serially diluted with methanol, followed by the addition of the ABTS radical (160 μL), and the optical density was measured after 5 min at 734 nm using a microplate reader (Epoch, BioTek). Two positive controls (trolox and ascorbic acid) were used. Methanol plus ABTS radical was used as negative control while extract without ABTS was considered as the blank. The percentage of ABTS scavenging capacity was calculated at each concentration according to the formula (1) above, and the inhibitory concentrations (IC50) values were determined as indicated in the previous paragraph.
Anti-inflammatory assays
Nitric oxide inhibitory assay
The method published by Dzoyem and Eloff [23] was used to determine the nitric oxide inhibitory activity of the extracts. The RAW 264.7 macrophages were obtained from the American Type Culture Collection (ATCC) (Rockville, MD, USA), and were grown at 37 °C with 5% CO2 in a humidified environment in Dulbecco's Modified Eagle's Medium (DMEM) high glucose (4.5 g/L) containing L-glutamine (4 mM) and sodium pyruvate (Hyclone™) supplemented with 10% (v/v) fetal bovine serum (Capricorn Scientific GmbH, South America) and 1% penicillin-streptomycin-fungizone (PSF). Nitric oxide (NO) production by RAW 264.7 macrophages was measured using the Griess reagent (Sigma Aldrich, Germany) after 24 h of lipopolysaccharide (LPS) stimulation in the presence or absence of the extracts or quercetin used as positive control. Briefly, the RAW 264.7 macrophages were seeded at a density of 2 × 10⁴ cells per well in 96-well microtitre plates, and the cells were left overnight to allow attachment to the bottom of the plate. The cells were treated with different concentrations of the extracts dissolved in DMSO, with the final concentration of DMSO not exceeding 0.5%. Thereafter, the cells were stimulated by addition of LPS at a final concentration of 1 μg/mL per well. Cells treated with LPS only were considered as the negative control. After 24 h of incubation at 37 °C with 5% CO2 in a humidified environment, the supernatant (100 μL) from each well was transferred into new 96-well microtitre plates, and an equal volume of Griess reagent (Sigma Aldrich, Germany) was added. The mixture was left in the dark at room temperature for 15 min, and the absorbance was determined at 550 nm on a microplate reader (Synergy Multi-Mode Reader, BioTek). The quantity of nitrite was determined from a sodium nitrite standard curve. The percentage of NO inhibition was calculated from the ability of each extract to inhibit nitric oxide production by RAW 264.7 macrophages compared with the control (cells treated with LPS without extract). In addition, cell viability was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay [24]. The culture medium was aspirated from the plates and replaced by fresh medium (200 μL) with 30 μL of thiazolyl blue tetrazolium bromide (5 mg/mL) dissolved in phosphate buffered saline. After incubation for 4 h, the medium was gently aspirated, and the formazan crystals were dissolved in 50 μL of DMSO and kept in the dark for 15 min at room temperature. The absorbance was measured spectrophotometrically at 570 nm on a microplate reader (Synergy Multi-Mode Reader, BioTek).
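For readers reproducing the nitrite quantification and the percentage NO inhibition, the sketch below fits a linear sodium nitrite standard curve and applies the same relative-difference form as formula (1) to the interpolated nitrite values. All absorbances and standard concentrations are assumed example values, not data from this experiment.

```python
import numpy as np

# Hypothetical sodium nitrite standard curve (concentration in uM vs. A550).
std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])
std_abs = np.array([0.05, 0.12, 0.19, 0.33, 0.61])

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear standard curve

def nitrite_um(absorbance: float) -> float:
    """Interpolate a Griess absorbance on the standard curve (result in uM)."""
    return (absorbance - intercept) / slope

# % NO inhibition relative to the LPS-only control; absorbances are assumed.
a_lps_control, a_extract = 0.48, 0.21
inhibition = (nitrite_um(a_lps_control) - nitrite_um(a_extract)) \
             / nitrite_um(a_lps_control) * 100
print(f"NO inhibition ~ {inhibition:.1f}%")
```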
Inhibition of soybean 15-lipoxygenase (15-LOX) enzyme
The assay was performed according to the procedure of Pinto et al. [25] with slight modifications to the microtitre plate format. The assay is based on the formation of the complex Fe3+/xylenol orange with absorption at 560 nm. The 15-lipoxygenase (15-LOX) enzyme from soybean (Sigma Aldrich, Germany) was incubated with different concentrations of extracts or quercetin used as standard inhibitor (both serially diluted from 0.78 to 100 μg/mL) at 25 °C for 5 min. The substrate, linoleic acid (final concentration, 140 μM) prepared in Tris-HCl buffer (50 mM, pH 7.4), was added and the mixture was incubated at 25 °C for 20 min in the dark. The assay was terminated by the addition of 100 μL of FOX reagent [sulfuric acid (30 mM), xylenol orange (100 μM), iron (II) sulfate (100 μM) in methanol/water (9:1)]. The negative control was made of the enzyme 15-LOX solution, buffer, substrate and FOX reagent while the blanks contained the enzyme 15-LOX and buffer, but the substrate was added after the FOX reagent. The lipoxygenase inhibitory activity was evaluated by calculating the percentage of the inhibition of hydroperoxide production from the changes in absorbance values at 560 nm after 30 min at 25 °C as indicated in the formula (2) below.
$$ \mathrm{LOX\ inhibition}\ (\%)=\frac{\mathrm{Absorbance}\ (\mathrm{control})-\mathrm{Absorbance}\ (\mathrm{sample})}{\mathrm{Absorbance}\ (\mathrm{control})}\times 100 \qquad (2) $$
The IC50 values of the extracts or quercetin, which represent the concentration leading to 50% inhibition, were calculated using the non-linear regression curve of the percentage of 15-LOX inhibition against the logarithm of the concentrations tested.
Cytotoxicity assay
The four cancer cell lines (MCF-7: human breast adenocarcinoma cells; HeLa: human cervix adenocarcinoma cells; Caco-2: human epithelial colorectal adenocarcinoma cells; A549: human epithelial lung adenocarcinoma cells) were obtained from the American Type Culture Collection (ATCC) (Rockville, MD, USA). These cells were grown at 37 °C with 5% CO2 in a humidified environment in Dulbecco's Modified Eagle's Medium (DMEM) high glucose (4.5 g/L) containing L-glutamine (4 mM) and sodium pyruvate (Separations, RSA) supplemented with 10% (v/v) fetal bovine serum (Capricorn Scientific Gmbh, South America). Non-cancerous African green monkey (Vero) kidney cells (obtained from ATCC) were maintained at 37 °C and 5% CO2 in a humidified environment in Minimal Essential Medium (MEM) containing L-glutamine (Lonza, Belgium) supplemented with 5% fetal bovine serum (Capricorn Scientific Gmbh, South America) and 1% gentamicin (Virbac, RSA).
Cell treatment and assay procedure
The cells were seeded at a density of 10⁴ cells per well on 96-well microtitre plates, and were left overnight to allow attachment. After this, the cells were treated with different concentrations of extracts dissolved in dimethyl sulfoxide (DMSO), and further diluted in fresh culture medium. In each experiment, the highest concentration of DMSO (negative control) in the medium was 0.5%. After incubation for 48 h at 37 °C with 5% CO2, the culture medium was discarded, and replaced by fresh medium (200 μL) with 30 μL of thiazolyl blue tetrazolium bromide (5 mg/mL) dissolved in phosphate buffered saline. The medium was gently aspirated after 4 h of incubation, and the formazan crystals were dissolved in 50 μL of DMSO and kept in the dark for 15 min at room temperature. The absorbance was measured spectrophotometrically at 570 nm on a microplate reader (Synergy Multi-Mode Reader, BioTek). The viability of cells treated with the extracts was calculated for each concentration relative to the negative control. The 50% inhibitory concentrations (IC50) for cancer cell lines and the 50% lethal concentrations (LC50) for the non-cancerous cells were determined by plotting the non-linear regression curve of percentage cell survival versus the logarithm of the concentrations of each extract. The selectivity index (SI) values were calculated for each extract by dividing the LC50 for the non-cancerous cells by the IC50 for each cancer cell type in the same units.
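The selectivity index calculation just described reduces to a simple ratio, as in the sketch below; the LC50 and IC50 values used are hypothetical placeholders rather than entries from Table 3.

```python
# Selectivity index: SI = LC50 (non-cancerous Vero cells) / IC50 (cancer line).
# All LC50/IC50 values below are hypothetical placeholders, not Table 3 data.

lc50_vero = 300.0  # ug/mL, assumed
ic50_cancer = {"A549": 120.0, "Caco-2": 95.0, "HeLa": 48.0, "MCF-7": 60.0}

for cell_line, ic50 in ic50_cancer.items():
    si = lc50_vero / ic50
    verdict = "selective" if si > 1 else "non-selective"
    print(f"{cell_line}: SI = {si:.2f} ({verdict})")
```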
Evaluation of the induction of apoptosis on cancer cells
The induction of apoptosis by the most active extracts from each plant was evaluated by measuring the caspase 3/7 activity on the different cancer cell lines with the Caspase-Glo® 3/7 assay kit (Promega). All four cancer cell lines were seeded at a density of 10⁴ cells per well on 96-well microtitre plates and allowed to adhere overnight. The cells were treated with the extracts at different concentrations (½ × IC50, IC50 and 2 × IC50) or DMSO (0.5%) as negative control, and the plates were incubated at 37 °C with 5% CO2 for 24 h. After treatment, the Caspase-Glo® 3/7 reagent was prepared according to the manufacturer's guidelines, and 100 μL of the reagent was added per well and incubated for 1 h at room temperature in the dark. Following this incubation, the luminescence was measured on a microplate reader (Synergy Multi-Mode Reader, BioTek). The data were analysed and expressed as a percentage of the untreated cells (control) and as fold change.
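The percentage-of-control and fold-change calculations can be reproduced as in the sketch below; the luminescence readings are invented triplicates, since the raw data are not given in the text.

```python
import numpy as np

# Hypothetical Caspase-Glo 3/7 luminescence readings (triplicates, RLU);
# the control wells received 0.5% DMSO only.
control = np.array([10500, 9800, 10200])
treated = {
    "0.5 x IC50": np.array([13100, 12800, 13500]),
    "IC50":       np.array([15200, 14700, 15600]),
    "2 x IC50":   np.array([18900, 18200, 19400]),
}

ctrl_mean = control.mean()
for dose, rlu in treated.items():
    fold = rlu.mean() / ctrl_mean
    print(f"{dose}: {fold * 100:.0f}% of control ({fold:.2f}-fold change)")
```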
All experiments were performed in triplicate, and the results are presented as mean ± standard error of the mean (SEM). Statistical analysis was carried out with GraphPad Instat 3.0 software. The Student–Newman–Keuls test was used to determine P-values for the differences observed between the extracts, while Dunnett's test was used to compare the extracts with the control. Results were considered significantly different when P < 0.05.
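As a rough computational counterpart to this analysis, the sketch below runs a Dunnett-style comparison of treatment groups against a control, assuming SciPy version 1.11 or later (where scipy.stats.dunnett became available); the Student–Newman–Keuls procedure has no SciPy equivalent and is not covered here. All readings are synthetic.

```python
import numpy as np
from scipy.stats import dunnett  # requires SciPy >= 1.11

rng = np.random.default_rng(0)
control = rng.normal(100, 8, size=3)    # e.g. untreated cell viability (%)
extract_a = rng.normal(70, 8, size=3)   # synthetic treatment groups
extract_b = rng.normal(90, 8, size=3)

result = dunnett(extract_a, extract_b, control=control)
for name, p in zip(("extract A", "extract B"), result.pvalue):
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name} vs. control: p = {p:.3f} ({verdict})")
```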
Yield of extraction and phytochemical content of crude extracts
The voucher specimen numbers (PRU) and the extraction yield of each plant material in each solvent are summarized in Table 1. The highest extraction yield was observed with C. pseudopulchellus for all three solvents used, and among the solvents, ethanol gave the highest yield for each plant species. The phytochemical content of all extracts is presented in Table 2, and significant differences in total phenolic content (TPC) and total flavonoid content (TFC) were noted between the three extraction solvents. The organic solvents (acetone and ethanol) extracted more of these compounds than water. The acetone leaf extract of C. gratissimus had the highest TPC with 222.29 mgGAE/g, whereas the highest TFC was obtained with the acetone and ethanol leaf extracts of C. sylvaticus with 82.76 and 84.54 mgQE/g respectively.
Table 2 Phytochemical content, antioxidant activity, nitric oxide and 15-lipoxygenase inhibition of different extracts from Croton species and positive controls
Antioxidant activity of extracts
Two antioxidant assays were used, both based on measuring the disappearance of colour of the DPPH and ABTS free radicals. As expected, the free radical scavenging activity of the extracts was concentration-dependent (data not shown), and the IC50 values determined are presented in Table 2. The antioxidant activity varied within extracts from the same plant and between extracts from different plants. It should be noted that a lower IC50 value indicates a stronger antioxidant potency of the sample tested. The ethanol leaf extracts of all three plants therefore had good antioxidant potency compared with the acetone and water extracts from the same plant. Among all the extracts from the three plants, the ethanol leaf extract of C. gratissimus had the highest antioxidant potency, with IC50 values of 32.18 and 34.95 μg/mL for the DPPH and ABTS radical scavenging activity, respectively. Ascorbic acid and trolox, known as potent antioxidant compounds, had the best antioxidant potency, with IC50 values of 1.92 and 3.92 μg/mL (ascorbic acid) and 2.21 and 4.64 μg/mL (trolox) for the DPPH and ABTS radical scavenging activity, respectively (Table 2).
Anti-inflammatory activity of extracts
The anti-inflammatory activity of leaf extracts was determined using the nitric oxide (NO) and 15-lipoxygenase (15-LOX) inhibitory assays.
Nitric oxide inhibitory effect of extracts on LPS-stimulated RAW 264.7 macrophages
All the extracts from the three Croton species inhibited NO production in a concentration-dependent manner (Fig. 1a and b). The water leaf extracts of the three plants had the lowest NO inhibitory effect, except for the water extract of C. gratissimus, which had good inhibitory activity. The acetone and ethanol leaf extracts of the plants had higher NO inhibitory activity than their respective water leaf extracts. The IC50 values were calculated and are presented in Table 2. The acetone leaf extracts of the three plants had the lowest IC50 values, which were not significantly different from those obtained for the ethanol leaf extracts. However, the acetone leaf extract of C. pseudopulchellus had an IC50 value (34.64 μg/mL) significantly (P < 0.05) lower than that of the ethanol extract (53.49 μg/mL) from the same plant, and therefore had the highest NO inhibitory potency among the extracts. Quercetin, used as positive control, had the highest overall NO inhibitory potency with an IC50 of 5.82 μg/mL.
Activities of the extracts from three Croton species on the percentage of nitric oxide inhibition (a), nitric oxide production (b) and cell viability (c) on LPS-stimulated RAW 264.7 macrophages. Data are presented as means of triplicate measurements ± standard error. CSA, CSE and CSW represent respectively acetone, ethanol and water extracts of Croton sylvaticus. CPA, CPE and CPW represent respectively acetone, ethanol and water extracts of Croton pseudopulchellus. CGA, CGE and CGW represent respectively acetone, ethanol and water extracts of Croton gratissimus. Ctrl: control group (0.5% DMSO); LPS: lipopolysaccharide
The cell viability of LPS-stimulated RAW 264.7 macrophages after treatment with the extracts and quercetin is presented in Fig. 1c. The acetone and ethanol leaf extracts as well as quercetin were slightly cytotoxic on LPS-stimulated RAW 264.7 macrophages with percentage of cell viability varying between 62 and 96%. The water leaf extracts were less cytotoxic with cell viability greater than 76% at the highest concentration (100 μg/mL) tested.
Lipoxygenase inhibitory activity of extracts
The ferrous oxidation-xylenol orange (FOX) assay was used to determine the 15-lipoxygenase inhibitory activity of different extracts from the three Croton species, and the IC50 values were determined using the non-linear regression curves (Additional file 1: Figure S1) and the results are presented in Table 2. All the extracts except the water extracts had better inhibitory activity against 15-lipoxygenase when compared to the positive control (quercetin). The IC50 values of the active extracts (acetone and ethanol) from the three plants varied between 0.57 and 11.64 μg/mL which is significantly (P < 0.05) different from quercetin (24.60 μg/mL). Ethanol leaf extracts were more active than acetone leaf extracts from the same plant species, thus suggesting that ethanol extracted more anti-lipoxygenase compounds than acetone. The highest lipoxygenase inhibitory activity was obtained with the ethanol leaf extract of C. pseudopulchellus (IC50 of 0.57 μg/mL).
Selective cytotoxic effect of extracts on a non-cancerous cell versus cancerous cells
Different extracts were tested for cytotoxicity against four cancerous (A549, Caco-2, HeLa and MCF-7) cell types as well as the non-cancerous African green monkey (Vero) kidney cells, and the graphs of cell viability against the concentrations tested are presented in Additional file 2: Figure S2, Additional file 3: Figure S3, Additional file 4: Figure S4, Additional file 5: Figure S5 and Additional file 6: Figure S6, respectively. The LC50 and IC50 values of the extracts were determined from the concentration-dependent graphs and are presented in Table 3. Water leaf extracts had the lowest cytotoxic effect on both non-cancerous and cancerous cells, with LC50 or IC50 values greater than 533.33 μg/mL and 200 μg/mL, respectively. An exception was the water leaf extract of C. sylvaticus, which had good cytotoxicity (IC50 of 45.62 μg/mL) on MCF-7 cells with a promising selectivity index greater than 21.92 (see Table 3). On the other hand, the ethanol leaf extracts of C. pseudopulchellus and C. sylvaticus were more cytotoxic on both non-cancerous and cancerous cells, with the lowest LC50 or IC50 values obtained against all cell lines. The acetone and ethanol leaf extracts of C. pseudopulchellus and C. sylvaticus had the highest cytotoxic activity on the non-cancerous cells, with LC50 varying between 7.86 and 48.19 μg/mL, while the acetone and ethanol extracts of C. gratissimus were less cytotoxic on these cells, with LC50 varying between 152.30 and 462.88 μg/mL. The selectivity index (SI) values indicated that the acetone and ethanol extracts of C. gratissimus were the most selective, with SI ranging between 1.91 and 6.25 (see Table 3). In addition, the ethanol leaf extract and acetone leaf extract of C. sylvaticus were highly selective against A549 and MCF-7 cells, with SI of 4.70 and 2.12, respectively. The same observation was made with the acetone leaf extract of C. pseudopulchellus, which had SI of 1.31 and 1.95 against A549 and MCF-7 cells, respectively. On the contrary, the ethanol leaf extract of C. pseudopulchellus was less selective, with the lowest SI values, ranging between 0.12 and 0.58, against all cancerous cells. Similarly, the acetone and ethanol leaf extracts of C. sylvaticus were less selective, with SI varying between 0.07 and 0.18 against Caco-2 and HeLa cells. Doxorubicin hydrochloride, the positive control, was highly cytotoxic on all cells, with SI ranging between 0.87 and 1.75.
Table 3 Cytotoxic effect (IC50 and LC50) and the selectivity index (SI) of different extracts from Croton species and reference drug (doxorubicin hydrochloride) on cancerous cell lines versus a non-cancerous cell line
Induction of caspase-dependent apoptosis by active extracts on cancerous cells
In this assay, the acetone leaf extracts of the three Croton species were used based on their high selectivity indexes or lower cytotoxicity to non-cancerous cells compared to the other extracts. The activation of caspase-3 and -7 enzymes was differentially observed in all four cancerous cell types treated with the active extracts compared to the untreated controls (see Fig. 2). Caspase-3 and -7 enzymes were better activated on HeLa and MCF-7 cells after treatment with the acetone leaf extracts of the three plants. The activation of these enzymes was also observed on A549 and Caco-2 cells, but only after treatment with the acetone leaf extracts of C. pseudopulchellus and C. gratissimus (Fig. 2b and c). These two extracts significantly (P < 0.05) induced caspase-3 and -7 activity in all cancerous cells at concentrations of ½ × IC50 (1.24- to 1.56-fold change). A non-significant increase in caspase-3 and -7 activity was noted after treatment of A549 and MCF-7 cells with the acetone leaf extract of C. sylvaticus (1.10- to 1.13-fold change). The acetone leaf extract of C. gratissimus induced caspase-3 and -7 activity in a concentration-dependent manner on HeLa cells (Fig. 2c), and the highest induction (1.83-fold change) was obtained at a concentration of 2 × IC50.
Activation of caspase-3/7 after 24 h of treatment with acetone leaf extracts of Croton sylvaticus (a), Croton pseudopulchellus (b) and Croton gratissimus (c) on cancerous A549, Caco-2, HeLa and MCF-7 cells. The caspase-3/7 activity is expressed as percentage or fold change relative to the untreated cells (control). Data are presented as mean ± standard error of three independent experiments. *P < 0.05 and **P < 0.01 indicate a significant difference compared to the control. CSA, CPA and CGA represent respectively the acetone leaf extracts of Croton sylvaticus, Croton pseudopulchellus and Croton gratissimus. Ctrl: control group (0.5% DMSO)
Our study aimed to evaluate the antioxidant, anti-inflammatory and cytotoxic activities of three Croton species. The ethanol leaf extracts of the three plants were highly active in all experiments (except the NO inhibitory assay) compared to the acetone and water leaf extracts. These results suggest that the antioxidant, anti-inflammatory and cytotoxic compounds extracted from the three plants are more concentrated in the ethanol leaf extract than in the acetone or water leaf extracts. We also investigated the potential relationship between the antioxidant, anti-inflammatory and cytotoxic activities of the active ethanol and acetone extracts. This relationship was analysed by determining the Pearson correlation coefficients (r) after plotting a linear curve with the IC50 values of each cancer cell line on the y-axis against phytochemical content or the IC50 values of antioxidant power (DPPH, ABTS) and anti-inflammatory activity (NO, 15-LOX) on the x-axis (Table 4). A linear correlation existed between the antioxidant, anti-inflammatory and cytotoxic activities, although it was weak (r < 0.5). In fact, free radicals are well known to play a major role in the development of oxidative stress, which can lead to many illnesses including cardiovascular diseases, diabetes, inflammation, degenerative diseases and cancer [26]. Nitric oxide (NO), a molecule playing a crucial role in the inflammatory response, can react with free radicals such as superoxides to produce peroxynitrites, which can cause irreversible damage to cell membranes, leading to the promotion of tumor growth and proliferation [27]. In addition, natural inhibitors of lipoxygenases have been shown to suppress carcinogenesis and tumor growth in a number of experimental models [28]. Moreover, several scientific reports have suggested that antioxidant and anti-inflammatory agents could be beneficial in the prevention and treatment of cancer [29]. Our results therefore suggest that the antioxidant or anti-inflammatory activities of the extracts may contribute moderately to their cytotoxic activity. Phenolics and flavonoids are known to contribute either directly or indirectly to cytotoxic activity. In our study, we noted that the acetone and ethanol extracts of C. gratissimus, which had the highest total phenolic contents (222.29 and 180.61 mg GAE/g, respectively), were selectively cytotoxic to cancerous cells compared to non-cancerous cells. Indeed, due to their anti- and pro-oxidant potential, phenolics (which also include flavonoids) may have cytotoxic activity against different human cancer cells with little or no effect on normal cells. This selectivity in the cytotoxic properties of phenolics has strengthened interest in formulating novel and less toxic anticancer products based on these types of compounds [30, 31].
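The correlation analysis just described can be reproduced with SciPy; the paired IC50 values below are invented stand-ins for the entries of Table 4.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired values for six active extracts: antioxidant IC50 (DPPH, ug/mL)
# on the x-axis against cytotoxic IC50 on one cancer cell line (ug/mL) on the y-axis.
dpph_ic50 = np.array([12.4, 30.1, 8.7, 55.2, 21.9, 40.3])
cyto_ic50 = np.array([60.1, 92.5, 38.0, 150.4, 45.7, 120.8])

r, p_value = pearsonr(dpph_ic50, cyto_ic50)
print(f"Pearson r = {r:.2f}, P = {p_value:.3f}")  # r < 0.5 would indicate a weak correlation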
Table 4 Correlation between phytochemical content, antioxidant, anti-inflammatory and antiproliferative activity of active extracts
The goal of any chemotherapeutic treatment is to selectively attenuate or destroy pathogenic micro-organisms or cancerous cells with minimal side effects on the host cells [32]. This principle, known as selective toxicity, is the key to all chemotherapeutic treatment. In this study, the acetone and ethanol extracts of C. gratissimus were the most selective, with SI ranging between 1.91 and 6.25, indicating that these extracts may be useful in the search for anticancer compounds. A cembranolide isolated from the stem bark of Croton gratissimus had moderate activity against PEO1 and PEO1TaxR ovarian cancer cell lines [16]. In the present work, four cancerous cell lines (A549, Caco-2, HeLa, MCF-7) and a non-cancerous (Vero) cell line were used to evaluate the antiproliferative activity of the crude extracts from the three Croton species. The use of these cancerous cells with the non-cancerous (Vero) cell line as cell models has been reported for comparison and determination of selectivity indexes [33, 34]. However, the cytotoxic effect on this non-cancerous (Vero) cell line of animal origin needs to be confirmed on other non-cancerous cells of human origin. The selective toxicity of the acetone and ethanol extracts of C. gratissimus also suggests that the active compounds interact with special cancer-associated receptors or molecules specific to cancer cells (not found in non-cancerous cells), thus activating mechanisms that cause cancer cell death [35]. The activation of caspase-3 and -7 enzymes was observed in all four cancer cell types treated with the active extracts compared to the untreated cells, which reveals that apoptosis took place in the treated cells. Indeed, caspases-3 and -7 are known as the "executioners" of apoptosis since they serve as substrates for initiator caspases in the extrinsic or intrinsic apoptotic pathways [36]. It will be important to comprehensively investigate the mechanism of the activity, and this aspect will be addressed once the compounds responsible for the activity have been isolated. The aim of the current study was to explore the possibility that the extracts have inhibitory activity on cancer cell growth.
According to the United States National Cancer Institute, a crude extract is generally considered to have in vitro cytotoxic activity if the IC50 is lower than 30 μg/mL [37]. By this criterion, the acetone and ethanol extracts of C. pseudopulchellus and C. sylvaticus were considered the most active against both A549 and MCF-7 cancerous cells. The modest selectivity indexes of these extracts against these two cancerous cell lines may be improved through the isolation of active compounds, which might reduce the toxic effects of the crude extracts. Studies are ongoing to isolate active compounds from these active extracts.
In summary, owing to their selective toxicity between non-cancerous and cancerous cells, together with beneficial antioxidant and anti-inflammatory activities, the acetone and ethanol leaf extracts of Croton gratissimus may be useful against cancer and other oxidative stress-related diseases. The isolation of active compounds from these extracts will be of great interest to fully understand the mechanism of anticancer activity. In addition, the acetone and ethanol extracts of C. pseudopulchellus and C. sylvaticus, which were cytotoxic to both cancerous and non-cancerous cells, may be further explored as sources of new cytotoxic compounds.
ABTS: 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)
ATCC: American Type Culture Collection
DMSO: Dimethyl sulphoxide
DPPH: 2,2-diphenyl-1-picrylhydrazyl
FOX: Ferrous oxidation-xylenol orange
GAE: Gallic acid equivalent
IC50: Inhibitory concentration to 50% of cells
LC50: Lethal concentration to 50% of cells
LOX: Lipoxygenase
LPS: Lipopolysaccharide
MEM: Minimal essential medium
MTT: 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide
NO: Nitric oxide
QE: Quercetin equivalent
TFC: Total flavonoid content
TPC: Total phenolic content
Gęgotek A, Nikliński J, Žarković N, Žarković K, Waeg G, Łuczaj W, Charkiewicz R, Skrzydlewska E. Lipid mediators involved in the oxidative stress and antioxidant defence of human lung cancer cells. Redox Biol. 2016;9:210–9.
Liochev SI. Reactive oxygen species and the free radical theory of aging. Free Radic Biol Med. 2013;60:1–4.
Rahman K. Studies on free radicals, antioxidants, and co-factors. Clin Interv Aging. 2007;2(2):219–36.
Islam S, Samima N, Muhammad AK, Sakhawat Hossain A, Farhadul I, Proma K, Haque Mollah MN, Mamunur R, Golam S, Md Aziz AR, et al. Evaluation of antioxidant and anticancer properties of the seed extracts of Syzygium fruticosum Roxb. Growing in Rajshahi, Bangladesh. BMC Complement Altern Med. 2013;13.
Zhang Y-J, Gan R-Y, Li S, Zhou Y, Li A-N, Xu D-P, Li H-B. Antioxidant phytochemicals for the prevention and treatment of chronic diseases. Molecules. 2015;20:21138–56.
Wang H, Khor TO, Shu L, Su ZY, Fuentes F, Lee JH, Kong AN. Plants vs. cancer: a review on natural phytochemicals in preventing and treating cancers and their druggability. Anti Cancer Agents Med Chem. 2012;12(10):1281–305.
Bartsch H, Nair J. Chronic inflammation and oxidative stress in the genesis and perpetuation of cancer: role of lipid peroxidation, DNA damage, and repair. Langenbeck's Arch Surg. 2006;391:499–510.
Gonda TA, Tu S, Wang TC. Chronic inflammation, the tumor microenvironment and carcinogenesis. Cell Cycle. 2009;8:2005–13.
Reuter S, Gupta SC, Chaturvedi MM, Aggarwal BB. Oxidative stress, inflammation, and cancer: how are they linked? Free Radic Biol Med. 2010;49(11):1603–16.
Salatino A, Salatino MLF, Negri G. Traditional uses, chemistry and pharmacology of Croton species (Euphorbiaceae). J Braz Chem Soc. 2007;18(1):11–33.
Kapingu MC, Mbwambo ZH, Moshi MJ, Magadula JJ. Brine shrimp lethality of alkaloids from Croton sylvaticus Hochst. East and Central African Journal of Pharmaceutical Sciences. 2012;15:35–7.
Ndhlala AR, Aderogba MA, Ncube B, Van Staden J. Anti-oxidative and cholinesterase inhibitory effects of leaf extracts and their isolated compounds from two closely related Croton species. Molecules. 2013;18:1916–32.
Langat M, Mulholland DA, Crouch N. New diterpenoids from Croton sylvaticus and Croton pseudopulchellus (Euphorbiaceae) and antiplasmodial screening of ent-kaurenoic acid. Planta Med. 2008;74(09):PB126.
Ngadjui BT, Abegaz BM, Keumedjio F, Folefoc GN, Kapche GW. Diterpenoids from the stem bark of Croton zambesicus. Phytochemistry. 2002;60(4):345–9.
Block S, Stevigny C, De Pauw-Gillet MC, de Hoffmann E, Llabres G, Adjakidje V, Quetin-Leclercq J. Ent-trachyloban-3beta-ol, a new cytotoxic diterpene from Croton zambesicus. Planta Med. 2002;68(7):647–9.
Mulholland DA, Langat MK, Crouch NR, Coley HM, Mutambi EM, Nuzillard JM. Cembranolides from the stem bark of the southern African medicinal plant, Croton gratissimus (Euphorbiaceae). Phytochemistry. 2010;71:1381–6.
Langat MK, Crouch NR, Smith PJ, Mulholland DA. Cembranolides from the leaves of Croton gratissimus. J Nat Prod. 2011;74:2349–55.
Lall N, Meyer JJ. In vitro inhibition of drug-resistant and drug-sensitive strains of Mycobacterium tuberculosis by ethnobotanically selected south African plants. J Ethnopharmacol. 1999;66(3):347–54.
Zhang Q, Zhang J, Shen J, Silva A, Dennis D, Barrow C. A simple 96-well microplate method for estimation of total polyphenol content in seaweeds. J Appl Phycol. 2006;18:445–50.
Lin J, Tang C. Determination of total phenolic and flavonoid contents in selected fruits and vegetables, as well as their stimulatory effects on mouse splenocyte proliferation. Food Chem. 2007;101:140–7.
Brand-Williams W, Cuvelier ME, Berset C. Use of a free radical method to evaluate antioxidant activity. Lebensmittel-Wissenschaft und Technologie. 1995;28(1):25–30.
Re R, Pellegrini N, Proteggente A, Pannala A, Yang M, Rice-Evans C. Antioxidant activity applying an improved ABTS radical cation decolorization assay. Free Radic Biol Med. 1999;26(9–10):1231–7.
Dzoyem JP, Eloff JN. Anti-inflammatory, anticholinesterase and antioxidant activity of leaf extracts of twelve plants used traditionally to alleviate pain and inflammation in South Africa. J Ethnopharmacol. 2015;160:194–201.
Mosmann T. Rapid colorimetric assay for cellular growth and survival: application to proliferation and cytotoxicity assays. J Immunol Methods. 1983;65(1–2):55–63.
Pinto MC, Tejeda A, Duque AL, Macias P. Determination of lipoxygenase activity in plant extracts using a modified ferrous oxidation-xylenol orange assay. J Agric Food Chem. 2007;55(15):5956–9.
Ravipati AS, Zhang L, Koyyalamudi SR, Jeong SC, Reddy N, Bartlett J, Smith PT, Shanmugam K, Munch G, Wu MJ, et al. Antioxidant and anti-inflammatory activities of selected Chinese medicinal plants and their relation with antioxidant content. BMC Complement Altern Med. 2012;12:173.
Choudhari SK, Chaudhary M, Bagde S, Gadbail AR, Joshi V. Nitric oxide and cancer: a review. World J Surg Oncol. 2013;11:118.
Goossens L, Pommery N, Henichart JP. COX-2/5-LOX dual acting anti-inflammatory drugs in cancer chemotherapy. Curr Top Med Chem. 2007;7(3):283–96.
Dufour D, Pichette A, Mshvildadze V, Hébert M-EB, Lavoie S, Longtin A, Laprise C, Legault J. Antioxidant, anti-inflammatory and anticancer activities of methanolic extracts from Ledum groenlandicum Retzius. J Ethnopharmacol. 2007;111:22–8.
Sak K. Cytotoxicity of dietary flavonoids on different human cancer types. Pharmacogn Rev. 2014;8(16):122–46.
Batra P, Sharma A. Anti-cancer potential of flavonoids: recent trends and future perspectives. 3 Biotech. 2013;3:439–59.
Wink M. Medicinal plants: a source of anti-parasitic secondary metabolites. Molecules. 2012;17(11):12771–91.
Namvar F, Baharara J, Mahdi AA. Antioxidant and anticancer activities of selected Persian gulf algae. Ind J Clin Biochem. 2014;29(1):13–20.
Sasipawan M, Natthida W, Sahapat B. Anticancer effect of the extracts from Polyalthia evecta against human hepatoma cell line (HepG2). Asian Pac J Trop Biomed. 2012;2(5):368–74.
Chow KH, Sun RW, Lam JB, Li CK, Xu A, Ma DL, Abagyan R, Wang Y, Che CM. A gold(III) porphyrin complex with antitumor properties targets the Wnt/beta-catenin pathway. Cancer Res. 2010;70(1):329–37.
Olsson M, Zhivotovsky B. Caspases and cancer. Cell Death Differ. 2011;18(9):1441–9.
Singh G, Passari AK, Leo VV, Mishra VK, Subbarayan S, Singh BP, Kumar B, Kumar S, Gupta VK, Lalhlenmawia H, et al. Evaluation of phenolic content variability along with antioxidant, antimicrobial, and cytotoxic potential of selected traditional medicinal plants from India. Front Plant Sci. 2016;7:407.
The authors thank Dr. Tshepiso J. Makhafola from the University of South Africa for providing the cancerous cell lines. EMN is very grateful to the University of Pretoria for the postdoctoral fellowship.
This work was supported by the National Research Foundation (NRF), South Africa, through the Incentive Funding for Rated Researchers (Lyndy J. McGaw). The funder had no role in the design of the study; the collection, analysis and interpretation of data; the writing of the manuscript; or the decision to submit the article for publication.
Phytomedicine Programme, Department of Paraclinical Sciences, Faculty of Veterinary Science, University of Pretoria, Private Bag X04, Onderstepoort, Pretoria, 0110, South Africa
Emmanuel Mfotie Njoya, Jacobus N. Eloff & Lyndy J. McGaw
Department of Biochemistry, Faculty of Science, University of Yaoundé I, P.O. Box 812, Yaoundé, Cameroon
EMN initiated the project, conducted the assays and wrote the manuscript, JNE contributed to initiating the project and editing the manuscript, LJM supervised the research and edited the manuscript. All authors have read and approved the final manuscript.
Correspondence to Emmanuel Mfotie Njoya.
The authors declare that they have no competing interests. Prof Jacobus N Eloff is a Section Editor and Prof Lyndy J McGaw is an Associate Editor of BMC Complementary and Alternative Medicine.
Figure S1. Non-linear regression curves for IC50 determination of different extracts from Croton species in the 15-lipoxygenase (15-LOX) inhibitory assay. CSA and CSE represent the acetone and ethanol extracts of Croton sylvaticus, respectively. CGA and CGE represent the acetone and ethanol extracts of Croton gratissimus, respectively. CPA and CPE represent the acetone and ethanol extracts of Croton pseudopulchellus, respectively. (TIF 109 kb)
Figure S2. Concentration-dependent graph of A549 cell viability of different extracts from Croton species. Extracts were tested at concentrations between 200 and 6.25 μg/mL; Ctrl: 0.5% DMSO. (TIF 128 kb)
Figure S3. Concentration-dependent graph of Caco-2 cell viability of different extracts from Croton species. Extracts were tested at concentrations between 200 and 6.25 μg/mL; Ctrl: 0.5% DMSO. (TIF 156 kb)
Figure S4. Concentration-dependent graph of HeLa cell viability of different extracts from Croton species. Extracts were tested at concentrations between 200 and 6.25 μg/mL; Ctrl: 0.5% DMSO. (TIF 142 kb)
Figure S5. Concentration-dependent graph of MCF-7 cell viability of different extracts from Croton species. Extracts were tested at concentrations between 200 and 6.25 μg/mL; Ctrl: 0.5% DMSO. (TIF 136 kb)
Figure S6. Concentration-dependent graph of Vero cell viability of different extracts from Croton species. Extracts were tested at concentrations between 1000 and 50 μg/mL Ctrl: 0.5% DMSO. (TIF 132 kb)
Mfotie Njoya, E., Eloff, J.N. & McGaw, L.J. Croton gratissimus leaf extracts inhibit cancer cell growth by inducing caspase 3/7 activation with additional anti-inflammatory and antioxidant activities. BMC Complement Altern Med 18, 305 (2018). https://doi.org/10.1186/s12906-018-2372-9
Croton gratissimus
15-lipoxygenase
Cytotoxicity
Caspases
Is it possible to describe gravitons in curved backgrounds?
I've been studying Quantum Field Theory in Curved Spacetimes through Wald's Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics. On Section 4.7, he discusses how to generalize the theory built so far for a real scalar field to other quantum fields. He mentions that a straightforward generalization will work for any real, bosonic, linear field provided that
it has a well-posed initial value problem;
it is derivable from a Lagrangian.
He then mentions that well-posedness is not trivial to satisfy, since the straightforward generalizations to curved spacetime of fields of spin $s > 1$ are not well-posed (this is shown in pages 374-375 of Wald's General Relativity). Quoting page 375 of General Relativity, "for $s > 1$ there is no natural generalization to curved spacetime of the notion of a 'pure' massless spin $s$ field".
What I find particularly surprising in these remarks is that linearized gravity is described by a spin $s = 2$ field. Hence, if I've read these statements correctly, they imply that one cannot describe the "propagation of free gravitons" on curved backgrounds (where I use quotation marks because I do not mean to imply a particle interpretation, but rather a quantized perturbation).
In summary, is it possible to describe quantized linear perturbations of the gravitational field in a curved background?
general-relativity
quantum-gravity
qft-in-curved-spacetime
asked Aug 8, 2021 at 2:27
Níckolas Alves
If you are happy to treat GR as the low energy approximation of a full quantum theory of gravity, then the procedure is conceptually straightforward (but can quickly become technically very difficult).
Split the metric into a background plus perturbation, $g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}.$
Expand the Einstein-Hilbert action to quadratic order in $h_{\mu\nu}$ (or higher order if you want to look at more complicated diagrams).
Quantize the theory for $h_{\mu\nu}$ in your favorite formulation of quantum mechanics.
This approach can be used, for example, to compute Hawking radiation into gravitons by looking at metric fluctuations around a Schwarzschild background, or the power spectrum of primordial gravitational waves by looking at quantized metric fluctuations around a quasi-de Sitter background during inflation. (For completeness, I'll add that a very important subtlety when doing calculations with quantum theory on curved spacetime -- including both Hawking radiation and inflationary perturbations -- is that there is no obvious privileged vacuum state (unlike in flat spacetime, where there is a Poincaré-invariant ground state), and so you need to choose an appropriate state to call the vacuum state; however, this is not specific to spin-2 fields.)
If we expand the action to second order in $h$, then the Lagrangian has the form $\frac{1}{2} h_{\mu\nu} \bar{\mathcal{E}}^{\mu\nu\rho\sigma} h_{\rho\sigma}$, where $\bar{\mathcal{E}}^{\mu\nu\rho\sigma}$ is a second-order differential operator depending on the background metric $\bar{g}_{\mu\nu}$. The propagator (in some gauge) is the inverse of $\bar{\mathcal{E}}$. In the geometric optics limit (when the wavelength of the perturbation is small compared to the curvature scale), you will find that the solutions to the classical equations of motion follow null geodesics.
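To make the quadratic step concrete: on a Ricci-flat vacuum background ($\bar{R}_{\mu\nu}=0$) and in the transverse-traceless gauge ($\bar{\nabla}^\mu h_{\mu\nu}=0$, $h^\mu{}_\mu=0$), the equation of motion that follows from $\frac{1}{2} h_{\mu\nu} \bar{\mathcal{E}}^{\mu\nu\rho\sigma} h_{\rho\sigma}$ is the Lichnerowicz wave equation (up to textbook-dependent sign conventions):
$$\bar{\Box}\, h_{\mu\nu} + 2\,\bar{R}_{\mu\alpha\nu\beta}\, h^{\alpha\beta} = 0, \qquad \bar{\Box} \equiv \bar{g}^{\alpha\beta}\,\bar{\nabla}_\alpha \bar{\nabla}_\beta .$$
Around flat space the curvature term drops out and this reduces to the familiar massless spin-2 wave equation $\Box h_{\mu\nu} = 0$.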
The tricky part is going in the opposite direction. In other words, if you don't already know the Einstein-Hilbert action and you want to construct a theory of a spin-2 field, where would you start? In flat spacetime, we find unitary representations of the Poincaré group, which are labeled by mass and spin. Once you do that, it's possible to look for consistent interacting theories of different representations, and you find the only self-consistent interacting theory of a massless spin-2 particle (at low energies) is GR. It's much harder to define what this procedure means when you don't have Poincaré invariance in the background spacetime.
One approach that I like is Einstein-Cartan gravity (i.e., using the vielbein instead of the metric as the basic variable), where you can define a local Lorentz frame, and then the spin-2 field falls in the spin-2 representation of the Poincaré group in each local frame. In this approach, you can view gravity as being analogous to Yang-Mills, where the gauge group is the Poincaré group instead of $SU(N)$. (However, I am sure Wald was well aware of the Einstein-Cartan formulation at the time he wrote the passage you quoted.)
edited Aug 9, 2021 at 17:41
Andrew
@NíckolasAlves I think that's accurate. But even if it isn't straightforward, the "right answer" is still that it has to be GR. There are a lot of papers by Deser about this, here is an example: authors.library.caltech.edu/7595/1/DEScqg07.pdf
@NíckolasAlves Another key point is that the background should be on shell. If you don't have matter present, then this means $G_{\mu\nu}=0$, which implies $R_{\mu\nu}=0$ (for the background). However, you can always add matter if you want to consider more general backgrounds (for inflation you need to add an inflaton field to source the background expansion).
"the only self-consistent interacting theory of a massless spin-2 particle (at low energies) is GR" Why do generalizations such as $f(R)$, non-zero torsion etc. fail? Or are they precluded by our wanting mass/spin labels for Poincaré representations? – J.G.
@J.G. $f(R)$ can be rewritten by a change of variables in the "Einstein frame", where the gravity sector takes the form of the Einstein-Hilbert action and a scalar field with a normal kinetic term and potential. (The "cost" is that the matter is not minimally coupled to the Einstein-frame metric, but to an effective metric built out of a combination of the scalar field and the Einstein-frame metric.)
So one way of expressing the difference is that $f(R)$ is not just a theory of a massless spin-2 particle, but contains another scalar degree of freedom.
The effect of dural release on extended laminoplasty for the treatment of multi-level cervical myelopathy
Yuwei Li1,
Xiaoyun Yan1,
Wei Cui1,
Yonghui Zhang1 &
Cheng Li1
BMC Musculoskeletal Disorders volume 20, Article number: 181 (2019)
The effects of dural release during extended laminoplasty for the treatment of multi-level cervical myelopathy were explored and discussed.
Patients who underwent extended laminoplasty combined with dural release for the treatment of multi-level cervical myelopathy (35 cases, group A) were compared with patients who underwent simple extended laminoplasty (38 cases, group B). The JOA score, improvement rate, VAS score, distance of retroposition of the spinal cord and cervical lordosis were compared between the two groups.
Dural laceration occurred in five patients during surgery, three in group A and two in group B; cerebrospinal fluid leakage occurred in five patients, three in group A and two in group B. All patients were followed up for 10 to 48 months (mean 20.3 months). At the last follow-up, JOA scores and VAS scores were significantly improved in both groups compared with preoperative scores (p < 0.05). The improvement rate and JOA scores in group A were significantly higher than in group B, while VAS scores in group A were significantly lower than in group B (p < 0.05). There were no significant differences in cervical lordosis between the two groups at the last follow-up (p > 0.05), and the distance of retroposition of the spinal cord in group A was greater than in group B (p < 0.05). No reclosure of the opened lamina ('door') occurred during the follow-up period.
Dural release during extended laminoplasty can achieve greater retroposition of the spinal cord in multi-level cervical myelopathy and is more effective than simple extended laminoplasty.
Multi-level cervical myelopathy is a progressive disease that requires surgery for improvement; it is a potentially destructive nerve disorder resulting from spinal cord injury related to degeneration of the discs and other supporting spinal column structures [1]. In many patients, neurologic deterioration characterizes the natural history of cervical myelopathy; therefore, surgery is frequently advocated by surgeons [2, 3].
There are various surgical procedures used in the treatment of patients with multi-level cervical myelopathy, including cervical laminoplasty, cervical laminectomy, cervical laminectomy and fusion, anterior cervical discectomy and fusion, corpectomy, etc. [4,5,6,7]. Posterior cervical laminoplasty is one of the most effective methods for the treatment of multi-level cervical myelopathy, but it has drawbacks in some patients with ossification of the posterior longitudinal ligament, such as limited retroposition of the spinal cord and poor efficacy [8,9,10].
Few reports have examined whether dural release affects retroposition of the spinal cord and the efficacy of posterior extended laminoplasty for cervical myelopathy. Hence, we conducted a retrospective study to evaluate the effect of dural release on this surgery. We analyzed the data of 35 patients who underwent extended laminoplasty with dural release from September 2012 to December 2014, and compared them with the data of 38 patients who underwent simple extended laminoplasty from April 2011 to April 2012.
Thirty-five patients who underwent extended laminoplasty with dural release for the treatment of multi-level cervical myelopathy were assigned to group A, and thirty-eight patients who underwent simple extended laminoplasty were assigned to group B. There were no significant differences in sex, age, disease course, involved segments, comorbidities, preoperative cervical lordosis, Japanese Orthopaedic Association (JOA) score or visual analog scale (VAS) score between the two groups (p > 0.05) (Tables 1 and 2).
Table 1 Comparison of therapeutic effect assessment indexes between the two groups before and after surgery (x̄ ± s)
Table 2 Comparison of imaging indexes between the two groups (x̄ ± s)
Group A: There were 19 males and 16 females, aged from 25 to 77 years (mean 59.2 years). The disease course was 5–38 months (mean 13.8 months). Lesions involved C3/4 in 18 cases, C4/5 in 33 cases, C5/6 in 35 cases, and C6/7 in 17 cases. Six patients (17.1%) had hypertension, and eight patients (22.9%) had diabetes.
Group B: There were 20 males and 18 females, aged from 24 to 78 years (mean 61.3 years). The disease course was 6–37 months (mean 13.1 months). Lesions involved C3/4 in 25 cases, C4/5 in 37 cases, C5/6 in 38 cases, and C6/7 in 17 cases. Seven patients (18.4%) had hypertension, and eight patients (21%) had diabetes.
Inclusion criteria: 1. Patients met the diagnostic criteria for cervical myelopathy adopted at the second National Symposium on Cervical Spondylopathy [11], with progressive limb sensory, motor or sphincter dysfunction. 2. MRI and CT showed ossification of the posterior longitudinal ligament in C3–7 and multi-segment compression of the spinal cord.
Exclusion criteria: 1. Localized ossification of the posterior longitudinal ligament. 2. Loss of cervical lordosis or cervical kyphosis. 3. Cervical instability.
The surgeries in the two groups were performed by the same group of surgeons. Patients were placed in the prone position under general anesthesia, with the neck flexed to avoid skin folds at the back of the neck, reduce the overlap of adjacent laminae and widen the interlaminar space.
For simple extended laminoplasty, the C3–7 spinous processes were shortened and a hole was made at the base of each spinous process; the side with heavier symptoms was used as the open side. The outer cortical bone of the lamina was removed at the lateral border of the lamina to create a gutter serving as the door hinge. On the open side, the lamina was cut through and opened along the gutter. A suture was passed through the hole in the spinous process and anchored to the facet joint capsule and the tendon attachment point. The lamina was lifted open to about 60°, and the suture was knotted and fixed to the soft tissue of the articular process. The nerve root canal of the opened segment was expanded by 2–5 mm so that the nerve root had some mobility [12,13,14] (the nerve root was probed with a nerve dissector and could be moved slightly). A diagram of the spinal cord shift and expansion is shown in Fig. 1.
The diagram of spinal cord shift and expansion
For extended laminoplasty with dural release, after the single open-door procedure, the dura of the decompressed segments was explored and adhesive bands were released with a nerve dissector; tight bands were cut with meningeal scissors. The C4–6 nerve root canals on the open side were expanded by 2–5 mm, then the L-shaped hook of the nerve dissector was kept close to the ventral dura and carefully introduced into the nerve root canal to assess the degree of adhesion of the anterior dura and to separate the adhesions. Care was taken to avoid excessive manipulation of the dura. Soft tissue adhesions such as strips and cords were released. If dural calcification with osseous adhesion was encountered during surgery, the dorsal dural calcification was thoroughly decompressed; the ventral dural calcification was left untreated to avoid injury and tearing of the dura.
Postoperative treatment
A neck collar was worn for 8 weeks in both groups. From the second day after surgery, patients performed active/passive fist-making exercises of the upper extremities and functional training of the lower limbs, such as hip, knee and ankle flexion, to promote recovery of weight bearing and walking. One week later, patients were allowed to stand and mobilize out of bed.
Efficacy evaluation
JOA scores were used to evaluate nerve function, and the improvement rate was calculated. VAS scores for neck and shoulder pain were used to evaluate improvement in neck and shoulder pain.
$$ \text{Improvement rate} = \frac{\text{Follow-up JOA score} - \text{Preoperative JOA score}}{17 - \text{Preoperative JOA score}} \times 100\% $$
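As a minimal sketch of this calculation (the function name and the example scores are hypothetical; 17 is the maximum JOA score):

def joa_improvement_rate(preop, followup, max_score=17):
    # Improvement rate (%) as defined in the formula above.
    return (followup - preop) / (max_score - preop) * 100

print(joa_improvement_rate(9.0, 14.0))  # -> 62.5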
Twelve months after surgery, MRI was performed to measure the distance of retroposition of the spinal cord. Midline sagittal T2-weighted images were selected, and Zoomagic software (Apps Rocket, England) was used to measure the distance between the midpoint of the posterior vertebral wall and the posterior margin of the spinal cord in each segment. The difference between the preoperative and postoperative values was calculated as the distance of retroposition of the spinal cord for each segment, and the mean value across all segments was taken as the distance of retroposition of the whole spinal cord.
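A minimal sketch of that per-segment calculation, with invented measurements (in mm) for one patient:

import numpy as np

# Distance from the posterior vertebral wall midpoint to the posterior margin of
# the spinal cord, per segment (C3/4..C6/7), before and after surgery (hypothetical, mm).
preop = np.array([6.1, 5.8, 5.5, 6.0])
postop = np.array([8.0, 8.1, 7.6, 7.4])

per_segment_shift = postop - preop            # retroposition per segment
whole_cord_shift = per_segment_shift.mean()   # mean over all segments
print(per_segment_shift, whole_cord_shift)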
Lateral cervical X-rays were taken before and after surgery, and Zoomagic software was used to measure the angle between the tangents to the posterior walls of C2 and C7, i.e., the cervical lordosis (Fig. 2) [15].
The angle between C2 and C7 was measured as "cervical lordosis"
Transverse cervical CT was performed before and after surgery to determine whether ossification of the posterior longitudinal ligament was present preoperatively, the opening angle of the vertebral lamina, and whether the opened lamina ('door') had reclosed after surgery.
We used SPSS 16.0 for the data analysis. Measurement data are expressed as mean ± standard deviation. Normality was tested using the Kolmogorov-Smirnov test. Comparisons between groups were analyzed by independent-sample t-test, and preoperative versus postoperative comparisons within groups by paired-sample t-test; test level α = 0.05.
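A minimal sketch of this analysis pipeline with SciPy (the JOA score arrays below are invented):

import numpy as np
from scipy import stats

group_a_joa = np.array([14.5, 13.8, 15.0, 14.2, 13.6])  # hypothetical follow-up scores
group_b_joa = np.array([12.9, 13.1, 12.5, 13.4, 12.8])

# Normality check: Kolmogorov-Smirnov against a normal distribution fitted to the sample
ks_stat, ks_p = stats.kstest(group_a_joa, 'norm',
                             args=(group_a_joa.mean(), group_a_joa.std(ddof=1)))

# Between-group comparison: independent-sample t-test
t_ind, p_ind = stats.ttest_ind(group_a_joa, group_b_joa)

# Within-group pre/post comparison: paired-sample t-test
preop = np.array([9.1, 8.7, 9.5, 9.0, 8.9])
t_rel, p_rel = stats.ttest_rel(group_a_joa, preop)
print(p_ind < 0.05, p_rel < 0.05)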
All patients underwent the operation successfully. Dural laceration occurred in five patients during surgery, three in group A and two in group B. In group A, the tears were located anterolaterally on the dura and were left unsutured; when the incision was closed, all tissue layers were tightly sutured. In group B, the tears were located on the dorsal side of the dura, and 5–0 nylon suture was used to repair them. Cerebrospinal fluid leakage occurred in five patients, three in group A and two in group B; they recovered after a series of treatments, including local sandbag compression, reverse Trendelenburg prone positioning, and electrolyte replacement. Complications such as incision infection and C5 nerve root palsy did not occur in either group after surgery.
All patients were followed up for 10 to 48 months (mean 20.3 months). At the last follow-up, JOA scores and VAS scores were significantly improved in both groups compared with preoperative scores (p < 0.05). The improvement rate and JOA scores in group A were significantly higher than in group B, while VAS scores in group A were significantly lower than in group B (p < 0.05) (Table 1).
There were no significant differences in cervical lordosis between the two groups at the last follow-up (p > 0.05), and the distance of retroposition of the spinal cord in group A was greater than in group B at the last follow-up (t = 7.256, p < 0.001) (Table 2). No reclosure of the opened lamina occurred during follow-up (Fig. 3).
The shift of the spinal cord in group A and group B. a The spinal cord of a 53-year-old man in group A who had C3–7 cervical myelopathy. b The spinal cord of a 41-year-old man in group B who had C3–7 cervical myelopathy
There are two main decompression principles underlying posterior extended laminoplasty for cervical myelopathy [15,16,17,18,19]. One is to directly remove the compression of the spinal cord; the other is to achieve retroposition of the spinal cord by the 'bow string' principle, moving the cord away from anterior compression by the intervertebral disc, osteophytes, and the hypertrophic or ossified posterior longitudinal ligament. However, retroposition of the spinal cord is restricted by many factors, including cervical lordosis, whether the nerve root canal is expanded, and the opening angle of the vertebral lamina [20,21,22]. Few reports have examined whether dural release affects retroposition of the spinal cord and the efficacy of posterior extended laminoplasty for cervical myelopathy. Hence, we conducted a retrospective study to evaluate the effect of dural release on this surgery.
MRI has high resolution in soft tissues and is safe, non-invasive and repeatable, allowing direct observation of the bony and non-bony structures of the spinal cord and spine. T2-weighted images clearly show the margin and morphology of the spinal cord and vertebrae, and sagittal T2WI clearly shows cervical lordosis, making it a good way to measure the distance of retroposition of the spinal cord after posterior extended laminoplasty for cervical myelopathy. In this study, we measured the distance of retroposition of the spinal cord on median sagittal T2WI. Radcliff et al. [23] considered the distance of retroposition of the spinal cord to be related to nerve recovery. However, Tashjian et al. [24] found that the distance of retroposition of the spinal cord had no relationship with the improvement rate after laminectomy for cervical myelopathy, but was related to individual factors such as age and the degree of cervical spondylosis. In our opinion, the inconsistencies in these reports are related to age, disease course, preoperative JOA score, MRI signal changes, the area of spinal cord compression, and surgical methods and techniques [25]; retroposition of the spinal cord is only one of these factors. The surgeries in both groups were performed by the same group of physicians, and only group A underwent dural release. The results showed that the distance of retroposition of the spinal cord, JOA score, improvement rate and VAS score in group A were significantly better than in group B.
Dural release during extended laminoplasty has several advantages. When the sagittal diameter of the cervical spinal canal was expanded, the restraining stress of adhesive tissue (bands posterior to the dura) on the dura was removed, which was beneficial for the retroposition of the dura and spinal cord. In addition, part of the adhesive tissue anterior to the spinal cord could be released, further favoring retroposition of the spinal cord. When the anterolateral dura was visible on the open side, the C4/6 nerve root canals were expanded to allow retroposition of the spinal cord [26].
However, this technique has some disadvantages. It can only release soft tissue adhesions, such as bands and scar, and carries a risk of dural laceration when bony adhesions with dural calcification are released; bony adhesions anterior to the dura cannot be released. Because the C4/6 nerve roots need to be exposed, the risk of intravertebral venous plexus hemorrhage is increased and the operation time is prolonged. Meanwhile, if there is bony adhesion with dural calcification, the dorsal side of the dural calcification should be completely decompressed, with the decompression area larger than the calcification area; the ventral calcification is left untreated to avoid injury or dural laceration. Whether to perform anterior cervical decompression with floating of the calcified lesion depends on the postoperative recovery of spinal cord function.
For multi-level cervical myelopathy, sufficient dural release during extended laminoplasty is beneficial for retroposition of the spinal cord and can improve the curative effect. This was a retrospective study with a small sample size. Long-term follow-up of large samples and multivariate analyses are needed to further clarify the effect of dural release on efficacy.
JOA score: Japanese Orthopaedic Association score
VAS score: Visual analog scale
Ashana AO, et al. Regression of anterior disc-osteophyte complex following cervical laminectomy and fusion for cervical Spondylotic myelopathy. Clin Spine Surg. 2017;30(5):E609–E614.
Karadimas SK, et al. The pathophysiology and natural history of cervical spondylotic myelopathy. Spine. 2013;38(22 Suppl 1):S21–S36.
Mummaneni PV, et al. Cervical surgical techniques for the treatment of cervical spondylotic myelopathy. Journal of Neurosurgery Spine. 2009;11(2):130–41.
Fraser JF, Härtl R. Anterior approaches to fusion of the cervical spine: a metaanalysis of fusion rates. Journal of Neurosurgery Spine. 2007;6(4):298.
Rhee JM, Basra S. Posterior surgery for cervical myelopathy: laminectomy, laminectomy with fusion, and laminoplasty. Asian Spine Journal. 2008;2(2):114–26.
Sekhon LH. Posterior cervical decompression and fusion for circumferential spondylotic cervical stenosis: review of 50 consecutive cases. J Clin Neurosci. 2006;13(1):23–30.
Heller JG, Murakami H, Rodts GE. Laminoplasty versus laminectomy and fusion for multilevel cervical myelopathy: an independent matched cohort analysis. Spine. 2001;26(12):1330–6.
Wang SJ, et al. Axial pain after posterior cervical spine surgery: a systematic review. Eur Spine J. 2011;20(2):185–94.
Manzano GR, et al. A prospective, randomized trial comparing expansile cervical laminoplasty and cervical laminectomy and fusion for multilevel cervical myelopathy. Neurosurgery. 2012;70(2):264–77.
Seichi A, et al. Neurological complications of cervical laminoplasty for patients with ossification of the posterior longitudinal ligament-a multi-institutional retrospective study. Spine. 2011;36(15):E998–E1003.
Hu Y, et al. Relationship between cervical spondylotic myelopathy and cervical spinal canal stenosis and its nomenclature. Chinese J Spine and Spinal Cord. 2003;13(4):203–4.
Tanaka N, et al. Expansive laminoplasty for cervical myelopathy with interconnected porous calcium hydroxyapatite ceramic spacers: comparison with autogenous bone spacers. J Spinal Disord Tech. 2008;21(21):547–52.
Lee DG, et al. Comparison of surgical outcomes after cervical laminoplasty: open-door technique versus French-door technique. J Spinal Disord Tech. 2013;26(6):E198–203.
Cabraja M, et al. Comparison between anterior and posterior decompression with instrumentation for cervical spondylotic myelopathy: sagittal alignment and clinical outcome. Neurosurg Focus. 2010;28(3):E15.
Tian W, Yu J. The role of C2-C7 angle in the development of dysphagia after anterior and posterior cervical spine surgery. Clin Spine Surg. 2017;30(9):E1306–E1314.
Chibbaro S, et al. Multilevel oblique corpectomy without fusion in managing cervical myelopathy: long-term outcome and stability evaluation in 268 patients. Journal of Neurosurgery Spine. 2009;10(5):458.
Ding H, et al. Laminoplasty and laminectomy hybrid decompression for the treatment of cervical spondylotic myelopathy with hypertrophic ligamentum flavum: a retrospective study. PLoS One. 2014;9(4):e95482.
Rhee JM, et al. Plate-only open door laminoplasty maintains stable spinal canal expansion with high rates of hinge union and no plate failures. Spine. 2011;36(1):9–14.
Hirabayashi S, et al. Comparison of enlargement of the spinal canal after cervical laminoplasty: open-door type and double-door type. Eur Spine J. 2010;19(10):1690–4.
Qingquan K, et al. Effect of the decompressive extent on the magnitude of the retroposition of the spinal cord after expansive open-door laminoplasty. Spine. 2010;36(13):1030–6.
Shiozaki T, et al. Retroposition of the spinal cord on magnetic resonance imaging at 24 hours after cervical laminoplasty. Spine. 2009;34(3):274–9.
Zhang H, et al. Effect of lamina open angles in expansion open-door laminoplasty on the clinical results in treating cervical spondylotic myelopathy. J Spinal Disord Tech. 2012;28(3):89.
Radcliff KE, et al. Cervical laminectomy width and spinal cord drift are risk factors for postoperative C5 palsy. J Spinal Disord Tech. 2014;27(2):86.
Tashjian VS, et al. The relationship between preoperative cervical alignment and postoperative spinal cord drift after decompressive laminectomy and arthrodesis for cervical spondylotic myelopathy. Surgical neurology. 2009;72(2):112–7.
Matsumoto M, et al. Risk factors for closure of lamina after open-door laminoplasty. Journal of Neurosurgery Spine. 2008;9(6):530–7.
Lee CK, et al. Correlation between cervical spine sagittal alignment and clinical outcome after cervical laminoplasty for ossification of the posterior longitudinal ligament. J Neurosurgery Spine. 2015;24(1):1–8.
The data and materials contributing to this article may be made available upon request by sending an e-mail to the first author.
Department of Orthopedics, Luohe Central Hospital of Orthopedics, No. 54, People's Road, Luohe City, 462000, Henan Province, China
Yuwei Li, Xiaoyun Yan, Wei Cui, Yonghui Zhang & Cheng Li
YL put forward the concept of the study, designed the study, prepared the manuscript and contributed to the statistical analysis. XY contributed to the data acquisition. WC contributed to the quality control of data and algorithms. YZ analyzed the data and interpretation. CL edited the manuscript. All authors have read and approved the final version of the manuscript.
Correspondence to Yuwei Li.
The study protocol was approved by the ethics committee of Luohe Central Hospital of Orthopedics. The patients gave their written informed consent for the study.
Li, Y., Yan, X., Cui, W. et al. The effect of dural release on extended laminoplasty for the treatment of multi-level cervical myelopathy. BMC Musculoskelet Disord 20, 181 (2019). https://doi.org/10.1186/s12891-019-2554-8
Dural release
Extended laminoplasty
Multi-level cervical myelopathy
LO14: Interdepartmental program to improve outcomes for acute heart failure patients seen in the emergency department
I. Stiell, M. Taljaard, A. Forster, L. Mielniczuk, G. Wells, G. Hebert, H. Clark, C. Clement, J. Brinkhurst, C. Sheehan, E. Brown, M. Nemnom, J. Perry
Journal: Canadian Journal of Emergency Medicine / Volume 22 / Issue S1 / May 2020
Published online by Cambridge University Press: 13 May 2020, pp. S11-S12
Print publication: May 2020
Introduction: An important challenge physicians face when treating acute heart failure (AHF) patients in the emergency department (ED) is deciding whether to admit or discharge, with or without early follow-up. The overall goal of our project was to improve care for AHF patients seen in the ED while avoiding unnecessary hospital admissions. The specific goal was to introduce hospital rapid referral clinics to ensure AHF patients were seen within 7 days of ED discharge.

Methods: This prospective before-after study was conducted at two campuses of a large tertiary care hospital, including the EDs and specialty outpatient clinics. We enrolled AHF patients ≥50 years who presented to the ED with shortness of breath (<7 days). The 12-month before (control) period was separated from the 12-month after (intervention) period by a 3-month implementation period. Implementation included creation of rapid access AHF clinics staffed by cardiology and internal medicine, and development of referral procedures. There was extensive in-servicing of all ED staff. The primary outcome measure was hospital admission at the index visit or within 30 days. Secondary outcomes included mortality and actual access to rapid follow-up. We used segmented autoregression analysis of the monthly proportions to determine whether there was a change in admissions coinciding with the introduction of the intervention and estimated a sample size of 700 patients.

Results: The patients in the before period (N = 355) and the after period (N = 374) were similar for age (77.8 vs. 78.1 years), arrival by ambulance (48.7% vs 51.1%), comorbidities, current medications, and need for non-invasive ventilation (10.4% vs. 6.7%). Comparing the before to the after periods, we observed a decrease in hospital admissions on index visit (from 57.7% to 42.0%; P < 0.01), as well as all admissions within 30 days (from 65.1% to 53.5%; P < 0.01). The autoregression analysis, however, demonstrated a pre-existing trend to fewer admissions and could not attribute this to the intervention (P = 0.91). Attendance at a specialty clinic amongst those discharged increased from 17.8% to 42.1% (P < 0.01) and the median days to clinic decreased from 13 to 6 days (P < 0.01). 30-day mortality did not change (4.5% vs. 4.0%; P = 0.76).

Conclusion: Implementation of rapid-access dedicated AHF clinics led to considerably increased access to specialist care, much reduced follow-up times, and a possible reduction in hospital admissions. Widespread use of this approach can improve AHF care in Canada.
Maternal dietary quality, inflammatory potential and offspring adiposity throughout childhood: a pooled analysis of 7 European cohorts (ALPHABET consortium)
Ling-Wei Chen, Adrien Aubert, Jonathan Y. Bernard, Cyrus Cooper, Liesbeth Duijts, Aisling A. Geraghty, Nicholas C. Harvey, James R. Hebert, Barbara Heude, Cecily C. Kelleher, Fionnuala M. McAuliffe, John Mehegan, Rosalie Mensink-Bout, Kinga Polanska, Caroline L. Relton, Nitin Shivappa, Matthew Suderman, Catherine M Phillips
Journal: Proceedings of the Nutrition Society / Volume 79 / Issue OCE2 / 2020
Published online by Cambridge University Press: 10 June 2020, E155
The foetal programming hypothesis posits that optimising early life factors e.g. maternal diets can help avert the burden of adverse childhood outcomes e.g. childhood obesity. To improve applicability to public health messaging, we investigated whether maternal whole diet quality and inflammatory potential influence childhood adiposity in a large consortium.
We harmonized and pooled individual participant data from up to 8,769 mother-child pairs in 7 European mother-offspring cohorts. Maternal early-, late-, and whole-pregnancy dietary quality and inflammatory potential were assessed with Dietary Approaches to Stop Hypertension (DASH) and energy-adjusted Dietary Inflammatory Index (E-DII), respectively. Primary outcome was childhood overweight and obesity (OWOB), defined as age- and sex-specific body-mass-index-z score (BMIz) > 85th percentile based on WHO growth standard. Secondary outcomes were sum-of-skinfold-thickness (SST), fat-mass-index (FMI) and fat-free-mass-index (FFMI) in available cohorts. Outcomes were assessed in early- [mean (SD) age: 2.8 (0.3) y], mid- [6.2 (0.6) y], and late-childhood [10.6 (1.2) y]. We used multivariable regression analyses to assess the associations of maternal E-DII and DASH with offspring adiposity outcomes in cohort-specific analyses, with subsequent random-effects meta-analyses. Analyses were adjusted for maternal age, pre-pregnancy BMI, parity, lifestyle factors, energy intake, educational attainment, offspring age and sex.
A more pro-inflammatory maternal diet, indicated by higher E-DII, was associated with a higher risk of offspring late-childhood OWOB [pooled-OR (95% CI) comparing highest vs. lowest E-DII quartiles: 1.22 (1.01,1.47) for whole-pregnancy and 1.38 (1.05,1.83) for early-pregnancy; both P < 0.05]. Moreover, higher late-pregnancy E-DII was associated with higher mid-childhood FMI [pooled-β (95% CI): 0.11 (0.003,0.22) kg/m2; P < 0.05]; trending association was observed for whole-pregnancy E-DII [0.12 (-0.01,0.25) kg/m2; P = 0.07]. A higher maternal dietary quality, indicated by higher DASH score, showed a trending inverse association with late-childhood OWOB (pooled-OR (95% CI) comparing highest vs. lowest DASH quartiles: 0.58 (0.32,1.02; P = 0.06). Higher early-pregnancy DASH was associated with lower late-childhood SST [pooled-β (95% CI): -1.9 (-3.6,-0.1) cm; P < 0.05] and tended to be associated with lower late-childhood FMI [-0.34 (-0.71,0.04) kg/m2; P = 0.08]. Higher whole-pregnancy DASH tended to associate with lower early-childhood SST [-0.33 (-0.72,0.06) cm; P = 0.10]. Results were similar when modelling DASH and E-DII continuously.
Analysis of pooled data suggests that pro-inflammatory, low-quality maternal antenatal diets may influence offspring body composition and obesity risk, especially during mid- or late-childhood. Due to variation of data availability at each timepoint, our results should be interpreted with caution. Because most associations were observed at mid-childhood or later, future studies will benefit from a longer follow-up.
MP14: Use of conventional cardiac troponin assay for diagnosis of non-ST-elevation myocardial infarction: 'The Ottawa Troponin Pathway'
V. Thiruganasambandamoorthy, I. Stiell, H. Chaudry, M. Mukarram, R. Booth, C. Toarta, G. Hebert, R. Beanlands, G. Wells, M. Nemnom, M. Taljaard
Published online by Cambridge University Press: 02 May 2019, p. S47
Introduction: Guidelines recommend serial conventional cardiac troponin (cTn) measurements 6-9 hours apart for non-ST-elevation myocardial infarction (NSTEMI) diagnosis. We sought to develop a pathway based on absolute/relative changes between two serial conventional cardiac troponin I (cTnI) values 3 hours apart for 15-day MACE identification.

Methods: This was a prospective cohort study conducted in the two large EDs at the Ottawa Hospital. Adults with NSTEMI symptoms were enrolled over 32 months. Patients with STEMI, hospitalized for unstable angina, or with only one cTnI were excluded. We collected baseline characteristics, Siemens Vista cTnI at 0 and 3 hours after ED presentation, disposition, and ED length of stay (LOS). The adjudicated primary outcome was 15-day MACE (AMI, revascularization, or death due to cardiac ischemia/unknown cause). We analysed cTnI values by 99th percentile cut-off multiples (45, 100 and 250 ng/L).

Results: 1,683 patients (mean age 64.7 years; 55.3% female; median ED LOS 7 hours; 88 patients with 15-day MACE) were included. 1,346 (80.0%) patients with both cTnI ≤45 ng/L, and 58 (3.4%) of the 213 patients with one value ≥100 ng/L but both <250 ng/L or ≤20% change, did not suffer MACE. Among 124 patients (7.4%) with one value >45 ng/L but both <100 ng/L based on 3- or 6-hour cTnI, one patient with Δ<10 ng/L and 6 of 19 patients with Δ≥20 ng/L were diagnosed with NSTEMI (patients with Δ10-19 ng/L between the first and second cTnI had a third one at 6 hours). Based on the results, we developed the Ottawa Troponin Pathway (OTP) with a 98.9% sensitivity (95% CI 96.7-100%) and 94.6% specificity (95% CI 93.4-95.7%).

Conclusion: The OTP, using two conventional cTnI measurements performed 3 hours apart, should lead to better identification of NSTEMI, particularly in those with values above the 99th percentile cut-off, standardize management and reduce ED LOS.
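As a rough illustration only, the thresholds quoted in this abstract can be turned into a toy triage function. This is reconstructed from the numbers above, not from the published Ottawa Troponin Pathway figure, so every branch should be treated as an assumption.

def otp_triage(ctni_0h, ctni_3h):
    # Toy reconstruction of the troponin thresholds quoted in the abstract (ng/L).
    # Not the published pathway; the branch logic here is an assumption.
    peak = max(ctni_0h, ctni_3h)
    delta = abs(ctni_3h - ctni_0h)
    if peak <= 45:
        return "low risk (both values <= 45 ng/L)"
    if peak < 100:  # one value > 45 ng/L but both < 100 ng/L
        if delta < 10:
            return "low risk (delta < 10 ng/L)"
        if delta >= 20:
            return "high risk (delta >= 20 ng/L)"
        return "indeterminate: repeat cTnI at 6 h (delta 10-19 ng/L)"
    # at least one value >= 100 ng/L
    rel_change = delta / max(ctni_0h, 1e-9) * 100
    if peak < 250 or rel_change <= 20:
        return "lower risk despite elevation (both < 250 ng/L or <= 20% change)"
    return "high risk: manage as possible NSTEMI"

print(otp_triage(30, 38))  # -> "low risk (both values <= 45 ng/L)"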
Dietary inflammatory index in relation to sub-clinical atherosclerosis and atherosclerotic vascular disease mortality in older women
Nicola P. Bondonno, Joshua R. Lewis, Lauren C. Blekkenhorst, Nitin Shivappa, Richard J. Woodman, Catherine P. Bondonno, Natalie C. Ward, James R. Hébert, Peter L. Thompson, Richard L. Prince, Jonathan M. Hodgson
Journal: British Journal of Nutrition / Volume 117 / Issue 11 / 14 June 2017
Published online by Cambridge University Press: 04 July 2017, pp. 1577-1586
Print publication: 14 June 2017
Arterial wall thickening, stimulated by low-grade systemic inflammation, underlies many cardiovascular events. As diet is a significant moderator of systemic inflammation, the dietary inflammatory index (DII™) has recently been devised to assess the overall inflammatory potential of an individual's diet. The primary objective of this study was to assess the association of the DII with common carotid artery–intima-media thickness (CCA–IMT) and carotid plaques. To substantiate the clinical importance of these findings, we assessed the relationship of DII score with atherosclerotic vascular disease (ASVD)-related mortality, ischaemic cerebrovascular disease (CVA)-related mortality and ischaemic heart disease (IHD)-related mortality. The study was conducted in Western Australian women aged over 70 years (n 1304). Dietary data derived from a validated FFQ (completed at baseline) were used to calculate a DII score for each individual. In multivariable-adjusted models, DII scores were associated with sub-clinical atherosclerosis: a 1 sd (2·13 units) higher DII score was associated with a 0·013-mm higher mean CCA–IMT (P=0·016) and a 0·016-mm higher maximum CCA–IMT (P=0·008), measured at 36 months. No relationship was seen between DII score and carotid plaque severity. There were 269 deaths during follow-up. High DII scores were positively associated with ASVD-related death (per sd, hazard ratio (HR): 1·36; 95 % CI 1·15, 1·60), CVA-related death (per sd, HR: 1·30; 95 % CI 1·00, 1·69) and IHD-related death (per sd, HR: 1·40; 95 % CI 1·13, 1·75). These results support the hypothesis that a pro-inflammatory diet increases systemic inflammation, leading to the development and progression of atherosclerosis and eventual ASVD-related death.
Fat mass obesity-associated (FTO) (rs9939609) and melanocortin 4 receptor (MC4R) (rs17782313) SNP are positively associated with obesity and blood pressure in Mexican school-aged children
Pablo García-Solís, Marissa Reyes-Bastidas, Karla Flores, Olga P. García, Jorge L. Rosado, Lorena Méndez-Villa, Carlota Garcia-G, David García-Gutiérrez, Aarón Kuri-García, Hebert L. Hernández-Montiel, Ofelia Soriano-Leon, Maria Elena Villagrán-Herrera, Juan C. Solis-Sainz
Journal: British Journal of Nutrition / Volume 116 / Issue 10 / 28 November 2016
Published online by Cambridge University Press: 10 November 2016, pp. 1834-1840
Print publication: 28 November 2016
Childhood overweight and obesity are worldwide public health problems and risk factors for chronic diseases. The presence of SNP in several genes has been associated with the presence of obesity. A total of 580 children (8–13 years old) from Queretaro, Mexico, participated in this cross-sectional study, which evaluated the associations of rs9939609 (fat mass obesity-associated (FTO)), rs17782313 (melanocortin 4 receptor (MC4R)) and rs6548238 (transmembrane protein 18 (TMEM18)) SNP with obesity and metabolic risk factors. Overweight and obesity prevalence was 19·8 and 19·1 %, respectively. FTO, MC4R and TMEM18 risk allele frequency was 17, 9·8 and 89·5 %, respectively. A significant association between FTO homozygous and MC4R heterozygous risk alleles and obesity was found (OR 3·9; 95 % CI 1·46, 10·22, and OR 2·1; 95 % CI 1·22, 3·71; respectively). The FTO heterozygous subjects showed higher systolic and diastolic blood pressures, compared with the homozygous for the ancestral allele subjects. These results remain significant after considering adiposity as a covariate. The FTO and MC4R genotypes were not significantly associated with total cholesterol, HDL-cholesterol and insulin concentration. No association was found between TMEM18 risk allele and obesity and/or metabolic alterations. Our results show that, in addition to a higher BMI, there is also an association of the risk genotype with blood pressure in the presence of the FTO risk genotype. The possible presence of a risk genotype in obese children must be considered to offer a more comprehensive therapeutic approach in order to delay and/or prevent the development of chronic diseases.
Electron Energy-Loss Spectroscopy and Energy-Filtered TEM Imaging for the in situ Assessment of Reduction-Oxidation Reactions in Ni-Based Solid Oxide Fuel Cells
Q. Jeangros, A.B. Aebersold, T.W. Hansen, J.B. Wagner, R.E. Dunin-Borkowski, C. Hébert, J. Van herle, A. Hessler-Wyser
Journal: Microscopy and Microanalysis / Volume 22 / Issue S3 / July 2016
High Temperature Stability of Amorphous Zn-Sn-O Transparent Conductive Oxides Investigated by In Situ TEM and X-ray Diffraction
Q. Jeangros, M. Duchamp, E. Rucavado, F. Landucci, C. Spori, R.E. Dunin-Borkowski, C. Hébert, M. Morales-Masis, C. Ballif, A. Hessler-Wyser
Severe Influenza in 33 US Hospitals, 2013–2014: Complications and Risk Factors for Death in 507 Patients
Nirav S. Shah, Jared A. Greenberg, Moira C. McNulty, Kevin S. Gregg, James Riddell, Julie E. Mangino, Devin M. Weber, Courtney L. Hebert, Natalie S. Marzec, Michelle A. Barron, Fredy Chaparro-Rojas, Alejandro Restrepo, Vagish Hemmige, Kunatum Prasidthrathsint, Sandra Cobb, Loreen Herwaldt, Vanessa Raabe, Christopher R. Cannavino, Andrea Green Hines, Sara H. Bares, Philip B. Antiporta, Tonya Scardina, Ursula Patel, Gail Reid, Parvin Mohazabnia, Suresh Kachhdiya, Binh-Minh Le, Connie J. Park, Belinda Ostrowsky, Ari Robicsek, Becky A. Smith, Jeanmarie Schied, Micah M. Bhatti, Stockton Mayer, Monica Sikka, Ivette Murphy-Aguilu, Priti Patwari, Shira R. Abeles, Francesca J. Torriani, Zainab Abbas, Sophie Toya, Katherine Doktor, Anindita Chakrabarti, Susanne Doblecki-Lewis, David J. Looney, Michael Z. David
Journal: Infection Control & Hospital Epidemiology / Volume 36 / Issue 11 / November 2015
Influenza A (H1N1) pdm09 became the predominant circulating strain in the United States during the 2013–2014 influenza season. Little is known about the epidemiology of severe influenza during this season.
A retrospective cohort study of severely ill patients with influenza infection in intensive care units in 33 US hospitals from September 1, 2013, through April 1, 2014, was conducted to determine risk factors for mortality present on intensive care unit admission and to describe patient characteristics, spectrum of disease, management, and outcomes.
A total of 444 adults and 63 children were admitted to an intensive care unit in a study hospital; 93 adults (20.9%) and 4 children (6.3%) died. By logistic regression analysis, the following factors were significantly associated with mortality among adult patients: older age (>65 years, odds ratio, 3.1 [95% CI, 1.4–6.9], P=.006 and 50–64 years, 2.5 [1.3–4.9], P=.007; reference age 18–49 years), male sex (1.9 [1.1–3.3], P=.031), history of malignant tumor with chemotherapy administered within the prior 6 months (12.1 [3.9–37.0], P<.001), and a higher Sequential Organ Failure Assessment score (for each increase by 1 in score, 1.3 [1.2–1.4], P<.001).
Risk factors for death among US patients with severe influenza during the 2013–2014 season, the first postpandemic season in which influenza A (H1N1) pdm09 was the predominant circulating strain, shifted toward those of a more typical epidemic influenza season.
Infect. Control Hosp. Epidemiol. 2015;36(11):1251–1260
Optimization of the Data Acquisition and Processing using a Prior Knowledge of the Camera Characteristics: an EFTEM Case Study
G. Lucas, C. Hébert
Journal: Microscopy and Microanalysis / Volume 20 / Issue S3 / August 2014
Published online by Cambridge University Press: 27 August 2014, pp. 788-789
Print publication: August 2014
Austenite formation in a ferrite/martensite cold-rolled microstructure during annealing of advanced high-strength steels
C. Philippot, J. Drillet, P. Maugis, V. Hebert, M. Dumont
Journal: Revue de Métallurgie – International Journal of Metallurgy / Volume 111 / Issue 1 / 2014
Published online by Cambridge University Press: 14 February 2014, pp. 3-8
From a ferrite/martensite cold-rolled microstructure, the interaction between ferrite recrystallization and austenite formation is investigated. It is observed that a slow heating rate promotes the ferrite recrystallization and a homogeneous microstructure, whereas a fast heating rate delays the recrystallization and leads to heterogeneously distributed austenite islands.
Oxidation of nickel particles in an environmental TEM
Q. Jeangros, T. Hansen, J. Wagner, R. Dunin-Borkowski, C. Hebert, J. Van herle, A. Hessler-Wyser
Published online by Cambridge University Press: 09 October 2013, pp. 512-513
Extended abstract of a paper presented at Microscopy and Microanalysis 2013 in Indianapolis, Indiana, USA, August 4 – August 8, 2013.
Quantitative, 3D Studies of the Evolution of Grain Size and Orientation in Nano- grained, Polycrystalline Thin-Films
A.B. Aebersold, C. Hébert, D.T.L. Alexander
Shock waves in microchannels
G. Mirshekari, M. Brouillette, J. Giordano, C. Hébert, J.-D. Parisse, P. Perrier
Journal: Journal of Fluid Mechanics / Volume 724 / 10 June 2013
Published online by Cambridge University Press: 29 April 2013, pp. 259-283
A fully instrumented microscale shock tube, believed to be the smallest to date, has been fabricated and tested. This facility is used to study the transmission of a shock wave, produced in a large (37 mm) shock tube, into a 34 μm hydraulic diameter and 2 mm long microchannel. Pressure microsensors of a novel design, with gigahertz bandwidth, are used to obtain pressure–time histories of the microchannel shock wave at five axial stations. In all cases the transmitted shock wave is found to be weaker than the incident shock wave, and is observed to decay both in pressure and velocity as it propagates down the microchannel. These results are compared with various analytical and numerical models, and the best agreement is obtained with a Navier–Stokes computational fluid dynamics computation, which assumes a no-slip isothermal wall boundary condition; good agreement is also obtained with a simple shock tube laminar boundary layer model. It is also found that the flow developing within the microchannel is highly dependent on conditions at the microchannel entrance, which control the mass flux entering into the device. Despite the micrometre dimensions of the present facility, shock wave propagation in a microchannel of that scale exhibits behaviour similar to that observed in large-scale facilities operated at low pressures, and the shock attenuation can be explained in terms of accepted laminar boundary layer models.
Nickel oxide reduction studied by environmental TEM and in situ XRD
Q. Jeangros, C. Hébert, A. Hessler-Wyser, T.W. Hansen, J.B. Wagner, C.D. Damsgaard, R.E. Dunin-Borkowski
Extended abstract of a paper presented at Microscopy and Microanalysis 2012 in Phoenix, Arizona, USA, July 29 – August 2, 2012.
Multivariate Statistical Analysis tool for the interpretation and the quantification of hyperspectral data: application to 3D EDX/FIB images
G. Lucas, P. Burdet, M. Cantoni, C. Hébert
3D EDX microanalysis by FIB-SEM: Elemental quantification enhancement
P. Burdet, M. Cantoni, C. Hébert
Published online by Cambridge University Press: 23 November 2012, pp. 526-527
Demonstration of the Weighted-Incidence Syndromic Combination Antibiogram: An Empiric Prescribing Decision Aid
Courtney Hebert, Jessica Ridgway, Benjamin Vekhter, Eric C. Brown, Stephen G. Weber, Ari Robicsek
Journal: Infection Control & Hospital Epidemiology / Volume 33 / Issue 4 / April 2012
Published online by Cambridge University Press: 02 January 2015, pp. 381-388
Print publication: April 2012
Healthcare providers need a better empiric antibiotic prescribing aid than the traditional antibiogram, which supplies no information on the relative frequency of organisms recovered in a given infection and which is uninformative in situations where multiple antimicrobials are used or multiple organisms are anticipated. We aimed to develop and demonstrate a novel empiric prescribing decision aid.
Design/Setting.
This is a demonstration involving more than 9,000 unique encounters for abdominal-biliary infection (ABI) and urinary tract infection (UTI) to a large healthcare system with a fully integrated electronic health record (EHR).
We developed a novel method of displaying microbiology data called the weighted-incidence syndromic combination antibiogram (WISCA) for 2 clinical syndromes, ABI and UTI. The WISCA combines simple diagnosis and microbiology data from the EHR to (1) classify patients by syndrome and (2) determine, for each patient with a given syndrome, whether a given regimen (1 or more agents) would have covered all the organisms recovered for their infection. This allows data to be presented such that clinicians can see the probability that a particular regimen will cover a particular infection rather than the probability that a single drug will cover a single organism.
There were 997 encounters for ABI and 8,232 for UTI. A WISCA was created for each syndrome and compared with a traditional antibiogram for the same period.
Novel approaches to data compilation and display can overcome limitations to the utility of the traditional antibiogram in helping providers choose empiric antibiotics.
3D EDX Microanalysis by FIB-SEM: Enhancement of Elemental Quantification
P Burdet, C Hébert, M Cantoni
Extended abstract of a paper presented at Microscopy and Microanalysis 2011 in Nashville, Tennessee, USA, August 7–August 11, 2011.
Focused Ion Beam Nano-Tomography Using Different Detectors
M Cantoni, P Burdet, G Knott, C Hébert
Capturing EELS in the reciprocal space
C. Hébert, A. Alkauskas, S. Löffler, B. Jouffrey, P. Schattschneider
Journal: The European Physical Journal - Applied Physics / Volume 54 / Issue 3 / June 2011
Published online by Cambridge University Press: 20 June 2011, 33510
Print publication: June 2011
In this work two aspects of momentum-dependent electron energy loss spectrometry are studied, both in the core-loss and in the low-loss region. In the case of core losses, we focus on the demonstration and the interpretation of an unexpected non-Lorentzian behavior in the angular part of the double-differential scattering cross-section. The silicon L3 edge is taken as an example. Using calculations we show that the non-Lorentzian behavior is due to a change in the wavefunction overlap between the initial and the final states. In the case of low losses, we first analyze the momentum-dependent loss functions of coinage metals Cu, Ag, and Au. We then demonstrate how advanced electronic structure calculations can be used to build simple models for the dielectric function that can then serve as a basis for the calculation of more complicated sample geometries. | CommonCrawl |
Computational modeling of sphingolipid metabolism
Weronika Wronowska, Agata Charzyńska, Karol Nienałtowski & Anna Gambin
As suggested by the origin of the word, sphingolipids are mysterious molecules with various roles in antagonistic cellular processes such as autophagy, apoptosis, proliferation and differentiation. Moreover, sphingolipids have recently been recognized as important messengers in cellular signaling pathways. Notably, sphingolipid metabolism disorders have been observed in various pathological conditions such as cancer and neurodegeneration.
The existing formal models of sphingolipid metabolism focus mainly on de novo ceramide synthesis or are limited to biochemical transformations of particular subspecies. Here, we propose the first comprehensive computational model of sphingolipid metabolism in human tissue. Contrary to the previous approaches, we use a model that reflects cell compartmentalization thereby highlighting the differences among individual organelles.
The model presented here was validated using recently proposed methods of model analysis, allowing us to detect the most sensitive and experimentally non-identifiable parameters and to determine the main sources of model variance. Moreover, we demonstrate the usefulness of our model in the study of molecular processes underlying Alzheimer's disease, which are associated with sphingolipid metabolism.
Sphingolipids (SL) are a class of complex lipids with a sphingoid base (Sph) [1]. Modifications of this basic structure that consist in the addition of an amide-linked fatty acid or phosphorylation lead to the formation of bioactive sphingolipids such as ceramide (CER), ceramide-1-phosphate (C1P), sphingosine-1-phosphate (S1P) or sphingomyelin (SM) [2, 3]. Ceramide is a recognized branching point in the metabolism of various sphingolipids subspecies. There are three major pathways of ceramide synthesis. In de novo synthesis pathway ceramide is created from less complex molecules [4]. The second pathway is the catabolism of complex sphingolipids, mainly sphingomyelin [5]. Ceramides can also form through the breakdown of complex sphingolipids that are ultimately broken down into sphingosine in the acidic environment of the lysosome. In this pathway, known as salvage pathway, sphingosine is then reused, it is reacetylated to form ceramide again [6]. At the same time, ceramide may serve as a substrate in the synthesis of SM, C1P, and Sph which, in turn, can be phosphorylated to S1P [7–11]. For a long time, sphingolipids were believed to serve mainly structural purposes and have only been recognized as important messengers in cellular signaling pathways [12, 13] in the last two decades.
A notable body of work has been devoted to studying the influence of sphingolipid metabolism on cellular fate: autophagy, apoptosis, proliferation or differentiation [14, 15]. Importantly, individual sphingolipid species appear to have antagonistic effects on cell growth and survival. The dynamic balance between proapoptotic (e.g. CER and Sph) and antiapoptotic (prosurvival) molecules (e.g. S1P and C1P) is termed the sphingolipid rheostat [16]. Disruptions in the metabolic pathways involved in the regulation of this balance are believed to underlie various diseases. Indeed, sphingolipids are known to have critical implications for the pathogenesis and treatment of diverse conditions such as cancer [17–20] and neurodegenerative disorders (e.g. Alzheimer's disease) [21–25].
Formal modeling appears to be an excellent tool to predict the response of a system to a wide range of both external and internal factors in different scenarios. However, due to the complexity of the sphingolipid metabolome and the paucity of data, not much research has been performed in the field of computational sphingolipidome modeling. Only a few models of SL metabolism are available in the literature. The model provided by Vasquez et al. [26] refers to de novo ceramide synthesis in yeast. It contains all essential elements of ceramide synthesis from nonsphingolipid metabolism. However, no further steps involving the recycling of ceramides and other more complex sphingolipids (such as the SM catabolic pathway and the salvage pathway) are considered. The model proposed by Gupta et al. [27] describes the C16-branch of sphingolipid metabolism in RAW264.7 cells. An advantage of this model is that it combines the lipidomics and transcriptomics data provided by the LIPID MAPS Consortium. However, it is restricted to the closest metabolites of C16 ceramide. None of the proposed models include cell compartmentalization, despite the fact that ceramide metabolism is known to differ among cell compartments such as the mitochondrion, the nucleus and the cell membrane. Therefore, we found it appealing to create a computational model for the metabolism of complex sphingolipids in human tissues.
We propose a formal model of the regulatory processes that constitute the sphingolipid metabolism pathways. The computational modeling is based on ordinary differential equations (ODEs) that describe the evolution of species concentrations. The kinetics of our model are based on the Mass Action Law (MAL) for molecular transport reactions and the Michaelis Menten (MM) approach for enzymatically catalyzed reactions. The model also covers the potential inhibitory effects of some species on the synthesis of others.
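To make these kinetic choices concrete, the following minimal Python sketch (not the published 69-reaction model) integrates a hypothetical SM → CER → S1P chain that combines the three rate laws used here: MAL for degradation, plain MM for an enzymatic conversion, and MM with inhibition for the SMase step, which is described below as inhibited by S1P. All species choices and parameter values are illustrative assumptions.

```python
# Minimal sketch of the model's rate laws on a hypothetical 3-species chain.
from scipy.integrate import solve_ivp

def rhs(t, y):
    sm, cer, s1p = y
    # SM -> CER: Michaelis Menten kinetics with competitive inhibition by S1P
    v_smase = 1.0 * sm / (10.0 * (1.0 + s1p / 2.0) + sm)
    # CER -> S1P: plain Michaelis Menten (CDase and SK lumped for brevity)
    v_kin = 0.8 * cer / (5.0 + cer)
    # S1P -> degradation (SPL-like): Mass Action Law
    v_deg = 0.1 * s1p
    return [-v_smase, v_smase - v_kin, v_kin - v_deg]

sol = solve_ivp(rhs, (0.0, 500.0), [50.0, 5.0, 1.0])  # assumed initial state
print("final concentrations:", sol.y[:, -1])
```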
To the best of our knowledge, this is the first computational model of sphingolipid metabolism that includes compartmentalization based on the typical structure of a nondifferentiated eukaryotic cell. Reaction parameters were estimated on the basis of publicly available literature data and some default assumptions based on experience with Biochemical Systems Theory [28], whereas the initial concentrations of particular sphingolipid species in each organelle were obtained from the LIPID MAPS database [29]. To validate our model, we applied both standard and novel methods of analysis, i.e. local sensitivity analysis [30], variance decomposition [31] and clustering of model parameters based on sensitivity indices [32]. Finally, we demonstrate the utility of our model in studying the molecular events underlying Alzheimer's disease (AD). The proposed model provides comprehensive, functional integration of experimental data and will contribute to the understanding of the still elusive interrelationships between sphingolipid metabolism and various diseases. Moreover, this is the first time that two recently published methods of computational model analysis (i.e. variance decomposition [31] and sensitivity clustering [32]) have been applied to a medium-sized realistic biochemical model.
Model of sphingolipid metabolism
Our model captures all essential elements of the complex network of sphingolipid metabolism excluding de novo ceramide synthesis which has been described by Vasquez et al. [26]. It illustrates the general behavior of selected subspecies in unspecified human tissue in nine subcellular compartments. These compartments represent the following organelles or their parts: the outer and inner layer of the cell membrane, the cytoplasm, the endoplasmic reticulum, the cytoplasmatic and lumenal face of the Golgi apparatus, the nucleus, the mitochondrion and the lysosome. Our model includes 69 reactions of molecular transport and biochemical transformation (Fig. 1).
The sphingolipid metabolism diagram. Network of the SL metabolism system. The diagram was generated in MATLAB SimBiology. The full model contains 69 reactions, 39 modeled species and 37 reaction-catalyzing enzymes. Oval boxes denote reacting molecular species, diamond boxes denote enzymes, and circles denote reactions (small circles: transport; larger circles: metabolic reactions). Solid lines connect reactants with reactions; short-dashed lines connecting diamonds to reactions denote enzymatic catalysis; long-dashed lines connecting ovals to reactions denote inhibition
We applied the Mass Action Law principle to describe transport kinetics. Particular equations simulate different transport pathways which are determined by the specific biophysical properties of particular sphingolipids [33]. It is worth highlighting that most of these molecules [i.e. CER, SM and glycosphingolipids (GSL)] are restricted to biological membranes. These can be transported between organelles only in the form of complexes with lipid transfer proteins [34], e.g. the CER transfer (CERT) protein binds to CER. In addition, sphingolipids may change their location in the form of vesicles, i.e. as an integral part of biological membranes [35]. For example, the translocation of SM and GSL from the Golgi apparatus to the outer membrane is associated with exocytosis. Furthermore, SL may move into the lysosome when the endocytosis complex is formed. SL species may also diffuse along the membranes of interlinked organelles, as is the case with ceramide that floats between the endoplasmic reticulum and the nucleus [36]. On the other hand, Sph, S1P and C1P are sufficiently hydrophilic to diffuse freely from membranes to the cytosol, and similarly from the outer membrane to the external environment [37]. However, it has been reported that the transport of C1P from the Golgi apparatus to other cellular compartments may also occur in association with specific transporter proteins such as C1P transfer protein (CPTP) [38]. Water solubility also determines a molecule's ability to flip between membrane leaflets. CER has a relatively rapid flip rate and Sph is sufficiently amphipathic to move between membrane layers [39, 40]. Finally, S1P requires specific lipid transporters to traverse membranes [41, 42]. Complex sphingolipids are unable to cross membranes without the aid of specific flippases such as four-phosphate adaptor protein 2 (FAPP2) which draws glucosylceramide from the outer surface to the inner surface of Golgi cisterns [43].
Ceramide synthesis and degradation
The majority of reactions depicted in Fig. 1 are enzymatic. The Michaelis Menten model and simplified kinetics were applied to describe the different pathways of synthesis and degradation of the selected SL species, including ceramides. There are three major pathways of ceramide synthesis. CER synthesis via the de novo pathway is described as the inflow of these molecules into the endoplasmic reticulum. CER may also be generated through the acylation of Sph. This reaction is catalyzed by different types of ceramide synthases (CerS) [44] and is the final step of the salvage pathway [6]. Notably, the endoplasmic Sph metabolized in this pathway may be generated from the degradation of S1P, which is catalyzed by specific phosphatases (SPP1 and SPP2) [45], or from the lysosomal degradation of complex SL species. This pathway is initiated by acidic sphingomyelinase (aSMase) and is critical to maintain proper concentrations of cellular SL [46, 47]. In addition to the abovementioned endoplasmic route of CER synthesis, a similar subset of reactions occurs in the mitochondria. The reactions of mitochondrial SL metabolism are not yet completely understood. In particular, enzyme specificity and the values of reaction rate parameters are often unknown [48, 49].
The third route of CER synthesis is through the hydrolysis of SM. The enzymes responsible for catalyzing these reactions, sphingomyelinases (SMases), are classified into three categories based on their optimum pH values and subcellular distribution. The degradation of SM is essential for the homeostasis of cell membranes; it has also been reported to be strongly associated with stress-induced apoptosis [14, 50, 51].
Finally, we describe CER hydrolysis. Ceramidases (CDases), seven of which have been described in humans, catalyze the cleavage of fatty acids from CER, which leads to the production of Sph [52].
Synthesis of complex SL
The most complex SL are SM and the even more diverse GSL. Although some enzymes responsible for the synthesis of these complex SL have been detected in e.g. the nucleus, this pathway is mainly localized in the Golgi apparatus. In both cases, CER is used as the backbone molecule. However, its conversion into either SM or GSL depends on the transport pathway from the endoplasmic reticulum. A CER transported in complex with a CERT protein moves into the cis-Golgi, where SM is generated in a reaction catalyzed by an SM synthase [53, 54]. On the other hand, CER must move into the trans-Golgi via a vesicle-dependent pathway to form GSL in a series of reactions [55].
S1P and C1P metabolism
Our model includes reactions of CER and Sph phosphorylation. The resulting S1P and C1P, unlike CER and Sph, promote cell growth and have anti-apoptotic properties [16, 56]. The effect of these metabolites is regulated by the activity of several enzymes: (i) ceramide kinase (CERK), responsible for the synthesis of C1P in the Golgi apparatus and the plasma membrane [57]; (ii) sphingosine kinases (SK1 and SK2), which catalyze the phosphorylation of Sph in different subcellular locations [58, 59]; and (iii) phosphatases that hydrolyze S1P and C1P. The phosphatases include both lipid phosphate phosphatases of broad specificity [phosphatidic acid phosphatase types 2a (PAP2a), 2b (PAP2b) and 2c (PAP2c)] and S1P-specific phosphatases (SPP1 and SPP2) [45, 60]. All of these enzymes and their isoforms differ in substrate specificity, optimum pH values and subcellular localization. Our model illustrates the majority of their known properties. For detailed characteristics, see the review articles [7–11]. It is worth highlighting that both S1P and C1P have been identified as inhibitors of enzymes responsible for CER synthesis, such as acidic sphingomyelinase (aSMase) and serine palmitoyltransferase (SPT), the key regulatory enzyme of the de novo synthesis pathway. In our model, inhibitory kinetics were used to describe this inhibitory activity of S1P (see Additional file 1: Table S1). Finally, our model includes the reaction of irreversible S1P degradation. Catalyzed by sphingosine-1-phosphate lyase (SPL1), this hydrolysis of S1P to hexadecenal and phosphoethanolamine allows removal of the sphingoid base from the pool of SL metabolites [60, 61].
Model parameters
In total, our model consists of 39 variables that represent molecular species concentrations (some of these are the same compounds but localized in different compartments, cf. Fig. 1). The metabolic reaction network covers 69 biochemical reactions (enzymatic and transport-related) between the reacting species. The model is implemented in the form of a system of 69 ordinary differential equations (ODEs) that model the dynamics of the reaction network. The 129 inhibition and reaction-rate parameters in the stationary state, which represent the conditions of homeostasis, constitute the input of the model and are presented in Additional file 1: Table S1, while the 38 initial species concentrations are presented in Additional file 1: Table S2. To achieve conditions that resemble the intracellular environment during homoeostasis, we stabilized the species concentrations to the stationary state of the system. The initial values of lipid levels were obtained from LIPID MAPS [29]. In the following part of this article, the model is validated by local sensitivity analysis, variance decomposition and clustering analysis.
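The stabilization step can be sketched as follows; the two-pool system below is a stand-in for the actual 39-variable network, and all rates are assumed values. Either long-time integration or direct root finding of the right-hand side yields the stationary state used as the homeostatic baseline.

```python
# Sketch: find the stationary state of dS/dt = f(S) for a toy 2-pool system.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

A = np.array([[-0.2, 0.05],     # exchange between two hypothetical pools
              [0.2, -0.05]])
inflow = np.array([0.1, 0.0])   # constant exogenous inflow (assumed)
k_out = np.array([0.0, 0.02])   # linear degradation rates (assumed)

def rhs(t, s):
    return A @ s + inflow - k_out * s

s0 = np.array([10.0, 1.0])      # LIPID MAPS-style initial concentrations
sol = solve_ivp(rhs, (0.0, 5000.0), s0)        # option 1: long integration
print("by integration: ", sol.y[:, -1])
print("by root finding:", fsolve(lambda s: rhs(0.0, s), s0))  # option 2
```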
Computational validation of the model
Biochemical models are characterized by a substantially larger number of parameters relative to the size of the available experimental data. Therefore, exact estimation of model parameters is highly difficult [62]. Thus, we used mathematical modeling to analyze the interrelationships between parameters and model dynamics. To verify the assumptions of our model, several methods were applied to obtain a broad view of the behavior of the modeled system in normal and stress conditions. The validation methods were based on both recently proposed and classical approaches that employ exact mathematical methods.
Local sensitivity analysis
The outcome of the local sensitivity analysis [30] performed for the system in stationary-state homeostasis is shown in Fig. 2 and Additional file 1: Figures S1-S3. The following conclusions were drawn.
The highest sensitivity indices among the ceramide species were assigned to mitochondrial and lysosomal CER, associated with the activity of ceramide synthase (CerS) in the mitochondrion and sphingomyelinase (SMase) in the lysosome. The widest range of sensitivity was observed for CER in the endoplasmic reticulum, especially for the parameters of exogenous CER inflow through the outer membrane and of endogenous CER via de novo synthesis in the endoplasmic reticulum. Notably, CER in the endoplasmic reticulum is also highly sensitive to the parameters of reactions catalyzed by SMase in the outer membrane. The membrane CER species show high sensitivity to the parameters of exogenous inflow of CER and C1P, and are also sensitive to the membrane reactions catalyzed by SMases (Fig. 2).
Local sensitivity analysis of the CER species with respect to the most significant parameters
Among the sphingosine species, mitochondrial Sph shows the greatest instability: it is highly sensitive to the model's inflow parameters as well as to the parameter of the reaction catalyzed by CerS in the mitochondrion. Moreover, mitochondrial Sph is sensitive to the SMase-catalyzed reactions in the membrane (Additional file 1: Figure S1). In contrast, the concentration of Sph localized in the cytosol is practically invariant to the parameters.
Mitochondrial S1P shows the greatest instability within the S1P species, not only with respect to the model's inflow parameters and CerS in the mitochondrion, but also to the parameters of the reactions catalyzed by SMase in the membrane and lysosome. For the other S1P species, the most important parameter is that of sphingosine kinase (SK) in the reticulum, inner membrane and nucleus, respectively. Cytoplasmic S1P is the most stable S1P species (Additional file 1: Figure S2).
Sphingomyelin in the outer membrane is the dominant species among all species; consequently, the exogenous inflow of SM through the outer membrane is the most significant parameter for the SM species. This parameter does not play a noticeable role for the other species, which are instead sensitive to the model's inflow parameters through the outer membrane (exogenous C1P, CER, Sph and S1P). Another interesting feature is that nuclear SM is the most stable among the SM species (Additional file 1: Figure S3).
Variance decomposition - homeostasis
The variance decomposition method enables the noise associated with the uncertainty of the modeled output to be decomposed into components related to different reactions [31, 63]. Applied to our model, this method principally indicates the reactions corresponding to the edges in Fig. 1 incident to the investigated species as the highest noise generators. Nevertheless, some reactions were more significant for the investigated species than other incident reactions, whereas for some other species the variance is distributed equally among all reactions. To find the distinctive reactions, we calculated the mean variance component for each investigated species and set the threshold at 110 % of the mean. The results for the CER species are depicted in Fig. 3 and those for Sph, S1P and SM in Additional file 1: Figures S4, S5 and S6, respectively.
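A sketch of this thresholding step, with made-up variance components for a single species:

```python
# Flag reactions whose variance component exceeds 110 % of the mean.
import numpy as np

var_components = np.array([0.8, 1.1, 0.2, 3.5, 0.9, 2.4])  # one per reaction
threshold = 1.1 * var_components.mean()
distinctive = np.flatnonzero(var_components > threshold)
print(f"threshold = {threshold:.3f}, distinctive reactions: {distinctive}")
```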
Within the ceramide species, mitochondrial and lysosomal CER show the highest variance. For mitochondrial CER, the threshold set at 110 % of the average variance was exceeded only by the reactions catalyzed by ceramide synthase (CerS) and acid ceramidase (ACDase) in the mitochondrion. The membrane CER species interact with each other; hence, for inner membrane CER, not only the incident reactions exceeded the threshold but also the reaction incident with outer membrane CER catalyzed by aSMase. For outer membrane CER, the highest variance component stems from the reactions incident with outer membrane C1P and from the transport reaction of C1P from the Golgi apparatus to the outer membrane. For the other CER species, the highest variance is caused by the incident reactions.
The variance decomposition of ceramide concentrations into components stemming from all model reactions. The red lines denote the average variance components of the investigated species. The red bars denote the variance components that exceeded the threshold of 110 % of the average. The x-axis denotes reaction numbers and the y-axis the size of the variance components
Within the sphingosine species, the highest variance, as for the CER species, falls to mitochondrial Sph, although, contrary to mitochondrial CER, all of its reaction noise components are near the average. Interestingly, the two incident reactions connecting mitochondrial Sph with mitochondrial CER exceeded the threshold, while the two other incident reactions, connecting mitochondrial Sph with mitochondrial S1P, are significantly below average. Similarly, nuclear and endoplasmic Sph have high and almost equally distributed noise, with the incident reactions most significant. For nuclear Sph, the threshold was also exceeded by reactions non-incident with nuclear CER. For the membrane Sph species, a highly influential reaction was Sph membrane diffusion. For inner membrane Sph, besides the incident reactions, high noise components stem from reactions connected with outer membrane CER (between outer membrane SM and Sph). The significant reactions for outer membrane Sph include the transport reaction of C1P from the Golgi apparatus to the outer membrane and the reactions connected with outer membrane CER and outer membrane S1P. For lysosomal Sph, besides the incident reactions, a high noise component stems from the reaction catalyzed by aSMase in the lysosome. For cytoplasmic Sph, besides the incident reactions, the threshold was exceeded by the reaction catalyzed by alkaline ceramidase (AlkCDase) in the Golgi apparatus. For Golgi apparatus Sph, the noise was decomposed mainly into the incident reactions.
Mitochondrial S1P has the highest variability within the S1P species; all of its variance components showed near-average noise and none of the reactions exceeded the 110 % threshold. The variance of all other S1P species stems principally from the incident reactions, with the exception of cytoplasmic S1P, whose noise is generated mainly by the lysosomal reaction catalyzed by aSMase.
The SM species have the highest noise among all species and, contrary to most other species, the variance of the SM species stems almost equally from all reactions.
Sensitivity-based parameter clustering (homeostasis)
Due to its complex structure, our model is well suited to test the applicability of the new method of clustering mutually compensative parameters to detect the mutual relationships between them [32]. In general, parameter sets are not pairwise independent, and biochemical models are often sensitive to linear combinations of parameters, which makes them non-identifiable [62, 64].
Through sensitivity clustering of parameters (see Section Methods) we obtained a dendrogram where four clusters may be clearly distinguished (Fig. 4). These clusters may be interpreted as specific functional modules. Our results are compatible with the theoretical compartments recognized by Rao et al. [65], who presented the sphingolipid metabolism pathway as a combination of the following units: (i) the C1 compartment that represents the de novo biosynthesis of CER, (ii) the C2 compartment that reflects the conversion of CER into complex sphingolipids such as SM and GSL, (iii) the C3 compartment that represents the hydrolysis of SM to CER and (iv) the C4 compartment that reflects the conversion of CER into bioactive molecules such as C1P and S1P.
a Dendrogram obtained by hierarchical clustering of parameters based on their functional redundancy. Identifiability analysis yielded 37 unidentifiable parameters (marked in red). The labels and corresponding names of parameters are provided in Additional file 1. b Clusters of reactions induced by hierarchical grouping. The colors of connections between species are compatible with the colors of clusters in the dendrogram. Color intensity within a cluster corresponds to the level of redundancy between reaction parameters and the other parameters in the cluster
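A sketch of the clustering step, in the spirit of [32]: each parameter's vector of sensitivity indices (one entry per observed species) is treated as its functional fingerprint, and parameters with nearly collinear fingerprints (mutually compensative ones) end up in the same cluster. The sensitivity matrix below is random stand-in data, and the choice of correlation distance with average linkage is an assumption, not the published configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

rng = np.random.default_rng(0)
S = rng.random((129, 39))   # |parameters| x |species| sensitivity indices

# Correlation distance: parameters with proportional effects cluster together.
Z = linkage(S, method="average", metric="correlation")
labels = fcluster(Z, t=4, criterion="maxclust")  # cut the tree into 4 clusters
print("cluster sizes:", np.bincount(labels)[1:])
# dendrogram(Z) would draw a figure analogous to the dendrogram in Fig. 4a.
```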
Ceramide phosphorylation
In our model, the brown cluster includes reaction parameters from the endoplasmic reticulum, the Golgi apparatus, the nucleus and the cytoplasm. The strong redundancy among parameters reflects their functional correlation. This cluster is primarily related to the conversion of Sph to S1P and vice versa. Beyond sphingolipid phosphorylation and dephosphorylation, it also contains reactions of sphingosine acetylation and CER deacetylation. This cluster corresponds to the C4 compartment described by Rao et al. [65] to some extent. However, our cluster fails to include reactions that occur in the cell membrane; these form part of the blue and green clusters. However, we found the reaction that reflects the endogenous inflow of ceramides into the endoplasmic reticulum (simulating the de novo synthesis pathway) to be a part of the brown cluster. Notably, our analysis shows that reactions in the endoplasmic reticulum are in functional unity with nuclear reactions. This may be due to the fact that the membranes of the reticulum are structurally linked with the nuclear envelope.
Complex SL synthesis
The blue cluster contains reaction parameters associated with the molecular composition of the outer membrane. Since the cell membrane is the biggest reservoir of sphingolipids, particularly complex sphingolipids such as sphingomyelin and glycosphingolipids, this cluster has a strong influence on the overall balance of cellular sphingolipids. The reactions from the pathway of CER production via SM hydrolysis (mentioned above) are localized in this cluster. This is confirmed by the local sensitivity analysis: the reaction catalyzed by aSMase in the outer membrane has a strong influence on the stability of several modeled species. As a consequence, the pathways responsible for the synthesis of complex sphingolipids (SM and GSL) are localized in this cluster. These reactions strongly affect the stability of endoplasmic CER and, subsequently, cytoplasmatic Sph. The blue cluster is comparable to the C2 compartment as denoted by Rao et al. [65] in that it reflects complex SL synthesis. However, extending the results given by Rao et al. [65], we show that it is in functional unity with the reactions of the outer cell membrane. It is worth highlighting that the results of our simulations are consistent with literature reports, because SM metabolism in the plasma membrane is known to have strong implications for the balance of bioactive sphingolipids [66].
Sphingolipds degradation
The yellow cluster is related to the degradation of complex sphingolipids in the acidic environment of the lysosome. It includes the starting point of the salvage pathway, i.e. SM transport and degradation in the lysosome, and to some extent it resembles the C3 compartment described in [65]. According to the local sensitivity analysis, two reactions from this cluster, representing the transport of SM from the outer membrane to the lysosome and ceramide synthesis from SM in the lysosome, may affect the concentrations of different molecular species in the model, such as lysosomal and outer membrane CER, SM and Sph, endoplasmic CER, or mitochondrial S1P and Sph. However, this influence is not very strong. This finding may be explained by the relatively low activity of the lysosomal degradation pathway in cells that develop in favorable conditions. It should be mentioned that, according to the clustering analysis, the lysosome belongs to the intersection of the yellow and blue clusters. This seems biologically appropriate because this organelle links the synthesis and degradation pathways of complex SL.
Inner membrane balance
The black cluster reflects the molecular balance of the inner membrane and contains reaction parameters that are not mutually related to other compartments, but have a specific effect on the behavior of other pathways. For instance, this cluster contains the reaction catalyzed by nSMase in the inner membrane which, on the basis of local sensitivity analysis, appears to slightly impact the stability of CER, Sph and S1P in the entire model.
Application of the model: case study of Alzheimer's disease
Our model not only represents the functional integration of experimental data but may also be used for the computational verification of molecular changes known to cause various human diseases. In recent studies, it has become evident that sphingolipids play important roles in the trafficking and metabolism of AD-related proteins. Thus, these are now acknowledged as crucial molecules in the etiology of AD [23, 67]. This devastating neurodegenerative disorder is characterized by the accumulation of intraneuronal and extracellular protein aggregates and progressive synapse loss. The pathological hallmarks of AD include the extracellular deposition of a peptide called β-amyloid (Aβ) and neurofibrillary tangles. The inability to catabolize aggregates of abnormally folded Aβ leads to neuronal degeneration and a subsequent decline in cognitive processes. At the level of sphingolipid metabolism, the most frequently reported hallmarks of the disease are ceramide accumulation in the endoplasmic reticulum and lysosome, and sphingosine accumulation in the cytoplasm accompanied by decreased levels of cytoplasmic S1P and C1P [21–25].
Interestingly, Aβ has been reported to induce an increase in CER levels through activation of nSMase, resulting in nerve cell death. On the other hand, CER has been shown to alter amyloid-precursor protein processing and Aβ production. This mechanism has been described as a CER-driven circulus vitiosus in which an increasing CER level leads to intensified Aβ production, whereupon Aβ is responsible for CER accumulation [24]. In this study, we applied our model to determine whether the changes in enzymatic activity described by Rao et al. [65] lead to the expected changes in the concentrations of the observed SL species.
Currently, our model allows prediction of the fluctuations in the concentrations of the sphingolipid species and in the activity of the enzymes involved in sphingolipid metabolism. Such analysis could be useful to investigate the lipidomic aspects of the development of various diseases. While the present application of the model is limited to analyzing sphingolipid metabolism as a separate pathway, we plan to integrate our model with a genome-scale metabolic network.
Computational simulation of Alzheimer's disease
In an attempt to simulate the cellular response to metabolic disturbances of the SL pathway described in [65], the values of selected reaction parameters necessary to achieve cell homoeostasis were changed. Detailed information is provided in Additional file 1: Table S3. We modified the parameters associated with ceramidase (CDase) activity as well as the parameters corresponding to the dynamics of sphingosine kinase (SK) and ceramide kinase (CERK). Moreover, due to the down-regulation of CERT expression we inhibited the transport of CER to the Golgi apparatus. On the other hand, ceramide de novo synthesis (represented by the inflow of CER into the endoplasmic reticulum) was up-regulated. To test the system's response in an AD scenario we simulated the time evolution of species concentrations.
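The workflow of such a scenario can be sketched as follows: copy the homeostatic parameter set, rescale the rates singled out in the text (CDase down, de novo CER inflow up, CERT transport off), and re-integrate. Parameter names, scale factors and the toy right-hand side are placeholders rather than the values of Additional file 1: Table S3.

```python
# Sketch of the AD scenario: perturb selected rates and re-simulate.
from scipy.integrate import solve_ivp

params_homeo = {"v_cdase": 1.0, "k_denovo": 0.1, "k_cert": 0.05}
params_ad = dict(params_homeo,
                 v_cdase=0.2 * params_homeo["v_cdase"],    # down-regulated
                 k_denovo=3.0 * params_homeo["k_denovo"],  # up-regulated
                 k_cert=0.0)                               # CERT transport off

def rhs(t, y, p):
    cer_er, cer_golgi = y
    flux_cert = p["k_cert"] * cer_er
    d_er = p["k_denovo"] - flux_cert - p["v_cdase"] * cer_er / (5.0 + cer_er)
    d_golgi = flux_cert - 0.01 * cer_golgi
    return [d_er, d_golgi]

for name, p in [("homeostasis", params_homeo), ("AD", params_ad)]:
    sol = solve_ivp(rhs, (0.0, 1000.0), [5.0, 2.0], args=(p,))
    # In the AD run, ER ceramide accumulates, mirroring the reported behavior.
    print(name, sol.y[:, -1])
```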
Preliminary simulations showed that when the changes were limited to those described by Rao et al. [65], some modeled species quickly diverged to infinity. Namely, we observed an unexpected, rapid accumulation of SM in several cellular compartments (i.e. the lysosome, outer membrane and endoplasmic reticulum). Another unforeseen system behavior was an increased rate of CER-to-GSL conversion in the Golgi apparatus, followed by the accumulation of GSL. Since such events do not occur in AD cells, we proposed that the initially introduced modifications should be accompanied by: (i) reduced transport of CER to the Golgi apparatus via a CERT-independent pathway, and (ii) increased activity of sphingomyelinases (SMases). We also introduced some minor changes in SM transport between compartments. Our predictions were supported by literature reports that impaired SM metabolism is linked with AD [68, 69]. These findings emphasize the predictive value of our model.
Once these biologically justified modifications were made, our results were coherent with experimental data: we observed the accumulation of ceramides in cellular compartments, particularly in the endoplasmic reticulum (ER) and lysosome relative to homoeostasis levels [65, 67].
As far as the concentration of Sph species in the model output is concerned, an immediate decline was observed due to CDase down-regulation. This was followed by the accumulation of Sph species in all compartments due to increased concentrations of CER species, the substrates for Sph synthesis. We also observed decreased concentrations of S1P species in the AD scenario (Fig. 5).
Time evolution of molar concentrations of the following species (the dashed lines correspond to the homeostasis scenario and solid lines to the AD scenario): (a) ceramide species; (b) sphingosine species; (c) sphingosine-1-phosphate species; (d) species functionally related to AD
Local sensitivity analysis: AD scenario vs. homeostasis
Application of the AD scenario yielded slight changes in local sensitivity parameters.
For the CER species, contrary to homoeostasis, the most sensitive become the ceramides in the nucleus and endoplasmic reticulum; these are sensitive mainly to the inflow parameters of endogenous CER in the endoplasmic reticulum and of exogenous C1P, CER, Sph and S1P in the outer membrane, as well as to the nSMase reaction rate in the outer membrane.
Cytosolic S1P becomes sensitive to SK2 in the cytosol; analogously, inner membrane S1P becomes sensitive to SK1 in the inner membrane, and outer membrane S1P is more sensitive to the inflow parameter of exogenous S1P in the outer membrane. However, mitochondrial S1P becomes invariant to parameter changes.
The Sph species remain largely unchanged, with mitochondrial Sph the most sensitive.
Similarly, the SM species show unchanged sensitivity, with the dominant species, outer membrane SM, the most sensitive.
Parameter clustering: AD scenario vs. homeostasis
Clustering analysis of the AD model resulted in a new parameter dendrogram with only two clusters, in comparison to the four clusters obtained in homoeostasis (Fig. 6 vs. Fig. 4). The clusters distinguished in the AD simulation can be described as follows.
a Dendrogram obtained for AD scenario by hierarchical clustering of parameters based on their functional redundancy. Model contains 36 non-identifiable parameters. b Clusters of reactions induced by the hierarchical grouping
Complex sphingolipid metabolism
Parameters that are strongly associated with the metabolism of complex sphingolipids and glycosphingolipids were included in the green cluster. Our results show that changes in SM balance are of great importance for cellular metabolism in AD. This cluster includes ceramide formation via SM hydrolysis, and the catabolism of SM and GSL in the lysosome (salvage pathway). Similarly to the results obtained for the state of homoeostasis, simulations in the AD scenario confirm that the hydrolysis of SM in the cell membrane and lysosome strongly influences the level of cytoplasmic Sph. This cluster also includes the parameters related to the synthesis of GSL and SM in the Golgi apparatus. To conclude, this cluster can be viewed as a combination of the green, yellow and part of the blue homeostatic clusters and is formed as a result of increased SM transport and degradation during neurodegeneration.
Ceramide synthesis and accumulation
The red cluster includes reactions affected by the inflow of ceramides from the de novo synthesis pathway. According to [70], the endoplasmic accumulation of ceramides from this source is an important step in AD development. The clustering analysis confirmed a strong correlation between de novo synthesis and the concentration of CER in the endoplasmic reticulum and, subsequently, in the mitochondrion, nucleus and Golgi apparatus.
Predictive power of the model
In this section we explore the issue of experimental validation of our model. We decided to propose the first comprehensive model of sphingolipid metabolism in unspecified human tissue while being aware of the scarcity of experimental data. Previous models were based on yeast and mouse datasets and were more specific; see e.g. [27], which models the C16-branch of sphingolipid metabolism in RAW264.7 cells. On the other hand, most datasets for human samples come from mass spectrometric analyses of complex body fluids [71]. Such lipidomics data would be of crucial importance when studying the secretion of SL species into these fluids; however, for this kind of analysis the intracellular model should be designed first. We would like to emphasize that almost all model parameters were based on experimental measurements. In particular, the rates for the transport reactions were estimated according to experimental data stored in the LIPID MAPS database.
In summary, the predictive power of our model can be assessed only in a qualitative way, as there are no experimental data available to which it can be fitted. Therefore, we argue that, at the moment, the computational analysis that reproduces the outcomes from [65] is the only method available to verify the suitability of the model. Of course, the experimental validation of the model predictions will be the subject of follow-up research. Currently, our collaborators from the Mossakowski Medical Institute PAS are carrying out experiments on SH-SY5Y cell lines, and we hope that the obtained data will be useful to evaluate the predictive power of our model.
In the present study, an original model of sphingolipid metabolism in non-specified human tissue was proposed. To the best of our knowledge, this is the most comprehensive model thus far and also the first to explicitly include compartmentalization. Importantly, we have managed to achieve a balance between the complexity and biological soundness of the model and its computational tractability.
Our results demonstrate that this model is an excellent tool to predict the response of the SL pathway to perturbations in the activity of particular enzymes as well as the up- or downregulation of the modeled species. Therefore, the model is perfectly suited to simulate molecular behavior in various scenarios as in this case study of AD.
Moreover, the implementation of semi-independent compartments allows more subtle manipulations of the reaction parameters for specific organelles. Finally, our model enables not only the integration but also validation of experimental data by verifying their cross-compliance in a complex network of interactions.
In addition, the computational validation of the model was performed using recently proposed, sophisticated approaches [31, 32]. The mathematically elegant methods of variance decomposition and sensitivity clustering of parameters revealed non-trivial biological outcomes. Furthermore, the application of the abovementioned approaches to our model served as a validation of their usefulness in problems of realistic size.
All molecular reactions within a system of interacting species $S_1, \dots, S_N$ may be presented in the following manner:
$$R_{j} \colon \qquad \underline{\nu}_{1j} S_{1} + \dots + \underline{\nu}_{Nj} S_{N} \stackrel{k_{j}} \longrightarrow \overline{\nu}_{1j} S_{1} + \dots + \overline{\nu}_{Nj} S_{N}, $$
where $\underline{\nu}_{nj}$ and $\overline{\nu}_{nj}$ denote the numbers of molecules of the $n$-th species that are, respectively, substrate and product of this reaction, and the coefficient $k_j$ denotes the reaction rate (speed) of the reaction.
The Mass Action Law kinetics. For non-enzymatic transport kinetics we used the Mass Action Law (MAL) principle. The time derivative of each species concentration is the sum of the in- and out-fluxes of all neighboring reactions, where the flux of a single reaction equals $k_{j} \left[S_{1}\right]^{\underline{\nu}_{1j}} \cdot \dots \cdot \left[S_{N}\right]^{\underline{\nu}_{Nj}}$. Hence, the ODEs derived from the MAL can be expressed as follows:
$$ \frac{d[S_{n}]}{dt} = \sum\limits_{j=1}^{R} s_{nj} k_{j} \left[S_{1}\right]^{\underline{\nu}_{1j}} \cdot \dots \cdot \left[S_{N}\right]^{\underline{\nu}_{Nj}} \qquad n=1 \dots N $$
where $s_{nj} = \overline{\nu}_{nj} - \underline{\nu}_{nj}$ denotes the stoichiometric coefficient of the $n$-th species in the $j$-th reaction and $[S_n]$ denotes the concentration of the $n$-th species.
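A minimal sketch of this equation in code: MAL fluxes $k_j \prod_n [S_n]^{\underline{\nu}_{nj}}$ combined through the stoichiometric coefficients $s_{nj}$, for an assumed 2-species, 2-reaction toy network ($S_1 \rightarrow S_2$ and $2S_2 \rightarrow S_1$) with made-up rates.

```python
import numpy as np

nu_sub = np.array([[1, 0],      # substrate orders nu_nj (rows: species)
                   [0, 2]])
nu_prod = np.array([[0, 1],     # product coefficients
                    [1, 0]])
stoich = nu_prod - nu_sub       # s_nj = nu_prod - nu_sub
k = np.array([0.3, 0.05])       # reaction rates k_j (assumed)

def mal_rhs(conc):
    flux = k * np.prod(conc[:, None] ** nu_sub, axis=0)  # one flux per reaction
    return stoich @ flux                                 # d[S_n]/dt

print(mal_rhs(np.array([2.0, 1.0])))
```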
The Michaelis Menten kinetics
The majority of the reactions depicted in Fig. 1 are enzymatic. For this kind of reaction we used the Michaelis Menten (MM) model and simplified kinetics derived from it:
$$ \frac{d[P]}{dt}=\frac{V_{max}[S]}{K_{m}+[S]}, $$
where $P$ denotes the reaction product, $S$ the reaction substrate, and $V_{max}$, $K_m$ are constant reaction parameters.
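The MM rate law above as a one-line function (values in the example call are arbitrary):

```python
def michaelis_menten(s, vmax, km):
    """dP/dt for substrate concentration s, by the equation above."""
    return vmax * s / (km + s)

print(michaelis_menten(s=2.0, vmax=1.5, km=0.5))  # -> 1.2
```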
Ordinary differential equations
Equivalently the ODEs can be expressed in the matrix form:
$$ \frac{d\mathbf{S}(t)}{dt} = M \mathbf{v}(\mathbf{S}(t)), $$
where the system state is represented by the time-dependent state vector $\mathbf{S}(t)$ of species concentrations, $M$ denotes the stoichiometry matrix and $\mathbf{v}(\mathbf{S}(t))$ denotes the vector of reaction fluxes (in our model, according to MAL or MM kinetics, including inhibition rates) [30].
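In code, the matrix form amounts to one matrix-vector product per evaluation of the right-hand side; the sketch below reuses the toy 2-reaction MAL network from the previous example ($M$ and the rates are assumptions).

```python
import numpy as np
from scipy.integrate import solve_ivp

M = np.array([[-1, 1],          # stoichiometry matrix M
              [1, -2]])
k = np.array([0.3, 0.05])       # assumed reaction rates

def v(s):                       # flux vector v(S); pure MAL in this toy,
    return np.array([k[0] * s[0], k[1] * s[1] ** 2])  # MAL/MM mixed in the model

sol = solve_ivp(lambda t, s: M @ v(s), (0.0, 100.0), [2.0, 1.0])
print(sol.y[:, -1])
```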
Local sensitivity analysis shows how the uncertainty of the model parameters can influence the model output. Sensitivity may be measured by monitoring changes in the output, e.g. by partial derivatives of the modeled species with respect to single parameters. This is a logical approach because any change observed in the output is unambiguously due to the single parameter changed. To compare the sensitivity of the model to single parameters, we constructed sensitivity indices by time integration of the partial derivatives:
$$s_{n,i}={\int_{0}^{T}}\left\vert\frac{\partial S_{n}(t)}{\partial \theta_{i}}\right\vert_{\theta=\theta_{0}}dt $$
where \(S_{n}\) are the different species concentrations, θ is the vector of parameters, and \(\theta _{0}\) is some fixed point in parameter space.
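These indices can be approximated numerically, e.g. by central finite differences in the parameter and trapezoidal quadrature in time. The sketch below is ours, assuming a hypothetical factory `model(theta)` that returns the ODE right-hand side for a given parameter vector:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sensitivity_index(model, theta0, i, s0, t_grid, h=1e-6):
    """Approximate s_{n,i} = int_0^T |dS_n/dtheta_i| dt for every species n."""
    tp, tm = theta0.copy(), theta0.copy()
    tp[i] += h
    tm[i] -= h
    span = (t_grid[0], t_grid[-1])
    y_p = solve_ivp(model(tp), span, s0, t_eval=t_grid).y
    y_m = solve_ivp(model(tm), span, s0, t_eval=t_grid).y
    dS = np.abs(y_p - y_m) / (2.0 * h)      # |dS_n/dtheta_i| on the grid
    return np.trapz(dS, t_grid, axis=1)     # one index per species n
```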
Variance decomposition
The deterministic approach, which represents the mean behavior of the system, can be generalized to a stochastic one by means of Stochastic Differential Equations (SDEs); the system can be represented as a discrete Markov chain or a continuous Markov process. Below we sketch the method of variance decomposition as presented by Komorowski et al. [31].
Stochastic differential equations
Modeling the system behavior in a stochastic manner means examining not only the evolution of the average system state, which represents one possible trajectory, but the evolution of the probability distribution over all possible system states.
The most popular approach to describing a discrete stochastic model of a biochemical pathway is the Chemical Master Equation (the Chapman–Kolmogorov equation of the Markov chain modeling the evolution of the system):
$$ \frac{\partial P(\mathbf{x},t)}{\partial t} = \sum\limits_{j}a_{j}(\mathbf{x}-\mathbf{m}_{j})P(\mathbf{x}-\mathbf{m}_{j},t) - \sum\limits_{j}a_{j}(\mathbf{x})P(\mathbf{x},t), $$
where the system state is denoted by the vector \(\mathbf {X}(t)\in \mathbb {N}^{N}\) of molecule numbers, one row for each of the N reacting species, \(\mathbf{m}_{j}\) denotes the j-th column of the stoichiometry matrix \(M=(\mathbf{m}_{1},\dots ,\mathbf{m}_{R})\), \(P(\mathbf{x},t)\) denotes the time- and state-dependent probability of the system being in state \(\mathbf{X}(t)=\mathbf{x}\), and finally \(a_{j}(\mathbf{X}(t))\) denotes the propensity function associated with the j-th reaction [30].
One of the possible simplifications of the above equation is the Linear Noise Approximation, where the dynamics are modeled with Poisson processes:
$$\mathbf{X}(t) = \mathbf{X}(0) + \sum\limits_{j = 1}^{R} {\mathbf{m}_{j}}{N_{j}}\left({{\int\limits_{0}^{t}} {{f_{j}}(\mathbf{X}(\tau),\tau)d\tau}} \right) $$
where \(N_{j}\) denotes a Poisson process dependent on time and the system state \(\mathbf{X}(t)\), corresponding to occurrences of the j-th reaction. The probability that the j-th reaction occurs during the time interval \([t, t+dt)\) equals \(f_{j}(\mathbf{x},t)\,dt\), where \(f_{j}(\mathbf{x},t)\) is called the transition rate.
Although accurate discrete models describe the exact evolution of the probability distribution of the system, under the assumption that at one time point at most one reaction can occur, they are computationally inefficient, as simulations require significant resources. Consequently, it is more efficient to pass from a discrete to a continuous process. Starting from the deterministic approximation:
$$\Phi(t) = \Phi(0) + \sum\limits_{j = 1}^{R} {{m_{j}} {{\int\limits_{0}^{t}} {{f_{j}}(\Phi(s),s)ds} }} $$
where Φ(t) is the mean system state, i.e., the solution of the ODEs. The system state evolution can then be described by dividing it into a deterministic and a stochastic part:
$$ x(t) = \xi(t)+ \Phi(t) $$
where Φ(t) is the deterministic part and ξ(t) is a Wiener process describing the stochastic noise of the system state [72]. The next step of the stochastic noise decomposition is to divide the noise linearly into contributions stemming from separate reactions. The fact that the total variance:
$$\Sigma(t)=\langle (x(t) - \langle x(t)\rangle) (x(t) - \langle x(t)\rangle)^{T} \rangle $$
is described by the differential equation
$$ \frac{d\Sigma}{dt}={{A}}(t)\Sigma+ \Sigma{{A}}(t)^{T} + {{D}}(t), \qquad (1) $$
where
$$\left\{ A(\Phi,t) \right\}_{ik}=\sum\limits_{j=1}^{r} m_{ij} \frac{\partial f_{j}(\Phi,t)}{\partial \Phi_{k}} $$
and D(t) denotes the diffusion matrix, can be represented as the sum of individual contributions,
$$ \Sigma(t)=\Sigma^{(1)}(t)+ \cdots +\Sigma^{(r)}(t), $$
which results directly from the decomposition of the diffusion matrix \({{D}}(t)=\sum _{j=1}^{r} {{D}}^{(j)}(t)\) and the linearity of the equation for Σ(t) [31]. By decomposing the variance into components from individual reactions, we are able to determine the variability that the model receives from each reaction, and therefore we are able to assess and weigh the uncertainty of the model reaction by reaction.
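To make this concrete, here is a minimal sketch (ours, not the authors' code) that integrates equation (1) for the contribution of a single reaction j, assuming callables `A_fn(t)` and `D_j_fn(t)` for the drift matrix and the j-th diffusion component:

```python
import numpy as np
from scipy.integrate import solve_ivp

def variance_component(A_fn, D_j_fn, n, t_span, t_eval):
    """Integrate dSigma/dt = A(t) Sigma + Sigma A(t)^T + D^(j)(t),
    starting from Sigma^(j)(0) = 0, for one reaction j."""
    def rhs(t, y):
        S = y.reshape(n, n)
        A = A_fn(t)
        return (A @ S + S @ A.T + D_j_fn(t)).ravel()
    sol = solve_ivp(rhs, t_span, np.zeros(n * n), t_eval=t_eval)
    return sol.y.reshape(n, n, -1)   # Sigma^(j) at each time in t_eval

# The total variance is then the sum over reactions:
# Sigma(t) = sum_j variance_component(A_fn, D_fn[j], n, t_span, t_eval)
```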
Parameter clustering
Nienałtowski et al. [32] proposed the concept of functional redundancy and used it as a dissimilarity measure in a hierarchical clustering algorithm. Let us define the model in the Bayesian approach by the distribution of the data \((X \in \mathbb {R}^{k})\) given the parameters \((\theta \in \mathbb {R}^{l})\), i.e., P(X|θ), together with an a priori distribution P(θ). Let us assume that \(\theta =(\theta _{A},\theta _{B})\) corresponds to the division of the parameters into two independent sets; then [73]:
$$\begin{array}{*{20}l} H(X) = I(X, \theta_{A}) + I(X, \theta_{B}) + I(\theta_{A}, \theta_{B}| X) + H(X|\theta), \end{array} $$
where H denotes the entropy and I the mutual information between random variables. Here, \(I(\theta _{A},\theta _{B}|X)\) measures the part of the entropy that is shared by both sets of parameters and corresponds to the redundant knowledge about the model carried by both \(\theta _{A}\) and \(\theta _{B}\).
The computation of \(I(\theta _{A},\theta _{B}|X)\) requires calculating an integral over all possible outcomes of the model, which is highly inefficient; hence, this notion was replaced with a local redundancy measure, which substitutes the assumption of knowledge regarding the model X with information regarding the initial parameters \(\theta ^{*}\). Thus, the functional redundancy equals \(I(\theta _{A},\theta _{B}|\theta ^{*})\) and is calculated according to the following formula [74]:
$$\begin{array}{@{}rcl@{}} I(\theta_{A}, \theta_{B}|\theta^{*}) = -\frac{1}{2}\sum\limits_{i = 1}^{\min(|\theta_{A}|, |\theta_{B}|)} \log \bigl(1 - {\rho_{i}^{2}}\bigr), \end{array} $$
where \(\rho _{i}\) stands for the i-th canonical correlation obtained from the Fisher information matrix of \(\theta ^{*}\) (\(FIM(\theta ^{*})\)).
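As an illustration (our own sketch with assumed helper names, not the authors' implementation), the local redundancy can be computed from a Fisher information matrix via the standard canonical-correlation construction between two parameter blocks:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def functional_redundancy(fim, idx_a, idx_b):
    """I(theta_A, theta_B | theta*) = -1/2 sum_i log(1 - rho_i^2), where rho_i
    are the canonical correlations between the two blocks in the FIM metric."""
    F_aa = fim[np.ix_(idx_a, idx_a)]
    F_bb = fim[np.ix_(idx_b, idx_b)]
    F_ab = fim[np.ix_(idx_a, idx_b)]
    C = mpow(F_aa, -0.5) @ F_ab @ mpow(F_bb, -0.5)
    rho = np.linalg.svd(C, compute_uv=False)
    rho = np.clip(rho, 0.0, 1.0 - 1e-12)   # guard against log(0)
    return -0.5 * np.sum(np.log(1.0 - rho ** 2))
```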
Moreover, to indicate non-identifiable parameters, the authors defined \((\delta ,\zeta )\)-identifiability using the idea of functional redundancy [32]. In this terminology, \(\theta _{i}\) is \((\delta ,\zeta )\)-identifiable if \(FIM_{ii}(\theta )>\zeta \) and \(\rho (\theta _{i},\theta _{-i})<1-\delta \), where \(\theta _{-i}\) represents all parameters except \(\theta _{i}\).
Using functional redundancy, we can cluster parameters with a hierarchical algorithm (i.e. in every iteration we merge the two sets of parameters with the biggest redundancy measure and remove all non-identifiable parameters from further analysis) and visualize the result on a dendrogram.
SBML files with the model implementation of homoeostasis and the model implementation of AD are provided as Additional file 2 and Additional file 3.
Thudichum JLW. A Treatise on the Chemical Constitution of Brain. London: Bailliere, Tindall, and Cox; 1884.
Carter HE, Haines WJ. Biochemistry of the sphingolipides; preparation of sphingolipides from beef brain and spinal cord. J Biol Chem. 1947; 169(1):77–82.
Pruett ST, Bushnev A, Hagedorn K, Adiga M, Haynes CA, Sullards MC, et al. Biodiversity of sphingoid bases ("sphingosines") and related amino alcohols. J Lipid Res. 2008; 49(8):1621–39.
Merrill AH. De novo sphingolipid biosynthesis: a necessary, but dangerous, pathway. J Biol Chem. 2002; 277(29):25843–25846.
Furst W, Sandhoff K. Activator proteins and topology of lysosomal sphingolipid catabolism. Biochim Biophys Acta. 1992; 1126(1):1–16.
Kitatani K, Idkowiak-Baldys J, Hannun YA. The sphingolipid salvage pathway in ceramide metabolism and signaling. Cell Signal. 2008; 20(6):1010–8.
Kolter T, Proia RL, Sandhoff K. Combinatorial ganglioside biosynthesis. J Biol Chem. 2002; 277(29):25859–25862.
Hannun YA, Obeid LM. Principles of bioactive lipid signalling: lessons from sphingolipids. Nat Rev Mol Cell Biol. 2008; 9(2):139–50.
Gault CR, Obeid LM, Hannun YA. An overview of sphingolipid metabolism: from synthesis to breakdown. Adv Exp Med Biol. 2010; 688:1–23.
Bartke N, Hannun YA. Bioactive sphingolipids: metabolism and function. J Lipid Res. 2009; 50 Suppl:91–6.
Merrill AH. Sphingolipid and glycosphingolipid metabolic pathways in the era of sphingolipidomics. Chem Rev. 2011; 111(10):6387–422.
Hannun YA, Loomis CR, Merrill AH, Bell RM. Sphingosine inhibition of protein kinase C activity and of phorbol dibutyrate binding in vitro and in human platelets. J Biol Chem. 1986; 261(27):12604–9.
Dressler KA, Mathias S, Kolesnick RN. Tumor necrosis factor-alpha activates the sphingomyelin signal transduction pathway in a cell-free system. Science. 1992; 255(5052):1715–8.
Hannun YA. Functions of ceramide in coordinating cellular responses to stress. Science. 1996; 274(5294):1855–9.
Hannun YA, Obeid LM. The Ceramide-centric universe of lipid-mediated cell regulation: stress encounters of the lipid kind. J Biol Chem. 2002; 277(29):25847–25850.
Cuvillier O, Pirianov G, Kleuser B, Vanek PG, Coso OA, Gutkind S, et al. Suppression of ceramide-mediated programmed cell death by sphingosine-1-phosphate. Nature. 1996; 381(6585):800–3.
Ogretmen B, Hannun YA. Biologically active sphingolipids in cancer pathogenesis and treatment. Nat Rev Cancer. 2004; 4(8):604–16.
Ponnusamy S, Meyers-Needham M, Senkal CE, Saddoughi SA, Sentelle D, Selvam SP, et al. Sphingolipids and cancer: ceramide and sphingosine-1-phosphate in the regulation of cell death and drug resistance. Future Oncol. 2010; 6(10):1603–1624.
Ryland LK, Fox TE, Liu X, Loughran TP, Kester M. Dysregulation of sphingolipid metabolism in cancer. Cancer Biol Ther. 2011; 11(2):138–49.
Beckham TH, Cheng JC, Marrison ST, Norris JS, Liu X. Interdiction of sphingolipid metabolism to improve standard cancer therapies. Adv Cancer Res. 2013; 117:1–36.
He X, Huang Y, Li B, Gong CX, Schuchman EH. Deregulation of sphingolipid metabolism in Alzheimer's disease. Neurobiol Aging. 2010; 31(3):398–408.
Haughey NJ, Bandaru VV, Bae M, Mattson MP. Roles for dysfunctional sphingolipid metabolism in Alzheimer's disease neuropathogenesis. Biochim Biophys Acta. 2010; 1801(8):878–86.
van Echten-Deckert G, Walter J. Sphingolipids: critical players in Alzheimer's disease. Prog Lipid Res. 2012; 51(4):378–93.
Grimm MO, Zimmer VC, Lehmann J, Grimm HS, Hartmann T. The impact of cholesterol, DHA, and sphingolipids on Alzheimer's disease. Biomed Res Int. 2013; 2013:814390.
Ceccom J, Loukh N, Lauwers-Cances V, Touriol C, Nicaise Y, Gentil C, et al. Reduced sphingosine kinase-1 and enhanced sphingosine 1-phosphate lyase expression demonstrate deregulated sphingosine 1-phosphate signaling in Alzheimer's disease. Acta Neuropathol Commun. 2014; 2:12.
Alvarez-Vasquez F, Sims KJ, Cowart LA, Okamoto Y, Voit EO, Hannun YA. Simulation and validation of modelled sphingolipid metabolism in Saccharomyces cerevisiae. Nature. 2005; 433(7024):425–30.
Gupta S, Maurya MR, Merrill AH, Glass CK, Subramaniam S. Integration of lipidomics and transcriptomics data towards a systems biology model of sphingolipid metabolism. BMC Syst Biol. 2011; 5:26.
Sorribas A, Savageau MA. Sphingolipid metabolic pathway: an overview of major roles played in human diseases. Math Biosci. 1989; 94(2):161–93.
LIPID MAPS Lipidomics Gateway. http://www.lipidmaps.org/. access date 01.06.2014.
Charzynska A, Nalecz A, Rybinski M, Gambin A. Sensitivity analysis of mathematical models of signaling pathways. BioTechnologia. 2012; 93(3):291–30.
Komorowski M, Miekisz J, Stumpf MPH. Decomposing noise in biochemical signalling systems highlights the role of protein degradation. Biophys J. 2013; 104(8):1783–93.
Nienałtowski K, Wlodarczyk M, Lipniacki T, Komorowski M. Clustering reveals limits of parameter identifiability in multi-parameter models of biochemical dynamics (accepted to BMC Systems Biology).
Sonnino S, Prinetti A, Mauri L, Chigorno V, Tettamanti G. Dynamic and structural properties of sphingolipids as driving forces for the formation of membrane domains. Chem Rev. 2006; 106(6):2111–2125.
Lev S. Non-vesicular lipid transport by lipid-transfer proteins and beyond. Nat Rev Mol Cell Biol. 2010; 11(10):739–50.
van Meer G, Lisman Q. Sphingolipid transport: rafts and translocators. J Biol Chem. 2002; 277(29):25855–25858.
Eggeling C, Ringemann C, Medda R, Schwarzmann G, Sandhoff K, Polyakova S, et al. Direct observation of the nanoscale dynamics of membrane lipids in a living cell. Nature. 2009; 457(7233):1159–62.
Tani M, Ito M, Igarashi Y. Ceramide/sphingosine/sphingosine 1-phosphate metabolism on the cell surface and in the extracellular space. Cell Signal. 2007; 19(2):229–37.
Simanshu DK, Kamlekar RK, Wijesinghe DS, Zou X, Zhai X, Mishra SK, et al. Non-vesicular trafficking by a ceramide-1-phosphate transfer protein regulates eicosanoids. Nature. 2013; 500(7463):463–7.
Contreras FX, Basanez G, Alonso A, Herrmann A, Goni FM. Asymmetric addition of ceramides but not dihydroceramides promotes transbilayer (flip-flop) lipid motion in membranes. Biophys J. 2005; 88(1):348–59.
van Meer G. Dynamic transbilayer lipid asymmetry. Cold Spring Harb Perspect Biol. 2011; 3:a004671.
Kobayashi N, Kobayashi N, Yamaguchi A, Nishi T. Characterization of the ATP-dependent sphingosine 1-phosphate transporter in rat erythrocytes. J Biol Chem. 2009; 284(32):21192–21200.
Aye IL, Singh AT, Keelan JA. Transport of lipids by ABC proteins: interactions and implications for cellular toxicity, viability and function. Chem Biol Interact. 2009; 180(3):327–39.
D'Angelo G, Polishchuk E, Di Tullio G, Santoro M, Di Campli A, Godi A, et al. Glycosphingolipid synthesis requires FAPP2 transfer of glucosylceramide. Nature. 2007; 449(7158):62–7.
Levy M, Futerman AH. Mammalian ceramide synthases. IUBMB Life. 2010; 62(5):347–56.
Brindley DN. Lipid phosphate phosphatases and related proteins: signaling functions in development, cell division, and cancer. J Cell Biochem. 2004; 92(5):900–12.
Jenkins RW, Canals D, Hannun YA. Roles and regulation of secretory and lysosomal acid sphingomyelinase. Cell Signal. 2009; 21(6):836–46.
Kolter T, Sandhoff K. Principles of lysosomal membrane digestion: stimulation of sphingolipid degradation by sphingolipid activator proteins and anionic lysosomal lipids. Annu Rev Cell Dev Biol. 2005; 21:81–103.
Siskind LJ. Mitochondrial ceramide and the induction of apoptosis. J Bioenerg Biomembr. 2005; 37(3):143–53.
Bionda C, Portoukalian J, Schmitt D, Rodriguez-Lafrasse C, Ardail D. Subcellular compartmentalization of ceramide metabolism: MAM (mitochondria-associated membrane) and/or mitochondria? Biochem J. 2004; 382(Pt 2):527–33.
Santana P, Pena LA, Haimovitz-Friedman A, Martin S, Green D, McLoughlin M, et al. Acid sphingomyelinase-deficient human lymphoblasts and mice are defective in radiation-induced apoptosis. Cell. 1996; 86(2):189–99.
Pena LA, Fuks Z, Kolesnick R. Stress-induced apoptosis and the sphingomyelin pathway. Biochem. Pharmacol. 1997; 53(5):615–21.
Mao C, Obeid LM. Ceramidases: regulators of cellular responses mediated by ceramide, sphingosine, and sphingosine-1-phosphate. Biochim Biophys Acta. 2008; 1781(9):424–34.
Merrill AH, Jones DD. An update of the enzymology and regulation of sphingomyelin metabolism. Biochim Biophys Acta. 1990; 1044(1):1–12.
Tafesse FG, Ternes P, Holthuis JC. The multigenic sphingomyelin synthase family. J Biol Chem. 2006; 281(40):29421–29425.
Funato K, Riezman H. Vesicular and nonvesicular transport of ceramide from ER to the Golgi apparatus in yeast. J Cell Biol. 2001; 155(6):949–59.
Sugiura M, Kono K, Liu H, Shimizugawa T, Minekura H, Spiegel S, et al. Ceramide kinase, a novel lipid kinase. Molecular cloning and functional characterization. J Biol Chem. 2002; 277(26):23294–23300.
Maceyka M, Payne SG, Milstien S, Spiegel S. Sphingosine kinase, sphingosine-1-phosphate, and apoptosis. Biochim Biophys Acta. 2002; 1585(2-3):193–201.
Pitson SM. Regulation of sphingosine kinase and sphingolipid signaling. Trends Biochem Sci. 2011; 36(2):97–107.
Spiegel S, Milstien S. Sphingosine-1-phosphate: an enigmatic signalling lipid. Nat Rev Mol Cell Biol. 2003; 4(5):397–407.
Bourquin F, Riezman H, Capitani G, Grutter MG. Structure and function of sphingosine-1-phosphate lyase, a key enzyme of sphingolipid metabolism. Structure. 2010; 18(8):1054–1065.
Brown KS, Sethna JP. Statistical mechanical approaches to models with many poorly known parameters. Phys Rev E Stat Nonlin Soft Matter Phys. 2003; 68(2 Pt 1):021904. Epub 2003 Aug 12.
Jetka T, Charzynska A, Gambin A, Stumpf MPH, Komorowski M. Stochdecomp - matlab package for noise decomposition in stochastic biochemical systems. Bioinformatics. 2014; 30(1):137–8.
Lipniacki T, Paszek P, Brasier AR, Luxon B, Kimmel M. Mathematical model of NF-κB regulatory module. J Theor Biol. 2004; 228(2):195–215.
Pralhada Rao R, Vaidyanathan N, Rengasamy M, Mammen Oommen A, Somaiya N, Jagannath MR. Sphingolipid metabolic pathway: an overview of major roles played in human diseases. J Lipids. 2013; 2013:178910.
Milhas D, Clarke CJ, Hannun YA. Sphingomyelin metabolism at the plasma membrane: implications for bioactive sphingolipids. FEBS Lett. 2010; 584(9):1887–1894.
Di Paolo G, Kim TW. Linking lipids to Alzheimer's disease: cholesterol and beyond. Nat Rev Neurosci. 2011; 12(5):284–96.
Jana A, Pahan K. Fibrillar amyloid-beta-activated human astroglia kill primary human neurons via neutral sphingomyelinase: implications for Alzheimer's disease. J Neurosci. 2010; 30(38):12676–89.
Lee JK, Jin HK, Park MH, Kim BR, Lee PH, Nakauchi H, et al. Acid sphingomyelinase modulates the autophagic process by controlling lysosomal biogenesis in Alzheimer's disease. J Exp Med. 2014; 211(8):1551–70.
Paschen W, Mengesdorf T. Endoplasmic reticulum stress response and neurodegeneration. Cell Calcium. 2005; 38(3-4):409–15.
Basit A, Piomelli D, Armirotti A. Rapid evaluation of 25 key sphingolipids and phosphosphingolipids in human plasma by lc-ms/ms. Anal Bioanal Chem. 2015:1–10.
Komorowski M, Finkenstadt B, Harper CV, Rand DA. Bayesian inference of biochemical kinetic parameters using the linear noise approximation. BMC Bioinformatics. 2009; 10:343.
Lüdtke N, Panzeri S, Brown M, Broomhead DS, Knowles J, Montemurro MA, et al. Information-theoretic sensitivity analysis: a general method for credit assignment in complex networks. J R Soc Interface. 2008; 5(19):223–35.
Johnson R, Wichern D. Applied Multivariate Statistical Analysis, Volume 4. Englewood Cliffs, NJ: Prentice hall; 1992.
All authors thank Prof. Bogdan Lesyng and Prof. Robert Strosznajder (Mossakowski Medical Center, Polish Academy of Sciences) for valuable discussions and for inspiring this research and, above all, Michał Komorowski for acquainting us with the model analysis methods. This study was partially supported by Polish National Science Center grant no 2011/01/B/NZ2/00864, Biocentrum-Ochota project (POIG 02.03.00-00-003/09) and EU project POKL.04.01.01-00-051/10-00.
Faculty of Biology University of Warsaw, Warsaw, Poland
Agata Charzyńska
Bioinformatics Laboratory, Mossakowski Medical Research Centre, Polish Academy of Sciences, Warsaw, Poland
Institute of Computer Science Polish Academy of Sciences, Warsaw, Poland
Weronika Wronowska
Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
Karol Nienałtowski
Institute of Informatics, University of Warsaw, Warsaw, Poland
Anna Gambin
Correspondence to Anna Gambin.
WW designed the biochemical model, AC and KN implemented this model and performed the computational experiments. AG inspired the research and supervised the project. All authors contributed to the writing of this manuscript and have read and approved the final manuscript.
Supplementary Figures and Tables.
SBML file with the model implementation of homoeostasis.
SBML file with the model implementation of AD.
Wronowska, W., Charzyńska, A., Nienałtowski, K. et al. Computational modeling of sphingolipid metabolism. BMC Syst Biol 9, 47 (2015). https://doi.org/10.1186/s12918-015-0176-9
Received: 22 October 2014
Sphingolipid metabolism
Kinetic model
How inaccurate is your total doxastic state?
I've written a lot on this blog about ways in which we might measure the inaccuracy of an agent when she has precise numerical credences in propositions. I've tried to describe the various ways in which philosophers have tried to use such measures to help argue for different principles of rationality that govern these credences. For instance, Jim Joyce has argued that credences should satisfy the axioms of the probability calculus because any non-probabilistic credences are accuracy-dominated by probabilistic credences: that is, if $c$ is a non-probabilistic credence function, there is a probabilistic credence function $c^*$ such that $c^*$ is guaranteed to be more accurate than $c$.
Of course much of the epistemological literature is concerned with agents who have quite different sorts of doxastic attitudes. It is concerned with agents who have, not credences (which we might think of as partial beliefs), but rather full or all-or-nothing or categorical beliefs. One might wonder whether we can also describe ways of measuring the inaccuracy of these doxastic attitudes. It turns out that we can. The principles of rationality that follow have been investigated by (amongst others) Hempel, Maher, Easwaran, and Fitelson. I'll describe some of the inaccuracy measures below.
This raises a question. Suppose you think that credences and full beliefs are both genuine doxastic attitudes, neither of which can be reduced to the other. Then it is natural to think that the inaccuracy of one's total doxastic state is the sum of the inaccuracy of the credal part and the inaccuracy of the full belief part. Now suppose that you think that, while neither sort of attitude can be reduced to the other, there is a tight connection between them for rational believers. Indeed, you accept a normative version of the Lockean thesis: that is, you say that an agent should have a belief in $p$ iff her credence in $p$ is at least $t$ (for some threshold $0.5 < t \leq 1$) and she should have a disbelief in $p$ iff her credence in $p$ is at most $1-t$. Then it turns out that something rather unfortunate happens. Joyce's accuracy dominance argument for probabilism described above fails. It now turns out that there are non-probabilistic credence functions with the following properties: while they are accuracy-dominated, the rational total doxastic state that they generate via the normative Lockean thesis -- that is, the total doxastic state that includes those credences together with the full beliefs or disbeliefs that the normative Lockean thesis demands -- is not accuracy-dominated by any other total doxastic state that satisfies the normative Lockean thesis.
Let's see how this happens. We need three ingredients:
Inaccuracy for credences
The inaccuracy of a credence $x$ in proposition $X$ at world $w$ is given by the quadratic scoring rule:
$$
i(x, w) = \left \{ \begin{array}{ll}
(1-x)^2 & \mbox{if $X$ is true at $w$} \\
x^2 & \mbox{if $X$ is false at $w$}
\end{array}
\right.
$$
Suppose $c = \{c_1, \ldots, c_n\}$ is a set of credences on a set of propositions $\mathbf{F} = \{X_1, \ldots, X_n\}$. The inaccuracy of the whole credence function is given as follows:
$$I(c, w) = \sum_k i(c_k, w)$$
Inaccuracy for beliefs
Suppose $\mathbf{B} = \{b_1, \ldots, b_n\}$ is a set of beliefs and disbeliefs on a set of propositions $\mathbf{F} = \{X_1, \ldots, X_n\}$. Thus, each $b_k$ is either a belief in $X_k$ (denoted $B(X_k)$), a disbelief in $X_k$ (denoted $D(X_k)$), or a suspension of judgment in $X_k$ (denoted $S(X_k)$). Then the inaccuracy of attitude $b$ in proposition $X$ at world $w$ is given as follows: there is a reward $R$ for a true belief or a false disbelief; there is a penalty $W$ for a false belief or a true disbelief; and suspensions receive neither penalty nor reward regardless of the truth of the proposition in question. We assume $R, W > 0$. Since we are interested in measuring inaccuracy rather than accuracy, the reward makes a negative contribution to inaccuracy and the penalty makes a positive contribution. Thus:
$$
i(B(X), w) = \left \{\begin{array}{ll}
-R & \mbox{if $X$ is true at $w$} \\
W & \mbox{if $X$ is false at $w$}
\end{array}
\right.
$$

$$
i(S(X), w) = \left \{\begin{array}{ll}
0 & \mbox{if $X$ is true at $w$} \\
0 & \mbox{if $X$ is false at $w$}
\end{array}
\right.
$$

$$
i(D(X), w) = \left \{ \begin{array}{ll}
W & \mbox{if $X$ is true at $w$} \\
-R & \mbox{if $X$ is false at $w$}
\end{array}
\right.
$$
This then generates an inaccuracy measure on a set of beliefs $\mathbf{B}$ as follows:
$$I(\mathbf{B}, w) = \sum_k i(b_k, w)$$
Hempel noticed that, if $R = W$ and $p$ is a probability function, then: $B(X)$ uniquely minimises expected inaccuracy by the lights of $p$ iff $p(X) > 0.5$; $D(X)$ uniquely minimises expected inaccuracy by the lights of $p$ iff $p(X) < 0.5$; $S(X)$ minimises expected inaccuracy iff $p(X) = 0.5$, but in that situation, $B(X)$ and $D(X)$ do too. Easwaran has investigated what happens if $R \neq W$.
Lockean thesis
For some $0.5 < t \leq 1$ (this mapping is sketched in code just after the list):
A rational agent has a belief in $X$ iff $c(X) \geq t$;
A rational agent has a disbelief in $X$ iff $c(X) \leq 1-t$;
A rational agent suspends judgment in $X$ iff $1-t < c(X) < t$.
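In code, this mapping is just a pair of threshold checks (an illustrative helper, where `t` is the Lockean threshold):

```python
def lockean_attitude(credence, t):
    """Attitude mandated by the Lockean thesis with threshold 0.5 < t <= 1."""
    if credence >= t:
        return "belief"
    if credence <= 1 - t:
        return "disbelief"
    return "suspension"
```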
Inaccuracy for total doxastic state
We can now put these three ingredients together to give an inaccuracy measure for a total doxastic state that satisfies the normative Lockean thesis. We state the measure as a measure of the inaccuracy of a credence $x$ in proposition $X$ at world $w$, since any total doxastic state that satisfies the normative Lockean thesis is completely determined by the credal part.
$$
i_t(x, w) = \left \{ \begin{array}{ll}
(1-x)^2 - R & \mbox{if } t \leq x \leq 1 \mbox{ and } X \mbox{ is true} \\
(1-x)^2 & \mbox{if } 1-t < x < t \mbox{ and } X \mbox{ is true} \\
(1-x)^2 + W & \mbox{if } 0 \leq x \leq 1-t \mbox{ and } X \mbox{ is true} \\
x^2 + W & \mbox{if } t \leq x \leq 1 \mbox{ and } X \mbox{ is false} \\
x^2 & \mbox{if } 1-t < x < t \mbox{ and } X \mbox{ is false} \\
x^2 - R & \mbox{if } 0 \leq x \leq 1-t \mbox{ and } X \mbox{ is false}
\end{array}
\right.
$$
Finally, we give the total inaccuracy of such a doxastic state:
$$I_t(c, w) = \sum_k i_t(c_k, w)$$
Three things are interesting about this inaccuracy measure. First, unlike the inaccuracy measures we usually deal with, it's discontinuous. The inaccuracy of $x$ in $X$ is discontinuous at $t$ and at $1-t$. If $X$ is true, this is because, as $x$ crosses the Lockean threshold $t$, it gives rise to a true belief, whose reward contributes negatively to the inaccuracy; and as it crosses the other Lockean threshold $1-t$, it gives rise to a true disbelief, whose penalty contributes positively to the inaccuracy.
Second, the measure is proper. That is, each probabilistic set of credences expects itself to be amongst the least inaccurate.
Third, as mentioned above, there are non-probabilistic credence functions that are not accuracy-dominated when inaccuracy is measured by $I_t$. Consider the following example.
$\mathbf{F} = \{X, \neg X\}$. That is, our agent has credences only in two propositions.
$c(X) = 0.6$ and $c(\neg X) = 0.5$.
$R = 0.4$, $W = 0.6$. That is, the penalty for a false belief or true disbelief is fifty percent higher than the reward for a true belief.
$t = 0.6$. That is, a rational agent has a belief in $X$ iff her credence is at least 0.6; and she has a disbelief in $X$ iff her credence is at most 0.4. It's worth noting that, for probabilistic agents who specify $R$ and $W$ as we just have, satisfying the Lockean thesis with $t = 0.6$ will always minimize expected inaccuracy.
Then we have the following result: There is no total doxastic state that satisfies the Lockean thesis that $I_t$-dominates $c$.
The following figure helps us to see why.
Here, we plot the possible credence functions on $\mathbf{F} = \{X, \neg X\}$ on the unit square. The dotted lines represent the Lockean thresholds: a belief threshold for $X$ and a disbelief threshold for $X$; and similarly for $\neg X$. The undotted diagonal line includes all the probabilistically coherent credence functions; that is, those for which the credence in $X$ and the credence in $\neg X$ sum to 1. $c$ is the credence function described above. It is probabilistically incoherent. The lower right-hand arc includes all the possible credence functions that are exactly as inaccurate as $c$ when $X$ is true and inaccuracy is measured by $I$. The upper left-hand arc includes all the possible credence functions that are exactly as inaccurate as $c$ when $\neg X$ is true and inaccuracy is measured by $I$.
Note that, in line with Joyce's accuracy-domination argument for probabilism, $c$ is $I$-dominated. It is $I$-dominated by all of the credence functions that lie between the two arcs. Some of these -- namely, those that also lie on the diagonal line -- are not themselves $I$-dominated. This seems to rule out $c$ as irrational. But of course, when we are considering not only the inaccuracy of $c$ but also the inaccuracy of the beliefs and disbeliefs to which $c$ gives rise in line with the Lockean thesis, our measure of inaccuracy is $I_t$, not $I$. Notice that all the credence functions that $I$-dominate $c$ do not $I_t$-dominate it. The reason is that every such credence function assigns $X$ a credence less than 0.6. Thus, none of them give rise to a full belief in $X$. As a result, the decrease in $I$ that is obtained by moving to one of these does not exceed $R$, which is the accuracy 'boost' obtained by having the true belief in $X$ to which $c$ gives rise. By checking cases, we can see further that no other credence function $I_t$-dominates $c$.
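Those cases can also be checked numerically; here is a rough grid search (an illustrative snippet, with $R$, $W$ and $t$ set as in the example) that looks for a credence pair at least as $I_t$-accurate as $c$ in both worlds and strictly better in one, and finds none:

```python
import numpy as np

R, W, t = 0.4, 0.6, 0.6

def i_t(x, true):
    """Inaccuracy of credence x in X, with the Lockean reward/penalty."""
    base = (1 - x) ** 2 if true else x ** 2
    if x >= t:          # belief in X
        return base + (-R if true else W)
    if x <= 1 - t:      # disbelief in X
        return base + (W if true else -R)
    return base         # suspension of judgment

def I_t(cX, cnotX, X_true):
    # the credence in not-X is scored against the truth value of not-X
    return i_t(cX, X_true) + i_t(cnotX, not X_true)

c = (0.6, 0.5)
target = (I_t(*c, True), I_t(*c, False))   # roughly (0.01, 1.21)
grid = np.linspace(0, 1, 501)
tol = 1e-9
dominators = [(a, b) for a in grid for b in grid
              if I_t(a, b, True) <= target[0] + tol
              and I_t(a, b, False) <= target[1] + tol
              and (I_t(a, b, True) < target[0] - tol
                   or I_t(a, b, False) < target[1] - tol)]
print(dominators)   # expected: [] -- nothing I_t-dominates c
```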
Is this a problem? That depends on whether one takes credences and beliefs to be two separate, but related doxastic states. If one does, and if one accepts further that the Lockean thesis describes the way in which they are related, then $I_t$ seems the natural way to measure the total doxastic state that arises when both are present. But then one loses the accuracy-domination argument for probabilism. However, one might avoid this conclusion if one were to say that, really, there are only credence functions; and that beliefs, to the extent they exist at all, are reducible to credences. That is, if one were to take the Lockean thesis to be a reductionist claim rather than a normative claim, it would seem natural to measure the inaccuracy of a credence function using $I$ instead of $I_t$. While one would still say that, as a credence in $X$ moves across the Lockean threshold for belief, it gives rise to a new belief, it would no longer seem right to think that this discontinuous change in doxastic state should give rise to a discontinuous change in inaccuracy; for the new belief is not really a genuinely new doxastic state; it is rather a way of classifying the credal state.
Published by Richard Pettigrew at 8:08 am
Hannes Leitgeb 28 May 2014 at 12:24
Great post, Richard!
For my stability account of belief, I have two accuracy arguments which work like this (both of which I discuss in the monograph that I am writing):
(i) In the first one, inaccuracy is--as usual--distance from the truth. But I use different inaccuracy measures for degrees of belief and for all-or-nothing belief (where belief is analyzed, say, on an ordinal scale, such as in belief revision theory or nonmonotonic reasoning, though the approach also works for belief on a strictly categorical scale); for degrees of belief one may use the Brier score, while for belief I use a class of inaccuracy measures that are generalizations of Hempel's or of your i-functions from above and which also include, e.g., Branden's (from his recent work on inaccuracy for orders of propositions) as a special case. Then I make an additional assumption, and that is: all-or-nothing belief on an ordinal scale is given by a total pre-order < on worlds (rather than propositions); a total pre-order on propositions can be determined from that order on worlds, as this is done in belief revision theory or nonmonotonic reasoning, but the primary object is the order on worlds. (I leave out any defense of that additional assumption here.) Finally, I formulate an accuracy norm simultaneously for the degree of belief function P and for belief as given by the ordering <: the pair (P, <) ought to be such that P minimizes expected inaccuracy relative to P, and < minimizes expected inaccuracy relative to P, where the respective inaccuracy measures in the two cases are as sketched above. One can then prove a theorem to the effect that if (P, <) satisfies the norm, then P is a probability measure (that's just the standard arguments from the literature repeated), and < has the kind of stability property (relative to P) that I want to argue for in my theory.
(ii) In the second approach, which I won't explain here in any detail, I also do something like the above; however, this time I determine the inaccuracy of < not with respect to truth but instead with respect to P: taking a subjective probability measure P (and the order on propositions that it induces) as given, I formulate a norm to the effect that < ought to approximate P to the best possible extent (the justification being that (P, <) should be, as it were, in "maximal harmony" or coherence with each other). I make precise what this means, and then I prove again that the < that minimize inaccuracy relative to P are precisely those that have the stability property (relative to P) that I aim to defend.
[Continued in part 2.]
[Part 2:]
The framework is open to different interpretations, but my intended interpretation is that neither P nor < (that is, belief) ought to be eliminated, and neither ought to be reduced to the other either, rather each of the two of them has a life on its own; however, in order for the agent who has such a degree of belief function P and a belief ordering < simultaneously to be rational overall, P and < must satisfy a certain bridge principle, and that is just the stability account that I am advocating. (i) above aims to justify that account by considerations on accuracy with respect to truth on both sides, that is, for both P and <; (ii) above aims to justify that account by considerations on accuracy with respect to truth for P (that's just the standard arguments again), and accuracy with respect to P for belief. In that second approach, belief might still be said to aim at truth, but only indirectly: belief aims at P, and P aims at truth.
In my intended interpretation, I don't regard either of P or belief to be prior to the other conceptually, or epistemologically, or ontologically, though I would want to say, of course, that P occupies a more complex and fine-grained scale of measurement than belief does, which is also why there will always be some kind of asymmetry between the two of them in terms of "information content" (and this shows up in the theory at various places).
Finally, about the Lockean thesis: if degrees of belief and belief satisfy either of the norms formulated above, then one can prove there is always a threshold, such that the corresponding instance of the Lockean thesis for that very threshold must hold as well. So the stability account entails an instance of the Lockean thesis (but only with a special threshold). The difference to your way of proceeding is then that the Lockean thesis is but a corollary to accuracy considerations in my approach, while in your approach the Lockean thesis is presupposed already in the accuracy considerations themselves.
Nathan Coppedge 7 June 2014 at 23:36
Doesn't setting the level at 0.6 imply the assumption that there are not degrees of validity to the positive end of credences or beliefs?
Am I confused on this one?
GO Classes Weekly Quiz 9 | Data Structures | Linked List | Question: 13
GO Classes asked in Programming May 5, 2022
Consider the two statements below:
$\text{S1}:$ For all positive $f(n), f(n) + o(f(n)) = \Theta(f(n)).$
$\text{S2}:$ For all positive $f(n), g(n)$ and $h(n),$ if $f(n) = O(g(n))$ and $f(n) = \Omega(h(n)),$ then $g(n) + h(n) = \Omega(f(n))$
Which of the following is the correct option?
$\text{S1}$ is True but $\text{S2}$ is False.
$\text{S2}$ is True but $\text{S1}$ is False.
Both are True.
Both are False.
goclasses_wq9
goclasses
asymptotic-notations
2-marks
by GO Classes
by shishir__roy
commented May 15, 2022
Tbh everything that has been taught in class is enough to solve this.
When you write f(n) + o(f(n)), this o(f(n)) cannot be a set of functions (common sense), so it must be an element which belongs to the set o(f(n)).
For formal definitions and terms read "Asymptotic notation in equations and inequalities" from CLRS; it's from Chapter 3, as mentioned by ankitgupta.
by [ Jiren ]
If that's the case then writing
"o(f(n)) is strictly less than f(n)" is a valid statement,
but ankitgupta said we can't write like that;
this is the issue.
by ankitgupta.1729
commented Jun 3, 2022
@jiren, I wanted to make things correct. It's up to you or anyone whether to believe a good book or not. And this book is written by a Turing award winner, which is considered the Nobel prize of computer science.
You are believing that asymptotic analysis means $n \geq n_0$, i.e., that it is only about sufficiently large $n$, which is not completely true.
I have already mentioned it here.
There are entire books written on this single topic. Anyone can read about how these asymptotic notations came into the picture, why we use $n \geq n_0$ or $n \leq n_0$, etc., if they get free time.
Answer is option C, which is: both statements are correct.
[ Jiren ] answered May 5, 2022
commented May 6, 2022
If anyone had a doubt about how I wrote
"o(f(n)) is strictly less than f(n)",
here is the reason:
let h(n) = o(f(n));
it means h(n) < c*f(n).
If I ignore the constant for simplicity,
h(n) < f(n).
We can replace h(n) with o(f(n)),
which means o(f(n)) < f(n),
so I wrote "o(f(n)) is strictly less than f(n)".
For convenience let's take an example
by neel19
commented May 6, 2022
" o(f(n)) will be strictly less than f(n) ", how? Isn't o(f(n)) strictly greater than f(n)?
Bro, I wrote the explanation 20 hrs back;
maybe you missed that 😋
by Amlan Kumar Majumdar
In S2, before the last line, I think that should be: g(n) is greater than or equal to f(n).
Yes, correct.
It was a typo;
I meant to write g(n) >= f(n).
by Riya_23
commented May 8, 2022
I understood it this way: o(---) is a set of functions strictly less than ---.
o(f(n)) is just a notation;
to be precise you should write h(n) belongs to o(f(n)).
Other than that, the rest is correct.
Ok, thanks. I will correct it. :)
Writing statement "$o(f(n))$ will be strictly less than $f(n)$" is technically incorrect.
what is the meaning of writing that a set of functions is less than a particular function?
The given statement means $o(f(n)) \in o(f(n))$.
Can a set belongs to itself ? ( Russell's Paradox ? )
If we write: $f(n) + o(f(n)) = \Theta (f(n))$
It means for "any" function $g(n) \in o(f(n)),$ there is "some" function $h(n) \in \Theta(f(n))$ such that
$f(n) + g(n) = h(n)$ for all $n.$
So, here, we need to say, suppose, $g(n) \in o(f(n))$ then $f(n) + g(n) \in \Theta (f(n))$ and "any" function $g(n)$ which is in $o(f(n))$ is strictly smaller than $f(n).$
Since, you have not used "exists" and "all" words with constant 'c' for little-oh and Big-Oh, I hope you know when to use which word.
(Mentioned all these things because you have mentioned the above statement at least 3 times and there is very little probability that it is a typo.)
I think you didn't see the comment that I posted;
in that I clearly mentioned that o(f(n)) is to be replaced by h(n), as I had written h(n) = o(f(n)).
Whenever I was using o(f(n)), I was literally talking about h(n), and h(n) can be anything which is strictly less than f(n).
You are talking about a set of functions and I was talking about a single function;
since h(n) is a function, I can compare two functions, right?
Also, when we say h(n) = o(f(n)), sir already said that we are abusing the "belongs to" notation.
Abusing the notation is fine but it should not lose its meaning. Writing $o(f(n)) < f(n)$ or $o(f(n)) = o(f(n))$ is incorrect.
Read section 2.5 from here
When we go too deep, I think some terms can't even make sense, like
In $f(n) + o(f(n)) = \Theta(f(n)),$ writing $f(n) + o(f(n))$ is perfectly valid. Here, $o(f(n))$ and $\Theta(f(n))$ are still functions which are called "anonymous functions"; they are not interpreted as a set of functions for the given equation. You can say they are defined or interpreted like that when asymptotic notations come in equations or inequalities. I have already mentioned it in my first comment.
Read section "Asymptotic notation in equations and inequalities" in Chapter $3$ from CLRS.
If o(f(n)) is treated as a function when it is present in equations and inequalities,
then what's wrong in saying o(f(n)) < f(n)?
Because the above is an inequality, so here also o(f(n)) should be treated as an anonymous function;
so the inequality is valid according to your reasoning, I guess.
Suppose the inequality is true and you are correct.
So, $o(f(n)) < f(n)$ is true, where for any function $g(n) \in o(f(n))$ we have $g(n) < f(n)$ for all $n.$
And here, $g(n) \in o(f(n))$ means that for some $n$ we can have $g(n) > f(n).$
Can you please tell me any $f$ and $g$ such that $g<f$ and $g>f$ at the same time?
commented May 12, 2022
If g(n) = o(f(n)) then how come g(n) > f(n)?
I didn't get it.
"g(n) ∈ o(f(n)) means g(n) > f(n)": this line.
Also, you are the one who said that in an inequality
o(f(n)) should be treated as a function.
Now you are taking it as a set of functions.
Which convention are you following??
Set of functions OR anonymous function
If you write $g(n) = o(f(n))$ then for some $n,$ you can have $g(n) > f(n).$
For example, $f(n) = n^2$ and $g(n) = 10n.$ Here we have $g(n) = o(f(n))$ but $g(2) > f(2)$
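A quick check in Python (an illustrative snippet) confirms this:

```python
f = lambda n: n ** 2
g = lambda n: 10 * n
print(g(2) > f(2))                                  # True: 20 > 4
print(all(g(n) < f(n) for n in range(11, 10 ** 4)))  # True once n > 10
```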
And if you see the interpretation given in CLRS, we have to consider all $n$, i.e.,
$o(f(n)) < f(n)$ is true when for any function $g(n) \in o(f(n))$ we have $g(n) < f(n)$ for all $n,$ as in the case of the equation.
So, if you take $f(n) = n^2$ and $g(n)=10n$ then $10n \in o(n^2)$ and $10n < n^2$ should hold for all $n$ but for $n=2$ it does not hold.
You might loosely say $o(f) < f.$
But according to the formal definition it should not be "FOR ALL N"; it should be
"FOR ALL N >= N0", and we can take N0 as 1000; now 10n = o(n^2).
Also see this
We are talking about these functions asymptotically so we can say that
For reference, Sachin sir said this:
question reference is
https://gateoverflow.in/374981/Go-classes-weekly-quiz-data-structures-linked-list-question#a375036
Have you checked the section mentioned in this comment ?
Yeah, we are discussing that comment only.
What do you want to convey??
There are two things which I wanted to convey. Summarising below (most of the things here are copied from the above comments):
Assuming $f$ and $g$ are positive functions,
$1) $ If we write: $f(n) + o(f(n)) = \Theta (f(n))$
(Remember this equation should be true for all $n,$ not only for large $n.$)
(Now, based on this, we should have to write things for first statement.)
$2) $ If we write $o(f(n)) < f(n),$ it means for any function $g(n) \in o(f(n))$ such that $g(n) < f(n)$ for all $n.$
(As in case of equation, this inequality should also be true for all $n.$)
If we write $g(n) = o(f(n))$ then for some $n,$ you can have $g(n) > f(n),$ and since we should have $g(n) < f(n),$ this statement might not be correct for all positive functions $f(n)$ because the inequality may not hold for all $n.$
So, $10n \in o(n^2)$ and $10n < n^2$ should hold for all $n$ but for $n=2$ it does not hold.
We might loosely say $o(f) < f.$
section "Asymptotic notation in equations and inequalities" in Chapter 3
We are saying f(n) < g(n) asymptotically and not exactly, so even if for some constant values
f(n) > g(n), this doesn't matter, as we are speaking asymptotically.
Also, when we say f(n) > g(n) asymptotically, it means f(n) will grow faster than g(n) for all the values >= n0;
see, in the above we are taking values which are greater than or equal to n0, and not every value of n.
What I really don't understand is:
f(n) + o(f(n)) = Θ(f(n)).
Here you said this equality is valid because we take o(f(n)) as an anonymous function and not as a set,
but again you will write g(n) belongs to o(f(n)), which contradicts your above-mentioned statement, because you took o(f(n)) as a function and not as a set.
What exactly is the convention that you are following?
Is o(f(n)) in an equality or inequality an anonymous function?
Please upvote if you like the explanation :)
Genius answered Aug 17, 2022
by Genius
On analysis of a nonlinear fractional system for social media addiction involving Atangana–Baleanu–Caputo derivative
Jutarat Kongson, Weerawat Sudsutad, Chatthai Thaiprayoon, Jehad Alzabut & Chutarat Tearnbucha
A mathematical model for the dynamics of \(\mathbb{SMA}\) involving the \(\mathbb{ABC}\)-fractional derivative is considered in this manuscript. We examine the basic reproduction number and analyze the stability of the equilibrium points. We prove theoretical results on the existence and Ulam stability of the solutions for the proposed model using fixed point theory and nonlinear analytic techniques. Using an Adams-type predictor–corrector rule for the \(\mathbb{ABC}\)-fractional integral operator, a numerical scheme is devised for obtaining the approximate solution of the proposed model. Numerical plots corresponding to various fractional orders are presented. In addition, we demonstrate numerical simulations of the transmission of social media addiction in two cases, with the basic reproduction number greater than and less than one.
During the last decade, social media (\(\mathbb{SM}\)) has greatly influenced the world. \(\mathbb{SM}\) is a hugely popular technology that gathers knowledge across all areas of interest and connects societies and individuals through communication. People take advantage of \(\mathbb{SM}\) via internet access in many areas such as business, education, health, science, and amusement [1, 2]. Some access information of interest from \(\mathbb{SM}\) platforms such as Google. Some find old or new friends, earn money, present work, advertise products, or buy and sell their goods via Facebook, Instagram, and Youtube. Some make money transactions via banking applications. Some share information via Twitter and play games from various applications [3–5]. Although \(\mathbb{SM}\) has become a part of our daily lives, it can negatively affect people's daily lives or their relatives and families. One of the significant causes of these more serious negative impacts is social media addiction (\(\mathbb{SMA}\)). \(\mathbb{SMA}\) refers to a state in which people spend excessive time of their daily life on \(\mathbb{SM}\) and feel anxious when they cannot visit a \(\mathbb{SM}\) platform [6, 7].
In fact, \(\mathbb{SMA}\) is a kind of addiction problem, like alcoholism, smoking, and game addiction in psychology. Mathematical models play an important role in studying the dynamic behavior of such problems. For instance, Nyabadza and co-workers [8, 9] modeled methamphetamine transmission in South Africa by building an appropriate mathematical model. In 2018, Ma and co-workers [10] studied the stability of synthetic-drug transmission epidemic models with psychological addicts. In 2019, Liu et al. [11] analyzed a synthetic drug transmission model with treatment and discussed global stability and backward bifurcation of the model. Huo and co-workers [12] introduced a new alcoholism model with treatment and the effect of Twitter. The stability of the equilibrium point is determined by using the basic reproduction number, and numerical results are conducted. Li and Guo [13] constructed an online game addiction model. They used the basic reproduction number to obtain some properties and analyzed the stability of the equilibria. Pontryagin's maximum principle was employed to solve the optimal control strategy, and numerical simulations are presented in their work. In 2020, Samad et al. [14] presented and analyzed a mathematical model of the smoking tobacco epidemic in Bangladesh. They derived the basic reproduction number and established the stability theorem for all equilibria. In 2021, Alemneh and Alemu [15] formulated and analyzed a mathematical model for the transmission dynamics of \(\mathbb{SMA}\) in the human population as follows:
$$ \textstyle\begin{cases} \frac{d \mathcal{S}}{dt} = \pi + \gamma \eta \mathcal{R} - \beta \sigma \mathcal{A} \mathcal{S} - (\kappa + \mu ) \mathcal{S}, \\ \frac{d \mathcal{E}}{dt} = \beta \sigma \mathcal{A} \mathcal{S} - (\delta + \mu ) \mathcal{E}, \\ \frac{d \mathcal{A}}{dt} = \alpha \delta \mathcal{E} - ( \mu + \epsilon + \rho ) \mathcal{A}, \\ \frac{d \mathcal{R}}{dt} = (1 - \alpha ) \delta \mathcal{E} + \epsilon \mathcal{A} - (\mu + \eta ) \mathcal{R}, \\ \frac{d \mathcal{Q}}{dt} = \kappa \mathcal{S} + (1 - \gamma ) \eta \mathcal{R} - \mu \mathcal{Q}. \end{cases} $$
For system (1.1), the human population is divided into five groups representing addiction status. Group 1: people who are not addicted but susceptible to \(\mathbb{SMA}\), denoted by the susceptible population \(\mathcal{S}(t)\). Group 2: people who use \(\mathbb{SM}\) less frequently and have not grown to the addicted stage, denoted by the exposed population \(\mathcal{E}(t)\). Group 3: people who are addicted to \(\mathbb{SM}\) and spend most of their time on it, denoted by the addicted population \(\mathcal{A}(t)\). Group 4: people who recovered from \(\mathbb{SMA}\), denoted by the recovered population \(\mathcal{R}(t)\). Group 5: people who have permanently quit using \(\mathbb{SM}\), denoted by \(\mathcal{Q}(t)\). The total number of members of the population is \(\mathcal{N} = \mathcal{S} + \mathcal{E} + \mathcal{A} + \mathcal{R} + \mathcal{Q}\). The assumptions of the system are the following: the spread of \(\mathbb{SMA}\) happens within a closed environment and does not depend on sex, race, or social state; members mix homogeneously; and addicted people influence non-addicted people through addictive pressure when they are in contact. Moreover, the differential equations of this system follow the social media addiction cycle, which starts with susceptible individuals entering the population at a rate π. They are motivated by addicted people, with pressure contact rate β and probability transmission rate σ, and move to the exposed state. Some susceptible individuals move to the group of people who permanently do not use social media at a rate κ. The exposed individuals are separated into two groups: one becomes addicted and moves to the addicted group at rate αδ, and the other recovers with treatment at rate \((1-\alpha ) \delta \). Some addicted individuals move to the recovered group at a rate ϵ or die due to overuse of social media at a rate ρ. The recovered individuals become susceptible again at a rate γη or permanently stop using social media at a rate \((1-\gamma ) \eta \). Finally, all the people in every compartment have a natural death rate μ. Alemneh and Alemu also investigated the stability of the equilibrium points and employed Pontryagin's maximum principle for the optimal control system.
For more than three centuries, fractional-order derivative models have been applied in several areas of real-world problems such as science, economics, engineering, biology, and epidemiology, with various types of fractional calculus such as Liouville–Caputo (\(\mathbb{LC}\)), Caputo–Katugampola (\(\mathbb{CK}\)), Caputo–Fabrizio (\(\mathbb{CF}\)), and fractal–fractional (\(\mathbb{FF}\)); see [16–26]. In addition, some authors have incorporated fractional-order derivatives into addiction problems. In 2017, Singh et al. [27] studied and analyzed the existence and uniqueness of the smoking model in the \(\mathbb{CF}\) sense. In 2019, Dokuyucu [28] presented a fractional-order alcoholism model of \(\mathbb{CF}\) type and investigated the existence and uniqueness of the model by using a fixed-point theorem. In 2021, Alrabaiah and co-workers [29] formulated and analyzed a new mathematical model for \(\mathbb{LC}\)-fractional tobacco smoking with a snuffing class. They obtained a numerical solution of the proposed model via the generalized Adams–Bashforth–Moulton method. The Atangana–Baleanu–Caputo (\(\mathbb{ABC}\)) fractional derivative operator is one of the most popular fractional derivative operators. It was first introduced by Atangana and Baleanu [30] and is built on a generalized Mittag-Leffler function acting as a non-singular, non-local kernel. In many real-world problems, the \(\mathbb{ABC}\)-fractional derivative produces better results [31–41].
To the best of our knowledge of previous research, no manuscript has looked into the mathematical model of \(\mathbb{SMA}\) with fractional derivatives. We introduce the \(\mathbb{ABC}\)-fractional derivative into the \(\mathbb{SMA}\) model, which is the novelty of this manuscript. Consequently, we are interested in filling this gap by considering the \(\mathbb{SMA}\) model studied in [15] under the \(\mathbb{ABC}\)-fractional derivative with order ϕ. We replace the integer order of model (1.1) with a fractional order. Therefore, the classical model (1.1) is extended to a fractional-order system by replacing the ordinary time derivative \(d/dt\) with the \(\mathbb{ABC}\)-fractional derivative \({{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi }\). It is remarkable that in the classical model (1.1) both sides have dimension \((\mathit{time})^{-1}\), whereas the left-hand side of the \(\mathbb{ABC}\)-fractional model has dimension \((\mathit{time})^{-\phi }\). In addition, when we convert an integer-order system into fractional order ϕ, we also raise all non-negative parameters to the power ϕ to make the dimensions of the differential equations equal. The modified \(\mathbb{SMA}\) transmission model with the \(\mathbb{ABC}\)-fractional derivative reads as follows:
$$ \textstyle\begin{cases} {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{S}(t) = \pi ^{\phi } + \gamma ^{\phi } \eta ^{\phi } \mathcal{R} - \beta ^{\phi } \sigma ^{\phi } \mathcal{A} \mathcal{S} - ( \kappa ^{\phi } + \mu ^{\phi })\mathcal{S}, \\ {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{E}(t) = \beta ^{\phi } \sigma ^{\phi } \mathcal{A} \mathcal{S} - (\delta ^{\phi } + \mu ^{\phi }) \mathcal{E}, \\ {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{A}(t) = \alpha ^{\phi } \delta ^{\phi } \mathcal{E} - (\mu ^{ \phi } + \epsilon ^{\phi } + \rho ^{\phi }) \mathcal{A}, \\ {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{R}(t) = (1 - \alpha ^{\phi }) \delta ^{\phi } \mathcal{E} + \epsilon ^{\phi } \mathcal{A} - (\mu ^{\phi } + \eta ^{\phi }) \mathcal{R}, \\ {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{Q}(t) = \kappa ^{\phi } \mathcal{S} + (1 - \gamma ^{\phi }) \eta ^{\phi } \mathcal{R} - \mu ^{\phi } \mathcal{Q}, \end{cases} $$
with the initial conditions \((\mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q}) = ( \mathcal{S}_{0}, \mathcal{E}_{0}, \mathcal{A}_{0}, \mathcal{R}_{0}, \mathcal{Q}_{0})\). The descriptions of all parameters are given in Table 1. The main aim of this manuscript is to analyze the conditions under which the transmission of this addiction ceases or, conversely, turns into an epidemic, based on the basic reproduction number. We establish the existence and uniqueness of solutions for the proposed model via well-known fixed point theorems. Various versions of Ulam's stability are provided to discuss the stability analysis. Finally, we use the numerical method presented by Alkahtani et al. [42] to find approximate solutions of the \(\mathbb{SMA}\) model for different fractional orders.
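To make the structure of (1.2) concrete, the following minimal Python sketch encodes its right-hand side; the function name `sma_rhs` and the parameter dictionary keys are our own illustrative choices, not notation fixed by the model.

```python
def sma_rhs(y, p, phi):
    """Right-hand side of the ABC-fractional SMA model (1.2).

    y = (S, E, A, R, Q) is the state vector and p is a dictionary of the
    nonnegative parameters of Table 1; every parameter enters with the
    exponent phi so that both sides carry the dimension (time)^(-phi).
    """
    S, E, A, R, Q = y
    dS = (p["pi"]**phi + p["gamma"]**phi * p["eta"]**phi * R
          - p["beta"]**phi * p["sigma"]**phi * A * S
          - (p["kappa"]**phi + p["mu"]**phi) * S)
    dE = p["beta"]**phi * p["sigma"]**phi * A * S - (p["delta"]**phi + p["mu"]**phi) * E
    dA = p["alpha"]**phi * p["delta"]**phi * E - (p["mu"]**phi + p["epsilon"]**phi + p["rho"]**phi) * A
    dR = ((1 - p["alpha"]**phi) * p["delta"]**phi * E + p["epsilon"]**phi * A
          - (p["mu"]**phi + p["eta"]**phi) * R)
    dQ = p["kappa"]**phi * S + (1 - p["gamma"]**phi) * p["eta"]**phi * R - p["mu"]**phi * Q
    return [dS, dE, dA, dR, dQ]
```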
This paper is organized as follows: in Sect. 2, we present definitions and basic concepts of the \(\mathbb{ABC}\)-fractional differential and integral operators, after which we provide the fixed point tools used to prove our existence results. In Sect. 3, we compute the equilibrium points and the basic reproduction number and establish the stability analysis of the proposed model. In Sect. 4, the uniqueness of the solution of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) system (1.2) is examined by employing Banach's fixed point theorem, and the existence result is proved by Krasnoselskii's fixed point theorem. In Sect. 5, four types of Ulam's stability for the model (1.2) are investigated. Numerical simulations supporting the theoretical results are provided in Sect. 6. Finally, the discussion and conclusion of the proposed model are presented in Sect. 7.
This section presents relevant and necessary essential concepts used in this manuscript.
Definition 2.1

Let \(f \in \mathcal{C}^{1}[a,b]\), \(a < b\), be a function, and let \(0 \leq \phi \leq 1\). Then the \(\mathbb{ABC}\)-fractional derivative of f of order ϕ is defined as follows:
$$ {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{a}^{\phi } f(t) = \frac{\mathbb{AB}(\phi )}{1-\phi } \int _{a}^{t} \mathbb{E}_{\phi } \biggl[-\frac{\phi }{1 - \phi }(t - s)^{\phi } \biggr] \frac{d}{ds}f(s)\,ds,\quad t > a > 0, $$
where \(\mathbb{AB}(\phi ) = 1 - \phi + \phi /\Gamma (\phi )\) is a normalization function, characterized by \(\mathbb{AB}(0) = \mathbb{AB}(1) = 1\), and the Mittag-Leffler function \(\mathbb{E}_{\phi }\) is given as
$$ \mathbb{E}_{\phi }(z) = \sum_{k = 0}^{\infty } \frac{z^{k}}{\Gamma (\phi k + 1)}, \quad z, \phi \in \mathbb{C}, \operatorname{Re}( \phi ) > 0, $$
with \(\mathbb{C}\) the set of complex numbers.
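Since the kernel of the \(\mathbb{ABC}\) operator is built on \(\mathbb{E}_{\phi }\), it is useful to be able to evaluate it numerically. The sketch below sums the defining series directly and covers the two-parameter function \(\mathbb{E}_{\phi _{1},\phi _{2}}\) used later, with \(\mathbb{E}_{\phi } = \mathbb{E}_{\phi ,1}\); direct summation is a reasonable assumption only for moderate \(|z|\), and production code would switch to an asymptotic or integral representation for large arguments.

```python
from math import gamma

def mittag_leffler(z, a, b=1.0, tol=1e-15):
    """Two-parameter Mittag-Leffler function E_{a,b}(z) by direct series
    summation; E_phi(z) of the definition above is the case b = 1."""
    s, k = 0.0, 0
    while a * k + b < 170.0:          # math.gamma overflows near 171
        term = z**k / gamma(a * k + b)
        s += term
        if abs(term) < tol * max(1.0, abs(s)):
            break
        k += 1
    return s

# e.g. mittag_leffler(-0.5, 0.998) approximates E_{0.998}(-0.5), close to exp(-0.5)
```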
Definition 2.2

The \(\mathbb{AB}\)-fractional integral of a function \(f \in \mathcal{C}^{1}(a,b)\) is defined as follows:
$$ {{}_{t}^{\mathbb{AB}}}\mathcal{I}_{a}^{\phi } f(t) = \frac{1 - \phi }{\mathbb{AB}(\phi )} f(t) + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{a}^{t} (t - s )^{\phi -1} f(s)\,ds,\quad t > a > 0. $$
Clearly, if \(\phi = 0\) or \(\phi = 1\), then we recover the initial function and the ordinary integral, respectively. Furthermore, we can calculate the Laplace transform of (2.1) and obtain the following result:
$$ \mathcal{L} \bigl\{ {{}_{t}^{\mathbb{ABC}}} \mathfrak{D}_{a}^{\phi } f(t) \bigr\} (p) = \frac{\mathbb{AB}(\phi ) p^{\phi } \mathcal{L} \{ f(t) \} (p) - p^{\phi - 1} f(a)}{(1-\phi ) (p^{\phi } + \frac{\phi }{1 - \phi } )}. $$
Lemma 2.3

The \(\mathbb{AB}\)-fractional derivative and \(\mathbb{AB}\)-fractional integral of a function \(f \in \mathcal{C}^{1}(a,b)\) satisfy the Newton–Leibniz equality
$$ {{}_{t}^{\mathbb{AB}}}\mathcal{I}_{a}^{\phi } \bigl( {{}_{t}^{\mathbb{ABC}}} \mathfrak{D}_{a}^{\phi } f(t)\bigr) = f(t) - f(a). $$
Lemma 2.4

For two functions f, \(g \in \mathcal{H}^{1}(a,b)\), \(a < b\), the \(\mathbb{ABC}\)-fractional derivatives of f and g satisfy the following inequality:
$$ \bigl\Vert {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{a}^{\phi } f(t) - {{}_{t}^{ \mathbb{ABC}}}\mathfrak{D}_{a}^{\phi } g(t) \bigr\Vert \leq \mathcal{H} \bigl\Vert f(t) - g(t) \bigr\Vert . $$
Lemma 2.5

(Generalized mean value theorem [44]). Let \(g(t) \in \mathcal{C}[a,b]\), and let \({{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{a}^{\phi } g(t) \in \mathcal{C}[a,b]\) when \(\phi \in (0,1]\). Then we have \(g(t) = g(a) + \frac{1}{\Gamma (\phi )} {{}_{t}^{\mathbb{ABC}}} \mathfrak{D}_{a}^{\phi } g(\xi )(t-a)^{\phi }\), when \(\xi \in [a,t]\), \(\forall t \in (a,b]\).
It is easy to see by Lemma 2.5 that if \(g(t) \in \mathcal{C}[a,b]\), \({{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{a}^{\phi } g(t) \in \mathcal{C}[a,b]\), and \({{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{a}^{\phi } g(t) \geq 0\), \(\forall t \in (a,b]\), when \(\phi \in (0,1]\), then the function \(g(t)\) is nondecreasing, and if \({{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{a}^{\phi } g(t) \leq 0\), \(\forall t \in (a,b]\), then the function \(g(t)\) is nonincreasing on \([a,b]\).
Definition 2.6

(Contraction mapping [45])
Let X be a Banach space. Then the operator \(\mathcal{T} : X \to X\) is a contraction if
$$ \Vert \mathcal{T}x - \mathcal{T}y \Vert \leq L \Vert x - y \Vert ,\quad \forall x, y \in X, 0< L < 1. $$
Lemma 2.7

(Banach's fixed point theorem [45])
Let D be a non-empty closed subset of a Banach space E. Then any contraction mapping \(\mathcal{Q}\) from D into itself has a unique fixed point.
Lemma 2.8

(Krasnoselskii's fixed point theorem [45])
Let D be a non-empty, closed, convex subset of a Banach space E. Let \(T_{1}\), \(T_{2}\) be two operators such that (i) \(T_{1}x + T_{2}y \in D\), \(\forall x, y \in D\); (ii) \(T_{1}\) is compact and continuous; (iii) \(T_{2}\) is a contraction mapping. Then there exists \(z \in D\) such that \(T_{1}z + T_{2}z = z\).
Model analysis
Positively invariant region
Now, we discuss the positively invariant region and the steady states of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2).
The following lemma guarantees the boundedness of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2).
The closed set
$$ \Omega := \biggl\{ (\mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q} ) \in \mathbb{R}_{+}^{5} : 0 < \mathcal{N}(t) \leq \frac{\pi ^{\phi }}{\mu ^{\phi }} \biggr\} ,\quad \mathcal{N}(t) = \mathcal{S}(t) + \mathcal{E}(t) + \mathcal{A}(t) + \mathcal{R}(t) + \mathcal{Q}(t), $$
is positively invariant with regard to the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2).
Let \((\mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q} )\) be any solution of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2), and let \(\mathcal{N}(t) = \mathcal{S}(t) + \mathcal{E}(t) + \mathcal{A}(t) + \mathcal{R}(t) + \mathcal{Q}(t)\) denote the total population. By applying Lemma 2.5 and evaluating each equation of (1.2) on the hyperplane where the corresponding variable vanishes (e.g., \(\mathcal{S} = 0\) for the first equation), we obtain
$$ \textstyle\begin{cases} {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{S}(t) = \pi ^{\phi } + \gamma ^{\phi } \eta ^{\phi } \mathcal{R} \geq 0, \\ {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{E}(t) = \beta ^{\phi } \sigma ^{\phi } \mathcal{A} \mathcal{S} \geq 0, \\ {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{A}(t) = \alpha ^{\phi } \delta ^{\phi } \mathcal{E} \geq 0, \\ {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{R}(t) = (1 - \alpha ^{\phi }) \delta ^{\phi } \mathcal{E} + \epsilon ^{\phi } \mathcal{A} \geq 0, \\ {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{Q}(t) = \kappa ^{\phi } \mathcal{S} + (1 - \gamma ^{\phi }) \eta ^{\phi } \mathcal{R} \geq 0. \end{cases} $$
It follows from (3.1) that each solution of (1.2) is nonnegative and remains in \(\mathbb{R}^{5}_{+}\). Taking into account that all the parameters are positive and adding all the equations of the model, we get
$$ {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \mathcal{N}(t) = \pi ^{ \phi } - \mu ^{\phi }\mathcal{N} - \rho ^{\phi }\mathcal{A} \leq \pi ^{ \phi } - \mu ^{\phi } \mathcal{N}(t). $$
Applying the Laplace transform to (3.2) and then inverting it, we obtain
$$\begin{aligned} \mathcal{N}(t) \leq & \biggl( \frac{\mathbb{AB}(\phi )}{\mathbb{AB}(\phi ) + (1 - \phi )\mu ^{\phi }} \mathcal{N}(0) + \frac{(1 - \phi )\pi ^{\phi }}{\mathbb{AB}(\phi ) + (1 - \phi )\mu ^{\phi }} \biggr) \mathbb{E}_{\phi , 1} \biggl(- \frac{\phi \mu ^{\phi }}{\mathbb{AB}(\phi ) + (1 - \phi )\mu ^{\phi }} t^{\phi } \biggr) \\ &{} + \frac{\phi \pi ^{\phi }}{\mathbb{AB}(\phi ) + (1 - \phi ) \mu ^{\phi }} t^{\phi } \mathbb{E}_{\phi , \phi + 1} \biggl(- \frac{\phi \mu ^{\phi }}{\mathbb{AB}(\phi ) + (1 - \phi )\mu ^{\phi }} t^{\phi } \biggr), \end{aligned}$$
where \(\mathbb{E}_{\phi _{1}, \phi _{2}}\) is the two-parameter Mittag-Leffler function, defined by
$$ \mathbb{E}_{\phi _{1}, \phi _{2}}(z) = \sum_{k = 0}^{\infty } \frac{z^{k}}{\Gamma (\phi _{1} k + \phi _{2})}. $$
Taking into account the asymptotic behavior of the Mittag-Leffler function, we have
$$ \mathbb{E}_{\phi _{1}, \phi _{2}}(z) \approx - \sum_{K = 1}^{\omega } \frac{z^{-K}}{\Gamma (\phi _{2} - \phi _{1} K)} + O \bigl( \vert z \vert ^{-1- \omega } \bigr),\quad \vert z \vert \to \infty , \frac{\phi _{1} \pi }{2} < \bigl\vert \arg (z) \bigr\vert \leq \pi . $$
It is easy to observe that \(\mathcal{N}(t) \to \pi ^{\phi }/\mu ^{\phi }\) as \(t \to \infty \). Thus the solution of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) with initial conditions in Ω stays in Ω for every \(t > 0\). Hence, Ω is a positively invariant region with regard to the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2). □
All solutions that begin outside the positively invariant region Ω converge to this region. It is therefore sufficient to analyze the flow generated by the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) on Ω, where the model is biologically and epidemiologically meaningful.
Equilibrium points and reproduction numbers
In this subsection, we obtain the equilibrium points and the basic reproduction number of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2). There are two kinds of possible equilibrium points of the model. The first one, the point at which there is no disease in the population, is called the disease-free equilibrium point. To find the equilibrium points, we set the right-hand side of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) equal to zero. Hence, the disease-free equilibrium point of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2), with \(\mathcal{E} = \mathcal{A} = 0\), is given by
$$ \mathfrak{E}_{0} = \bigl(\mathcal{S}^{0}, \mathcal{E}^{0}, \mathcal{A}^{0}, \mathcal{R}^{0}, \mathcal{Q}^{0} \bigr) = \biggl( \frac{\pi ^{\phi }}{\kappa ^{\phi } + \mu ^{\phi }}, 0, 0, 0, \frac{\kappa ^{\phi } \pi ^{\phi }}{\mu ^{\phi } (\mu ^{\phi } + \kappa ^{\phi } )} \biggr). $$
For analyzing the stability of the equilibrium points, the basic reproduction number \(R_{0}\) of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is essential. To find \(R_{0}\), we focus only on the infectious classes of the model. The transmission matrix F and transition matrix V of the next-generation matrix method [46, 47] are obtained as
$$ F = \begin{pmatrix} 0 & \frac{\beta ^{\phi }\pi ^{\phi }\sigma ^{\phi }}{\kappa ^{\phi } + \mu ^{\phi }} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} ,\qquad V = \begin{pmatrix} \delta ^{\phi } + \mu ^{\phi } & 0 & 0 \\ - \alpha ^{\phi }\delta ^{\phi } & \mu ^{\phi } + \epsilon ^{\phi } + \rho ^{\phi } & 0 \\ - (1 - \alpha ^{\phi })\delta ^{\phi } & -\epsilon ^{\phi } & \eta ^{\phi } + \mu ^{\phi } \end{pmatrix} . $$
Then the next-generation matrix is given by
$$ FV^{-1} = \begin{pmatrix} \frac{\beta ^{\phi } \sigma ^{\phi } \pi ^{\phi } \alpha ^{\phi } \delta ^{\phi }}{(\kappa ^{\phi }+\mu ^{\phi })(\delta ^{\phi }+\mu ^{\phi })(\mu ^{\phi }+\epsilon ^{\phi }+\rho ^{\phi })} & \frac{\beta ^{\phi } \sigma ^{\phi } \pi ^{\phi }}{(\kappa ^{\phi }+\mu ^{\phi })(\mu ^{\phi }+\epsilon ^{\phi }+\rho ^{\phi })} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} . $$
Therefore, the spectral radius of the next-generation matrix (3.3) yields the basic reproduction number \(R_{0}\). Hence,
$$ {R_{0} = \mathfrak{r}\bigl(FV^{-1} \bigr) = \frac{\beta ^{\phi } \pi ^{\phi } \alpha ^{\phi } \delta ^{\phi } \sigma ^{\phi }}{(\kappa ^{\phi } + \mu ^{\phi })(\delta ^{\phi } + \mu ^{\phi })(\mu ^{\phi } + \epsilon ^{\phi } + \rho ^{\phi })}, } $$
where \(\mathfrak{r}\) denotes the spectral radius. As is well known, \(R_{0}\) measures the transmission potential of an infectious disease over time. When \(R_{0}>1\), the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) has an endemic equilibrium point \(\mathfrak{E}^{*}\). To find \(\mathfrak{E}^{*}\), we use the fact that all the variables \(\mathcal{S}(t)\), \(\mathcal{E}(t)\), \(\mathcal{A}(t)\), \(\mathcal{R}(t)\), and \(\mathcal{Q}(t)\) of (1.2) are nonnegative and set each equation of (1.2) equal to zero:
$$\begin{aligned} {{{}_{t}^{\mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{S}(t) = {{{}_{t}^{ \mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{E}(t) = {{{}_{t}^{ \mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{A}(t) = {{{}_{t}^{ \mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{R}(t) = {{{}_{t}^{ \mathbb{ABC}}}\mathfrak{D}}_{0}^{\phi } \mathcal{Q}(t) = 0. \end{aligned}$$
Then we obtain \(\mathfrak{E}^{*} = (\mathcal{S}^{*}, \mathcal{E}^{*}, \mathcal{A}^{*}, \mathcal{R}^{*}, \mathcal{Q}^{*} )\), where
$$\begin{aligned}& \mathcal{S}^{*} = \frac{(\mu ^{\phi } + \delta ^{\phi })(\mu ^{\phi } + \epsilon ^{\phi } + \rho ^{\phi })}{\alpha ^{\phi } \beta ^{\phi } \delta ^{\phi } \sigma ^{\phi }}, \\& \mathcal{E}^{*} = \frac{\xi _{2}}{\xi _{1}}, \\& \mathcal{A}^{*} = \frac{{{\alpha }^{\phi }}{{\delta }^{\phi }}\mathcal{E}^{*}}{{{\mu }^{\phi }}+{{\epsilon }^{\phi }}+{{\rho }^{\phi }}}, \\& \mathcal{R}^{*} = \frac{\xi _{2}+ ( {{\delta }^{\phi }}+{{\mu }^{\phi }} ){\mathcal{E}^{*}}}{{{\gamma }^{\phi }}{{\eta }^{\phi }}}, \\& \mathcal{Q}^{*} = \frac{{{\kappa }^{\phi }}{\mathcal{S}^{*}}+(1-{{\gamma }^{\phi }}){{\eta }^{\phi }}{\mathcal{R}^{*}}}{{{\mu }^{\phi }}}. \end{aligned}$$
where

$$\begin{aligned}& \xi _{1} = \frac{{{\gamma }^{\phi }}{{\eta }^{\phi }}{{\delta }^{\phi }}}{{{\mu }^{\phi }} +{{\eta }^{\phi }}} \biggl( 1-{{\alpha }^{\phi }}+ \frac{{{\epsilon }^{\phi }} {{\alpha }^{\phi }}}{{{\mu }^{\phi }}+{{\epsilon }^{\phi }}+{{\rho }^{\phi }}} \biggr) -{{\delta }^{\phi }}-{{ \mu }^{\phi }}, \\& \xi _{2} = \frac{({{\kappa }^{\phi }}+{{\mu }^{\phi }}) ( {{\delta }^{\phi }} +{{\mu }^{\phi }} ) ( {{\mu }^{\phi }}+{{\epsilon }^{\phi }} +{{\rho }^{\phi }} )}{{{\beta }^{\phi }}{{\sigma }^{\phi }}{{\alpha }^{\phi }} {{\delta }^{\phi }}}-{{\pi }^{\phi }}. \end{aligned}$$
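As a quick numerical illustration of the threshold behavior, the sketch below evaluates the spectral-radius formula for \(R_{0}\). The parameter values in `p` are placeholders standing in for the Table 1 entries, not the paper's actual data.

```python
# Placeholder values standing in for Table 1; only beta and phi below
# match numbers quoted in Sect. 6.
p = dict(pi=0.4, beta=0.80, sigma=0.5, kappa=0.2, mu=0.05,
         delta=0.6, alpha=0.7, epsilon=0.3, rho=0.1, eta=0.4, gamma=0.5)

def r0(p, phi):
    """Basic reproduction number of model (1.2) (spectral radius of FV^{-1})."""
    num = (p["beta"]**phi * p["pi"]**phi * p["alpha"]**phi
           * p["delta"]**phi * p["sigma"]**phi)
    den = ((p["kappa"]**phi + p["mu"]**phi)
           * (p["delta"]**phi + p["mu"]**phi)
           * (p["mu"]**phi + p["epsilon"]**phi + p["rho"]**phi))
    return num / den

print(r0(p, 0.998))   # > 1: endemic equilibrium E* exists; < 1: addiction dies out
```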
Next, we state a theorem guaranteeing that \(\mathfrak{E}_{0}\) of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is locally asymptotically stable.
The disease-free equilibrium point \(\mathfrak{E}_{0}\) of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is locally asymptotically stable if \(R_{0} < 1\) and unstable otherwise.
We omit the details of the proof. See Theorem 3.3 in [15]. □
Existence results of \(\mathbb{SMA}\) transmission mathematical model
In this section, we examine the existence and uniqueness of solutions for the fractional \(\mathbb{SMA}\) model with the help of Banach's and Krasnoselskii's fixed point theorems.
For the sake of simplicity, we rewrite the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) as follows:
$$ \textstyle\begin{cases} {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \Theta (t) = \Lambda (t, \Theta (t)), \\ \Theta (0) = \Theta _{0} \geq 0,\quad 0< t < T < \infty , \end{cases} $$
where the vector \(\Theta (t) = (\mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q})\) represents the state variables and Λ is a continuous vector function such that
$$ \Lambda = \begin{pmatrix} \mathbb{G}_{1} \\ \mathbb{G}_{2} \\ \mathbb{G}_{3} \\ \mathbb{G}_{4} \\ \mathbb{G}_{5} \end{pmatrix} = \begin{pmatrix} \pi ^{\phi } + \gamma ^{\phi } \eta ^{\phi } \mathcal{R} - \beta ^{\phi } \sigma ^{\phi } \mathcal{A} \mathcal{S} - (\kappa ^{\phi } + \mu ^{\phi }) \mathcal{S} \\ \beta ^{\phi } \sigma ^{\phi } \mathcal{A} \mathcal{S} - (\delta ^{\phi } + \mu ^{\phi }) \mathcal{E} \\ \alpha ^{\phi } \delta ^{\phi } \mathcal{E} - (\mu ^{\phi } + \epsilon ^{ \phi } + \rho ^{\phi }) \mathcal{A} \\ (1 - \alpha ^{\phi }) \delta ^{\phi } \mathcal{E} + \epsilon ^{\phi } \mathcal{A} - (\mu ^{\phi } + \eta ^{\phi }) \mathcal{R} \\ \kappa ^{\phi } \mathcal{S} + (1 - \gamma ^{\phi }) \eta ^{\phi } \mathcal{R} - \mu ^{\phi } \mathcal{Q} \end{pmatrix} , $$
with the initial conditions \(\Theta _{0} = (\mathcal{S}_{0}, \mathcal{E}_{0}, \mathcal{A}_{0}, \mathcal{R}_{0}, \mathcal{Q}_{0})\). Applying the \(\mathbb{AB}\)-fractional integral to both sides of (4.1), we get the integral equation
$$ \Theta (t) = \Theta _{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \Lambda \bigl(t, \Theta (t)\bigr) + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \Lambda \bigl(s, \Theta (s)\bigr)\,ds, $$
where \(\mathbb{AB}(\phi )\) is defined as in Definition 2.1. With \(\mathcal{J} = [0,T]\), let us define the Banach space \(\mathcal{W} = \mathcal{C}(\mathcal{J},\mathbb{R}^{5}_{+})\) under the norm \(\|\Theta \| = \|\mathcal{S}\| + \|\mathcal{E}\| + \|\mathcal{A}\| + \|\mathcal{R}\| + \|\mathcal{Q}\|\), where
$$ \sup_{t \in \mathcal{J}}\bigl\{ \bigl\vert \Theta (t) \bigr\vert \bigr\} = \sup_{t \in \mathcal{J}} \bigl\{ \bigl\vert \mathcal{S}(t) \bigr\vert \bigr\} + \sup_{t \in \mathcal{J}}\bigl\{ \bigl\vert \mathcal{E}(t) \bigr\vert \bigr\} + \sup_{t \in \mathcal{J}}\bigl\{ \bigl\vert \mathcal{A}(t) \bigr\vert \bigr\} + \sup_{t \in \mathcal{J}}\bigl\{ \bigl\vert \mathcal{R}(t) \bigr\vert \bigr\} + \sup_{t \in \mathcal{J}} \bigl\{ \bigl\vert \mathcal{Q}(t) \bigr\vert \bigr\} . $$
Uniqueness result via Banach's fixed point theorem
The existence and uniqueness result of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) system (1.2) will be investigated by using Banach's fixed point theorem.
Assume that the vector function \(\Lambda : \mathcal{J}\times \mathbb{R}^{5} \to \mathbb{R}^{5}\) is continuous and satisfies the following hypothesis:
\((H_{1})\):

there exists a positive constant \(\mathbb{L}_{\Lambda } > 0\) such that
$$ \bigl\vert \Lambda \bigl(t,\Theta _{1}(t)\bigr) - \Lambda \bigl(t, \Theta _{2}(t)\bigr) \bigr\vert \leq \mathbb{L}_{\Lambda } \bigl\vert \Theta _{1}(t) - \Theta _{2}(t) \bigr\vert , $$
for any \(\Theta _{1}\), \(\Theta _{2} \in \mathcal{W}\) and for all \(t\in \mathcal{J}\).
If

$$ \biggl(\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{ T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \mathbb{L}_{ \Lambda } < 1, $$
then the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) has a unique solution on \(\mathcal{J}\).
First, we convert the initial value problem (4.1) (which is equivalent to the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2)) into a fixed point problem \(\Theta = \mathcal{T}\Theta \), where the operator \(\mathcal{T} : \mathcal{W} \to \mathcal{W}\) is defined by
$$ (\mathcal{T}\Theta ) (t) = \Theta _{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \Lambda \bigl(t, \Theta (t)\bigr) + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \Lambda \bigl(s, \Theta (s)\bigr)\,ds. $$
Clearly, the initial value problem (4.1) has a solution if and only if the operator \(\mathcal{T}\) has fixed points.
Suppose that \(\mathbb{K}_{1}\) is a nonnegative constant such that \(\sup_{t\in \mathcal{J}}|\Lambda (t,0)| = \mathbb{K}_{1} < +\infty \). Define the bounded, closed, and convex subset \(B_{r_{1}} = \{\Theta \in \mathcal{W} : \|\Theta \| \leq r_{1}\}\) of \(\mathcal{W}\), where \(r_{1}\) is chosen such that
$$ r_{1} \geq \frac{ \Vert \Theta _{0} \Vert + (\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} )\mathbb{K}_{1}}{1 - (\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} )\mathbb{L}_{\Lambda }}. $$
The proof proceeds in two steps.
Step I. We show that \(\mathcal{T}B_{r_{1}} \subset B_{r_{1}}\).
For any \(\Theta \in B_{r_{1}}\), we have
$$\begin{aligned} \bigl\vert (\mathcal{T}\Theta ) (t) \bigr\vert \leq & \Vert \Theta _{0} \Vert + \frac{1-\phi }{\mathbb{AB}(\phi )} \bigl\vert \Lambda \bigl(t, \Theta (t)\bigr) \bigr\vert + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \bigl\vert \Lambda \bigl(s, \Theta (s)\bigr) \bigr\vert \,ds \\ \leq & \Vert \Theta _{0} \Vert + \frac{1-\phi }{\mathbb{AB}(\phi )} \bigl[ \bigl\vert \Lambda \bigl(t, \Theta (t)\bigr) - \Lambda (t, 0) \bigr\vert + \bigl\vert \Lambda (t, 0) \bigr\vert \bigr] \\ &{} + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \bigl[ \bigl\vert \Lambda \bigl(s, \Theta (s)\bigr) - \Lambda (s, 0) \bigr\vert + \bigl\vert \Lambda (s, 0) \bigr\vert \bigr] \,ds \\ \leq & \Vert \Theta _{0} \Vert + \frac{1-\phi }{\mathbb{AB}(\phi )} [ \mathbb{L}_{\Lambda } r_{1} + \mathbb{K}_{1} ] + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \,ds [ \mathbb{L}_{\Lambda } r_{1} + \mathbb{K}_{1} ] \\ \leq & \Vert \Theta _{0} \Vert + \biggl(\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{ T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) [\mathbb{L}_{\Lambda } r_{1} + \mathbb{K}_{1} ] \leq r_{1}, \end{aligned}$$
which implies that \(\mathcal{T}B_{r_{1}} \subset B_{r_{1}}\).
Step II. We show that \(\mathcal{T}\) is a contraction.
For each \(\Theta _{1}\), \(\Theta _{2}\in B_{r_{1}}\) and for any \(t \in \mathcal{J}\), we obtain
$$\begin{aligned}& \bigl\vert (\mathcal{T}\Theta _{1}) (t) - (\mathcal{T}\Theta _{2}) (t) \bigr\vert \\& \quad \leq \frac{1-\phi }{\mathbb{AB}(\phi )} \bigl\vert \Lambda \bigl(t, \Theta _{1}(t)\bigr) - \Lambda \bigl(t, \Theta _{2}(t)\bigr) \bigr\vert \\& \qquad {}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \bigl\vert \Lambda \bigl(s, \Theta _{1}(s)\bigr) - \Lambda \bigl(s, \Theta _{2}(s)\bigr) \bigr\vert \,ds \\& \quad \leq \frac{(1-\phi ) \mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )} \bigl\vert \Theta _{1}(t) - \Theta _{2}(t) \bigr\vert + \frac{\phi \mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \bigl\vert \Theta _{1}(s) - \Theta _{2}(s) \bigr\vert \,ds \\& \quad \leq \biggl(\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \mathbb{L}_{\Lambda } \Vert \Theta _{1} - \Theta _{2} \Vert , \end{aligned}$$
$$ \Vert \mathcal{T}\Theta _{1} - \mathcal{T}\Theta _{2} \Vert \leq \biggl( \frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \mathbb{L}_{\Lambda } \Vert \Theta _{1} - \Theta _{2} \Vert . $$
Since \([ (1-\phi )/\mathbb{AB}(\phi ) + T_{\max }^{\phi }/(\mathbb{AB}(\phi ) \Gamma (\phi ))]\mathbb{L}_{\Lambda } < 1\), the operator \(\mathcal{T}\) is a contraction. Hence, by Banach's fixed point theorem (Lemma 2.7), \(\mathcal{T}\) has a unique fixed point, which is the unique solution of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) on \(\mathcal{J}\). □
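For concrete parameter choices, conditions of this kind are easy to check numerically. The sketch below evaluates the Banach condition (4.3) and, for later use, the Krasnoselskii condition \((1-\phi )\mathbb{L}_{\Lambda }/\mathbb{AB}(\phi ) < 1\); the values of \(T_{\max }\) and the Lipschitz constant \(\mathbb{L}_{\Lambda }\) passed in are assumptions for illustration only.

```python
from math import gamma

def ab_norm(phi):
    """Normalization function AB(phi) = 1 - phi + phi / Gamma(phi)."""
    return 1 - phi + phi / gamma(phi)

def fixed_point_conditions(phi, t_max, lip):
    """Return the Banach constant of (4.3) and the Krasnoselskii constant;
    t_max and lip (the Lipschitz constant of Lambda) are user assumptions."""
    ab = ab_norm(phi)
    banach = ((1 - phi) / ab + t_max**phi / (ab * gamma(phi))) * lip
    kras = (1 - phi) * lip / ab
    return banach, kras

b, k = fixed_point_conditions(phi=0.998, t_max=1.0, lip=0.2)
print(b < 1, k < 1)   # both must hold for the respective theorems to apply
```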
Existence result via Krasnoselskii's fixed point theorem
Assume that \((H_{1})\) holds and
\((H_{2})\):

there exist positive constants \(\mathbb{M}_{\Lambda }\), \(\mathbb{N}_{\Lambda }\) such that
$$ \bigl\vert \Lambda \bigl(t,\Theta (t)\bigr) \bigr\vert \leq \mathbb{M}_{\Lambda } \bigl\vert \Theta (t) \bigr\vert + \mathbb{N}_{\Lambda }, $$
for any \(\Theta \in \mathcal{W}\) and for all \(t\in \mathcal{J}\).
Then there exists at least one solution of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2), provided that \((1-\phi )\mathbb{L}_{\Lambda }/\mathbb{AB}(\phi ) < 1\).
Consider \(\mathcal{T} : \mathcal{W} \to \mathcal{W}\) defined by \((\mathcal{T}\Theta )(t) = (\mathcal{T}_{1}\Theta )(t) + (\mathcal{T}_{2} \Theta )(t)\), \(\Theta \in \mathcal{W}\), \(t \in \mathcal{J}\), where
$$\begin{aligned}& (\mathcal{T}_{1}\Theta ) (t) = \Theta _{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \Lambda \bigl(t, \Theta (t)\bigr), \end{aligned}$$
$$\begin{aligned}& (\mathcal{T}_{2}\Theta ) (t) = \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \Lambda \bigl(s, \Theta (s)\bigr)\,ds. \end{aligned}$$
Let \(B_{r_{2}} = \{\Theta \in \mathcal{W} : \|\Theta \| \leq r_{2}\}\) be a closed convex set with the radius
$$ r_{2} \geq \frac{ \Vert \Theta _{0} \Vert + (\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} )\mathbb{N}_{\Lambda }}{1 - (\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} )\mathbb{M}_{\Lambda }}. $$
The proof is divided into the following four steps.
Step I. We show that \(\mathcal{T}_{1}\Theta _{1} + \mathcal{T}_{2}\Theta _{2} \in B_{r_{2}}\) for all \(\Theta _{1}\), \(\Theta _{2} \in B_{r_{2}}\).
By the operator (4.5), we get
$$\begin{aligned}& \bigl\vert (\mathcal{T}_{1}\Theta _{1}) (t) + ( \mathcal{T}_{2}\Theta _{2}) (t) \bigr\vert \\& \quad \leq \Vert \Theta _{0} \Vert + \frac{1-\phi }{\mathbb{AB}(\phi )} \bigl\vert \Lambda \bigl(t, \Theta _{1}(t)\bigr) \bigr\vert + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \bigl\vert \Lambda \bigl(s, \Theta _{2}(s)\bigr) \bigr\vert \,ds \\& \quad \leq \Vert \Theta _{0} \Vert + \frac{1-\phi }{\mathbb{AB}(\phi )} [ \mathbb{M}_{\Lambda } r_{2} + \mathbb{N}_{\Lambda } ] + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \,ds [ \mathbb{M}_{\Lambda } r_{2} + \mathbb{N}_{\Lambda } ] \\& \quad \leq \Vert \Theta _{0} \Vert + \biggl(\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \mathbb{N}_{\Lambda } + \biggl( \frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \mathbb{M}_{\Lambda } r_{2} \\& \quad \leq r_{2}, \end{aligned}$$
which yields \(\|\mathcal{T}_{1}\Theta _{1} + \mathcal{T}_{2}\Theta _{2}\| \leq r_{2}\). Then \(\mathcal{T}_{1}\Theta _{1} + \mathcal{T}_{2}\Theta _{2} \in B_{r_{2}}\) for all \(\Theta _{1}\), \(\Theta _{2} \in B_{r_{2}}\).
Step II. We show that \(\mathcal{T}_{1}\) is a contraction.
For any \(\Theta _{1}\), \(\Theta _{2} \in B_{r_{2}}\), we have
$$\begin{aligned} \bigl\vert (\mathcal{T}_{1}\Theta _{1}) (t) - ( \mathcal{T}_{1}\Theta _{2}) (t) \bigr\vert \leq& \frac{1-\phi }{\mathbb{AB}(\phi )} \bigl\vert \Lambda \bigl(t, \Theta _{1}(t) \bigr) - \Lambda \bigl(t, \Theta _{2}(t)\bigr) \bigr\vert \\ \leq& \frac{(1-\phi )\mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )} \bigl\vert \Theta _{1}(t) - \Theta _{2}(t) \bigr\vert , \end{aligned}$$
which implies that \(\|\mathcal{T}_{1}\Theta _{1} - \mathcal{T}_{1}\Theta _{2}\| \leq [(1- \phi )\mathbb{L}_{\Lambda }/\mathbb{AB}(\phi )]\|\Theta _{1} - \Theta _{2}\|\). Since \((1-\phi )\mathbb{L}_{\Lambda }/\mathbb{AB}(\phi ) < 1\), \(\mathcal{T}_{1}\) is a contraction.
Step III. We show that \(\mathcal{T}_{2}\) is continuous and compact.
Let \(\Theta _{n}\) be a sequence such that \(\Theta _{n} \to \Theta \in \mathcal{W}\). Then, for any \(t \in \mathcal{J}\), we have
$$\begin{aligned} \bigl\vert (\mathcal{T}_{2}\Theta _{n}) (t) - ( \mathcal{T}_{2}\Theta ) (t) \bigr\vert \leq & \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \bigl\vert \Lambda \bigl(s, \Theta _{n}(s)\bigr) - \Lambda \bigl(s, \Theta (s)\bigr) \bigr\vert \,ds \\ \leq & \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \bigl\Vert \Lambda \bigl(\cdot , \Theta _{n}(\cdot )\bigr) - \Lambda \bigl(\cdot , \Theta ( \cdot )\bigr) \bigr\Vert . \end{aligned}$$
Since Λ is continuous, \(\mathcal{T}_{2}\) is also continuous, and we get \(\|\mathcal{T}_{2}\Theta _{n} - \mathcal{T}_{2}\Theta \| \to 0\) as \(n \to \infty \). Next, we show that \(\mathcal{T}_{2}\) is uniformly bounded on \(B_{r_{2}}\). For any \(\Theta \in B_{r_{2}}\) and \(t\in \mathcal{J}\), one has
$$ \bigl\vert (\mathcal{T}_{2}\Theta ) (t) \bigr\vert \leq \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \bigl\vert \Lambda \bigl(s, \Theta (s)\bigr) \bigr\vert \,ds \leq \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} [ \mathbb{M}_{\Lambda } r_{2} + \mathbb{N}_{\Lambda } ]. $$
This shows that \(\mathcal{T}_{2}\) is uniformly bounded on \(B_{r_{2}}\).
Step IV. We show that \(\mathcal{T}_{2}\) is equicontinuous.
Assume that \(\tau _{1}, \tau _{2} \in \mathcal{J}\) with \(0 \leq \tau _{1} < \tau _{2} \leq T\) and \(\Theta \in B_{r_{2}}\). Then we have
$$\begin{aligned}& \bigl\vert (\mathcal{T}_{2}\Theta ) (\tau _{2}) - (\mathcal{T}_{2}\Theta ) ( \tau _{1}) \bigr\vert \\& \quad \leq \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \biggl\vert \int _{0}^{ \tau _{2}}(\tau _{2} - s)^{\phi -1} \Lambda \bigl(s, \Theta (s)\bigr) \,ds - \int _{0}^{\tau _{1}}(\tau _{1} - s)^{\phi -1} \Lambda \bigl(s, \Theta (s)\bigr) \,ds \biggr\vert \\& \quad \leq \frac{\phi [\mathbb{M}_{\Lambda } r_{2} + \mathbb{N}_{\Lambda } ]}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggl\vert \int _{\tau _{1}}^{\tau _{2}}(\tau _{2} - s)^{\phi -1} \,ds + \int _{0}^{\tau _{1}} \bigl[(\tau _{2} - s)^{\phi -1} - (\tau _{1} - s)^{ \phi -1} \bigr] \,ds \biggr\vert \\& \quad \leq \frac{\mathbb{M}_{\Lambda } r_{2} + \mathbb{N}_{\Lambda }}{\mathbb{AB}(\phi )\Gamma (\phi )} \bigl( 2 \vert \tau _{2} - \tau _{1} \vert ^{\phi } \bigr). \end{aligned}$$
Clearly, the right-hand side of (4.8) is independent of \(\Theta \in B_{r_{2}}\) and tends to zero as \(\tau _{2} \to \tau _{1}\). Therefore, by the Arzelà–Ascoli theorem, \(\mathcal{T}_{2} B_{r_{2}}\) is relatively compact and \(\mathcal{T}_{2}\) is completely continuous. Hence, by Krasnoselskii's fixed point theorem (Lemma 2.8), the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) has at least one solution on \(\mathcal{J}\). □
Ulam's stability analysis of \(\mathbb{SMA}\) transmission mathematical model
This section discusses sufficient conditions under which the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) satisfies the four types of Ulam's stability: \(\mathbb{UH}\) stability, generalized \(\mathbb{UH}\) stability, \(\mathbb{UHR}\) stability, and generalized \(\mathbb{UHR}\) stability.
Firstly, we state the inequalities on which Ulam's stability notions are based. Let \(\varphi > 0\) be a positive real number and \(\mathcal{F}_{\Lambda } : \mathcal{J} \to \mathbb{R}^{+}\) be a continuous function. We consider
$$\begin{aligned}& \bigl\vert {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \xi (t) - \Lambda \bigl(t, \xi (t)\bigr) \bigr\vert \leq \varphi , \quad \forall t \in \mathcal{J}, \end{aligned}$$
$$\begin{aligned}& \bigl\vert {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \xi (t) - \Lambda \bigl(t, \xi (t)\bigr) \bigr\vert \leq \varphi \mathcal{F}_{\Lambda }(t),\quad \forall t \in \mathcal{J}, \end{aligned}$$
$$\begin{aligned}& \bigl\vert {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \xi (t) - \Lambda \bigl(t, \xi (t)\bigr) \bigr\vert \leq \mathcal{F}_{\Lambda }(t), \quad \forall t \in \mathcal{J}, \end{aligned}$$
where \(\varphi = \max (\varphi _{j})^{\mathbb{T}}\) for \(j = 1, 2, 3, 4, 5\).
Definition 5.1

(\(\mathbb{UH}\) Stability)
The \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is called \(\mathbb{UH}\) stable if there exists a real number \(C_{\Lambda } > 0\) such that, for every \(\varphi > 0\) and for each solution \(\xi \in \mathcal{W}\) of (5.1), there exists a solution \(\Theta \in \mathcal{W}\) of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) with
$$ \bigl\vert \xi (t) - \Theta (t) \bigr\vert \leq C_{\Lambda }\varphi ,\quad t\in \mathcal{J}, $$
where \(\varphi = \max (\varphi _{j})^{\mathbb{T}}\) and \(C_{\Lambda } = \max (C_{\Lambda _{j}})^{\mathbb{T}}\) for \(j = 1, 2, 3, 4, 5\).
Definition 5.2

(Generalized \(\mathbb{UH}\) Stability)
The \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is called generalized \(\mathbb{UH}\) stable if there exists a function \(\mathcal{F}_{\Lambda } \in \mathcal{C}(\mathbb{R}^{+}, \mathbb{R}^{+})\) with \(\mathcal{F}_{\Lambda }(0) = 0\) such that, for each solution \(\xi \in \mathcal{W}\) of (5.1), there exists a solution \(\Theta \in \mathcal{W}\) of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) such that
$$ \bigl\vert \xi (t) - \Theta (t) \bigr\vert \leq \mathcal{F}_{ \Lambda }( \varphi ),\quad t\in \mathcal{J}, $$
where \(\varphi = \max (\varphi _{j})^{\mathbb{T}}\) and \(\mathcal{F}_{\Lambda } = \max (\mathcal{F}_{\Lambda _{j}})^{ \mathbb{T}}\) for \(j = 1, 2, 3, 4, 5\).
Definition 5.3

(\(\mathbb{UHR}\) Stability)
The \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is called \(\mathbb{UHR}\) stable with respect to \(\mathcal{F}_{\Lambda } \in \mathcal{C}(\mathcal{J},\mathbb{R}^{+})\) if there exists a real number \(K_{\mathcal{F}_{\Lambda }} > 0\) such that for each \(\varphi > 0\) and for each solution \(\xi \in \mathcal{W}\) of (5.2) there exists a solution \(\Theta \in \mathcal{W}\) of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) with
$$ \bigl\vert \xi (t) - \Theta (t) \bigr\vert \leq K_{\mathcal{F}_{ \Lambda }} \varphi \mathcal{F}_{\Lambda }(t),\quad t\in \mathcal{J}, $$
where \(\varphi = \max (\varphi _{j})^{\mathbb{T}}\), \(K_{\mathcal{F}_{\Lambda }} = \max (K_{\mathcal{F}_{\Lambda _{j}}})^{ \mathbb{T}}\), and \(\mathcal{F}_{\Lambda } = \max (\mathcal{F}_{\Lambda _{j}})^{ \mathbb{T}}\) for \(j = 1, 2, 3, 4, 5\).
Definition 5.4

(Generalized \(\mathbb{UHR}\) Stability)
The \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is called generalized \(\mathbb{UHR}\) stable with respect to \(\mathcal{F}_{\Lambda }\in \mathcal{C}(\mathcal{J},\mathbb{R}^{+})\) if there exists a real number \(K_{\mathcal{F}_{\Lambda }} > 0\) such that, for each solution \(\xi \in \mathcal{W}\) of (5.3), there exists a solution \(\Theta \in \mathcal{W}\) of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) with
$$ \bigl\vert \xi (t) - \Theta (t) \bigr\vert \leq K_{\mathcal{F}_{ \Lambda }} \mathcal{F}_{\Lambda }(t), \quad t\in \mathcal{J}, $$
where \(K_{\mathcal{F}_{\Lambda }} = \max (K_{\mathcal{F}_{\Lambda _{j}}})^{ \mathbb{T}}\) and \(\mathcal{F}_{\Lambda } = \max (\mathcal{F}_{\Lambda _{j}})^{ \mathbb{T}}\) for \(j = 1, 2, 3, 4, 5\).
Remark 5.5

It is easy to see that (1) Def. 5.1 ⇒ Def. 5.2; (2) Def. 5.3 ⇒ Def. 5.4; (3) Def. 5.3 for \(\mathcal{F}_{\Lambda }(\cdot ) = 1\) ⇒ Def. 5.1.
Remark 5.6

A function \(\xi \in \mathcal{W}\) is a solution of (5.1) if and only if there exists a function \(w \in \mathcal{W}\) (which depends on ξ) with the following properties: (i) \(|w(t)|\leq \varphi \), \(w = \max (w_{j})^{\mathbb{T}}\), \(\forall t \in \mathcal{J}\); (ii) \({{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \xi (t) = \Lambda (t, \xi (t)) + w(t)\), \(\forall t\in \mathcal{J}\).
Remark 5.7

A function \(\xi \in \mathcal{W}\) is a solution of (5.2) if and only if there exists a function \(v \in \mathcal{W}\) (which depends on ξ) with the following properties: (i) \(|v(t)|\leq \varphi \mathcal{F}_{\Lambda }(t)\), \(v = \max (v_{j})^{\mathbb{T}}\), \(\forall t \in \mathcal{J}\); (ii) \({{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \xi (t) = \Lambda (t, \xi (t)) + v(t)\), \(\forall t\in \mathcal{J}\).
The \(\mathbb{UH}\) stability and generalized \(\mathbb{UH}\) stability results
Lemma 5.8

Let \(\phi \in (0,1]\). If \(\xi \in \mathcal{W}\) is a solution of (5.1), then ξ is a solution of the following inequality:
$$\begin{aligned}& \biggl\vert \xi (t) - \mathcal{R}_{\xi }(t) - \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \Lambda \bigl(s, \xi (s)\bigr)\,ds \biggr\vert \\& \quad \leq \biggl( \frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \varphi , \end{aligned}$$
where \(\mathcal{R}_{\xi }(t) = \xi _{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \Lambda (t, \xi (t))\).
Let ξ be a solution of (5.1). In view of Remark 5.6(ii), we have
$$ \textstyle\begin{cases} {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \xi (t) = \Lambda (t, \xi (t)) + w(t), \quad t\in \mathcal{J}, \\ \xi (0) = \xi _{0} \geq 0. \end{cases} $$
Then the solution of (5.9) can be written as
$$\begin{aligned} \xi (t) =& \xi _{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \Lambda \bigl(t, \xi (t)\bigr) + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \Lambda \bigl(s, \xi (s)\bigr)\,ds \\ &{} + \frac{1-\phi }{\mathbb{AB}(\phi )} w(t) + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} w(s) \,ds. \end{aligned}$$
By using Remark 5.6(i),
$$\begin{aligned}& \biggl\vert \xi (t) - \mathcal{R}_{\xi }(t) - \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \Lambda \bigl(s, \xi (s)\bigr)\,ds \biggr\vert \\& \quad \leq \frac{1-\phi }{\mathbb{AB}(\phi )} \bigl\vert w(t) \bigr\vert + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \bigl\vert w(s) \bigr\vert \,ds \\& \quad \leq \biggl(\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \varphi . \end{aligned}$$
Therefore, the inequality (5.8) is obtained. □
Theorem 5.9

Assume that \(\Lambda : \mathcal{J}\times \mathbb{R}^{5} \to \mathbb{R}^{5}\) is continuous. If \((H_{1})\) and (4.3) are fulfilled, then the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is \(\mathbb{UH}\) stable on \(\mathcal{J}\).
Suppose that \(\varphi > 0\) and let \(\xi \in \mathcal{W}\) be any solution of (5.1). Let \(\Theta \in \mathcal{W}\) be the unique solution of the model (4.1),
$$ \textstyle\begin{cases} {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \Theta (t) = \Lambda (t, \Theta (t)) , \quad t\in \mathcal{J}, \\ \Theta (0) = \Theta _{0}, \end{cases} $$
$$ \Theta (t) = \Theta _{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \Lambda \bigl(t, \Theta (t)\bigr) + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \Lambda \bigl(s, \Theta (s)\bigr)\,ds. $$
By using Lemma 5.8 with \((H_{1})\), we have
$$\begin{aligned} \bigl\vert \xi (t) - \Theta (t) \bigr\vert \leq & \biggl\vert \xi (t) - \mathcal{R}_{ \Theta }(t) - \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \Lambda \bigl(s, \Theta (s)\bigr)\,ds \biggr\vert \\ \leq & \biggl\vert \xi (t) - \mathcal{R}_{\xi }(t) - \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \Lambda \bigl(s, \xi (s)\bigr)\,ds \biggr\vert \\ &{} + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \bigl\vert \Lambda \bigl(s, \xi (s)\bigr) - \Lambda \bigl(s, \Theta (s)\bigr) \bigr\vert \,ds \\ \leq & \biggl(\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \varphi + \frac{\phi \mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \bigl\vert \xi (s) - \Theta (s) \bigr\vert \,ds \\ \leq & \biggl(\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )} \biggr) \varphi + \frac{T_{\max }^{\phi }\mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )\Gamma (\phi )} \bigl\vert \xi (t) - \Theta (t) \bigr\vert . \end{aligned}$$
This implies that \(|\xi (t) - \Theta (t)| \leq C_{\Lambda } \varphi \), where
$$ C_{\Lambda } = \frac{\frac{1-\phi }{\mathbb{AB}(\phi )} + \frac{T_{\max }^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi )}}{1 - \frac{T_{\max }^{\phi }\mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )\Gamma (\phi )}}. $$
Hence, the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is \(\mathbb{UH}\) stable. □
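The stability constant \(C_{\Lambda }\) is likewise directly computable once \(T_{\max }\) and \(\mathbb{L}_{\Lambda }\) are fixed; the sketch below evaluates the expression just derived, with illustrative input values, and is valid only while the denominator stays positive.

```python
from math import gamma

def uh_constant(phi, t_max, lip):
    """UH stability constant C_Lambda of Theorem 5.9; t_max and lip are
    illustrative assumptions. Valid only while the denominator is positive."""
    ab = 1 - phi + phi / gamma(phi)
    num = (1 - phi) / ab + t_max**phi / (ab * gamma(phi))
    den = 1 - t_max**phi * lip / (ab * gamma(phi))
    return num / den

print(uh_constant(0.998, 1.0, 0.2))   # bound: |xi(t) - Theta(t)| <= C * varphi
```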
Corollary 5.10
In Theorem 5.9, if we set \(\mathcal{F}_{\Lambda }(\varphi ) = C_{\Lambda } \varphi \) such that \(\mathcal{F}_{\Lambda }(0) = 0\), then the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is generalized \(\mathbb{UH}\) stable.
The \(\mathbb{UHR}\) stability and generalized \(\mathbb{UHR}\) stability results
Before proving the next results, we give the following assumption:
\(({H}_{3})\):
There exists an increasing function \(\mathcal{F}_{\Lambda } \in \mathcal{C}(\mathcal{J},\mathbb{R}^{+})\) and there exists \(\lambda _{\mathcal{F}_{\Lambda }} > 0\) such that, for any \(t \in \mathcal{J}\), we have the following integral inequality:
$$ {{}_{t}^{\mathbb{AB}}}\mathcal{I}_{0}^{\phi } \mathcal{F}_{\Lambda }(t) \leq \lambda _{\mathcal{F}_{\Lambda }} \mathcal{F}_{\Lambda }(t). $$
Lemma 5.11
Let \(\phi \in (0,1]\). If \(\xi \in \mathcal{W}\) is a solution of (5.2), then ξ is a solution of the following inequality:

$$ \biggl\vert \xi (t) - \mathcal{R}_{\xi }(t) - \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \Lambda \bigl(s, \xi (s)\bigr)\,ds \biggr\vert \leq \varphi \lambda _{ \mathcal{F}_{\Lambda }} \mathcal{F}_{\Lambda }(t), $$

where \(\mathcal{R}_{\xi }\) is defined as in Lemma 5.8.
Let ξ be a solution of (5.2). In view of Remark 5.7(ii), we have
$$ \textstyle\begin{cases} {{}_{t}^{\mathbb{ABC}}}\mathfrak{D}_{0}^{\phi } \xi (t) = \Lambda (t, \xi (t)) + v(t), \quad t\in \mathcal{J}, \\ \xi (0) = \xi _{0} \geq 0. \end{cases} $$
Then the solution of (5.12) can be written
$$\begin{aligned} \xi (t) =& \xi _{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \Lambda \bigl(t, \xi (t)\bigr) + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \Lambda \bigl(s, \xi (s)\bigr)\,ds \\ &{} + \frac{1-\phi }{\mathbb{AB}(\phi )} v(t) + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} v(s) \,ds. \end{aligned}$$
By using Remark 5.7(i), we have
$$\begin{aligned}& \biggl\vert \xi (t) - \mathcal{R}_{\xi }(t) - \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \Lambda \bigl(s, \xi (s)\bigr)\,ds \biggr\vert \\& \quad \leq \frac{1-\phi }{\mathbb{AB}(\phi )} \bigl\vert v(t) \bigr\vert + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \bigl\vert v(s) \bigr\vert \,ds \\& \quad \leq \varphi \lambda _{\mathcal{F}_{\Lambda }} \mathcal{F}_{\Lambda }(t). \end{aligned}$$
Hence, the desired inequality is obtained. □
Theorem 5.12
Assume that \(\Lambda : \mathcal{J}\times \mathbb{R}^{5} \to \mathbb{R}^{5}\) is continuous. If \((H_{1})\), \((H_{3})\), and (4.3) are fulfilled, then the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is \(\mathbb{UHR}\) stable on \(\mathcal{J}\).
Let \(\varphi > 0\) and let \(\xi \in \mathcal{W}\) be a solution of (5.2). Let \(\Theta \in \mathcal{W}\) be the unique solution of the model (4.1). By using Lemma 5.11, \((H_{1})\), and \((H_{3})\), we have
$$\begin{aligned} \bigl\vert \xi (t) - \Theta (t) \bigr\vert \leq & \biggl\vert \xi (t) - \mathcal{R}_{ \Theta }(t) - \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \Lambda \bigl(s, \Theta (s)\bigr)\,ds \biggr\vert \\ \leq & \biggl\vert \xi (t) - \mathcal{R}_{\xi }(t) - \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \Lambda \bigl(s, \xi (s)\bigr)\,ds \biggr\vert \\ &{} + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \bigl\vert \Lambda \bigl(s, \xi (s)\bigr) - \Lambda \bigl(s, \Theta (s)\bigr) \bigr\vert \,ds \\ \leq & \varphi \lambda _{\mathcal{F}_{\Lambda }} \mathcal{F}_{\Lambda }(t) + \frac{\phi \mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{\phi -1} \bigl\vert \xi (s) - \Theta (s) \bigr\vert \,ds \\ \leq & \varphi \lambda _{\mathcal{F}_{\Lambda }} \mathcal{F}_{\Lambda }(t) + \frac{T_{\max }^{\phi }\mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )\Gamma (\phi )} \bigl\vert \xi (t) - \Theta (t) \bigr\vert . \end{aligned}$$
This yields the inequality \(|\xi (t)-\Theta (t)| \leq K_{\mathcal{F}_{\Lambda }} \varphi \mathcal{F}_{\Lambda }(t)\), where
$$ K_{\mathcal{F}_{\Lambda }} = \frac{\lambda _{\mathcal{F}_{\Lambda }}}{1 - \frac{T_{\max }^{\phi }\mathbb{L}_{\Lambda }}{\mathbb{AB}(\phi )\Gamma (\phi )}}. $$
Therefore, the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is \(\mathbb{UHR}\) stable. □
Corollary 5.13

In Theorem 5.12, if we set \(\varphi = 1\) in \(|\xi (t)-\Theta (t)| \leq K_{\mathcal{F}_{\Lambda }} \varphi \mathcal{F}_{\Lambda }(t)\), then the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is generalized \(\mathbb{UHR}\) stable.
Numerical results
In this section, we introduce a numerical solution scheme for the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) and apply it to obtain a numerical simulation.
Numerical method
The \(\mathbb{SMA}\) model under the \(\mathbb{ABC}\)-fractional derivative is numerically simulated by using the novel numerical method proposed in [42]. For this purpose, we look again at the \(\mathbb{SMA}\) model in the form of (4.1) and (4.2). Employing the \(\mathbb{AB}\)-fractional integral operator on both sides of (4.1), we get
$$\begin{aligned}& \begin{aligned} \mathcal{S}(t) ={}& \mathcal{S}_{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \mathbb{G}_{1}(t, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q}) \\ &{}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \mathbb{G}_{1}(s, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q})\,ds, \end{aligned} \\& \begin{aligned} \mathcal{E}(t) = {}& \mathcal{E}_{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \mathbb{G}_{2}(t, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q}) \\ &{} + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \mathbb{G}_{2}(s, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q})\,ds, \end{aligned} \\& \begin{aligned} \mathcal{A}(t) = {}& \mathcal{A}_{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \mathbb{G}_{3}(t, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q}) \\ &{} + \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \mathbb{G}_{3}(s, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q})\,ds, \end{aligned} \\& \begin{aligned} \mathcal{R}(t) = {}& \mathcal{R}_{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \mathbb{G}_{4}(t, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q}) \\ &{}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \mathbb{G}_{4}(s, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q})\,ds, \end{aligned} \\& \begin{aligned} \mathcal{Q}(t) = {}& \mathcal{Q}_{0} + \frac{1-\phi }{\mathbb{AB}(\phi )} \mathbb{G}_{5}(t, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q}) \\ &{}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma (\phi )} \int _{0}^{t}(t - s)^{ \phi -1} \mathbb{G}_{5}(s, \mathcal{S}, \mathcal{E}, \mathcal{A}, \mathcal{R}, \mathcal{Q})\,ds. \end{aligned} \end{aligned}$$
We adapt the Adams-type predictor–corrector scheme presented in [42] to obtain a numerical approximation of the right-hand side of the system. In the first step of the algorithm, assuming that the solution lies in the closed interval \([0,T]\), we discretize this interval by setting \(h = T/N\) and \(t_{k} = hk\) (\(k = 0, 1, 2,{\ldots },N\)). Consequently, the corrector schemes of the variable-order integral form of the \(\mathbb{ABC}\)-fractional derivative are given as follows:
$$\begin{aligned}& \begin{aligned} \mathcal{S}_{k+1} ={}& \mathcal{S}_{0} + \frac{(1-\phi )h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \mathbb{G}_{1}\bigl(t_{k+1}, \mathcal{S}_{k+1}^{p}, \mathcal{E}_{k+1}^{p}, \mathcal{A}_{k+1}^{p}, \mathcal{R}_{k+1}^{p}, \mathcal{Q}_{k+1}^{p}\bigr) \\ &{}+ \frac{\phi h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \sum_{j=0}^{k} \Xi _{j,k+1}\mathbb{G}_{1}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \\ & \begin{aligned} \mathcal{E}_{k+1} ={}& \mathcal{E}_{0} + \frac{(1-\phi )h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \mathbb{G}_{2}\bigl(t_{k+1}, \mathcal{S}_{k+1}^{p}, \mathcal{E}_{k+1}^{p}, \mathcal{A}_{k+1}^{p}, \mathcal{R}_{k+1}^{p}, \mathcal{Q}_{k+1}^{p}\bigr) \\ &{}+ \frac{\phi h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \sum_{j=0}^{k} \Xi _{j,k+1}\mathbb{G}_{2}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \\ & \begin{aligned} \mathcal{A}_{k+1} ={}& \mathcal{A}_{0} + \frac{(1-\phi )h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \mathbb{G}_{3}\bigl(t_{k+1}, \mathcal{S}_{k+1}^{p}, \mathcal{E}_{k+1}^{p}, \mathcal{A}_{k+1}^{p}, \mathcal{R}_{k+1}^{p}, \mathcal{Q}_{k+1}^{p}\bigr) \\ &{}+ \frac{\phi h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \sum_{j=0}^{k} \Xi _{j,k+1}\mathbb{G}_{3}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \\ & \begin{aligned} \mathcal{R}_{k+1} ={}& \mathcal{R}_{0} + \frac{(1-\phi )h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \mathbb{G}_{4}\bigl(t_{k+1}, \mathcal{S}_{k+1}^{p}, \mathcal{E}_{k+1}^{p}, \mathcal{A}_{k+1}^{p}, \mathcal{R}_{k+1}^{p}, \mathcal{Q}_{k+1}^{p}\bigr) \\ &{}+ \frac{\phi h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \sum_{j=0}^{k} \Xi _{j,k+1}\mathbb{G}_{4}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \\ & \begin{aligned} \mathcal{Q}_{k+1} ={}& \mathcal{Q}_{0} + \frac{(1-\phi )h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \mathbb{G}_{5}\bigl(t_{k+1}, \mathcal{S}_{k+1}^{p}, \mathcal{E}_{k+1}^{p}, \mathcal{A}_{k+1}^{p}, \mathcal{R}_{k+1}^{p}, \mathcal{Q}_{k+1}^{p}\bigr) \\ &{}+ \frac{\phi h^{\phi }}{\mathbb{AB}(\phi )\Gamma (\phi +2)} \sum_{j=0}^{k} \Xi _{j,k+1}\mathbb{G}_{5}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \end{aligned}$$
$$ \Xi _{j,k+1}= \textstyle\begin{cases} k^{\phi +1} - (k-\phi )(k+1)^{\phi }, & \mbox{if } j=0, \\ (k-j+2)^{\phi +1} + (k-j)^{\phi +1} - 2(k-j+1)^{\phi +1}, & \mbox{if } 1 \leq j \leq k. \end{cases} $$
Further, the predictor terms \(\mathcal{S}_{k+1}^{p}\), \(\mathcal{E}_{k+1}^{p}\), \(\mathcal{A}_{k+1}^{p}\), \(\mathcal{R}_{k+1}^{p}\), \(\mathcal{Q}_{k+1}^{p}\) are described as
$$\begin{aligned}& \begin{aligned} \mathcal{S}_{k+1}^{p} ={}& \mathcal{S}_{0} + {\frac{1-\phi }{\mathbb{AB}(\phi )} } \mathbb{G}_{1}(t_{k}, \mathcal{S}_{k}, \mathcal{E}_{k}, \mathcal{A}_{k}, \mathcal{R}_{k}, \mathcal{Q}_{k}) \\ &{}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma ^{2}(\phi )} \sum_{j=0}^{k} \omega _{j,k+1}\mathbb{G}_{1}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \\& \begin{aligned} \mathcal{E}_{k+1}^{p} ={}& \mathcal{E}_{0} + {\frac{1-\phi }{\mathbb{AB}(\phi )} } \mathbb{G}_{2}(t_{k}, \mathcal{S}_{k}, \mathcal{E}_{k}, \mathcal{A}_{k}, \mathcal{R}_{k}, \mathcal{Q}_{k}) \\ &{}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma ^{2}(\phi )} \sum_{j=0}^{k} \omega _{j,k+1}\mathbb{G}_{2}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \\& \begin{aligned} \mathcal{A}_{k+1}^{p} ={}& \mathcal{A}_{0} + {\frac{1-\phi }{\mathbb{AB}(\phi )} } \mathbb{G}_{3}(t_{k}, \mathcal{S}_{k}, \mathcal{E}_{k}, \mathcal{A}_{k}, \mathcal{R}_{k}, \mathcal{Q}_{k}) \\ &{}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma ^{2}(\phi )} \sum_{j=0}^{k} \omega _{j,k+1}\mathbb{G}_{3}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \\& \begin{aligned} \mathcal{R}_{k+1}^{p} ={}& \mathcal{R}_{0} + {\frac{1-\phi }{\mathbb{AB}(\phi )} } \mathbb{G}_{4}(t_{k}, \mathcal{S}_{k}, \mathcal{E}_{k}, \mathcal{A}_{k}, \mathcal{R}_{k}, \mathcal{Q}_{k}) \\ &{}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma ^{2}(\phi )} \sum_{j=0}^{k} \omega _{j,k+1}\mathbb{G}_{4}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \\& \begin{aligned} \mathcal{Q}_{k+1}^{p} ={}& \mathcal{Q}_{0} + {\frac{1-\phi }{\mathbb{AB}(\phi )} } \mathbb{G}_{5}(t_{k}, \mathcal{S}_{k}, \mathcal{E}_{k}, \mathcal{A}_{k}, \mathcal{R}_{k}, \mathcal{Q}_{k}) \\ &{}+ \frac{\phi }{\mathbb{AB}(\phi )\Gamma ^{2}(\phi )} \sum_{j=0}^{k} \omega _{j,k+1}\mathbb{G}_{5}(t_{j}, \mathcal{S}_{j}, \mathcal{E}_{j}, \mathcal{A}_{j}, \mathcal{R}_{j}, \mathcal{Q}_{j}), \end{aligned} \end{aligned}$$
$$ \omega _{j,k+1}= \frac{h^{\phi }}{\phi }\bigl((k+1-j)^{\phi } - (k-j)^{\phi }\bigr),\quad 0 \leq j \leq k. $$
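A compact Python transcription of this predictor–corrector scheme is sketched below for a generic right-hand side G (so it applies componentwise to \(\mathbb{G}_{1},\ldots ,\mathbb{G}_{5}\)). We transcribe the displayed coefficients literally, including the factor \(\Gamma ^{2}(\phi )\) in the predictor, and flag it in a comment; readers should check the coefficients against [42] before relying on the output.

```python
from math import gamma

def abc_predictor_corrector(G, y0, T, N, phi):
    """Adams-type predictor-corrector for D^phi y = G(t, y), y(0) = y0,
    under the ABC derivative, transcribing the corrector/predictor weights
    Xi_{j,k+1} and omega_{j,k+1} displayed above."""
    h = T / N
    ab = 1 - phi + phi / gamma(phi)            # AB(phi)
    t = [h * k for k in range(N + 1)]
    y = [list(y0)]
    g = [G(t[0], y0)]                          # history of G(t_j, y_j)
    c_pred = phi / (ab * gamma(phi) ** 2)      # Gamma^2 as printed above
    c_corr = phi * h**phi / (ab * gamma(phi + 2))
    c_loc = (1 - phi) * h**phi / (ab * gamma(phi + 2))
    for k in range(N):
        # predictor step with weights omega_{j,k+1}
        yp = [y0_i + (1 - phi) / ab * gi for y0_i, gi in zip(y0, g[k])]
        for j in range(k + 1):
            w = (h**phi / phi) * ((k + 1 - j)**phi - (k - j)**phi)
            yp = [a + c_pred * w * b for a, b in zip(yp, g[j])]
        gp = G(t[k + 1], yp)
        # corrector step with weights Xi_{j,k+1}
        yc = [y0_i + c_loc * gi for y0_i, gi in zip(y0, gp)]
        for j in range(k + 1):
            if j == 0:
                xi = k**(phi + 1) - (k - phi) * (k + 1)**phi
            else:
                xi = ((k - j + 2)**(phi + 1) + (k - j)**(phi + 1)
                      - 2 * (k - j + 1)**(phi + 1))
            yc = [a + c_corr * xi * b for a, b in zip(yc, g[j])]
        y.append(yc)
        g.append(G(t[k + 1], yc))
    return t, y
```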
Numerical simulations
In this subsection, we demonstrate numerical simulations of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) by using the Adams-type predictor–corrector rule for the \(\mathbb{ABC}\)-fractional operator [42], as described in the previous subsection. The nonnegative parameter values used to obtain these numerical results are shown in Table 1.
Table 1 The description of parameters of the \(\mathbb{SMA}\) model (1.2)
If we set \(\beta = 0.30\) and \(\phi = 0.998\), then \(R_{0} = 0.3836 < 1\) and the transmission of the addiction dies out; the corresponding numerical simulation of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) is shown in Fig. 1 with \(N = 2000\) and \((\mathcal{S}_{0}, \mathcal{E}_{0}, \mathcal{A}_{0}, \mathcal{R}_{0}, \mathcal{Q}_{0} ) = (100, 1, 5, 0 ,10 )\). In this case the disease-free equilibrium point is \(\mathfrak{E}_{0} = (8.2906, 0, 0, 0, 1.6635 )\). We notice that the numbers of exposed and addicted individuals rapidly increase and then decrease to zero over time. As the exposed and addicted individuals recover, the recovered population first increases and, once transmission of the addiction has stopped, decreases to zero. Moreover, the number of people who permanently do not use or have quit using social media first increases rapidly and then, as the number of addicts declines, balances at the stable value 1.6635. On the other hand, the susceptible population initially decreases because of the addiction and then, as the numbers of exposed individuals and permanent quitters settle, increases again and balances at the stable value 8.2906.
Plots of the result of the model (1.2) for \(\phi = 0.998\) in the case \(R_{0} < 1\)
Furthermore, if we set \(\beta = 0.80\) and \(\phi = 0.998\), then \(R_{0} = 1.0209 > 1\). The endemic equilibrium point is \(\mathfrak{E}^{*} = (8.1212, 0.0450, 0.0104, 0.0236, 1.7517)\). The numerical results of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2) in this case are shown in Fig. 2. The figure shows that when \(R_{0} > 1\) the numbers of exposed and addicted individuals first increase sharply and, after the peak of the addiction has passed, stabilize at 0.0450 and 0.0104, respectively, while transmission of the addiction continues. Moreover, as the exposed and addicted populations decrease, the recovered population increases, then decreases and stabilizes at 0.0236. With the recovery of exposed and addicted individuals, the number of people who permanently do not use or have quit using social media also increases and eventually tends to the stable value 1.7517. On the other hand, as the numbers of exposed individuals and permanent quitters increase, the susceptible population decreases and then, after a slight increase, stabilizes at 8.1212.
Plots of the result of the model (1.2) for \(\phi = 0.998\) in the case \(R_{0} > 1\)
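Building on the sketches above (`sma_rhs`, `r0`, `abc_predictor_corrector`), a hypothetical driver for the two experiments just described might look as follows; the simulation horizon `T` and the non-quoted parameter values remain placeholders for the Table 1 data.

```python
phi, N, T = 0.998, 2000, 200.0                 # T is an assumed horizon
y0 = [100.0, 1.0, 5.0, 0.0, 10.0]              # (S0, E0, A0, R0, Q0)
for beta in (0.30, 0.80):
    p["beta"] = beta
    t, y = abc_predictor_corrector(lambda t, y: sma_rhs(y, p, phi),
                                   y0, T, N, phi)
    # final state should approach E_0 when R0 < 1 and E* when R0 > 1
    print(f"beta={beta}: R0={r0(p, phi):.4f}, y(T)={y[-1]}")
```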
The effect of fractional derivative orders
In this subsection, we consider the effect of the fractional derivative order on the results of the \(\mathbb{ABC}\)-fractional \(\mathbb{SMA}\) model (1.2). For this simulation, we apply the numerical scheme stated in Sect. 6.1 and the parameters given in Table 1 with \(\beta = 0.80\). The numerical simulations of the system (1.2) are shown in Figs. 3–7 for the fractional orders \(\phi = \{0.94, 0.96, 0.98, 0.998, 1.00\}\).
The quantity of \(\mathcal{S}(t)\) via \(\phi = 0.94, 0.96, 0.98, 0.998, 1.00\)
As we can see from Fig. 3, the susceptible population decreases faster as the fractional order ϕ increases toward 1, and it then stabilizes at \(\mathcal{S}^{*} = 8.1212\) for all fractional orders. Figures 4–5 show that the exposed and addicted populations rapidly increase and then decrease to \(\mathcal{E}^{*} = 0.0450\) and \(\mathcal{A}^{*} = 0.0104\), respectively, as ϕ approaches 1. Figures 6–7 show that the numbers of recovered people and of people who permanently do not use or have quit using \(\mathbb{SM}\) rapidly increase and then settle to \(\mathcal{R}^{*} = 0.0236\) and \(\mathcal{Q}^{*} = 1.7517\), respectively, as ϕ approaches 1. The main point of this manuscript is that small changes in the fractional derivative order do not affect the overall behavior of the resultant functions; only the numerical values are affected. In addition, the absolute errors of the numerical results for the five population groups for all fractional orders, compared with \(\phi = 1\), are shown in Tables 2–6 for \(\beta =0.30\) and in Tables 7–11 for \(\beta =0.80\).
Figure 4: The quantity of \(\mathcal{E}(t)\) for \(\phi = 0.94, 0.96, 0.98, 0.998, 1.00\)
Figure 5: The quantity of \(\mathcal{A}(t)\) for \(\phi = 0.94, 0.96, 0.98, 0.998, 1.00\)
Figure 6: The quantity of \(\mathcal{R}(t)\) for \(\phi = 0.94, 0.96, 0.98, 0.998, 1.00\)
Figure 7: The quantity of \(\mathcal{Q}(t)\) for \(\phi = 0.94, 0.96, 0.98, 0.998, 1.00\)
Table 2 The values of \(\vert \mathcal{S}_{1} - \mathcal{S}_{\phi } \vert \) for \(\phi = \{0.94, 0.96, 0.98, 0.998\}\)
Table 3 The values of \(\vert \mathcal{E}_{1} - \mathcal{E}_{\phi } \vert \) for \(\phi = \{0.94, 0.96, 0.98, 0.998\}\)
Table 4 The values of \(\vert \mathcal{A}_{1} - \mathcal{A}_{\phi } \vert \) for \(\phi = \{0.94, 0.96, 0.98, 0.998\}\)
Table 5 The values of \(\vert \mathcal{R}_{1} - \mathcal{R}_{\phi } \vert \) for \(\phi = \{0.94, 0.96, 0.98, 0.998\}\)
Table 6 The values of \(\vert \mathcal{Q}_{1} - \mathcal{Q}_{\phi } \vert \) for \(\phi = \{0.94, 0.96, 0.98, 0.998\}\)
Table 10 The values of \(\vert \mathcal{R}_{1} - \mathcal{R}_{\phi } \vert \) for \(\phi = \{0.94, 0.96, 0.98, 0.998\}\)
Table 11 The values of \(\vert \mathcal{Q}_{1} - \mathcal{Q}_{\phi } \vert \) for \(\phi = \{0.94, 0.96, 0.98, 0.998\}\)
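The comparison behind these tables can be sketched as follows, reusing the hypothetical `abc_pc_solve` helper above; the toy right-hand side, the horizon `T`, and the step count `N` are placeholders, not the paper's actual settings.

```python
import numpy as np

# Toy right-hand side standing in for the SMA vector field (placeholder only)
def sma_rhs(t, x):
    return -0.1 * x

orders = [0.94, 0.96, 0.98, 0.998, 1.00]
x0 = [100, 1, 5, 0, 10]
runs = {p: abc_pc_solve(sma_rhs, x0, T=200.0, N=2000, phi=p)[1] for p in orders}
# Analogue of Tables 2-11: |X_1(t_n) - X_phi(t_n)| for each compartment
abs_err = {p: np.abs(runs[1.00] - runs[p]) for p in orders[:-1]}
```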
In this manuscript, we considered a fractional-order \(\mathbb{SMA}\) model in the \(\mathbb{ABC}\)-derivative sense. The equilibrium points and the basic reproduction number of the system (1.2) were determined, and the conditions required for the stability of the system at the equilibrium points were examined. The existence of solutions of the proposed model (1.2) was investigated by applying Banach's and Krasnoselskii's fixed point theorems. The stability of the solutions was established by employing several versions of Ulam's stability: \(\mathbb{UH}\) stability, generalized \(\mathbb{UH}\) stability, \(\mathbb{UHR}\) stability, and generalized \(\mathbb{UHR}\) stability. A numerical method, namely an Adams-type predictor–corrector technique, was used to illustrate the approximate solutions for different fractional orders ϕ. Numerical simulations of the transmission of the addiction in the cases \(R_{0} < 1\) and \(R_{0} > 1\) were demonstrated, and the results show that in both cases the system is stable at its equilibrium points. We analyzed the dynamic behavior of the \(\mathbb{SMA}\) system as ϕ approaches 1. Finally, the system responses were predicted for various fractional derivative orders, demonstrating that small changes in the fractional derivative order do not affect the overall behavior of the solutions, only their numerical values.
This study offers a new way to explore the mathematical model of \(\mathbb{SMA}\) with the \(\mathbb{ABC}\)-fractional derivative. As an extension of this work, researchers may develop and apply this \(\mathbb{SMA}\) model with other types of fractional-order derivative operators.
The authors declare that all data and materials in this paper are available and verifiable.
J. Kongson and C. Thaiprayoon would like to thank the Center of Excellence in Mathematics (CEM) and Burapha University for funding this work. C. Tearnbucha was financially supported by Navamindradhiraj University through the Navamindradhiraj University Research Fund (NURF).
Jutarat Kongson & Chatthai Thaiprayoon: Department of Mathematics, Faculty of Science, Burapha University, Chonburi, 20131, Thailand; Center of Excellence in Mathematics, CHE, Sri Ayutthaya Rd., Bangkok, 10400, Thailand
Weerawat Sudsutad: Department of Applied Statistics, Faculty of Applied Science, King Mongkut's University of Technology North Bangkok, Bangkok, 10800, Thailand; Department of General Education, Faculty of Science and Health Technology, Navamindradhiraj University, Bangkok, 10300, Thailand
Jehad Alzabut: Department of Mathematics and General Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia; Group of Mathematics, Faculty of Engineering, Ostim Technical University, 06374, Ankara, Turkey
Chutarat Tearnbucha: Department of General Education, Faculty of Science and Health Technology, Navamindradhiraj University, Bangkok, 10300, Thailand
All authors have contributed equally and significantly to the contents of the paper. All authors have read and agreed to the published version of the manuscript.
Correspondence to Chutarat Tearnbucha.
Kongson, J., Sudsutad, W., Thaiprayoon, C. et al. On analysis of a nonlinear fractional system for social media addiction involving Atangana–Baleanu–Caputo derivative. Adv Differ Equ 2021, 356 (2021). https://doi.org/10.1186/s13662-021-03515-5
Keywords: Atangana–Baleanu–Caputo fractional derivative; fixed-point theorems; social media addiction; Ulam–Hyers stability
Wednesday, August 31, 2016
Muslim migrants demand housing in "true Germany"
I think that the Western European and U.S. mainstream media almost completely fail to inform the public about the basic facts concerning the ongoing migration wave. The situation is better in the Czech media.
The Ruhr district – the true Germany – in red.
It's been known that the Muslim migrants basically want to live in Germany – nothing else is good enough for them. According to the evaluations by the migrants and the geography as envisioned by the migrants, even Austria, Switzerland, Denmark, Benelux, Scandinavia etc. either fail to exist or they suck. However, the situation is a more extreme farce than that.
A significant fraction of the migrants don't want to live just in "some Germany". Instead, they want to live in the "true Germany". What does it mean? Well, the true Germany is the Ruhr district, a highly populated region in North Rhine-Westphalia. Roughly 10 million people live on less than 5,000 square kilometers over there, around cities like Dortmund, Essen, Duisburg, and Bochum.
Tuesday, August 30, 2016
Before contacting us, ETs at Sun 2.0 HD 164595 didn't even see Faraday's experiments
As lots of media tell us, Russian Academy's telescopes near the Georgian border detected a strong two-second-long radio signal at the frequency\[
f\sim 11\,{\rm GHz} \quad \Leftrightarrow \quad \lambda=\frac cf\sim 2.7\,{\rm cm}
\] plus or minus 10% (a huge bandwidth) on May 15th, 2015. They didn't have to tell us quickly and they didn't tell us quickly. Also, as anti-Russian pundits will surely tell us after they have read this blog post, this discovery may be just a trick to redirect all of the West's radars etc. to the constellation Hercules and make it easier for Russia to take over the world.
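A one-line sanity check of the quoted wavelength (my own arithmetic, not from the original post):

```python
c = 299_792_458.0      # speed of light in m/s
f = 11e9               # 11 GHz in Hz
print(100 * c / f)     # wavelength in cm: about 2.7
```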
There are many other conventional explanations for the signal (including secret satellites – the frequency belongs to a range, X band [obsolete NATO notation: J band], reserved for the military satellites; or a mundane 4.5-sigma bump that has to appear sometimes) but they're not too interesting so in the rest of the blog post, I will make the unlikely assumption that the signal came from the star that the SETI people want you to believe to be the source (because they have already redirected their Allen Array over there).
At any rate, the signals apparently arrived from the direction of HD 164595, in the constellation Hercules. This star is so similar to the Sun that it's sometimes dubbed Sun 2.0. (HD 44594 is the most Sun-like star in the Southern Hemisphere.) The age is 4.5 billion years (2% younger than the Sun), the mass is 0.99 solar masses (1% lighter). The amount of metals over there is also high enough and close to ours etc. Only 7.6% of stars are G-stars like the Sun or HD 164595.
Other texts on similar topics: astronomy, Russia
Hofer's call to restore Austrian monarchy deserves to be heard
The recent Austrian presidential elections were annulled after a result indistinguishable from 50-50 and a top court's confirmation of doubts about some irregularities. The polls will be repeated on October 2nd and especially because of the Freedom Party's softening stance on some EU- and immigration-related issues, a majority expects Norbert Hofer to beat his green rival.
The anthem of Austria-Hungary, the imperial and civic stanzas of it, in Czech. Other versions
As Sputnik tells us, the possible future president of Austria would like to create a regional group – the Austrian monarchy – that would be analogous to Benelux, a group that clearly amplifies the voice of the three members in the EU. If you think it sounds silly, think twice.
All novelties of quantum mechanics are consequences of nonzero commutators
In the discussion under his confused musings about the CHSH inequality, a volatile anti-quantum zealot had serious problems with the following important wisdom:
All differences between classical physics and quantum physics are consequences of the uncertainty principle i.e. of the nonzero commutators between observables.
Well, the statement above is true and important. The misunderstanding of this statement – often arrogantly masked as a different opinion, one that may be presented assertively – may be considered one of the defining characteristics of the anti-quantum zealots.
The anti-quantum zealots are inventing lots of "additional" differences between classical and quantum physics – such as the purported "extra nonlocality" inherent in quantum mechanics, or some completely new "entanglement" that is something totally different than anything we know in classical physics, or other things. Except that all these purported additional differences are non-existent.
The reason why they're inventing these non-existent differences is that they would like to rephrase quantum mechanics as "another classical theory with lots of different details" (by details, we mean the set of observables and the dynamical equations of motion etc.). But that's not what quantum mechanics is. Quantum mechanics is a fundamentally different theory which may have the same details as the classical counterpart, however. ;-)
TBBT season 10: The Flash, Penny's mother Susan
This is not meant to be an important or full-fledged post but tens of millions of people watch TBBT and my guess is that the TBBT-TRF correlation is positive so you may want to know that:
The Big Bang Theory will start its impressive 10th season – which may be the last one, however, or not – on CBS on Monday, September 19th at 8 pm EDT.
Other texts on similar topics: arts, science and society, TBBT, TV
Friday, August 26, 2016
The delirium over beryllium
Guest blog by Prof Flip Tanedo, a co-author of the first highlighted paper and the editor-in-chief of Particle Bites
Click at the pirate flag above for a widget-free version
Article: Particle Physics Models for the \(17\MeV\) Anomaly in Beryllium Nuclear Decays
Authors: J.L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait, F. Tanedo
Reference: arXiv:1608.03591 (Submitted to Phys. Rev. D)
Also featuring the results from:
Gulyás et al., "A pair spectrometer for measuring multipolarities of energetic nuclear transitions" (description of detector; 1504.00489; NIM)
Krasznahorkay et al., "Observation of Anomalous Internal Pair Creation in 8Be: A Possible Indication of a Light, Neutral Boson" (experimental result; 1504.01527; PRL version; note PRL version differs from arXiv)
Feng et al., "Protophobic Fifth-Force Interpretation of the Observed Anomaly in 8Be Nuclear Transitions" (phenomenology; 1604.07411; PRL)
Recently there's some press (see links below) regarding early hints of a new particle observed in a nuclear physics experiment. In this bite, we'll summarize the result that has raised the eyebrows of some physicists, and the hackles of others.
Other texts on similar topics: experiments, guest, string vacua and phenomenology
Relocation of migrants: will Czech politicians say a clear Nein to Merkel?
The former Czech visiting chemistry postdoc Angela Merkel is visiting Prague tomorrow. The relationships between the countries remain very good and the cooperation just works in the economy etc. Moreover, the scars created by the dramatic history of the Sudetenland have been healed even more than ever before in recent years and months. The Czech political sphere finds it OK to send a friendly pro-German messenger to the Sudetenland Patriots Conventions, while this traditional organization of the Czechoslovak Germans has basically abandoned efforts to recapture the real estate in the Czech borderland.
However, the migrant issue has emerged as a new polarizing question and placed the two countries on the opposite poles of the European policymaking.
Postdoc Angela Merkel (4th from left) in front of the St Vitus Cathedral in Prague, 1982. From left: Kazyuki Tatumi, Rudolf Zahradník, Milena Zahradníková, her, Olga Turečková, Zdeněk Havlas. She was learning to cook Czech dumplings and was able to do basic communication in Czech.
Despite all the terror attacks, the failure of efforts to employ the migrants, and other problems that are becoming increasingly self-evident, most Germans – and not just the strangely obsessed leaders of the largest European economy – arguably still believe that it's right to invite Muslims en masse and that "yes we can". About 95% of the Czech public believes that we shouldn't admit large groups of migrants, not even temporary migrants from the battlefronts, and most Czechs would probably like to ban even the immigration of individual Muslims.
A majority of the mainstream politicians are basically "regular Czechs" to the extent that they naturally have the same opinions. Some politicians are forced to adopt the anti-migration attitude because the support for migration is simply politically incorrect, it's considered a form of treason, stupidity, and egotist assault on most Czechs. Czechs are carefully watching what's going on in the Islamization issue (plus the attacks and also "peaceful" annoying behavior of the Muslims) despite the fact that it avoids our homeland so far. In the 1930s, Czechoslovakia was living its happy, cultural, musical, free, and democratic life as well and could have made fun of Hitler etc. In 1938-1939, this period of freedom and democracy abruptly ended – Hitler's March 1939 threat that he would flatten Prague if President Hácha didn't surrender in hours was helpful in accelerating the transformation. Even though the times are different and Merkel is unlikely to bomb Prague, threats may still be dramatic and rationally thinking nations with significantly larger neighbors simply must realize that the neighbor may have some capacity to change the rules of the game against our will. I must admit that I believe that if Merkel threatened that if we don't accept 50,000 Muslims, Merkel will ban all Czech-German trade, the Czech politicians will almost certainly surrender. But is Merkel this close to Hitler, this destructive against both Czech and German companies?
President Zeman is already known as one of the most outspoken anti-Islamists in Europe and there are others who share very similar opinions although they don't want to be heard as clearly (I think that billionaire and very powerful finance minister Andrej Babiš has basically the same opinions, just prefers to be less loud). Prime minister Sobotka is trying to maximize the friendly relationships in Germany and is widely seen as a potential traitor. But he said that "we" didn't wish a strong Muslim presence yesterday – a statement that was often simplified (e.g. in Nigeria) to the claim that Muslims are not welcome to Czechia. So I guess that when Merkel and her comrades are reading these reports, they must think that he isn't obeying the orders from Germany too rigorously. ;-)
A more reliable fifth column of Merkel's may be found in a part of the opposition centrist TOP 09 party and among some greens and apolitical politicians – although TOP 09 is surely not unhinged enough to present opinions that would actually match Merkel's views. The Prague Café, an intellectual environment in Prague, is by definition politically correct and e.g. pro-Merkel. That means that the percentages of opponents of mass migration are surely lower in the (wealthier) Prague than in the rest of the nation. I would like to know more precise numbers. Be sure that you never become a part of a community that is too wealthy – your brain would be likely to decay rather quickly.
After decades, Sean Carroll understood the "divergent distribution" problem with the simulations, anthropic principle
After many and many years in which Sean Carroll promoted the anthropic principle, the claim that we're the Boltzmann Brains, and the claim that we live in a simulation, he finally wrote a blog post
Maybe We Do Not Live in a Simulation: The Resolution Conundrum
in which he apparently understood a basic problem – well, a disproof by contradiction – with all these delusions. This problem has been pointed out in a large part of the TRF blog posts about the anthropic/simulation/typicality/BoltzmannBrain topics. The defenders of these misconceptions basically assume that there exists a uniform distribution on an infinite set (e.g. a countable set; or a continuous set with the infinite measure).
Because there exists no real number \(x\) such that \(x\cdot \infty =1\), the uniform distribution just cannot exist, and all reasoning based on the assumption that the uniform distribution exists is therefore flawed.
Other texts on similar topics: landscape, philosophy of science
Clinton, foundations, NGOs, and corruption with a lipstick
The FBI kept on investigating and it has found over 14,000 additional e-mails that the serial liar Hillary Clinton has pledged not to exist. A fraction of these e-mails more or less unequivocally show that Hillary Clinton has been paid-to-play through the Clinton Foundation by various donors. I say "more or less" because the causal connection between a payment and a politician's decision can almost never be proven with certainty.
The prince of Bahrain has paid some $100,000 in total and obtained some special meeting with Hillary for that. There are probably many other known cases but most of the e-mails remain classified. A week earlier, hackers released 2,500 Soros files. Those show this megajerk's obsession with his plans to destroy the state of Israel but other things could have been seen, too. For example, George Soros bought the U.S. policy in Albania when Hillary was the secretary of state.
Daesh invaded Prague's Old Town Square
48 years after the Soviet-led invasion, it was thankfully just a prank
South Bohemian Entomologist Assoc Prof Martin Konvička, the boss of the defunct Bloc Against Islam and the current leader of the Initiative of Martin Konvička, was the caliph and the main star of a happening that his IMK organized on the 48th anniversary of the August 21st, 1968 Warsaw Pact invasion into Czechoslovakia.
Some observers, like myself, consider the event a bit infantile but entertaining, many others reacted rather hysterically. The event was okayed by the Prague city hall and coordinated with the police – although both of these official legs of the government now claim that different events than the previously announced ones took place. Some tourists were unable to figure out that it was a hoax and the tension grew so at some moment, before Daesh was supposed to cut the head of a hostage – probably fake President Zeman – the police ended the event.
Other texts on similar topics: Czechoslovakia, Europe, everyday life, Middle East, politics, religion
Saturday, August 20, 2016
Rio pole vault event was obviously inferior
Russian track-and-field athletes were largely banned from the Olympic competitions in Rio. The explanation was the rather widespread doping among these Russian sportsmen. I am sure that the tolerance towards these "tools to improve the results" is much higher in Russia than it is in an average country.
But the decision to apply the collective guilt principle is hugely morally problematic. Also, it's crazy to present the doping as a part of the Russian identity – in the ethnic sense. Before Germany was reunified, the relatively small country of East Germany was often the #1 country at similar contests. A huge part of these East German successes was due to doping. Obviously, their Aryan race didn't prevent our DDR comrades from cheating in exactly the same way as some Russian athletes in recent years. The degree of institutionalization of doping was almost certainly higher in East Germany than it is in Russia today.
Pole vaulter Ms Yelena Isinbayeva (who retired hours ago) became the main face of the athletes who consider themselves victims of an unfair decision. She has won some nine Olympic-level gold medals, holds the current world record, and is considered a legend and the top female pole vaulter of all time. No evidence of forbidden chemicals has ever haunted her.
In the video above, from late July 2016, she cried in the Kremlin in front of Putin because of this elimination of the Russian sportsmen. I feel that Putin didn't particularly like this "crying attitude" to those decisions but it's likely that her speech helped to rally the Russian nationalism to some extent, anyway.
Other texts on similar topics: politics, Russia, sports
Consciousness: the trouble with Tononi's critics
In recent years, several readers have asked about my opinion about the Integrated Information Theory (IIT), a theory about "what consciousness is and where it is" started by neuroscientist Giulio Tononi in 2004. Some of them have expressed the opinion that IIT seems compatible with my understanding of the role of the ("conscious") observer in quantum mechanics etc.
My knowledge about IIT slowly grew and my opinions gradually strengthened. But it was the 2014 texts by computer scientist Scott Aaronson
Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
Why Scott should stare at a blank wall and reconsider (or, the conscious grid)
Giulio Tononi and Me: A Phi-nal Exchange
that I only saw now (hat tip: Pig) have convinced me that despite my and our ignorance about most of the key questions, my opinions are already rather strong and my basis to be certain that e.g. Scott Aaronson's critique is a pile of šit is rather solid.
Other texts on similar topics: biology, mathematics, philosophy of science
Thursday, August 18, 2016
Critics of wave-particle duality are confused
Yesterday, a reader voiced his dissatisfaction with the principle known as wave-particle duality. This Wikipedia article seems good to me, by the way, and I will refer to it a few times.
This opposition is widespread and it looks crazy to me. The principle is not even a precise principle of modern quantum mechanics. It's something more qualitative and older than quantum mechanics. The principle is obviously true and I simply can't imagine how someone may understand much about modern physics without agreeing that the wave-particle duality captures an important part of the defining insights of modern physics.
Intra-European migration tops its exotic counterpart
Czech linguist Jakub Marián has created another interesting map:
See e.g. the Maltese Independent. The map – which is said to be valid even when the data up to June 2016 are added – shows the dominant source of foreign-born residents in each European country. In other words, the map answers the question "Where did the largest group of aliens who live here come from?"
Other texts on similar topics: Europe, France, Middle East, politics, Russia
How pop science manipulates with the status of the Schrödinger equation
In a blog post, Sean Carroll says that you should love or respect the Schrödinger equation and you should appreciate that the Schrödinger equation may be applied in a wider range of situations, not just in the non-relativistic of mechanics of point-like particles.
I agree with these two points.
You could wonder why Sean Carroll hasn't written down the simple sentence in the first paragraph of this blog post and instead, added some three pages of redundant text. Well, it's because he needed to keep the percentage of misconceptions and distortions well above 50%, just like in almost all of his texts about quantum mechanics.
Cold summer, mushrooms, Czech wine, \(17\MeV\) boson
Central Europe is experiencing one of the coldest Augusts in recent memory. (For example, Czechia's top and oldest weather station in Prague-Klementinum broke the record cold high for August 11th, previously from 1869.) But it's been great for mushroom pickers.
When you spend just an hour in the forest, you may find dozens of mushrooms similar to this one-foot-in-diameter bay bolete (Czech: hřib hnědý [brown], Pilsner Czech: podubák). I don't claim that we broke the Czech record today.
Also, the New York Times ran a story on the Moravian (Southeastern Czechia's) wine featuring an entrepreneur who came from Australia to improve the business. He reminds me of Josef Groll, the cheeky Bavarian brewmaster who was hired by the dissatisfied dignified citizens of Pilsen in 1842 and improved the beer in the city by 4 categories. Well, the difference is that the Moravian wine has never really sucked, unlike Pilsen's beer, except for the Moravian beer served to the tourists from Prague, as NYT also explains.
Hat tip: the U.S. ambassador.
Other texts on similar topics: Czechoslovakia, everyday life, experiments, string vacua and phenomenology
Streamlining the world: Western Communities, privatization of Moon, recolonialization
It's not the most important story of the week but it is an annoying story, anyway.
For the first time in my life, a relatively small package of products I bought at amazon.com in the U.S. made it to the customs. I had to send numerous documents because of this almost trivial but not quite trivial purchase and in the optimistic case, I will get the product and have to pay the value-added tax plus some fees etc.
It has never happened to me in the past. Maybe it's because the things always went through FedEx, DHL or another courier company – but it was sent through USPS now which is probably more likely to hit the customs. (Do you have some knowledge on that?) And maybe the finance minister Babiš has increased the intensity of the customs terror. The amount of hassle because of this thing is significant. The people who are against TTIP – the EU-U.S. free trade agreement – even though they won't lose any job or profits seem like a different species to me. Who can like the principle of this harassment by the government bureaus? It's pure tyranny.
TTIP could be a core part of a new North-American-European union. But in principle, I think it would be a good idea to formally invite the U.S. and Canada (and perhaps Australia, Israel, and a few others) to the EU once we will be able to dismantle the aspects of the EU that have grown out of control. Some unions between the Western countries will always be there. I think that the artificial barriers should be dismantled and the West should be acknowledged as an important part of our identity. My Czech nationality is more important than other parts of the identity. But the perceived membership in the West is more important for me than the membership in some artificial subset called the EU.
Other texts on similar topics: astronomy, Europe, markets, Middle East, politics
Maldacena's talk at Strings 2016
I could download the slides from all the Strings 2016 talks just fine but the videos that were posted were unusable for me, due to the low bandwidth etc. Sometimes when you need it, China no longer looks like the ultimate 21st century superpower. Fortunately, Alexander Comsa posted the first 7 Strings 2016 videos from some of the most famous speakers. Even though only dozens of people in the world appreciate it, I hope that he is not finished yet.
One of the talks that were posted was Juan Maldacena's talk about entanglement, quantum gravity, and tensor networks. I find it rather amazing how close his conclusions about the current state of affairs – and even the proposed or expected next major steps – are to mine, especially if one compares it with the overwhelming majority of the "people around physics" (and, often, technically "inside" physics) who can't agree even about basic issues that physics settled 90 years ago.
Other texts on similar topics: stringy quantum gravity, video
Entanglement swapping doesn't violate locality
In his jihad against the principle of locality, Florin Moldoveanu has used entanglement swapping as a would-be argument. The claim he wants to fight against is that all correlations in the real world – in the successful approximation of non-gravitational quantum field theory – arise from the combination of quantum information's direct interaction (at one place) and the motion at most by the speed of light.
The misspelled word "implementation" on the picture isn't my fault. It's a fault of another anti-locality jihadist.
His situation is simple. (He doesn't have a picture and uses labels 1,2,3,4 for what is called A1,A2,B1,B2 on the picture above.) Two sources of entangled pairs of spin-1/2 particles (the gadgets at the bottom) create entangled spin-zero pairs. \[
\ket{\psi}_{A1+A2} = \frac{\ket{\uparrow_{A1}\downarrow_{A2}} - \ket{\downarrow_{A1}\uparrow_{A2}} }{\sqrt{2}}
\] Similarly for \(A\to B\). The internal members of the pairs A2,B1 propagate along the red lines towards the center where a joint measurement (the gadget at the center top) is being made.
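For concreteness, here is a minimal numpy sketch of the swapping algebra – my own illustration under ideal assumptions, not Moldoveanu's setup: prepare singlets on A1–A2 and B1–B2, project the central pair (A2, B1) onto the singlet Bell outcome, and verify that (A1, B2) end up in a singlet, this particular outcome occurring with probability 1/4.

```python
import numpy as np

# Singlet (|01> - |10>)/sqrt(2) as a 2x2 array indexed by the two qubits
singlet = np.array([[0.0, 1.0], [-1.0, 0.0]]) / np.sqrt(2.0)

# Full four-qubit state on (A1, A2, B1, B2): singlet(A1,A2) x singlet(B1,B2)
psi = np.einsum('ab,cd->abcd', singlet, singlet)

# Joint measurement at the center: project (A2, B1) onto the singlet outcome
post = np.einsum('abcd,bc->ad', psi, singlet.conj())

prob = np.sum(np.abs(post) ** 2)        # probability of this Bell outcome
post_normalized = post / np.sqrt(prob)  # resulting state of (A1, B2)

print(prob)             # 0.25
print(post_normalized)  # equals the singlet on (A1, B2), up to an overall sign
```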
Modern obsession with permanent revolutions in physics
Francisco Villatoro joined me and a few others in pointing out that it's silly to talk about crises in physics. The LHC just gave us a big package of data at unprecedented energies to confirm a theory that's been around for mere four decades. It's really wonderful.
People want new revolutions and they want them quickly. This attitude to physics is associated with the modern era. It's not new as I will argue but on the other hand, I believe that it was unknown in the era of Newton or even Maxwell. Modern physics – kickstarted by the discovery of relativity and quantum mechanics – has shown that something can be seriously wrong with the previous picture of physics. So people naturally apply "mathematical induction" and assume that a similar revolution will occur infinitely many times.
Well, I don't share this expectation. The framework of quantum mechanics is very likely to be with us forever. And even when it comes to the choice of the dynamics, string theory will probably be the most accurate theory forever – and quantum field theory will forever be the essential framework for effective theories.
Other texts on similar topics: experiments, landscape, science and society, string vacua and phenomenology, stringy quantum gravity
Naturalness, a null hypothesis, hasn't been superseded
Quanta Magazine's Natalie Wolchover has interviewed some real physicists to learn
What No New Particles Means for Physics
so you can't be surprised that the spirit of the article is close to my take on the same question published three days ago. Maria Spiropulu says that experimenters like her know no religion so her null results are a discovery, too. I agree with that. I am just obliged to add that if she were surprised she is not getting some big prizes for the discovery of the Standard Model at \(\sqrt{s}=13\TeV\), it's because her discovery is too similar to the discovery of the Standard Model at \(\sqrt{s}=1.96\TeV\), \(\sqrt{s}=7\TeV\), and \(\sqrt{s}=8\TeV\), among others. ;-) And the previous similar discoveries were already done by others.
She and others at the LHC are doing a wonderful job and tell us the truth but the opposite answer – new physics – would still be more interesting for the theorists – or any "client" of the experimenters. I believe that this point is obvious and it makes no sense to try to hide it.
Nima Arkani-Hamed says lots of things I appreciate, too, although his assertions are exaggerated, as I will discuss. It's crazy to talk about a disappointment, he tells us. Experimenters have worked hard and well. Those who whine that some new pet model hasn't been confirmed are spoiled brats who scream because they didn't get their favorite lollipop and they should be spanked.
Other texts on similar topics: experiments, LHC, philosophy of science, science and society, string vacua and phenomenology
Is a cosmic string playing with Tabby's star?
Aliens, you're fired: that's how Trump supporters do Tabby's science
It's almost midnight but I could have trouble sleeping because of this idea that I have to record somewhere. The blog seems like a better place than my scribbling notebook now – despite the fact that the idea could be embarrassingly wrong.
Thousands of young people are excited about a cosmic superstring in the constellation Cygnus.
Let me start with this nice music called "Superstring" by "Cygnus X". I've known it for some 15 years – around 2001, I found it as one composition among several others that appeared when I inserted superstring-like keywords to music searches. ;-) If these words were a hint, what would it tell you? Yes, Cygnus is a constellation so if you look for an experimental proof of superstrings, you should look in the constellation Cygnus (swan).
OK, if you were gazing in that direction for years, you would finally find a seemingly ordinary star, Tabby's star or KIC 8462852, which is some 1480 light years away from the Earth. Its radius and mass are about 50% higher than the Sun's. This star became famous because of its strangely behaving flux. Hundreds of news outlets argue that these adjustments of the flux were caused by extraterrestrial aliens, more specifically by a Dyson swarm they built to extract the energy from their Sun.
IceCube rules out the sterile neutrino model for LSND
All babies are being killed and embryos are being aborted these days.
ATLAS and CMS at the LHC have basically ruled out all theories predicting new particle physics phenomena for the first 10/fb of the \(\sqrt{s}=13\TeV\) data.
Meanwhile, South Dakota-based LUX has improved the limits on the WIMP dark matter cross section by a factor of four: dark matter, if it exists, is harder to detect directly than previously thought. (Dark matter may also be composed of LIGO-style black holes in which case we will hopefully not play with it here on Earth.) I must mention that almost simultaneously, a China-based experiment PandaX-II has basically matched the results of LUX. See a comparison of the two charts.
Lots of kind hypothetical new particles and processes were killed or postponed. What happened to the evil ones? They were killed or postponed, too. This also applies to sterile neutrinos, a particular family of beasts that are totally plausible but not beloved by people like me.
IceCube is based on the 86-string theory.
At the South Pole, The IceCube Neutrino Observatory was designed to detect \({\rm TeV}\)-scale high-energy cosmic muon neutrinos. Those get converted to muons and IceCube is particularly sensitive to those.
ATLAS: a 3.3-sigma stop excess is the new leader
Wolfram Language and Mathematica 11 released today.
No discovery of new physics has been presented by ATLAS or CMS. There are inevitably some bumps and excesses most of which (or all of which) will go away. In a complex blog post, lots of new CMS excesses have been shown. At the end, I do believe that the highly localized 3.7-sigma excited quark bump is the most eye-catching gift from CMS at this point.
ATLAS has just released their big package of new papers. The independence of ATLAS and CMS is also demonstrated by the CMS' decision to write papers based purely on the 2016 data. On the other hand, ATLAS' new papers are mostly based on the whole dataset of 13.2 inverse femtobarns which were collected in the year 2015+2016=4031.
Instead of looking at the ATLAS' papers with modest excesses – there aren't many – let me pick only one paper that seems more intriguing than the rest.
Other texts on similar topics: experiments, LHC
Contamination of Olympic ceremony by climate alarmism is unethical
Megastructure: the world media talk about the new preprint saying that the intensity of the seemingly ordinary KIC 8462852 Kepler star corresponds to a piecewise linear, decreasing function (a very non-smooth function), with a few-percent decrease sometimes in a few months and sometimes in a few years. The aliens have clearly approved astroengineering – basically helioengineering – projects to gradually screen their star and to fight against global warming on Earth B. Or just some clouds or devouring of the star by some other objects in a certain state of motion. Or something really crazy.
Those who watched the opening ceremony of the Olympic games in Rio were forced to see this Gore-style demagogic video on the climate panic:
I haven't heard the narration – which was done by James Bond's female boss.
This is quite incredible. Rio was also the place where the 1992 Earth Summit took place. Many people blame this particular event for the subsequent rapid propagation of the global pseudoscience-powered environmental fascism and its most aggressive form, the climate hysteria. You could think that the Brazilian politicians should be more careful but they're not.
Other texts on similar topics: climate, science and society, sports
ISIS claims responsibility for the diphoton excess
Anniversary: Exactly 25 years ago, on August 6th, 1991, Tim Berners-Lee posted a modest USENET post in which he announced the World Wide Web to the world.
Dr Abu Bakr al-Baghdadi, Nude Socialist is honored that you agreed to give us this exclusive interview. After Hillary Clinton, you became another leader of global importance – one who fights against the discrimination of Muslims – who talks to us. Why was there a diphoton excess near \(750\GeV\)?
We realized that some of our warriors can't even get to a disco club anymore. We were thinking about ways to harm the Western infidels most severely. Airplanes? Trains? Trucks? Guns? Axes? Machetes? Safety matches? Fingernails? Finally, Allah gave me a better idea. We would create a new God for the infidels – a God many of them consider more important than Allah and Mohammed (PBUH) – and then we would kill him.
How did you do that?
In Fall 2015, we have used a machete and cut the heads of 1 experimenter from ATLAS and 1 experimenter from CMS. Surprisingly, their replacements have agreed to inject some 4 sigma of signal into the ATLAS and CMS 2015 datasets. A new fake God, the \(750\GeV\) diphoton resonance, was born.
And in 2016, it was enough to do nothing.
Other texts on similar topics: experiments, LHC, Middle East, science and society
A woman's pro-Europe anti-Islam song
Olivie has released an English edition of the song on Sunday evening.
We had a friendly e-mail exchange and I tried to convince her to work with native speakers and make it more comprehensible to them etc. It seems to me that this recommendation was largely ignored. But you got what you got.
Ms Olivie Žižková (*1975) has only been known to the wider Czech public as the nude babe from a bizarre calendar that she created along with Mr Jiří Krytinář, a Czech dwarf actor [skeletal dysplasia] who died half a year ago [problem with lungs, 1947-2015].
A version with alternative pictures
But she has just written, recorded, and released this catchy song, "Europe, just breathe", that shows that she has some nontrivial talents that go beyond nudity. Don't get me wrong: many aspects of the song suffer from the same diseases as the "regular people's products" you may be receiving into your mailbox. The political content may be more passionate than the professional musicians' songs, the words are somewhat simple, and so is the music. I would surely modify the melody at several places for it to be less monotonous.
However, her intonation safely beats that of the average amateur (she isn't tone-deaf, at least relative to the average person) and even the marginally on-the-edge lyrics are rather polished if you look at them from the right perspective. It rhymes well – although the verses are somewhat infantile and the stresses aren't always right. But it's surely not bad for a piece of naked meat! ;-) It's pretty good even for an ambitious wannabe musician.
And I think it's just terrific we have at least women who are full of energy and who consider the European values and lifestyle worth defending. And I don't mind that she uses the trouble with migration for self-promotion – she deserves it.
Other texts on similar topics: Czechoslovakia, Europe, Middle East, murders, politics, religion
Papers on an "excited quark" bump could be even cooler than the cernette
CMS was the first one to have killed the \(750\GeV\) diphoton resonance. ATLAS' paper will be released on Monday.
Because the particle apparently doesn't exist, the most likely prediction for the 2016 dataset was that it disappears completely. It did. And even though CMS was against the publication of its new diphoton paper after it was for it ;-), we saw the paper with the new graphs in time, as I discussed in the previous blog post.
I encourage all particle phenomenologists who have mostly completed a model explaining the \(750\GeV\) bump not to send it to the arXiv. Instead, submit it to the competing viXra.org – it should be an easy process – for you to have a nice, arXiv-like URL and for the viXra amateur scientists to have a nice company, competition, and perhaps inspiration. (Well, most of them won't get inspired because they believe that they are brighter than you are LOL.)
Alternatively, you may just change the title and a few words, submit to arXiv and conferences, and pretend that your paper didn't depend on the diphoton excess. ;-)
There isn't anything to be excited about in the diphoton channel now. The new largest current bumps \(620,900,1300\GeV\) of CMS are small and disjoint with the small but largest \(975\GeV\) bump that ATLAS will probably show tonight. Update 4pm: See the new ATLAS plots, a press release, and the paper.
As the previous blog post mentioned, the highest new significance seen by the CMS is an excited quark (see Page 5, Figure 2) whose mass is almost exactly \(2.0\TeV\) and whose excess only appears in one bin of width of \(70\GeV\) or so. But locally, it's a cool 3.7-sigma excess, assuming a low \(f\sim 0.1\), which is a cubic coupling constant of the excited quarks to the SM gauge fields, and that still translates to a 2.84-sigma excess (see page 7) globally.
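To translate those quoted significances into one-sided p-values – a quick sanity check using the usual Gaussian convention, not anything from the ATLAS/CMS papers:

```python
from scipy.stats import norm

p_local = norm.sf(3.7)    # one-sided tail probability of a 3.7 sigma excess
p_global = norm.sf(2.84)  # after the look-elsewhere correction quoted above
print(p_local, p_global)  # about 1.1e-4 and 2.3e-3
```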
CMS abruptly confirms SM in 22+16+1+15 new papers
But you may raise your eyebrows with me 4+3+0+1 times
First talks have been given at ICHEP 2016 in Chicago, the main annual particle physics conference, and one of the experimental collaborations at the LHC is using this opportunity to impress lots of physicists with their new results.
Very quickly, the CMS Collaboration has released 22 new papers. (A message from the future: A few hours later, 16 were added; I analyzed those later and added some new discrepancies. The same applies to 15=14+1 additional papers a few more hours later.) I actually had to go to the 2nd and 3rd page (with 10 papers per page) – which don't display the date as clearly as the first page – to get all the new papers. That's unusual.
The third term, 1, in the number 22+16+1+15 in the title, is the new CMS diphoton paper killing the \(750\GeV\) cernette. Nothing is there at all. Totally inconclusive excesses emerged at \(620\GeV\), \(900\GeV\), and \(1300\GeV\) where a slight excess was seen by CMS in 2015, too, but nothing was there in ATLAS. (LOL, they later removed the new paper except for pages 1,2 but too bad, I already saw it, and you can see the key graph, too.)
I really recommend to send your almost finished pheno papers on the \(750\GeV\) bump to vixra.org (a blue submit button is at the top) so that those amateur scientists have some competition and motivation.
I won't pretend that I have read every letter in these 22 papers. My belief is that the number of people in the world outside ATLAS, CMS who actually read all these papers is smaller than 10 and even within the LHC collaborations, the numbers could be very low. But I can offer you the following minimum of the analysis:
I see at least 80% of the area of every page of every new ATLAS/CMS paper for at least 0.2 seconds
I spend at least 3 seconds by looking at every Brazilian chart in every paper
Every paper is searched for the words "excess", "deviation[s]", and "*agree*" and I am looking for anomalous sentences with these words (to increase our combined sensitivity, you shouldn't use exactly the same algorithm)
Reading whole paragraphs or pages is optional
OK, in this way, I have quickly "read" these new twenty-two papers by CMS.
Were Feynman's lectures wrong on the Faraday cage?
Betsy Devine, Frank Wilczek's wife, tweeted a hyperlink to an essay
Surprises of the Faraday Cage
by Lloyd N. Trefethen, a professor of numerical analysis, who claims that the Feynman Lectures on Physics are largely wrong when they discuss the Faraday cage.
Source, a similar video, II, a couple. An application of the Faraday cage. Jump into a cage, take it to a really bad thunderstorm (or near a "Tesla coil", as here), and show others that you are a superman. Walter Lewin survived some 200 kV but not really the feminists.
It's being said that Feynman concludes that the fields we consider "shielded" drop exponentially away from the metallic plane while the numerical professor argues that the decrease is linear. The paragraph about the linear dependence is particularly incomprehensible to me because Trefethen actually talks about the logarithmic dependence or inverse proportionality. In the same paragraph, the numerical professor also talks about "squaring of the electric field" which sounds like a very sloppy language – because of dimensional analysis. Quite generally, I have no idea what should I imagine under the term "linear shielding" (except for the exponential one).
Was Feynman wrong?
ICHEP, Strings: nothing scary about overlapping conferences
A conference may influence the face of particle physics for a long time
ICHEP 2016, the largest annual gathering of experimental and phenomenological particle physicists, is getting started in Chicago today. Well, the talks begin tomorrow so there are two days, Thursday and Friday, of a full-fledged overlap with Strings 2016. Because Peking is rather far from Chicago, the effective overlap is really three days.
The ICHEP logo promises us discoveries of the CP-violating angle in fermion mass matrices (done), observation of dark energy (done), the Higgs boson discovery (done), the discovery of a neutralino (cool!), and one more unexpected shocking round discovery in the middle (wow). Let's hope that Chicago's promises may be trusted, that Chicago isn't a city of liars, gangs, and criminals. ;-)
Is that immoral or outrageous that physicists must choose whether they visit Strings or ICHEP?
Trump, Goldman Sachs, stocks, and elections
Democrats may help Hillary to be elected by buying lots of stocks up to the elections
Sorry for the silence, I had some other offline duties today.
My computer had to work much harder – it took almost 3 hours in total to upgrade to Windows 10 Anniversary Edition. But it works great. The start menu is reorganized more effectively, colors are nicer, Edge may be getting superior but I don't plan to leave Chrome (it has extensions etc.), Windows Ink is fine and also offers Sticky Notes but I prefer my Long Notes Windows Gagdet so far. Windows now contains its own Bash (Linux shell) – search for it using the magnifying glass. I had to reinstall Windows Gadgets again (from GadgetsRevived.com). Windows decided that it refuses to resuscitate "Ccleaner", a somewhat parasitic "helpful" application I don't care about, and informed me about that ban. "SFC /SCANNOW" gives me "clean" again – I suspect that the violations before the upgrade were caused by some of the registry cleaning third-party software. The Windows calendar from the tray area contains your events from the Google/Hotmail calendar now, and so on.
There are lots of interesting things to mention about conferences etc. but it would take much more time and energy than I have now.
Men beat women in self-citations
Does it really imply some foul play?
There are still tons of highly annoying, bogus, feminist propagandist texts about the "subtle discrimination of women in STEM", such as this today's article by a fat Indian American feminist, but I chose another one that was released on the first day of August. The Washington Post has printed a weird story about "women in science":
New study finds that men are often their own favorite experts on any given subject
which builds on a preprint by Molly, Shelley, Jenny, and two co-authors who are nominally male:
Men set their own cites high: Gender and self-citation across fields and over time
Christopher Ingraham's article in the Washington Post starts with a picture of a self-confident man pointing to himself. The main message is that men cite themselves (the same person) more often than women do – some "index" is 1.5-1.7 times higher for men than for women – so it proves that there is something unfair about men's behavior or some discrimination against women or something like that.
Strings 2016 looks more stringy than in previous years
The picture is meant to convey the idea that people of Mongolian appearance have played with strings long before Veneziano
The Strings 2016 annual conference has gotten started in Beijing, China (Tsinghua University):
List of talks, schedule and slides, fresh pics, videos (2-day delay, seemingly unusably low bandwidth), main page
Looking at the list of talk titles, I do think that the composition of the talks is much more stringy than at the previous annual string conferences.
circumradius of isosceles triangle
An isosceles triangle calculator is the best choice if you are looking for a quick solution to this kind of geometry problem, but the underlying formulas are simple. The circumcircle of a triangle is the circle that passes through all three of its vertices (corner points); its radius is called the circumradius. The center of this circle, the circumcenter, is the point where the perpendicular bisectors of the sides of the triangle intersect. Note that the circumcenter can lie inside or outside the triangle, and not every polygon has a circumscribed circle; a polygon that does have one is called a cyclic (or concyclic) polygon.

Let $a$, $b$ and $c$ denote the triangle's three sides and let $A$ denote the area of the triangle. The general formula for the circumradius is
$$R=\frac{abc}{4A}=\frac{abc}{\sqrt{(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}}.$$
For example, with sides 8, 7 and 4 this gives $R=(8\cdot7\cdot4)/\sqrt{(8+7+4)(7-8+4)(8-7+4)(8+7-4)}\approx 4.000638$.

For an isosceles triangle with two equal sides of length $a$ and base $b$, the radius of the circumscribed circle reduces to
$$R=\frac{a^{2}}{2h}=\frac{a^{2}}{2\sqrt{a^{2}-\frac{b^{2}}{4}}},$$
where $h$ is the height to the base. For an equilateral triangle the circumradius is $\text{side}/\sqrt{3}$ (a side of 12 gives $12/\sqrt{3}$). In a right-angled triangle, where by Pythagoras' theorem the square of the hypotenuse equals the sum of the squares of the other two sides, the hypotenuse is a diameter of the circumcircle, so the circumradius is half the hypotenuse; in a right-angled isosceles triangle the ratio of the circumradius to the inradius works out to $\sqrt{2}+1\approx 2.41$. Another classical derivation for the isosceles case: notice that the small acute isosceles triangle appearing in the circumcenter construction is similar to the original triangle (look at the angles); comparing corresponding sides gives a quadratic equation for the circumradius, which is easily solved (quadratic formula!).

Worked problem: the circumradius of an isosceles triangle $ABC$ is four times its inradius, and $A=B$; find the angles. The identity $r=R(\cos A+\cos B+\cos C-1)$ does the work: $A=B$ implies $\cos B=\cos A$ and $\cos C=\cos(\pi-2A)=-\cos 2A$, so $r=(4r)(2\cos A-\cos 2A-1)$, which can be solved for $\cos A$. One can instead combine $A=rs$ (inradius times semiperimeter $s$), the isosceles area formula $A=\frac{c}{2}\sqrt{a^{2}-\frac{c^{2}}{4}}$ and $R=\frac{abc}{4A}$, but it is easy to slip up in the algebra along that route.

Related formulas using the same inputs: the inradius $r=\sqrt{\frac{(s-a)(s-b)(s-c)}{s}}$; Heron's formula $A=\sqrt{s(s-a)(s-b)(s-c)}$; the law of cosines $a=\sqrt{b^{2}+c^{2}-2bc\cos A}$; and the circumradius from the three exradii and the inradius, $R=\frac{r_{A}+r_{B}+r_{C}-r}{4}$.
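As a sketch of how such a calculator works (the function name is mine, not tied to any particular site's implementation):

```python
import math

def circumradius(a, b, c):
    """R = abc / sqrt((a+b+c)(-a+b+c)(a-b+c)(a+b-c))."""
    return a * b * c / math.sqrt(
        (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c))

print(round(circumradius(8, 7, 4), 6))          # 4.000638, as in the example above
print(circumradius(5, 5, 5), 5 / math.sqrt(3))  # equilateral case: side/sqrt(3)
```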
Another worked problem: triangle $ABC$ has circumcenter $O$, $AB$ is a chord of the circumcircle with $AB=10$, and the area of triangle $OAB$ is 30; find the circumradius of triangle $ABC$. We can find the altitude of triangle $OAB$ thusly: $30=\frac{1}{2}\,AB\cdot\text{altitude}$; multiply through by 2 to get $60=10\cdot\text{altitude}$, and divide both sides by 10, so the altitude is 6. Since $OA=OB$ equals the circumradius, the altitude from $O$ bisects the chord $AB$, giving $R=\sqrt{5^{2}+6^{2}}=\sqrt{61}$.

Practice problems of the same kind:
The area of an isosceles right triangle is 18 dm². How long are its legs?
Calculate the inradius and circumradius of an equilateral triangle with side $a=77$ cm.
The circumference of an isosceles triangle is 32.5 dm and the base length is 153 cm. How long is the leg of this triangle?
The leg of an isosceles triangle is 5 dm; its height is 20 cm longer than the base. Calculate the length of the base.
Calculate the size of the interior angles and the length of the base of an isosceles triangle if the length of the arm is 17 cm and the height to the base is 12 cm.
The circumradius of an equilateral triangle is 8 cm; find its side.
Find the area of an isosceles triangle with inradius $\sqrt{3}$ and angle $120^{\circ}$.
$ABCD$ is a rectangle and $M$ is the midpoint of $CD$. The inradii of triangles $ADM$ and $ABM$ are 3 and 4, respectively. Find the area of the rectangle.
In an equilateral triangle, three circles of radius one touch the sides as in the figure (not shown); find the area of the triangle. [IIT-2005]
A sector of a circle has an arc length of 20 cm. If the radius of the circle is 12 cm, find the area of the sector.
What is the relation between these two forms of a single-qubit unitary operation?
I want to understand the relation between the following two ways of deriving a (unitary) matrix that corresponds to the action of a gate on a single qubit:
1) HERE, in IBM's tutorial, they represent the general unitary matrix acting on a qubit as: $$ U = \begin{pmatrix} \cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\ e^{i\phi}\sin(\theta/2) & e^{i(\lambda+\phi)}\cos(\theta/2) \end{pmatrix}, $$ where $0\leq\theta\leq\pi$, $0\leq \phi<2\pi$, and $0\leq \lambda<2\pi$.
This is derived algebraically using the definition of a unitary operator $U$ to be: $UU^{\dagger}=I$.
2) HERE (pdf), similar to Kaye's book An Introduction to Quantum Computing, the same operator is calculated to be: $$U=e^{i\gamma}\,R_{\hat n}(\alpha).$$ Here, $R_{\hat n}(\alpha)$ is the rotation matrix around an arbitrary unit vector (a vector on the Bloch sphere) as the axis of rotation for an angle $\alpha$. Also, $e^{i\gamma}$ gives the global phase factor in the formula (which is not observable after all). The matrix corresponding to this way of deriving $U$ is: $$e^{i\gamma}\cdot\begin{pmatrix} \cos\frac{\alpha}{2}-i\,\sin\frac{\alpha}{2}\,\cos\frac{\theta}{2}&-i\,\sin\frac{\alpha}{2}\,e^{-i\phi}\\ -i\,\sin \frac{\alpha}{2}e^{i\phi}&\cos\frac{\alpha}{2}+i\,\sin\frac{\alpha}{2}\,\cos\theta\end{pmatrix}.$$
This derivation is clearer to me since it gives a picture of these gates in terms of rotating the qubits on the Bloch sphere, rather than just algebraic calculations as in 1.
Question: How do these angles correlate in 1 and 2? I was expecting these two matrices to be equal to each other up to a global phase factor.
P.S.: This correspondence seems instrumental to me for understanding the U-gates defined in the tutorial (IBM).
quantum-gate bloch-sphere
Mathist
Your second unitary isn't quite right, it's not even unitary! I think it should be: $$e^{i\gamma}\cdot\begin{pmatrix} \cos\frac{\alpha}{2}-i\,\sin\frac{\alpha}{2}\,\cos\frac{\theta}{2}&-i\,\sin\frac{\alpha}{2}\sin\frac{\theta}{2}\,e^{-i\phi}\\ -i\,\sin \frac{\alpha}{2}\sin\frac{\theta}{2}e^{i\phi}&\cos\frac{\alpha}{2}+i\,\sin\frac{\alpha}{2}\,\cos\frac{\theta}{2}\end{pmatrix}.$$
This may make it easier to find the correspondence. Let me put $\tilde\ $ over the entities from the first unitary in order to distinguish them.
Let's define $\tan(\beta)=\tan\frac{\alpha}{2}\cos\frac{\theta}{2}$. This is the phase of the first matrix element, so $$ \cos\frac{\alpha}{2}-i\,\sin\frac{\alpha}{2}\,\cos\frac{\theta}{2}=e^{i\beta}\cos\frac{\tilde\theta}{2}, $$ where we're allowing equality between the two unitaries to be up to a global phase $e^{i(\gamma+\beta)}$. In other words, $$ \cos^2\frac{\tilde\theta}{2}=\cos^2\frac{\alpha}{2}+\sin^2\frac{\alpha}{2}\cos^2\frac{\theta}{2}=\cos^2\frac{\alpha}{2}\sec^2\beta. $$
For the off-diagonal entries, recall that the entries of each column of a unitary matrix must have squared magnitudes summing to 1. Thus, the off-diagonal entries must be $\sin\frac{\tilde\theta}{2}$ up to some phase which we have to fix. We need $$ -\beta+\phi-\frac{\pi}{2}=\tilde\phi\qquad -\beta-\phi-\frac{\pi}{2}=\tilde\lambda+\pi, $$ where I've incorporated the $i$ and $-1$ factors using phases $\pi/2$ and $\pi$. That perfectly fixes the relations between those two.
Now we only have to get the bottom-right matrix element correct. Again, we've already got the weight correct by unitarity, it's just the phase that we need. This is $-2\beta$, which from adding together the above two relations gives $\tilde\phi+\tilde\lambda+2\pi\equiv\tilde\phi+\tilde\lambda$, exactly as required.
Yes, thank you. The correct version of the second matrix is what you wrote, just with $\theta$ instead of $\theta/2$, which is not that important. Aside from that, I don't understand this: If $-\beta+\phi-\frac{\pi}{2}=\tilde\phi$, then comparing the third entries of the matrices: $e^{i\tilde\phi}\,\sin(\tilde\theta/2)=e^{i(-\beta+\phi-\frac{\pi}{2})}\,\sin(\frac{\tilde\theta}{2})=(-i)e^{i\phi}\,e^{-i\beta}\,\sin(\frac{\tilde\theta}{2})$ equals to $-i\,\sin \frac{\alpha}{2}\sin\frac{\theta}{2}e^{i\phi}$, which implies: $e^{-i\beta}=\sin\frac{\alpha}{2}$, which is not right. Where is the issue?
– Mathist
The $e^{-i\beta}$ cancels with the $e^{i\beta}$ that needs to be a global phase in order to get the top-left entry correct, so you just end up with $\sin\frac{\tilde\theta}{2}=\sin\frac{\alpha}{2}\sin\frac{\theta}{2}$ (you cancelled two $\sin\frac{\theta}{2}$ terms, but one had a $\tilde\ $ which means you can't cancel them directly).
– DaftWullie
The IBM tutorial's representation of a general unitary matrix $U(\theta,\phi,\lambda)$ can be derived as a rotation of the qubit on the Bloch sphere, in much the same way as the pdf reference derives $R_{\hat{n}}(\alpha)$. But these are two different ways of doing the same operation, requiring different user inputs. $R_{\hat{n}}(\alpha)$ considers rotation of the qubit $|\psi\rangle$ about an arbitrary axis $\hat{n}$, whereas $U(\theta,\phi,\lambda)$ directly manipulates the initial qubit state $|\psi\rangle$ into $|\psi '\rangle$.
The figure (not reproduced here) shows qubit manipulation as rotation on the Bloch sphere in both ways. Either:
The initial qubit state can be rotated about $Z$ axis by angle $\lambda$, then about $Y$ axis by angle $\theta$, and finally about $Z$ axis by angle $\phi$, to achieve $|\psi '\rangle$. In matrix form, this can be written as:
\begin{array}{rl} U(\theta , \phi , \lambda) &= R_{Z}(\phi)R_{Y}(\theta)R_{Z}(\lambda) \\ &= \begin{bmatrix} e^{-i\phi/2} & 0\\ 0 & e^{i\phi/2} \end{bmatrix} \begin{bmatrix} \cos(\theta/2) & -\sin(\theta/2)\\ \sin(\theta/2) & \cos(\theta/2)\end{bmatrix} \begin{bmatrix} e^{-i\lambda/2} & 0\\ 0 & e^{i\lambda/2} \end{bmatrix}\\ &= \begin{bmatrix} \cos(\theta/2)e^{-i\phi/2} & -\sin(\theta/2)e^{-i\phi/2}\\ \sin(\theta/2)e^{i\phi/2} & \cos(\theta/2)e^{i\phi/2} \end{bmatrix} \begin{bmatrix} e^{-i\lambda/2} & 0\\ 0 & e^{i\lambda/2} \end{bmatrix}\\ &=e^{-i(\phi+\lambda)/2} \begin{bmatrix} \cos(\theta/2) & -\sin(\theta/2)e^{i\lambda}\\ \sin(\theta/2)e^{i\phi} & \cos(\theta/2)e^{i(\phi+\lambda)} \end{bmatrix}\\ &= \begin{bmatrix} \cos(\theta/2) & -\sin(\theta/2)e^{i\lambda}\\ \sin(\theta/2)e^{i\phi} & \cos(\theta/2)e^{i(\phi+\lambda)} \end{bmatrix}\mbox{, equal up to the global phase factor} \end{array}
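Before the second construction, a quick numerical sanity check of this first decomposition (a minimal NumPy sketch; the helper names are mine and the test angles are arbitrary):

```python
import numpy as np

def Rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def Ry(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]])

def U(theta, phi, lam):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -np.exp(1j * lam) * s],
                     [np.exp(1j * phi) * s, np.exp(1j * (phi + lam)) * c]])

theta, phi, lam = 0.7, 1.9, 2.4
A = Rz(phi) @ Ry(theta) @ Rz(lam)
phase = np.exp(1j * (phi + lam) / 2)               # the discarded global phase
print(np.allclose(phase * A, U(theta, phi, lam)))  # True
```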
Or, the operation of single qubit unitary gate can be visualized as rotation about arbitrary axis $\hat{n}$, i.e., first bringing $\hat{n}$ parallel to $|Z\rangle$, and then, rotating $|\psi\rangle$ by an angle $\alpha$ about $|Z\rangle$ axis, followed by bringing $\hat{n}$ back to its original position, as follows:
\begin{array}{rl} R_{\hat{n}}(\alpha) &= f(\alpha, n_{\theta}, n_{\phi}) \\ &= R_Z(n_{\phi})R_Y(n_{\theta})R_Z(\alpha)R_Y(-n_{\theta})R_Z(-n_{\phi}) \\ &= \begin{bmatrix} \cos(\alpha/2)-i\sin(\alpha/2)\cos(n_{\theta}) & -i\sin(\alpha/2)\sin(n_{\theta})e^{-in_{\phi}}\\ -i\sin(\alpha/2)\sin(n_{\theta})e^{in_{\phi}} & \cos(\alpha/2)+i\sin(\alpha/2)\cos(n_{\theta}) \end{bmatrix} \end{array}
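This closed form can likewise be checked against the axis-angle formula $R_{\hat n}(\alpha)=\cos(\alpha/2)\,I - i\sin(\alpha/2)\,\hat n\cdot\vec\sigma$ (again a sketch with arbitrary angles, not part of the original answer):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def Rz(a): return np.cos(a / 2) * I2 - 1j * np.sin(a / 2) * Z
def Ry(a): return np.cos(a / 2) * I2 - 1j * np.sin(a / 2) * Y

alpha, n_theta, n_phi = 1.1, 0.6, 2.0
n = [np.sin(n_theta) * np.cos(n_phi),     # unit axis from its polar angles
     np.sin(n_theta) * np.sin(n_phi),
     np.cos(n_theta)]
axis_angle = np.cos(alpha / 2) * I2 - 1j * np.sin(alpha / 2) * (
    n[0] * X + n[1] * Y + n[2] * Z)
conjugated = Rz(n_phi) @ Ry(n_theta) @ Rz(alpha) @ Ry(-n_theta) @ Rz(-n_phi)
print(np.allclose(axis_angle, conjugated))  # True
```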
I guess the reason $U(\theta,\phi,\lambda)$ is an easier choice than $R_{\hat{n}}(\alpha)$ is that:

For a given pair of initial and final qubit states, there is a unique magnitude of $(\theta, \phi, \lambda)$ representing the unitary qubit gate, but the same cannot be said about the combination $(\alpha, \hat{n})$. This is because any axis lying in the plane that perpendicularly bisects the angle between the initial and final qubit Bloch vectors can represent that unitary gate. Of course, $\alpha$ depends on the rotation axis chosen; $\alpha$ can be found by noting the angle between the projections of $|\psi\rangle$ and $|\psi '\rangle$ on the plane normal to the axis $\hat{n}$ (not shown in the figure).

Given a 2x2 unitary matrix, it's much easier to find $(\theta, \phi, \lambda)$ than to find $(\alpha, n_{\theta}, n_{\phi})$. Though one may argue that finding $(\alpha, n_{x}, n_{y}, n_{z})$ is much easier, that's beside the point, and

It's far more intuitive to think of directly rotating the initial qubit to the final state than to bring a third vector into the picture to do the job.
One may try finding the relation between the two matrices, which, in this analysis, essentially boils down to finding the relations between $(\theta, \phi, \lambda)$ and $(\alpha, n_{\theta}, n_{\phi})$.
Trishant Sahu
Hi and welcome to Quantum Computing SE. Very nice answer. +1
@MartinVesely Thank you.
– Trishant Sahu
College Physics 2013
Eugenia Etkina, Michael Gentle, Alan Van Heuvelen
Chapter Questions
Wavelength of radiation from a person If a person could be modeled as a black body, at what wavelength would his or her surface emit the maximum energy?
(a) A surface at $27^{\circ} \mathrm{C}$ emits radiation at a rate of 100 $\mathrm{W}$ . At what rate does an identical surface at $54^{\circ} \mathrm{C}$ emit radiation? (b) Determine the wavelength of the maximum amount of radiation emitted by each surface.
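Part (a) is a direct application of the Stefan–Boltzmann law, $P\propto T^{4}$, with temperatures converted to kelvin; a quick check (a sketch, not the book's solution):

```python
P1 = 100.0                    # W, radiated at 27 deg C
T1, T2 = 27 + 273, 54 + 273   # temperatures in kelvin
print(round(P1 * (T2 / T1) ** 4))  # ~141 W
```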
Maximum radiation wavelength from star, Sun, and Earth Determine the wavelengths for the following black body radiation sources where they emit the most energy: (a) A bluewhite star at 40,000 K; (b) the Sun at 6000 K; and (c) Earth at about 300 K.
Star colors and radiation frequency The colors of the stars in the sky range from red to blue. Assuming that the color indicates the frequency at which the star radiates the maximum amount of electromagnetic energy, estimate the surface temperature of red, yellow, white, and blue stars. What assumptions do you need to make about white stars to estimate the surface temperature?
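All of these black-body peak wavelengths follow from Wien's displacement law, $\lambda_{\max}=b/T$ with $b\approx 2.898\times10^{-3}\ \mathrm{m\cdot K}$. A minimal sketch (the 310 K skin temperature is my assumption for the first problem):

```python
b = 2.898e-3  # Wien's constant, m*K
for name, T in [("skin (assumed 310 K)", 310),
                ("blue-white star", 40_000),
                ("Sun", 6_000),
                ("Earth", 300)]:
    print(f"{name}: lambda_max = {b / T * 1e9:.0f} nm")
```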
Estimate the surface area of a 60-watt lightbulb filament. Assume that the surface temperature of the filament when it is plugged into an outlet of 120 V is about 3000 K and the power rating of the bulb is the electric energy/s it consumes (not what it radiates). Incandescent lightbulbs usually radiate in visible light about 10% of the electric energy that they consume.
Photon emission rate from human skin Estimate the number of photons emitted per second from 1.0 $\mathrm{cm}^{2}$ of a person's skin if a typical emitted photon has a wavelength of $10,000 \mathrm{nm} .$
Balancing Earth radiation absorption and emission Compare the average power that the surface of Earth facing the Sun receives from it to the energy that Earth emits over its entire surface due to it being a warm object. Assume that the average temperature of Earth's surface is about $15^{\circ} \mathrm{C}$ . The distance between Earth and the Sun is about $1.5 \times 10^{11} \mathrm{m} .$
(a) Explain how you convert energy in joules into energy in electron volts. (b) The kinetic energy of an electron is 2.30 eV. What is its kinetic energy in joules?
Draw a picture of a phototube and the electric circuit that you can build to study the photoelectric effect. Label all of the parts and explain the purpose of each part.
(a) Describe the experimental findings for the photoelectric effect. (b) What findings could be explained by the wave model of light? (c) What experimental findings concerning the photoelectric effect could not be explained by the wave model of light?
The stopping potential for an ejected photoelectron is -0.50 V. What is the maximum kinetic energy of the electron ejected by the light?
Light shines on a cathode and ejects electrons. Draw an energy bar chart describing this process. Explain why the frequency of incident light determines whether the electrons will be ejected or not.
What is the cutoff frequency of light if the cathode in a photoelectric tube is made of iron?
The work function of cesium is 2.1 eV. (a) Determine the lowest frequency photon that can eject an electron from cesium. (b) Determine the maximum possible kinetic energy in electron volts of a photoelectron ejected from the metal that absorbs a 400-nm photon
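For reference, a sketch of the arithmetic using the photoelectric equation $K_{\max}=hf-W$ (values from the problem; $hc\approx1240\ \mathrm{eV\cdot nm}$):

```python
h = 4.136e-15   # Planck's constant, eV*s
W = 2.1         # work function of cesium, eV
print(f"(a) f0 = {W / h:.2e} Hz")              # cutoff frequency, ~5.1e14 Hz
print(f"(b) K_max = {1240 / 400 - W:.1f} eV")  # 400-nm photon, ~1.0 eV
```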
Visible light shines on the metal surface of a photocell having a work function of 1.30 eV. The maximum kinetic energy of the electrons leaving the surface is 0.92 eV. Determine the light's wavelength.
Equation Jeopardy 1 Solve for the unknown quantity in the equation below and write a problem for which the equation could be a solution.
$$-3.9\ \mathrm{eV}+\left(6.63 \times 10^{-34}\ \mathrm{J \cdot s}\right) f\left(\frac{1\ \mathrm{eV}}{1.6 \times 10^{-19}\ \mathrm{J}}\right)=(-e)(-1.0\ \mathrm{V})$$
Camera film exposure In an old-fashioned camera, the film becomes exposed when light striking it initiates a complex chemical reaction. A particular type of film does not become exposed if struck by light of wavelength longer than 670 nm. Determine the minimum energy in electron volts needed to initiate the chemical reaction.
CO vibration A vibrating carbon monoxide (CO) molecule produces infrared photons of energy 0.26 eV. Determine the frequency of CO vibration, which is the same as the frequency of the infrared radiation the molecule emits.
Breaking a molecular bond Suppose the bond in a molecule is broken by photons of energy 5.0 eV. Determine the frequency and wavelength of these photons and the region of the electromagnetic spectrum in which they are located.
A 1.0-eV photon's wavelength is 1240 nm. Use a ratio technique to determine the wavelength of a 5.0-eV photon.
Tanning bed In a tanning bed, exposure to photons of wavelength 300 nm or less can do considerable damage. Determine the lowest energy in electron volts of such photons.
Determine the number of 650-nm photons that together have energy equal to the rest energy of an electron. [Hint: See Section 25.8.]
Laser surgery Scientists studying the use of lasers in various surgeries have found that very short $10^{-12}$-s laser pulses of power $10^{12}\ \mathrm{W}$, with 65 pulses every $200 \times 10^{-6}\ \mathrm{s}$, produced much cleaner welds and ablations (removals of body tissues) than longer laser pulses. Determine the number of $10.6$-$\mu \mathrm{m}$ photons in one pulse and the average power during the 65 pulses delivered in $200 \times 10^{-6}\ \mathrm{s}$.
A laser beam of power $P$ in watts consists of photons of wavelength $\lambda$ in nanometers. Determine in terms of these quantities the number of photons passing a cross section along the beam's path each second.
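The answer is $N = P\lambda/(hc)$ photons per second, since each photon carries energy $hc/\lambda$. A sketch with assumed example values (0.50 W at 670 nm, the numbers used in the laser-sail problems below):

```python
h, c = 6.63e-34, 3.0e8   # J*s, m/s

def photons_per_second(P_watts, lam_nm):
    return P_watts * lam_nm * 1e-9 / (h * c)

print(f"{photons_per_second(0.50, 670):.2e}")  # ~1.7e18 photons/s
```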
What is the mass of the photons in the previous problem?
Pulsed laser replaces dental drills A laser used for many applications of hard surface dental work emits 2780-nm wavelength pulses of variable energy (0–300 mJ) about 20 times per second. Determine the number of photons in one 100-mJ pulse and the average power of these photons during 1 s.
Light hitting Earth The intensity of light reaching Earth is about 1400 $\mathrm{W} / \mathrm{m}^{2} .$ Determine the number of photons reaching a $1.0-\mathrm{m}^{2}$ area each second. What assumptions did you make?
Lightbulb Roughly 10% of the power of a 100-watt incandescent lightbulb is emitted as light, the rest being emitted as heat and longer-wavelength radiation. Estimate the number of photons of light coming from a bulb each second. What assumptions did you make? How will the answer change if the assumptions are not valid?
Human vision sensitivity To see an object with the unaided eye, the light intensity coming to the eye must be about $5 \times 10^{-12}\ \mathrm{J/(m^{2} \cdot s)}$ or greater. Determine the minimum number of photons that must enter the eye's pupil each second in order for an object to be seen. Assume that the pupil's radius is 0.20 cm and the wavelength of the light is 550 nm.
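A sketch of this estimate, multiplying the threshold intensity by the pupil area and dividing by the photon energy:

```python
import math

h, c = 6.63e-34, 3.0e8
I = 5e-12                   # threshold intensity, W/m^2
r = 0.20e-2                 # pupil radius, m
E_photon = h * c / 550e-9   # J per 550-nm photon
print(round(I * math.pi * r**2 / E_photon))  # ~174 photons per second
```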
Explain how a cathode ray tube works. Draw a picture and an electric circuit. Label the important elements and explain how they work together to produce cathode rays.
Explain how we know that cathode rays are low-mass light negatively charged particles. Draw pictures and field diagrams to illustrate your explanation.
An X-ray tube emits photons of frequency $1.33 \times 10^{19} \mathrm{Hz}$ or less. (a) Explain how the tube creates the X-ray photons. (b) Determine the potential difference across the X-ray tube.
Electrons are accelerated across a 40,000-V potential difference. (a) Explain why X-rays are created when the electrons crash into the anode of the X-ray tube. (b) Determine the frequency and wavelength of the maximum-energy photons created.
An electron with kinetic energy $K$ moving horizontally to the right in a tube of length $L$ passes through a uniform electric field with an $\vec{E}$ field that points downward. The electron is initially moving toward the center of the screen. Develop an expression for the strength of the field so the electron hits the screen a vertical height $h$ above the center.
An electron with kinetic energy K moving horizontally to the right in a tube of length L enters a uniform electric field that points upward. How strong and in what direction should a magnetic field be so that the electron moves straight ahead with no velocity change?
A small $1.0 \times 10^{-5}$ -g piece of dust falls in Earth's gravitational field. Determine the distance it must fall so that the change in gravitational potential energy of the dust-Earth system equals the energy of a 0.10 -nm X-ray photon.
X-ray exam While being X-rayed, a person absorbs $3.2 \times 10^{-3}\ \mathrm{J}$ of energy. Determine the number of 40,000-eV X-ray photons absorbed during the exam.
Body cell X-ray (a) A body cell of $1.0 \times 10^{-5}$-m radius absorbs $4.2 \times 10^{-14}\ \mathrm{J}$ of X-ray radiation. If the energy needed to produce one positively charged ion is 100 eV, how many positive ions are produced in the cell? (b) How many ions are formed in the $3.0 \times 10^{-6}$-m-radius nucleus of that cell (the place where the genetic information is stored)?
$$\lambda_{\mathrm{f}}-\left(100 \times 10^{-9}\ \mathrm{m}\right)=\frac{\left(6.63 \times 10^{-34}\ \mathrm{J \cdot s}\right)\left(1-\cos 37^{\circ}\right)}{\left(9.1 \times 10^{-31}\ \mathrm{kg}\right)\left(3.0 \times 10^{8}\ \mathrm{m/s}\right)}$$
In a Compton effect scattering experiment, an incident photon's frequency is $2.0 \times 10^{19} \mathrm{Hz} ;$ the scattered photon's frequency is $1.4 \times 10^{19} \mathrm{Hz}$ . Determine the kinetic energy increase of the electron, in units of electron volts, when the photon is scattered from it.
An electron hit by an X-ray photon of energy $5.0 \times 10^{4} \mathrm{eV}$ gains $3.0 \times 10^{3} \mathrm{eV}$ of energy. Determine the wavelength of the scattered photon leaving the site of the collision.
A laser produces a short pulse of light whose energy equals 0.20 J. The wavelength of the light is 694 nm. (a) How many photons are produced? (b) Determine the total momentum of the emitted light pulse.
Levitation with light Light from a relatively powerful laser can lift and support glass spheres that are $20.0 \times 10^{-6} \mathrm{m}$ in diameter (about the size of a body cell). Explain how that is possible.
Light detection by human eye The dark-adapted eye can supposedly detect one photon of light of wavelength 500 nm. Suppose that 100 such photons enter the eye each second. Estimate the intensity of the light. Assume that the diameter of the eye's pupil is 0.50 cm.
Fireflies emit light of wavelengths from 510 nm to 670 nm. They are about 90% efficient at converting chemical energy into light (compared to about 10% for an incandescent lightbulb). Most living organisms, including fireflies, use adenosine triphosphate (ATP) as an energy molecule. Estimate the number of ATP molecules a firefly would use at 0.5 eV per molecule to produce one photon of 590-nm wavelength if all the energy came from ATP.
Light of wavelength 430 nm strikes a metal surface, releasing electrons with kinetic energy equal to 0.58 eV or less. Determine the metal's work function.
Sail in laser wind 1 A powerful 0.50-W laser emitting 670-nm photons shines on the sail of a tiny 0.10-g cart that can coast on a horizontal frictionless track. (a) Determine the force of the light on the sail. Assume that the light is totally reflected. (b) What time interval is needed for the cart's speed to increase from zero to 2.0 m/s?
Sail in laser wind 2 A powerful 0.50-W laser emitting 670-nm photons shines on a black sail of a tiny 0.10-g cart that can coast on a frictionless track. (a) Determine the force of the light on the sail. Assume that the light is totally absorbed by the sail. (b) What time interval is needed for the cart's speed to increase from zero to 2.0 m/s?
Comet tails Comets are relatively small extraterrestrial objects that move around the Sun in highly elliptical orbits. The comet's head is made primarily of ice with a small amount of dust. When the comet is near the Sun, gases and dust evaporated from the surface of the comet form a tail. Independent of the direction of motion of the comet, the tail always points away from the Sun. Use the photon model of light to explain why the comet's tail points away from the Sun.
Solar cell A 0.20-m × 0.20-m photovoltaic solar cell is irradiated with 800 $\mathrm{W/m^{2}}$ sunlight of wavelength 500 nm. (a) Determine the number of photons hitting the cell each second. (b) Determine the maximum possible electric current that could be produced. (c) Explain how a solar cell converts the energy of sunlight into electric energy.
The Sun is about 150 million km from Earth. The energy emitted by the Sun in all directions every second is about $4 \times 10^{26}\ \mathrm{J}$. Use this information to evaluate whether the value of the power per unit area provided in Problem 50 is reasonable.
Sirius radiation power Sirius, a star in the constellation of Canis Major, is the second brightest star of the northern sky (the brightest is the Sun). Its surface temperature is 9880 $\mathrm{K}$ and its radius is 1.75 times greater than the radius of the Sun. Estimate the energy that Sirius emits every second from its surface and compare this energy to the energy that the Sun emits. The radius of the Sun is about $7.0 \times 10^{8} \mathrm{m}$ and the energy emitted per second is about $3.9 \times 10^{26} \mathrm{W} .$
Owl night vision Owls can detect light of intensity $5 \times 10^{-13} \mathrm{W} / \mathrm{m}^{2} .$ Estimate the minimum number of photons an owl can detect. Indicate any assumptions you used in making the estimate.
Photosynthesis efficiency During photosynthesis in a certain plant, eight photons of 670 -nm wavelength can cause the following reaction: $6 \mathrm{CO}_{2}+6 \mathrm{H}_{2} \mathrm{O} \rightarrow \mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6}+6 \mathrm{O}_{2}$ During respiration, when the plant metabolizes sugar, the reverse reaction releases 4.9 eV of energy per $\mathrm{CO}_{2}$ molecule. Determine the ratio of the energy released (respiration) to the energy absorbed (photosynthesis), a measure of photosynthetic efficiency.
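Reading the problem as eight photons absorbed per $\mathrm{CO}_2$ molecule (a common convention for this exercise, though the wording is ambiguous), the efficiency works out as follows:

```python
E_photon = 1240 / 670    # eV per 670-nm photon (hc ~ 1240 eV*nm)
absorbed = 8 * E_photon  # assumed: 8 photons per CO2 molecule
released = 4.9           # eV per CO2 molecule on respiration
print(f"ratio ~ {released / absorbed:.2f}")  # ~0.33
```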
Suppose that light of intensity $1.0 \times 10^{-2}\ \mathrm{W/m^{2}}$ is made of waves rather than photons and that the waves strike a sodium surface with a work function of 2.2 eV. (a) Determine the power in watts incident on the area of a single sodium atom at the metal's surface (the radius of a sodium atom is approximately $1.7 \times 10^{-10}\ \mathrm{m}$). (b) How long will it take for an electron in the sodium to accumulate enough energy to escape the surface, assuming it collects all light incident on the atom?
Force of light on mirror A beam of light of wavelength 560 $\mathrm{nm}$ is reflected perpendicularly from a mirror. Determine the force that the light exerts on the mirror when $10^{20}$ photons hit the mirror each second. [Hint: Refer to the impulse momentum equation (Chapter 5 ). You may assume that the magnitude of the photons' momenta is unchanged by the collision, but their directions are reversed.]
Force of sunlight on Earth We wish to determine the net force on Earth caused by the absorption of light from the Sun. (a) Determine the net area of the surface of Earth exposed to sunlight (Earth's radius is $6.38 \times 10^{6} \mathrm{m} ) .$ (b) The solar radiation intensity is 1400 $\mathrm{J} / \mathrm{s} \cdot \mathrm{m}^{2}$ . Determine the momentum of photons hitting Earth's surface each second. (c) Use the impulse-momentum equation to determine the average force of this radiation on Earth.
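A sketch of the three steps (absorbed light delivers momentum $p=E/c$, so the force is $IA/c$):

```python
import math

c, I, R = 3.0e8, 1400.0, 6.38e6   # m/s, W/m^2, Earth radius in m
area = math.pi * R**2             # (a) cross-sectional disk facing the Sun
dp_dt = I * area / c              # (b) momentum absorbed per second
print(f"area = {area:.2e} m^2, F = {dp_dt:.2e} N")  # (c) F = dp/dt, ~6e8 N
```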
Levitating a person Suppose that we wish to support a 70-kg person by levitating the person on a beam of light. (a) If all of the photons striking the person's bottom surface are absorbed, what must be the power of the light beam, which is made of 500-nm-wavelength photons? (b) Estimate the person's temperature change in 1 s.
An electron that resides by itself in an open region of space is struck by a photon of light. Using nonrelativistic formulas, show that the electron cannot absorb the photon's energy and simultaneously absorb its momentum. To conserve both energy and momentum, the photon must be absorbed by the electron near another mass, which carries away some of the momentum but little of the energy.
Compton's original experiment involved scattering 0.0709 -nm X-rays off a graphite target (primarily composed of carbon atoms). He observed the scattered $X$ -rays at different angles using a spectrometer (a device that uses interference to determine the wavelength of the X-rays). What scattered the X-ray photons: the carbon nuclei or the electrons? To answer this question, determine the wavelength of the scattered photon when, after colliding with an electron or with a carbon atom, it travels at a $90^{\circ}$ angle relative to its initial momentum. The mass of an electron is $m_{e}=9.11 \times 10^{-31} \mathrm{kg}$ and the mass of a carbon nucleus is $m_{\mathrm{c}}=19.9 \times 10^{-27} \mathrm{kg} .$
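The comparison comes straight from the Compton shift $\Delta\lambda=\frac{h}{mc}(1-\cos\theta)$; at $90^{\circ}$ the electron gives a measurable shift while the carbon nucleus does not (a sketch of the arithmetic):

```python
import math

h, c = 6.63e-34, 3.0e8
lam0, theta = 0.0709e-9, math.radians(90)   # incident wavelength, scattering angle
for name, m in [("electron", 9.11e-31), ("carbon nucleus", 19.9e-27)]:
    dlam = h / (m * c) * (1 - math.cos(theta))
    print(f"{name}: shift = {dlam*1e12:.5f} pm, "
          f"lambda' = {(lam0 + dlam)*1e9:.5f} nm")
```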
What is the number of antenna molecules that can absorb light in a photosynthetic unit?
(a) 1
(b) About 10
(c) Over 100
(d) Over 10,000
(e) About $10^{8}$
Suppose an antenna molecule absorbs a 430 -nm photon and that this energy is transferred directly to the acceptor molecule. Which answer below is closest to the energy that the photoelectron brings to the electron transport chain?
$$\text{(a) } 1\ \mathrm{eV} \quad \text{(b) } 2\ \mathrm{eV} \quad \text{(c) } 3\ \mathrm{eV} \quad \text{(d) } 4\ \mathrm{eV} \quad \text{(e) } 5\ \mathrm{eV}$$
Suppose that the excited energy of one antenna molecule is transferred 100 times between neighboring antenna molecules. Which answer below is closest to the maximum time interval for the transfer between neighboring antenna molecules?
$$\text{(a) } 10^{-3}\ \mathrm{s} \quad \text{(b) } 10^{-6}\ \mathrm{s} \quad \text{(c) } 10^{-8}\ \mathrm{s} \quad \text{(d) } 10^{-10}\ \mathrm{s} \quad \text{(e) } 10^{-14}\ \mathrm{s}$$
Suppose that neighboring antenna molecules are separated by about $10^{-10}\ \mathrm{m}$ and that the excitation energy is transferred 100 times between neighboring antenna molecules before the photoelectric transfer of an electron to the electron transport chain. Which answer below is closest to the minimum speed of the excitation energy through the photosynthetic unit?
(a) $1\ \mathrm{m/s}$   (b) $10^{2}\ \mathrm{m/s}$   (c) $10^{4}\ \mathrm{m/s}$   (d) $10^{-2}\ \mathrm{m/s}$   (e) $10^{-4}\ \mathrm{m/s}$
The high-energy electron that transfers into an electron transport chain from a photosynthetic unit
(a) comes from the antenna molecule that absorbed the photon.
(b) comes from the acceptor molecule, which absorbed the photon.
(c) comes from the acceptor molecule that is excited by a nearby antenna molecule.
(d) is produced by the oxidation of a water molecule.
(e) is produced in the electron transport chain as other molecules react.
The wavelength of maximum light emission from the body is closest to:
(a) 500 nm   (b) 700 nm   (c) 1200 nm   (d) 4500 nm   (e) 9500 nm
During one day, the total radiative energy loss by a clothed person having a $2\ \mathrm{m}^{2}$ surface area in a $20\,^{\circ}\mathrm{C}$ room, in kcal ($1\ \mathrm{kcal}=4180\ \mathrm{J}$), is closest to:
(a) 0.5 kcal   (b) 100 kcal   (c) 400 kcal   (d) 2000 kcal   (e) 3000 kcal
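A hedged sketch of the estimate behind this question using the Stefan-Boltzmann law; the clothing surface temperature (about 28 °C) and an emissivity close to 1 are assumptions, and a different choice shifts the answer:

```python
# Net radiated power P = e*sigma*A*(T_s**4 - T_room**4), converted to kcal/day.
sigma = 5.67e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)
A = 2.0                         # surface area, m^2
e = 1.0                         # emissivity (assumed close to 1)
T_s, T_room = 301.0, 293.0      # assumed clothing surface 28 C; room 20 C

P = e * sigma * A * (T_s**4 - T_room**4)     # net radiated power, W
Q_day = P * 86400 / 4180                     # energy loss per day, kcal
print(f"P = {P:.0f} W, Q = {Q_day:.0f} kcal/day")   # ~95 W, ~2000 kcal
```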
Photographs taken with a regular camera and with an infrared camera are shown in Figure P26.68. The man's arm is covered with a black plastic bag. Why is his arm visible in the infrared picture?
(a) Light does not pass through black plastic, but infrared radiation does.
(b) His arm is very warm under the black plastic bag and emits much more infrared radiation.
(c) The bag temperature is similar to his arm temperature.
(d) The black bag absorbs light and becomes warm and is a good thermal emitter.
(e) None of the above
The man's glasses appear clear with the regular camera photo and black with the infrared camera photo in Figure P26.68. Why?
(a) Light does not pass through glass, but infrared radiation does.
(b) Light passes through glass, but infrared radiation does not.
(c) The lenses of the glasses are cool compared to the man's face, and thus they emit little infrared radiation.
(d) a and c
(e) b and c
Which value is closest to the ratio of the radiative power emitted by a surface at $310\ \mathrm{K}$ to that emitted by the same surface at $300\ \mathrm{K}$?
(a) 0.86   (b) 0.97   (c) 1.03   (d) 1.07   (e) 1.14
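Because emitted power scales as $T^4$ (Stefan-Boltzmann), the ratio follows directly:

```python
# P ~ T^4, so the ratio is simply (310/300)^4.
ratio = (310 / 300) ** 4
print(f"P(310 K) / P(300 K) = {ratio:.2f}")   # about 1.14
```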
Tilted Wave Fizeau Interferometer for flexible and robust asphere and freeform testing
Christian Schober*, Rolf Beisswanger, Antonia Gronle, Christof Pruss, Wolfgang Osten
Institute of Applied Optics (ITO), University of Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany
Christian Schober ([email protected])
Revised: 25 August 2022
Accepted article preview online: 01 September 2022
Tilted Wave Interferometry (TWI) is a measurement technique for fast and flexible interferometric testing of aspheres and freeform surfaces. The first version of the tilted wave principle was implemented in a Twyman-Green type setup with a separate reference arm, which is intrinsically susceptible to environmentally induced phase disturbances. In this contribution we present the TWI in a new, robust common-path (Fizeau) configuration. The implementation of the Tilted Wave Fizeau Interferometer requires a new approach in illumination, calibration and evaluation. Measurements of two aspheres and a freeform surface show the flexibility and the increased stability in both raw phase data and surface measurements, which improves the repeatability by up to a factor of three. The novel configuration significantly relaxes the tolerances of the imaging optics used in the interferometer. We demonstrate this using simulations of calibration measurements, where we see an improvement of one order of magnitude compared to the classical Twyman-Green TWI approach, as well as the capability to compensate higher-order error contributions of the optics used.
Keywords: Freeform measurement, Asphere measurement, Common-path interferometry, Tilted wave interferometer, Fizeau interferometer
[1] Rolland, J. P. et al. Freeform optics for imaging. Optica 8, 161-176 (2021). doi: 10.1364/OPTICA.413762
[2] González-Acuña, R. G. & Chaparro-Romo, H. A. General formula for bi-aspheric singlet lens design free of spherical aberration. Applied Optics 57, 9341-9345 (2018). doi: 10.1364/AO.57.009341
[3] Braunecker, B., Hentschel, R. & Tiziani, H. J. Advanced optics using aspherical elements. Bellingham: SPIE Press (2008).
[4] Beutler, A. Metrology for the production process of aspheric lenses. Advanced Optical Technologies 5, 211-228 (2016).
[5] Henselmans, R. et al. The NANOMEFOS non-contact measurement machine for freeform optics. Precision Engineering 35, 607-624 (2011). doi: 10.1016/j.precisioneng.2011.04.004
[6] Berger, G. & Petter, J. Non-contact metrology of aspheric surfaces based on MWLI technology. In Optifab 2013, vol. 8884, 170-177. International Society for Optics and Photonics (SPIE, 2013). doi: 10.1117/12.2029238
[7] Wendel, M. Precision measurement of large optics up to 850 mm in diameter by use of a scanning point multi-wavelength interferometer. In Ninth European Seminar on Precision Optics Manufacturing, vol. 12298, 95-100. International Society for Optics and Photonics (SPIE, 2022). doi: 10.1117/12.2624505
[8] Murphy, P. et al. Stitching interferometry: A flexible solution for surface metrology. Optics and Photonics News 14, 38-43 (2003).
[9] Sohn, A. et al. High resolution, non-contact surface metrology for freeform optics in digital immersive displays. In Optics and Photonics for Advanced Dimensional Metrology II, vol. 12137, 137-148. International Society for Optics and Photonics (SPIE, 2022). doi: 10.1117/12.2624964
[10] Küchel, M. F. Absolute measurement of rotationally symmetrical aspheric surfaces. In Frontiers in Optics (Optica Publishing Group, 2006). doi: 10.1364/OFT.2006.OFTuB5
[11] Küchel, M. F. Interferometric measurement of rotationally symmetric aspheric surfaces. In Optical Measurement Systems for Industrial Inspection VI, vol. 7389, 389-422. International Society for Optics and Photonics (SPIE, 2009). doi: 10.1117/12.830655
[12] Dresel, T. et al. Advances in flexible precision aspheric form measurement using axially scanned interferometry. In Nelson, J. D. & Unger, B. L. (eds.) Optifab 2021, vol. 11889, 1-8. International Society for Optics and Photonics (SPIE, 2021). doi: 10.1117/12.2602462
[13] Müller, A. F. et al. Multiple aperture shear-interferometry (MArS): a solution to the aperture problem for the form measurement of aspheric surfaces. Optics Express 28, 34677-34691 (2020). doi: 10.1364/OE.408979
[14] Wang, Y.-C., Shyu, L.-H. & Chang, C.-P. The comparison of environmental effects on michelson and fabry-perot interferometers utilized for the displacement measurement. Sensors 10, 2577-2586 (2010). doi: 10.3390/s100402577
[15] Wyant, J. C. & Bennett, V. P. Using computer generated holograms to test aspheric wavefronts. Applied Optics 11, 2833-2839 (1972). doi: 10.1364/AO.11.002833
[16] Pruss, C. et al. Computer-generated holograms in interferometric testing. Optical Engineering 43, 2534-2586 (2004). doi: 10.1117/1.1804544
[17] Yatagai, T. & Saito, H. Interferometric testing with computer-generated holograms: aberration balancing method and error analysis. Applied Optics 17, 558-565 (1978). doi: 10.1364/AO.17.000558
[18] Dörband, B. & Tiziani, H. J. Testing aspheric surfaces with computer-generated holograms: analysis of adjustment and shape errors. Applied Optics 24, 2604-2611 (1985). doi: 10.1364/AO.24.002604
[19] Asfour, J.-M. & Poleshchuk, A. G. Asphere testing with a fizeau interferometer based on a combined computer-generated hologram. Journal of the Optical Society of America A 23, 172-178 (2006). doi: 10.1364/JOSAA.23.000172
[20] Greivenkamp, J. Sub-Nyquist interferometry. Applied Optics 26, (1987).
[21] Garbusi, E., Pruss, C. & Osten, W. Interferometer for precise and flexible asphere testing. Optics Letters 33, 2973-2975 (2008). doi: 10.1364/OL.33.002973
[22] Osten, W. et al. Verfahren und Messvorrichtung zur Vermessung einer optisch glatten Oberfläche, German Patent DE 102006057606 A1, (2006).
[23] Pruss, C. et al. Measuring aspheres quickly: tilted wave interferometry. Optical Engineering 56, 111713 (2017). doi: 10.1117/1.OE.56.11.111713
[24] Osten, W., Pruss, C. & Schindler, J. Tilted wave interferometer measures aspheres and freeform optics. SPIE Professional (2016).
[25] Beisswanger, R. et al. Tilted wave interferometer in common path configuration: challenges and realization. In Optical Measurement Systems for Industrial Inspection XI, vol. 11056, 395-404. International Society for Optics and Photonics (SPIE, 2019). doi: 10.1117/12.2526175
[26] Fortmeier, I. Zur Optimierung von Auswerteverfahren für Tilted-Wave Interferometer. PhD thesis, Universität Stuttgart (2016).
[27] Baer, G. Ein Beitrag zur Kalibrierung von Nichtnull-Interferometern zur Vermessung von Asphären und Freiformflächen. PhD thesis, Universität Stuttgart (2016).
[28] Schindler, J. Methoden zur selbstkalibrierenden Vermessung von Asphären und Freiformen in der Tilted-Wave-Interferometrie. PhD thesis, Universität Stuttgart (2020).
[29] Baer, G. et al. Calibration of a non-null test interferometer for the measurement of aspheres and free-form surfaces. Optics Express 22, 31200-31211 (2014). doi: 10.1364/OE.22.031200
[30] Schindler, J., Pruss, C. & Osten, W. Simultaneous removal of nonrotationally symmetric errors in tilted wave interferometry. Optical Engineering 58, 074105 (2019). doi: 10.1117/1.OE.58.7.074105
[31] Fortmeier, I. et al. Development of a metrological reference system for the form measurement of aspheres and freeform surfaces based on a tilted-wave interferometer. Measurement Science and Technology 33, 045013 (2022). doi: 10.1088/1361-6501/ac47bd
[32] Baer, G., Pruss, C. & Osten, W. Verkippte Objektwellen nutzendes und ein Fizeau-Interferometerobjektiv aufweisendes Interferometer, German Patent DE102015222366, (2015).
[33] Li, J. et al. Common-path interferometry with tilt carrier for surface measurement of complex optics. Applied Optics 58, 1991-1997 (2019). doi: 10.1364/AO.58.001991
[34] Lowman, A. E. & Greivenkamp, J. E. Interferometer-induced wavefront errors when testing in a nonnull configuration. In Interferometry VI: Applications, vol. 2004, 173-181. International Society for Optics and Photonics (SPIE, 1994). doi: 10.1117/12.172590
[35] Lowman, A. E. & Greivenkamp, J. E. Modeling an interferometer for non-null testing of aspheres. In Optical Manufacturing and Testing, vol. 2536, 139-147. International Society for Optics and Photonics (SPIE, 1995). doi: 10.1117/12.218416
[36] Gappinger, R. O. & Greivenkamp, J. E. Iterative reverse optimization procedure for calibration of aspheric wave-front measurements on a nonnull interferometer. Applied Optics 43, 5152-5161 (2004). doi: 10.1364/AO.43.005152
[37] Fortmeier, I. & Schulz, M. Comparison of form measurement results for optical aspheres and freeform surfaces. Measurement Science and Technology 33, 045010 (2022). doi: 10.1088/1361-6501/ac47bb
[38] Gronle, A., Pruss, C. & Herkommer, A. Misalignment of spheres, aspheres and freeforms in optical measurement systems. Optics Express 30, 797-814 (2022). doi: 10.1364/OE.443420
[39] Harsch, A. et al. Monte Carlo simulations: a tool to assess complex measurement systems. In Sixth European Seminar on Precision Optics Manufacturing, vol. 11171, 66-72. International Society for Optics and Photonics (SPIE, 2019). doi: 10.1117/12.2526799
[40] Baer, G. et al. Measurement of aspheres and free-form surfaces in a non-null test interferometer: reconstruction of high-frequency errors. In Optical Measurement Systems for Industrial Inspection VIII, vol. 8788, 337-343. International Society for Optics and Photonics (SPIE, 2013). doi: 10.1117/12.2021518
[41] Malacara, D. Optical Shop Testing (Wiley Interscience, Hoboken, 2007), 3rd edn.
Freeform measurement: Flexible and robust measurement with common-path interferometry
Complex shaped optical surfaces like aspheres and freeforms are key components in modern optical designs, and the form measurement of these surfaces is a crucial part in the realization of modern systems. One technique for fast and flexible measurement is the Tilted Wave Interferometer. Researchers from the Institute of Applied Optics (ITO) at the University of Stuttgart, Germany, present the first lens-array-based realization of a Tilted Wave Fizeau Interferometer. The system involves a novel illumination scheme and a sophisticated calibration and evaluation process. Through the common-path principle, it is possible to improve the repeatability of the measurements by up to a factor of three compared to the classical TWI. The novel design also relaxes the tolerances on the lenses used in the optical design of the instrument.
In order to push the performance of optical systems, aspheric and freeform surfaces, with their significantly greater design flexibility, are used in current state-of-the-art optical design for correcting aberrations1-3. The production of these complex shaped surfaces requires measuring the surface deviation from the nominal design4. There are many optical measurement methods, such as pointwise methods5-7, stitching methods, e.g. using Fizeau interferometry8 or coherence scanning interferometric microstitching (CSIM)9, and also zonal scanning methods10-12, but the fastest methods are full-field interferometric methods. Other solutions do not measure topography but determine the surface gradient, such as shearing-based measurement systems13. Direct, fast topography measurements with high precision are possible with full-field interferometric methods. They employ a reference and a measurement wavefront, which interfere on a camera chip; from the phase difference between the two wavefronts, the surface deviation can be calculated. Due to the different beam paths, vibrations and temporal environmental effects like temperature or air fluctuations have a strong influence on the uncertainty of the measurements. These effects are inherently reduced in common-path interferometers14 such as the widely used Fizeau-type interferometer.
Compensation optics such as computer-generated holograms (CGHs) are state of the art for the measurement of non-spherical optics15-19, but they require a new compensation element for each new type of surface under test (SUT). When measuring surfaces with large deviations from the reference shape, interference fringes with high fringe densities and, subsequently, retrace errors occur. If the deviation becomes too high, vignetting of the measurement light produces unmeasurable areas of the surface under test. This is a fundamental limit of non-null testing methods such as the early sub-Nyquist interferometry20 or current non-null testing approaches using high-resolution cameras. All these effects limit the classical Fizeau interferometer when complex shaped surfaces are measured. The Tilted Wave Interferometer (TWI) was invented at the University of Stuttgart21-28 to overcome these issues. It uses a grid array of light sources for off-axis illumination of the SUT. The different sources and the common reference wavefront generate multiple sub-interferograms on the camera chip. The tilt angles of the tilted wavefronts locally compensate the effect of the gradient of the SUT. Therefore, the effects of vignetting and high fringe densities can be overcome. For the treatment of retrace errors, a sophisticated volume calibration and evaluation method is used to numerically subtract systematic errors29,30. One aspect of current investigation is the traceability of this flexible measurement method31.
The combination of the TWI principle with the Fizeau common-path interferometer is not straightforward32. Every off-axis illumination source generates a reference wave reflected from the Fizeau surface. This leads to multiple overlapping wavefronts on the detector, which essentially makes the evaluation of the interferograms impossible. Until now, there has been no realization of a Fizeau TWI system with multiple off-axis illumination wavefronts generated by an array of light sources. A fiber-switch-based Fizeau-type interferometer with switchable off-axis illumination exists33, but no analysis of the influence of the multiple interference effects between sub-interferograms has been reported yet.
The realization of the new common-path Tilted Wave Fizeau Interferometer, which overcomes the reference wave problem, is the main contribution of this paper. It combines the robust, well-established Fizeau interferometer technique with the greatly enhanced flexibility and short measurement time of tilted wave interferometry. The result is a new tool that addresses the urgent metrology needs of advanced optics fabrication.
A glimpse of the idea has been presented by Baer et al.32 and Beisswanger et al.25. Here, we carry the early idea to a thorough discussion of the new approach. This includes a detailed description of the multiple-beam interference problem in the next section, followed by the mathematical description of the new calibration algorithm. In the results section, a benchmark and comparison to the classical Twyman-Green TWI regarding optics tolerances and defects using numerical simulations is presented, and experimental results for the measurement of a freeform and two aspheric surfaces with a comparison in phase stability and repeatability are shown. An overview of the achieved results and a discussion conclude the paper.
Tilted Wave Fizeau Interferometer Principle
Optical setup of the TWI in Fizeau-configuration
Fig. 1 compares the double-path TWI (top) to the novel TWI in common-path Fizeau configuration (bottom). In both setups the beam of a laser is expanded by a telescope (expander) to illuminate a point source array (PSA). The PSA is a monolithic, passive optical component that contains diffractive microlenses on its front side that focus the incoming light onto a pinhole array on the backside. Thus it converts the incoming plane wave into a grid of spherical wavefronts. These are collimated at the collimation lens (CL).
Fig. 1 Top: Classical TWI in Twyman-Green configuration; the reference wave is coupled out after the laser with a beamsplitter and is directed to the camera. Bottom: new TWI in Fizeau configuration. The reference surface generating the common-path reference wave is highlighted.
In the novel approach, depicted in the lower part of Fig. 1, the subsequent Fizeau objective, often referred to as a transmission sphere, has a semitransparent last surface (Fizeau surface), which reflects a part (typically about 4%) of the incoming wavefront. This reflected wavefront serves as the reference beam. The other part of the light propagates on and is reflected back from the SUT, thus carrying the desired shape information of the SUT.
The light reflected from the SUT and the reference beam propagate via the beamsplitter (BS) and the camera objective (CO) to the chip of the camera, where their interference pattern is recorded. To keep the fringe density of the interferogram below the Nyquist criterion, an aperture is placed at the front focal plane of the camera objective. In the case of a strong non-null test, this limits the lateral extension of the interferogram that is produced by one single wavefront from the PSA on the chip. In the following, we refer to such a sub-interferogram as a patch.
Across the SUT there are multiple patches generated by the different sources. To measure the whole SUT, the patches must cover the whole clear aperture (CA) of the SUT. Therefore, in practice an overlap of the interferograms (patches) from different point sources occurs at least for some of the patches. This conflicts with the interpretability of the interferograms, since overlapping interferograms cannot be evaluated.
In order to overcome this, every second row and every second column of the array of point sources is blocked by a movable mask array (MA), leading to measurement data without overlap, but with the requirement that the MA be shifted subsequently into four different positions. The four positions are a direct consequence of the eight neighbors that every point source of the PSA has in a rectangular arrangement of the PSA. With four measurements in four different mask positions, the whole specimen can be measured.
In Fizeau configuration, each point source generates a reference wave. In order for the setup to work, all but one reference wave must be blocked by the interferometer aperture. The role of the interferometer aperture is crucial in this context. In order to understand its design let us consider an example: Let there be a Cartesian, equally spaced PSA with a distance $ x $ between point sources. The light of each point source produces a new reference wave at the Fizeau surface. The plane of the PSA is conjugate to the interferometer aperture plane (IAP). The point sources are imaged to a point array in the aperture plane with a point spacing of $ X' = \beta' \cdot x $ , with $ \beta' $ being the imaging scale, which is typically −1 for our case. The interferometer aperture size must be less than $ 2\cdot X' $ in order to filter out all reference wavefronts but one.
The other functionality of the interferometer aperture is to reduce the angle of incidence between reference and object wave in order to keep the fringe density on the detector below the Nyquist criterion of 2 pixels per interference fringe, or any other value that the detector can resolve. At the same time, it is advantageous to minimize the number of patches and thus maximize the size of the interferometer aperture. The position of a point in this plane corresponds to the propagation direction of a plane wave in the detector plane, hence this plane is often called the Fourier plane of the imaging optics of the interferometer. The aperture plane is located in the front focal plane of the imaging optics. The Nyquist criterion defines the maximum slope difference between reference and object wave. Therefore, it can be translated into a geometrical distance between the reference point and any part of the object wavefront in the interferometer aperture plane.
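To make the two aperture constraints concrete, here is a minimal sketch; the source spacing is the value quoted for the demonstrator later in the paper, while the pixel pitch and camera-objective focal length are assumptions for illustration only:

```python
# Two constraints on the interferometer aperture in the Fourier plane (IAP):
# (1) pass only one reference source, (2) keep fringes below the Nyquist limit.
x = 2.5e-3      # point-source spacing in the PSA, m (from the setup description)
beta = 1.0      # magnitude of the imaging scale PSA -> aperture plane (|beta'| = 1)
lam = 638e-9    # wavelength, m
pixel = 7.4e-6  # assumed camera pixel pitch, m
f_co = 0.1      # assumed focal length of the camera objective, m

# (1) Reference filtering: aperture smaller than twice the imaged source spacing.
X_img = beta * x
print(f"aperture must be smaller than 2*X' = {2 * X_img * 1e3:.1f} mm")

# (2) Nyquist: at least 2 pixels per fringe limits the slope difference between
# reference and object wave, i.e. a radial distance in the aperture plane.
max_angle = lam / (2 * pixel)           # rad (small-angle approximation)
r_nyq = f_co * max_angle                # corresponding offset in the IAP, m
print(f"Nyquist-limited radial distance in the IAP = {r_nyq * 1e3:.2f} mm")
```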
The object wavefronts from aspheric or freeform surfaces deviate strongly from a plane wave, so in the IAP the object wavefront originating from one point source illuminates an extended area. If this area is smaller than the interferometer aperture, the whole object wavefront reaches the detector and the SUT can therefore be measured in one patch. Measuring strong aspheres, however, results in strongly distorted object wavefronts that do not pass the interferometer aperture completely. The higher gradients are cut off according to the Nyquist limit, and the interferogram patch does not cover the whole SUT. Light from different directions is needed to compensate the local gradients (i.e., other sources of the PSA with different propagation directions). Illuminating the SUT under a different angle shifts the reflected object wavefront laterally in the IAP. Therefore, another part of the object wavefront passes the aperture.
The mask array also plays an important role in the generation of the reference wave. If the IAP were designed such that only the central source of the array generated the reference wave, this wave would be obscured when the mask array is moved to the other three positions. If the interferometer aperture in the IAP were designed such that the central source was always on in every mask position, there would be overlap of the interferograms in different mask positions, which would lead to undesired multiple-beam interference (see Fig. 2a).
Fig. 2 Simulated interferograms of four mask positions for different point source array realizations. The point source arrays are shown in the inset in the upper left corner. The red interferogram patches originate from the red-marked point sources; the yellow interferograms originate from the yellow-marked point sources, which also provide the Fizeau-reflex reference wave. Gray point sources are switched off. a Realization with an "always on" reference source. In three of the four mask positions there is a disturbing overlap between the interferogram of the central source and the interferograms of the other sources. For better visualization, the borders of the interferograms are marked with a red polygon shape. b With the novel point source array design with four shifted reference sources, there is no overlap between the interferograms inside the active mask position.
Therefore, a novel design of the point source array is used for the Fizeau system: instead of a central source, there are four sources equally spaced around the optical axis. A scheme of the novel point source array design is shown in Fig. 3. With this new configuration, exactly one of these four sources generates the reference wavefront in the matched mask array position. This ensures that there is no patch overlap in any single mask position and that all sub-interferograms are interpretable. Simulated interferograms generated with the newly designed point source array are shown in Fig. 2b.
Fig. 3 a The new point source array design for the Tilted Wave Fizeau Interferometer. The new design has no central source, there are equally spaced point sources around the optical axis. The innermost four point sources (colored in yellow) are used for Fizeau reference generation. b The four different positions of the movable mask array. In each mask position there is only one of the central point sources active.
Calibration of the TWI in Fizeau-configuration
Since the described setup is a non-null interferometer, the measurement signal on the camera not only shows the desired shape deviation of the surface under test from its ideal shape, but also the asphericity of the sample and systematic, sample-dependent errors of the setup, the so-called retrace errors34. The calibration of the retrace errors is a non-trivial task that defines the uncertainty of any non-null testing method and has consequently been approached by many authors. Part of the systematic errors can be eliminated if the sample shows symmetries, e.g. by exploiting the rotational symmetry of aspheres in rotation tests to determine the non-rotational part of the SUT error. For the determination of the complete systematic error, Greivenkamp et al.35,36 have suggested using a raytracing model of the test setup that uses accessible setup characterization data (individual components' surface errors, position and orientation deviations) and an optimization to find a model that best explains the observed data of calibration measurements. Our approach is based on a black box model description of the wavefront aberrations of the setup. The idea of this description was developed for the classical Twyman-Green type TWI and is described in29. In the following, we summarize the principal idea and highlight the fundamental changes that were necessary to adapt the model to the new Fizeau-type TWI.
The black box model describes the optical path lengths (OPLs) of all possible paths through the interferometer, including the reference wave. It combines a polynomial description of the illumination part (Q), the imaging part (P) and, in addition to29, also the reference part (R) with free-space raytracing in the test space of the interferometer. The light paths for the three black box models are shown in Fig. 4.
Fig. 4 The three black box representations used to model the optical path lengths in the interferometer. a Q-black box, describing the illumination of the SUT. b P-black box, describing the OPLs on the imaging path. c Reference path, described by the R-black box.
The Q-black box models the optical path lengths from a source point (M, N) to a point (X, Y) on a plane $ E_{\rm Q}(X,Y) $ near the measurement volume. Each single point source of the array produces a wavefront that can be expanded into Zernike polynomials. The resulting coefficients form a one-dimensional vector, which varies with the position $ (M,N) $ of the light source in the array. For our optical system, this two-dimensional dependency on the field coordinate (M, N) is smooth and can be modelled in good approximation as a smooth two-dimensional function for each coefficient of the Zernike polynomial vector, which allows describing the field dependency of the individual Zernike terms as another Zernike polynomial. The optical path lengths of rays through the illumination part of the system can thus be described by Eq. 1:
$$ W_{\rm Q}(M,N,X,Y)=\sum\limits_{i,j} Q_{ij}\, Z_{j}(M,N)\, Z_{i}(X,Y) \tag{1} $$
where $ Q_{ij} $ is a two-dimensional array of Zernike polynomial coefficients, and $ Z_{j}(M,N) $ and $ Z_{i}(X,Y) $ are Zernike polynomials. In the same way, the equation for the P-black box, describing the optical path lengths from a plane in the measurement volume $ E_{\rm P}(x,y) $ to the plane representing the camera chip $ F_{\rm P}(m,n) $, is modeled:
$$ W_{\rm P}(m,n,x,y)=\sum\limits_{k,l} P_{kl}\, Z_{l}(m,n)\, Z_{k}(x,y) \tag{2} $$
A major difference to the classical TWI description is the third black box, the R-black box, which describes the reference wavefronts. Since there are four reference sources in the new Fizeau configuration, there have to be four models for the reference wave, one for each position of the mask array. Of the four existing reference waves, only one single source is active per mask position. To describe the R-black boxes, only a one-dimensional set of Zernike polynomial coefficients $ r_{a,i} $ is needed, describing the path lengths from a single source via the plane $ E_{\rm Q}(X,Y) $ to the plane $ F_{\rm P}(m,n) $ representing the camera. This leads to a function for the reference part of the OPL depending on the position $ a $ of the mask array:
$$ W_{\rm R}(m,n,a)= \begin{cases} \sum_{i} r_{1,i}\, Z_{i}(m,n) & \text{if } a=1 \\ \sum_{i} r_{2,i}\, Z_{i}(m,n) & \text{if } a=2 \\ \sum_{i} r_{3,i}\, Z_{i}(m,n) & \text{if } a=3 \\ \sum_{i} r_{4,i}\, Z_{i}(m,n) & \text{if } a=4 \end{cases} \tag{3} $$
The optical path difference (OPD) of a ray starting on the source grid coordinate M, N and ending on the camera pixel coordinate $ m $ , $ n $ is mathematically described by the equation:
$$ F_{\rm OPD}(M,N,m,n,D,p) = W_{\rm Q}(M,N,X,Y) + W_{\rm P}(m,n,x,y) + W_{\rm geo}(D,p) - W_{\rm R}(m,n,a) \tag{4} $$
where $ W_{\rm geo} $ denotes the geometrical path length from the plane $ E_{\rm Q} $, via the reflection at the SUT, to the plane $ E_{\rm P} $, which depends on the geometry D and the position $ p $ of the SUT. In a measurement, a ray S starting from (M, N) and arriving at a camera pixel with coordinates (m, n) has the optical path length $ b $, which depends on the black box polynomial coefficients Q, P and $ R_{a} $ ($ a = 1,2,3,4 $), the geometry $ D $ and the position $ p $ of the SUT. All path lengths of all rays can be combined into the vector $ \vec{b} $:
$$ \vec{b}=F_{\rm OPD}(Q,P,R_{a},S,D,p) \tag{5} $$
During the calibration process, real OPDs $ \vec{b}_{\rm real} $ are generated in the real interferometer by measuring a well-known reference sphere. To describe the difference between reality and the mathematical model, we introduce the interferometric calibration parameter $ \vec{x}(Q,P,R,p,O) $. Here, $ O $ denotes the noise, generated e.g. by the camera. Now, $ \vec{b}_{\rm real} $ can be represented by
$$ \vec{b}_{\rm real} = F_{\rm OPD}(\vec{x},S,D) \tag{6} $$
The calibration parameter $ \vec{x} $ allows us to build up a mathematical model of the differences between the ideal interferometer design and the real interferometer, which we call the "real interferometer errors", and to correct for them mathematically. Solving Eq. 6 for $ \vec{x} $ by inverting $ F_{\rm OPD} $ leads to
$$ \vec{x}=F_{\rm OPD}^{-1}(\vec{b}_{\rm real},S,D) \tag{7} $$
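As a minimal numeric sketch of how Eqs. 1-5 can be evaluated, each black box reduces to a coefficient matrix sandwiched between two Zernike basis evaluations; the truncated basis, the random placeholder coefficients, and the omission of the raytraced term $ W_{\rm geo} $ are all simplifications for illustration, not the calibrated model:

```python
import numpy as np

# First few Zernike polynomials on the unit disk, used as the basis Z_i.
def zernike_basis(u, v):
    r2 = u**2 + v**2
    return np.array([
        np.ones_like(u),   # piston
        u,                 # tilt x
        v,                 # tilt y
        2*r2 - 1,          # defocus
        u**2 - v**2,       # astigmatism 0 deg
        2*u*v,             # astigmatism 45 deg
    ])

rng = np.random.default_rng(0)
nz = 6
Q = rng.normal(scale=1e-2, size=(nz, nz))   # placeholder Q_ij coefficients
P = rng.normal(scale=1e-2, size=(nz, nz))   # placeholder P_kl coefficients
r = rng.normal(scale=1e-2, size=(4, nz))    # placeholder r_{a,i} coefficients

def W_Q(M, N, X, Y):   # Eq. 1: sum_ij Q_ij Z_j(M,N) Z_i(X,Y)
    return zernike_basis(X, Y) @ Q @ zernike_basis(M, N)

def W_P(m, n, x, y):   # Eq. 2
    return zernike_basis(x, y) @ P @ zernike_basis(m, n)

def W_R(m, n, a):      # Eq. 3: one reference polynomial per mask position a
    return r[a - 1] @ zernike_basis(m, n)

# Eq. 4 without the raytraced geometric term W_geo (which needs the SUT model):
opd = W_Q(0.1, -0.2, 0.3, 0.0) + W_P(0.05, 0.02, 0.3, 0.0) - W_R(0.05, 0.02, 1)
print(f"partial OPD contribution: {opd:.4e}")
```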
To demonstrate the measurement capabilities of the new common-path system, we show virtual and real measurements of four specimens. Two are rotationally symmetric aspheres with vertex radii of 20.2 mm and 34.322 mm, respectively. The other two are freeform surfaces, which are not rotationally symmetric. Fig. 5 shows their deviations from their respective best-fit spheres. The mathematical descriptions can be found in37 for asphere 1 and freeform 2 (there referred to as specimen 1 and specimen 2, respectively) and in38 for asphere 2 and freeform 1 (there referred to as asphere 1 and freeform 1).
Fig. 5 Deviations of the measured specimens from their respective best-fit spheres. a Asphere 1 with a best-fit sphere of $ R=22.71 $ mm, b asphere 2 with a best-fit sphere of $ R=43.45 $ mm, c freeform 1 with a best-fit sphere of $ R=45.76 $ mm and d freeform 2 with a best-fit sphere of $ R=40.9 $ mm.
Virtual Experiments
For the evaluation of the influence of lens errors on the new Fizeau system, we conducted virtual experiments. The benefit of these experiments is that the true value of the measured specimen is known. In virtual experiments, simulated phase data are used instead of real measured phase data. These data are provided by a second model of the interferometer, the "virtual real" model, into which disturbances like misalignments of the optical elements or lens aberrations are introduced, and from which a disturbed black box model is calculated. The OPLs are calculated at the different calibration positions with this "virtual real" model, and a nominal black box system is calibrated based on these OPLs as input data. For the simulation of a measurement the same idea is used: the virtually generated input data are processed with the standard evaluation algorithms. The workflow is visualized in Fig. 6. A detailed description of the virtual experiment system is given in39. As a metric for our evaluation, we used the reconstruction error for a virtual measurement of freeform 1. To investigate the performance of the Fizeau configuration, especially in the presence of high-frequency errors, we extended the polynomial degree of the black boxes for the "virtual real" model, so that the degree is higher than in the model used for calibration and evaluation. As disturbance, we added an off-axis Gaussian bump on the collimation lens (CL) of the interferometer. The amplitude of the bump was $ 0.7\lambda $.
Fig. 6 Explanation of the simulation workflow: a "virtual real" high-order polynomial model of the TWI is used to generate input data for the calibration of a nominal model. To investigate the calibration errors, virtual measurements of a surface are evaluated with these models and compared.
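The core effect probed here, namely that a disturbance whose polynomial order exceeds the calibrated model order leaves a residual reconstruction error, can be illustrated with a deliberately simple one-dimensional toy model (this is not the TWI code, only a sketch of the principle):

```python
# A "real" system error of high polynomial order is fitted ("calibrated") with a
# lower-order model; the residual plays the role of the reconstruction error.
import numpy as np

x = np.linspace(-1, 1, 400)
model_order = 10                       # order of the calibrated model

for defect_order in (6, 12, 18):       # order of the injected disturbance
    defect = np.polynomial.legendre.Legendre.basis(defect_order)(x)
    fit = np.polynomial.legendre.Legendre.fit(x, defect, model_order)
    residual = defect - fit(x)
    rms = np.sqrt(np.mean(residual**2))
    print(f"defect order {defect_order:2d}: residual RMS = {rms:.3f}")
# Orders within the model (6) are absorbed by the calibration; orders beyond it
# (12, 18) survive as reconstruction error -- unless they are self-compensated,
# which is what the common-path configuration adds.
```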
Fig. 7 shows the reconstruction errors of the classical Twyman-Green TWI (a) and the TWI in Fizeau configuration (b). In both systems, only the Gaussian bump was added as disturbance. To quantify the deviation, the root-mean-square (RMS) and peak-to-valley (PV) values of the differences were calculated, as shown in Table 1.
Fig. 7 Reconstruction error of freeform 1 with a disturbed interferometer. The polynomial order of the disturbance exceeds the order of the calibrated interferometer model. a Classical Twyman-Green TWI: uncorrectable high-order disturbances are clearly visible. b TWI in Fizeau configuration: high-order disturbances are self-compensated to a high degree25.
        classical Twyman-Green TWI    common-path TWI
RMS     2.51 nm                       0.23 nm
PV      47.11 nm                      3.98 nm

Table 1. Reconstruction errors of the common-path and the classical Twyman-Green TWI system25.
In order to further investigate the effects of defects in the optical system, we added Zernike-polynomial-shaped errors to the first surface of the collimation lens. We then performed a simulated calibration and measurement as in the previous simulation. This was repeated with increasing order of the Zernike defect on the surface, up to Zernike number 65, i.e. polynomial degree 10. The results of the simulation in terms of the reconstruction errors of the surface are shown in Fig. 8.
Fig. 8 Freeform surface reconstruction error as a function of the order of a Zernike-polynomial-shaped surface error within the interferometer system. The self-compensating effect shows in the higher orders, which the common-path configuration tolerates much better than the classical Twyman-Green TWI system. This allows relaxing the specifications on the optics in interferometer design.
Experimental Verification
After the successful calibration of the common-path TWI, which takes about 30 minutes, the calibrated $ Q $-, $ P $- and $ R $-polynomials are available and measurements can be performed. For the evaluation of the measurement data, analogously to the calibration process, the difference between the measured OPDs $ \vec{b}_{\rm real} $ and the model is described by a correction vector $ \vec{x}_{\rm meas} $, containing the topography and position of the measured SUT, which thereby delivers the measurement result40.
Hardware-Setup
Fig. 9 shows the first lab demonstrator of the new Fizeau TWI. The beam of a fiber-coupled laser diode at 638 nm wavelength (TEM Lasy638, wavelength-stabilized using a spectral absorption line of iodine) is expanded to illuminate the monolithic point source array, which consists of a rectangular array of 13 × 13 point sources with a spacing of 2.5 mm. Each point source is a diffractive element on a fused silica substrate, which focuses the light onto the associated pinhole on the backside of the substrate. All sources are illuminated simultaneously; therefore, the laser is chosen to deliver about 18 mW ex fiber. Four percent of the light is reflected by the last, uncoated glass surface of the 4-inch Fizeau objective (custom design, radius of reference surface: 50 mm) and serves as reference wave, which provides maximum fringe contrast for uncoated glass SUTs. The objective limits the radius of curvature of the best-fit sphere to 50 mm convex and the clear aperture to a diameter of 53 mm. The interferogram is evaluated using the five-step Schwider-Hariharan algorithm41. For phase shifting, the interferometer objective is moved along the optical axis in five steps from zero to $ \lambda $ using three piezo actuators. The interferograms are recorded by a camera with a resolution of 2048 × 2048 pixels (AVT Pike F-421B). Note that this phase-shifting approach results in slight variations of the phase step across the field of view of the interferometer. However, the Schwider-Hariharan algorithm is designed to be robust with respect to step-width variations. Systematic errors introduced by the phase shifting are part of the systematic errors the calibration algorithm takes care of.
Fig. 9 Experimental realization of the Tilted Wave Fizeau Interferometer measuring an aspheric SUT. The light paths are highlighted in red. The monolithic point source array used for illumination is shown in detail.
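For reference, a minimal sketch of the five-step Schwider-Hariharan phase retrieval mentioned above, run on synthetic fringe data (nominal $ \pi/2 $ steps; the test signal is invented for illustration):

```python
# Five-step Schwider-Hariharan algorithm:
# phi = atan2(2*(I2 - I4), 2*I3 - I1 - I5), frames at steps -pi, -pi/2, 0, pi/2, pi.
import numpy as np

phi_true = np.linspace(0, 4 * np.pi, 256)            # synthetic test phase
A, B = 1.0, 0.8                                      # background and modulation
I1, I2, I3, I4, I5 = (A + B * np.cos(phi_true + k * np.pi / 2) for k in range(-2, 3))

phi = np.arctan2(2 * (I2 - I4), 2 * I3 - I1 - I5)    # wrapped phase in [-pi, pi]
err = np.angle(np.exp(1j * (phi - phi_true)))        # wrapped reconstruction error
print(f"max |error| = {np.abs(err).max():.2e} rad")  # essentially zero for ideal steps
```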
Topography Reconstruction
All specimens were measured with the same TWI hardware. After alignment, less than 1 minute is needed to measure the whole SUT. After evaluating the measured data, the reconstructed topography is obtained as the measurement result. In optics fabrication, the deviation from the nominal shape is typically the relevant quantity. Therefore, we present the measurement results as Surface Error = Measured Shape − Nominal Shape. As an example, Fig. 10 depicts the surface error of asphere 1. This specimen has three markers at the lower rim of the surface: three Gaussian-shaped dips that nicely illustrate the high lateral and height resolution of the method. Due to the standard description of aspheres by their sag, the marker holes have a positive sign. Fig. 11 shows the measurement result for the freeform 2 surface. Both the low-frequency topography error and the high-frequency error contributions can be resolved and reconstructed by the algorithm.
Fig. 10 Measurement result of the asphere 1.
Fig. 11 Measurement result of freeform 2. a Surface error. b High frequency error after subtraction of 136 Zernike terms.
Stability Evaluation
The main advantage of the Fizeau configuration is the common-path principle: beam paths for measurement and reference are the same inside the interferometer. Therefore, fluctuations due to air turbulence, acoustic disturbances and other effects are the same for the measurement and reference beam paths and cancel out. The effect is an improved phase stability. To investigate this effect quantitatively, successive phase images were acquired. A histogram of the difference of two phase images reveals the phase noise of the interferometer. The comparison of the phase noise of the Fizeau-type TWI with the Twyman-Green type was shown in25. Both phase differences were acquired with the same type of camera, the same exposure time and the same camera gain. The FWHM of the phase noise is thereby reduced by a factor of three for the Fizeau configuration. This improved stability also has an effect on the evaluated measurements. To investigate this effect, full-surface measurement evaluations of asphere 2 were gathered successively, and the full-surface difference was evaluated as in the phase noise measurement. Fig. 12 shows the evaluation results of two successive measurements and the difference for both configurations.
Fig. 12 Comparison of the repeatability of measurement evaluations. Upper left: measurement evaluations for two successive measurements in the classical Twyman-Green TWI configuration. Upper right: measurement evaluations in the Fizeau configuration. Lower: the corresponding differences and histograms over the differences of ten consecutive measurements.
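The stability metric used here, the histogram of the pixelwise difference of two successive phase maps and its FWHM, can be sketched as follows (synthetic Gaussian noise stands in for real camera data):

```python
# Subtract two successive phase maps, histogram the difference, read off the FWHM.
import numpy as np

rng = np.random.default_rng(1)
shape = (512, 512)
phase_a = rng.normal(0.0, 0.02, shape)   # two "successive" phase maps, rad
phase_b = rng.normal(0.0, 0.02, shape)

diff = phase_b - phase_a
hist, edges = np.histogram(diff, bins=201)
centers = 0.5 * (edges[:-1] + edges[1:])

above = centers[hist >= hist.max() / 2]  # bin centers above half maximum
fwhm = above.max() - above.min()
print(f"FWHM = {fwhm:.4f} rad")          # expect ~2.355 * sqrt(2) * 0.02 = 0.067 rad
```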
The data indicate that the new system achieves the same flexibility in the measurement of aspheres and freeform surfaces as the classical TWI. One example is freeform 1 with 1 mm freeform deviation from the best-fit sphere: high-frequency error structures and markers are visible, as well as the low-frequency shape of the specimen. Histograms over the whole evaluation difference of two successive measurements of asphere 2 indicate a reduction in the FWHM of the topography noise by a factor of three compared to the classical TWI. As expected, the surface deviations in Fig. 12, measured with two completely different systems, show the same structural characteristics like rings and tooling marks as well as the same overall shape. A closer look reveals the lower noise, as indicated in the histograms. Measurement data lack a true value. Therefore, the virtual experiments show the effects of defined disturbances more clearly. For a bump-shaped error on one of the system's lenses, the deviation of the reconstruction error is one order of magnitude smaller with the new Fizeau-type common-path TWI than with the classical Twyman-Green TWI system. The reason for this deviation lies in the limited calibration of the non-common-path TWI setup, which cannot describe the high-frequency bump adequately. The Zernike error simulation gives more insight into this behavior. In the diagrams of Fig. 8, only simulations from Zernike term number 40 to 65 are shown, since below number 45 all aberrations can be calibrated with the standard procedure in both systems. The figure shows that with the new Fizeau-type interferometer, higher-order defects are largely compensated and do not disturb the calibration and measurement. Therefore, the specifications of the optical surfaces used in the interferometer optics can be considerably relaxed in comparison to the classical Twyman-Green TWI realization.
This paper presented a new realization of a lens-array-based common-path Tilted Wave Interferometer that combines the high flexibility and high measurement speed of tilted wave interferometry with the robustness of Fizeau interferometry. We described the special role of the interferometer aperture and the mask array in the signal generation in the TWI. The new approach avoids disturbing multiple-beam interferences with a new illumination scheme. The same principles can be applied to fiber-based common-path TWI. An integral part of each tilted wave interferometer is the calibration algorithm. The new algorithm of the Tilted Wave Fizeau Interferometer considers four reference beams, one for each illumination configuration. Simulated measurements have shown that disturbances in the interferometer light path are self-compensated to a high degree. Compared to the classical Twyman-Green TWI configuration, this leads to better reconstruction results and the possibility to significantly relax the quality tolerances of the optics used. We have shown experimental measurements of two aspheres and a freeform surface as examples of the possibilities. The repeatability is improved by up to a factor of three in comparison to the classical Twyman-Green TWI configuration. The improved stability of the new approach will bring flexible optical testing of aspheres and freeform surfaces into close-to-production environments and will open the door to enhanced interferogram evaluation algorithms that help to improve traceability further.
We would like to acknowledge the funding by Mahr GmbH and the Deutsche Forschungsgemeinschaft (DFG) under the project number: 273678658.
Sphingolipids and Antimicrobial Peptides: Function and Roles in Atopic Dermatitis
Park, Kyungho; Lee, Sinhee; Lee, Yong-Moon
Inflammatory skin diseases such as atopic dermatitis (AD) and rosacea are complicated by barrier abrogation and deficiency in innate immunity. The first line of defense in the epidermal innate immune response is the antimicrobial peptides (AMPs), which exhibit broad-spectrum antimicrobial activity against multiple pathogens, including Gram-positive and Gram-negative bacteria, viruses, and fungi. The deficiency of these AMPs in AD skin fails to protect the body against virulent pathogen infections. In contrast to AD, where AMPs are suppressed, rosacea is characterized by overexpression of cathelicidin antimicrobial peptide (CAMP), the products of which result in chronic epidermal inflammation. In this regard, AMP generation controlled by a key ceramide metabolite, S1P, through an S1P-dependent mechanism could be considered as an alternative therapeutic approach to treat these skin disorders; i.e., increased S1P levels strongly stimulate CAMP expression, which elevates antimicrobial activity against multiple pathogens, resulting in improved skin in AD patients.
Silibinin Inhibits LPS-Induced Macrophage Activation by Blocking p38 MAPK in RAW 264.7 Cells
Youn, Cha Kyung; Park, Seon Joo; Lee, Min Young; Cha, Man Jin; Kim, Ok Hyeun; You, Ho Jin; Chang, In Youp; Yoon, Sang Pil; Jeon, Young Jin
We demonstrate herein that silibinin, a polyphenolic flavonoid compound isolated from milk thistle (Silybum marianum), inhibits LPS-induced activation of macrophages and production of nitric oxide (NO) in RAW 264.7 cells. Western blot analysis showed that silibinin inhibits iNOS gene expression, and RT-PCR showed that silibinin inhibits iNOS, TNF-$\alpha$, and IL-1$\beta$. We also showed that silibinin strongly inhibits p38 MAPK phosphorylation, whereas the ERK1/2 and JNK pathways are not inhibited. The p38 MAPK inhibitor abrogated the LPS-induced nitrite production, whereas the MEK-1 inhibitor did not affect nitrite production. A molecular modeling study proposed a binding pose for silibinin targeting the ATP-binding site of p38 MAPK (1OUK). Collectively, this series of experiments indicates that silibinin inhibits macrophage activation by blocking p38 MAPK signaling.
Silymarin's Protective Effects and Possible Mechanisms on Alcoholic Fatty Liver for Rats
Zhang, Wei; Hong, Rutao; Tian, Tulei
Silymarin has been introduced fairly recently as a hepatoprotective agent, but its mechanisms of action have not yet been well established. The aim of this study was to establish an alcoholic fatty liver model in rats in a short time and to investigate silymarin's protective effects and possible mechanisms. The model of alcoholic fatty liver was induced in rats by intragastric infusion of ethanol and a high-fat diet for six weeks. Histopathological changes were assessed by hematoxylin and eosin (HE) staining. The activities of alanine transaminase (ALT) and aspartate aminotransferase (AST) and the levels of total bilirubin (TBIL), total cholesterol (TC) and triglyceride (TG) in serum were detected with routine laboratory methods using an autoanalyzer. The activities of superoxide dismutase (SOD) and glutathione peroxidase (GPx) and the level of malondialdehyde (MDA) in liver homogenates were measured by spectrophotometry. The TG content in liver tissue was determined by spectrophotometry. The expression of nuclear factor-$\kappa$B (NF-$\kappa$B), intercellular adhesion molecule-1 (ICAM-1) and interleukin-6 (IL-6) in the liver was analyzed by immunohistochemistry. Silymarin effectively protected the liver from alcohol-induced injury, as evidenced by improved histological findings, reduced ALT and AST activities and TBIL level in serum, increased SOD and GPx activities and decreased MDA content in liver homogenates, and reduced TG content in liver tissue. Additionally, silymarin markedly downregulated the expression of NF-$\kappa$B p65, ICAM-1 and IL-6 in liver tissue. In conclusion, silymarin could protect against liver injury caused by ethanol administration. The effect may be related to alleviating lipid peroxidation and inhibiting the expression of NF-$\kappa$B.
Fucoxanthin Protects Cultured Human Keratinocytes against Oxidative Stress by Blocking Free Radicals and Inhibiting Apoptosis
Zheng, Jian; Piao, Mei Jing; Keum, Young Sam; Kim, Hye Sun; Hyun, Jin Won
Fucoxanthin is an important carotenoid derived from edible brown seaweeds and is used in indigenous herbal medicines. The aim of the present study was to examine the cytoprotective effects of fucoxanthin against hydrogen peroxide-induced cell damage. Fucoxanthin decreased the level of intracellular reactive oxygen species, as assessed by fluorescence spectrometry performed after staining cultured human HaCaT keratinocytes with 2',7'-dichlorodihydrofluorescein diacetate. In addition, electron spin resonance spectrometry showed that fucoxanthin scavenged hydroxyl radicals generated by the Fenton reaction in a cell-free system. Fucoxanthin also inhibited comet tail formation and phospho-histone H2A.X expression, suggesting that it prevents hydrogen peroxide-induced cellular DNA damage. Furthermore, the compound reduced the number of apoptotic bodies stained with Hoechst 33342, indicating that it protected keratinocytes against hydrogen peroxide-induced apoptotic cell death. Finally, fucoxanthin prevented the loss of mitochondrial membrane potential. These protective actions were accompanied by the down-regulation of apoptosis-promoting mediators (i.e., B-cell lymphoma-2-associated X protein, caspase-9, and caspase-3) and the up-regulation of an apoptosis inhibitor (B-cell lymphoma-2). Taken together, the results of this study suggest that fucoxanthin defends keratinocytes against oxidative damage by scavenging ROS and inhibiting apoptosis.
Inhibitory Effect of an Urotensin II Receptor Antagonist on Proinflammatory Activation Induced by Urotensin II in Human Vascular Endothelial Cells
Park, Sung Lyea; Lee, Bo Kyung; Kim, Young-Ae; Lee, Byung Ho; Jung, Yi-Sook
In this study, we investigated the effects of a selective urotensin II (UII) receptor antagonist, SB-657510, on the inflammatory response induced by UII in human umbilical vein endothelial cells (EA.hy926) and human monocytes (U937). UII induced inflammatory activation of endothelial cells through expression of proinflammatory cytokines (IL-1$\beta$ and IL-6), adhesion molecules (VCAM-1), and tissue factor (TF), which facilitates the adhesion of monocytes to EA.hy926 cells. Treatment with SB-657510 significantly inhibited UII-induced expression of IL-1$\beta$, IL-6, and VCAM-1 in EA.hy926 cells. Further, SB-657510 dramatically blocked the UII-induced increase in adhesion between U937 and EA.hy926 cells. In addition, SB-657510 remarkably reduced UII-induced expression of TF in EA.hy926 cells. Taken together, our results demonstrate that the UII antagonist SB-657510 decreases the progression of inflammation induced by UII in endothelial cells.
Antidiabetic and Beta Cell-Protection Activities of Purple Corn Anthocyanins
Hong, Su Hee; Heo, Jee-In; Kim, Jeong-Hyeon; Kwon, Sang-Oh; Yeo, Kyung-Mok; Bakowska-Barczak, Anna M.; Kolodziejczyk, Paul; Ryu, Ok-Hyun; Choi, Moon-Ki; Kang, Young-Hee; Lim, Soon Sung; Suh, Hong-Won; Huh, Sung-Oh; Lee, Jae-Yong
The antidiabetic and beta cell-protective activities of purple corn anthocyanins (PCA) were examined in pancreatic beta cell culture and in db/db mice. Among several plant anthocyanins and polyphenols, only PCA showed insulin secretion activity in cultures of HIT-T15 cells. PCA had excellent antihyperglycemic activity (in terms of blood glucose level and OGTT) and HbA1c-decreasing activity when compared with glimepiride, a sulfonylurea, in db/db mice. In addition, PCA showed efficient protection of pancreatic beta cells from cell death in HIT-T15 cell culture and in db/db mice. These results show that PCA has antidiabetic and beta cell-protective activities in pancreatic beta cell culture and in db/db mice.
Effects of Calcium Gluconate, a Water Soluble Calcium Salt on the Collagen-Induced DBA/1J Mice Rheumatoid Arthritis
Sohn, Ki Cheul;Kang, Su Jin;Kim, Joo Wan;Kim, Ki Young;Ku, Sae Kwang;Lee, Young Joon 290
This study examined the effects of calcium (Ca) gluconate on collagen-induced rheumatoid arthritis (CIA) in DBA/1J mice. A single daily dose of 200, 100 or 50 mg/kg Ca gluconate was administered orally to male DBA/1J mice for 40 days after the initial collagen immunization. To ascertain the effects after administration of the collagen booster, CIA-related features (including body weight, polyarthritis, knee and paw thickness, and paw weight increase) were measured, along with histopathological changes in the spleen, left popliteal lymph node, third digit and knee joint regions. CIA-related bone and cartilage damage improved significantly in the Ca gluconate-administered CIA mice. Additionally, myeloperoxidase (MPO) levels in the paw were reduced in Ca gluconate-treated CIA mice compared to CIA control groups. The level of malondialdehyde (MDA), an indicator of oxidative stress, decreased in a dose-dependent manner in the Ca gluconate groups. Finally, the production of IL-6 and TNF-α, which are involved in rheumatoid arthritis pathogenesis, was suppressed by treatment with Ca gluconate. Taken together, these results suggest that Ca gluconate is a promising candidate anti-rheumatoid arthritis agent, exerting anti-inflammatory, anti-oxidative and immunomodulatory effects in CIA mice.
Ethanolic Extract of the Seed of Zizyphus jujuba var. spinosa Ameliorates Cognitive Impairment Induced by Cholinergic Blockade in Mice
Lee, Hyung Eun;Lee, So Young;Kim, Ju Sun;Park, Se Jin;Kim, Jong Min;Lee, Young Woo;Jung, Jun Man;Kim, Dong Hyun;Shin, Bum Young;Jang, Dae Sik;Kang, Sam Sik;Ryu, Jong Hoon 299
In the present study, we investigated the effect of the ethanolic extract of the seed of Zizyphus jujuba var. spinosa (EEZS) on cholinergic blockade-induced memory impairment in mice. Male ICR mice were treated with EEZS, and behavioral tests were conducted using the passive avoidance, Y-maze, and Morris water maze tasks. EEZS (100 or 200 mg/kg, p.o.) significantly ameliorated scopolamine-induced cognitive impairment in these behavioral tasks without changes in locomotor activity. The ameliorating effect of EEZS on scopolamine-induced memory impairment was significantly reversed by a sub-effective dose of MK-801 (0.0125 mg/kg, s.c.). In addition, a single administration of EEZS to normal naive mice increased latency time in the passive avoidance task. Western blot analysis was employed to confirm the mechanism of the memory-ameliorating effect of EEZS. Administration of EEZS (200 mg/kg) increased the level of memory-related signaling molecules, including phosphorylated extracellular signal-regulated kinase and cAMP response element-binding protein, in the hippocampal region. Also, the expression level of brain-derived neurotrophic factor after administration of EEZS was markedly increased from 3 to 9 h. These results suggest that EEZS has a memory-ameliorating effect on scopolamine-induced cognitive impairment, mediated by enhancement of the cholinergic neurotransmitter system, in part via NMDA receptor signaling, and that EEZS would be a useful agent against cognitive dysfunction such as Alzheimer's disease.
Dependence Potential of Quetiapine: Behavioral Pharmacology in Rodents
Cha, Hye Jin;Lee, Hyun-A;Ahn, Joon-Ik;Jeon, Seol-Hee;Kim, Eun Jung;Jeong, Ho-Sang 307
Quetiapine is an atypical, or second-generation, antipsychotic agent that has been the subject of a series of case reports suggesting a potential for misuse or abuse. However, it is not a controlled substance and is not generally considered addictive. In this study, we examined quetiapine's dependence potential and abuse liability through animal behavioral tests using rodents. Molecular biology techniques were also used to investigate the action mechanisms of the drug. In the animal behavioral tests, quetiapine did not show any positive effect in the climbing, jumping, and conditioned place preference tests. However, in the head twitch and self-administration tests, the experimental animals showed significant positive responses. In addition, the action mechanism of quetiapine was found to be related to dopamine and serotonin release. These results demonstrate that quetiapine affects the neurological systems related to abuse liability and has the potential to lead to psychological dependence as well.
Chronic Administration of Catechin Decreases Depression and Anxiety-Like Behaviors in a Rat Model Using Chronic Corticosterone Injections
Lee, Bombi;Sur, Bongjun;Kwon, Sunoh;Yeom, Mijung;Shim, Insop;Lee, Hyejung;Hahm, Dae-Hyun 313
Previous studies have demonstrated that repeated administration of the exogenous stress hormone corticosterone (CORT) induces dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis and results in depression and anxiety. The current study sought to verify the impact of catechin (CTN) administration on chronic CORT-induced behavioral alterations using the forced swimming test (FST) and the elevated plus maze (EPM) test. Additionally, the effects of CTN on central noradrenergic systems were examined by observing changes in neuronal tyrosine hydroxylase (TH) immunoreactivity in rat brains. Male rats received 10, 20, or 40 mg/kg CTN (i.p.) 1 h prior to a daily injection of CORT for 21 consecutive days. The activation of the HPA axis in response to the repeated CORT injections was confirmed by measuring serum levels of CORT and the expression of corticotrophin-releasing factor (CRF) in the hypothalamus. Daily CTN administration significantly decreased immobility in the FST, increased open-arm exploration in the EPM test, and significantly blocked increases of TH expression in the locus coeruleus (LC). It also significantly enhanced the total number of line crossings in the open-field test (OFT), while individual differences in locomotor activity between experimental groups were not observed in the OFT. Taken together, these findings indicate that the administration of CTN prior to high-dose exogenous CORT significantly improves helpless behaviors, possibly by modulating the central noradrenergic system in rats. Therefore, CTN may be a useful agent for the treatment or alleviation of the complex symptoms associated with depression and anxiety disorders.
Compilation of severe errors in famous textbooks
Thread starter andresB
andresB
For the sake of helping students avoid confusion, I wonder if we can make a compilation of known errors in standard and commonly used textbooks. I'm not talking about random typos, but rather cases where the entire treatment of a subject is fundamentally flawed.
WWGD
I think they may already have sites that do that. Have you tried a search for "[BookName] errata"? Edit: But maybe we can compile here as many of these links as possible, alpha by author.
weirdoguy
Errata are for minor typos, and the OP excluded those from the discussion.
WWGD said:
Rereading the STEM bible thread, I saw an argument about Ballentine's treatment of several topics in QM, so I was thinking of things like that, rather than things like an "x" that is missing a 1/2 and can be fixed via an errata.
DaveC426913
"[BookName] fails".
Ok, my bad, I did not read carefully. But isn't this partially a matter of taste, opinion? Edit: Unless there are factual mistakes?
gleem
andresB said:
... rather cases where the entire treatment of a subject is fundamentally flawed.
Isn't this more of a problem with elementary or maybe high school science texts?
DaveC426913 said:
Not clear what you mean. I suggested searches for errata for specific books.
Note what's inside the "quotes" in my contribution.
Your suggestion for "[BookName] errata" was challenged (whether rightly or wrongly) by others.
I suggested an alternate title: "[BookName] fails".*
* in the 21st century, "fails" is a valid noun (as in: "epic fails"), not just a verb.
Fair enough. Maybe we can have book reviews, and the author (of the review) can elaborate on the flaws they perceive in the book being reviewed.
ZapperZ
Do you know of even one specific example of that type from such a resource?
Books like these are often reviewed by many people, and even when there are errors, big or small, these are usually corrected in subsequent editions.
On the other hand, Wikipedia........
Zz.
ZapperZ said:
Let alone that it is not likely a student knows enough to give a cogent criticism of a book's treatment of a topic or of the overall quality of the book. Don't get me wrong: it is a good idea to discuss the topic and address things you disagree with, but it seems like overreaching to try to do so as an undergraduate.
Of course an undergraduate can't do it. The idea I had with the thread is that people who have good knowledge make the warnings, so students (or people reading about the topic for the first time) don't waste their time or, even worse, acquire false knowledge.
Personally I don't have the confidence to pretend I can give an authoritative opinion, but, for example, I've heard really bad reviews of Sakurai's (revised edition) treatment of the Wigner-Eckart theorem. Also, I've seen harsh reviews of the treatment of the Quantum Zeno effect given in Ballentine.
But this is different than saying these books have ERRORS!!! Errors mean that the content is faulty!
You are confusing personal preference with there being mistakes in the content. Those are two entirely different things!
Keith_McClary
Some Physics textbooks will "prove" that QM bound states must have negative energy, but that is wrong:
Barry Simon writes:
One of the more intriguing questions concerns the presence of discrete eigenvalues of positive energy (that is, square-integrable eigenfunctions with positive eigenvalues). There is a highly non-rigorous but physically appealing argument which assures us that such positive energy "bound states" cannot exist. On the other hand, there is an ancient, explicit example due to von Neumann and Wigner which presents a fairly reasonable potential ##V##, with ##V(r) \to 0## as ##r \to \infty## and which possesses an eigenfunction with ##E = 1##.
The potential $$V(r)=\frac{-32 \sin r[g(r)^3 \cos r-3g(r)^2\sin^3 r+g(r)\cos r+\sin^3 r]}{[1+g(r)^2]^2}$$
with ##g(r)=2r-\sin 2r## has the eigenvalue ##+1## with eigenfunction
$$u(r)=\frac{\sin r}{r(1+g(r)^2)}$$
On Positive Eigenvalues of One-Body Schrodinger Operators
Simon's paper is almost as "ancient" as von Neumann and Wigner's result was when Simon wrote that.
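For anyone who wants to convince themselves, here is a minimal sympy sketch (my own check, assuming units with ##\hbar^2/2m=1##, so the radial equation is ##-u''+Vu=Eu##). If the potential above is transcribed correctly, the printed residuals should sit at the level of floating-point noise:

```python
# Numerical spot-check of the von Neumann-Wigner example quoted above,
# in units with hbar^2/2m = 1: the residual -u'' + (V - E) u with E = +1
# should vanish at every test point.
import sympy as sp

r = sp.symbols('r', positive=True)
g = 2*r - sp.sin(2*r)
V = -32*sp.sin(r)*(g**3*sp.cos(r) - 3*g**2*sp.sin(r)**3
                   + g*sp.cos(r) + sp.sin(r)**3) / (1 + g**2)**2
u = sp.sin(r) / (r*(1 + g**2))

residual = -sp.diff(u, r, 2) + (V - 1)*u

check = sp.lambdify(r, residual)
for rv in (0.7, 1.3, 2.9, 5.1):
    print(rv, check(rv))  # expect values of order 1e-12 or smaller
```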
Demystifier
Landau and Lifshitz, Mechanics, Sec. 23 - Oscillations of systems with more than one degree of freedom.
It says that ##\omega^2## must be positive because otherwise energy would not be conserved, which is wrong.
https://www.physicsforums.com/threads/error-in-landau-lifshitz-mechanics.901356/
Also, I've seen harsh reviews of the treatment of the Quantum Zeno effect given in Ballentine.
Yes. Ballentine misunderstands the meaning of collapse in quantum mechanics, i.e. thinks that it doesn't exist even in some FAPP effective sense. It culminates in his conclusion that the quantum Zeno effect (theoretically most easily described in terms of collapses) does not exist, contrary to experiments which show that it exists.
atyy
Ballentine's treatment of quantum mechanics is fundamentally flawed. The book presents his personal theory, rather than standard quantum mechanics.
Feynman's treatment of hidden variables in quantum mechanics in his famous lectures is fundamentally flawed, probably because Feynman did not understand the topic at that time. There are also minor physics errors (not typos) elsewhere in the lectures, probably due to momentary carelessness. The lectures as a whole are magnificent.
martinbn
Demystifier said:
I don't get your objection. I might be wrong, but at first glance what they wrote seemed fine.
atyy said:
Feynman's treatment of hidden variables in quantum mechanics in his famous lectures is fundamentally flawed, probably because Feynman did not understand the topic at that time.
Which pages? And why is it fundamentally flawed?
martinbn said:
Well, energy is conserved for any sign of ##\omega^2##. Indeed, energy is conserved whenever the Hamiltonian does not have an explicit dependence on time, which is the case for any sign of ##\omega^2##, as long as ##\omega## does not have an explicit dependence on time.
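A quick numerical illustration (a minimal sketch of my own, with ##m=1## and ##\omega^2=-1##; the trajectory is the unbounded ##x=\cosh t##, yet the energy stays pinned at ##-1/2##):

```python
# Energy conservation for H = v^2/2 + (omega2/2) x^2 with omega2 < 0:
# the motion is unbounded, but E is constant because H is time-independent.
import numpy as np
from scipy.integrate import solve_ivp

omega2 = -1.0

def rhs(t, y):
    x, v = y
    return [v, -omega2 * x]   # x'' = -omega2 * x

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0.0, 5.0, 6)
x, v = sol.sol(t)
print(0.5 * v**2 + 0.5 * omega2 * x**2)   # ~ -0.5 at every time
```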
Andy Resnick
It's curious: the second paragraph right after eqn 23.8 (in my 3rd edition) does claim the roots must be 'real and positive' but only provides a counterexample for imaginary ω, not negative ω. I wonder if there is an underlying assumption that negative real frequencies are the same (except for a constant phase factor) as positive frequencies.
Andy Resnick said:
imaginary ω, not negative ω
Perhaps I am stating the obvious, but imaginary ##\omega## means negative ##\omega^2##.
http://www.feynmanlectures.caltech.edu/III_01.html#Ch1-S8
"We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot make the mystery go away by "explaining" how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics."
Feynman refers to the double slit experiment. However, most people would nowadays take the Bell tests to be the mystery of QM, not the double slit. There is interesting commentary in section 1 of https://arxiv.org/abs/1301.3274. Whitaker comments that Feynman corrected himself in his later lectures on computation https://aapt.scitation.org/doi/full/10.1119/1.4948268 "In any case, since what Feynman describes is indeed Bell's Theorem, it is extremely interesting that he adds that he often entertained himself by squeezing the difficulty of quantum mechanics into a smaller and smaller place, and he finds this place precisely in this analysis. Thus, Feynman's view is apparently clear—the content of Bell's Theorem is the crucial point that distinguishes classical and quantum physics."
"We make now a few remarks on a suggestion that has sometimes been made to try to avoid the description we have given: "Perhaps the electron has some kind of internal works—some inner variables—that we do not yet know about. Perhaps that is why we cannot predict what will happen. If we could look more closely at the electron, we could be able to tell where it would end up." So far as we know, that is impossible. We would still be in difficulty. Suppose we were to assume that inside the electron there is some kind of machinery that determines where it is going to end up. That machine must also determine which hole it is going to go through on its way. But we must not forget that what is inside the electron should not be dependent on what we do, and in particular upon whether we open or close one of the holes. So if an electron, before it starts, has already made up its mind (a) which hole it is going to use, and (b) where it is going to land, we should find P1 for those electrons that have chosen hole 1, P2 for those that have chosen hole 2, and necessarily the sum P1+P2 for those that arrive through the two holes. There seems to be no way around this. But we have verified experimentally that that is not the case. And no one has figured a way out of this puzzle. So at the present time we must limit ourselves to computing probabilities. We say "at the present time," but we suspect very strongly that it is something that will be with us forever—that it is impossible to beat that puzzle—that this is the way nature really is."
Feynman says something similarly erroneous in this video around 51 minutes.
Hidden variables for the double slit are possible.
The idea I had with the thread is that people who have good knowledge make the warnings, so students (or people reading about the topic for the first time) don't waste their time or, even worse, acquire false knowledge.
Which would be great, except how do we decide what explanation prevails amidst multiple opposing views? By discussion of course. But there's no clear winner.
So, instead of an authoritative list of errata, what we get is a discussion thread where the issues are debated back and forth, possibly endlessly. See posts 15 through 24 for examples.
It's a laudable idea, I just think there's an XKCD for that...
You will have covered the concepts here before, either in previous years or even this year. But I'll summarise the key features of linear equations below so that you have a handy reference point all in one place.
Slope or Gradient
The gradient of a line (also called the slope) that passes through two known points, say $\left(x_1,y_1\right)$ and $\left(x_2,y_2\right)$ on the Cartesian plane, can be found easily. Gradient is a measure of steepness: it is the ratio of a line's rise (or fall) to its run.
If, over a distance of $8$ metres, a driveway rises $2$ metres, then its gradient is said to be the ratio $\frac{2}{8}=\frac{1}{4}=0.25$. It is also defined as the tangent of the angle of rise, as shown in this simple diagram.
Consider the following example of a line passing through two points
$\left(-3,7\right)$ and $\left(5,9\right)$ as shown here:
Looking at the two $y$ values of the two points, the rise is clearly $2$. We could either use a formula for rise which might look like $\text{Rise}=y_2-y_1=9-7=2$ or simply notice that there is a gap of $2$ between the two values.
Looking at the two $x$ values we could again either use the formula $\text{Run}=x_2-x_1=5-\left(-3\right)=8$ or simply notice that the gap between $-3$ and $5$ is $8$.
We also realise that the line is rising and this means that the gradient is positive.
The gradient, often denoted by the letter $m$, is simply the ratio given by:
$m=\frac{y_2-y_1}{x_2-x_1}=\frac{2}{8}=0.25$
From the fact that $\tan\theta=0.25$ we can use a scientific calculator to show that $\theta=\tan^{-1}\left(0.25\right)=14^\circ2'$, which gives some sense to the steepness of the rise.
Note that if the line is falling, the line's gradient will be negative. In such cases the acute angle the line makes with the $x$ axis will be shown on the calculator as a negative angle. Adding $180^\circ$ to this will reveal the obtuse angle of inclination the line makes with the axis.
For example, if the gradient was given by $m=-0.25$, then $\theta=180^\circ+\tan^{-1}\left(-0.25\right)$, which simplifies to $\theta=165^\circ58'$.
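The calculations above are easy to script. A minimal sketch in plain Python (nothing here beyond the formulas just stated):

```python
import math

def gradient_and_angle(p1, p2):
    """Gradient m = (y2 - y1)/(x2 - x1) and angle of inclination in degrees."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)        # the line must not be vertical
    theta = math.degrees(math.atan(m))
    if m < 0:
        theta += 180                 # obtuse angle for a falling line
    return m, theta

print(gradient_and_angle((-3, 7), (5, 9)))  # (0.25, 14.03...), i.e. about 14°2'
```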
Suppose we consider the line given by $5x-2y=20$. By putting $x=0$ we see that $y=-10$ (note the negative sign here). Also, by putting $y=0$, we find that $x=4$. This means that the $x$ and $y$ intercepts are $4$ and $-10$ respectively. The situation is shown here:
Note that the rise and run can be determined from the $x$ and $y$ intercepts. The positive gradient of the line shown is given as $m=\frac{10}{4}=2.5$.
What is the gradient of the line shown in the graph, given that point A(3,3) and point B(6,5) both lie on the line?
What is the gradient of the line going through A and B?
Finding the Equation
The line with equation $y=mx+b$ has a gradient $m$ and a $y$ intercept $b$. It is important to observe that this form of the line shows $y$ explicitly as a function of $x$ with $m$ and $b$ as constants; different values of $x$ will determine different values of $y$.
For example, the line, say $L_1$, given by $y=3x+3$ has a gradient of $3$ and a $y$ intercept of $3$. The $y$ intercept can be determined by noting that at $x=0$, $y=3$.
The line $L_2$ given in general form as $2x+y-8=0$ can be rearranged to $y=-2x+8$, and the gradient $-2$ and $y$ intercept $8$ can be easily determined.
The line $L_3$ given by $5x+4y-29=0$ can be rearranged to $4y=29-5x$ and then to $y=\frac{29}{4}-\frac{5}{4}x$, with gradient $m=-\frac{5}{4}$ and $y$ intercept $b=7.25$.
We will now go through some of the skills in finding lines, intersections and midpoints by considering a number of questions relating to the lines $L_1,L_2$ and $L_3$. As we answer the questions, check the sketch below to confirm your understanding of each answer.
Is the point $A\left(1,6\right)$ on $L_1$?
By substituting $\left(1,6\right)$ into $y=3x+3$ we see that $6=3\times1+3$. This is true, and so the given point is on $L_1$.
Find $P$, the $x$ intercept of $L_2$.
Since $L_2$ is given by $2x+y-8=0$, the $x$ intercept is found by putting $y=0$. Then $2x-8=0$ and, solving for $x$, we see that $x=4$. The point of intercept is thus $P\left(4,0\right)$.
Find the equation of the line $L_4$, which passes through $P$ and $M$.
With $P\left(4,0\right)$ and $M\left(5,1\right)$, we have two methods to find $L_4$. Both methods require finding the gradient of the line, given by $m=\frac{y_2-y_1}{x_2-x_1}=\frac{1-0}{5-4}=1$.
Method 1 makes use of the gradient-intercept form of the line. Specifically, we know that the equation we are looking for must have the form $y=1x+b$. Since $M\left(5,1\right)$ is on this line, it must satisfy it. Thus we can write $1=1\times5+b$ and so, with a little thought, $b$ must be $-4$. The equation of $L_4$ must be $y=x-4$.
The second method makes use of the point-gradient formula $y-y_1=m\left(x-x_1\right)$. We know that the gradient $m=1$ and, choosing one of the known points on the line, say $M\left(5,1\right)$, we can determine the equation of $L_4$ as $y-1=1\left(x-5\right)$, and this simplifies once again to $y=x-4$.
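Both methods reduce to two lines of code. A minimal sketch, using the same points $P\left(4,0\right)$ and $M\left(5,1\right)$:

```python
def line_through(p, q):
    """Return (m, b) for the line y = m*x + b through points p and q."""
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)  # gradient formula
    b = y1 - m * x1            # substitute one known point into y = mx + b
    return m, b

print(line_through((4, 0), (5, 1)))  # (1.0, -4.0), i.e. y = x - 4
```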
A line passes through the point $A\left(-2,-9\right)$ and has a gradient of $-2$. Using the point-gradient formula, express the equation of the line in gradient-intercept form.
A line passes through the points $\left(3,-5\right)$ and $\left(-7,2\right)$.
a) Find the gradient of the line
b) Find the equation of the line by substituting the gradient and one point into $y-y_1=m\left(x-x_1\right)$
Answer the following
a) Find the equation, in general form, of the line that passes through $A\left(-12,-2\right)$ and $B\left(-10,-7\right)$
b) Find the $x$-coordinate of the point of intersection of the line that goes through $A$ and $B$, and the line $y=x-2$
c) Hence find the $y$-coordinate of the point of intersection
Intercepts of Horizontal and Vertical Lines
Horizontal lines are lines that follow the horizon. They look like this...
Imagine now horizontal lines on the Cartesian plane. Horizontal lines are parallel to the $x$ axis, and as you move along a horizontal line, the $x$ value will change but the $y$ value will remain the same.
A horizontal line will:
only have a $y$ intercept
have an equation of the form $y=b$ (every point on the line has a $y$ value of $b$)
have a $y$ intercept of $b$, and no $x$ intercept
Vertical Lines are lines that go up and down (they are perpendicular to horizontal lines).
Imagine now vertical lines on the Cartesian plane. Vertical lines are parallel to the $y$ axis, and as you move along a vertical line, the $y$ value will change but the $x$ value will remain the same.
A vertical line will:
only have an $x$ intercept
have an equation of the form $x=b$ (every point on the line has an $x$ value of $b$)
have an $x$ intercept of $b$, and no $y$ intercept
Parallel and perpendicular lines
These occur when we have 2 lines that NEVER cross each other and have no points in common. For this to happen the two lines need to have exactly the same slope. If they have different slopes they will cross.
Parallel lines occur often in the real world.
Consider the line $y=x$, with gradient $1$. What would happen if we shifted every point on the line $2$ units upwards?
We would get a new line that is parallel to $y=x$, but with every point having a $y$ value that is two greater: $y=x+2$.
So parallel lines are just shifts of one another.
Parallel lines on the Cartesian Plane have the same gradient (slope).
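In code, checking whether two lines are parallel is just a matter of comparing gradients. A minimal sketch, using the shifted line from the example above:

```python
def are_parallel(m1, m2, tol=1e-12):
    """Lines y = m1*x + b1 and y = m2*x + b2 are parallel iff m1 == m2."""
    return abs(m1 - m2) < tol

print(are_parallel(1, 1))   # True: y = x and its shift y = x + 2
print(are_parallel(1, -2))  # False: different gradients, so the lines cross
```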
Perpendicular is the word used to describe when one object meets another at exactly 90°. So perpendicular lines are simply lines that cross each other at exactly 90°.
To see how important the idea of perpendicular really is just think about your floor, walls and roof. If a builder does not take care to make the walls perpendicular to the floor and ceiling you'll end up with an unstable house.
The leaning tower of Pisa is a famous example of perpendicular angles gone wrong! Prior to restoration work performed between 1990 and 2001, the tower leaned at an angle of 5.5°, but the tower now leans at about 3.99°. That means the acute angle made by the tower and the ground is 86.01°.
Perpendicular lines on the Cartesian plane will have one point of intersection, and at that point of intersection the angle between them will be 90°.
Intersections and concurrent lines
Because lines extend forever in both directions, unless they are parallel they will intersect somewhere.
Now when 3 or more lines all pass through the same point we give those lines a special name: they are called concurrent lines.
The point of intersection is called the "point of concurrency", labelled point P below.
Intersections of two lines
Where two lines intersect, they share a common point. The $x$ and $y$ values of this point satisfy the equations of both lines.
If one line has equation $y=2x+3$ and another has equation $y=x+6$, then the point of intersection is where both $y$'s have the same value. If they are the same value, then we can say that:
(at the point of intersection) $2x+3$ is equal to $x+6$
$2x+3=x+6$
We can then solve for the $x$ value at the point of intersection.
$2x-x=6-3$
$x=3$
Now that we have $x$, we can find the $y$ value at the point of intersection.
Which equation should we substitute back into? Well, since the point is common to both lines, you can choose either equation.
$y=x+6$
$y=3+6$
$y=9$
So these lines cross at the point $\left(3,9\right)$.
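The same steps, written once for any pair of lines in gradient-intercept form (a minimal sketch; parallel lines return None):

```python
def intersection(m1, b1, m2, b2):
    """Intersection of y = m1*x + b1 and y = m2*x + b2, or None if parallel."""
    if m1 == m2:
        return None
    x = (b2 - b1) / (m1 - m2)  # from m1*x + b1 = m2*x + b2
    return x, m1 * x + b1

print(intersection(2, 3, 1, 6))  # (3.0, 9.0), as in the worked example
```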
Consider the following linear equations: $y=2x+2$ and $y=-2x+2$.
a) What are the intercepts of the line $y=2x+2$?
b) What are the intercepts of the line $y=-2x+2$?
c) Plot the lines of the two equations on the same graph.
d) State the values of $x$ and $y$ that satisfy both equations.
Examine the graph attached and assess:
the slope of the line.
the $y$-intercept of the line.
the $x$-intercept of the line.
Consider the graph of the linear function shown.
What is the slope of the line?
What is the $y$-intercept?
What is the $x$-intercept?
What is the equation of the line?
What is the zero of the function?
The zeros of a function are the input values that make the function equal to zero.
For example, for the line $y=2x+1$, the zero is the value of $x$ that makes the whole function ($2x+1$) equal to zero. So we set $2x+1=0$ and solve for $x$.
$2x+1=0$
$2x=-1$
$x=-\frac{1}{2}$
Does that process look familiar? It should. It's exactly the same process we use when we are finding the $x$ intercepts. This means that the phrase zero of a function (and also sometimes root of a function) is actually asking for the $x$-intercepts.
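As a one-line check of the worked example (a minimal sketch):

```python
def zero_of_linear(m, b):
    """The zero (x-intercept) of y = m*x + b; requires m != 0."""
    return -b / m

print(zero_of_linear(2, 1))  # -0.5, the zero of y = 2x + 1
```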
Recognition of coal from other minerals in powder form using terahertz spectroscopy
Jingjing Deng,1 Jan Ornik,2 Kai Zhao,1 Enjie Ding,1,3,5 Martin Koch,2 and Enrique Castro-Camus4,6
1School of Information and Control Engineering, China University of Mining and Technology, 1 Daxue Road, Xuzhou 221116, Jiangsu, China
2Department of Physics and Material Sciences Center, Philipps-Universität Marburg, Renthof 5, 35032 Marburg, Germany
3National Joint Engineering Laboratory of Internet Applied Technology of Mines, China University of Mining and Technology, Xuzhou 221116, Jiangsu, China
4Centro de Investigaciones en Optica A.C., Loma del Bosque 115, Lomas del Campestre, Leon, Guanajuato 37150, Mexico
[email protected]
[email protected]
•https://doi.org/10.1364/OE.405438
Jingjing Deng, Jan Ornik, Kai Zhao, Enjie Ding, Martin Koch, and Enrique Castro-Camus, "Recognition of coal from other minerals in powder form using terahertz spectroscopy," Opt. Express 28, 30943-30951 (2020)
Original Manuscript: August 14, 2020
Revised Manuscript: September 14, 2020
Manuscript Accepted: September 20, 2020
Currently, a significant fraction of the world's energy is still produced from the combustion of mineral coal. The extraction of coal from mines is a relatively complex and dangerous activity that still requires the intervention of human miners, and therefore, in order to minimize risks, automation of the coal mining process is desirable. One aspect still under investigation is techniques that could recognize on-line whether the mineral being extracted from the mine is coal or the surrounding rock. In this contribution we present a proof of concept of a method with potential for recognition of the extraction debris from mining based on its terahertz transmission.
While there is an important effort to transition from fossil fuels to renewable energy sources, the current energy requirements in several parts of the world will still depend on the use of oil and coal for several years [1,2]. Countries such as China use considerable amounts of coal in their energy production, which implies that mining coal is an important activity that still employs many workers subject to uncomfortable and risky working conditions. Therefore, there is an interest in automating, as much as possible, all the steps of the coal-mining process in order to minimize the requirement for human intervention and, hence, the risk associated with this activity [3-5].
One particular aspect that requires appropriate technological solutions is the recognition of coal from the surrounding rock sections while extracting it from the mines. Various techniques have been proposed in the past for this, for instance, methods for coal-rock interface recognition based on the hardness difference between the coal bed and the rock layer [6,7], which results in changes in the shearer operation parameters, such as applied voltage, current, motor speed, etc. [8]. Additionally, the idea of monitoring the temperature of the cutting tool has been explored [9,10]. Methods based on the acoustic signals produced at the point of contact of the tool with either coal or rock have also been explored [11], among several other techniques [12-14]. However, these methods are not always suitable for coal mines, since the hardness of some types of rock does not contrast sufficiently with that of coal for appropriate recognition.
The terahertz band of the electromagnetic spectrum, located between the microwave and the infrared regions, only became accessible to scientists about three decades ago. The introduction of the technique called terahertz time-domain spectroscopy (THz-TDS), developed in the late 1980s, opened the possibility of performing spectroscopic measurements in this band of the electromagnetic spectrum [15]. The number of applications that terahertz technology has found since then is enormous [16], ranging from the study of materials [17-20] to non-destructive testing in biology [21-23], chemistry [24,25], industry [26-28] and art [29-31]. Terahertz technology was first proposed for coal-rock interface recognition by Wang and co-workers [32]. They presented the classification of compressed powder pellets of rock and coal using the absorption or refractive index spectra from 0.2 THz to 1.6 THz for the classification model, demonstrating the potential of the technique for coal-rock interface recognition. Yet, the complexity and time required to fabricate the compressed tablets cannot meet the needs of on-line coal-rock interface identification, and the cost of a broad-band terahertz time-domain spectrometer is too high for it to be an appropriate solution with potential for broad real-world application.
In this article we present a method, based on terahertz spectroscopy of powders, that makes it possible to distinguish several types of coal from various rocks. The method is based on quantifying the losses in the powders caused by scattering, which are related to both the grain size and the refractive index of the powder, the latter being a characteristic of each type of mineral.
Terahertz waves have frequencies between 100 GHz and 10 THz, corresponding to wavelengths between 3 mm and 30 $\mu$m. Powders with grain sizes in that range are expected to exhibit significant scattering in this band. When the wavelength and the dimensions of the scattering centers are comparable, the appropriate formalism to treat their interaction is Mie theory [33]. While we will not go into the details of the derivation and assumptions of this theory, we will explore the possibility of material recognition in powder form using the transmission through the mineral powders.
2.1 Terahertz dielectric properties of bulk minerals
Before we get started with the modeling, we need to know the refractive indices of the bulk materials involved. Three types of coal, Anthracite (CA), Lignite (CL) and Fat coal (CF), as well as five types of rock usually found in coal mines, Carbonaceous mudstone (RCM), Mudstone (RM), Conglomerate (RC), Limestone (RL) and Siltstone (RS), were chosen. Samples of all of these minerals were obtained from the China Coal Science and Technology Museum and Dr. Weining Xie from the China University of Mining and Technology. The refractive indices of all these minerals in "solid" form were measured as described in the Methods section over the band between 75 GHz and 2 THz. As seen in Fig. 1, the refractive indices show very little optical dispersion and the imaginary part of the refractive index is negligible. For the purposes of this initial modeling it is enough to know that all values fall between 1.9 and 2.9.
Fig. 1. Panels a to i show the real (continuous) and imaginary (dashed) parts of the refractive indices for the coal types Anthracite (CA), Lignite (CL) and Fat coal (CF), as well as the rocks Carbonaceous mudstone (RCM), Mudstone (RM), Conglomerate (RC), Limestone (RL) and Siltstone (RS), respectively. Given that the conglomerate is rather heterogeneous, two representative positions are reported (panels f and g), which show important variations across the sample.
2.2 Theory of scattering in powders at THz frequencies
The Mie theory of scattering [34] assumes a spherical particle of radius $a$ and refractive index $n_p$, and predicts that the extinction cross-section is given by
(1)$$\sigma=\frac{2\pi c^2}{\omega^2}\sum_{i=1}^\infty (2i+1)\,\textbf{Re}\left(a_i + b_i \right),$$
where $\omega$ is the angular frequency, $c$ is the speed of light,
(2)$$a_i = \frac{ \psi'_i(y)\psi_i(x)- n_p\psi_i(y)\psi'_i(x)}{ \psi'_i(y)\zeta_i(x)-n_p\psi_i(y)\zeta'_i(x) } ,$$
(3)$$b_i = \frac{ n_p \psi'_i(y)\psi_i(x)- \psi_i(y)\psi'_i(x) }{ n_p\psi'_i(y)\zeta_i(x) - \psi_i(y)\zeta'_i(x) }.$$
Here $x = 2 \pi a / \lambda _0 = \omega a /c$, $y=n_p x$, and $\psi _i(z)=z j_i(z)$ and $\zeta _i(z)=z(j_i(z)-\textrm {i}y_i(z))$, where $j_i(z)$ and $y_i(z)$ are the spherical Bessel and Neumann functions, not to be confused with the Bessel functions of the first and second kind usually denoted by $J_i(z)$ and $Y_i(z)$, and $'$ denotes the derivative with respect to the corresponding independent variable ($x$ or $y$). Notice that $i$ is used for the index, while $\rm i$ is used for the imaginary unit. In practice the series shown in Eq. (1) cannot be calculated with an infinite number of terms. We calculated the number of terms $M$ at which to truncate the sum using the empirical rule $M=\textrm {min}\{n\in \mathbb {Z}|n\geq x+4.3x^{1/3}+1\}$. From the cross-section it is possible to calculate the transmittance
(4)$$T=\exp(-\sigma d N /2),$$
where $N=a^{-3}$ is the density of particles and $d$ is the thickness of the powder layer. Since in the experiments presented in the following sections the particles were separated into size intervals, the transmittance was calculated and averaged over ten particle sizes within each experimental interval, with $d$=3.6 mm. The transmittance for three different size intervals is shown in Fig. 2 for refractive indices between 1.9 (light color) and 2.9 (dark color). It is worth mentioning that this calculation was performed assuming that the material is non-absorbing and non-dispersive, which is a reasonable assumption based on the refractive indices presented in Fig. 1. The immediate conclusion we can draw from these plots is that it might be possible to distinguish between the various rocks and coals, since the transmission of their powders depends on their refractive indices, which are, in general, different.
Fig. 2. Transmittance predicted by Mie theory for a 3.6 mm thick layer of particles with diameters in the ranges 57 $\mu$m to 146 $\mu$m (blue), 171 $\mu$m to 397 $\mu$m (green) and 691 $\mu$m to 1204 $\mu$m (red) with refractive indices between 1.9 (lighter colors) and 2.9 (darker colors).
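For readers who wish to reproduce curves like those in Fig. 2, the following is a minimal Python sketch of Eqs. (1)-(4) for a single grain size; it is not the authors' original code. The Riccati-Bessel functions are built from SciPy's spherical Bessel functions, the truncation rule is the empirical one quoted above, and the averaging over ten sizes per interval described in the text would simply loop this over sizes:

```python
# Sketch of Eqs. (1)-(4) for a non-absorbing, non-dispersive sphere (real n_p).
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def psi(i, z):   return z * spherical_jn(i, z)
def dpsi(i, z):  return spherical_jn(i, z) + z * spherical_jn(i, z, derivative=True)
def zeta(i, z):  return z * (spherical_jn(i, z) - 1j * spherical_yn(i, z))
def dzeta(i, z):
    return (spherical_jn(i, z) - 1j * spherical_yn(i, z)) \
         + z * (spherical_jn(i, z, derivative=True)
                - 1j * spherical_yn(i, z, derivative=True))

def extinction_cross_section(freq, a, n_p, c=2.998e8):
    w = 2 * np.pi * freq
    x = w * a / c
    y = n_p * x
    M = int(np.ceil(x + 4.3 * x**(1/3) + 1))   # truncation rule from the text
    s = 0.0
    for i in range(1, M + 1):
        a_i = (dpsi(i, y)*psi(i, x) - n_p*psi(i, y)*dpsi(i, x)) \
            / (dpsi(i, y)*zeta(i, x) - n_p*psi(i, y)*dzeta(i, x))
        b_i = (n_p*dpsi(i, y)*psi(i, x) - psi(i, y)*dpsi(i, x)) \
            / (n_p*dpsi(i, y)*zeta(i, x) - psi(i, y)*dzeta(i, x))
        s += (2*i + 1) * (a_i + b_i).real
    return 2 * np.pi * c**2 / w**2 * s         # Eq. (1)

def transmittance(freq, a, n_p, d=3.6e-3):
    sigma = extinction_cross_section(freq, a, n_p)
    N = a**-3                                  # particle density assumed in the text
    return np.exp(-sigma * d * N / 2)          # Eq. (4)

# e.g. a grain of 100 um radius and index 2.4 at 0.5 THz (illustrative values)
print(transmittance(0.5e12, 100e-6, 2.4))
```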
2.3 Recognition of mineral powders
Powders of the three coal and five rock types were prepared as described in the Methods section. Sieves with different hole sizes were used to separate the powders into eleven different size ranges, resulting in $(3+5)\times 11=88$ different powder samples. The powders were placed in cuvettes consisting of flat-faced polyethylene windows with a cavity of 3.6 mm separation between walls (see Fig. 3(i)). The transmission of the powders was measured, resulting in the plots shown in Fig. 3. We can see that, at least qualitatively, the trend as a function of grain size is consistent with the theoretical scattering calculations presented earlier. Although at first glance the spectra of all the coal and stone samples seem almost identical, subtle differences emerge between them. The maximum amplitudes of the spectra differ between minerals, and the drop in transmission at higher frequencies follows a different shape. Additionally, the variation of the curves as a function of grain size also differs; for instance, Anthracite (CA) and Limestone (RL) show greater changes in their spectral behaviour with particle size in comparison to Lignite (CL).
Fig. 3. Panels a-h present the transmittance spectra from powder samples for Anthracite (CA), Lignite (CL) and Fat coal (CF) as well as the rocks Carbonaceous mudstone (RCM), Mudstone (RM), Conglomerate (RC), Limestone (RL), Siltstone (RS) respectively. The different curves in each panel represent the 11 different grain size ranges starting from 13 $\mu$m-43 $\mu$m (darker color) to 691$\mu$m-1204 $\mu$m (lighter color), the 11 size intervals are explicitly displayed in Fig. 4. Panel i presents a schematic representation of the terahertz pulse before ($E_{Ref}$) and after ($E_{Sam}$) propagation through the powder samples in a cuvette.
In order to identify the rock and coal powders based on their terahertz transmission, we propose to construct parameters of the form
(5)$$p=\frac{T(f_{i})}{T(f_{j})},$$
where $f_i$ and $f_j$ are frequencies chosen for each grain size, such that the separation of $p$ for the different minerals is maximized. By plotting two such parameters for different choices of frequencies we are able to generate a 2-dimensional parametric plot. Such plots are shown in Fig. 4, where the frequencies used are shown in THz on the axes (without units for visual clarity). As seen in the different plots, significant dispersion of the various minerals is observed. Coals are shown in "warm" colors (red-yellow) and with open symbols (*,$\times$,+), while rocks are shown in "cold" colors (green-blue) and with closed symbols ($\square$,$\circ$,$\bigstar$, $\triangledown$, $\vartriangle$) for easy identification. In addition, rectangles are drawn around the regions containing the points that correspond to coals, as a guide-to-the-eye. In some cases, such as the particle range between 101 $\mu$m and 238 $\mu$m, shown in Fig. 4(h), perfect separation of all minerals is possible, indicating that it could be possible to distinguish between these materials by simply using three monochromatic THz sources and detectors with appropriately chosen grain sizes. Yet, in such a technology, additional aspects such as standing waves will have to be taken into account [35].
Fig. 4. Panels a-k present the ratios of the experimentally determined transmittance of the powder samples at the frequencies indicated on the axes (in THz; the unit was omitted for visual reasons). Each panel condenses the results of 9 measurements for each mineral powder within the grain size range indicated. The symbols used are Anthracite (*), Lignite ($\times$) and Fat coal (+), as well as the rocks Carbonaceous mudstone ($\square$), Mudstone ($\circ$), Conglomerate ($\bigstar$), Limestone ($\triangledown$) and Siltstone ($\vartriangle$). In order to facilitate the visualization, the colours used are consistent with Fig. 3: "warm" colors (red-yellow) are used for coals, while "cold" colors (green-blue) are used for rocks. As a guide-to-the-eye we also included rectangles that indicate the regions where the parameters that correspond to the three different coals are found.
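The parameters of Eq. (5) are straightforward to extract from a measured spectrum. A minimal sketch (the array names and the three frequencies are illustrative placeholders, not values fixed by the text):

```python
import numpy as np

def ratio_parameter(freqs, T, f_num, f_den):
    """p = T(f_num) / T(f_den), interpolating the measured spectrum (Eq. 5)."""
    return np.interp(f_num, freqs, T) / np.interp(f_den, freqs, T)

def parameter_pair(freqs, T, f_i, f_j, f_k):
    """Two ratio parameters from three frequencies: one point in a Fig. 4 panel."""
    return (ratio_parameter(freqs, T, f_i, f_j),
            ratio_parameter(freqs, T, f_j, f_k))
```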
3. Discussion and conclusions
We performed a characterization of the terahertz dielectric properties of a series of coals and of rocks that are commonly found around them in a mine. Firstly, we found that all minerals have relatively small extinction coefficients (0.01$<\kappa<$0.06) and can, to a certain extent, be regarded as reasonably transparent dielectrics with relatively little optical dispersion. In addition, we found that rocks have slightly higher refractive indices (2.01$<n<$2.88) than coals (1.94$<n<$2.07). With additional experiments we were able to determine that the transmittance of powders of those minerals has a strong dependence on the grain size as well as a moderate dependence on the refractive index of the material. This ties in very well with Mie scattering theory, which explains the dependence on both variables. Furthermore, using two parameters based on the transmittance at three specific frequencies, it was possible to distinguish between the different minerals.
After further development, this technique could represent the working principle of an on-line mineral discrimination system, if the appropriate machinery to produce powders reliably and swiftly is incorporated. Furthermore, this could be achieved with only three monochromatic terahertz sources and detectors, which are more economical and less sensitive to the operating conditions than a full THz-TDS system. The result could be used as feedback for automated mining machinery in order to optimize the coal extraction process without the intervention of personnel in the mines. This technology, when further developed, has the potential to reduce the risks that mining personnel are currently subjected to when operating machinery in the mine, since the inspection of the mineral by a worker present in the mine would be avoided.
4.1 Sample selection and preparation
We selected Anthracite, Fat coal and Lignite as coal samples for this study since they cover broadly different qualities of coal. Five types of rock common in coal-bearing formations were also selected for the experiment: mudstone, carbonaceous mudstone, siltstone, conglomerate and limestone. Mudstone is a sedimentary rock of complex composition, mainly composed of clay minerals (hydrous silicates containing aluminum and magnesium), followed by clastic minerals, epigenetic minerals, ferromanganese and organic matter. Carbonaceous mudstone is also a sedimentary rock, with an organic carbon content of 6%-40%; its main component is clay minerals, followed by quartz, muscovite and a small amount of feldspar. Limestone is a carbonate rock with calcite as its main component. Siltstone is mainly composed of quartz. Finally, conglomerate is a heterogeneous metamorphic rock.
In order to obtain the refractive indices of the selected rock and coal samples, flat slabs of each material were cut using a power saw. The faces were subsequently polished using sandpaper in order to obtain flat and parallel faces. The final thickness of each slab was measured using a digital vernier caliper.
Powders of each mineral were prepared with an electric grinder and an agate mortar and pestle. The particulates were then separated by eleven sieves with different hole sizes. The size-graded particles were dried at 120$^\circ$C for $\sim$25 min in a furnace. The particle sizes of all eleven size intervals were measured with a LEICA microscope. In order to quantify the particle size range, particles chosen randomly from micrographs of each mineral and sieve, such as the ones shown in Fig. 5, were measured. The interval reported is the average plus or minus one standard deviation of the sizes found for each sieve. As the shape of the particles is irregular, the particle size is determined by the longest line on the two-dimensional image. The resulting particle size intervals are shown explicitly in the different panels of Fig. 4.
Fig. 5. Micrographs of the powders produced; the bottom of each image shows a ruler with a spacing of 1 mm between marks. The samples shown are Siltstone (a) sieve 1 (691 $\mu$m-1204 $\mu$m) and (b) sieve 5 (222 $\mu$m-512 $\mu$m), as well as Fat coal (c) sieve 1 (691 $\mu$m-1204 $\mu$m) and (d) sieve 5 (222 $\mu$m-512 $\mu$m).
The powders were placed in a sealed "cuvette" with polyethylene windows, which are highly transparent across the entire terahertz range, separated by a 3.6 mm-thick polymethyl methacrylate spacer with a 1 cm hole which formed the powder cavity.
4.2 Terahertz time-domain spectroscopy
The measurements were made using a terahertz time-domain spectrometer based on a 1550 nm $\sim$60 fs Er:fiber mode-locked laser. The pulses were split into two: one part was sent through a variable delay line and guided by an optical fiber onto a photoconductive emitter, while the other was guided to a photoconductive detector. The terahertz transients produced in the emitter were collimated and focused onto the samples by high-density polyethylene lenses. Subsequently, the transmitted terahertz radiation was collected and focused onto the detector, also using high-density polyethylene lenses. Further details on the terahertz time-domain technique and setup can be found in [36,37].
The measurements were taken in a nitrogen atmosphere in order to prevent atmospheric water vapour absorption. The refractive indices were obtained according to the procedure described in [38]. The transmittance of the powder samples is given by
(6)$$T(f)=\frac{\tilde{E}_\textrm{sample}}{\tilde{E}_\textrm{reference}},$$
where $\tilde {E}_\textrm {sample}$ and $\tilde {E}_\textrm {reference}$ are the Fourier transforms of the waveforms acquired for each sample and of a reference recorded in the absence of a sample. Each powder sample was measured at three different positions, three times at each position, and references were acquired in between positions in order to account for possible slow drifts of the spectrometer. In total, 1056 spectra were acquired for the various powder samples and their respective references.
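A minimal sketch of how Eq. (6) can be evaluated from equidistantly sampled time-domain traces (the array names are illustrative; this is not the authors' processing code):

```python
import numpy as np

def transmittance_spectrum(t, E_sample, E_reference):
    """|FFT(E_sample)/FFT(E_reference)| on the positive-frequency axis (Eq. 6)."""
    dt = t[1] - t[0]                      # equidistant sampling assumed
    freqs = np.fft.rfftfreq(len(t), dt)
    T = np.abs(np.fft.rfft(E_sample) / np.fft.rfft(E_reference))
    return freqs, T
```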
Alexander von Humboldt-Stiftung; China Scholarship Council (201806420056); National Key Research and Development Program of China (2017YFC080440).
The authors would like to thank the financial support of National Key R&D Program of China (grant number 2017YFC080440), the China Scholarship Council (CSC 201806420056) and the Alexander von Humboldt Foundation through an Experienced Research Fellowship. Additionally we want to thank the China Coal Science and Technology Museum and Dr. Weining Xie from China University of Mining and Technology for supplying the rock and coal samples.
1. P. E. Brockway, A. Owen, L. I. Brand-Correa, and L. Hardt, "Estimation of global final-stage energy-return-on-investment for fossil fuels with comparison to renewable energy sources," Nat. Energy 4(7), 612–621 (2019). [CrossRef]
2. C. Zou, Q. Zhao, G. Zhang, and B. Xiong, "Energy revolution: From a fossil energy era to a new energy era," Nat. Gas Ind. B 3(1), 1–11 (2016). [CrossRef]
3. J.-G. Li and K. Zhan, "Intelligent mining technology for an underground metal mine based on unmanned equipment," Engineering 4(3), 381–391 (2018). [CrossRef]
4. S. Hao, S. Wang, R. Malekian, B. Zhang, W. Liu, and Z. Li, "A geometry surveying model and instrument of a scraper conveyor in unmanned longwall mining faces," IEEE Access 5, 4095–4103 (2017). [CrossRef]
5. C. Liu, J. Jiang, Z. Zhou, and S. Ye, "Unmanned working face remote monitoring system based on b/s architecture," in 2018 5th International Conference on Information Science and Control Engineering (ICISCE), (IEEE, 2018), pp. 597–601.
Familiarity effects in EEG-based emotion recognition
Nattapong Thammasan1,
Koichi Moriyama2,
Ken-ichi Fukui1 and
Masayuki Numao1
Although emotion detection using electroencephalogram (EEG) data has become a highly active area of research over the last decades, little attention has been paid to stimulus familiarity, a crucial subjectivity issue. Using both our experimental data and a well-established database (the DEAP dataset), we investigated the effects of familiarity on brain activity based on EEG signals. To focus on familiarity, our experiment had subjects select equal numbers of familiar and unfamiliar songs; both datasets relied on self-reported emotion annotation, on the assumption that the emotional state experienced while listening to music is subjective. We found evidence that music familiarity influences both the power spectra of brainwaves and brain functional connectivity to a certain level. In an additional emotion recognition experiment that took music familiarity into account, our empirical results suggested that using only songs with low familiarity levels can enhance the performance of EEG-based emotion classification systems that adopt fractal dimension or power spectral density features and a support vector machine, multilayer perceptron, or C4.5 classifier. This suggests that unfamiliar songs are most appropriate for the construction of an emotion recognition system.
Keywords: Electroencephalogram, Music-emotion
1 Introduction
Owing to its high temporal resolution and low cost, electroencephalography (EEG) has been used extensively in recent attempts to detect emotional states. The correlations between EEG and emotion reported in numerous studies [1, 2], combined with computational modeling [3], make it possible to estimate emotional states automatically. The use of musical excerpts as stimuli is considered a promising approach because music is understood to be capable of strongly eliciting various emotions [4]. However, very little is currently known about the subjective characteristics of human music perception.
Music experience can be influenced by cultural background, age, gender, training, and familiarity with the music [5]. Specifically, as listening to familiar music involves expectation and prediction based on prior knowledge of the musical excerpts, a listener's memory might play a crucial role in musical perception and can affect the emotional reaction. Recent studies have used various measuring tools to determine the relationship between music familiarity and physiological signals. An fMRI study revealed that a feeling of familiarity with music or odors induced activation in the deep left hemisphere, while a feeling of unfamiliarity induced activation in the right hemisphere [6]. The researchers concluded that it is possible to trigger neural processes specific to the feeling of familiarity regardless of the type of triggering stimuli, via processes that are likely related to the semantic memory system. Another fMRI study reported the role of familiarity in the brain correlates of music appreciation and suggested that music familiarity is related to limbic, paralimbic, and reward circuitries [7]. Evidence from electrodermal activity studies demonstrates that the levels of expectation and predictability caused by familiarity play an important role in the experience of emotional arousal in response to music [8]. In another study, melody familiarity was found to be correlated with event-related potentials observed along the frontocentral scalp, with more familiar melodies producing more negative potentials [9]. The researchers suggested that the feeling of familiarity could be involved in a processing mechanism at the conceptual level. To the best of our knowledge, however, the effect of music familiarity on brainwave patterns has not yet been fully explored. Although the past decade has seen a growing interest in the automatic detection of emotion using EEG, such studies have overlooked music familiarity effects; if music familiarity actually has an effect on brain signals, ignoring familiarity would degrade EEG-based emotion recognition.
In this study, we present the first attempt to investigate the neural correlates of music familiarity by focusing on the differences among brain responses engendered by music samples of varying levels of familiarity. We constructed a model to classify emotional responses to musical material in a manner similar to conventional approaches, but taking familiarity into account. We used two different datasets: one constructed from our experimental work, and one extracted from the database for emotion analysis using physiological signals (DEAP) [10], an existing affective EEG database that has been used extensively in affective computing research in recent years. The experiments that produced both datasets relied on self-annotation of emotion, based on the assumption that the emotions incurred when experiencing music are subjective.
Importantly, the emotion produced when experiencing musical stimuli can change over time, especially when listening to long-duration music. A previous EEG study found that cortical activity varies over time during prolonged music exposure [2]. Consequently, recent research has emphasized the importance of taking into account the time-varying characteristics of emotion [11] and of performing emotion recognition in a continuous paradigm [12]. In this study, we addressed continuous emotion recognition by applying temporal segmentation to both datasets and by employing temporally continuous emotion annotation in our experiment.
Human emotion can be systematically described by mapping it into a two-dimensional arousal-valence emotion space, in which valence, the horizontal axis, indicates the positivity of the emotion, and arousal, the vertical axis, indicates its activation level. This emotion model was originally proposed by Russell [13] and is still frequently used in affective computing research, as it has been found to be a simple but highly effective model [3, 5].
2 Experimental data
2.1 Our dataset
2.1.1 Experimental protocol
We recruited a homogeneous population of 15 healthy subjects between 22 and 30 years of age (mean = 25.52, SD = 2.14). All subjects were students of Osaka University and had minimal formal musical education; informed consent was obtained from every subject included in the experiment. Each subject was requested to select 16 musical excerpts from a 40-song MIDI library and to indicate their familiarity with each selected song on a scale ranging from 1 to 6, corresponding to lowest and highest familiarity, respectively. The subjects were instructed to select eight songs with which they felt familiar (i.e., familiarity ratings of 4–6) and eight unfamiliar songs (familiarity ratings of 1–3). To facilitate familiarity judging, our data collection software provided a function to play short (<10 s) samples of the songs to the subjects.
To reduce the cognitive load caused by emotion reporting, separate annotation sessions were conducted after the music listening/EEG recording sessions. In the first listening phase, the selected songs were presented as synthesized sounds using the Java Sound API's MIDI package (footnote 1), with four of the selected familiar songs played first, followed by four of the unfamiliar songs, then the other four familiar songs, and finally the remaining unfamiliar songs. Each song was played for approximately 2 min, and a 16 s silent rest was inserted between musical excerpts to reduce any influence of the previous song.
After listening to the 16 songs and taking a short rest, each subject proceeded to the second phase, an emotion annotation session without EEG recording. Under the assumption that emotional responses can change over the course of a music listening session, each subject was instructed to describe his/her emotional reactions to the selected songs, presented in the same order as in the previous phase, using our developed software. Each subject described his/her changing emotions by continuously clicking on the corresponding point in an arousal-valence emotion space shown on a monitor screen. To facilitate reporting, a brief guideline to the emotion space was also provided throughout the annotation session. Arousal and valence were recorded independently as numerical values ranging from –1 to 1. After providing the emotion annotation for each song, each subject was asked to confirm or change his/her familiarity rating for the song and to indicate how confident, on a discrete scale ranging from 1 to 3, he/she was of the correspondence between the annotated emotions and the emotions perceived during the first listening phase.
2.1.2 EEG recording and preprocessing
In this experiment, a Waveguard EEG cap (footnote 2), placed in accordance with the 10–20 international system and referenced to the Cz electrode, was used to record EEG signals at a sampling frequency of 250 Hz. Twelve electrodes (Fp1, Fp2, F3, F4, F7, F8, Fz, C3, C4, T3, T4, and Pz), located near the frontal lobe, which is believed to play a crucial role in emotion regulation [14], were selected for analysis. The impedance of each electrode was kept below 20 k\(\Omega\) throughout the experiment. A notch filter, a band-stop filter with a narrow stopband, was used to remove the 60 Hz power-line noise. To minimize unrelated artifacts throughout EEG recording, each subject was instructed to close his/her eyes and to limit body movement. The EEG signals were amplified using a Polymate AP1532 amplifier (footnote 3) and visualized with the APMonitor software (footnote 4) prior to filtering with a 0.5–60 Hz bandpass filter. We employed the EEGLAB toolbox [15] to remove major artifacts caused by unintentional body movement and then used the independent component analysis (ICA) functionality of the toolbox to remove eye-movement artifacts.
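As an illustration of this preprocessing chain, the following is a minimal Python sketch, assuming the raw recording is a NumPy array of shape (n_channels, n_samples) at 250 Hz; the filter orders and the notch quality factor are illustrative assumptions, not values reported here.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 250.0  # sampling frequency (Hz)

def preprocess(eeg):
    """eeg: array of shape (n_channels, n_samples)."""
    # 60 Hz notch filter to suppress power-line noise (Q = 30 is an assumption)
    b_n, a_n = iirnotch(w0=60.0, Q=30.0, fs=FS)
    eeg = filtfilt(b_n, a_n, eeg, axis=-1)
    # 0.5-60 Hz zero-phase Butterworth bandpass (order 4 is an assumption)
    b_bp, a_bp = butter(4, [0.5, 60.0], btype="bandpass", fs=FS)
    return filtfilt(b_bp, a_bp, eeg, axis=-1)
```

Eye-movement artifacts would then be removed by ICA, as done with EEGLAB in the text (mne.preprocessing.ICA is a comparable Python option).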
2.2 DEAP dataset
The DEAP dataset contains EEG and peripheral physiological signals recorded from 32 subjects as they watched 40 selected 1 min excerpts of music videos [10]. In the data collection process, the 40 videos were presented in 40 trials, each comprising 2 s of progress display, 5 s of baseline recording, and 1 min of music video watching followed by self-emotion annotation. To self-assess emotional level, each subject rated the arousal, valence, dominance, and like/dislike of each music video excerpt on a continuous scale ranging from 1 (low) to 9 (high), and rated familiarity with the music on a discrete scale ranging from 1 ("never heard it before the experiment") to 5 ("knew the song very well"). EEG signals acquired via 32 electrodes were downsampled to 128 Hz, and eye-movement artifacts detected via electrooculography (EOG) were removed. A bandpass filter was applied to extract signals in a frequency range of 4–45 Hz.
3 Investigation of EEG correlates of familiarity
One aim of this study was to investigate the EEG correlates underlying feelings of familiarity and unfamiliarity with musical stimuli. As it remains unclear whether music familiarity has any detectable association with EEG signals, we performed two different types of analysis on both our dataset and the DEAP dataset. The first looked for familiarity signatures at each individual EEG electrode, while the second examined the links between pairs of electrodes.
3.1 Data acquisition
To maximize differences in familiarity and minimize any label ambiguities resulting from the subjective familiarity scores, only the data from the listening session with the most (i.e., familiarity level 6) and least (i.e., familiarity level 1) familiar samples in our dataset were used to perform the analysis. Consequently, we ignored data from subjects 8 and 13, as there was no indication as to which sample had the highest familiarity in their reports. Additionally, we disregarded data from subjects 1 and 3 owing to their reported drowsiness during EEG recording. As subject 12 misunderstood the instruction for familiarity judging, this subject's data were also discarded.
In the DEAP dataset, familiarity ratings were missing for three subjects, namely subjects 2, 15, and 23. As familiarity was not the main focus in the DEAP experiment and the music videos were selected by the experimenters, the number of music videos with a given level of familiarity differed by subject. In particular, the incidence of reported low familiarity was higher than that of high familiarity. To better balance low and high familiarity sessions, we defined scores 1–2 as low familiarity and 3–5 as high familiarity. However, as imbalance still remained in the data procured from some of the subjects, we also disregarded data from subjects whose high/low familiarity report ratios were less than 0.30. As a result, the data from subjects 4, 5, 25, and 27 were discarded.
3.2 Single-electrode-level power spectral density analysis
To investigate the EEG correlates of music familiarity, the power spectral density (PSD) approach, which is based on the fast Fourier transform (FFT), was adopted to obtain the characteristics of brain signals in the frequency domain. In our dataset, the averaged PSDs over the delta (1–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–40 Hz) frequency bands were extracted from the signals of all 12 electrodes using the MATLAB Signal Processing Toolbox (footnote 5). To obtain more data for analysis, we applied a non-overlapping sliding window segmentation technique in which the window size was defined as 1000 samples, equivalent to a 4 s window length (this length corresponds to previous emotion classification work, as described in the following section).
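The band-power extraction just described could be sketched as follows, assuming a 1-D signal per electrode; the use of Welch's periodogram (and its nperseg value) is an assumption standing in for the MATLAB toolbox routines mentioned in the text.

```python
import numpy as np
from scipy.signal import welch

FS = 250
WIN = 1000  # 4 s at 250 Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 40)}

def band_psd_features(signal):
    """Return one dict of mean band PSDs per non-overlapping 4 s window."""
    features = []
    for w in range(len(signal) // WIN):
        seg = signal[w * WIN:(w + 1) * WIN]
        freqs, psd = welch(seg, fs=FS, nperseg=256)
        features.append({band: psd[(freqs >= lo) & (freqs < hi)].mean()
                         for band, (lo, hi) in BANDS.items()})
    return features
```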
Similarly, we decomposed the brain signals in the DEAP dataset into four distinct frequency bands using the PSD approach and extracted the theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–40 Hz) bandwaves. It should be noted that, as the preprocessed EEG signals of the DEAP dataset had already been filtered between 4 and 45 Hz, we could not extract the PSD in the delta band. The non-overlapping sliding window technique was also applied, with the window size defined as 512 samples, equivalent to a 4 s window length. However, we found that the PSDs of the signals extracted from some electrodes were abnormally high in some subjects; we therefore regarded any PSD above 100 \(\mu V^2/Hz\) as bad-channel PSD, as the corresponding signals might have been contaminated by unrelated noise. More than 25 % of the signals obtained from each of four subjects, namely subjects 9, 11, 22, and 24, were found to be bad-channel PSD; we discarded all data from these subjects and performed the analysis using only the data from the other 21 subjects.
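One possible reading of this screening rule is sketched below, assuming a per-subject PSD array of shape (n_windows, n_channels); the exact definition of the rejected fraction is not spelled out in the text, so this counting is an assumption.

```python
import numpy as np

def subject_is_valid(psd, threshold=100.0, max_bad_fraction=0.25):
    """psd: array (n_windows, n_channels) of band-averaged PSDs in uV^2/Hz."""
    bad_windows = (psd > threshold).any(axis=1)   # windows with any bad channel
    return bad_windows.mean() <= max_bad_fraction
```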
3.2.1 Statistical analysis
To determine how the PSDs of the various bands were affected by music familiarity (high and low) and subject individuality, two-way analysis of variance (ANOVA) with replication was performed. For each frequency band and electrode, we collected multiple PSDs from all subjects and divided them into two groups: low and high familiarity. Replication, i.e., multiple observations, involved obtaining multiple PSDs from each subject. Because diversity in each subject's song selection and familiarity labeling produced differences in the number of acquired PSDs, it was necessary to unify the number of replications across subjects. Hence, we defined the number of replications as the minimum size of the available dataset across subjects and familiarity levels, and we aggregated data from each subject by randomly selecting available data up to the replication number. Two-way ANOVA was then performed using the MATLAB Statistics and Machine Learning Toolbox (footnote 6) to test the hypotheses that the main effects of familiarity and subjectivity were significant. Post-hoc comparisons were performed using the Tukey test. In testing the DEAP dataset, if a particular subject's electrode produced bad-channel PSD in any frequency band, all PSD data obtained from that electrode were removed before performing ANOVA.
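A minimal sketch of this balanced two-way ANOVA, assuming the per-window PSDs sit in a pandas DataFrame with columns 'psd', 'familiarity' ('low'/'high'), and 'subject'; the statsmodels formula interface is used here in place of the MATLAB toolbox, and the subsampling seed is arbitrary.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def two_way_anova(df, n_rep, seed=0):
    # Unify the replication count by randomly subsampling n_rep windows
    # per (subject, familiarity) cell, as described in the text.
    balanced = (df.groupby(["subject", "familiarity"], group_keys=False)
                  .apply(lambda g: g.sample(n=n_rep, random_state=seed)))
    model = ols("psd ~ C(familiarity) * C(subject)", data=balanced).fit()
    return sm.stats.anova_lm(model, typ=2)  # p-values for both main effects
```

Post-hoc Tukey comparisons could then be run with statsmodels.stats.multicomp.pairwise_tukeyhsd on the same balanced frame.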
3.2.2 Results
We performed ANOVA on our dataset to explore whether there was any significant PSD difference (p < 0.05) owing to familiarity. The results showed that inter-subject variability had the dominant main effect on variations in PSD values. Nevertheless, familiarity also had a statistically significant effect on PSD values, particularly in certain frequency bands at certain electrodes, as shown in Table 1. To investigate further, we calculated the average of the power spectra across subjects under high and low music familiarity and topologically plotted the variation in averages (familiarity–unfamiliarity) on a scalp map, as shown in Fig. 1. On this map, positive areas represent locations where familiar songs evoked higher averaged power spectra across subjects than did unfamiliar songs. Similarly, we performed ANOVA at the significance level p < 0.0001 on the DEAP dataset. Again, we found significant variation in PSD values owing to familiarity, as shown in Table 2. The variation in the averaged PSD (familiarity–unfamiliarity) calculated from the DEAP dataset is illustrated in Fig. 2. In the DEAP dataset, the PSD variation owing to familiarity was most prominent in the higher frequency bands.
Table 1 Significance values p (p < 0.05) from our dataset for the differences between familiar and unfamiliar songs across subjects under single-electrode PSD analysis, tabulated per electrode for the \(\delta\), \(\theta\), \(\alpha\), \(\beta\), and \(\gamma\) bands; bold entries indicate that PSDs recorded while listening to music with high familiarity are higher than those recorded while listening to music with low familiarity. [Per-electrode table body omitted.]
Table 2 Significance values p (p < 0.0001) from the DEAP dataset for the differences between familiar and unfamiliar music videos across subjects under single-electrode analysis, tabulated per electrode and frequency band; bold entries indicate that the PSD resulting from watching music videos with high familiarity is higher than that resulting from watching music videos with low familiarity. [Table body omitted; surviving cells include \(p = 4.98\times10^{-5}\) and \(p = 4.36\times10^{-10}\).]
Fig. 1 A topological plot of the variation of average PSD values across subjects produced by songs with high and low music familiarity (familiarity power–unfamiliarity power) from our dataset; positive areas represent regions in which high familiarity produces higher power than low familiarity, while negative areas depict where unfamiliarity produces higher power
Fig. 2 A topological plot of the variation of average PSD values across subjects exposed to music videos with high and low music familiarity (familiarity power–unfamiliarity power) from the DEAP dataset; positive areas represent regions in which high familiarity produces higher power than low familiarity, while negative areas depict where unfamiliarity produces higher power
It was previously found that listening to unfamiliar songs involves recollection, the cognitive ability to recall a former context associated with a musical excerpt by utilizing episodic memory [16]. We hypothesized that subjects in our experiment might recollect past experience from episodic memory to identify a novel song. Previous research [17] showing relatively higher gamma power over the parietal scalp during recollection (as opposed to during the experience of familiarity) is consistent with our results, which showed a marginally higher gamma-PSD at the Pz electrode while listening to an unfamiliar song. In addition, Hsieh and Ranganath [18] reported the involvement of frontal midline \(\theta\) in working and episodic memory, and the associated memories could be relevant to unfamiliar song listening. However, subjects in the DEAP experiments produced higher gamma and frontal midline theta power while watching familiar music videos; we suspect that the underlying reason is that those subjects used memory to a greater extent to anticipate the next scene of a music video, because they might occasionally have watched the music video versions of songs they regularly listened to. Unlike in our dataset, subjects in the DEAP experiment who watched a particular music video for the first time, or who had minimal experience with the video, would be engaged intensely enough in watching it that they did not use recollection memory to associate the music with previous experiences. This evidence indicates that familiarity with video scenes had a greater influence on brain activity than familiarity with the music used as background sound in the music video.
Moreover, the increase in Fz theta power in our results corresponds with previous reports of enhancement of the frontal midline theta rhythm (Fm\(\theta\)) during focused attention [19]. A likely underlying reason is that song unfamiliarity induced our subjects to listen more attentively in order to annotate their emotions successfully in the following phase.
3.3 Functional connectivity analysis
As most brain functions have been shown to involve multiple brain sites rather than a single specific site, EEG-based analysis of brain activity at the level of interrelations between electrode pairs can offer deeper insight into the association between brain activity and music familiarity. In addition to the above-described analysis at the single-electrode level, we therefore investigated brain functional connectivity in association with music familiarity. To perform the analysis in specific EEG frequency bands, we applied a fifth-order Butterworth bandpass filter to extract EEG signals in the delta, theta, alpha, beta, and gamma frequency bands from our dataset, and in the theta, alpha, beta, and gamma frequency bands from the DEAP dataset. As in the single-electrode-level analysis, we analyzed only the valid data from the 10 subjects in our dataset and from the 21 subjects in the DEAP dataset. We then calculated connectivity indices from all pairs of electrodes independently in each frequency band using the three following approaches, which have been commonly employed in studies of EEG correlates, including studies of the neural correlates of emotion [20]. These three connectivity indices have been demonstrated to be sensitive to different characteristics of EEG signals.
Correlation corresponds to the relationship between two signals from different brain sites. Given signals x and y, the correlation at each frequency (f) is a function of cross-covariance \(C^f_{xy}\) and auto-covariances, \(C^f_{xx}\) and \(C^f_{yy}\), of x and y:
$$R_{xy}(f) = \frac{C^f_{xy}}{\sqrt{C^f_{xx}C^f_{yy}}}.$$
Coherence is similar to correlation in that it also captures the covariation between two signals as a function of frequency. This index indicates how closely two brain sites work together in a specific frequency band. Given signals x and y, coherence is a function of the respective power spectral densities, \(P_{xx}(f)\) and \(P_{yy}(f)\), of x and y, and of the cross-PSD, \(P_{xy}(f)\), of x and y:
$$Coh_{xy}(f) = \frac{|P_{xy}(f)|^2}{P_{xx}(f)P_{yy}(f)}.$$
Phase synchronization index (PSI) is a non-linear measure of connectivity. The PSI among brain regions indicates connectivity in terms of the phase difference between two signals. PSI can be restricted to certain frequency bands reflecting specific brain rhythms. For two signals x and y with data length L, the PSI is defined as
$$PSI_{xy}=\left|\frac{1}{L}\sum_{t=1}^{L}e^{i[\phi_x(t)-\phi_y(t)]}\right|,$$
where \(\phi_x(t) = \arctan\left(\tilde{x}(t)/x(t)\right)\) is the Hilbert phase of signal x, \(\phi_y(t)\) is the corresponding phase of signal y, and \(\tilde{x}(t)\) is the Hilbert transform of x(t).
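Two of these indices could be computed as in the minimal sketch below, assuming band-filtered 1-D NumPy signals x and y; SciPy's Welch-based coherence estimator (with an assumed nperseg) stands in for whatever estimator was actually used, and the correlation index could be obtained analogously from the cross- and auto-spectra.

```python
import numpy as np
from scipy.signal import coherence, hilbert

def band_coherence(x, y, fs, lo, hi):
    """Mean magnitude-squared coherence of x and y within [lo, hi) Hz."""
    f, cxy = coherence(x, y, fs=fs, nperseg=256)
    return cxy[(f >= lo) & (f < hi)].mean()

def phase_sync_index(x, y):
    """PSI of two band-filtered signals, per the Hilbert-phase definition above."""
    phi_x = np.angle(hilbert(x))   # instantaneous phase of x
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))
```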
The results of the single-electrode-level analysis showed that inter-subject variability affected the brainwaves to a much greater degree than music familiarity did. Unlike the single-electrode-level analysis, in which we retrieved multiple data points per subject for statistical analysis, in the multiple-electrode analysis we calculated a single functional connectivity index per subject to represent overall brain connectivity for each electrode pair in each frequency band. In other words, a single connectivity index was derived from the EEG signals produced for each subject-song pair. The connectivity indices were then separated into two groups in accordance with music familiarity (low and high), and a unified index was calculated to represent the overall index for all subject-song pairs in each familiarity group. Because coherence and PSI range from 0 to 1 while correlation ranges from –1 to 1, we calculated the arithmetic mean to derive the overall coherence and PSI, and the quadratic mean to derive the overall correlation across songs. We then performed a paired t-test using the MATLAB Statistics and Machine Learning Toolbox to discover any statistically significant difference in EEG functional connectivity associated with music familiarity across subjects.
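The aggregation and test could look like the following sketch, where the array shapes are assumptions: one unified index per subject for each familiarity group of a given electrode pair and band.

```python
import numpy as np
from scipy.stats import ttest_rel

def quadratic_mean(values):
    """Root-mean-square, used to pool correlation indices in [-1, 1]."""
    return np.sqrt(np.mean(np.square(values)))

def familiarity_difference(low, high, alpha=0.05):
    """low, high: arrays of shape (n_subjects,) of unified connectivity indices."""
    t_stat, p_value = ttest_rel(high, low)   # paired t-test across subjects
    return t_stat, p_value, p_value < alpha
```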
The significant variations in functional connectivity were mapped to a scalp map, as illustrated in Figs. 3 and 4. From our dataset, we discovered an increase in connectivity, especially in the higher frequency bands, when subjects listened to unfamiliar songs. Burgess and Ali [17] reported greater functional connectivity in the gamma band during an experience of recollection than during an experience of familiarity. Our results agree with this study, as we found higher connectivity resulting from unfamiliar songs, especially in the gamma frequency range. Imperatori et al. [21] found higher delta and gamma band connectivity during the performance of autobiographical memory tasks. In light of our hypothesis regarding episodic memory use during unfamiliar song listening, our results are consistent with their findings. Additionally, we found an increase in connectivity in the DEAP dataset, especially in the higher frequency bands, when the subjects watched familiar music video excerpts. This phenomenon is probably related to cognitive recollection, and the hypothesized use of episodic memory to anticipate the next video scene might be the underlying cause.
Fig. 3 Functional connectivity with significant differences (p < 0.05) owing to music familiarity from our dataset; lines indicate significantly higher (solid) and lower (dashed) connectivity indices resulting from listening to unfamiliar songs as compared to listening to familiar songs
Fig. 4 Functional connectivity with significant differences (p < 0.05) owing to music familiarity from the DEAP dataset; lines indicate significantly higher (solid) and lower (dashed) connectivity indices when listening to unfamiliar songs as compared to listening to familiar songs
Interestingly, the correspondence between single-electrode-level analysis and functional connectivity analysis might confirm that music familiarity elicits detectable changes in brain activities that probably relate to memory recollection.
4 Familiarity effects in emotion recognition systems
In the previous section, we demonstrated that music familiarity affects EEG signals at both the single-electrode level and the functional connectivity level. In this section, we present the results of an EEG-based emotion recognition assessment that takes music familiarity into account. To this end, we separated the EEG signals into two groups in accordance with familiarity level (low and high). In our dataset, we separated the data into a high-familiarity group (familiarity scores 4–6) and a low-familiarity group (scores 1–3). For the DEAP dataset, we used the same separation approach as in the previous section. Features were then extracted separately from the EEG signals of each data group and used to train emotion recognition models. For comparison with the traditional approach that overlooks the familiarity effect, we also trained a model on features extracted from all data (i.e., the original data before separation).
4.1 Feature extraction
The fractal dimension (FD) value reveals the complexity of a time-varying EEG signal and has recently been used in affective computing research, including studies of EEG-based emotional state estimation [22]. A higher FD value for an EEG signal reflects higher activity in the brain [23]. The FD approach is appealing because of its simplicity and its ability to reveal characteristics that can properly indicate a variety of brain states. In this study, we derived the FD value using the Higuchi algorithm [24].
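A compact NumPy version of Higuchi's algorithm [24] is sketched below for a 1-D signal window; the choice of k_max is a tunable parameter (a common choice is 8–16) that is not specified in the text.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D signal (between 1 and 2 for EEG)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks, Ls = [], []
    for k in range(1, k_max + 1):
        curve_lengths = []
        for m in range(k):                       # start offsets 0..k-1
            idx = np.arange(m, n, k)             # subsampled series
            if len(idx) < 2:
                continue
            # normalized curve length for offset m and interval k
            length = (np.abs(np.diff(x[idx])).sum() * (n - 1)
                      / ((len(idx) - 1) * k))
            curve_lengths.append(length / k)
        ks.append(k)
        Ls.append(np.mean(curve_lengths))
    # FD is the slope of log(L(k)) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(Ls), 1)
    return slope
```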
We also extracted PSD data to characterize EEG signals in the frequency domain, which has become a common practice in the estimation of emotional states [3]. We used the same PSD ranges as those used in the previous section as features for emotion classification model training.
A review of the literature on the DEAP dataset reported that the best emotion classification results were obtained using a sliding window size of 3 s for arousal classification and 6 s for valence classification in the feature extraction process [25]. For the sake of simplicity, in this work we applied a 4 s sliding window without overlap between consecutive windows for both arousal and valence classification, in order to retrieve more data points from each song/video. Using timestamps, we labeled each instance with the associated ground-truth emotion. In our dataset, we used a majority approach to determine the emotional label for any window containing variation in the emotion annotation. In the DEAP dataset, the multiple features extracted from each video were labeled with the single emotion reported by each subject.
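The majority-vote labeling could be sketched as below, assuming the continuous annotation stream has been resampled to one rating per EEG sample; the sign-based vote and the tie-breaking rule are illustrative assumptions.

```python
import numpy as np

def window_labels(annotation, fs=250, win_sec=4):
    """annotation: 1-D array of continuous ratings in [-1, 1], one per sample."""
    win = fs * win_sec
    labels = []
    for start in range(0, len(annotation) - win + 1, win):
        seg = annotation[start:start + win]
        # majority vote within the window: positive vs non-positive samples
        labels.append(1 if (seg > 0).sum() > len(seg) / 2 else 0)
    return np.array(labels)
```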
The asymmetries of features in spatially symmetric electrode pairs were taken into account in this study, as such hemispheric asymmetries have been shown to be informative for classifying emotions in previous research [10, 22, 26]. An additional differential asymmetry feature was calculated by subtracting a feature extracted from a right-hemisphere electrode's signal from the same feature extracted from the signal of the symmetric electrode in the left hemisphere. We obtained such additional features from five symmetric electrode pairs in our dataset and from 14 symmetric electrode pairs in the DEAP dataset. In total, 17 FD and 85 PSD features were extracted from our dataset, while 46 FD and 184 PSD features were extracted from the DEAP dataset.
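For our 12-electrode montage, the five symmetric pairs follow directly from the electrode list (Fz and Pz have no mirror); a minimal sketch of the differential features:

```python
# The five left-right pairs implied by the 12-electrode montage above.
SYMMETRIC_PAIRS = [("Fp1", "Fp2"), ("F3", "F4"), ("F7", "F8"),
                   ("C3", "C4"), ("T3", "T4")]

def asymmetry_features(feat):
    """feat: dict mapping electrode name -> feature value (e.g., FD or band PSD)."""
    return {f"{l}-{r}": feat[l] - feat[r] for l, r in SYMMETRIC_PAIRS}
```

With 12 per-electrode values plus 5 asymmetry differences, this reproduces the 17 FD features (and, over 5 bands, the 85 PSD features) stated above.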
4.2 Emotion classification
Emotion recognition was converted into binary classification by separating arousal into high and low classes and valence into positive and negative classes. Each class in our dataset was determined by the sign of the arousal and valence ratings. In the DEAP dataset, instances were placed in the high arousal class when the arousal rating was higher than 4.5 and in the low arousal class otherwise. Similarly, data with a valence rating above 4.5 were placed in the positive valence class, and the remaining data points were placed in the negative valence class.
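The thresholding just described amounts to the following one-line rules (a trivial sketch; function names are illustrative):

```python
def our_class(rating):     # our dataset: rating in [-1, 1]
    return "high" if rating > 0 else "low"

def deap_class(rating):    # DEAP: rating in [1, 9], thresholded at 4.5
    return "high" if rating > 4.5 else "low"
```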
We used the WEKA [27] library to apply three commonly used algorithms to classify the emotional classes: a support vector machine (SVM) with the Pearson VII function-based universal kernel (PUK), a multilayer perceptron (MLP) with one hidden layer, and C4.5. The overall performance of emotion recognition within each subject was evaluated using 10-fold cross-validation. Because we relied on self-annotation by the subjects, class imbalance in the datasets could mislead the interpretation of the results; accordingly, we defined a baseline, the chance level, as the percentage of data points in the majority class. For instance, a dataset from a subject comprising 60 % positive and 40 % negative arousal samples would have a chance level of 60 %. In each subject's data group, the classification results were compared to the chance level in order to evaluate the performance of emotion recognition relative to majority-voting classification.
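The evaluation scheme could be reproduced roughly as follows; scikit-learn's SVC with an RBF kernel stands in for WEKA's SVM with the PUK kernel (which scikit-learn does not provide), and MLPClassifier/DecisionTreeClassifier would be the analogous substitutes for MLP and C4.5.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def accuracy_above_chance(X, y):
    """X: (n_windows, n_features); y: integer labels in {0, 1}."""
    chance = np.bincount(y).max() / len(y)          # majority-class proportion
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean()
    return acc - chance                              # relative accuracy
```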
4.3 Results of emotion classification
As described in the previous section, data from three subjects were removed from our dataset owing to reports of drowsiness or misunderstanding of the instructions. We then classified the data from the remaining 12 subjects. The averaged confidence level of annotation correspondence across these subjects was 2.4063 (\(SD = 0.6565\)), indicating that the annotated data in our dataset were usable. We also classified the data produced by the remaining 21 subjects in the DEAP dataset.
The classification accuracies above the chance levels averaged over the subjects from our dataset are shown in Fig. 5. In arousal recognition, the degree of classification above the chance level using only data from unfamiliar song sessions was superior to that using the overall dataset, and the data from familiar song sessions achieved the lowest performance. The best results were obtained by classifying FD features with SVM using unfamiliar song data, which achieved 87.80 % (\(SD = 7.73\, \%\)) averaged accuracy against a chance level of 64.86 % (\(SD = 7.04\, \%\)). Similarly, valence recognition using unfamiliar song data provided better results than using familiar song data or the total dataset. Again, classifying FD features using SVM produced the highest relative accuracy: 86.91 % (\(SD = 8.13\, \%\)) averaged absolute accuracy against a chance level of 68.10 % (\(SD = 11.79\, \%\)). However, the results of a statistical t-test indicated that the superiority of using unfamiliar data over other types of data in emotion classification was not statistically significant.
Fig. 5 Arousal and valence classification accuracies above the chance levels for the high familiarity (familiar songs), low familiarity (unfamiliar songs), and combined (all songs) data groups from our dataset
Figure 6 shows the averaged classification accuracies over the chance levels across subjects using the DEAP dataset. Similar to the results obtained using our dataset, classifying arousal and valence using data from unfamiliar music video sessions achieved higher performance than using either the high-familiarity sessions or the overall dataset. In arousal recognition, the best result over the chance level was obtained by classifying PSD features with SVM using data from low-familiarity sessions; this approach achieved 73.30 % (\(SD = 7.71\, \%\)) averaged accuracy across subjects against a chance level of 64.15 % (\(SD = 10.70\, \%\)). In valence recognition, classifying PSD features extracted from EEG signals in low-familiarity sessions with SVM achieved the highest relative performance, with an absolute performance of 72.50 % (\(SD = 6.91\, \%\)) against a chance level of 62.49 % (\(SD = 8.02\, \%\)). Furthermore, a statistical t-test revealed that classifying PSD features with either SVM or MLP using data from low-familiarity music video sessions was significantly better than classifying with the same approach using the overall dataset.
Fig. 6 Arousal and valence classification accuracies above the chance levels for the high familiarity (familiar songs), low familiarity (unfamiliar songs), and combined (all songs) data groups from the DEAP dataset
The superior performance of SVM relative to the other algorithms has also been shown in previous studies [3]. This superiority can be attributed to SVM's better capability for analyzing the non-linear behaviors of the brain.
5 Discussion
Our EEG-correlate evidence reveals that the effects of familiarity are reflected in brain activity as measured by PSD and brain functional connectivity. Consequently, the effectiveness of EEG-based emotion recognition might suffer if the subject's familiarity with the musical stimuli is disregarded. Experiments on both our dataset and the DEAP dataset led to the consistent conclusion that data from sessions using only unfamiliar musical excerpts provide better EEG-based emotion classification than data from familiar musical excerpts or a combination of both. In summary, the empirical results of our emotion recognition study suggest that unfamiliar musical stimuli might be the most appropriate material for evoking emotion in the construction of an emotion recognition system. In addition, experiencing unfamiliar musical stimuli would also eliminate the factors of expectation and predictability that have been reported to influence emotional responses to music [8].
One of the major differences between our dataset and the DEAP dataset is the annotation approach. Our EEG experiments allowed subjects to report emotion continuously in the arousal-valence space; by contrast, the subjects who produced the DEAP dataset could report only one perception for each music video watched. The temporal continuity of emotion reporting in our experiments led to higher granularity in emotion capture than in the DEAP dataset, which could be the underlying reason why emotion recognition using our dataset achieved higher performance relative to the chance level than recognition using the DEAP dataset.
Another difference between the two datasets is the stimuli used. In our dataset, MIDI files were used and subjects were instructed to close their eyes while listening to the music. By contrast, the experiments producing the DEAP dataset used music videos, and the subjects kept their eyes open to watch them. According to our results, the FD approach achieved better emotion classification performance than PSD on our dataset, whereas PSD performed better on the DEAP dataset. The superiority of the PSD over the FD approach in EEG-based emotion recognition was also seen in previous work using music videos [28] and movie clips [29] as stimuli. To the best of our knowledge, although FD features have been found to be successful in emotion recognition when using music as stimuli [22], no previous work has directly compared the music-emotion recognition performance of FD and PSD features. This study therefore provides an initial comparison between the use of FD and PSD features for music-emotion classification. The actual association between stimulus differences and classification results is worthy of systematic investigation in a dedicated study, which we propose to conduct in future work. In addition, as the DEAP dataset produced variations in PSD that appeared most prominently in the higher frequency bands, which are related to high cognitive functions, we are encouraged to further study whether cognitive level has any influence on familiarity and its related processes.
Despite the novel results of the study discussed in this paper, the mechanisms underlying the effects of music familiarity on brainwaves remain unclear and are worthy of further investigation. Extending the present study by including more subjects, or by using another sophisticated analysis tool such as event-related potentials to validate the current findings, is another prospective area for our future work. In addition, incorporating familiarity information into the process of building an emotion classifier could possibly improve the performance of emotion estimation, which represents yet another avenue for future work.
6 Conclusions
This study presented evidence of the association between EEG signals and music familiarity, based on analysis of single-electrode-level PSD and brain functional connectivity. We demonstrated that classifying emotion using typical algorithms can benefit from controlling the subject's familiarity with the musical stimuli. In particular, using data collected solely during the perception of unfamiliar stimuli can help achieve more accurate emotion classification, which suggests that unfamiliar musical stimuli are more appropriate for use in the construction of emotion recognition systems.
Footnotes
1. http://docs.oracle.com/javase/7/docs/technotes/guides/sound/.
2. http://www.ant-neuro.com/products/waveguard.
3. http://www.teac.co.jp/industry/me/ap1132/.
4. Software developed for Polymate AP1532 by TEAC Corporation.
5. http://www.mathworks.com/products/signal/.
6. http://www.mathworks.com/products/statistics/.
This research is partially supported by the Center of Innovation Program from Japan Science and Technology Agency (JST), JSPS KAKENHI Grant Number 25540101, and the Management Expenses Grants for National Universities Corporations from the Ministry of Education, Culture, Sports, Science, and Technology of Japan (MEXT).
1 Institute of Scientific and Industrial Research (ISIR), Osaka University, Ibaraki-shi, Osaka 567-0047, Japan
2 Department of Computer Science and Engineering, Nagoya Institute of Technology, Showa-ku, Nagoya 466-8555, Japan
Phenotypic plasticity promotes recombination and gene clustering in periodic environments
Davorka Gulisija & Joshua B. Plotkin
Nature Communications volume 8, Article number: 2041 (2017)
While theory offers clear predictions for when recombination will evolve in changing environments, it is unclear what natural scenarios can generate the necessary conditions. The Red Queen hypothesis provides one such scenario, but it requires antagonistic host–parasite interactions. Here we present a novel scenario for the evolution of recombination in finite populations: the genomic storage effect due to phenotypic plasticity. Using analytic approximations and Monte-Carlo simulations, we demonstrate that balanced polymorphism and recombination evolve between a target locus that codes for a seasonally selected trait and a plasticity modifier locus that modulates the effects of target-locus alleles. Furthermore, we show that selection suppresses recombination among multiple co-modulated target loci, in the absence of epistasis among them, which produces a cluster of linked selected loci. These results provide a novel biological scenario for the evolution of recombination and supergenes.
The evolution of genetic recombination is a subject of longstanding interest, with tremendous development as well as outstanding questions (reviewed by Otto1). Empirical work has shown that the recombination rate is under genetic control2,3,4,5,6,7,8,9,10 and can respond to selection11. However, our understanding of how recombination arises comes primarily from theory. Theory suggests that recombination will evolve in populations with negative linkage disequilibrium between beneficial alleles12,13,14 (negative LD, where genotypes with extreme fitness effects are underrepresented relative to expectation). In finite populations, the evolution of recombination also requires a mechanism to generate constant and considerable diversity in order to sustain LD sufficient for selection to overcome genetic drift. These conditions limit the scenarios that promote recombination in nature to (1) a steady influx of mutations in combination with Hill–Robertson interference15,16,17 or (2) constantly changing biotic environments under antagonistic coevolution between species18,19. Here we propose a qualitatively different scenario for the evolution of recombination in finite populations subject to changing abiotic environments, called the "genomic storage effect"20 (Fig. 1).
Fig. 1 Schematic representation of the genomic storage effect and the recombination model. As environmental effects oscillate, the summer (red circles) and the winter (blue circles) alleles at the target locus recombine to a less harmful genetic background at the plasticity modifier locus. The plasticity modifier locus modulates the effect of selection at the target locus by making alleles at the target locus more (non-plastic, dark squares) or less (plastic, light squares) sensitive to selection. The dynamics between the two loci allow unfavorable alleles to be stored until conditions change, generating the genomic storage effect20. The genomic storage effect generates both balanced polymorphism and cycling linkage disequilibrium. Here we study whether or not this type of cycling linkage disequilibrium can lead to the evolution of recombination in initially non-recombining, monomorphic, finite populations
Models of evolution in changing environments14 suggest that recombination should evolve if epistasis changes sign over a few generations13. The period of environmental oscillation and whether the recombination rate modifier is linked to the set of seasonally selected loci determine the recombination rate, whereas the strength of selection has a marginal impact21,22. This theoretical work was developed in the infinite-population limit, whereas genetic variation was considered unlikely to persist in a finite population at a seasonally selected locus23,24,25. Mechanistic scenarios of abiotic variation that produce fluctuating LD and thus promote recombination in finite populations remain unexplored.
On the other hand, changing biotic environments caused by the coevolution of antagonistic species, such as parasite and host or predator and prey, provide a natural scenario for the evolution of recombination18, known as the Red Queen hypothesis for the evolution of sex19. The Red Queen mechanism simultaneously produces: (1) diversity at both of two selected loci, due to negative frequency dependence arising from coevolution with the antagonistic species, and (2) fluctuating LD, as novel rare haplotypes become advantageous and common ones become detrimental. However, models of the Red Queen mechanism assume that the two expressed loci both contribute to the interaction between competing species, whereas, in nature, parasites may evolve mechanisms to express a single antigen (i.e., allele) at a time26,27,28. Moreover, Otto and Nuismer29 showed that the Red Queen mechanism is unlikely to explain the omnipresence of sex, as recombination evolves under a rather narrow parameter range in such coevolutionary models.
The genomic storage effect20, by contrast, operates in the absence of coevolution with an antagonistic species and does not require a steady influx of mutation, and thus it may provide a new biological scenario for the evolution of recombination. The basic idea behind genomic storage is that, when abiotic environments change periodically, alleles can survive periods of adversity by escaping (recombining) to a genetic background that ameliorates the effects of selection, e.g., a modifier background that confers phenotypic plasticity, thereby storing diversity until conditions change. Not only does genomic storage generate diversity in finite populations, at both the target and modifier loci20, but it also generates fluctuations in the sign of LD between the two recombining loci (Supplementary Fig. 1). Thus genomic storage provides a novel biological scenario that can produce persistent fluctuating LD—a well-known mechanism for the evolution of recombination13,14,21,22. Previous work demonstrated that genomic storage produces balanced polymorphism20 assuming a fixed, non-zero rate of recombination between selected loci. In this study, we examine whether or not recombination and balanced polymorphism can evolve simultaneously from a non-recombining population.
If phenotypic plasticity in a seasonally selected trait promotes recombination, this would provide a plausible mechanism that shapes the patterns of recombination distances across genomes of many organisms. It is widely appreciated that phenotypic plasticity can mitigate deleterious effects of selection in adverse or perturbed habitats30,31,32. Plasticity can be modulated by an epistatic modifier (review by Scheiner33). Mapping studies have confirmed quantitative trait loci (QTLs) that modulate phenotypic plasticity in animal and plant model organisms34,35,36 that are known to experience seasonality in the wild. In Drosophila, for example, in addition to numerous plasticity QTLs35,36, hundreds of polymorphic loci across the genome show substantial allelic frequency oscillations in response to seasonal environmental changes37. These empirical results reveal natural populations that satisfy the conditions for the genomic storage effect.
Aside from inducing the evolution of recombination between a plasticity modifier and its target locus, the genomic storage effect might also bring into proximity multiple target loci whose effects are modulated by the same plasticity locus (e.g., co-modulated due to a shared transcription factor). The clustering of loci controlling a polymorphic phenotype, i.e., the evolution of supergenes, is poorly understood (reviewed by Thompson and Jiggins38 and by Charlesworth39). Charlesworth and Charlesworth40 concluded that clustering is unlikely in the absence of initial linkage and epistasis between the selected loci, assuming the infinite-population limit. How supergenes arise in finite populations remains unclear.
In this study, we employ analytical approximations and Monte-Carlo simulations to study the evolution of the recombination rate between a plasticity modifier and its seasonally selected target locus under the genomic storage effect20. Unlike in prior studies, this scenario for the evolution of recombination via fluctuating LD does not depend on coevolution with antagonistic species, and it allows recombination to evolve even when a single locus expresses a trait under selection. Furthermore, we show that the genomic storage effect suppresses recombination among co-modulated loci, so that supergenes may evolve sequentially in the absence of direct frequency dependence, epistasis, and initial physical linkage between the clustering loci, in finite populations.
Model

To model the evolution of the recombination rate between a plasticity and a target locus, we generalize the Wright–Fisher population model of phenotypic plasticity described in Gulisija et al.20 to consider three loci: a recombination modifier, a bi-allelic plasticity modifier, and a bi-allelic target locus whose fitness effects depend upon a periodically changing environment. At the target locus, which codes for a periodically selected trait, an ancestral allele (a) is favored over the derived allele (d) for half of the environmental period (a season), whereas d is favored over a for the other half of the environmental period (for example, the winter and summer alleles shown in Fig. 1). We assume a well-known and empirically supported epistatic model of plasticity33,41, in which plasticity is mediated by, for example, a transcription factor or an epigenetic modifier locus. Note that we do not model the environmentally induced effects of plasticity on the phenotypic trait itself; rather, as is common in evolutionary models, we simply describe their resulting effects on fitness.
When the modifier locus carries the plasticity allele (M) it provides the ability to sense the environment and alter the phenotype encoded at the target locus. In particular, a cue from an adverse environment signals the plasticity allele M to modify the phenotype of a detrimental allele at the target locus such that it is fitter than it would be in the absence of the plasticity allele (m). When conditions are not adverse and the target allele is favored, however, the plasticity allele M reduces net fitness compared to allele m (due to some cost of plasticity32,42,43). The overall effect of such a plasticity modifier allele is thus equivalent to the action of a robustness modifier44, such that it reduces the magnitude of periodic fitness oscillations in its carriers (broken lines in Fig. 1).
Our description of plasticity and its fitness consequences in seasonal environments does not assume that plasticity conveys either an instantaneous or, conversely, a lifelong effect on the phenotype. Rather, we simply describe the marginal fitness benefit over the lifetime of the organism. This conditionally adaptive model of plasticity has been shown20 to promote balanced polymorphism across a wide range of parameters, including different shapes of the plasticity effect, and even under stochastic environmental perturbations, when recombination is assumed to be present. Hence, we do not further explore distinct models of plasticity, but for simplicity adopt a straightforward deterministic fitness function characterizing the joint fitness effects of alleles at the plasticity locus and the target locus (see below).
We assume that the plasticity modifier locus and its target locus recombine at a rate controlled by a recombination modifier locus. The frequencies of resulting haplotypes are subject to deterministic effects of haploid selection in a constant population and of recombination between the three loci, and to the stochastic effects of genetic drift in finite populations, in each generation. In this section, we first describe the deterministic dynamics: selection and recombination, when the two competing alleles at the recombination modifier locus are present.
Combinations of alleles at the three loci (the recombination, plasticity, and target locus, with two competing alleles at each) form eight distinct haplotypes, \(r_1ma\), \(r_1Ma\), \(r_1md\), \(r_1Md\), \(r_2ma\), \(r_2Ma\), \(r_2md\), and \(r_2Md\), where \(r_1\) and \(r_2\) are recombination modifier alleles that produce different recombination rates between the plasticity and the target locus only. The frequencies of the eight haplotypes in each generation are first modified by selection, such that the post-selection frequency is the product of a haplotype's pre-selection frequency and its relative fitness: \(x_{g,t}^{(1)} = x_{g,t}\,w_{g,t}/\bar w_t\), where \(g = r_1ma\), \(r_1Ma\), \(r_1md\), \(r_1Md\), \(r_2ma\), \(r_2Ma\), \(r_2md\), or \(r_2Md\), and
$$w_{r_1ma,t} = w_{r_2ma,t} = 1 - s_t,$$

$$w_{r_1Ma,t} = w_{r_2Ma,t} = 1 - s_t(1 - p),$$

$$w_{r_1md,t} = w_{r_2md,t} = 1 + s_t,$$

$$w_{r_1Md,t} = w_{r_2Md,t} = 1 + s_t(1 - p),$$

$$\bar w_t = \sum_g x_{g,t}\,w_{g,t},$$

$$s_t = s_{\max}\sin\!\left(\frac{2\pi t}{C}\right).$$
Here \(s_t\) denotes the periodic environmental fitness effect at the target locus at time \(t\), which follows a sinusoidal function with maximum \(s_{\max}\) over a period of \(C\) discrete generations. The plasticity effect, \(p\), is a constant that quantifies the reduction in the magnitude of the periodic environmental effect in carriers of the plasticity modifier allele M. A list of model variables is given in Table 1.
Table 1 Common symbols and terms used in the text
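To make this selection step concrete, a minimal numerical sketch is given below, assuming the eight haplotypes are stored in the fixed order \(r_1ma, r_1Ma, r_1md, r_1Md, r_2ma, r_2Ma, r_2md, r_2Md\); the function and variable names are our own illustrative choices rather than the authors' code.

```python
import numpy as np

# Haplotype order assumed throughout these sketches.
HAPLOTYPES = ["r1ma", "r1Ma", "r1md", "r1Md", "r2ma", "r2Ma", "r2md", "r2Md"]

def seasonal_effect(t, s_max, C):
    """Periodic environmental effect s_t = s_max * sin(2*pi*t / C) (Eq. 6)."""
    return s_max * np.sin(2.0 * np.pi * t / C)

def haplotype_fitnesses(t, s_max, C, p):
    """Fitnesses from Eqs. 1-4; the r1/r2 alleles are selectively neutral,
    so fitness depends only on the plasticity-target allele pair."""
    s_t = seasonal_effect(t, s_max, C)
    pair_w = {
        "ma": 1.0 - s_t,              # fully exposed ancestral allele
        "Ma": 1.0 - s_t * (1.0 - p),  # plasticity damps the seasonal effect
        "md": 1.0 + s_t,
        "Md": 1.0 + s_t * (1.0 - p),
    }
    return np.array([pair_w[h[2:]] for h in HAPLOTYPES])

def select(x, t, s_max=0.1, C=20, p=1.0):
    """Post-selection frequencies x^(1) = x * w / w_bar."""
    w = haplotype_fitnesses(t, s_max, C, p)
    return x * w / np.dot(x, w)
```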
The post-selection haplotype frequencies are subsequently modified by recombination between the three loci. The physical arrangement of the three loci is assumed to be recombination–plasticity–target. The recombination modifier and the plasticity locus recombine at a fixed rate, \(R\), while the plasticity and the target locus recombine according to an additive recombination phenotype determined by the two competing alleles at the recombination locus (see Recombination recursion in the "Methods"). Therefore, two chromosomes recombine at rate \(r_1\) or \(r_2\) if they carry the same recombination allele, and at rate \(r_c = (r_1 + r_2)/2\) if they carry different alleles. (Note that additivity here refers to the recombination phenotype, not to fitness, as an intermediate phenotype might carry an advantage or disadvantage compared to both of the pure recombination phenotypes.)
In the absence of genetic drift, post-recombination frequencies become the starting allele frequencies in the subsequent generation, whereas in the presence of drift the subsequent generation is formed by multinomial sampling from those frequencies. We used multinomial draws to implement simulations involving eight haplotypes, and individual-based simulations for populations that included multiple different recombination-rate alleles, sampling parents (with replacement) for reproduction and recombination according to their relative fitnesses. The two simulation methods are mathematically equivalent.
We first conduct a stability analysis based on the three-locus recursion equations described above and in the "Methods", which hold in the infinite-population limit, to understand the deterministic dynamics at the three coevolving loci. Then we study the evolution of the recombination rate, and of the plasticity and target alleles, in finite populations via Monte-Carlo simulations. Finally, we extend the finite-population simulations to include two or more co-modulated target loci, and an additional recombination modifier locus (or loci) that controls the recombination rates among the targets, in order to examine the evolution of target-gene clustering. The details of the stability analysis and the simulation approaches are given in the "Methods".
Below we report the evolution of both balanced polymorphism and recombination between the plasticity modifier locus and the target locus whose fitness effects it modulates, in periodic environments. Moreover, if there are multiple target loci that contribute additively to fitness, we find evolution toward complete linkage among the target loci whose effects are modulated by the same plasticity modifier locus.
As balanced polymorphism and recombination are co-dependent under the genomic storage effect, stability analysis was used not only to infer the evolutionarily stable (ES) recombination rate, which cannot be displaced by any other rate, but also to identify the range of recombination rates that evolve with a stable balanced polymorphism at both the plasticity and target loci. This analysis predicts the range of recombination rates that we would expect to observe in populations in the absence of a steady influx of mutations.
For all of the examined periods of environmental variation, we find evolution to a non-zero ES rate of recombination, r*, with a stable polymorphism at both the plasticity and the target locus (Fig. 2). The ES recombination rate r* increases as the period of environmental oscillations (C) decreases, in accordance with earlier general models of fluctuating epistasis14,21,22,45. Note that the ES recombination rate we derive represents a lower bound on the potential ES rate for each periodicity, because we assumed free recombination between the recombination modifier and the plasticity–target haplotype14,21.
Fig. 2 Local stability analysis of the recombination rate \(r\) between the plasticity and the target locus under the genomic storage effect. For each pair of recombination rate alleles, \(r_1\) and \(r_2\), the corresponding color indicates whether \(r_2\) fixes over \(r_1\) (blue) or whether the two rates coexist at intermediate frequencies (orange), as predicted by the local stability analysis for C = 10 (a), 20 (b), 40 (c), and 80 (d). Empty cells imply that no plasticity–target polymorphism was present at equilibrium, i.e., no genomic storage was possible. All the blue cells above the diagonal imply evolution toward higher recombination rates, while the blue cells below the diagonal imply evolution toward lower recombination rates (see arrows in the first panel). For example, under C = 10 an allele encoding no recombination will perish when competed against any rate ranging from 0.2 to 0.39, but it will coexist in balanced polymorphism with rates in the range 0.4–0.5. The red dot corresponds to the ES recombination rate
Interestingly, stable polymorphic equilibria at both the plasticity and the target locus occur over a wide range of recombination rates, particularly as the environmental period increases (Fig. 2). In the absence of an allele encoding the ES rate \(r^*\), two alleles coding for different recombination rates can coexist in proportions such that, on average, the population still recombines at a rate roughly equal to \(r^*\), particularly for shorter C.
A natural question is: what is the source of selection on a recombination modifier allele? The recombination modifier allele appears selectively neutral (Eqs. 1–4 in "Model"), but it is indirectly selected through the rate at which it produces, and finds itself associated with, selected plasticity–target haplotypes. We can understand this indirect selection by considering the fitness of a recombination allele (\(r_2\)) relative to the competing recombination allele (\(r_1\)) over the period of fitness oscillations (C), which is given by
$$\frac{w_{r_2}}{w_{r_1}} = \prod_{t=0}^{C}\left(\frac{x_{r_2ma,t}\,w_{ma,t} + x_{r_2Ma,t}\,w_{Ma,t} + x_{r_2md,t}\,w_{md,t} + x_{r_2Md,t}\,w_{Md,t}}{x_{r_1ma,t}\,w_{ma,t} + x_{r_1Ma,t}\,w_{Ma,t} + x_{r_1md,t}\,w_{md,t} + x_{r_1Md,t}\,w_{Md,t}}\times\frac{f_{r_1,t}}{f_{r_2,t}}\right),$$
where \(f_{r_1,t}\) and \(f_{r_2,t}\) are the frequencies of \(r_1\) and \(r_2\) in the population at time t. The long-term fate of a recombination allele depends on the size of the product above: \(r_2\) will increase in frequency if the geometric mean of the numerator exceeds that of the denominator (i.e., \(w_{r_2}/w_{r_1} > 1\)), and it will decrease if the opposite holds46. Therefore, it is the relative frequencies of the haplotypes in the numerator and denominator above that determine the outcome at the recombination locus. For example, consider the evolutionary dynamics over one period of fitness oscillations. Within a season (a sequence of environments with the same direction of selection, such as winter), selection promotes associations between the beneficial allele (e.g., winter allele a) and the non-plasticity allele (m), and also between the detrimental allele (summer allele d) and the plasticity allele (M), due to positive epistasis between them, E > 0: a is relatively fitter when paired with m than when paired with M, and d is relatively fitter when paired with M. Thus, epistasis selects for same-sign LD, i.e., ma and Md are overrepresented in the population relative to their expectation (D > 0). However, as the season changes, the detrimental target allele becomes advantageous and vice versa. Now epistasis changes sign, E < 0: the newly detrimental allele (a) is fully exposed to detrimental environmental effects when paired with m, whereas the newly advantageous allele (d) is less beneficial when paired with M than it would be if paired with m. Since it takes time for LD to change sign, there is a discrepancy between the sign of epistasis and that of LD, such that ED < 0 for a period of time. In other words, at the beginning of the season LD lags behind epistasis in changing sign, and the beneficial allelic combinations are underrepresented in the population compared to their expectation. At this point, an allele encoding a higher recombination rate will increase the proportion of fitter plasticity–target haplotypes more quickly than one encoding a lower rate, and so the higher-recombination allele will hitchhike to a higher frequency. As long as ED < 0, conditions will favor a reduction in disequilibrium and an increase in recombination. However, epistasis may eventually change the sign of LD, i.e., there will be an excess of haplotypes containing allele combinations that maximize fitness, which will then favor a reduction in recombination. The duration of the discrepancy between the sign of epistasis and the sign of linkage disequilibrium therefore determines the ES recombination rate \(r^*\), with longer periods of ED < 0 resulting in a higher ES recombination rate. These dynamics of cycling LD and changing epistasis are depicted in Supplementary Fig. 1.
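Given a simulated trajectory of the eight haplotype frequencies, Eq. 7 can be evaluated directly. The sketch below (our own illustrative code, not the authors') accumulates the per-generation ratio of the mean fitnesses of \(r_2\) and \(r_1\) carriers, which is algebraically identical to the product above.

```python
import numpy as np

def relative_fitness_r2_vs_r1(traj, fitness_fn):
    """Accumulate Eq. 7 over a trajectory.

    traj: iterable of length-8 frequency vectors (numpy arrays), one per
    generation, ordered [r1ma, r1Ma, r1md, r1Md, r2ma, r2Ma, r2md, r2Md].
    fitness_fn(t): the 8 haplotype fitnesses at generation t.
    """
    ratio = 1.0
    for t, x in enumerate(traj):
        x = np.asarray(x, dtype=float)
        w = fitness_fn(t)
        f_r1, f_r2 = x[:4].sum(), x[4:].sum()      # recombination-allele frequencies
        mean_w_r1 = np.dot(x[:4], w[:4]) / f_r1    # mean fitness of r1 carriers
        mean_w_r2 = np.dot(x[4:], w[4:]) / f_r2    # mean fitness of r2 carriers
        # equals (sum_r2 x*w)/(sum_r1 x*w) * f_r1/f_r2, the factor in Eq. 7
        ratio *= mean_w_r2 / mean_w_r1
    return ratio  # > 1 over a full period: r2 is (indirectly) favored
```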
From the description of dynamics above, it is also evident that a recombination modifier linked to the plasticity–target haplotype will gain more selective advantage than an unlinked recombination modifier, since it forms a stronger association with the fitter subpopulation.
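For readers who want to track these quantities numerically, D and E can be computed from the haplotype frequencies and the fitness scheme as sketched below. These are our own helper functions, using the same haplotype ordering as above; the multiplicative form of E is our assumption, chosen to match Eqs. 1–4.

```python
import numpy as np

def plasticity_target_LD(x):
    """D = x_ma * x_Md - x_Ma * x_md, pooling the r1 and r2 backgrounds;
    D > 0 means the ma and Md combinations are overrepresented."""
    x_ma, x_Ma = x[0] + x[4], x[1] + x[5]
    x_md, x_Md = x[2] + x[6], x[3] + x[7]
    return x_ma * x_Md - x_Ma * x_md

def epistasis(s_t, p):
    """Multiplicative epistasis E = w_ma * w_Md - w_Ma * w_md (our
    assumption, built from the fitness scheme in Eqs. 1-4). With p = 1
    this reduces to E = -2 * s_t, so E changes sign with the season."""
    return (1.0 - s_t) * (1.0 + s_t * (1.0 - p)) \
        - (1.0 - s_t * (1.0 - p)) * (1.0 + s_t)
```

Tracking the product E·D over a cycle then exposes the window in which ED < 0 and higher recombination is transiently favored.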
The results of our local stability analysis, above, agree with an analysis of the relative fitness of one recombination rate vs. another across each environmental cycle (Eq. 7). The ratio \(w_{r_2}/w_{r_1}\) changes in time as haplotype frequencies change during the approach to the three-locus stable attractor. In every case where we observe fixation of rate \(r_2\), as predicted by the local stability analysis (blue cells in Fig. 2), we find that the fitness of \(r_2\) relative to \(r_1\) (\(w_{r_2}/w_{r_1}\)) is uniformly greater than 1 during the approach to equilibrium. Conversely, in those cases for which the local stability analysis predicts an intermediate equilibrium frequency at the recombination locus (orange cells in Fig. 2), we indeed find that \(w_{r_2}/w_{r_1}\) < 1 when the frequency of the \(r_2\) allele exceeds its equilibrium frequency, and \(w_{r_2}/w_{r_1}\) > 1 when the frequency of \(r_2\) is below its equilibrium frequency. These observations of relative fitness values agree with the local stability analysis and suggest that the locally stable equilibria are globally stable as well.
Evolution of recombination in finite populations
For finite non-recombining populations subject to genetic drift, both balanced polymorphism and recombination between a plasticity modifier and its target locus arise across a wide range of environmental periodicities (Fig. 3). We find that the stationary distribution of the recombination rate coded by an unlinked modifier allele subject to recurrent mutation is centered around the ES recombination rate predicted by stability analysis in the infinite-population limit. Also, the peaks of the stationary distribution of the recombination rate at a given periodicity are very close to each other even when the strength of selection at the target locus is varied considerably (\(s_{\max}\) vs. \(s_{\max}/2\)), as expected14,21. However, in finite populations, especially when selection is weak, the stationary distribution of recombination rates can be very wide, such that there is a substantial chance of finding a population far from the ES rate predicted by the infinite-population analysis. In fact, in some regimes where an infinite-population analysis predicts a positive ES recombination rate, selection for recombination in a corresponding finite population can be too weak to overcome drift, even to the point that balanced polymorphism is lost (e.g., C = 80, \(s_{\max}\) = 0.3, and Nµ = 0.01).
Fig. 3 Stationary distribution of the recombination rate \(r\) between the plasticity and the target locus. We assume that the recombination rate is encoded by an unlinked recombination modifier subject to recurrent mutation in a population of N = 25,000, with C = 10 and \(s_{\max}\) = 0.5 (a), C = 20 and \(s_{\max}\) = 0.25 (b), C = 40 and \(s_{\max}\) = 0.15 (c), and C = 80 and \(s_{\max}\) = 0.15 (d) (broken lines indicate results with weaker selection, \(s_{\max}/2\)). The tick on the horizontal axis denotes the ES recombination rate (\(r^*\)) predicted by the deterministic stability analysis. Nµ = 0.1. One thousand replicate simulations were run for a burn-in of 100N generations (by which point the distribution had stabilized), and the stationary distribution was recorded over the next 100N generations. When the strength of selection at the target locus is reduced (by a factor of two, dotted lines), the distribution of recombination rates becomes slightly broader but remains centered around the ES rate. The tick mark on the y axis of the bottom right panel indicates the height of the y axes in the other panels
Although recombination always evolves when environmental periods are short, the stationary distributions for long environmental periods (C ≥ 40) include significant probability mass near non-recombinant modifier values (i.e., r ~ 0), even when balanced polymorphism is present. Nonetheless, our results on stationary rate distributions are conservative lower bounds on the evolution of the recombination rate r, because all simulations assumed an unlinked recombination modifier (R = 0.5). Additional simulations showed larger equilibrium rates when the recombination modifier is flanked by the plasticity and target loci. In fact, when the recombination modifier is closely linked to the plasticity modifier (R = 0.01), the plasticity and target locus evolve toward free recombination (~0.5) with C = 10, and recombination evolves even when environmental periods are very long (C = 80) (see also Supplementary Fig. 2).
The individual-based simulations reported above assumed relatively strong selection at the target locus in order to reduce computational cost. This regime of strong seasonal frequency oscillations has indeed been reported in empirical studies47,48,49, even at numerous loci simultaneously37. Nonetheless, the genomic storage effect extends to weaker selection than shown in the figures above, provided \(Ns_{\max}\) is large enough20. To verify that recombination also evolves under weaker selection, we conducted simulations with eight haplotypes in larger populations. In these simulations, the ES recombination rate estimated by the infinite-population analysis was introduced into an initially non-recombinant population. These simulations show not only that recombination can evolve readily in large populations with relatively small \(s_{\max}\) (Supplementary Fig. 3, compare left and middle panels), but that it does so even more readily if the recombination modifier is linked to the plasticity–target sequence. Conveniently, \(s_{\max}\) has little influence on the equilibrium recombination rate, and so the results of our individual-based simulations likely apply to a wider set of selective pressures than can feasibly be examined by computation.
All the results above assume a plasticity effect p = 1. But will recombination arise for weaker plasticity effects? In the deterministic limit of an infinite population, cycling LD is guaranteed for any p > 0. In finite populations, however, drift can disrupt diversity, especially when \(Ns_{\max}\) is small. Nonetheless, Supplementary Fig. 3 shows that recombination readily evolves across a wide range of plasticity effect sizes (p ≥ 0.25) in populations of size \(N = 10^6\) with \(s_{\max} = 0.05\) and Nµ = 0.1, for example.
Evolution of clustering between two co-modulated target loci
Genomic storage also leads to reduced recombination, and eventually complete linkage (r′ ~ 0), among multiple, non-epistatic target loci co-modulated by the same plasticity modifier, in finite populations (Fig. 4). Hence, while genomic storage increases the recombination between the plasticity modifier and its target loci, storage has the opposite effect on the recombination rate among co-regulated target loci.
Fig. 4 Stationary distribution of the recombination rates between two co-modulated target loci. We assume that the initial target locus starts at the optimal distance from the plasticity modifier locus and that a new target locus is introduced downstream, away from the plasticity modifier–target haplotype. N = 25,000 and C = 10 with \(s_{\max}\) = 0.5 (a), C = 20 with \(s_{\max}\) = 0.25 (b), and C = 40 and 80 with \(s_{\max}\) = 0.15 (c, d) for each of the target loci (broken lines indicate \(s_{\max}/2\)), with Nµ = 0.1. One thousand replicate simulations were run for a burn-in of 100N generations (by which point the distribution had stabilized), and the stationary distribution was recorded over the next 100N generations (to two-decimal-place precision). The tick marks on the y axes in the top panels indicate the height of the y axes in the bottom panels
Irrespective of whether the two target loci are introduced sequentially or simultaneously, and irrespective of the initial position of the newly introduced target locus or of the location of their recombination modifiers, we find the evolution of reduced recombination rates r′ among the target loci (i.e., clusters), positive linkage disequilibrium between the loci, higher levels of diversity (provided selection is not too strong), and an increased magnitude of fitness oscillations at each locus compared to the single-target case. Interestingly, when the first target locus is controlled by a linked recombination modifier (R = 0.01) and the second target locus by an unlinked recombination locus, both loci cluster to the recombination distance from the plasticity locus that is favored under the linked recombination modifier. This occurs because positive LD between the two target loci, generated by genomic storage and genetic drift, leads to clustering between the two target loci despite the fact that they should gravitate to different recombination distances from the plasticity modifier. The clustering of target loci results in a stationary distribution of r′ that is almost indistinguishable from that obtained when the loci are introduced sequentially, in which case they gravitate to the same recombination distance from the plasticity modifier (Supplementary Fig. 4). Clustering of target loci was not observed with C = 80 and \(s_{\max}\) = 0.15 (although Nµ = 0.1), because in that regime strong selection on clusters of ancestral–ancestral (a–a) or derived–derived (d–d) alleles pushes allele frequencies to the boundaries and removes balanced polymorphism at the target loci, a result that occurs only in finite populations and is not predicted by a deterministic analysis.
As noted above, genomic storage generates balanced polymorphism and positive LD among multiple target loci, in the absence of epistasis between them. This result is a converse, of sorts, to Hill–Robertson interference15 in finite populations, which produces negative LD among non-epistatic loci under non-balancing selection. Positive LD arises in our model because balancing selection maintains clusters of loci with the largest fitness variance (ancestral–ancestral or derived–derived), while mismatched haplotypes perish by genetic drift. Therefore, we find an excess of a–a or d–d haplotypes compared to expectation, that is, positive LD. Hence, genomic storage presents a naturally plausible mechanism for the clustering of seasonal alleles, as a consequence of phenotypic plasticity in changing environments.
Evolution of supergenes
Clustering of multiple target loci does not arise as readily in finite populations as clustering of just two target loci. This effect arises because the relative per-locus contribution to phenotypic variance decreases as the number of target loci increases, and each such locus becomes "less visible" to selection in the presence of genetic drift. In particular, using the same parameters as in the previous section, we find that clustering is unlikely to evolve de novo among multiple co-regulated loci. However, if the target loci are initiated in tight linkage, a sequence of aligned alleles acting in unison will arise despite mutation for recombination among the target loci. Therefore, supergenes appear unlikely to evolve from polymorphic loci in the absence of their initial proximity; but genomic storage can maintain the proximity of gene complexes generated through tandem duplication and functional divergence50,51,52.
Another way to evolve supergenes under genomic storage is by the sequential introduction of target loci, where each novel polymorphic locus is introduced after an initial cluster has already formed. Under this sequential model, we recover the evolution of three-locus clusters, with stationary distributions of the recombination rates among co-regulated target loci similar to those found in the two-locus model. For a limited set of parameters (with C = 20), we find that the supergenes subsequently grow, through sequential clustering of the extant supergene with a novel locus and irrespective of its initial recombination distance (r′ ~ U[0, 0.5]), to produce supergenes including as many as n = 8 target loci, the maximum number we examined (Supplementary Fig. 5). At least 60% of the target-loci haplotypes occur in the form of co-segregating all-a alleles or all-d alleles. As supergenes are created and expanded, the magnitude of frequency oscillations over the period of environmental variation increases. Very large clusters are less likely to form when selection on each target locus is strong, however. Indeed, we observe that an 8-locus supergene is more likely to arise under weaker selection pressure (\(s_{\max}\) = 0.075 vs. 0.1, Supplementary Fig. 5). These results suggest that supergenes can evolve from initially distant loci if mutation at the target loci is rare enough that a novel locus becomes polymorphic only after an initial cluster has formed, and provided selection on each target is not too strong.
The sequential model above demonstrates how supergenes can evolve via recombination reduction, or even from initially unlinked loci via genomic rearrangements38. Empirical data already implicate genomic rearrangements in the formation of supergenes, especially in classic examples such as butterfly mimicry53 or the "social chromosome" in fire ants54. In the sequential model we have described, an allele encoding zero recombination can invade even a freely recombining population, provided there is no cost to heterozygous mating. This assumption is supported by earlier work on chromosomal rearrangements that spread in populations by suppressing recombination in heterozygotes55. Thus our model is equivalent to a (no-cost) model of the evolution of genomic rearrangements, with the exception that the recombination phenotype is additive under our model, as opposed to dominant in the case of rearrangements. The addition of dominance to our model, however, would only increase selection favoring non-recombination among target loci.
Discussion

The Red Queen hypothesis provides a plausible scenario for the evolution of recombination under changing environments, but it is limited to cases involving coevolution with an antagonistic species, such as a parasite. This study introduces a novel scenario for the evolution of recombination under changing environments that does not require antagonistically interacting species or a steady influx of mutations: the genomic storage effect due to phenotypic plasticity. This scenario may produce both balanced polymorphism and cycling linkage disequilibrium, and it provides a natural setting that can readily select for recombination even when a single locus structurally codes the phenotype under selection. Furthermore, while genomic storage selects for recombination between the plasticity modifier locus and its expressed target loci, the same effect also selects for complete linkage of additive co-regulated loci that are adapted to the same environment; that is, genomic storage can produce supergenes.
Our results on how the ES recombination rate depends on the period of environmental oscillation and on the strength of selection are in accordance with longstanding general theory for the effects of fluctuating epistasis13,14,21,22. What is novel here, however, is the specific biological scenario we propose that ensures standing diversity and fluctuating epistasis, through the genomic storage effect involving a plasticity modifier locus and seasonally selected target allele. This scenario is qualitatively different from a pair of symmetric sites, as is assumed in prior mathematical treatments of fluctuating epistasis. Moreover, we have shown that genomic storage generates the evolution of fluctuating LD and sustained polymorphism even in the presence of genetic drift.
Previous work demonstrated that genomic storage promotes balanced polymorphism across a range of parameters20, including the variation in the strength of benefit or cost of plasticity, the period of environmental change, with and without recurrent mutation, and even in the presence of random environmental perturbations; but those analyses assumed a positive, constant recombination rate. Here we find that both recombination and balanced polymorphism can arise simultaneously in initially non-recombining monomorphic finite populations subject to periodic selection, provided the population is sufficiently large or selection sufficiently strong. Large population sizes are not uncommon for many organisms subject to periodic environments, such as seasonally evolving organisms (e.g., copepods, a dominant zooplankton56). Moreover, empirical studies have reported large allele frequency oscillations under temporally varying selection47,48,49, even at many loci simultaneously37, corresponding to selection coefficients ranging from 5 to 50%. Given widespread evidence of conditions that engender genomic storage20, this novel model may provide a plausible mechanism for the evolution of recombination in natural populations.
We have framed our model of genomic storage in terms of phenotypic plasticity, because it is well documented as a widespread example of genomic storage in periodic environments. However, the overall effect of such a plasticity modifier allele is equivalent to the action of a robustness modifier44, which reduces the magnitude of periodic fitness oscillations in its carrier (see "Model" in the Results). Indeed, plastic phenotypes or invariable (robust) phenotypes both provide qualitatively similar fitness dynamics and are similar from an evolutionary standpoint. And so our work has implications beyond phenotypic plasticity to a wider class of natural scenarios involving buffering or sign epistatic effects.
The genomic storage effect simultaneously promotes recombination between a regulator (a plasticity modifier such as a transcription factor) and its target locus, while suppressing recombination among multiple co-modulated target loci, producing clusters of target loci with aligned allelic effects. Previous research attributed the reduction of recombination between polymorphic loci to frequency dependence and epistasis in infinite populations, and suggested that this phenomenon would be unlikely in the absence of initial physical linkage40. By contrast, under the scenario of recombination reduction due to phenotypic plasticity in finite populations, the clustering of two target loci can readily arise independent of initial physical linkage between the loci, even in the absence of epistasis between them. The two target loci gravitate to the same recombination distance from the plasticity locus that modulates their effects. Furthermore, we find that genomic storage in finite populations promotes positive linkage disequilibrium between co-modulated loci despite the lack of epistatic interactions between them, which further promotes reduced recombination among target loci.
While two target loci under genomic storage will evolve to be clustered independently of initial linkage, clustering of more than two loci occurs only in the presence of initial linkage between the loci or in a sequential fashion: when existing polymorphic loci are clustered, a newly polymorphic locus evolves to be tightly linked as well, such that a supergene can emerge gradually. This sequential model implies that supergenes might arise even via genomic rearrangements from initially unlinked loci, provided the mutation rate at the target loci is low enough that clusters are formed before a new, unlinked, polymorphic locus arises. As supergenes emerge, selection generates strong positive linkage disequilibrium within a cluster. This joint effect of genetic drift and genomic storage on the creation of supergenes suggests that other forms of storage effects might also favor supergenes, such as storage due to population subdivision57.
Our study highlights a role for recombination modification in the maintenance of genomic variation in seasonal environments. Polymorphism under the genomic storage effect persists only in the presence of recombination between the plasticity modifier locus and its target locus or a cluster of target loci. Such recombination will naturally evolve, we have shown, and then subsequently may promote balanced polymorphism at all of the loci. Thus, the maintenance of diversity by genomic storage is tightly linked to evolution of recombination rates. As balanced polymorphism facilitates rapid adaptation58,59,60,61,62, here recombination modification, both regulator-target distance and target loci clustering, emerges as a mechanism underlying persistence in changing environments.
While our study has focused on the effects of plasticity in a selected trait on the evolution of recombination, the effects of plasticity in recombination itself have been explored elsewhere in the context of condition-dependent recombination63,64,65 (reviewed by Ram and Hadany66). These models show that plastic recombination may arise if it allows a recombination allele to escape to a beneficial genetic background in poor conditions. Such plastic recombination might be advantageous and broaden overall conditions for evolution of recombination in changing environments67.
This study highlights the importance of phenotypic plasticity in shaping recombination rates across the genome, and the diverse effects it has on genetic architecture in periodic environments. We predict specific patterns of genomic distances in seasonally evolving organisms under genomic storage: transcription factors or epigenetic modifiers that modify phenotype and fitness are predicted to be unlinked or loosely linked to their target loci, depending on the periodicity of the environment to which they respond; while co-regulated expressed clusters of targets are expected to be tightly linked. The specificity of these predictions, which are unique to the genomic storage scenario, will allow this theory of recombination evolution to be compared against empirical data. More generally, the predictions of the genomic storage model can help inform empirical research on oscillating alleles and their relationship to phenotypic plasticity, just when we are rapidly uncovering many seasonal alleles37 and plasticity modifiers35,36 in natural populations.
Methods

Recombination recursion
The eight haplotype frequencies following recombination are:
$$\begin{aligned} x_{r_1ma,t}^{(2)} ={}& x_{r_1ma,t}^{(1)}\big(1 - r_1\,x_{r_1Md,t}^{(1)} - R\,x_{r_2Ma,t}^{(1)} - (R + r_c - 2Rr_c)\,x_{r_2md,t}^{(1)} - (R + r_c - Rr_c)\,x_{r_2Md,t}^{(1)}\big)\\ &+ x_{r_1Ma,t}^{(1)}\big(r_1\,x_{r_1md,t}^{(1)} + R\,x_{r_2ma,t}^{(1)} + Rr_c\,x_{r_2md,t}^{(1)}\big)\\ &+ x_{r_1md,t}^{(1)}\big((R + r_c - 2Rr_c)\,x_{r_2ma,t}^{(1)} + (1 - R)r_c\,x_{r_2Ma,t}^{(1)}\big)\\ &+ R(1 - r_c)\,x_{r_1Md,t}^{(1)}\,x_{r_2ma,t}^{(1)}, \end{aligned}$$

$$\begin{aligned} x_{r_1Ma,t}^{(2)} ={}& x_{r_1Ma,t}^{(1)}\big(1 - r_1\,x_{r_1md,t}^{(1)} - R\,x_{r_2ma,t}^{(1)} - (R + r_c - Rr_c)\,x_{r_2md,t}^{(1)} - (R + r_c - 2Rr_c)\,x_{r_2Md,t}^{(1)}\big)\\ &+ x_{r_1ma,t}^{(1)}\big(r_1\,x_{r_1Md,t}^{(1)} + R\,x_{r_2Ma,t}^{(1)} + Rr_c\,x_{r_2Md,t}^{(1)}\big)\\ &+ R(1 - r_c)\,x_{r_1md,t}^{(1)}\,x_{r_2Ma,t}^{(1)}\\ &+ x_{r_1Md,t}^{(1)}\big((1 - R)r_c\,x_{r_2ma,t}^{(1)} + (R + r_c - 2Rr_c)\,x_{r_2Ma,t}^{(1)}\big), \end{aligned}$$

$$\begin{aligned} x_{r_1md,t}^{(2)} ={}& x_{r_1md,t}^{(1)}\big(1 - r_1\,x_{r_1Ma,t}^{(1)} - R\,x_{r_2Md,t}^{(1)} - (R + r_c - 2Rr_c)\,x_{r_2ma,t}^{(1)} - (R + r_c - Rr_c)\,x_{r_2Ma,t}^{(1)}\big)\\ &+ x_{r_1ma,t}^{(1)}\big(r_1\,x_{r_1Md,t}^{(1)} + (R + r_c - 2Rr_c)\,x_{r_2md,t}^{(1)} + (1 - R)r_c\,x_{r_2Md,t}^{(1)}\big)\\ &+ R(1 - r_c)\,x_{r_1Ma,t}^{(1)}\,x_{r_2md,t}^{(1)}\\ &+ R\,x_{r_1Md,t}^{(1)}\big(r_c\,x_{r_2ma,t}^{(1)} + x_{r_2md,t}^{(1)}\big), \end{aligned}$$

$$\begin{aligned} x_{r_1Md,t}^{(2)} ={}& x_{r_1Md,t}^{(1)}\big(1 - r_1\,x_{r_1ma,t}^{(1)} - R\,x_{r_2md,t}^{(1)} - (R + r_c - 2Rr_c)\,x_{r_2Ma,t}^{(1)} - (R + r_c - Rr_c)\,x_{r_2ma,t}^{(1)}\big)\\ &+ R(1 - r_c)\,x_{r_1ma,t}^{(1)}\,x_{r_2Md,t}^{(1)}\\ &+ x_{r_1Ma,t}^{(1)}\big(r_1\,x_{r_1md,t}^{(1)} + (1 - R)r_c\,x_{r_2md,t}^{(1)} + (R + r_c - 2Rr_c)\,x_{r_2Md,t}^{(1)}\big)\\ &+ R\,x_{r_1md,t}^{(1)}\big(r_c\,x_{r_2Ma,t}^{(1)} + x_{r_2Md,t}^{(1)}\big), \end{aligned}$$

$$\begin{aligned} x_{r_2ma,t}^{(2)} ={}& x_{r_2ma,t}^{(1)}\big(1 - r_2\,x_{r_2Md,t}^{(1)} - R\,x_{r_1Ma,t}^{(1)} - (R + r_c - 2Rr_c)\,x_{r_1md,t}^{(1)} - (R + r_c - Rr_c)\,x_{r_1Md,t}^{(1)}\big)\\ &+ x_{r_1ma,t}^{(1)}\big(R\,x_{r_2Ma,t}^{(1)} + (R + r_c - 2Rr_c)\,x_{r_2md,t}^{(1)} + R(1 - r_c)\,x_{r_2Md,t}^{(1)}\big)\\ &+ (1 - R)r_c\,x_{r_1Ma,t}^{(1)}\,x_{r_2md,t}^{(1)} + Rr_c\,x_{r_1md,t}^{(1)}\,x_{r_2Ma,t}^{(1)} + r_2\,x_{r_2Ma,t}^{(1)}\,x_{r_2md,t}^{(1)}, \end{aligned}$$

$$\begin{aligned} x_{r_2Ma,t}^{(2)} ={}& x_{r_2Ma,t}^{(1)}\big(1 - r_2\,x_{r_2md,t}^{(1)} - R\,x_{r_1ma,t}^{(1)} - (R + r_c - Rr_c)\,x_{r_1md,t}^{(1)} - (R + r_c - 2Rr_c)\,x_{r_1Md,t}^{(1)}\big)\\ &+ (1 - R)r_c\,x_{r_1ma,t}^{(1)}\,x_{r_2Md,t}^{(1)}\\ &+ x_{r_1Ma,t}^{(1)}\big(R\,x_{r_2ma,t}^{(1)} + R(1 - r_c)\,x_{r_2md,t}^{(1)} + (R + r_c - 2Rr_c)\,x_{r_2Md,t}^{(1)}\big)\\ &+ x_{r_2ma,t}^{(1)}\big(Rr_c\,x_{r_1Md,t}^{(1)} + r_2\,x_{r_2Md,t}^{(1)}\big), \end{aligned}$$

$$\begin{aligned} x_{r_2md,t}^{(2)} ={}& x_{r_2md,t}^{(1)}\big(1 - r_2\,x_{r_2Ma,t}^{(1)} - R\,x_{r_1Md,t}^{(1)} - (R + r_c - 2Rr_c)\,x_{r_1ma,t}^{(1)} - (R + r_c - Rr_c)\,x_{r_1Ma,t}^{(1)}\big)\\ &+ Rr_c\,x_{r_1ma,t}^{(1)}\,x_{r_2Md,t}^{(1)}\\ &+ x_{r_1md,t}^{(1)}\big((R + r_c - 2Rr_c)\,x_{r_2ma,t}^{(1)} + R(1 - r_c)\,x_{r_2Ma,t}^{(1)} + R\,x_{r_2Md,t}^{(1)}\big)\\ &+ x_{r_2ma,t}^{(1)}\big((1 - R)r_c\,x_{r_1Md,t}^{(1)} + r_2\,x_{r_2Md,t}^{(1)}\big), \end{aligned}$$

$$\begin{aligned} x_{r_2Md,t}^{(2)} ={}& x_{r_2Md,t}^{(1)}\big(1 - r_2\,x_{r_2ma,t}^{(1)} - R\,x_{r_1md,t}^{(1)} - (R + r_c - 2Rr_c)\,x_{r_1Ma,t}^{(1)} - (R + r_c - Rr_c)\,x_{r_1ma,t}^{(1)}\big)\\ &+ x_{r_1Md,t}^{(1)}\big(R(1 - r_c)\,x_{r_2ma,t}^{(1)} + (R + r_c - 2Rr_c)\,x_{r_2Ma,t}^{(1)} + R\,x_{r_2md,t}^{(1)}\big)\\ &+ x_{r_2md,t}^{(1)}\big(r_2\,x_{r_2Ma,t}^{(1)} + Rr_c\,x_{r_1Ma,t}^{(1)}\big)\\ &+ (1 - R)r_c\,x_{r_1md,t}^{(1)}\,x_{r_2Ma,t}^{(1)}. \end{aligned}$$
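In expectation, these recursions correspond to the simple gamete-sampling scheme used in our individual-based simulations (see Model, where the equivalence of the two formulations is noted). The sketch below is our own illustrative Python, assuming the locus order recombination–plasticity–target and at most one crossover per interval per meiosis.

```python
import random

def effective_rate(r_i, r_j):
    """Additive recombination phenotype: r_c = (r_i + r_j) / 2."""
    return 0.5 * (r_i + r_j)

def recombine(h1, h2, R):
    """Draw one gamete from parental haplotypes h = (r, plasticity, target).
    Marginally, the rec-plasticity interval recombines at rate R and the
    plasticity-target interval at the additive rate r_c."""
    r_c = effective_rate(h1[0], h2[0])
    g1, g2 = list(h1), list(h2)
    if random.random() < R:      # crossover between recombination and plasticity locus
        g1[1:], g2[1:] = g2[1:], g1[1:]
    if random.random() < r_c:    # crossover between plasticity and target locus
        g1[2], g2[2] = g2[2], g1[2]
    return tuple(random.choice((g1, g2)))
```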
We first analyze this model in the infinite-population limit, neglecting both genetic drift and mutation. To do so, we numerically evolve the discrete-time frequency equations to identify each equilibrium that is polymorphic at both the plasticity and the target locus (defined here as the minor allele frequency not falling below 0.01), irrespective of the allele frequencies at the recombination locus. We focus on such polymorphic equilibria because, in the absence of a steady influx of mutation, recombination evolves only if balanced polymorphism is present at the two loci (i.e., LD is maintained by selection). Next, for each equilibrium identified numerically, we compute the Jacobian matrix of the deterministic system over a full period of fitness oscillations and its leading eigenvalue to determine whether the equilibrium is locally stable.
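A minimal sketch of this stability check, assuming a hypothetical step(x, t, params) function that applies the recursion above for one generation (the function name and the finite-difference Jacobian are our illustration, not necessarily the authors' implementation):

```python
import numpy as np

def numerical_jacobian(step, x, t, params, eps=1e-8):
    """Finite-difference Jacobian of one generation of the recursion at state x."""
    n = len(x)
    J = np.empty((n, n))
    fx = step(x, t, params)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (step(xp, t, params) - fx) / eps
    return J

def is_locally_stable(step, x_eq, t0, C, params, tol=1e-9):
    """Chain the per-generation Jacobians over one full environmental
    period C (the monodromy matrix) and test the leading eigenvalue."""
    M = np.eye(len(x_eq))
    x = x_eq.copy()
    for k in range(C):
        M = numerical_jacobian(step, x, t0 + k, params) @ M
        x = step(x, t0 + k, params)
    leading = np.max(np.abs(np.linalg.eigvals(M)))
    return leading < 1 + tol
```

The product of per-generation Jacobians is the monodromy matrix of the periodically forced system; the equilibrium cycle is locally stable when its leading eigenvalue lies inside the unit circle.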
We studied the deterministic dynamics across all possible pairwise combinations of recombination-rate alleles (to two-digit precision), with environmental periods C = 10, 20, 40, or 80 generations. Theory21,22 predicts that the optimum recombination rate is rather robust to the strength of selection in periodic environments, hence we do not vary the maximum environmental effect size, keeping s max = 0.1 (although we confirm similar results with s max = 0.01). Although genomic storage can occur across various strengths of the plasticity effect20, for simplicity we fix the parameter p = 1; we later relax this assumption for finite-population simulations (Supplementary Fig. 3). Finally, Charlesworth14 noted that there is no single optimum recombination rate for a given periodic environment; rather, the optimum rate increases with tighter linkage between the recombination modifier and the selected loci (smaller R). We therefore set R = 0.5 (an unlinked recombination modifier), as this produces a lower bound on the range of recombination rates that might evolve under genomic storage, making our conclusions about the evolution of recombination conservative. Moreover, assuming R = 0.5 removes any potential effect of the simulated order of loci, since the recombination locus freely recombines with the selected sequence. We initiate allele frequencies at the plasticity and target loci ranging from 0.05 to 0.95 (in 0.1 increments), and we examine all pairs of recombination rates, with each one given the chance to arise as the invader with initial frequency 0.01. We evolve the deterministic system for at least 1000 generations, until either the plasticity or the target locus fixes (minor allele frequency drops below 10^-4) or until the same sequence of haplotype frequencies (to 8-digit accuracy) repeats in two consecutive environmental cycles, which we consider an equilibrium outcome.
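The stopping rule can be sketched as follows (again using the hypothetical step function; marginal_frequencies is an assumed helper that sums haplotype frequencies into allele frequencies at the plasticity and target loci):

```python
import numpy as np

def run_to_equilibrium(step, x0, C, params, min_gen=1000, fix_tol=1e-4, digits=8):
    """Iterate the deterministic recursion until fixation at the plasticity
    or target locus, or until the haplotype-frequency trajectory repeats
    (to `digits` decimal places) over two consecutive environmental cycles."""
    x, t = x0.copy(), 0
    prev_cycle = None
    while True:
        cycle = []
        for _ in range(C):
            x = step(x, t, params)
            t += 1
            cycle.append(np.round(x, digits))
        p_plast, p_target = marginal_frequencies(x)  # hypothetical helper
        if t >= min_gen and min(p_plast, 1 - p_plast,
                                p_target, 1 - p_target) < fix_tol:
            return x, "fixation"
        if t >= min_gen and prev_cycle is not None and \
           all(np.array_equal(a, b) for a, b in zip(cycle, prev_cycle)):
            return x, "equilibrium"
        prev_cycle = cycle
```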
Monte-Carlo simulations in finite populations
To investigate whether a recombination modifier can invade a non-recombining finite population, i.e., in the presence of genetic drift, and to allow recurrent mutation across the range of recombination phenotypes (r ~ U[0, 0.5]) so as to infer a realistic distribution of recombination rates as might occur in nature, we conducted Monte-Carlo simulations. We start each simulation with a population monomorphic at all three loci: no recombination between the plasticity and the target locus, allele m at the plasticity modifier, and allele a at the target locus. Mutation randomly introduces diversity at each locus with probability Nµ per generation. At the recombination modifier locus, the mutant recombination rate is drawn from a uniform distribution, U[0, 0.5]; the other two loci mutate reversibly between alleles m and M, or between alleles a and d. Mutation is followed by recombination and sampling with replacement (drift). Each of the two parents is randomly sampled (with replacement) and retained for reproduction with probability proportional to its fitness relative to the maximum fitness in the population. The plasticity and the target locus recombine between the two parental chromosomes with a probability that depends additively on the alleles carried at the recombination modifier, as described earlier. The recombination modifier locus recombines with the plasticity–target sequence with probability R. Pairs of parents are drawn sequentially until the next generation of N individuals is assembled. Simulations run for a burn-in of 100 N generations (beyond the time at which genetic variance at the target and plasticity loci and the stationary distribution at the recombination locus stabilize) and for an additional 100 N generations, during which we record the stationary distribution of recombination rates (to two-decimal precision).
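One generation of this individual-based scheme might look as follows (a simplified sketch under our own data layout, where each individual is a pair of haplotype tuples (rec_rate, plasticity_allele, target_allele); the fitness function is left abstract):

```python
import random

def mutate(pop, N, mu):
    """With probability N*mu per locus per generation, mutate that locus in
    one randomly chosen haplotype of one randomly chosen individual."""
    for locus in ("rec", "plast", "targ"):
        if random.random() < N * mu:
            ind = random.choice(pop)
            h = random.randrange(2)
            rate, plast, targ = ind[h]
            if locus == "rec":
                rate = random.uniform(0.0, 0.5)   # mutant modifier: r ~ U[0, 0.5]
            elif locus == "plast":
                plast = "M" if plast == "m" else "m"
            else:
                targ = "d" if targ == "a" else "a"
            ind[h] = (rate, plast, targ)

def make_gamete(parent, R):
    """Meiosis with locus order (modifier, plasticity, target): the modifier
    recombines with the sequence with probability R; the plasticity-target
    interval recombines at the additive genotype rate (r1 + r2) / 2."""
    h = list(parent)
    src = random.randrange(2)
    rec = h[src][0]
    if random.random() < R:                          # crossover in interval 1
        src = 1 - src
    plast = h[src][1]
    if random.random() < (h[0][0] + h[1][0]) / 2.0:  # crossover in interval 2
        src = 1 - src
    targ = h[src][2]
    return (rec, plast, targ)

def next_generation(pop, N, mu, R, fitness):
    """Mutation, then fitness-proportional parent sampling with replacement
    (relative to the maximum fitness), then recombination into N offspring."""
    mutate(pop, N, mu)
    w = [fitness(ind) for ind in pop]
    wmax = max(w)
    def draw_parent():
        while True:
            i = random.randrange(len(pop))
            if random.random() < w[i] / wmax:
                return pop[i]
    return [[make_gamete(draw_parent(), R), make_gamete(draw_parent(), R)]
            for _ in range(N)]
```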
The genomic storage effect is stronger when the population size is large, when selection is strong, or when there are recurrent mutations20. We studied only populations of size N = 25,000 with relatively large values of s max, corresponding to strong selection from environmental variation, since this allowed us to efficiently compute the stationary distribution in individual-based simulations. We report the stationary distribution of recombination rates between the plasticity and the target locus, r, under fitness-oscillation cycles C = 10 with s max = 0.25 and 0.5, C = 20 with s max = 0.125 and 0.25, and C = 40 and 80 with s max = 0.15 and 0.075, all under p = 1, in an ensemble of over one thousand Monte-Carlo simulations. To demonstrate that recombination also evolves under weaker absolute selection in larger populations and with p < 1, we also conducted simulations based on the deterministic recursion given earlier (competing two recombination rates), but with multinomial sampling (reproduction/drift) of haplotype frequencies. Here we examined N = 10^5 or 10^6 with s max = 0.01, 0.02, 0.03, or 0.05 and p = 1, and N = 10^6 with s max = 0.5 and p = 0, 0.1, 0.25, 0.5, or 1, all with R = 0.01, 0.1, or 0.5, in over 40,000 replicate simulations.
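The multinomial-sampling variant mentioned above needs only one extra step per generation: apply the deterministic recursion to the eight haplotype frequencies, then resample. A sketch (step is again the hypothetical deterministic recursion):

```python
import numpy as np

rng = np.random.default_rng()

def wright_fisher_step(x, t, N, params):
    """Deterministic recursion followed by multinomial resampling of the
    2N haplotypes of N diploids, i.e., reproduction with drift."""
    x_det = step(x, t, params)              # expected haplotype frequencies
    counts = rng.multinomial(2 * N, x_det)  # drift
    return counts / (2 * N)
```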
We also study a genetic system similar to the one above, but with two non-epistatic target loci whose effects are modulated by a single plasticity modifier locus, and with a recombination locus that modifies the recombination rate between them. In this context we are especially interested in the evolution of the recombination rate among the target loci themselves. We study two scenarios: simultaneous evolution of the recombination rates between the plasticity and the first target locus and between the two target loci, and a sequential scenario. In the first scenario, a population migrates to the new periodic habitat, and the two recombination rates, each initially monomorphic and drawn from U[0, 0.5], subsequently evolve. In the second scenario, a population starts at the equilibrium of the three-locus model above, where the initial target locus recombines with the plasticity modifier at the evolutionarily stable (ES) recombination rate, while the newly arisen polymorphic target locus has a random initial recombination rate with the existing target locus; the new locus is placed either between the plasticity modifier and the first target locus (for C = 10 or 20) or further downstream of the first target locus (C = 10, 20, 40, or 80). In both scenarios, we assume no epistasis in fitness between the multiple target loci. We postulate that if the two target loci are controlled by the same plasticity locus, then the target loci will evolve to the same recombination rate with the joint plasticity locus and thus cluster together. Additionally, in the simultaneous-coevolution model we also examine the distance between target loci even when they would not gravitate to the same recombination rate with the plasticity locus, i.e., when the target loci are not equally distant from the plasticity modifier.
We obtain the distribution of recombination rates between the two target loci, r′, under fitness-oscillation cycles C = 10 with s max = 0.25 and 0.5, C = 20 with s max = 0.125 and 0.25, and C = 40 and 80 with s max = 0.075 and 0.15, all under p = 1. Simulations run for a burn-in of 100 N generations and for an additional 100 N generations, during which we record the stationary distribution of recombination rates (to two-decimal precision).
A model with more than two target loci may exhibit qualitatively different behavior from the two-locus case in finite populations. This is because polymorphism at multiple loci may interfere: it reduces the relative contribution of each locus to phenotypic variation, making each locus "less visible" to selection. We explore clustering among three target loci in two scenarios: one where all recombination rates evolve simultaneously, and one in which each novel polymorphic target locus is introduced sequentially at a random recombination distance (~U[0, 0.5]) from an existing cluster at equilibrium. This "sequential growth" model assumes that mutations introducing polymorphism at a new locus affecting the seasonal trait are rare, so that the existing cluster reaches equilibrium before any new polymorphism arises.
For a small set of parameters (with C = 20), we explore the evolution of polymorphic clusters of multiple loci acting in unison (i.e., supergenes, e.g., in butterflies53 or fire ants54), using the sequential growth model. Each novel polymorphic target locus is, again, introduced into a population at equilibrium in which an n-locus cluster has already formed (with n = 3, 4, 5, 6, or 7, and s max = 0.075 and 0.1). In effect, this generates a sequence of two-locus (super-locus and new locus) clustering events. Note that we do not track the time until supergenes are formed, as each simulation run is conducted for 100 N burn-in generations beyond the time when the stationary distribution is reached, and another 10,000 generations during which we record the stationary distribution of the recombination rate among target loci.
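The sequential growth scheme reduces to a simple outer loop (a sketch; run_to_stationarity stands in for the finite-population machinery above, and all names are our own):

```python
import random

def grow_cluster(n_final, run_to_stationarity):
    """Sequential growth: introduce one polymorphic target locus at a time,
    at a random initial recombination distance from the existing cluster,
    only after the current cluster has reached its stationary distribution."""
    state = run_to_stationarity(n_loci=1, start_from=None, new_locus_rate=None)
    for n in range(2, n_final + 1):
        r_init = random.uniform(0.0, 0.5)   # distance of the newly added locus
        state = run_to_stationarity(n_loci=n, start_from=state,
                                    new_locus_rate=r_init)
    return state
```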
Source code is available online at https://github.com/davorka/RecombinationGS.
This code was used to produce all data in this study.
Otto, S. P. Evolutionary enigma of sex. Am. Nat. 174, S1–S14 (2009).
Chinnici, J. P. Modification of recombination frequency in Drosophila. I. Selection for increased and decreased crossing over. Genetics 69, 71–83 (1971).
Charlesworth, B. & Charlesworth, D. Genetic variation in recombination in Drosophila. I. Responses to selection and preliminary genetic analysis. Heredity 54, 71–83 (1985).
Charlesworth, B. & Charlesworth, D. Genetic variation in recombination in Drosophila. II. Genetic analysis of a high recombination stock. Heredity 54, 85–98 (1985).
Brooks, L. D. & Marks, R. W. The organization of genetic-variation for recombination in Drosophila melanogaster. Genetics 114, 525–547 (1986).
Brooks, L. in The Evolution of Sex: An Examination of Current Ideas (eds Michod, R. E. & Levin, B. R.) Ch. 4 (Sinauer, Sunderland, MA, 1988).
Williams, C. G., Goodman, M. M. & Stuber, C. W. Comparative recombination distances among Zea mays L. inbreds, wide crosses and interspecific hybrids. Genetics 141, 1573–1581 (1995).
Kong, A. et al. A high-resolution recombination map of the human genome. Nat. Genet. 31, 241–247 (2002).
Ji, Y. F., Stelly, D. M., De Donato, M., Goodman, M. M. & Williams, C. G. A candidate recombination modifier gene for Zea mays L. Genetics 151, 821–830 (1999).
Kong, A. et al. Sequence variants in the RNF212 gene associate with genome-wide recombination rate. Science 319, 1398–1401 (2008).
Korol, A. B. & Iliadi, K. G. Recombination increase resulting from directional selection for geotaxis in Drosophila. Heredity 72, 64–68 (1994).
Charlesworth, B. Directional selection and the evolution of sex and recombination. Genet. Res. 61, 205–224 (1993).
Barton, N. H. A general model for the evolution of recombination. Genet. Res. 65, 123–144 (1995).
Charlesworth, B. Recombination modification in a fluctuating environment. Genetics 83, 181–195 (1976).
Hill, W. G. & Robertson, A. The effect of linkage on the limits to artificial selection. Genet. Res. 8, 269–294 (1966).
Iles, M. M., Walters, K. & Cannings, C. Recombination can evolve in large finite populations given selection on sufficient loci. Genetics 165, 2249–2258 (2003).
Keightley, P. D. & Otto, S. P. Interference among deleterious mutations favours sex and recombination in finite populations. Nature 443, 89–92 (2006).
Hamilton, W. D. Sex vs. non-sex vs. parasite. Oikos 35, 282–290 (1980).
Bell, G. The Masterpiece of Nature: the Evolution and Genetics of Sexuality (University of California Press, Berkeley, 1982).
Gulisija, D., Kim, Y. & Plotkin, J. B. Phenotypic plasticity promotes balanced polymorphism in periodic environments by a genomic storage effect. Genetics 202, 1437–1448 (2016).
Sasaki, A. & Iwasa, Y. Optimal recombination rate in fluctuating environments. Genetics 115, 377–388 (1987).
Gandon, S. & Otto, S. P. The evolution of sex and recombination in response to abiotic or coevolutionary fluctuations in epistasis. Genetics 175, 1835–1853 (2007).
Hedrick, P. W. Genetic variation in a heterogeneous environment. II. Temporal heterogeneity and directional selection. Genetics 84, 145–157 (1976).
Hedrick, P. W. Genetic polymorphism in heterogeneous environments: a decade later. Annu. Rev. Ecol. Syst. 17, 535–566 (1986).
Gillespie, J. The Causes of Molecular Evolution (Oxford Univ. Press, New York, 1991).
Donelson, J. E. Mechanisms of antigenic variation in Borrelia hermsii and African trypanosomes. J. Biol. Chem. 270, 7783–7786 (1995).
Barbour, A. G. & Restrepo, B. I. Antigenic variation in vector-borne pathogens. Emerg. Infect. Dis. 6, 449–457 (2000).
Kusch, J. & Schmidt, H. J. Genetically controlled expression of surface variant antigens in free-living protozoa. J. Membr. Biol. 180, 101–109 (2001).
Otto, S. P. & Nuismer, S. N. Species interactions and the evolution of sex. Science 304, 1018–1020 (2004).
West-Eberhard, M. J. Developmental Plasticity and Evolution (Oxford Univ. Press, Oxford, UK, 2003).
Price, T. D. Phenotypic plasticity, sexual selection and the evolution of colour patterns. Am. Nat. 172, S1–S3 (2006).
Lande, R. Adaptation to an extraordinary environment by evolution of phenotypic plasticity and genetic assimilation. J. Evol. Biol. 22, 1435–1446 (2009).
Scheiner, S. M. Genetics and evolution of phenotypic plasticity. Annu. Rev. Ecol. Syst. 24, 35–68 (1993).
Stratton, D. A. Reaction norm function and QTL-environment interactions for flowering time in Arabidopsis thaliana. Heredity 81, 144–155 (1998).
Leips, J. & Mackay, T. F. Quantitative trait loci for life span in Drosophila melanogaster: interactions with genetic background and larval density. Genetics 155, 1773–1788 (2000).
Bergland, A. O., Genissel, A., Nuzhdin, S. V. & Tatar, M. Quantitative trait loci affecting phenotypic plasticity and the allometric relationship of ovariole number and thorax length in Drosophila melanogaster. Genetics 180, 567–582 (2008).
Bergland, A. O., Behrman, E. L., O'Brien, K. R., Schmidt, P. S. & Petrov, D. A. Genomic evidence of rapid and stable adaptive oscillations over seasonal time scales in Drosophila. PLoS Genet. 10, e1004775 (2014).
Thompson, M. J. & Jiggins, C. D. Supergenes and their role in evolution. Heredity 113, 1–8 (2014).
Charlesworth, D. The status of supergenes in the 21st century: recombination suppression in Batesian mimicry and sex chromosomes and other complex adaptations. Evol. Appl. 9, 74–90 (2016).
Charlesworth, D. & Charlesworth, B. Theoretical genetics of Batesian mimicry II. Evolution of supergenes. J. Theor. Biol. 55, 305–324 (1975).
Tetard-Jones, C., Kortesz, M. A. & Preziosi, R. F. Quantitative trait loci mapping on phenotypic plasticity and genotype-environment interactions in plant and insect performance. Philos. Trans. R. Soc. Lond. B Biol. Sci. 366, 1368–1379 (2011).
DeWitt, T. J., Sih, A. & Wilson, D. S. Costs and limits of phenotypic plasticity. Trends Ecol. Evol. 13, 77–81 (1998).
Krebs, R. A. & Feder, M. E. Natural variation in the expression of the heat-shock protein HSP70 in a population of Drosophila melanogaster and its correlation with tolerance of ecologically relevant thermal stress. Evolution 51, 173–179 (1997).
de Visser, J. A. G. M. et al. Perspective: evolution and detection of genetic robustness. Evolution 57, 1959–1972 (2003).
Peters, A. D. & Lively, C. M. The Red Queen and fluctuating epistasis: a population genetic analysis of antagonistic coevolution. Am. Nat. 154, 393–405 (1999).
Felsenstein, J. The theoretical population genetics of variable selection and migration. Annu. Rev. Genet. 10, 253–280 (1976).
Lynch, M. The consequences of fluctuating selection for isozyme polymorphisms in Daphnia. Genetics 115, 657–669 (1987).
Cain, A., Cook, L. & Currey, J. Population size and morph frequency in a long-term study of Cepaea nemoralis. Proc. R. Soc. Lond. B Biol. Sci. 240, 231–250 (1990).
Turelli, M., Schemske, D. W. & Bierzychudek, P. Stable two-allele polymorphisms maintained by fluctuating fitnesses and seed banks: protecting the blues in Linanthus parryae. Evolution 55, 1283–1298 (2001).
Stoltzfus, A. On the possibility of constructive neutral evolution. J. Mol. Evol. 49, 169–181 (1999).
Force, A. et al. Preservation of duplicate genes by complementary, degenerative mutations. Genetics 151, 1531–1545 (1999).
Taylor, J. S. & Raes, J. Duplication and divergence: the evolution of new genes and old ideas. Annu. Rev. Genet. 38, 615–643 (2004).
Joron, M. et al. Chromosomal rearrangements maintain a polymorphic supergene controlling butterfly mimicry. Nature 477, 203–206 (2011).
Wang, J. et al. A Y-like social chromosome causes alternative colony organization in fire ants. Nature 493, 664–668 (2013).
Coyne, J. A., Meyers, W., Crittenden, A. P. & Sniegowski, P. The fertility effects of pericentric inversions in Drosophila melanogaster. Genetics 134, 487–496 (1993).
Winkler, G., Dodson, J. J. & Lee, C. E. Heterogeneity within the native range: population genetic analyses of sympatric invasive and noninvasive clades of the freshwater invading copepod Eurytemora affinis. Mol. Ecol. 17, 415–430 (2008).
Gulisija, D. & Kim, Y. Emergence of long-term balanced polymorphism under cyclic selection of spatially variable magnitude. Evolution 69, 979–992 (2015).
Gomulkiewicz, R. & Kirkpatrick, M. Quantitative genetics and the evolution of reaction norms. Evolution 46, 390–411 (1992).
Lynch, M. & Lande, R. in Biotic Interactions and Global Change (eds Kareiva, P. Kingsolver, J. G. & Huey, R. B.) Ch. 12 (Sinauer, Sunderland, MA, 1993).
Lande, R. & Shannon, S. The role of genetic variation in adaptation and population persistence in a changing environment. Evolution 50, 434–437 (1996).
Colosimo, P. F. et al. Widespread parallel evolution in sticklebacks by repeated fixation of ectodysplasin alleles. Science 307, 1928–1933 (2005).
Barrett, R. D. & Schluter, D. Adaptation from standing genetic variation. Trends Ecol. Evol. 23, 38–44 (2008).
Gessler, D. D. G. & Xu, S. Meiosis and the evolution of recombination at low mutation rates. Genetics 156, 449–456 (2000).
Hadany, L. & Beker, T. On the evolutionary advantage of fitness-associated recombination. Genetics 165, 2167–2179 (2003).
Agrawal, A. F., Hadany, L. & Otto, S. P. The evolution of plastic recombination. Genetics 171, 803–812 (2005).
Ram, Y. & Hadany, L. Condition-dependent sex: who does it, when and why? Phil. Trans. R. Soc. B 371, 20150539 (2016).
Mostowy, R. & Engelstadter, J. Host-parasite coevolution induces selection for condition-dependent sex. J. Evol. Biol. 25, 2033–2046 (2012).
This research was performed using resources and the computing assistance of the University of Wisconsin (UW)–Madison Center for High Throughput Computing (CHTC) in the Department of Computer Sciences. The CHTC is supported by UW–Madison and the Wisconsin Alumni Research Foundation and is an active member of the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science. J.B.P. acknowledges support from the David and Lucile Packard Foundation, the U.S. Department of the Interior (D12AP00025), and the U.S. Army Research Office (W911NF-12-1-0552).
Department of Biology, University of Pennsylvania, Philadelphia, PA, 19104, USA
Davorka Gulisija & Joshua B. Plotkin
Davorka Gulisija
Joshua B. Plotkin
D.G. proposed the idea and both authors designed the study. D.G. conducted computational work and both authors wrote the manuscript.
Correspondence to Davorka Gulisija.
The authors declare that they have no competing financial interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Gulisija, D., Plotkin, J.B. Phenotypic plasticity promotes recombination and gene clustering in periodic environments. Nat Commun 8, 2041 (2017). https://doi.org/10.1038/s41467-017-01952-z
DOI:10.1007/s10955-013-0731-y
Continuous Particles in the Canonical Ensemble as an Abstract Polymer Gas
@article{Morais2013ContinuousPI,
  title={Continuous Particles in the Canonical Ensemble as an Abstract Polymer Gas},
  author={Thiago Morais and Aldo Procacci},
  journal={Journal of Statistical Physics},
  volume={151},
  pages={830-849},
  year={2013}
}
Thiago Morais, A. Procacci
Journal of Statistical Physics
We revisit the expansion recently proposed by Pulvirenti and Tsagkarogiannis for a system of N continuous particles in the Canonical Ensemble. Under the sole assumption that the particles interact via a tempered and stable pair potential and are subjected to the usual free boundary conditions, we show the analyticity of the Helmholtz free energy at low densities and, using the Penrose tree graph identity, we establish a lower bound for the convergence radius which happens to be identical to the…
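For orientation, the series whose convergence radius is at issue here is the virial expansion of the pressure in powers of the density (standard form, our addition for readability):

$$\beta p(\rho) = \rho + \sum_{n \geq 2} B_n(\beta)\,\rho^{\,n},$$

and analyticity of the Helmholtz free energy at low densities amounts to a positive radius of convergence for the corresponding density expansion.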
The virial series for a gas of particles with uniformly repulsive pairwise interaction and its relation with the approach to the mean field
D. Marchetti
The pressure of a gas of particles with a uniformly repulsive pair interaction in a finite container is shown to satisfy (exactly as a formal object) a "viscous" Hamilton–Jacobi (H–J) equation whose…
On the Virial Series for a Gas of Particles with Uniformly Repulsive Pairwise Interaction
D. Brydges, D. Marchetti
The pressure of a gas of particles with a uniformly repulsive pair interaction in a finite container is shown to satisfy (exactly as a formal object) a "viscous" Hamilton-Jacobi (H-J) equation whose…
Convergence of Density Expansions of Correlation Functions and the Ornstein–Zernike Equation
T. Kuna, D. Tsagkarogiannis
We prove absolute convergence of the multi-body correlation functions as a power series in the density uniformly in their arguments. This is done by working in the context of the cluster expansion in…
Annales Henri Poincaré
Convergence of Mayer and Virial expansions and the Penrose tree-graph identity
A. Procacci, Sergio A. Yuhjtman
We establish new lower bounds for the convergence radius of the Mayer series and the Virial series of a continuous particle system interacting via a stable and tempered pair potential. Our bounds…
Letters in Mathematical Physics
Virial Expansion Bounds
S. Tate
In the 1960s, the technique of using cluster expansion bounds in order to achieve bounds on the virial expansion was developed by Lebowitz and Penrose (J. Math. Phys. 5:841, 1964) and Ruelle…
A Solution to the Combinatorial Puzzle of Mayer's Virial Expansion
Mayer's second theorem in the context of a classical gas model allows us to write the coefficients of the virial expansion of pressure in terms of weighted two-connected graphs. Labelle, Leroux and…
Cluster Expansion for the Ising Model in the Canonical Ensemble
Giuseppe Scola
Mathematical Physics, Analysis and Geometry
We show the validity of the cluster expansion in the canonical ensemble for the Ising model. We compare the lower bound of its radius of convergence with the one computed by the virial expansion…
Virial Expansion Bounds Through Tree Partition Schemes
S. Ramawadh, S. Tate
The bound on the radius of convergence in the case of the Penrose partition scheme is the same as that proposed by Groeneveld and improves the bound achieved by Lebowitz and Penrose.
Polymer Gas Approach to N-Body Lattice Systems
A. Procacci, B. Scoppola
We give a simple proof, based only on combinatorial arguments, of the Kotecký–Preiss condition for the convergence of the cluster expansion. Then we consider spin systems with long-range N-body…
Cluster Expansion in the Canonical Ensemble
Elena Pulvirenti, D. Tsagkarogiannis
We consider a system of particles confined in a box $${\Lambda \subset \mathbb{R}^d}$$ interacting via a tempered and stable pair potential. We prove the validity of the cluster expansion for the…
Convergence of Fugacity Expansions for Fluids and Lattice Gases
O. Penrose
Upper and lower bounds are obtained for R(V), the radius of convergence of the Mayer expansion VΣl bl(V)zl expressing the logarithm of the classical grand partition function for a finite volume V as…
On the Convergence of Cluster Expansions for Polymer Gases
R. Bissacot, R. Fernández, A. Procacci
We compare the different convergence criteria available for cluster expansions of polymer gases subjected to hard-core exclusions, with emphasis on polymers defined as finite subsets of a countable…
Mayer and Virial Series at Low Temperature
S. Jansen
We analyze the Mayer and virial series (pressure as a function of the activity resp. the density) for a classical system of particles in continuous configuration space at low temperature. Particles…
Convergence of virial expansions
J. Lebowitz, O. Penrose
Some bounds are obtained on R(V), the radius of convergence of the density expansion for the logarithm of the grand partition function of a system of interacting particles in a finite volume V, and…
General properties of polymer systems
C. Gruber, H. Kunz
We prove the existence of the thermodynamic limit for the pressure and show that the limit is a convex, continuous function of the chemical potential.The existence and analyticity properties of the…
The Remainder in Mayer's Fugacity Series
Upper and lower bounds are obtained for the remainder after a finite number of terms of the expansions in powers of fugacity z for the pressure p, the s‐particle distribution functions, the density…
Contribution to Statistical Mechanics
J. Mayer
The method of the grand partition function may be used to calculate distribution functions Fn(z, {n}) proportional to the probability that n molecules in a system of fugacity z, and fixed…
Erratum and Addendum: "Abstract Polymer Models with General Pair Interactions"
A. Procacci
…V(γi, γj) into a purely hard-core part U(γi, γj) plus a non-hard-core part W(γi, γj). This last potential W(γi, γj) is zero for incompatible pairs and must satisfy the stability condition (2.6) of…
Poster session 2
A Budget-Balanced Tolling Scheme for Efficient Equilibria under Heterogeneous Preferences
Gabriel de O. Ramos, Roxana Radulescu, Ann Nowé
Gabriel de Oliveira Ramos
Multiagent reinforcement learning has shown its potential for tackling real-world problems, like traffic. We consider the toll-based route choice problem, where self-interested drivers need to repeatedly choose routes that minimise their travel times. A major challenge here is to deal with agents' selfishness when competing for a common resource, as they tend to converge to an equilibrium substantially far from the optimum. In traffic, this translates into higher congestion levels. Road tolls have been advocated as a means to tackle this issue, though typically assuming that (i) drivers have homogeneous preferences, and that (ii) collected tolls are kept by the traffic authority. In this paper, we propose Generalised Toll-based Q-learning (GTQ-learning), a multiagent reinforcement learning algorithm capable of realigning agents' heterogeneous preferences with respect to travel time and monetary expenses (a schematic update is sketched after the keywords below). GTQ-learning neutralises agents' preferences, thus ensuring that congestion levels are minimised regardless of agents' selfishness levels. Furthermore, GTQ-learning achieves approximate budget balance by redistributing a fraction of the collected tolls. We perform a theoretical analysis of GTQ-learning, showing that it leads agents to a system-efficient equilibrium, and provide empirical results evidencing that GTQ-learning minimises congestion on realistic road networks.
multiagent reinforcement learning, route choice, marginal-cost tolling
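As a rough illustration of the kind of toll-sensitive update described in this abstract (our sketch, not the authors' exact rule; the preference weight pref and the rebate term are assumed names):

```python
def gtq_update(Q, route, travel_time, toll, rebate, pref, alpha=0.1):
    """Stateless Q-learning for repeated route choice: the perceived cost
    mixes travel time and net monetary expense (toll minus redistributed
    share) according to the driver's individual preference."""
    cost = (1 - pref) * travel_time + pref * (toll - rebate)
    Q[route] += alpha * (-cost - Q[route])
    return Q
```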
A Hierarchical Two-tier Approach to Hyper-parameter Optimization in Reinforcement Learning
Juan Cruz Barsce, Jorge Palombarini, Ernesto Martínez
Juan Cruz Barsce
Optimization of hyper-parameters in reinforcement learning (RL) algorithms is a key task, because they determine how the agent will learn its policy by interacting with its environment, and thus what data is gathered. In this work, an approach that uses Bayesian optimization to perform a two-step optimization is proposed: first, categorical RL structure hyper-parameters are taken as binary variables and optimized with an acquisition function tailored for such variables. Then, at a lower level of abstraction, solution-level hyper-parameters are optimized by resorting to the expected improvement acquisition function, while using the best categorical hyper-parameters found at the upper level of abstraction (a schematic two-tier loop is sketched after the keywords below). This two-tier approach is validated in a simulated task.
reinforcement learning, hyper-parameter optimization, bayesian optimization
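A minimal sketch of such a two-tier loop using scikit-optimize (our illustration; train_rl_agent and the hyper-parameter names are assumptions, and the upper tier here uses plain GP optimization rather than the tailored acquisition function the abstract mentions):

```python
from skopt import gp_minimize
from skopt.space import Categorical, Real

def inner_objective(structure):
    # Lower tier: tune solution-level hyper-parameters with expected
    # improvement, given the structural choices fixed by the upper tier.
    def score(params):
        alpha, epsilon = params
        return -train_rl_agent(structure, alpha=alpha, epsilon=epsilon)
    res = gp_minimize(score,
                      [Real(1e-4, 1.0, prior="log-uniform"),  # learning rate
                       Real(0.01, 0.5)],                      # exploration rate
                      acq_func="EI", n_calls=20)
    return res.fun

# Upper tier: categorical/binary structure hyper-parameters.
outer = gp_minimize(inner_objective,
                    [Categorical([True, False]),            # e.g., traces on/off
                     Categorical(["e-greedy", "softmax"])],
                    n_calls=15)
```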
A Novel Deviation Bound via Mutual Information for Cross-Entropy Loss
Matias Vera, Pablo Piantanida and Leonardo Rey Vega
Matias Alejandro Vera
Machine learning theory has mostly focused on generalization to samples from the same distribution as the training data. However, a better understanding of generalization beyond the training distribution, where the observed distribution changes, is also fundamentally important to achieve a more powerful form of generalization. In this paper, we attempt to study, through the lens of information measures, how a particular architecture behaves when the true probability law of the samples is potentially different at training and testing times. Our main result is that the testing gap between the empirical cross-entropy and its statistical expectation (measured with respect to the testing probability law) can be bounded, with high probability, by the mutual information between the input testing samples and the corresponding representations generated by the encoder obtained at training time. These results of a theoretical nature are supported by numerical simulations showing that the mentioned mutual information is representative of the testing gap, capturing qualitatively the dynamic in terms of the hyperparameters of the network.
mutual information, deviation bound, generalization
Anatomical Priors for Image Segmentation via Post-Processing with Denoising Autoencoders
Agostina Larrazabal, César Martinez, Enzo Ferrante
"We introduce Post-DAE, a post-processing method based on denoising autoencoders to improve the anatomical plausibility of arbitrary biomedical image segmentation algorithms. Some of the most popular segmentation methods still rely on post-processing strategies like conditional random fields to incorporate connectivity constraints into the resulting masks. Even if it is a valid assumption in general, these methods do not offer a straightforward way to incorporate more complex priors like convexity or arbitrary shape restrictions. Post-DAE leverages the latest developments in manifold learning via denoising autoencoders. We learn a low-dimensional space of anatomically plausible segmentations, and use it to impose shape constraints by post-processing anatomical segmentation masks obtained with arbitrary methods. Our approach is independent of image modality and intensity information since it employs only segmentation masks for training. We performed experiments in segmentation of chest X-ray images. Our experimental results show that Post-DAE can improve the quality of noisy and incorrect segmentation masks obtained with a variety of standard methods, by bringing them back to a feasible space, with almost no extra computational cost."
anatomical segmentation, autoencoders, post-processing
Vehicle speed assistant as a control agent in urban environments (Asistente de velocidad vehicular como agente de control en entornos urbanos)
Rodrigo Velázquez
Rodrigo Manuel Velázquez Galeano
This work proposes the model of a vehicle speed assistant system capable of identifying coordinates in the vehicle and comparing them, by means of a web app implementing Google Maps APIs, with coordinates stored in a database of urban speed-limit marks, alerting the driver whenever a limit is exceeded along the route. This can contribute substantially to this line of research and noticeably improve the prospects of implementing, in the not-too-distant future, a car fully assisted by a computer, based on the principles mentioned here.
assistant, speed, maps
Assisted Optimal Transfer of Excitation Energy by Deep Reinforcement Learning
Joseph Vergel-Becerra and Leonardo A. Pachón
Joseph Vergel
"The high efficiency of energy transfer is one of the main motivations in the study of light-harvesting systems. The accurate description of these complexes can be formulated in the framework of open quantum systems which comprises the interaction among their fundamental units called chromophores and the interaction with the environment. Maximizing energy transfer involves optimally controlled system dynamics and at the same time, getting optimal configurations that achieve this objective. Therefore, this research proposes the implementation of reinforcement learning (RL) as a mechanism for quantum optimal control of excitation energy transfer (EET) in light-harvesting systems and, in turn, obtaining configurations that maximize efficiency through a classical agent that even can tolerate environments with high noise levels. "
reinforcement learning, open quantum systems, excitation energy
BERT's behavior evaluation using stress tests
Vladimir Araujo, Carlos Aspillaga
Vladimir Araujo
"Recently, several machine learning based models have been proposed for Natural Language Processing (NLP), achieving outstanding results, by using powerful architectures like "Transformer" (Vaswani et al., 2017) and pretraining on large text corpus, as is the case of BERT (Devlin et al., 2018). However, it has been shown that language models are fragile (they are easily broken) and biased (instead of an actual comprehension of the text, they tend to take advantage of data biases). To the best of our knowledge, this is the first time a Transformer-based model is systematically put to test."
natural language processing, language models, evaluation
Biomarker discovery on multi-omic data using Kernel Learning and Autoencoders
Martin Palazzo, Patricio Yankilevich, Pierre Beauseroy
Martin Palazzo
Molecular data from cancer patients are characterized by tens of thousands of gene features and by different modalities or 'omics', like genomics, transcriptomics and proteomics. These systems are also labeled by clinical information like patient survival, tumor stage and tumor subtype. The initial high-dimensional input space is noisy and makes it complicated to find useful patterns, like similarities between tumor types and subtypes. For clinical reasons, this work aims to learn meaningful, lower-dimensional representations of tumors that keep biological signals and contribute to classifying tumor subtype or stage, using Variational Autoencoders (VAE) and Kernelized Autoencoders (KAE). Then a feature selection strategy based on Multiple Kernel Learning is executed, with the objective of approximating, as much as possible, the representation based on the selected features to the one learned by the autoencoders. Selected features are also evaluated to classify tumor samples based on clinical labels and to discover tumor subtypes. Preliminary results show that the learned representations drive the selection of meaningful genes associated with the clinical outcome of the patient and thus provide evidence for potential biomarkers.
kernel learning, autoencoders, cancer genomics
Bottom-Up Meta-Policy Search
Luckeciano C Melo, Marcos Máximo, Adilson Cunha
Luckeciano
Despite recent progress in agents that learn through interaction, several challenges remain in terms of sample efficiency and generalization to behaviors unseen during training. To mitigate these problems, we propose and apply a first-order meta-learning algorithm called Bottom-Up Meta-Policy Search (BUMPS), which works with a two-phase optimization procedure: first, in a meta-training phase, it distills a few expert policies to create a meta-policy capable of generalizing knowledge to tasks unseen during training; second, it applies a fast adaptation strategy named Policy Filtering, which evaluates a few policies sampled from the meta-policy distribution and selects the one that best solves the task. We conducted all experiments in the RoboCup 3D Soccer Simulation domain, in the context of kick motion learning. We show that, given our experimental setup, BUMPS works in scenarios where simple multi-task reinforcement learning does not. Finally, we performed experiments to evaluate each component of the algorithm.
imitation learning, meta-learning, robotics
Classification of SAR Images using Information Theory
Eduarda T. C. Chagas, Alejandro C. Frery and Heitor S Ramos
Eduarda Tatiane Caetano Chagas
The classification of regions, especially urban areas, in polarimetric synthetic aperture radar (PolSAR) data is a challenging task. Texture analysis has great informational power regarding the spatial properties of the main elements of the image, being one of the most important techniques in image processing and pattern recognition. The first task of this analysis is the extraction of discriminant features capable of efficiently incorporating information about the characteristics of the original image. Based on this principle, in this paper we propose a new classification technique. Through the analysis of the textures of these images, ordinal pattern transition graphs, and information-theory descriptors, we achieve a high discriminatory power in the characterization and classification of the regions under study.
sar image, classification, information theory
Clustering of climate time series
Y. Barrera, M. Jonckheere, V. Lefieux, D. Picard, A. Umfurer , E. Smucler,
Matthieu Jonckheere
Fluctuations in temperature have a strong influence on electricity consumption. As a consequence, identifying and finding groups of possible climate scenarios is useful for the analysis of the electric supply system. The scenario data we consider are time series of hourly measured temperatures over a grid of geographical points in France and neighboring areas, used by the French company RTE. Clustering techniques are useful for finding homogeneous groups of time series, but the challenge is to find a suitable data transformation and distance metric. In this work, we used several transformations (Fourier, wavelets, autoencoders) and distance metrics (DTW and Euclidean, among others) and found consistent groups of climate scenarios using clustering techniques. We give several performance indicators and find that k-shape performs best according to some of them.
clustering, performance, time series
Complex Data Relevance Analysis for Event Detection
Caroline Mazini Rodrigues, Luis Pereira, Anderson Rocha, Zanoni Dias
Caroline Mazini Rodrigues
Considering the occurrence of an event with high social impact, it is important to establish a space-time relation among the available information and so answer questions about the event such as "who", "how", "where" and "why". This work is part of the thematic FAPESP project "DéjàVu: Feature-Space-Time Coherence from Heterogeneous Data for Media Integrity Analytics and Interpretation of Events". It proposes to determine, from data collected on social networks, the relevance of each item to the analyzed event, allowing the correct construction of relationships among these data in a later analysis phase. The main challenges of this work stem from the characteristics of the data: heterogeneity, as they come from different sources; multi-modality, such as texts, images and videos; unlabeled data, as they do not carry a straightforward relevance label for the event; and unstructured data, as they do not possess characteristics that could be used directly during learning.
event detection, data mining, features engineering
Conceptual Attention Networks for Action Recognition
Andrés Espinosa, Alain Raymond, Julio Hurtado
Alain Raymond
"We introduce Concept Attention Networks (CAN) for Action Recognition. CANs seek to provide more interpretability by providing attention for both visual features as well as concepts associated to the action we want to recognize. CANs are modelled on the MAC architecture - which has produced great results on VQA through the use of sequential reasoning- with two main differences: 1) The knowledge base is modified to take video features. 2) We introduce attention over concepts via an auxiliary task that tries to guess the concepts associated to the predicted class on each reasoning step. We expect that taking visual features and word features to the same space might provide both similar accuracy as well as greater interpretability; since CAN - as the MAC architecture on which it is based- divides its reasoning in steps, we are able to see on which parts of the video and on which concepts the model is focusing to generate its predictions. We present results on the Something to Something v2 dataset against a C3D baseline. "
attention, action recognition, sequential reasoning
Deep Reinforcement Learning for Humanoid Walking
Dicksiano Carvalho Melo, Adilson Marques da Cunha, Marcos Ricardo Omena de Albuquerque Máximo
Dicksiano Carvalho Melo
"The work consists in applying Deep Reinforcement Learning algorithms in order to the improve a robot's walking engine. Therefore, the final goal is to implement a Push Recovery Controller, which is a bio-inspired controller that stabilizes the agent under external perturbations in order to achieve a more stable and also faster walking movement. Proximal Policy Optimization algorithm has already been used in different domains and had success to solve many Continuous Control problems, being considered one of the state-of-art techniques of Deep Reinforcement Learning, therefore this is the main technique used in this work. Given the nature of Policy Gradient methods, we applied distributed training in order to Speed Up the learning process. We have used Intel AI DevCloud Cluster in order to have many agents running in parallel."
deep reinforcement learning, humanoid walking, robotics
Multitask Learning on Graph Neural Networks: Learning Multiple Graph Centrality Measures with a Unified Network
Pedro HC Avelar, Marcelo OR Prates, Henrique Lemos, Luis C Lamb
Pedro Henrique da Costa Avelar
The application of deep learning to symbolic domains remains an active research endeavour. Graph neural networks (GNN), consisting of trained neural modules which can be arranged in different topologies at run time, are sound alternatives to tackle relational problems which lend themselves to graph representations. In this paper, we show that GNNs are capable of multitask learning, which can be naturally enforced by training the model to refine a single set of multidimensional embeddings and decode them into multiple outputs by connecting MLPs at the end of the pipeline. We demonstrate the multitask learning capability of the model in the relevant relational problem of estimating network centrality measures, focusing primarily on producing rankings based on these measures. We then show that a GNN can be trained to develop a lingua franca of vertex embeddings from which all relevant information about any of the trained centrality measures can be decoded. The proposed model achieves 89% accuracy on a test dataset of random instances with up to 128 vertices and is shown to generalise to larger problem sizes. The model is also shown to obtain reasonable accuracy on a dataset of real world instances with up to 4k vertices, vastly surpassing the sizes of the largest instances with which the model was trained ($n=128$). Finally, we believe that our contributions attest to the potential of GNNs in symbolic domains in general and in relational learning in particular.
graph neural networks, graph networks, centrality measures, network centrality
End-To-End Imitation Learning of Lane Following Policies Using Sum-Product Networks
Renato Lui Geh, Denis Deratani Mauá
Renato Lui Geh
Recent research has shown the potential of learning lane following policies from annotated video sequences through the use of advanced machine learning techniques. These techniques, however, require high computational power, prohibiting their use in low-budget projects such as educational robotic kits and embedded devices. Sum-product networks (SPNs) are a class of deep probabilistic models with clear probabilistic semantics and competitive performance. Importantly, SPNs learned from data are usually several times smaller than deep neural networks trained for the same task. In this work, we develop an end-to-end imitation learning solution to lane following using SPNs to classify images into a finite set of actions. Images are obtained from a monocular camera, which is part of a low-cost custom-made mobile robot. Our results show that our solution generalizes beyond training conditions with relatively little data. We investigate the trade-off between computational and predictive performance, and conclude that sacrificing accuracy for the benefit of faster inference results in improved performance in the real world, especially in resource-constrained environments.
machine learning, robotics, sum-product networks
Friend or Foe: Studying user trustworthiness for friend recommendation in the era of misinformation
Antonela Tommasel
" The social Web, mainly represented by social networking sites, enriches the life and activities of its users by providing new forms of communication and interaction. Even though most of the time, the use of Internet is safe and enjoyable, there are risks that involve communication through social media. The unmoderated nature of social media sites often results in the appearance and distribution of unwanted content or misinformation. Thus, although social sites provide a great opportunity to stay informed about events and news, it also produces skepticism among users, as not every piece of shared information can be trusted. Moreover, the potential for automation and the low cost of producing fraudulent sites, allows the rapid creation and dissemination of unwanted content. Thus, current information dissemination processes pose the challenge of determining whether it is possible to trust on recommendations. The goal of this work is to define a profile to describe and estimate the trustworthiness or reputation of users, to avoid making recommendations that could favour the propagation of unreliable content and polluting users. The final aim is to reduce the negative effects of the existence and propagation of such content, and thus improving the quality of the recommendations."
recommender systems, trusworthiness, misinformation
Global Sensitivity Analysis of MAP inference in Selective Sum-Product Networks
Julissa Villanueva Llerena and Denis deratani Mauá
JulissaVillanueva
"Sum-Product Networks (SPN) are deep probabilistic models that have exhibited state-of-the-art performance in several machine learning tasks. As with many other probabilistic models, performing Maximum-A-Posteriori (MAP) inference is NP-hard in SPNs. A notable exception is selective SPNs, that allows MAP inference in linear time. Due to the high number of parameters, SPNs learned from data can produce unreliable and overconfident inference. This effect can be partially detected by performing a Sensitivity Analysis of the model predictions to changes in the parameters. In this work, we develop efficient algorithms for global quantitative analysis of MAP inference in selective SPNs. In particular, we devise a polynomial-time procedure to decide whether a given MAP configuration is robust with respect to changes in the model parameters. Experiments with real-world datasets show that this approach can discriminate easy- and hard-to-classify instances, often more accurately than criteria based on the probabilities induced by the model."
sensitivity analysis, sum-product networks, tractable probabilistic models.
Graph Feature Regularization: Combining machine learning models with graph data
Federico Albanese, Esteban Feuerstein, Leandro Lombardi
Federico Albanese
"In recent years, the amount of available data has drastically increased. However, labelling such data is hugely expensive. In this scenario, semi-supervised learning emerge as a vitally important tool, which combines labelled data (supervised machine learning) and unlabelled data (unsupervised learning) in order to make better predictions. In particular, graph based algorithms takes into account the relationships between the instances of the data and the underlying graph structures to make those predictions. In addition, in the context of data analysis, there are scenarios that can be naturally think as graphs. This occurs in situations where in addition to individual properties, connectivity between the elements of the data set is also important. Therefore, it is logical that machine learning models include information from both a node and its neighbours when making a prediction. This works propose adding graph feature regularization terms (GFR) to the the objective function to maximize. This new regularization terms depends on the structure of the network, the weight of the edges and the features of the node. We conclude that adding this terms to gradient boosted trees can outperform complex network architectures such as the Graph Convolutional Networks."
graph, machine learning, regularization
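A common instance of such a penalty, consistent with the description above (our rendering, not necessarily the authors' exact term), is the graph Laplacian smoothness regularizer added to the objective:

$$\mathcal{J} = \sum_{i \in L} \ell\big(y_i, f(x_i)\big) + \lambda \sum_{(i,j) \in E} w_{ij}\,\big(f(x_i) - f(x_j)\big)^{2},$$

where L is the labeled set, E the edge set, and w_{ij} the edge weight; the second term pulls the predictions of connected nodes toward each other.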
AI and HPC Convergence
Mariza Ferro, Vinícius Klôh, Felipe Bernardo, Bruno Schulze
Mariza Ferro
The convergence of High-Performance Computing (HPC) and Artificial Intelligence (AI) has become a promising approach to major performance improvements. The two fields have much to offer each other, and their combination is giving users unprecedented research capabilities. In this interaction, HPC can be used by AI (HPC for AI) to execute and enhance the performance of its algorithms. This involves using and evaluating different HPC architectures to train AI algorithms, and understanding and optimizing their performance on those architectures. AI for HPC can be further subdivided into AI after HPC and autotuning. In the first, ML algorithms are used to understand and analyze the results of simulations on HPC; this involves using ML to understand scientific applications, how they relate to different HPC architectures, and the impact of this relationship on performance and power consumption. It is more related to knowledge discovery, and its results can be used in autotuning. In autotuning, AI is used to configure HPC, choosing the best set of computations and parameters to achieve some goal, for example energy saving. ML is also used for the prediction of performance and energy consumption, job scheduling, and frequency and voltage scaling.
hpc, performance, autotune
l0-norm feature LMS algorithms
Hamed Yazdanpanah, José A. Apolinário Jr., Paulo S. R. Diniz, Markus V. S. Lima
Hamed Yazdanpanah
A class of algorithms known as feature least-mean-square (F-LMS) has been proposed recently to exploit hidden sparsity in adaptive filter parameters. In contrast to common sparsity-aware adaptive filtering algorithms, the F-LMS algorithm detects and exploits sparsity in linear combinations of filter coefficients. Indeed, by applying a feature matrix to the adaptive filter coefficient vector, the F-LMS algorithm can reveal and exploit their hidden sparsity. However, in many cases the unknown plant to be identified contains not only hidden but also plain sparsity, which the F-LMS algorithm is unable to exploit. Therefore, we can incorporate sparsity-promoting techniques into the F-LMS algorithm in order to allow the exploitation of plain sparsity. In this paper, by utilizing the l0-norm, we propose the l0-norm F-LMS (l0-F-LMS) algorithm for sparse lowpass and sparse highpass systems (a schematic update is sketched after the keywords below). Numerical results show that the proposed algorithm outperforms the F-LMS algorithm when dealing with hidden sparsity, particularly in highly sparse systems, where the convergence rate is sped up significantly.
lms algorithm, hidden sparsity, plain sparsity
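To fix ideas, a standard way to make the l0 norm usable in LMS-type updates is to replace it with a smooth surrogate such as $G_{\beta}(\mathbf{v}) = \sum_i \left(1 - e^{-\beta |v_i|}\right)$; a schematic update combining an F-LMS-style feature penalty with this surrogate (our sketch of the general form, not the paper's exact recursion) is

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu\, e(k)\,\mathbf{x}(k) - \alpha\, \mathbf{F}^{T}\,\mathrm{sgn}\big(\mathbf{F}\mathbf{w}(k)\big) - \kappa\, \nabla G_{\beta}\big(\mathbf{w}(k)\big),$$

where F is the feature matrix, µ the step size, and α, κ, β regularization parameters; the last term shrinks small coefficients toward zero while leaving large ones nearly untouched.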
Learning to Solve NP-Complete Problems
Marcelo Prates, Pedro Avelar, Henrique Lemos, Luis Lamb, Moshe Vardi
Marcelo Prates
Graph Neural Networks are a promising technique for bridging differential programming with combinatorial domains. In this paper we show that GNNs can learn to solve, with very little supervision, the decision variant of the Traveling Salesperson Problem.
graph neural networks, np-complete, traveling salesperson problem
Loco: A toolkit for RL research in locomotion
Wilbert Santos Pumacay Huallpa
Recent advances in the field of deep reinforcement learning have achieved impressive results in various tasks. One key component of these achievements are the simulated environments used to train and test DeepRL-based agents; for locomotion tasks there are various benchmarks that can be used, built on top of popular physics engines. However, these locomotion benchmarks do not offer the functionality required to train and evaluate agents in more diverse and complex tasks, exposing only relatively simple tasks, e.g. traversing flat terrain. This work presents an engine-agnostic toolkit for locomotion tasks that provides such functionality, allowing users to create a wide range of diverse and complex environments. We provide support for various physics engines via a physics abstraction layer, allowing users to easily switch between engines as required.
locomotion benchmarks, deeprl, simulated environments
Machine Learning-Based Pre-Routing Timing Prediction with Reduced Pessimism
E. Carvajal, N. Shukla, Y. Chen, J. Hu.
Erick Carvajal Barboza
Optimizations at the placement stage need to be guided by timing estimation prior to routing. To handle timing uncertainty due to the lack of routing information, people tend to make very pessimistic predictions so that performance specifications can be ensured in the worst case. Such pessimism causes over-design that wastes chip resources or design effort. In this work, a machine learning-based pre-routing timing prediction approach is introduced. Experimental results show that it can reach accuracy near post-routing sign-off analysis. Compared to a commercial pre-routing timing estimation tool, it reduces the false positive rate by about 2/3 in reporting timing violations.
integrated circuit design, static timing analysis, machine learning
Memory in Agents
Meire Fortunato, Melissa Tan, Ryan Faulkner, Steven Hansen*, Adrià Puigdomènech Badia, Gavin Buttimore, Charlie Deck, Joel Z Leibo, Charles Blundell
Meire Fortunato
Memory is an important aspect of intelligence and plays a role in many deep reinforcement learning models. However, little progress has been made in understanding when specific memory systems help more than others and how well they generalize. The field also has yet to see a prevalent consistent and rigorous approach for evaluating agent performance on holdout data. In this paper, we aim to develop a comprehensive methodology to test different kinds of memory in an agent and assess how well the agent can apply what it learns in training to a holdout set that differs from the training set along dimensions that we suggest are relevant for evaluating memory-specific generalization. To that end, we first construct a diverse set of memory tasks that allow us to evaluate test-time generalization across multiple dimensions. Second, we develop and perform multiple ablations on an agent architecture that combines multiple memory systems, observe its baseline models, and investigate its performance against the task suite.
memory, rl, generalization
Model-Based Reinforcement Learning with Deep Generative Models for Industrial Applications
Ângelo Gregório Lovatto, Thiago Pereira Bueno, Leliane Nunes de Barros
Ângelo Gregório Lovatto
Industrial applications, such as those in the process industry or power generation, could benefit from reinforcement learning (RL) agents to reduce energy consumption and lower emissions. However, the systems involved in these applications usually have high usage costs, while RL algorithms generally require too many trials to learn a task. A promising approach to the inefficiency problem is the model-based RL method, which allows agents to learn a predictive model of the environment to extract more information from available data. Given that industrial applications generally feature complex stochastic behavior, we propose investigating novel integration schemes between the model-based approach and deep generative models, a class of neural networks specially designed to handle sophisticated probability distributions. We will test these interventions in existing and novel benchmark tasks aimed at assessing a learning system's capacity to handle state changes governed by complex conditional probability distributions. We expect that our approach will lead to better model predictions and faster learning.
reinforcement learning, generative models, deep learning
On the optimization of the regularization parameters selection in sparse modeling
Victoria Peterson and Rubén D. Spies
Victoria Peterson
Tikhonov functionals are commonly used as regularization strategies for severely ill-posed inverse problems. Besides the type of penalization induced into the solution, the proper selection of the regularization parameters is of utmost importance for accurate estimation. In this work, we analyze several data-driven regularization parameters estimation methods in a mixed-term discriminative framework. Numerical results for P300 detection in Brain-Computer Interfaces classification are presented, showing the impact of regularization parameter estimation into classification performance.
generalized tikhonov regularization, tuning parameter selection, sparse modeling
Pajé - End-to-End Machine Learning
Edesio Alcobaça, Davi Pereira-Santos, André Carvalho
Edesio Alcobaça
"The number, variety, and complexity of Data Science applications are rapidly increasing along with automated solutions. This kind of solution, called automated machine learning, makes data science accessible to non-specialists. On the other hand, from the specialist standpoint, automated machine learning can spare him/her manual and repetitive work, speeding up research. In the last years, there has been a strong interest in the development of tools able to automate data science. While the existing frameworks mainly focus on inducing accurate models through hyperparameter tuning, they disregard or forgo, for instance, the data preprocessing step, reproducibility, and explainability. Nevertheless, this kind of task expends the majority of human resources. In this paper, we present an overview of ideas behind Pajé, an open tool for automated data science. Pajé includes all the core processes of the data science pipeline, from data acquisition to model interpretation, and at the same time, addresses important aspects of machine learning, such as reproducibility and explainability."
automl, meta-learning, machine learning
Preliminary results of supervised models trained with charge density data from Cruzain-inhibitors complexes.
Villafañe, Roxana Noelia; Luchi, Adriano Martín; Angelina, Emilio Luis; Peruchena, Nélida María.
Roxana Noelia Villafañe
"Proteins are the most versatile biological molecules, with diverse functions. Recently, the AI community have developed interest in specific topics related to proteins as: protein folding, structural analysis, protein-ligand affinity estimation, among others. Cruzain is a cysteine protease involved in chagas disease with several Cz-inhibitor complexes deposited in the Protein Data Bank (PDB). Unfortunately, the number of structures solved up-to-date is scarce for the requirements of a machine learning optimization algorithm. Another issue is the high dimensionality of the data involved in structure-based approaches for drug design. In this work, charge density-based data was employed as input for a classification algorithm with the protein-ligand interactions as columns and ligands as rows. A support vector machine with recursive feature elimination was employed to uncover the most relevant features involved in the protein-inhibitor complexes. This approach is the first step for further analysis of topological data of Cz-ligand complexes under study. We hope that results will shed light to understand the inhibition mechanism of Cruzain."
support vector machines, qtaim, feature selection
Probability distributions of maximum entropy on Wasserstein balls and their applications
Luis Felipe Vargas and Mauricio Velasco
Mauricio Velasco
We introduce a cutting plane method for efficiently finding the probability distribution of maximum entropy contained in a Wasserstein ball. Such distributions are the most general (i.e. minimizers of the amount of prior information) in the ball and are therefore of central importance for statistical inference. We generalize these results to the problem of minimizing cross-entropy from a given prior distribution and use them to propose 1-parameter families of learning algorithms that are naturally resilient to biases.
wasserstein metric, maximum entropy, minimum cross-entropy
Random Projections and $\alpha$-shape to Support the Kernel Design
"Daniel Moreira Cestari Rodrigo Fernandes de Mello"
Daniel Cestari
We automatically design kernels from data by projecting points onto either random hyperplanes or the boundaries forming the $\alpha$-shape. We interpret such a transformation as an explicit strategy a kernel uses to extract features from data, so an SVM applied on this transformed space should be capable of correctly separating class instances. We first applied this method to two different synthetic datasets to assess its performance and parameter sensitivity. Those experimental results confirmed a considerable improvement over the original input space, and robustness in the presence of noise and parameter changes. Second, we applied our approach to well-known image datasets in order to evaluate its ability to deal with real-world data and high-dimensional spaces. Afterwards, we discuss how this novel approach could be plugged into Convolutional Neural Networks, helping to understand the effects and the impact of adding units to layers. Our proposal has a low computational cost and is parallelizable to work directly on the transformed space; when memory constraints hold, its resultant kernel matrix might be used instead. This approach considerably improved classification performance in almost all scenarios, supporting the claim that it could be used as a general-purpose kernel transformation.
random projections; alpha-shape; kernel design
Regular Inference over Recurrent Neural Networks as a Method for Black Box Explainability
Franz Mayr, Sergio Yovine
Franz Mayr
This work explores the general problem of explaining the behavior of recurrent neural networks (RNN). The goal is to construct a representation which enhances human understanding of an RNN as a sequence classifier, with the purpose of providing insight on the rationale behind the classification of a sequence as positive or negative, but also to enable performing further analyses, such as automata-theoretic formal verification. In particular, an active learning algorithm for constructing a deterministic finite automaton which is approximately correct with respect to an artificial neural network is proposed.
recurrent neural networks, regular inference, explainability
Scalable methods for computing state similarity in deterministic Markov Decision Processes
Pablo Samuel Castro
We present new algorithms for computing and approximating bisimulation metrics in Markov Decision Processes (MDPs). Bisimulation metrics are an elegant formalism that capture behavioral equivalence between states and provide strong theoretical guarantees on differences in optimal behaviour. Unfortunately, their computation is expensive and requires a tabular representation of the states, which has thus far rendered them impractical for large problems. In this paper we present a new version of the metric that is tied to a behavior policy in an MDP, along with an analysis of its theoretical properties. We then present two new algorithms for approximating bisimulation metrics in large, deterministic MDPs. The first does so via sampling and is guaranteed to converge to the true metric. The second is a differentiable loss which allows us to learn an approximation even for continuous state MDPs, which prior to this work had not been possible.
markov decision processes, reinforcement learning, bisimulation metrics
See and Read: Detecting Depression Symptoms in Higher Education Students Using Multimodal Social Media Data
Paulo Mann, Aline Paes, Elton H. Matsushima
Paulo Mann
Mental disorders such as depression and anxiety have been increasing at alarming rates in the worldwide population. Notably, major depressive disorder has become a common problem among higher education students. While the reasons for this alarming situation remain unclear (although widely investigated), students already facing this problem must receive treatment. To that end, it is first necessary to screen the symptoms. The traditional way of doing so relies on clinical consultations or questionnaires. However, nowadays, the data shared on social media are a ubiquitous source that can be used to detect depression symptoms even when the student is not able to afford or seek professional care. In this work, we focus on detecting the severity of depression symptoms in higher education students by comparing deep learning with feature engineering models induced from Instagram data. The experimental results show that students presenting a BDI score higher than 20 can be detected with 0.92 recall and 0.69 precision in the best case, reached by a fusion model. Our findings show the potential to help further investigation of depression by bringing at-risk students to light, guiding them to adequate treatment.
deep learning, depression, students
Solving Linear Inverse Problems by Joint Posterior Maximization with a VAE Prior
Mario González, Andrés Almansa, Mauricio Delbracio, Pablo Musé
"We address the problem of solving ill-posed inverse problems in imaging where the prior is a neural generative model. Specifically we consider the decoupled case where the prior is trained once and can be reused for many different degradation models without retraining. Whereas previous MAP-based approaches to this problem lead to highly non-convex optimization algorithms, our approach computes the joint (space-latent) MAP that naturally leads to alternate optimization algorithms and to the use of a stochastic encoder to accelerate computations. We show theoretical and experimental evidence that the proposed objective function may be quite close to bi-convex, which would pave the way to show strong convergence results of our optimization scheme. Experimental results also show the higher quality of the solutions obtained by our approach with respect to non-convex MAP approaches."
inverse problems, variational autoencoder, maximum a posteriori
Stream-based Expert Ensemble Learning for Network Measurements Analysis
Juan Vanerio, Pedro Casas, Federico Larroca
Juan Vanerio
"The application of machine learning to Network Measurements Analysis problems has largely increased in the last decade; however, it remains difficult to say today which is the most fitted category of models to address these tasks in operational networks. We work on Stream-GML2, a generic stream-based (online) Machine Learning model for the analysis of network measurements. The model is a stacking ensemble learning algorithm, in which several weak or base learning algorithms are combined to obtain higher predictive performance. In particular, Stream-GML2 is an instance of a recent model known as Super Learner, which performs asymptotically as good as the best input base learner. It provides a very powerful approach to tackle multiple problems with the same technique while minimizing over-fitting likelihood during training, using a variant of cross-validation. Additionally, stream-GML2 copes with concept drift and performance degradation by relying on Reinforcement Learning (RL) principles, no-regret learning and online-convex optimization. The model resorts to adaptive memory sizing to retrain the system when required, adjusting its operation point dynamically according to distribution changes in incoming samples or performance degradation over time."
stream learning; ensemble learning; network attacks
Synthesizing Atmospheric Radar Images from IR Satellite Channel Using Generative Deep Neural Networks
Sacco, Maximiliano A.; Scheffler, Guillermo; Ruiz, Juan
Maximiliano Sacco
We present a novel application to infer atmospheric radar reflectivity images from infrared satellite images. Given the high cost of radar instruments, data-oriented image reconstruction appears as an attractive option. We compared output from fully connected networks, convolutional-deconvolutional networks, and generative adversarial networks trained with synthetically generated radar/satellite image pairs from numerical weather model simulations. Results are comparable with state-of-the-art statistical methods. The application shows promising results for short-term weather prediction.
satellite, radar, GANs
Towards self-healing SDNs for dynamic failures
Cristopher G. S. Freitas, André L. L. Aquino
Cristopher Gabriel de Sousa Freitas
Legacy IP networks are currently a huge problem for Internet Service Providers: as demand grows exponentially, profit doesn't follow. With the emergence of Software-Defined Networks (SDN), providers are hoping to improve their service while lowering operational expenses. In this work, we focus on self-healing SDNs, which require fault-tolerant mechanisms and intelligent network management to enable the system to perceive its incorrect states and act to fix them. As fault tolerance is a huge issue, we narrow our proposal to dynamic failures only, as these are usually the best target for machine learning approaches, since deterministic solutions are sub-optimal or too complex. Thus, we develop a solution using Deep Reinforcement Learning (DRL) for routing and load balancing, considering highly dynamic traffic, and we show the viability of a model-free solution and its efficiency.
deep reinforcement learning, fault tolerance, software-defined networks
Towards the Education of the Future: Challenges and Opportunities, for AI?
Germán Capdehourat, Federica Bascans, Fabián Frommel, María Catalina Piana, Fiorella Nahmias, Cecilia Bisogno, Cecilia Marconi, Alessia Zucchetti, Fiorella Haim, Enrique Lev.
Germán Capdehourat
As in other verticals, the application of data science to education opens up new possibilities. An example is the growing research community in learning analytics. Different goals, such as looking for tools for a more personalized education and the detection of particular difficulties at early ages, are relevant challenges that are being addressed in the area. In this context, we present the case of Plan Ceibal, an institution that assists the education system in Uruguay, providing technological solutions for the support of education.
education, learning analytics, ai literacy
Transfer in Multiagent Reinforcement Learning
Silva, Felipe Leno da; Costa, Anna Helena Reali
Felipe Leno da Silva
Reinforcement learning methods have successfully been applied to build autonomous agents that solve challenging sequential decision-making problems. However, agents need a long time to learn a task, especially when multiple autonomous agents are in the environment. This research aims to propose a Transfer Learning framework to accelerate learning by combining two knowledge sources: (i) previously learned tasks; and (ii) advice from a more experienced agent. The definition of such a framework requires answering several challenging research questions, including: How to abstract and represent knowledge, in order to allow generalization and posterior reuse?, How and when to transfer and receive knowledge in an efficient manner?, and How to consistently combine knowledge from several sources?
machine learning, multiagent reinforcement learning, transfer learning
Transformers are Turing Complete
Jorge Pérez, Javier Marinkovic, Pablo Barceló
"Alternatives to recurrent neural networks, in particular, architectures based on attention, have been gaining momentum for processing input sequences. In spite of their relevance, the computational properties of these alternatives have not yet been fully explored. We study the computational power of one of the most paradigmatic architectures exemplifying the attention mechanism, the Transformer (Vaswani et al., 2017). We show that the Transformer is Turing complete exclusively based on their capacity to compute and access internal dense representations of the data. Our study also reveals some minimal sets of elements needed to obtain these completeness results."
attention, transformers, turing completeness
Uncovering differential equations
Agustin Somacal
Many branches of science and engineering require differential equations to model the dynamics of the systems under study. Traditionally, the identification of the appropriate terms in the equation has been done by experts. Brunton, Proctor, and Kutz (2016) developed a method to automate this task using the data itself. In this work, we extend the applicability of this method to situations where not all variables are observed by adding higher-order derivatives to the model space search. We first test the results using only one variable of the Lorenz system and then apply the same methodology to temperature time series. We found that the proposed approach is enough to recover equations with R² > 0.95 in both cases. We also propose an algebraic method to obtain future values of the system and compare it with traditional integrative methods, finding that our approach is more stable, giving high-accuracy predictions in the case of the Lorenz system.
differential equations, dynamical systems
Aurea Soriano-Vargas, Bernd Hamann, Maria Cristina F. de Oliveira
Aurea Soriano-Vargas
We present an integrated interactive framework for the visual analysis of time-varying multivariate datasets. As part of our research, we performed in-depth studies concerning the applicability of visualization techniques to obtain valuable insights. We consolidated the considered analysis and visualization methods in one framework, called TV-MV Analytics. It effectively combines visualization and data mining algorithms providing the following capabilities: i) visual exploration of multivariate data at different temporal scales; and ii) a hierarchical small multiples visualization combined with interactive clustering and multidimensional projection to detect temporal relationships in the data. We demonstrate the value of our framework for specific scenarios, by studying three use cases that were validated and discussed with domain experts.
visual analytics, time-varying multivariate data, visual feature selection
AI-enabled applications with social and productivity impact
Digital Sense Technologies
Álvaro Pardo
dSense is a specialized R&D Studio that provides consultancy and development services in Computer Vision, Machine Learning and Image Processing for projects with an important component of innovation. Our team of 4 PhDs, 5 MScs and experienced engineers authored more than 175 papers and 4 US patents. By taking advantage of our research background, we have been able to develop valuable custom AI-enabled solutions across industries with a positive social and productivity impact. We introduce some of the most recent in this poster.
Computer Vision, Machine Learning, Image Processing
FLambé: A Customizable Framework for Machine Learning Experiments
Jeremy Wohlwend, Nicholas Matthews, Ivan Itzcovich
Carolina Rodriguez Diz
Flambé is a machine learning experimentation framework built to accelerate the entire research life cycle. Flambé's main objective is to provide a unified interface for prototyping models, running experiments containing complex pipelines, monitoring those experiments in real-time, reporting results, and deploying a final model for inference. Flambé achieves both flexibility and simplicity by allowing users to write custom code but instantly include that code as a component in a larger system which is represented by a concise configuration file format. We demonstrate the application of the framework through a cutting-edge multistage use case: fine-tuning and distillation of a state of the art pretrained language model used for text classification.
Pytorch Experiment Research
Informal Geometry Seminar
Informal Geometry Seminar (IGS), as its name says, is an informal seminar for graduate students and postdocs at IST to share their ideas with each other, and a good place to ask simple questions without any pressure.
Its fixed time and place are every Tuesday, 11:15–13:00, in room 3.10 of the IST mathematics department. Contact: Hassan Najafi Alishah <halishah at math.ist.utl.pt>
From Feb. 2014 on, announcements will appear here, on the Math department page.
Spring 2013-14:
GIT and symplectic stability (I)
Alfonso Zamora
Geometric Invariant Theory (GIT) is a powerful tool to study quotients of algebraic varieties by the action of Lie groups, related to symplectic quotients by the Kempf-Ness theorem. From both points of view a notion of stability for the orbits of the group action plays a prominent role.
In the lectures we will give the basic notions and ideas behind GIT stability, symplectic stability, and the Kempf-Ness theorem. We will pay special attention to the unstable orbits, for which one can find a "maximal way to destabilize" them. This idea can be seen from both the algebraic and the symplectic point of view, and we will show this coincidence.
All the treatment will be done through three (basic but very illustrative) examples: the construction of the projective space as a quotient, the moduli space of configurations of n points in the projective line (related to the moduli space of polygons), and the construction of the Grassmannian as a quotient.
Polygon spaces, their Gromov width and Hamiltonian toric actions
3 Feb. (Mon.), 4 Feb. (Tue.), 12 Feb. (Wed.) 2014
Alessia Mandini and Milena Pabiniak
In this talk we want to introduce a very interesting class of symplectic manifolds: moduli spaces of polygons in $\mathbb{R}^3$ with edges of lengths $(r_1,\dots,r_n)$. Under some genericity assumptions on the lengths $r_i$, the polygon space is a symplectic manifold. In fact, it is a symplectic reduction of the Grassmannian manifold of 2-planes in $\mathbb{C}^n$. Moreover, polygon spaces can be equipped with a toric Hamiltonian action, at least on an open dense subset. After introducing this family of manifolds we will concentrate on the spaces of 5-gons and calculate for them a symplectic invariant called the Gromov width (the definition will be given).
Every symplectic toric orbifold is a centered reduction of a Cartesian product of weighted projective spaces
Aleksandra Marinkovic
We prove that every symplectic toric orbifold is a centered reduction of a Cartesian product of weighted projective spaces. This generalizes the result of Abreu and Macarini that every monotone symplectic toric manifold is a centered reduction of weighted projective space. The idea of the proof is to present the labeled polytope corresponding to symplectic toric orbifold as an intersection of monotone polytopes and then to do reduction in stages. As a corollary we show that every symplectic toric orbifold contains a non-displaceable Lagrangian toric fiber and we identify this fiber.
Computations in (equivariant) cohomology
Silvia Sabatini
Equivariant cohomology is an incredibly powerful tool to understand the topology of a manifold endowed with an action of a Lie group. Computing the equivariant cohomology ring may be hard, but in some cases the computations can be carried out by using a "magic tool" called the "Localization Theorem". I will (try to) show some of these computations in some simple cases, and convince you that much more can be done in much greater generality.
Conservative Franks lemma
Hassan Najafi Alishah
The Franks lemma states that any perturbation of the derivative of a diffeomorphism at a finite set can be realized as the derivative of a nearby diffeomorphism in the $C^1$ topology. I will talk about the conservative version of the Franks lemma and a bit about applications.
Fall 2013-14:
G2 and the rolling ball
John Huerta
The search for simple models of the exceptional Lie groups is a long standing problem in mathematics. In this talk, we use a nonassociative algebra known as the split octonions to explain how the smallest exceptional Lie group, G2, can be thought of as the symmetry group of a 'spinorial ball' rolling on a projective plane precisely 3 times as big.
The symplectomorphism group of 4-manifolds and more
Sinan Eden
Gromov, among many other things, calculated the symplectomorphism group of $S^2\times S^2$. I will try to motivate why it is interesting, give the main ideas in the proof, and discuss some of the already existing generalizations.
- M. Gromov - Pseudo holomorphic curves in symplectic manifolds [1985]
- M. Abreu - Topology of symplectomorphism groups of $S^2\times S^2$ [1998]
- M. Abreu, D. McDuff - Topology of symplectomorphism groups of rational ruled surfaces [2000]
Symplectic geometry, J-holomorphic curves, symplectomorphism group, almost complex structures.
Quantization, algebroids, groupoids and all that (II)
Giorgio Trentinaglia
Since the talk is supposed to be informal, my abstract will be informal too. I will talk about what's in the title. How much I will be able to say, well, this will depend on the audience. I will almost surely need more than one meeting to go reasonably deep into the topic.
Quantization, algebroids, groupoids and all that (I)
Mathematics as a Natural Science
What is mathematics? Why do we think that mathematics is more rigorous than other sciences? Why is mathematics considered a "formal" science, as opposed to a "natural" science? This talk deals with the question of the axiomatization of mathematics, starting with Euclid and ending with Gödel.
Davis, Philip; Hersh, Reuben; The Mathematical Experience, Boston: Birkhäuser, 1981.
Hersh, Reuben; What Is Mathematics, Really? , Oxford Univ. Press, 1997.
Kline, Morris; Why Johnny Can't Add: The Failure of the New Mathematics, St. Martin's Press, 1973.
Kline, Morris; Mathematical Thought From Ancient to Modern Times, Oxford University Press, 1972.
Topological recursion and enumeration of surfaces
Nicolas Orantin
How many different surfaces can we build by gluing together 3 squares and two triangles by their edges? What topology do they have? This is the kind of question which one can answer by studying random matrix integrals. In this talk, I will explain how, by combinatorial arguments, one can solve such a problem of enumeration of discrete surfaces with very basic algebraic geometry. If time allows, I will explain how the answer generalizes amazingly to the enumeration of Riemann surfaces embedded in a given toric manifold, i.e. to the computation of Gromov-Witten invariants of some toric manifolds or the computation of simple Hurwitz numbers through mirror symmetry.
A review of the subject can be found in arXiv:0811.3531v1
IGS will resume after the working seminar is finished
Polynomial automorphisms of $\mathbb C^n$ and the Jacobian conjecture
Stavros Papadakis
The aim of the talk is to give a gentle introduction to the subject of the title.
Shestakov, Ivan P. ; Umirbaev, Ualbai U. The tame and the wild automorphisms of polynomial rings in three variables. J. Amer. Math. Soc. 17 (2004), no. 1, 197-227
van den Essen, Arno Polynomial automorphisms and the Jacobian conjecture. Progress in Mathematics, 190. Birkhäuser Verlag, Basel, 2000. xviii+329 pp.
van den Essen, Arno Polynomial automorphisms and the Jacobian conjecture. Algèbre non commutative, groupes quantiques et invariants (Reims, 1995), 55-81, Sémin. Congr., 2, Soc. Math. France, Paris, 1997. Available here
What is an algebraic surface of general type ?
Xavier Roulleau
We will explain the words in the title and give examples. If there is enough time, we will explain the classifications of algebraic surfaces, and how to construct them.
Stanley's proof of the g-conjecture for simplicial convex polytopes, parts I and II
29 Nov. & 6 Dec. 2011
For a simplicial complex D, the f-vector of D is the finite sequence whose entries are the number of vertices, 1-faces, 2-faces, ... of D. For fixed n, the g-conjecture characterizes the possible f-vectors of simplicial complexes D triangulating the sphere $S^n$. We will sketch Stanley's celebrated proof, using toric geometry and the hard Lefschetz theorem, of the g-conjecture for the case when D is the boundary of a simplicial convex polytope.
Billera, Louis J.; Lee, Carl W., A proof of the sufficiency of McMullen's conditions for $f$-vectors of simplicial convex polytopes. J. Combin. Theory Ser. A 31 (1981), no. 3, 237-255.
Stanley, Richard P., The number of faces of a simplicial convex polytope. Adv. in Math. 35 (1980), no. 3, 236-238.
Fulton, William, Introduction to toric varieties. Annals of Mathematics Studies, 131. The William H. Roever Lectures in Geometry. Princeton University Press, Princeton, NJ, 1993. xii+157 pp. ISBN: 0-691-00049-2
Existence of KAM tori for degenerate Hamiltonian Systems
I will start with a brief reminder of the talk I gave last semester, then try to explain the beautiful technique that has been used to extend the results for non-degenerate systems to degenerate ones. I will state some facts from Riemannian geometry used in this technique and talk a bit about Whitney differentiability.
Chong-Qing Cheng, Yi-Sui Sun, Existence of KAM tori in Degenerate Hamiltonian Systems. J. Differential Equations, 114-1 (1994), pp. 288-335
H. Whitney, Analytic extensions of differentiable functions defined in closed sets, Trans. Amer. Math. Soc. 36 (1934), 63-89
***I can provide hard copies of the above papers upon request***
Natural (and less natural) deduction systems for classical (and less classical) logics
Marco Volpe
In this introductory talk, we will describe a calculus for logical reasoning called "natural deduction". We will present it in the case of classical logic and have a look at possible variants in order to capture some non-classical logics.
Stability of the Steiner symmetrization of convex sets
25 Oct. & 8 Nov. 2011
Filippo Cagnetti
The goal of the talk is the study of the isoperimetric inequality for the Steiner symmetrization in any codimension. First, we give a characterization of the cases of equality. Then, we will prove a quantitative version of the inequality, in the case of convex sets. This is a joint work with Marco Barchiesi and Nicola Fusco.
Open books and pseudo-Anosov maps II
Again, due to the informality of the seminar we will have the rest of Sinan's unfinished talk.
Open books and pseudo-Anosov maps I
One of the natural ways to study the topology of a 3-manifold is to consider its open book decomposition (which is given by (S,h), where S is any compact oriented surface with boundary and h is a diffeomorphism of S that fixes the boundary point-wise). So, a topological 3-dimensional question is turned into a geometrical 2-dimensional question. The aim of my talk will be to investigate one particular relation between the two. Namely, I will try to convince you that after a sequence of positive stabilizations, we can get a pseudo-Anosov diffeomorphism.
Vincent Colin, Ko Honda - Stabilizing the monodromy of an open book decomposition
Ko Honda, William H. Kazez, Gordana Matic - Right-veering diffeomorphisms of compact surfaces with boundary I
Albert Fathi, François Laudenbach, Valentin Poénaru - Thurston's Work On Surfaces
John B. Etnyre - Lectures on open book decompositions and contact structures
Introduction to the McKay Correspondence II
4 Oct. 2011. Due to the informality of talks :-), we will continue with Stavros's unfinished talk.
This is a video which gives you an intuition about blowup that Stavros was talking about last week.
Introduction to the McKay Correspondence I
The talk will be a gentle introduction to the McKay correspondence. Let $\mathbb C$ be the field of complex numbers. In its simplest form, the McKay correspondence relates, for a finite subgroup $G\subset SL(2,\mathbb C)$, the geometry of the quotient space $\mathbb C^2 / G$ to the representation theory of $G$.
Dolgachev, I., McKay correspondence, Lecture note is available here.
Reid, M., La correspondance de McKay. Séminaire Bourbaki, Vol. 1999/2000. Astérisque No. 276 (2002), 53-72. A copy is available here
Closed forms on fiber bundles
Olivier Brahic
Given a 3-form defined on the total space of a fibration, I will explain how to interpret the equations for it to be closed in geometric terms. Namely, we have Courant algebroid structures defined fiberwise and preserved by a higher analogue of a connection.
Hopf Algebras and Hopf-Galois Extensions
Marcin Szamotulski
I will deliver an introduction to Hopf algebras and Hopf-Galois extensions, and try to justify the notions. I will present some basic examples, and a way to present classical Galois theory in an 'extravagant' way. If time permits I'll try to present some of my results. Quite soon I will give a more advanced version of this talk at UL.
Gradification of Everything, Supermetry and Superbock
Sebastian Guttenberg
KAM theory (Kolmogorov-Arnold-Moser)
On the moduli spaces of polygons and hyperpolygon
Alessia Mandini
Introduction to the Hamiltonian diffeomorphism group (with particular focus on Hofer's distance)
Remi Leclercq
Polterovich. The geometry of the group of symplectic diffeomorphisms (Birkhäuser 2001).
Automatic Theorem Proving for Euclidean Geometry
Cox, Little and O'Shea. Ideals, varieties, and algorithms (Springer 2007).
Metric $f(R)$ Instability
I was reading this manuscript about Metric \(f(R)\) instability, could anyone explain why the value of \(\mu^4\) creates strong instability in Eq. (4)?
One piece I don't understand is where it says "Thus, the \(T^{3}/6\mu^{4}\) dominates the coefficient in front of the \(R_1\) term in Eq. (6) and leads to strong instability" just below Eq. (7). Why might the value of the coefficient of \(R_1\) create instabilities?
Consider the field equation:\(D^{2}R-3\frac{D_{a}R D_{a}R}{R}+\frac{R^{4}}{6{\mu}^{4}}-\frac{R^{2}}{2}=-\frac{T R^{3}}{6{\mu}^{4}}\), why the value of \(\mu^4\), whether positive or negative, may create instabilities?
This question and the first comment below it were deleted from Physics Stack Exchange and have been restored from an archive.
general-relativity
metric-tensor
dark-energy
asked Apr 13, 2014 in Theoretical Physics by user38032 (10 points) [ revision history ]
edited Apr 30, 2014 by dimension10
Consider the field equation: \(D^{2}R-3\frac{D_{a}R D_{a}R}{R}+\frac{R^{4}}{6{\mu}^{4}}-\frac{R^{2}}{2}=-\frac{T R^{3}}{6{\mu}^{4}}\), why the value of \(\mu^4\), whether positive or negative, may create instabilities?
commented Apr 13, 2014 by user38032 (10 points) [ no revision ]
A comment by David Zaslavasky has been omitted from repost, since the linked manuscript is merely 4 pages long.
commented Apr 13, 2014 by dimension10 (1,975 points) [ no revision ]
I am extremely sorry @physicsnewbie, it seems you are correct. I found from a meta.SE discussion that SE "redistributes [deleted content] to 10k+rep users and moderators", and whether it redistributes content at all is a choice it makes, but it still owns its content.
I actually dealt with this model some in my thesis work. (Shameless plug: Stability of spherically symmetric solutions in modified theories of gravity.) The basic picture in the Dolgov-Kawasaki paper is that they start with a background solution where the Ricci scalar is equal to the trace of the stress-energy tensor, \(R_0 = T\) (inside a star, say). This solution should match up to some exterior solution with \(R_0 \approx \mu^4\) (one of the cosmological models that were motivating Carroll, Duvvuri, Trodden, & Turner); however, if \(\mu^4 < 0\), this turns out to be impossible. (Basically, the Ricci scalar tends to diverge at large distances from the star.)
Having dispensed with the \(\mu^4 < 0\) case, they look at the \(\mu^4 > 0\) case. In this case, one can obtain a well-behaved solution in which \(R_0 \approx T\) inside the star and \(R_0 \approx \mu^4\) asymptotically. The next step is to look at the behavior of perturbations about this solution; the linearized perturbation equation is eq. (5) in their paper. The first-order perturbations grow exponentially with time, since the last term on the left-hand side of eq. (5) is so large in magnitude and negative.
I should mention that while Dolgov & Kawasaki's proof isn't iron-clad (see my comments at the start of Section III.B in the paper I linked to above), it does persist in a more rigorous analysis. You might also take a look at The Large Scale Structure of f(R) Gravity, which obtains a similar result.
answered May 2, 2014 by Johnny Assay (70 points) [ revision history ]
CIS 521 - Artificial Intelligence
Movie quotes according to autocomplete
This assignment is due on Tuesday, November 16, 2021 before 11:59PM.
You can download the materials for this assignment here:
skeleton file
Frankenstein (text file)
Homework 8: Language Models [100 points]
In this assignment, you will gain experience working with Markov models on text.
A skeleton file language_models.py containing empty definitions for each question has been provided. Since portions of this assignment will be graded automatically, none of the names or function signatures in this file should be modified. However, you are free to introduce additional variables or functions if needed.
You may import definitions from any standard Python library, and are encouraged to do so in case you find yourself reinventing the wheel.
You will find that in addition to a problem specification, most programming questions also include a pair of examples from the Python interpreter. These are meant to illustrate typical use cases, and should not be taken as comprehensive test suites.
You are strongly encouraged to follow the Python style guidelines set forth in PEP 8, which was written in part by the creator of Python. However, your code will not be graded for style.
Once you have completed the assignment, you should submit your file on Gradescope. You may submit as many times as you would like before the deadline, but only the last submission will be saved.
1. N-Gram Model [95 points]
In this section, you will build a simple language model that can be used to generate random text resembling a source document. Your use of external code should be limited to built-in Python modules, which excludes, for example, NumPy and NLTK.
[5 points] Write a simple tokenization function tokenize(text) which takes as input a string of text and returns a list of tokens derived from that text. Here, we define a token to be a contiguous sequence of non-whitespace characters, with the exception that any punctuation mark should be treated as an individual token. Hint: Use the built-in constant string.punctuation, found in the string module.
>>> tokenize(" This is an example. ")
['This', 'is', 'an', 'example', '.']
>>> tokenize("'Medium-rare,' she said.")
["'", 'Medium', '-', 'rare', ',', "'", 'she', 'said', '.']
[10 points] Write a function ngrams(n, tokens) that produces a list of all $n$-grams of the specified size from the input token list. Each $n$-gram should consist of a 2-element tuple (context, token), where the context is itself an $(n−1)$-element tuple comprised of the $n−1$ words preceding the current token. The sentence should be padded with $n−1$ "<START>" tokens at the beginning and a single "<END>" token at the end. If $n=1$, all contexts should be empty tuples. You may assume that $n\ge1$.
>>> ngrams(1, ["a", "b", "c"])
[((), 'a'), ((), 'b'), ((), 'c'), ((), '<END>')]
>>> ngrams(2, ["a", "b", "c"])
[(('<START>',), 'a'), (('a',), 'b'), (('b',), 'c'), (('c',), '<END>')]
>>> ngrams(3, ["a", "b", "c"])
[(('<START>', '<START>'), 'a'), (('<START>', 'a'), 'b'), (('a', 'b'), 'c'), (('b', 'c'), '<END>')]
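One way to realize this, shown as a hedged sketch that assumes the padding scheme described above:

def ngrams(n, tokens):
    # Pad with n-1 start markers and one end marker.
    padded = ["<START>"] * (n - 1) + tokens + ["<END>"]
    # For each position holding a real token or "<END>", pair it with
    # the n-1 preceding items as its context.
    return [(tuple(padded[i - n + 1:i]), padded[i])
            for i in range(n - 1, len(padded))]

For $n=1$ the context slice is empty, so every context is the empty tuple, as required.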
[10 points] In the NgramModel class, write an initialization method __init__(self, n) which stores the order $n$ of the model and initializes any necessary internal variables. Then write a method update(self, sentence) which computes the $n$-grams for the input sentence and updates the internal counts. Lastly, write a method prob(self, context, token) which accepts an $(n−1)$-tuple representing a context and a token, and returns the probability of that token occurring, given the preceding context.
>>> m = NgramModel(1)
>>> m.update("a b c d")
>>> m.update("a b a b")
>>> m.prob((), "a")
0.3
>>> m.prob((), "c")
0.1
>>> m.prob((), "<END>")
0.2
>>> m = NgramModel(2)
>>> m.update("a b c d")
>>> m.update("a b a b")
>>> m.prob(("<START>",), "a")
1.0
>>> m.prob(("b",), "c")
0.3333333333333333
>>> m.prob(("a",), "x")
0.0
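A compact sketch of these three methods (one possible design; the counts structure is an assumption, not something mandated by the assignment):

import collections

class NgramModel(object):
    def __init__(self, n):
        self.n = n
        # counts[context][token] -> number of times token followed context
        self.counts = collections.defaultdict(collections.Counter)

    def update(self, sentence):
        for context, token in ngrams(self.n, tokenize(sentence)):
            self.counts[context][token] += 1

    def prob(self, context, token):
        total = sum(self.counts[context].values())
        # An unseen token in a seen context gets probability 0, matching
        # the m.prob(("a",), "x") example above.
        return self.counts[context][token] / total if total else 0.0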
[20 points] In the NgramModel class, write a method random_token(self, context) which returns a random token according to the probability distribution determined by the given context. Specifically, let $T=\langle t_1,t_2, \cdots, t_n \rangle$ be the set of tokens which can occur in the given context, sorted according to Python's natural lexicographic ordering, and let $0\le r<1$ be a random number between 0 and 1. Your method should return the token $t_i$ such that
\[\sum_{j=1}^{i-1} P(t_j\ |\ \text{context}) \le r < \sum_{j=1}^i P(t_j\ | \ \text{context}).\]
You should use a single call to the random.random() function to generate $r$.
>>> random.seed(1)
>>> [m.random_token(()) for i in range(25)]
['<END>', 'c', 'b', 'a', 'a', 'a', 'b', 'b', '<END>', '<END>', 'c', 'a', 'b', '<END>', 'a', 'b', 'a', 'd', 'd', '<END>', '<END>', 'b', 'd', 'a', 'a']
>>> [m.random_token(("<START>",)) for i in range(6)]
['a', 'a', 'a', 'a', 'a', 'a']
>>> [m.random_token(("b",)) for i in range(6)]
['c', '<END>', 'a', 'a', 'a', '<END>']
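A possible implementation as a method of NgramModel (shown standalone for brevity; note that accumulating floats can, in principle, suffer rounding at the last token):

import random

def random_token(self, context):
    r = random.random()  # the single required call
    cumulative = 0.0
    # Walk the candidate tokens in lexicographic order, accumulating
    # their probabilities until the running sum exceeds r.
    for token in sorted(self.counts[context]):
        cumulative += self.prob(context, token)
        if r < cumulative:
            return token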
[20 points] In the NgramModel class, write a method random_text(self, token_count) which returns a string of space-separated tokens chosen at random using the random_token(self, context) method. Your starting context should always be the $(n−1)$-tuple ("<START>", ..., "<START>"), and the context should be updated as tokens are generated. If $n=1$, your context should always be the empty tuple. Whenever the special token "<END>" is encountered, you should reset the context to the starting context.
>>> m.random_text(13)
'<END> c b a a a b b <END> <END> c a b'
'a b <END> a b c d <END> a b a b a b c'
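A sketch of one way to maintain the sliding context window (again a method of NgramModel; the window arithmetic is the only subtle part):

def random_text(self, token_count):
    start = ("<START>",) * (self.n - 1)  # empty tuple when n == 1
    context = start
    tokens = []
    for _ in range(token_count):
        token = self.random_token(context)
        tokens.append(token)
        if token == "<END>":
            context = start                      # reset at sentence end
        elif self.n > 1:
            context = (context + (token,))[1:]   # slide the window
    return " ".join(tokens)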
[15 points] Write a function create_ngram_model(n, path) which loads the text at the given path and creates an $n$-gram model from the resulting data. Each line in the file should be treated as a separate sentence.
# No random seeds, so your results may vary
>>> m = create_ngram_model(1, "frankenstein.txt"); m.random_text(15)
'beat astonishment brought his for how , door <END> his . pertinacity to I felt'
>>> m = create_ngram_model(2, "frankenstein.txt"); m.random_text(15)
'As the great was extreme during the end of being . <END> Fortunately the sun'
>>> m = create_ngram_model(3, "frankenstein.txt"); m.random_text(15)
'I had so long inhabited . <END> You were thrown , by returning with greater'
>>> m = create_ngram_model(4, "frankenstein.txt"); m.random_text(15)
'We were soon joined by Elizabeth . <END> At these moments I wept bitterly and'
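A minimal loader consistent with the spec (the UTF-8 encoding here is an assumption, not a stated requirement):

def create_ngram_model(n, path):
    model = NgramModel(n)
    with open(path, encoding="utf-8") as f:
        for line in f:
            model.update(line)  # each line is a separate sentence
    return model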
[15 points] Suppose we define the perplexity of a sequence of m tokens $\langle w_1, w_2, \cdots, w_m \rangle$ to be
\[\sqrt[m]{\frac{1}{P(w_1, w_2, \cdots, w_m)}}.\]
For example, in the case of a bigram model under the framework used in the rest of the assignment, we would generate the bigrams $\langle (w_0=\langle \text{START} \rangle , w_1), (w_1, w_2), \cdots,(w_{m−1}, w_m), (w_m, w_{m+1} = \langle \text{END}\rangle)\rangle$, and would then compute the perplexity as
\[\sqrt[m+1]{\prod_{i=1}^{m+1} \frac{1}{P(w_i\ | \ w_{i-1})}}.\]
Intuitively, the lower the perplexity, the better the input sequence is explained by the model. Higher values indicate the input was "perplexing" from the model's point of view, hence the term perplexity.
In the NgramModel class, write a method perplexity(self, sentence) which computes the $n$-grams for the input sentence and returns their perplexity under the current model. Hint: Consider performing an intermediate computation in log-space and re-exponentiating at the end, so as to avoid numerical overflow.
>>> m.perplexity("a b")
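Following the hint, a log-space sketch (it assumes every n-gram of the input has nonzero probability under the model; otherwise math.log raises an error):

import math

def perplexity(self, sentence):
    grams = ngrams(self.n, tokenize(sentence))
    # Sum log-probabilities instead of multiplying raw probabilities,
    # then re-exponentiate at the end.
    log_prob = sum(math.log(self.prob(context, token))
                   for context, token in grams)
    return math.exp(-log_prob / len(grams))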
Feedback [5 points]
[1 point] Approximately how many hours did you spend on this assignment?
[2 points] Which aspects of this assignment did you find most challenging? Were there any significant stumbling blocks?
[2 points] Which aspects of this assignment did you like? Is there anything you would have changed?
Last updated December 16, 2021 12:25:01.
The source code is on GitHub.
Temperature adaptability of two clades of Aphelinus mali (Hymenoptera: Aphelinidae) in China
Min Su1,
Xiumei Tan1,
Qinmin Yang2,
Fanghao Wan1,3 &
Hongxu Zhou1
Aphelinus mali (Haldeman) (Hymenoptera: Aphelinidae) is an effective natural enemy used in China to control the woolly apple aphid (Eriosoma lanigerum [Hausmann]) (WAA). The population of A. mali in China falls into two distinct genetic clades (the Shandong clade and the Liaoning clade). In the present study, the developmental threshold temperature of the Shandong clade (9.82 ± 1.44 °C) was lower than that of the Liaoning clade (10.72 ± 0.24 °C), while the effective accumulated temperature that the Shandong clade needed for development from oviposition to adult eclosion (126.45 ± 16.81 day-degrees) was significantly higher than that of the Liaoning clade (107.99 ± 3.44 day-degrees). The supercooling and freezing points of the Liaoning clade (− 27.66 °C, − 27.17 °C) were significantly lower than those of the Shandong clade (− 26.04 °C, − 25.54 °C).
The two clades also differed in the fat, trehalose, and protein content of overwintering larvae: values for the Liaoning clade (60.8%, 7.57 μg/insect, 10.11 μg/insect) were significantly higher than those for the Shandong clade (45.5%, 5.73 μg/insect, 8.05 μg/insect). The first adult emergence of the Shandong clade of A. mali occurred earlier in the year than that of the Liaoning clade, allowing this clade to better control WAA in early spring. Meanwhile, the developmental duration from oviposition to adult emergence of the Shandong clade was longer than that of the Liaoning clade, and the cold tolerance of the more northerly Liaoning clade was greater than that of the more southerly Shandong clade. All of these factors imply differences in the pest control ability of the two clades of A. mali in their respective regions.
Woolly apple aphid (WAA), Eriosoma lanigerum (Hausmann), is a worldwide pest of apple, Malus pumila Miller (Jaume et al. 2015), and a quarantine pest in China (Zhang and Luo 2002). Since this aphid, which is native to North America, was introduced into China in Shandong province (Weihai) in 1914, in Liaoning province (Dalian) in 1929 from Japan, and in Yunnan province (Kunming) in 1930 through apple trees from America, it has had a serious impact on fruit production and acceptance of fruit for export. Aphelinus mali (Haldeman), a key parasitoid of this pest, was introduced several times into China during the period between 1940 and 1960 from the former Soviet Union and Japan and plays an important role in controlling WAA in Chinese orchards (Long et al. 1960).
Previous studies have found A. mali to be the most important parasitoid of WAA, making it a logical target for biological control by conservation (Gontijo et al. 2012). A. mali has been shown to provide a good control of WAA throughout the growing season in China, with field parasitism rates (50–90%) in apple orchards (Zhou et al. 2010).
However, in China, WAA continues to expand westward into new apple-growing regions (Lu et al. 2013). Woolly apple aphid was first found in Shanxi province in 1999 in one location (Linfen), but by 2007, it had spread across 360 km2, infesting 6% of the area apple orchards (Wang et al. 2011). In Shandong province, surveys from 2000 to 2002 found WAA in about 8000 ha of orchards in Rizhao, Shandong area, with 10–20% of apple trees infested, resulting in an annual loss of 5 × 106 kg of apples (Wang et al. 2011). In Jiangsu province, WAA was found in 2005 in 4333 ha (Chu et al. 2008). Since 2007, WAA has spread to Hebei province, where it has caused great damage to orchards in several regions (Qinhuangdao, Tangshan, and Shijiazhuang) (Wu et al. 2009).
In China, A. mali comprises two distinct genetic clades (Zhang et al. 2014; Zhou et al. 2014). We hypothesized that the expansion of WAA in recent years is related to differences in low-temperature adaptability between these clades. Earlier studies outside of China have found that low-temperature adaptability differs among geographic populations of A. mali (Mols and Boers 2001). For example, in the Annapolis River valley of Nova Scotia (Canada), the parasitoid population appears earlier in the year than a strain of A. mali found in the Netherlands, thus providing better control in Nova Scotia than in Holland: by the time the Dutch strain of A. mali becomes active, the WAA population has grown to levels that make its control by A. mali difficult (Mols and Boers 2001). The Nova Scotian and the Dutch strains of A. mali also differ in their low-temperature threshold (8.6 and 9.4 °C, Nova Scotia vs Holland, respectively) and their effective accumulated temperature requirements, which were lower for the Canadian strain than for the Dutch strain (123.5 and 136.4 day-degrees, respectively) (Mols and Boers 2001).
The present study aimed to determine the differences in the cold tolerance of different clades of A. mali in China to improve our understanding of the mechanism of WAA outbreaks and to provide insights into how to better use A. mali to control this economically important pest.
Target insects
Woolly apple aphid was collected from Qingdao (36° 20′N) in Shandong province. The A. mali used in this study were collected from 4 to 5 apple orchards at each of two locations: (1) Shandong province (Qingdao, 36° 20′N, 120° 12′E) and (2) Hebei province (Qinhuangdao, 39° 25′N, 119° 20′E), which are about 889 km apart, representing the Shandong and the Liaoning clades, respectively. To obtain parasitoids from the field, in May and June 2015, apple branches infested with woolly apple aphids were collected. The black-colored aphids (indicating parasitization) were isolated, and parasitized aphids were held in 1.5-mL centrifuge tubes until adult parasitoid eclosion. A. mali and WAA were held at 25 °C, 70% RH, and a 16:8-h L:D photoperiod.
Measurement of effective accumulated temperature and developmental threshold temperature
Fifteen males and 15 females of A. mali from each clade were placed together with an excess of hosts (ca 200 aphids) in Petri dishes (13.5 cm dia). The parasitoids were removed after 24 h, and the exposed WAA were allowed to develop at one of five temperatures (18, 20, 23, 25, and 28 °C), noting the day of adult parasitoid emergence from each parasitized aphid. This process was replicated five times for each temperature, with average parasitism rates per replicate of 0.073 ± 0.006, 0.087 ± 0.018, 0.120 ± 0.031, 0.160 ± 0.035, and 0.173 ± 0.024 at the five temperatures, respectively, for each clade. All groups of potentially parasitized aphids were held at 70% RH and a 16:8-h L:D photoperiod and observed daily to record when aphids turned black and when adult parasitoid eclosion began. Newly emerged adults of A. mali were held separately under the same environmental conditions and provided with 10% honey water. The sex and date of death of A. mali individuals were recorded, as was the number of day-degrees (DD) for each stage of development. Longevity of adults of each clade at each temperature was also determined.
The effective accumulated temperature (K) and the developmental threshold temperature (C) were calculated according to Ma (2009)
$$ K=\frac{n\sum VT-\sum V\sum T}{n\sum {V}^2-{\left(\sum V\right)}^2} $$
$$ C=\frac{\sum {V}^2\sum T-\sum V\sum VT}{n\sum {V}^2-{\left(\sum V\right)}^2} $$
where n is the number of groups in every experiment, T the constant temperature, and V is the average development rate.
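Since K and C are simply the least-squares slope and intercept of the regression of T on V, they can be computed directly from the formulas above. The sketch below is illustrative only (the function name and any inputs are hypothetical, not data from this study):

def threshold_and_degree_days(temps, rates):
    # temps: rearing temperatures T; rates: mean development rates V (1/days).
    n = len(temps)
    sum_v = sum(rates)
    sum_t = sum(temps)
    sum_vt = sum(v * t for v, t in zip(rates, temps))
    sum_v2 = sum(v * v for v in rates)
    denom = n * sum_v2 - sum_v ** 2
    k = (n * sum_vt - sum_v * sum_t) / denom       # effective accumulated temperature K
    c = (sum_v2 * sum_t - sum_v * sum_vt) / denom  # developmental threshold C
    return c, k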
Supercooling point and freezing temperature of overwintering parasitoid larvae
Overwintering larvae of A. mali were obtained by dissecting parasitized (black) woolly apple aphids under a dissecting microscope; the aphids were collected in Changli (4~16 °C) for the Liaoning clade on October 26, 2015, and in Qingdao (7~13 °C) for the Shandong clade on November 17, 2015.
The supercooling point (SCP) and freezing point (FP) of A. mali larvae were measured using a SUN-V type intelligent instrument in combination with a − 80 °C ultra-low temperature refrigerator. With the instrument connected to the temperature probe, one larva was placed on the probe, which was then inserted into a 200-μL pipette tip (avoiding contact with the tip wall). The tip was tightly wrapped with absorbent cotton and placed in the testing box, where the absorbent cotton prevented rapid changes in the cooling rate. The whole probe and associated device were then placed into the ultra-low temperature refrigerator, and the instrument's supercooling-point measurement software (V1.3) was used to record the temperature each second. Once below freezing, the SCP was identified by the sudden rise of temperature as energy is released upon larval freezing; the peak of this rise in temperature is the freezing point. This process was repeated 30 times for each clade.
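The exotherm-based detection described above can also be expressed algorithmically. The sketch below is a simplified illustration (a real one-sample-per-second trace is noisy, so in practice a smoothing step or an uptick threshold would be needed):

def scp_and_fp(trace):
    # trace: temperatures recorded once per second during cooling
    # (hypothetical input). SCP is the reading just before the first
    # uptick (the exotherm); FP is the peak of that uptick.
    for i in range(1, len(trace)):
        if trace[i] > trace[i - 1]:
            scp = trace[i - 1]
            j = i
            while j + 1 < len(trace) and trace[j + 1] >= trace[j]:
                j += 1
            return scp, trace[j]
    return None, None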
Determination of the levels of selected cryoprotectants in overwintering larvae
Overwintering larvae of A. mali were all collected from a single site on a single date. Thirty black aphids were taken each time, the larvae were dissected out, and each treatment was repeated five times.
Free-water and fat content in overwintering A. mali larvae
The fresh weight (FW) of 30 overwintered larvae as a group was determined using a microbalance (Sartorius BSA224S-CW, Beijing Co., Ltd.), and the larvae were then dried together at 60 °C for 48 h. The dry weight (DW) of the samples was then obtained using the same microbalance, and the water content of each sample was determined as (FW − DW)/FW × 100 (Folch et al. 1957).
After measuring the DW, the sample of 30 dried larvae was placed in a 1.5-ml centrifuge tube and homogenized in 20 μL of a chloroform–methanol mixture (2:1); then 0.6 ml of the chloroform–methanol mixture was added. The sample was then centrifuged at 2600 rpm for 10 min, and this process was repeated three times, each time removing the resulting supernatant. The residue was then baked at 60 °C in an oven for 72 h to determine the lean dry weight (LDW). The fat content of the insect was determined by the following formula: [(DW − LDW)/DW] × 100 (Folch et al. 1957).
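Both percentages are simple arithmetic on the three weights; a brief sketch (the inputs are placeholders, not measurements from the study):

def water_content(fw, dw):
    # Free-water content as a percentage of fresh weight: (FW - DW)/FW x 100.
    return (fw - dw) / fw * 100

def fat_content(dw, ldw):
    # Fat content as a percentage of dry weight: (DW - LDW)/DW x 100.
    return (dw - ldw) / dw * 100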
Trehalose and glycogen content in overwintering A. mali larvae
Thirty overwintering larvae were added to 40 μL of 10% trichloroacetic acid solution and homogenized by grinding. The material was then rinsed with 0.2 ml of 10% trichloroacetic acid solution and centrifuged three times at 5000 rpm for 10 min. Between bouts of centrifugation, the precipitate was dissolved in 0.15 ml of 10% trichloroacetic acid solution. The final supernatant was mixed with 0.5 ml of anhydrous ethanol and then placed in a refrigerator at 4 °C for 24 h. Then 0.4 ml of the supernatant was centrifuged at 10,000 rpm for 15 min. The resulting supernatant was added to 0.4 ml of 0.15 mol/l H2SO4 solution, placed in a boiling water bath for 15 min, and afterwards allowed to return to room temperature. When cooled, 0.4 ml of a 30% KOH solution was slowly added while stirring, and the solution was returned to the boiling water bath for 15 min before the trehalose determination. Thirty microliters of the liquid, with 300 μL of anthrone, was placed in a boiling water bath for 15 min and then held in the dark for 20~30 min. The absorbance of the sample at 620 nm was then used to determine the level of trehalose by comparison to a known standard.
To determine the glycogen level, the remaining precipitate was mixed with 0.5 mL of distilled water; once the precipitate was fully dissolved, it was assayed for glycogen using the same method as above for trehalose. After treatment with 300 μL of anthrone reagent, the absorbance of the sample at 620 nm was used to determine the glycogen level by comparison to a known standard. These measurements were repeated for five samples (each from the same time and place) for each parasitoid clade.
Protein content of overwintering larvae
Thirty overwintering A. mali larvae were pooled in a 1.5-mL centrifuge tube to which 0.1 mL of 0.04 mol/L phosphate buffer (pH = 7.0) was added. The samples were ground to homogenize the larvae and washed with 1.1 mL of 0.04 mol/L phosphate buffer (pH = 7.0). The homogenate was then fully extracted with 0.04 mol/L phosphate buffer at 20~25 °C for 4 h, and the solution was centrifuged at 6000 rpm for 10 min. A 20-μL aliquot of the resulting supernatant was added to 80 μL of 0.04 mol/L phosphate buffer (pH = 7.0) and 200 μL of Coomassie brilliant blue reagent (Shanghai Kayon Biological Technology Co., Ltd.). The sample was mixed by shaking and allowed to stand for 2 min before colorimetric measurement of absorbance at 595 nm, from which the free protein content was calculated by reference to a standard. This was done five times (from one sample date and place) for each parasitoid clade.
Developmental data at each temperature were expressed as the mean ± standard deviation (SD). Significant differences in developmental durations among temperatures were tested by one-way analysis of variance (ANOVA), and independent-samples t tests were used to compare the developmental durations of the two clades. All analyses were performed in SPSS 20.0.
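The analyses were run in SPSS; purely as an illustration, an equivalent independent-samples t test in Python (with invented durations, not the study's data) might look like:

from scipy import stats

# Hypothetical development durations (days) for the two clades at one temperature
shandong = [20.1, 21.3, 19.8, 20.7, 21.0]
liaoning = [18.9, 19.5, 18.7, 19.2, 19.0]

t, p = stats.ttest_ind(shandong, liaoning)  # two-sided independent-samples t test
print(f"t = {t:.3f}, P = {p:.4f}")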
Effective accumulated temperature and developmental threshold temperature
Developmental duration
Under the same temperature regime, there was no significant difference in developmental duration (egg to adult emergence) between males and females of the same clade. Developmental duration decreased significantly with increasing temperature in each clade of A. mali (Table 1).
Table 1 Oviposition to eclosion durations of the non-overwintering generation of Aphelinus mali for two genetic clades in China
For females, development duration (from oviposition to adult emergence) of the Shandong clade was significantly longer than that of the Liaoning clade (F = 0.390, df = 4, P = 0.048) at 20 °C (Table 1).
Adult longevity
Under the same temperature regime, female adults survived longer than males of the same clade, but the difference was not significant. Longevity differed significantly among temperatures within each clade (Table 2). In the Shandong clade, adult longevity decreased significantly (from 25.00 ± 2.34 to 10.10 ± 7.38 days for females and from 23.00 ± 5.29 to 8.33 ± 3.14 days for males) as temperature increased from 18 to 28 °C, while in the Liaoning clade longevity was greatest at 20 °C (34.47 ± 1.32 days for females, 31.94 ± 0.97 days for males) and shortest at 25 °C (14.84 ± 1.42 days for females, 10.94 ± 0.78 days for males).
Table 2 Longevity of adults of the non-overwintering generation of Aphelinus mali for two genetic clades in China
Longevity of adult females was significantly greater for the Liaoning clade than for the Shandong clade at 20 °C (F = 0.394, df = 4, P = 0.000) and 23 °C (F = 1.302, df = 4, P = 0.021). There were no significant differences in female longevity between the clades at the other temperatures (Table 2).
For males, adult longevity was significantly longer for the Liaoning clade than for the Shandong clade at 20 °C (F = 1.811, df = 4, P = 0.000) and 28 °C (F = 0.080, df = 4, P = 0.028), but not significantly different for the other temperatures (Table 2).
Lower developmental thresholds and total day-degree for stage completion
The results in Table 3 show that the lower developmental thresholds of both males and females of the Shandong clade were lower (F = 3.350, df = 3, P = 0.026; F = 0.150, df = 3, P = 0.012, respectively) than those of the Liaoning clade.
Table 3 The lower developmental thresholds and total day-degree of non-overwintering generation of Aphelinus mali for two genetic clades in China
Meanwhile, the total day-degree for stage completion of the Shandong clade was greater than that of the Liaoning clade (F = 5.907, df = 3, P = 0.028) (Table 3).
In this study, we found that the larval and pupal stages of Shandong clade females (9.32 ± 1.62 °C) and males (10.61 ± 1.58 °C) both had lower developmental threshold temperatures than those of the Liaoning clade (12.96 ± 1.56 °C and 13.33 ± 0.79 °C, respectively). The total day-degrees for stage completion from oviposition to adult eclosion were higher for Shandong clade females (134.61 ± 20.77 DD) and males (116.12 ± 16.60 DD) than for the Liaoning clade (88.42 ± 16.65 DD and 82.02 ± 7.36 DD). In Mols and Boers' study, the low-temperature threshold of the Nova Scotian (Canada) strain (8.6 °C) was lower than that of the Dutch strain (9.4 °C) (Mols and Boers 2001), allowing it to appear earlier in spring and thus provide better control. The developmental threshold temperature of the Shandong clade was lower than that of the Liaoning clade in the present study, suggesting that A. mali of the Shandong clade can emerge earlier in spring and may therefore provide better control of the woolly apple aphid while the pest population is still low.
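Lower developmental thresholds (T0) and thermal constants (K, in day-degrees) such as those in Table 3 are conventionally estimated by linear regression of developmental rate on rearing temperature; the paper does not spell out its procedure, so the following Python sketch with made-up numbers is only a generic illustration of that method (rate = a + bT, T0 = −a/b, K = 1/b):

import numpy as np

# Hypothetical rearing temperatures (deg C) and development durations (days)
temps = np.array([18.0, 20.0, 23.0, 25.0, 28.0])
days = np.array([21.0, 17.5, 13.0, 11.0, 9.0])

rates = 1.0 / days                  # developmental rate (1/day)
b, a = np.polyfit(temps, rates, 1)  # np.polyfit returns slope first, then intercept
t0 = -a / b                         # lower developmental threshold (deg C)
k = 1.0 / b                         # thermal constant (day-degrees)
print(f"T0 = {t0:.2f} deg C, K = {k:.1f} DD")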
Supercooling points and freezing temperatures of overwintering larvae of the two clades
The supercooling (− 26.04 °C) and freezing (− 25.54 °C) points of the Shandong clade were significantly higher than corresponding values for the Liaoning clade (− 27.66 °C and − 27.17 °C, respectively) (F = 0.167, df = 58, P = 0.024; F = 0.088, df = 58, P = 0.023, respectively) (Fig. 1).
Supercooling and freezing points of overwintering Aphelinus mali larvae for two genetic clades found in China. Asterisk indicates a significant difference at the 0.05 level
Freeze tolerance and freeze avoidance are two alternative strategies for survival at sub-zero temperatures, with freeze avoidance thought to predominate in moderately cold and predictable thermal environments (Sean et al. 2015). Freeze avoidance is the ability of certain insects to enhance their cold resistance by regulating the body's supercooling state (Zhang and Ma 2013) so that their body fluids remain liquid at temperatures below freezing. Both the supercooling point and the freezing point of overwintering Shandong clade larvae were higher than those of the Liaoning clade, suggesting that the cold resistance of the Liaoning clade is stronger than that of the Shandong clade.
Levels of cryoprotectants in overwintering parasitoid larvae of each clade
While the sample means for free-water content differed between the two clades, the difference was not significant (F = 0.625, df = 8, P = 0.072). In contrast, the fat content of the Shandong clade (45.5%) was significantly lower than that of the Liaoning clade (60.8%) (F = 2.836, df = 8, P = 0.017) (Fig. 2).
Free-water and fat content of Aphelinus mali for two genetic clades found in China. Asterisk indicates a significant difference at the 0.05 level
There were no significant differences in glycogen content between the two clades (F = 0.216, df = 8, P = 0.613). Trehalose levels, however, were significantly lower in the Shandong clade (5.73 μg/larva, 25.37 μg/mg) than in the Liaoning clade (7.57 μg/larva, 36.12 μg/mg) (P = 0.020 and P = 0.008, respectively). Protein content was also significantly lower in the Shandong clade (8.05 μg/larva, 35.68 μg/mg) than in the Liaoning clade (10.11 μg/larva, 48.20 μg/mg) (P = 0.003 and P = 0.001, respectively) (Fig. 3).
Glycogen, trehalose, and protein levels of Aphelinus mali for two genetic clades found in China
In cold environments, the free-water content of insect bodies is greatly reduced, increasing the concentration of body solutes and thus lowering the freezing point (Feng et al. 2014). Cold-resistant substances in insects are of two types: small molecules and antifreeze proteins (Chen et al. 2010). Cold-resistant small molecules include glycerol, sorbitol, mannitol, five-carbon polyols (probably arabitol or ribitol), trehalose, glucose, fructose, and some amino and fatty acids (Chen et al. 2010).
Zhang and Ma (2013) reported that insects in low-latitude areas have higher supercooling points than insects in high-latitude areas. By comparing the supercooling points and the levels of cold-resistant substances in the two clades of A. mali, we found that the cold resistance of the Liaoning clade was higher than that of the Shandong clade, possibly because of its higher latitude or because of genetic differentiation. Future studies should compare the cold resistance of the two clades using populations from the same latitude. A better understanding of the life table parameters of this parasitoid, as well as the genetic variation within populations, will allow more accurate use of this insect to control WAA in China.
Chen H, Liang GM, Zou LY, Guo F, Wu KM, Guo YY (2010) Research progresses in the cold hardiness of insects. Plant Prot 36(2):18–24
Chu MP, Liu ZQ, Wu Y, Hu J, Gong WR (2008) Occurrence and comprehensive prevention and control measures in Jiangsu province of woolly apple aphid. Jiangsu Agricultural Science 6:128–129
Feng YQ, Wang JL, Zong SX (2014) Review of insects overwintering stages and cold-resistance strategies. Chin Agric Sci Bull 30(9):22–25
Folch J, Lees M, Stanley GHS (1957) A simple method for the isolation and purification of total lipids from animal tissues. J Biol Chem 226(1):497–507
Gontijo LM, Cockfield SD, Beers EH (2012) Natural enemies of woolly apple aphid (Hemiptera: Aphididae) in Washington state. Environ Entomol 41(6):1364–1371
Jaume L, Simó A, Ferran G, José S, Georgina A (2015) Woolly apple aphid Eriosoma lanigerum Hausmann ecology and its relationship with climatic variables and natural enemies in Mediterranean areas. B Entomol Res 105(1):60–69
Long CD, Wang YP, Tang PZ (1960) Biological characteristics and its utilization of woolly apple aphid parasitoid (Aphelinus mali Haldeman). Acta Entomol Sin 10(1):1–37
Lu ZY, Liu WX, Ran HF, Qu ZG, Li JC (2013) Investigation of the population dynamics of Eriosoma lanigerum and its parasitoids Aphelinus mali in apple orchards in central Hebei. J Agric Univ Hebei 36(3):87–91
Ma LQ (2009) Effective accumulated temperature and developmental threshold temperature for Semanotus bifasciatus (Motschulsky) in Beijing. Forestry Studies in China
Mols PJM, Boers JM (2001) Comparison of a Canadian and a Dutch strain of the parasitoid Aphelinus mali (Hald.) (Hym., Aphelinidae) for control of woolly apple aphid Eriosoma lanigerum (Haussmann) (Hom., Aphididae) in the Netherlands: a simulation approach. J Appl Entomol 125(5):255–262
Sean DS, Rachel AS, James CB, Glenda AV (2015) Conserved and narrow temperature limits in alpine insects: thermal tolerance and supercooling points of the ice-crawlers, Grylloblatta (Insecta: Grylloblattodea: Grylloblattidae). J Insect Physiol 78(2):55–61
Wang XY, Jiang CT, Xu GQ (2011) Potential distribution of an invasive pest Eriosoma lanigerum in China. Chin J Appl Entomol 48(2):379–391
Wu Q, Wan FH, Li ZH (2009) Investigations on invasion situation and control strategies of Eriosoma lanigerum in China. Plant Prot 35(5):100–104
Zhang Q, Luo WC (2002) Occurrence characteristics of Eriosoma lanigerum and its control methods. Entomological Knowledge 39(5):340–342
Zhang R, Ma J (2013) Insect supercooling point and its influence factors. Tianjing Agricultural Science 19(11):76–84
Zhang RM, Zhou HX, Guo D, Tao YL, Wan FH, Wu Q, Chu D (2014) Two putative bridgehead populations of Aphelinus mali (Hymenoptera: Aphelinidae) introduced in China as revealed by mitochondrial DNA marker. Fla Entomol 97(2):401–405
Zhou HX, Guo JY, Wan FH, Yu Y (2010) Aphelinus mali to woolly apple aphid natural control and its protection and utilization. Acta Phytophylacica Sinica 02:153–158
Zhou HX, Zhang RM, Guo D, Tao YL, Wan FH, Wu Q, Chu D (2014) Analysis of genetic diversity and structure of two clades of Aphelinus mali (Hymenoptera: Aphelinidae) in China. Fla Entomol 97(2):699–706
We would like to thank Chai Xiaojing and Yan Hao of the College of Agronomy and Plant Protection, Qingdao Agricultural University, China, for their help. This work was supported by the National Natural Science Foundation of China (31371994, 31772232), the National Key Research and Development Plan (2016YFC1201200), the Major Scientific and Technological Innovation Project of Shandong Province (2017CXGC0214), the National Key Basic Research Development Plan Project (2013CB127600), and the Taishan Mountain Scholar Constructive Engineering Foundation of Shandong, China.
College of Plant Health and Medicine, Key Lab of Integrated Crop Pest Management of Shandong Province, Qingdao Agricultural University, Qingdao, 266109, China
Min Su, Xiumei Tan, Fanghao Wan & Hongxu Zhou
General Station of Plant Protection of Shandong Province, Jinan, 250100, China
Qinmin Yang
State Key Laboratory for Biology of Plant Diseases and Insect Pests, Institute of Plant Protection, Chinese Academy of Agricultural Sciences, Beijing, 100081, China
Fanghao Wan
MS carried out all of the experiments and drafted the manuscript. XT corrected the English manuscript. QY collected A. mali from Qingdao and Qinhuangdao. FW participated in the design of the study. HZ conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.
Correspondence to Hongxu Zhou.
Su, M., Tan, X., Yang, Q. et al. Temperature adaptability of two clades of Aphelinus mali (Hymenoptera: Aphelinidae) in China. Egypt J Biol Pest Control 28, 16 (2018). https://doi.org/10.1186/s41938-017-0009-9
Eriosoma lanigerum
Effective accumulated temperature
Supercooling point
Cold tolerance | CommonCrawl |
Inhibitory effects of lapachol on rat C6 glioma in vitro and in vivo by targeting DNA topoisomerase I and topoisomerase II
Huanli Xu, Qunying Chen, Hong Wang, Pingxiang Xu, Ru Yuan, Xiaorong Li, Lu Bai & Ming Xue
Lapachol is a natural naphthoquinone compound that possesses extensive biological activities. The aim of this study was to investigate the inhibitory effects of lapachol on rat C6 glioma both in vitro and in vivo, as well as the potential mechanisms.
The antitumor effect of lapachol was first evaluated in the C6 glioma model in Wistar rats. The effects of lapachol on C6 cell proliferation, apoptosis, and DNA damage were detected by the 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS)/phenazine methosulfate (PMS) assay, Hoechst 33258 staining, annexin V-FITC/PI staining, and the comet assay. Effects of lapachol on topoisomerase I (TOP I) and topoisomerase II (TOP II) activities were detected by TOP I- and TOP II-mediated supercoiled pBR322 DNA relaxation assays and molecular docking. TOP I and TOP II expression levels in C6 cells were also determined.
High-dose lapachol showed a significant inhibitory effect on C6 glioma in Wistar rats (P < 0.05). Lapachol inhibited proliferation and induced apoptosis and DNA damage in C6 cells in a dose-dependent manner. Lapachol inhibited the activities of both TOP I and TOP II. Lapachol-TOP I showed a relatively stronger interaction than lapachol-TOP II in the molecular docking study. Lapachol also inhibited TOP II expression levels, but not TOP I expression levels.
These results showed that lapachol could significantly inhibit C6 glioma both in vivo and in vitro, which might be related to inhibition of TOP I and TOP II activities, as well as of TOP II expression.
Glioma is the most common primary brain tumor and accounts for about 40% of all primary brain tumors. Despite advances in malignant glioma treatment in recent years, the prognosis of patients with malignant glioma remains extremely poor [1]. Temozolomide (TMZ), an oral DNA-alkylating agent that can cross the blood-brain barrier, is the major chemotherapeutic drug used clinically for the treatment of malignant gliomas [2]. However, malignant glioma cells quickly develop TMZ resistance, and the long-term clinical benefits of TMZ are poor [3]. Thus, new drugs that can improve therapeutic benefit and prolong the survival of malignant glioma patients are urgently needed.
Naphthoquinones are an important class of naturally occurring active ingredients with unique physical and chemical properties and pharmacological effects [4]. They are widely used as anticarcinogenic, antibacterial, antimalarial, and fungicidal agents [5]. Some well-known anticancer drugs (e.g., doxorubicin, mitomycin, and mitoxantrone) possess a quinonoid structure. Lapachol (2-hydroxy-3-(3-methylbut-2-enyl)naphthalene-1,4-dione) is a naphthoquinone that can be isolated from many species of the Bignoniaceae family [6]. Lapachol has a history of anticancer investigation that stretches back to the 1970s [7]. The cytotoxicity of lapachol and its derivatives has been evaluated in many tumor cells, such as oesophageal cancer cells [8], Ehrlich's carcinoma [9], K562 leukemic cells [9], A549 non-small cell lung cancer, PC-3 prostate cancer, SKMEL-28 melanoma, and LoVo colon cancer cell lines, and human glioma lines (U373 and Hs683) [10], as well as in several mouse models [11]. It has been reported that lapachol does not show any carcinogenic activity [12]. Studies on the mechanism of action of naphthoquinones and their derivatives have shown inhibitory effects on DNA topoisomerases [13–15].
It has been reported that some 1,4-naphthoquinone derivatives and lapachol show strong cytotoxicity toward glioma cells [10, 16]. We previously studied the in vivo metabolism of lapachol using a sensitive LC-ESI–MSn method [17] and found that lapachol was able to cross the blood-brain barrier, indicating that it might be effective in treating malignant glioma. Since the effect of lapachol on malignant glioma has not been extensively studied, we evaluated its effects and the potential mechanisms in this study.
Lapachol [2-hydroxy-3-(3-methyl-2-butenyl)-1,4-naphthoquinone] was purchased from Sigma-Aldrich (St. Louis, MO, USA). DMEM medium and fetal bovine serum were purchased from Invitrogen Co. (Carlsbad, CA). The 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS)/phenazine methosulfate (PMS) assay kit and the annexin V-FITC/PI staining kit were purchased from Promega (Madison, WI, USA). The comet assay kit was bought from Nanjing KeyGEN BioTECH Co., Ltd., China. Topoisomerase I and II drug screening kits were bought from TopoGEN Inc. (USA). Enzyme-linked immunosorbent assay kits for topoisomerase I (TOP I) and TOP II were bought from Cloud-Clone Corp. (Houston, USA). All other chemicals used were of the highest purity available from commercial sources.
Cells and cell culture conditions
The C6 glioma cell line was obtained from the Cell Resource Center of Peking Union Medical College (Beijing, China). The cells were cultured in DMEM supplemented with 10% fetal bovine serum at 37 °C in a humidified incubator containing 5% CO2. Cell viability was evaluated with Trypan blue staining; only cell suspensions with more than 95% viability were acceptable for implantation.
Establishment of rat C6 glioma model and lapachol treatment
Male Wistar rats weighing 160–180 g were used. After anesthesia with 10% chloral hydrate (3 mL/kg), the head of the rat was fixed in a stereotactic apparatus and the surgical area was disinfected with iodine. The bregma was exposed by making a midline incision on the dorsal aspect of the head, and a 1-mm-diameter hole was drilled in the cranial bone 1 mm posterior to the bregma and 3 mm lateral to the sagittal suture. About 3 × 10⁶ C6 cells in 15 μL phosphate-buffered saline (PBS) were injected over 10 min using a 25-μL microsyringe. The tip of the microsyringe was inserted 6 mm beneath the dura and then withdrawn 1 mm. After injection, the syringe remained in the brain for an additional 5 min and was then retracted slowly. The hole was sealed with bone wax and the wound was closed. The rats were allowed to recover for 7 days under standard conditions (12-h light/dark, 22 ± 2 °C) with food and water ad libitum.
Seven, 14 and 20 days after implantation, tumors in the brains of the rats were detected by Bruker 7.0 T Micro-MRI using a T2W RARE sequence with parameters as follows: TR/TE 3000/15 ms, slice thickness 1.0 mm, slice gap 1.0 mm, FOV 33 × 33 mm, Matrix 256 × 256, flip angle 180°, time 4.8 min. The maximum anteroposterior diameter (L), width (W) and height (H) in the largest enhanced areas on the horizontal and coronal planes were measured. The tumor volumes (V) were calculated as follows:
$$ V=\frac{4}{3}\pi\left(\frac{L}{2}\right)\left(\frac{W}{2}\right)\left(\frac{H}{2}\right)=\frac{\pi}{6}\,L\,W\,H\ \left(\mathrm{mm}^3\right) $$
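In code, the same ellipsoid approximation reads as follows (an illustrative sketch with hypothetical diameters):

import math

def tumor_volume_mm3(l_mm, w_mm, h_mm):
    # (4/3) * pi * (L/2) * (W/2) * (H/2) = pi/6 * L * W * H
    return math.pi / 6.0 * l_mm * w_mm * h_mm

print(tumor_volume_mm3(6.0, 5.0, 4.0))  # ~62.8 mm^3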
Then, rats were sacrificed by decapitation and the brain tissues were isolated. The tissues were formalin-fixed, paraffin-embedded, and then cut into 10-μm-thick sections. The sections were subjected to hematoxylin/eosin (HE) staining and immunohistochemical staining for glial fibrillary acidic protein (GFAP) as previously described [18]. The sections from each animal were analyzed by a pathologist.
Seven days after glioma implantation, the rats were randomly divided into five groups: a control group (0.5% CMC-Na, n = 8), a TMZ group (25 mg/kg, n = 10), and three lapachol groups (low dose, 5 mg/kg; middle dose, 25 mg/kg; high dose, 100 mg/kg; n = 8 in each group). The drugs were given by intragastric administration once daily, and body weight was recorded each day. Seven and 13 days after treatment began, tumors in the brains of the rats were detected by Bruker 7.0 T Micro-MRI using a T2W RARE sequence with the parameters mentioned above, and tumor volumes were calculated as described above. All experiments were performed with the approval of the Capital Medical University Ethics Committee in Beijing, China (number 37363).
Anti-proliferation assay
We investigated the effects of lapachol on C6 cells by the MTS/PMS assay. Cells in logarithmic growth phase were plated in 96-well plates at a density of 3000 cells/well for 24 h. The cells were incubated with 1.25, 2.5, 5, 10, or 20 μM lapachol for 48 h. Then, MTS and PMS mixed at a ratio of 20:1 were immediately added to the culture medium. After 2 h, formazan production was measured at 490 nm using a Thermo Scientific™ Multiskan™ GO microplate spectrophotometer. The inhibitory rates and half-maximal inhibitory concentration (IC50) were then calculated.
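IC50 values of this kind are usually obtained by fitting a sigmoidal dose-response curve to the inhibition data. The paper does not state how the fit was done, so the following Python sketch (with invented inhibition values at the study's doses) only illustrates one common approach:

import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, h):
    # Percent inhibition as a function of concentration c (Hill equation, 0-100%)
    return 100.0 * c**h / (ic50**h + c**h)

conc = np.array([1.25, 2.5, 5.0, 10.0, 20.0])     # uM lapachol
inhib = np.array([18.0, 36.0, 58.0, 75.0, 88.0])  # hypothetical % inhibition
(ic50, h), _ = curve_fit(hill, conc, inhib, p0=[5.0, 1.0])
print(f"IC50 = {ic50:.2f} uM (Hill slope {h:.2f})")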
Apoptotic analysis
Hoechst 33258 staining was used to detect the apoptotic morphology of the treated cells. Cells were seeded in 96-well plates at 3000 cells/well overnight and then treated with 1, 5, or 10 μM lapachol for 48 h. The cells were then washed and stained with 10 μg/mL Hoechst 33258 for 20 min at 37 °C. After washing with PBS, morphologic changes of the cells were observed under a fluorescence microscope and photographed.
Annexin V-FITC/PI staining was used to measure apoptosis rates. Briefly, cells were seeded in 60-mm culture dishes at 3 × 10⁵ cells/mL and incubated overnight. The cells were treated with 1, 5, or 10 μM lapachol for 48 h, then collected and resuspended in 500 μL detection buffer, to which 5 μL PI and 5 μL annexin V-FITC were added. The cells were incubated for 15 min in the dark and analyzed on a BD FACSCalibur™ system.
Comet assay
The comet assay was performed using the Comet Assay kit (KeyGEN BioTECH Co., Nanjing, China) according to the manufacturer's instructions. After treatment with 1, 5, or 10 μM lapachol for 48 h, the cells were harvested and resuspended in 1 mL ice-cold PBS. Then 10 μL of cell suspension (10⁴ cells) was mixed with 75 μL low-melting agarose at 37 °C for 20 min and added to clean microscope slides that had been covered with 100 μL 0.75% normal-melting agarose, and the gels were solidified at 4 °C for 10 min. The slides were then lysed for 1–2 h, immersed in alkaline buffer (1 mM EDTA and 300 mM NaOH), and subjected to electrophoresis at 25 V for 30 min. Finally, 2 μL PI was dropped onto each slide, covered with a clean cover slip, and observed under a fluorescence microscope. A total of 50 C6 cells were randomly analyzed with an image analysis system (Komet 5.5, Kinetic Imaging Ltd., UK), and DNA migration was quantified by measuring the "tail intensity" (% tail DNA).
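Tail intensity (% tail DNA) is simply the fraction of a comet's total fluorescence that lies in the tail; a minimal sketch with hypothetical intensities:

def percent_tail_dna(head_intensity, tail_intensity):
    # % tail DNA = tail / (head + tail) * 100
    return tail_intensity / (head_intensity + tail_intensity) * 100.0

print(percent_tail_dna(8200.0, 1800.0))  # 18.0% tail DNA for one hypothetical cell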
TOP I and II mediated DNA relaxation assays
The effects of lapachol on TOP I and TOP II activities were assessed by the conversion of supercoiled pBR322 DNA to its relaxed form using topoisomerase I and II drug screening kits (TopoGEN). TOP I activity was studied in a 20-μL reaction system including 1 μL human TOP I, a variable volume of lapachol (final concentration: 1, 10, or 100 μM) or camptothecin (50 μM), 4 μL 5× complete assay buffer, 1 μL pBR322 DNA, and a variable volume of H2O (to make up the volume). The reaction mixtures were incubated at 37 °C for 30 min. The reactions were stopped with 2 μL 10% SDS and then incubated with proteinase K (50 μg/mL) at 37 °C for 15 min. The samples were run on a 1% agarose gel with 0.5 μg/mL ethidium bromide at 50 V. The gel was destained in water before being photographed under UV light. The known TOP I inhibitor camptothecin was used as the positive control. Inhibition of TOP II activity by lapachol was assayed using a similar procedure according to the manufacturer's instructions, with the known TOP II poison etoposide as the positive control.
Molecular docking studies
A molecular docking study was performed to compare the modes of action of lapachol-TOP I and lapachol-TOP II using the MOE 2010 software package. Crystal structures of a TOP I–ligand complex (PDB entry: 1SC7, 3.0 Å) and a TOP II–ligand complex (PDB entry: 1ZXM, 1.87 Å) were used. All water molecules were removed, and the docking active pockets were defined by the ligand molecules. The detailed docking parameters for TOP I were as follows: placement method (Triangle Matcher), first scoring function (Affinity dG), saved poses (100); refinement (forcefield), second refinement scoring function (London dG), saved poses (30). The detailed docking parameters for TOP II were as follows: placement method (Triangle Matcher), first scoring function (London dG), saved poses (30); refinement (forcefield), second refinement scoring function (none), refinement saved poses (10). To verify whether the MOE software was suitable for docking TOP I and TOP II, the ligand conformations of 1ZXM and 1SC7 were extracted and re-docked into the active pockets. The first 10 and 30 conformations by docking score were saved, and the root-mean-square deviation (RMSD) values between the docked and initial conformations were calculated. Then, the ligand molecules in the crystal structures were re-docked into the defined active pockets, and the scores of the ligand conformations after docking and of the original conformations in the crystal structures were calculated.
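The RMSD criterion used here compares matched atom positions between the re-docked pose and the crystallographic pose; a minimal sketch of that calculation (random coordinates standing in for real poses):

import numpy as np

def rmsd(coords_a, coords_b):
    # Root-mean-square deviation between two (N, 3) arrays of matched atom coordinates
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum(axis=1).mean())

pose_crystal = np.random.rand(25, 3) * 10.0  # hypothetical ligand pose (Angstrom)
pose_docked = pose_crystal + np.random.normal(0.0, 0.5, (25, 3))
print(f"RMSD = {rmsd(pose_crystal, pose_docked):.3f} A")  # values < 2.0 A pass the check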
Detection of TOP I and TOP II levels in C6 cells
TOP I and TOP II expression levels in the cells were determined using enzyme-linked immunosorbent assay kits for TOP I and TOP II (Cloud-Clone Corp.). Briefly, after treatment with 1, 5, or 10 μM lapachol for 48 h, the cells were collected and subjected to ultrasonication three times. The supernatants were collected and protein concentrations were determined. Then, 100 μL each of the standard dilutions (20, 10, 5, 2.5, 1.25, 0.625, and 0.312 ng/mL), blank, and samples were added to the plate wells and incubated for 2 h at 37 °C. Then 100 μL of Detection Reagent A working solution was added to each well and incubated for 1 h at 37 °C. After washing, 100 μL of Detection Reagent B working solution was added to each well and incubated for 30 min at 37 °C. After washing, 90 μL of Substrate Solution was added and the wells were incubated for 20 min at 37 °C. Finally, 50 μL of Stop Solution was added and the optical density (OD) was read at 450 nm. A standard curve was constructed by plotting the mean OD against the concentration of each standard, and the concentrations of TOP I and TOP II in the samples were calculated by comparing their ODs to the standard curve.
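Reading sample concentrations off the standard curve can be illustrated with a simple linear fit of OD against standard concentration (commercial ELISA curves are often fitted with four-parameter logistic models instead; all values below are invented):

import numpy as np

std_conc = np.array([0.312, 0.625, 1.25, 2.5, 5.0, 10.0, 20.0])  # ng/mL standards
std_od = np.array([0.05, 0.09, 0.17, 0.33, 0.62, 1.20, 2.31])    # hypothetical ODs

slope, intercept = np.polyfit(std_conc, std_od, 1)  # OD = slope * conc + intercept
sample_od = 0.48
print(f"sample ~ {(sample_od - intercept) / slope:.2f} ng/mL")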
Real-time polymerase chain reaction (RT-PCR) for TOP I and TOP II
After incubation with 1, 5, or 10 μM lapachol for 48 h, total RNA was extracted from the cells using TRIzol (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. Reverse transcription was carried out using the PrimeScript™ RT Master Mix (TaKaRa, Japan), and real-time PCR was conducted using SYBR Green dye (TaKaRa, Japan). The primers were as follows: TOP I, 5′-CTCAGCCGTTTCTGGAGTCT-3′ (forward) and 5′-TCAGCATCATCCTCATCTCG-3′ (reverse); TOP IIα, 5′-ACAATTGGCCGCTAAACTTG-3′ (forward) and 5′-GCGAGTGTGCTGGTCACTAA-3′ (reverse); GAPDH, 5′-TCACCAGGGCTGCTTTTAAC-3′ (forward) and 5′-GACAAGCTTCCCGTTCTCAG-3′ (reverse). Real-time PCR was performed in triplicate on a CFX96 Real-Time PCR Detection System (Bio-Rad, USA), and the 2^−ΔΔCT method was used to determine relative gene expression.
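The 2^−ΔΔCT calculation can be written out explicitly; a minimal sketch with invented Ct values:

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^-ddCt: normalize target to the reference gene, then to the untreated control
    d_ct_treated = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: TOP IIalpha vs GAPDH, lapachol-treated vs control
print(relative_expression(26.8, 17.2, 25.1, 17.0))  # ~0.35-fold of control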
Lapachol could decrease tumor volumes without affecting the body weights of glioma-bearing rats
MRI, HE, and immunohistochemical staining were used to confirm successful establishment of the rat C6 glioma model. Tumors in the brains of the rats were detected by Bruker 7.0 T Micro-MRI using a T2W RARE sequence. As shown in Fig. 1a, tumor volumes increased significantly on day 14 and day 20 compared with day 7. Rats with an intracerebral tumor maximum diameter of >5.0 ± 0.5 mm were considered glioma-bearing animals. HE staining showed obvious pathologic changes in the tumor tissue areas (Fig. 1b, arrow). GFAP is a recognized marker for diagnosing tumors of astrocytic origin, and immunohistochemical staining showed positive GFAP staining in the tumor area, confirming that the tumor tissue was of astrocytic origin (Fig. 1c, arrow).
MRI, HE and immunohistochemical staining results for confirming the establishment of rat C6 glioma model. a MRI image of glioma-bearing rats on day 7, 14 and 20 after implantation. Glioma tumor is indicated with an arrowhead. b Representative HE staining result for the implanted gliomas. Original magnification: a. ×20; b. ×100. c. Representative immunohistochemical staining for GFAP. Positive cells in glioma tumor are indicated with an arrowhead. Original magnification: a. ×200; b. × 400
Tumors in the brains of the rats in each group were detected by MRI on days 7, 14, and 20 after implantation (Fig. 2), and tumor volumes were calculated as described above. As shown in Fig. 3a, tumor volumes in the high-dose lapachol and TMZ groups were significantly decreased compared with the control group (P < 0.05), whereas no obvious growth inhibition was observed in the low- and middle-dose lapachol groups (P > 0.05). Moreover, no obvious body weight changes were found in the lapachol- or TMZ-treated groups compared with the control group (P > 0.05). These results suggested that high-dose lapachol and TMZ treatment could significantly decrease tumor growth without affecting the body weights of the glioma-bearing rats.
MRI examination for C6 glioma in the brains of the Wistar rats 7, 14 and 20 days after implantation. On day 7, the rats were treated with 0.5% CMC-Na solution (a, control), 25 mg/kg TMZ (b), 5 mg/kg lapachol (c), 25 mg/kg lapachol (d), and 100 mg/kg lapachol (e)
Tumor volumes (a) and body weight changes (b) of Wistar rats after being treated with TMZ and different concentrations of lapachol for 13 days. * P < 0.05, compared to the control group. Control: 0.5% CMC-Na solution; TMZ, 25 mg/kg; L: 5 mg/kg lapachol; M: 25 mg/kg lapachol; H: 100 mg/kg lapachol. The results are expressed as mean values ± SD
Lapachol could inhibit the proliferation of C6 cells
As shown in Fig. 4a, lapachol exerted an obvious anti-proliferative effect on C6 cells in a dose-dependent manner. The IC50 of lapachol in C6 cells was 3.7 ± 1.4 μM. The morphological changes of C6 cells after treatment with 1.0, 5.0, and 10.0 μM lapachol are shown in Fig. 4b: cell shrinkage, elongated shapes, and decreased cell numbers were found in all lapachol-treated groups.
Effects of different concentrations of lapachol on proliferation and morphology of C6 glioma cells. a Inhibitory rates of lapachol on C6 cell proliferations. b Cell morphology of C6 cells after being treated with control (a), 1 μM lapachol (b), 5 μM lapachol (c), and 10 μM lapachol (d)
Lapachol could induce apoptosis and DNA damage of C6 cells
The Hoechst 33258 staining results are shown in Fig. 5a. Cells in the lapachol-treated groups showed dose-dependent nuclear and cytoplasmic condensation, indicating an early apoptotic event, while cells in the control group displayed homogeneously distributed chromatin within the nucleus.
Effects of different concentrations of lapachol on apoptosis and DNA damage of C6 cells. a Hoechst 33258 staining of C6 glioma cells after treatment with control (a), 1 μM lapachol (b), 5 μM lapachol (c), and 10 μM lapachol (d). Original magnification: ×200. b Annexin V-FITC/PI staining for apoptosis rates of cells after treatment with control (a), 1 μM lapachol (b), 5 μM lapachol (c), and 10 μM lapachol (d). c Comet assay results of cells after treatment with control (a), 1 μM lapachol (b), 5 μM lapachol (c), and 10 μM lapachol (d). Original magnification: ×200. d Tail intensity of C6 cells in the comet assay. Mean ± SD of three independent experiments. * P < 0.05 compared with the control
As shown in Fig. 5b, lapachol treatment significantly increased the number of apoptotic cells (P < 0.05). The apoptotic rates of C6 cells were 22.2 ± 2.4%, 33.4 ± 3.7%, and 45.8 ± 9.1% after treatment with 1.0, 5.0, and 10.0 μM lapachol, respectively. Moreover, the effect of lapachol on early apoptosis was more pronounced than its effect on late apoptosis. These results indicated that lapachol induces apoptosis of C6 cells in a dose-dependent manner.
We measured DNA damage by observing comet tails after treating C6 cells with different concentrations of lapachol. As shown in Fig. 5c, obvious DNA tails were found in C6 cells treated with 1.0, 5.0, and 10.0 μM lapachol. Tail intensity (% tail DNA) showed that 1.0, 5.0, and 10.0 μM lapachol significantly induced DNA migration in C6 cells, indicating that lapachol induces DNA damage in a concentration-dependent manner (Fig. 5d).
Lapachol could inhibit the activities of TOP I and TOP II
The effects of lapachol on TOP I and TOP II activities were assessed by the conversion of supercoiled pBR322 DNA to its relaxed form. As shown in Fig. 6a, 10.0 and 100.0 μM lapachol showed obvious inhibitory effects on TOP I activity, with an increased amount of supercoiled pBR322 DNA compared with the control. Figure 6b shows that linearized pHOT DNA products were formed in the etoposide and the 1.0, 10.0, and 100.0 μM lapachol treated groups, indicating that lapachol, like etoposide, acts as a TOP II poison.
Effects of lapachol on TOP I and TOP II activities by the conversion of supercoiled pBR322 DNA to its relaxed form. TOP I and TOP II drug screening kits were used. a TOP I inhibition assay result with lapachol; b TOP II inhibition assay result with lapachol. The known TOP I inhibitor camptothecin and the known TOP II poison etoposide were used as positive controls
Molecular docking study
A molecular docking study was performed to better understand the possible interaction modes of lapachol-TOP I and lapachol-TOP II. The structures of TOP I and TOP II used in the docking study were obtained from the Protein Data Bank (PDB entries: 1SC7 and 1ZXM). To verify whether the MOE software was suitable for docking TOP I and TOP II, the RMSD values between the docked and initial conformations were calculated; in general, if the RMSD is less than 2.0 Å, the software is considered suitable for the docking study. The RMSDs for TOP I and TOP II calculated in the present study were 1.022 and 0.779 Å, indicating that MOE could reproduce well the binding modes of the ligands with TOP I and TOP II in the crystal structures. The docking scores of the initial ligands in the protein crystal structures were −17.78 and −11.43, respectively, and the scores of lapachol-TOP I and lapachol-TOP II were −12.59 and −6.71, respectively. The molecular docking modes are shown in Fig. 7. Lapachol-TOP I showed a relatively stronger interaction than lapachol-TOP II, with more conjugate and hydrogen-bond interactions, whereas lapachol-TOP II showed only hydrogen-bond interactions.
The molecular docking for the interaction modes of lapachol-TOP I (a) and lapachol-TOP II (b). MOE 2010 software package was used for docking TOP I and II. Crystal structures of TOP I-ligand complex (PDB entry: 1SC7, 3.0 Å) and TOP II-ligand complex (PDB entry: 1ZXM, 1.87 Å) were used. The scores of lapachol-TOP I and lapachol-TOP II were -12.59 and -6.71, respectively
Lapachol could inhibit the expressions of TOP II in C6 cells
TOP I and TOP II protein levels in the cells were determined by enzyme-linked immunosorbent assay. Lapachol significantly inhibited TOP II expression in C6 cells in a dose-dependent manner (P < 0.05), but not TOP I expression. After treatment with 1.0, 5.0, and 10.0 μM lapachol, TOP II expression in C6 cells decreased to 0.31, 0.22, and 0.09 times that of the control (Fig. 8b), whereas lapachol showed no obvious inhibitory effect on TOP I expression levels (Fig. 8a). RT-PCR was performed to detect the mRNA expression levels of TOP I and TOP II. Lapachol significantly inhibited TOP II mRNA levels in a dose-dependent manner (P < 0.05), but not TOP I mRNA levels (P > 0.05) (Fig. 8c and d). After treatment with 1.0, 5.0, and 10.0 μM lapachol, TOP II mRNA levels in C6 cells decreased to 0.61, 0.44, and 0.23 times that of the control (Fig. 8d).
The relative contents of TOP I and TOP II in C6 glioma cells after being treated with control, 1 μM lapachol, 5 μM lapachol, and 10 μM lapachol for 48 h. a relative TOP I protein level by ELISA; b relative TOP II protein level by ELISA; c relative TOP I mRNA level by RT-PCR; d relative TOP II mRNA level by RT-PCR. Mean ± SD of three independent experiments. *P < 0.05 compared with control
Lapachol is a naturally occurring naphthoquinone derived from Bignoniaceae (Tabebuia sp.) that possesses various activities, including anti-inflammatory, antibiotic, antifungal, antitumor, and immunomodulatory effects [19]. The antitumor effects of lapachol have been studied extensively for many years: the cytotoxicity of lapachol and its derivatives has been evaluated in many tumor cells and several mouse models [8–11], and lapachol was reported to show strong cytotoxicity toward glioma cells [10]. We previously found that lapachol is able to cross the blood-brain barrier, indicating that it might be effective in treating malignant glioma [17]. Thus, in this study we evaluated the inhibitory effects of lapachol on malignant glioma both in vitro and in vivo. The MTS/PMS assay showed that lapachol exhibited strong inhibitory effects on C6 cells in a dose-dependent manner, with an IC50 of 3.7 ± 1.4 μM. For the in vivo experiment, the rat C6 glioma model was established. Tumor volumes in the brains of rats in the high-dose lapachol and TMZ groups were significantly decreased compared with the control group (P < 0.05) (Fig. 3a), whereas no obvious growth inhibition was observed in the low- and middle-dose lapachol groups (P > 0.05). Moreover, no obvious body weight changes were found in the lapachol- or TMZ-treated groups compared with the control group (P > 0.05) (Fig. 3b). These results suggest that high-dose lapachol, like TMZ, can significantly decrease C6 glioma growth without affecting the body weights of glioma-bearing rats.
Most current chemotherapeutic agents act through activation of apoptosis signaling pathways [20]. Our Hoechst 33258 and annexin V-FITC/PI staining results indicated that lapachol significantly induces apoptosis of C6 cells in a dose-dependent manner (Fig. 5a and b). We also measured the DNA damage caused by lapachol using the comet assay; the results indicated that lapachol induces DNA damage in a concentration-dependent manner (Fig. 5c and d). It has been reported that p53 mutations are closely related to the high proliferation rate of glioblastoma [21]. Asai et al. investigated the expression of p53 in several human (U251, U87, U343) and rat (C6, 9L) glioma cell lines and found that U87, U343, and C6 cells expressed wild-type p53 messages, while U251 and 9L cells expressed mutated p53 messages [22]. Wild-type p53 may therefore increase and mediate the multiple cellular responses to lapachol-induced DNA damage, such as DNA repair or apoptosis. Further studies are needed to confirm this hypothesis.
DNA topoisomerases have been identified as targets for drug development, and some TOP I or TOP II inhibitors are already used clinically [23]. Many studies have shown that the antitumor activity of naphthoquinones is associated with inhibition of DNA topoisomerase activities [13, 14]. Since lapachol is a naphthoquinone, we first examined its effects on TOP I and TOP II activities by the conversion of supercoiled pBR322 DNA to its relaxed form. The results showed that lapachol exhibits obvious TOP I and TOP II inhibitory activities (Fig. 6) and acts as a TOP II poison (Fig. 6b). The effect of lapachol on TOP II activity is consistent with previous studies [9], whereas its effect on TOP I activity is reported here for the first time.
To better understand the possible interaction modes of lapachol-TOP I and lapachol-TOP II, a molecular docking study was performed. Lapachol-TOP I showed a relatively strong interaction, with more conjugate and hydrogen-bond interactions, while lapachol-TOP II showed a relatively weak interaction, with only hydrogen-bond interactions (Fig. 7). The TOP II-poisoning action of lapachol may therefore arise from interactions between lapachol and the DNA molecule. We also evaluated the effects of lapachol on TOP I and TOP II expression levels by enzyme-linked immunosorbent assay and RT-PCR. The results showed that lapachol significantly inhibits the protein and mRNA levels of TOP II in a dose-dependent manner, but not those of TOP I (Fig. 8). Most TOP I and TOP II inhibitors inhibit the activities of TOP I and TOP II but not their expression levels [13, 14]. Further studies are needed to compare the inhibitory effects of lapachol on TOP I and TOP II with those of other TOP I and TOP II inhibitors at different concentrations.
TOP II has become an attractive target for cancer therapies, and TOP II inhibitors are among the most effective anticancer drugs [24]. Most currently used TOP II-targeted drugs, such as mitoxantrone and doxorubicin, cause DNA damage and cell apoptosis [24]. Almost all TOP II inhibitors in clinical use are TOP II poisons, including etoposide, doxorubicin, and mitoxantrone. In this study, we found that lapachol also acts as a TOP II poison.
In conclusion, these results showed that lapachol significantly inhibits C6 glioma both in vitro and in vivo, which might be related to inhibition of TOP I and TOP II activities, as well as of TOP II expression. The current study suggests that lapachol should be further explored for potential use in malignant glioma therapy.
Mittal S, Pradhan S, Srivastava T. Recent advances in targeted therapy for glioblastoma. Expert Rev Neurother. 2015;15:935–46.
Nakada M, Furuta T, Hayashi Y, Minamoto T, Hamada J. The strategy for enhancing temozolomide against malignant glioma. Front Oncol. 2012;2:98.
Wen W, Guang S, Binbin M, Xiangcheng H, Xin D, Bo Z. Chemotherapy for Adults with Malignant Glioma: A Systematic Review and Network Meta-analysis. Turk Neurosurg. 2015; doi:10.5137/1019-5149
Sunassee SN, Davies-Coleman MT. Cytotoxic and antioxidant marine prenylated quinones and hydroquinones. Nat Prod Rep. 2012;29:513–35.
Klotz LO, Hou X, Jacob C. 1,4-naphthoquinones: from oxidative damage to cellular and inter-cellular signaling. Molecules. 2014;19:14902–18.
Castellanos JRG, Prieto JM, Heinrich M. Red Lapacho (Tabebuia impetiginosa) - a global ethnopharmacological commodity? J Ethnopharmacol. 2009;121:1–13.
Rao KV, McBride TJ, Oleson JJ. Recognition and evaluation of lapachol as an antitumor agent. Cancer Res. 1968;28:1952–4.
Sunassee SN, Veale CG, Shunmoogam-Gounden N, Osoniyi O, Hendricks DT, Caira MR, et al. Cytotoxicity of lapachol, β-lapachone and related synthetic 1,4-naphthoquinones against oesophageal cancer cells. Eur J Med Chem. 2013;62:98–110.
Esteves-Souza A, Figueiredo DV, Esteves A, Câmara CA, Vargas MD, Pinto AC, et al. Cytotoxic and DNA-topoisomerase effects of lapachol amine derivatives and interactions with DNA. Braz J Med Biol Res. 2007;40:1399–402.
Fiorito S, Epifano F, Bruyère C, Mathieu V, Kiss R, Genovese S. Growth inhibitory activity for cancer cell lines of lapachol and its natural and semi-synthetic derivatives. Bioorg Med Chem Lett. 2014;24:454–7.
Li CJ, Li YZ, Pinto AV, Pardee AB. Potent inhibition of tumor survival in vivo by β-lapachone plus taxol: Combining drugs imposes different artificial checkpoints. Proc Natl Acad Sci USA. 1999;96:13369–74.
Costa WF, Oliveira AB, Nepomuceno JC. Lapachol as an epithelial tumor inhibitor agent in Drosophila melanogaster heterozygote for tumor suppressor gene wts. Genet Mol Res. 2011;10:3236–45.
Boonyalai N, Sittikul P, Pradidphol N, Kongkathip N. Biophysical and molecular docking studies of naphthoquinone derivatives on the ATPase domain of human topoisomerase II. Biomed Pharmacother. 2013;67:122–8.
Gurbani D, Kukshal V, Laubenthal J, Kumar A, Pandey A, Tripathi S, et al. Mechanism of inhibition of the ATPase domain of human topoisomerase IIα by 1,4-benzoquinone, 1,2-naphthoquinone, 1,4-naphthoquinone, and 9,10-phenanthroquinone. Toxicol Sci. 2012;126:372–90.
Krishnan P, Bastow KF. Novel mechanisms of DNA topoisomerase II inhibition by pyranonaphthoquinone derivatives-eleutherin, alpha lapachone, and beta lapachone. Biochem Pharmacol. 2000;60:1367–79.
Redaelli M, Mucignat-Caretta C, Isse AA, Gennaro A, Pezzani R, Pasquale R, et al. New naphthoquinone derivatives against glioma cells. Eur J Med Chem. 2015;96:458–66.
Bai L, Han Y, Yao JF, Li XR, Li YH, Xu PX, et al. Structural elucidation of the metabolites of lapachol in rats by liquid chromatography–tandem mass spectrometry. J Chromatogr B Analyt Technol Biomed Life Sci. 2014;944:128–35.
An Y, Guo W, Li L, Xu C, Yang D, Wang S, et al. Micheliolide derivative DMAMCL inhibits glioma cell growth in vitro and in vivo. PLoS ONE. 2015;10:e0116202.
Ravelo AG, Estévez-Braun A, Chávez-Orellana H, Pérez-Sacau E, Mesa-Siverio D. Recent studies on natural products as anticancer agents. Curr Top Med Chem. 2004;4:241–65.
Pistritto G, Trisciuoglio D, Ceci C, Garufi A, D'Orazi G. Apoptosis as anticancer mechanism: function and dysfunction of its modulators and targeted therapeutic strategies. Aging (Albany NY). 2016;8:603–19.
Parsons DW, Jones S, Zhang X, Lin JC, Leary RJ, Angenendt P, et al. An integrated genomic analysis of human glioblastoma multiforme. Science. 2008;321:1807–12.
Asai A, Miyagi Y, Sugiyama A, et al. Negative effects of wild-type p53 and s-Myc on cellular growth and tumorigenicity of glioma cells. Implication of the tumor suppressor genes for gene therapy. J Neuro-Oncol. 1994;19:259–68.
Xu Y, Her C. Inhibition of Topoisomerase (DNA) I (TOP1): DNA damage repair and anticancer therapy. Biomolecules. 2015;5:1652–70.
Ali Y, Abd HS. Human topoisomerase II alpha as a prognostic biomarker in cancer chemotherapy. Tumour Biol. 2016;37:47–55.
This work was supported by National Natural Science Foundation of China (81302906, 81273550 and 41306157).
Please contact the corresponding author to access the data.
HX wrote the article and performed some experiments. QC and HW performed some of the experiments, and analyzed the data. PX, RY, LB, and XL performed some of the experiments. MX designed the experiment and revised the article. All authors read and approved the final manuscript.
All experiments were performed with the approval from the Capital Medical University Ethics Committee in Beijing, China (number 37363).
Department of Pharmacology, School of Basic Medical Sciences, Capital Medical University, No.10 Youanmenwaixitoutiao, Fengtai District, Beijing, 100069, China
Huanli Xu, Qunying Chen, Hong Wang, Pingxiang Xu, Ru Yuan, Xiaorong Li, Lu Bai & Ming Xue
Correspondence to Ming Xue.
Xu, H., Chen, Q., Wang, H. et al. Inhibitory effects of lapachol on rat C6 glioma in vitro and in vivo by targeting DNA topoisomerase I and topoisomerase II. J Exp Clin Cancer Res 35, 178 (2016). https://doi.org/10.1186/s13046-016-0455-3
Lapachol
C6 glioma
Topoisomerase I
Topoisomerase II | CommonCrawl |
Can physics deal with the existence of Pi?
Thread starter richard9678
Can physics deal with a question on the existence of Pi
Hi. I'm not sure whether physics/cosmology can deal with my question. I suspect not, but I'll ask it anyway. The answer could be "No," and that would be the end of it.
Is there any situation where Pi = 3.142... does not exist as a fact? Thanks. Rich
I think there is a confusion of ideas here. ##\pi## is a number.
Yes, it is a number. We've discovered it, and it has been discoverable for a long time. But was there ever a time when it was undiscoverable because of some physics reason? Does it require space in order to "exist"?
Why would physics prevent the study of geometry? You can calculate the value of ##\pi## yourself if you know enough calculus to derive the Taylor series for ##\tan^{-1}##.
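For instance, Machin's formula pi = 16 arctan(1/5) - 4 arctan(1/239), combined with the arctan Taylor series, converges very quickly (a quick Python sketch):

import math

def arctan_series(x, terms=50):
    # Taylor series: arctan(x) = x - x^3/3 + x^5/5 - ...
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(terms))

pi_est = 16 * arctan_series(1/5) - 4 * arctan_series(1/239)
print(pi_est, abs(pi_est - math.pi))  # agrees with math.pi to double precision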
Circles are physically impossible as well, but we still have them
richard9678 said:
Pi exists, as a number, as a consequence of the axioms of number theory. It's very useful and has some physical interpretations, but mathematics itself doesn't depend on physics.
Just because no one is there to discover it does not mean it is not real or that it does not exist. With that thought in mind, let's say we are 100,000 years after the big bang: is there anything in physics knowledge that says Pi cannot have existed? I think the basic premise would be that if we have space, Pi must exist. If the answer is "no, Pi cannot have failed to exist," we go further back in time until we say "yes", if that's possible.
The only space you need is ##\mathbb{R}##.
Are you thinking of ##\pi## as the ratio of the circumference to the diameter of a "real" circle?
Pi is defined as the ratio of the circumference of a circle to its diameter in a Euclidean plane. The diameter of a circle is twice the radius, the radius being the shortest distance between the centre of the circle and a point on the circle, as measured in the Euclidean plane defined by the circle. Since all Euclidean planes are indistinguishable, this ratio does not change. So Pi does not change.
However, the earth's surface is not a Euclidean plane, and geodesic paths in real space-time (the shortest space-time metric between two points) do not follow a Euclidean plane. So the ratio of a circle's circumference to its diameter, as measured in curved space-time or in curved space, will generally differ from Pi and will vary depending on the curvature. But Pi itself will not change.
fresh_42
Just because no-one is there to discover it, does not mean it's not real or that it does not exist. With that thought in mind, let's say we are 100,000 years after the big bang, is there anything in physics knowledge that says Pi cannot have existed.
The existence of numbers has nothing to do with physical existence. Even ##1## does not physically exist.
I think the basic premise would be, if we have space Pi must exist. If the answer is "no" Pi cannot have not existed, we go farther back in time until we say "yes". If that's possible.
This makes no sense.
PeroK said:
There is no physical circle; it simply does not exist. It is always a model (a path of motion), and when realized (circles in the sand), it is no longer round under an electron microscope.
etotheipi said:
I suspect this is what @PeroK was hinting to the OP
I know. I wasn't really addressing @PeroK here. I just had to take the words to somehow emphasize the different meaning of existence for the OP.
##\pi## has nothing to do with physics. It's simply defined by the definition of the cosine function via its power series, [EDIT: typo corrected in view of #15]
$$\cos z=\sum_{k=0}^{\infty} \frac{1}{(2 k)!} (-1)^k z^{2k},$$
such that ##\pi## is the smallest positive real number at which the cosine equals ##-1##, which implies, by the way, that ##\cos(\pi/2)=0##. So you can define ##\pi/2## as the smallest positive real zero of cos.
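For instance (a quick numerical sketch), bisection on cos recovers pi/2 as its smallest positive zero:

import math

def smallest_positive_cos_zero(lo=0.0, hi=2.0, iters=60):
    # cos(0) = 1 > 0 and cos(2) < 0, so bisection brackets the first zero of cos
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.cos(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(2 * smallest_positive_cos_zero())  # 3.141592653589793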
S.G. Janssens
Yes, the issue was settled in post #2.
I remember about an experiment I had to do in school at some point. To my horror, it involved the "experimental determination of ##\pi##". In hindsight, this may have contributed to my decision to switch to mathematics at the end.
(On the other hand: Later on, when I studied physics first, one of the teachers who showed the most sympathy for my stubbornness and pedantry was an experimental condensed matter prof. whom I still think about with a lot of sympathy.)
##\pi## has nothing to do with physics. It's simply defined by the definition of the cosine function via its power series,
$$\cos z=\sum_{k=0}^{\infty} \frac{1}{(2 k)!} (-z)^{k},$$
I think there's a small typo, that$$\cos z=\sum_{k=0}^{\infty} \frac{(-1)^k}{(2 k)!} z^{2k},$$
S.G. Janssens said:
I remember about an experiment I had to do in school at some point. To my horror, it involved the "experimental determination of ##\pi##".
Numbers exist only within the human mind, or the human brain if you like. To our best knowledge, they correspond to electrochemical or electromagnetic signals inside our brains. When we measure a piece of rod or a piece of string and find its length to be ##\pi## (there are many different ways to construct, geometrically, a line segment of length ##\pi##), it doesn't mean that ##\pi## exists in physical reality. In physical reality there exist only the molecules of the rod or the string that we used.
A.T. said:
I wish it had been that tasty, then it would perhaps have been forgivable.
Delta2 said:
Numbers exist only within the human mind or the human brain if you want.
This point of view is attractive, but it leads one away from useful mathematics.
Suppose that we decide that all numbers have physical existence as concepts -- biochemical patterns existing in a brain somewhere. Then the Peano axioms are false. Not every integer has a successor, or a predecessor. Not every integer which exists today existed yesterday, nor may every one of them exist tomorrow. That's a pretty wishy-washy background within which to do mathematical work.
Edit: here is an example of an integer that did not exist yesterday, may not exist tomorrow [depending on disk erasure details] and which has neither successor nor predecessor at present.
fly:3:~$ openssl genrsa 2048 > temp.key
Generating RSA private key, 2048 bit long modulus
...........+++
e is 65537 (0x10001)
fly:4:~$ rm temp.key
Normally, one ignores the question of physical existence of numbers, decides that they exist in some Platonic realm or other and gets on with the business of solving the problem at hand.
jbriggs444 said:
I could never imagine this as a consequence of what I wrote.
Not sure here, i ll have to think this when i have slept better (unfortunately i am suffering from central sleep apnea and its totally random when i manage to sleep well) you might be right
I fully agree with the above.
It's also very human-centric. Some other species on our planet (and potentially many on other planets) have developed the idea of numbers.
Not every integer which exists today existed yesterday.
Well, if it's not here:
https://en.wikipedia.org/wiki/List_of_numbers
then it doesn't exist.
Which kind of experiment was this? What's interesting from a mathematical point of view is the experiment where you throw a needle on a floor with parallel strips painted on it and then get ##\pi## from probability theory. The only problem is that this converges very slowly ;-)).
https://en.wikipedia.org/wiki/Buffon's_needle_problem
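For anyone who wants to try Buffon's needle at the keyboard instead of on the floor, here is a minimal Monte Carlo sketch (all names are illustrative; note the small cheat that math.pi is used to sample the needle's angle, which a purist would avoid by sampling the angle geometrically):

```python
import random
import math

def buffon_estimate(n_throws=1_000_000, needle=1.0, gap=1.0):
    """Estimate pi by simulating Buffon's needle (requires needle <= gap)."""
    hits = 0
    for _ in range(n_throws):
        x = random.uniform(0.0, gap / 2)          # center-to-nearest-line distance
        theta = random.uniform(0.0, math.pi / 2)  # needle angle (the cheat noted above)
        if x <= (needle / 2) * math.sin(theta):   # the needle crosses a line
            hits += 1
    # P(cross) = 2*needle / (pi*gap), so pi is about 2*needle*n / (gap*hits)
    return 2 * needle * n_throws / (gap * hits)

print(buffon_estimate())  # converges slowly, roughly like O(1/sqrt(n))
```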
Here is another one:
And here is one with optics, but about intensity rather than ray geometry:
Gordianus
When I was 10 or 11 (memory fails), I "measured" Pi with a piece of string, several pipes and a ruler. A mathematician may scream, but I still remember it as a wonderful "experiment".
The ORACLE (Overall Results of an Analytical Consideration of the Looming Elections) of Blair is a senior class project at Montgomery Blair High School under the supervision of Mr. David Stein. We created this election model during the fall semester to predict the upcoming 2020 Presidential Election. This is the third iteration of election modeling at Blair; the Oracle successfully forecasted the 2018 Congressional Elections, and we also created a model for the 2016 battleground states. Our forecast is probabilistic, meaning we favor a candidate's win based on the number of times that candidate wins in our simulations. All decisions in creating this model were made by students in the class. We take full responsibility for the methods used in the model, as well as the predictions made by the model.
All calculations are based on the two-party vote (Democratic and Republican parties), so any votes for a third-party or independent candidate do not count toward our model. Positive margins favor Republican incumbent Donald Trump, while negative margins favor Democratic challenger Joe Biden.
Furthermore, since Maine and Nebraska uniquely distribute their electoral votes by congressional district, Maine CD-2 and Nebraska CD-2 were treated as "states", as they tend not to vote consistently with their state as a whole. Therefore, our model has 53 "states": the fifty states, the District of Columbia, Maine CD-2, and Nebraska CD-2.
Our model uses four factors to arrive at its predictions: Priors, Polls, Correlation, and Simulation. In Priors, we combined a partisan lean index, which we have termed the Blair Partisan Index (BPI), with predictive demographics for each state. With Polls, we averaged polls through a novel z-test method and determined variance using sampling variation, days before the election, and voting difficulty in each state. With Correlation, our model then correlated each state with the others through the Oracle State Correlations and Relationships (OSCAR) method. During Simulation, we combined the various components and simulated the election one million times, picking random starting points for each state and seeing how each state affects the others using OSCAR.
Priors
Blair Partisan Index (BPI)
Understanding how Democratic or Republican each state has voted historically helps us predict how they'll vote in the future. We found the Republican two-party vote percentage for each state in five previous elections: the 2008, 2012, and 2016 Presidential elections, and the 2014 and 2018 US House elections. For each election, we found the difference between the national two-party percentage and the state two-party percentage (i.e. how partisan the state was compared to the nation as a whole). We took a weighted average of these differences based on how representative we thought each election was using the following equation:
BPI = 0.5 × (2016 presidential difference) + 0.2 × (2012 presidential difference) + 0.2 × (2018 midterm difference) + 0.05 × (2014 midterm difference) + 0.05 × (2008 presidential difference)
This weighted average is our Blair Partisan Index (BPI), representing the bias toward Republicans in that state compared to their national standing. For example, if a state has a +5 BPI, then (disregarding polls, demographics, etc.) Republicans historically outperformed their national total by around 5% and Democrats underperformed their national total vote by around 5%. The resulting margin is actually double the BPI (e.g. if one candidate gets 5% more than the national average and the other gets 5% less, the margin between them is 10%).
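As a quick illustration, here is the weighted average in code; this is a sketch with illustrative names, not the ORACLE codebase:

```python
# Weights follow the BPI equation above.
BPI_WEIGHTS = {"pres_2016": 0.50, "pres_2012": 0.20, "mid_2018": 0.20,
               "mid_2014": 0.05, "pres_2008": 0.05}

def bpi(diff_by_election):
    """diff_by_election: election -> (state GOP two-party % minus national %)."""
    return sum(w * diff_by_election[e] for e, w in BPI_WEIGHTS.items())

# A state that has run about five points more Republican than the nation:
print(bpi({"pres_2016": 5.2, "pres_2012": 4.8, "mid_2018": 5.0,
           "mid_2014": 5.5, "pres_2008": 4.5}))  # about +5 BPI
```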
Understanding how different demographics have voted in the past also helps us predict how they'll vote in the future. We decided on the four most informative demographics:
the percentage of non-Hispanic white residents
the percentage of nonreligious residents
the urbanicity of the state1
the percentage of residents that have a college degree2
We took a linear regression between these statistics and the results of the 2016 Presidential Election and predicted the vote percentage of each state with the following equation:
Expected Trump Percentage Vote = 2.72 × (White Non-Hispanic) - 4.320 × (Non-Religious) - 4.47 × (College Degree) - 3.69 × (Urbanicity) + 48.261
For each state, we obtained demographic data from the 2010 Census and plugged that into the equation. The resulting expected Trump vote is what we call the "demographic prior".
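A one-function sketch of the demographic prior using the coefficients above (how the inputs are scaled, e.g. raw percentages versus standardized values, is not documented here, so treat the input units as an assumption):

```python
def demographic_prior(white_nh, nonreligious, college, urbanicity):
    """Expected Trump two-party % from the fitted regression above."""
    return (2.72 * white_nh - 4.320 * nonreligious
            - 4.47 * college - 3.69 * urbanicity + 48.261)
```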
Combining BPI and Demographics
To find the overall prior prediction, we combine the BPI and demographic analyses by taking a simple average of the two. For example, if the national vote percentage was 50% for Trump, and a state has a BPI of +5% and a demographic prior of 51%, then the overall prior prediction would be 53%.
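In code the combination is just a simple average of the BPI-shifted national vote and the demographic prior; the numbers below reproduce the worked example:

```python
def overall_prior(national_pct, bpi_value, demo_prior):
    """Average the BPI-shifted national vote with the demographic prior."""
    return ((national_pct + bpi_value) + demo_prior) / 2

print(overall_prior(50.0, 5.0, 51.0))  # 53.0, as in the example above
```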
Averaging Polls: Z-Test Method
We introduced a new method for averaging polls, which accounts for how volatile each state's population is, dubbed the "z-test method". For each day we ran the model, we gathered polls for each state from FiveThirtyEight starting from Aug. 12 and divided the polls into ten-day blocks. We divided these blocks by counting backwards in ten-day steps from the current day until all polls in that state were encompassed by a block. For example, if we ran the model on Sept. 10 for a state with its earliest poll on Aug. 15, there would be three blocks of ten days: Sept. 10 to Sept. 1, Aug. 31 to Aug. 22, Aug. 21 to Aug. 12.
Based on the number of polls in a block, we calculated the mean in three different ways:
If the block had only one poll, the mean of the block would be the result of that poll.
If the block had more than one poll, we conducted a meta-analysis3 on the polls to determine the mean.
If the block had no polls, all the polls in the past two blocks were considered to be a single block.
Rather than using the margin of error provided by the pollster, we calculated the variance for each poll by combining the sampling variation with the percentage of voters who indicated they weren't affiliated with either candidate: $$\text{varianceCombined} = \frac{pq}{n} + \left(\frac{1}{30}\left(1 - (\%\text{Trump} + \%\text{Biden})\right)\right)^2$$
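A sketch of that formula in code; we read p as the poll's Trump share with q = 1 - p, which is our reading rather than something stated above:

```python
def poll_variance(trump, biden, n):
    """trump, biden: poll shares as fractions of 1; n: sample size."""
    p, q = trump, 1 - trump
    unaffiliated = 1 - (trump + biden)
    return p * q / n + ((1 / 30) * unaffiliated) ** 2
```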
Using the mean and combined variance for each block, starting with the earliest block, we conducted z-tests between consecutive blocks, with a significance level of \(\alpha = 0.05\). A significant difference between two consecutive blocks indicates a significant shift in the voting intentions of the population, in which case polls from previous blocks would be discarded from the model. Otherwise, the two consecutive blocks would be combined. Therefore, the final block used for prediction would include all polls after the last significant population shift.
For example, if block one (Aug. 21 to Aug. 12; with three polls) and block two (Aug. 31 to Aug. 22; with five polls) were not significantly different according to our z-test, then all the polls from both blocks would be combined, with eight polls now in block two. Then, if block two and block three were significantly different, we would not consider any of the polls in block two and would only use block three.
By the end of this process, we'll have one final block to use for prediction, for which we calculate the mean and variance using the methods described above.
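The block-walking logic can be sketched compactly; the pooling step below uses inverse-variance weighting as a stand-in for the metafor meta-analysis described above, so treat it as an approximation of the class's actual procedure:

```python
Z_CRIT = 1.96  # two-sided alpha = 0.05

def final_block(blocks):
    """blocks: (mean, variance, n_polls) tuples ordered oldest to newest.
    Returns the pooled block of all polls since the last significant shift."""
    mean, var, n = blocks[0]
    for m2, v2, n2 in blocks[1:]:
        z = (mean - m2) / (var + v2) ** 0.5
        if abs(z) > Z_CRIT:
            mean, var, n = m2, v2, n2        # shift detected: discard older polls
        else:
            w1, w2 = 1 / var, 1 / v2         # no shift: pool the two blocks
            mean = (w1 * mean + w2 * m2) / (w1 + w2)
            var, n = 1 / (w1 + w2), n + n2
    return mean, var, n
```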
Additional Variance
Once we determine the mean and variance for each state based on polls alone, we add more variance based on how easy it is to vote in that state and how far our predictions are from election day. We do this by adding a multiple of the Cost of Voting Index4 for each state as well as a multiple of the number of days until election day, using the following equations:
$$\frac{0.6(CoVI +2.06)}{400}$$
$$\frac{1}{1600} \sqrt{\frac{electionDate-currentDate}{7}}$$
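Assuming the two terms are simply summed onto each state's poll variance (the text says both are "added", but the exact bookkeeping is ours), a sketch:

```python
def added_variance(covi, days_out):
    """Extra variance from voting difficulty (CoVI) and days until the election."""
    return 0.6 * (covi + 2.06) / 400 + (1 / 1600) * (days_out / 7) ** 0.5
```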
Combining Polls with BPI (Finding State Lean)
We combined the means from the priors and the means from the polls using a weighted average. Depending on how many polls were used in the final block of the z-test method, the weight for the mean of the polls in each state was calculated using the following equation:
$$\frac{1.92}{\pi}\arctan{(0.65 \times numPolls)}$$ The weight for the mean of the priors would be the complement5 of this equation. We will call this weighted average the state's lean.
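A sketch of the lean computation. Note that the arctan weight approaches (1.92/π)(π/2) = 0.96 as the number of polls grows, so even a heavily polled state keeps a small prior component:

```python
from math import atan, pi

def state_lean(prior_mean, poll_mean, num_polls):
    w = (1.92 / pi) * atan(0.65 * num_polls)  # weight on polls, capped near 0.96
    return w * poll_mean + (1 - w) * prior_mean
```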
Oracle State Correlations and Relationships (OSCAR)
Since we expect demographically similar states to vote similarly, it's important to consider how they might influence one another in their state elections. For example, if one state were to lean heavily toward Trump, we would need to take that into account and shift all other states that are correlated with it accordingly.
We decided on seven informative demographics:
the percentage of black residents
the percentage of hispanic residents
the urbanicity of the state
the median age of the state
the percentage of residents that have a college degree
For each state, we ran a regression against all other states with the seven demographics as predictors and stored the outputs in a square matrix, the Oracle State Correlations and Relationships (OSCAR).
Correlation Scheme
Our Correlation Scheme, which is used during simulations, calculates a net effect posed by each state onto every other state based on their demographic similarity. Given demographically similar state A and state B, if we predict a Trump win by 2% more than the state lean in state A, then we can expect a similar win for state B. Given state C, which is nothing like state A, we can expect very little impact on state C from state A.
Each time we run our model, we choose a random number for each state according to the normal distribution of that state's lean. We then take the difference of those random numbers from the state's leanings, and take the dot product of those differences with OSCAR. Then, we divide that product by the sum of that state's row in OSCAR. This value is the net effect of other states, which we add to the random number.
For example, consider states A, B and C, with the following hypotheticals:
State A:
Predicted 2% more than its lean
Correlated 0.8 with state B
Correlated 0.1 with state C
State B:
Predicted -3% relative to its lean
Correlated 0.8 with state A
Correlated 0.5 with state C
State C:
Predicted 4% more than its lean
Correlated 0.1 with state A
Correlated 0.5 with state B
The net effect of these states on each other would be:
For State A: (-0.03 × 0.8 + 0.04 × 0.1)/(0.8 + 0.1)
For State B: (0.02 × 0.8 + 0.04 × 0.5)/(0.8 + 0.5)
For State C: (0.02 × 0.1 - 0.03 × 0.5)/(0.1 + 0.5)
The correlation scheme for the entire model would be similar, using 53 states instead of only three states.
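The same arithmetic in matrix form; we assume OSCAR has a zero diagonal so that a state does not pull on itself, which is consistent with the example:

```python
import numpy as np

def net_effects(deviations, oscar):
    """deviations: each state's random draw minus its lean.
    oscar: square similarity matrix with zero diagonal.
    Returns each state's net pull from all the others."""
    return oscar @ deviations / oscar.sum(axis=1)

# The three-state example above:
oscar = np.array([[0.0, 0.8, 0.1],
                  [0.8, 0.0, 0.5],
                  [0.1, 0.5, 0.0]])
print(net_effects(np.array([0.02, -0.03, 0.04]), oscar))
```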
Simulation
In order to calculate how likely a candidate is to win the national election, we need to simulate how well each candidate will do in each state. After we find each state's lean and combined variance, we create a normal distribution for each state, centered at the state's lean, with the state's standard deviation6. Each time we run our model, a random number is chosen for each state according to its normal distribution. Depending on how much greater that random number is than the state's lean, a net effect is applied to each state based on the Correlation Scheme.
Once the net effect is added to the state's random number, a candidate will win that state's electoral votes based on who the state is in favor of. After all the electoral votes are tallied up, the candidate with 270 or more electoral votes will have won that one simulation.
Each run of the model simulates the election one million times. The results for each state and for the whole country are recorded and displayed in the national predictions as well as the individual state forecasts.
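Putting the pieces together, one run of the simulation might look like the sketch below; ev is the vector of electoral votes, the 50% threshold assumes leans are expressed as Trump's two-party share, and net_effects is the sketch above:

```python
import numpy as np

def win_probability(leans, sds, oscar, ev, n_sims=1_000_000, seed=0):
    """Fraction of simulations in which Trump reaches 270 electoral votes."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_sims):
        draw = rng.normal(leans, sds)                    # one outcome per state
        final = draw + net_effects(draw - leans, oscar)  # apply the correlation scheme
        if ev[final > 50.0].sum() >= 270:                # states above 50% go to Trump
            wins += 1
    return wins / n_sims
```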
Urbanicity is defined as the logarithm of the population living within 5 miles of a large city; obtained from The Economist. ↩
Defined as having a Bachelor's Degree or higher. ↩
A statistical procedure that combines the results of multiple independent studies, using the metafor package in R ↩
Obtained from Election Law Journal: Rules, Politics, and Policy. ↩
As in \(1 - \frac{1.92}{\pi} \cdot \arctan{(0.65 \cdot numPolls)}\) ↩
Defined as the square root of the combined variance. ↩
Presidential Election Model 2020
ORACLE of Blair is a project by seniors at Montgomery Blair High School in Silver Spring, Maryland. It was created during the Fall 2020 Political Statistics course taught by Mr. David Stein. Questions for the students about the model can be sent to [email protected], while Mr. Stein can be reached directly through the Blair website.
Any views or opinions expressed on this site are those of the students in Montgomery Blair High School's 2020 Political Statistics class and do not necessarily reflect the official position of Montgomery Blair High School.
Preprint ARTICLE | doi:10.20944/preprints202201.0442.v1
Global Powerful Alliance in Strong Neutrosophic Graphs
Henry Garrett
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Modified Neutrosophic Number; Global Powerful Alliance; R-Regular-Strong
Online: 28 January 2022 (15:09:54 CET)
A new setting is introduced to study the global powerful alliance. A global powerful alliance is a set of vertices, applied here in the setting of neutrosophic graphs. Neighborhood plays the key role in defining this notion, and neighborhood is defined based on strong edges. A strong edge gives a framework for neighborhood, and within it, sufficiently close vertices define the global powerful alliance based on strong edges. The structure of such sets is studied and general results are obtained. Some classes of neutrosophic graphs, excluding empty, path, star, and wheel and containing complete, cycle, and r-regular-strong, are investigated in terms of set, minimal set, number, and neutrosophic number. Neutrosophic number is used in this way: the three values of a vertex are used with equal shares to construct the number, called the "modified neutrosophic number". Summation of the three values of a vertex makes one number, and applying it to a set gives the neutrosophic number of the set. This approach facilitates identifying the minimal set and the optimal set, which form the minimal-global-powerful-alliance number and the minimal-global-powerful-alliance-neutrosophic number. Two different types of sets, namely global-powerful alliance and minimal-global-powerful alliance, are defined. Global-powerful alliance identifies such sets in a general vision, while minimal-global-powerful alliance focuses on sets from which deleting a vertex is impossible. The minimal-global-powerful-alliance number is the minimum cardinality among the cardinalities of all minimal-global-powerful alliances in a given neutrosophic graph. The new notions are applied in both the individual and family settings. The family of neutrosophic graphs has an open avenue, in that the family only contains the same classes of neutrosophic graphs. The results concern minimal-global-powerful alliance, the minimal-global-powerful-alliance number and its corresponding sets, the minimal-global-powerful-alliance-neutrosophic number and its corresponding sets, and the characterization of all minimal-global-powerful alliances; likewise minimal-t-powerful alliance, the minimal-t-powerful-alliance number and its corresponding sets, the minimal-t-powerful-alliance-neutrosophic number and its corresponding sets, and the characterization of all minimal-t-powerful alliances. The connections among t-powerful alliances are obtained. The number of connected components has some relations with this new concept, which yields some results. Some classes of neutrosophic graphs behave differently when the parity of the number of vertices differs; cycle and complete graphs illustrate these behaviors. Two applications concerning the complete model, as individual and as family, under the titles of timetable and scheduling, conclude the results and give further clarification and closing remarks. In this study, there is an open way to extend these results to families of these classes of neutrosophic graphs. Families of neutrosophic graphs have not been studied deeply, but it seems that analogous results hold. Slight progress has been made for families of these models, but there are open avenues to study families of other models, both the same and different. There is a question: how can two sets partitioning the vertex set of a graph be related to each other? The ideas of neighborhood and neighbors based on strong edges open the way to results.
A set is a global powerful alliance when the two sets partitioning the vertex set have a uniform structure: all members of the set have more neighbors inside the set than outside, and reversely for non-members, in such a way that the set is simultaneously t-offensive and (t-2)-defensive. A set is global if t=0. This leads to the notion of global powerful alliance. Different edges make different neighborhoods, but one style of edge is used, titled strong edge. These notions are applied to neutrosophic graphs as individuals and families of them. An independent set, as an alliance, is a special set with no neighbor inside, which implies some drawbacks for these notions. Finding well-known special sets is an open way to pursue this study. A special set whose members have only one neighbor inside characterizes the connected components, where the cardinality of its complement is the number of connected components. Some problems are proposed to pursue this study. Basic familiarity with graph theory and neutrosophic graph theory is presumed for this article.
Global Offensive Alliance in Strong Neutrosophic Graphs
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Modified Neutrosophic Number; Global Offensive Alliance; Complete Neutrosophic Graph
A new setting is introduced to study the global offensive alliance. A global offensive alliance is a set of vertices, applied here in the setting of neutrosophic graphs. Neighborhood plays the key role in defining this notion, and neighborhood is defined based on strong edges. A strong edge gives a framework for neighborhood, and within it, sufficiently close vertices define the global offensive alliance based on strong edges. The structure of such sets is studied and general results are obtained. Some classes of neutrosophic graphs, containing complete, empty, path, cycle, star, and wheel, are investigated in terms of set, minimal set, number, and neutrosophic number. Neutrosophic number is defined in a new way: for the first time, the three values of a vertex are used with equal shares to construct the number, called the "modified neutrosophic number". Summation of the three values of a vertex makes one number, and applying it to a set gives the neutrosophic number of the set. This approach facilitates identifying the minimal set and the optimal set, which form the minimal-global-offensive-alliance number and the minimal-global-offensive-alliance-neutrosophic number. Two different types of sets, namely global-offensive alliance and minimal-global-offensive alliance, are defined. Global-offensive alliance identifies such sets in a general vision, while minimal-global-offensive alliance focuses on sets from which deleting a vertex is impossible. The minimal-global-offensive-alliance number is the minimum cardinality among the cardinalities of all minimal-global-offensive alliances in a given neutrosophic graph. The new notions are applied in both the individual and family settings. The family of neutrosophic graphs is studied in the way that the family only contains the same classes of neutrosophic graphs. Three types of families of neutrosophic graphs, including an m-family of neutrosophic stars with a common neutrosophic vertex set, an m-family of odd complete graphs with a common neutrosophic vertex set, and an m-family of odd complete graphs with a common neutrosophic vertex set, are studied. The results concern minimal-global-offensive alliance, the minimal-global-offensive-alliance number and its corresponding sets, the minimal-global-offensive-alliance-neutrosophic number and its corresponding sets, and the characterization of all minimal-global-offensive alliances. The connections of global-offensive alliances with dominating sets and chromatic number are obtained. The number of connected components has some relations with this new concept, which yields some results. Some classes of neutrosophic graphs behave differently when the parity of the number of vertices differs; path, cycle, and complete graphs illustrate these behaviors. Two applications concerning the complete model, as individual and as family, under the titles of timetable and scheduling, conclude the results and give further clarification. In this study, there is an open way to extend these results to families of these classes of neutrosophic graphs. Families of neutrosophic graphs have not been studied deeply, but it seems that analogous results hold. Slight progress has been made for families of these models, but there are open avenues to study families of other models, both the same and different. There is a question: how can two sets partitioning the vertex set of a graph be related to each other? The ideas of neighborhood and neighbors based on strong edges open the way to results.
A set is a global offensive alliance when the two sets partitioning the vertex set have a uniform structure: all members of the set have more neighbors inside the set than outside. This leads to the notion of global offensive alliance. Different edges make different neighborhoods, but one style of edge is used, titled strong edge. These notions are applied to neutrosophic graphs as individuals and families of them. An independent set, as an alliance, is a special set with no neighbor inside, which implies some drawbacks for these notions. Finding well-known special sets is an open way to pursue this study. A special set whose members have only one neighbor inside characterizes the connected components, where the cardinality of its complement is the number of connected components. Some problems are proposed to pursue this study. Basic familiarity with graph theory and neutrosophic graph theory is presumed for this article.
What Do Global Metrics Tell Us About The World?
John Rennie Short, Justin Vélez-Hagan, Leah Dubots
Subject: Social Sciences, Geography Keywords: global indices, global metrics, global society, new global geographies, principal components analysis.
Online: 11 September 2018 (12:05:12 CEST)
There are now a wide variety of global metrics. To find the degree of overlap between these different measures, we employ a principal components analysis (PCA) of 15 indices across 145 countries. Our results demonstrate that the most important underlying dimension highlights that economic development and social progress go hand in hand with state stability. The results are used to produce categorical divisions of the world. The threefold division identifies a world composed of what we describe and map as Rich, Poor and Middle countries. A five-group classification provides a more nuanced categorization, described as: the Very Rich, Free and Stable; Affluent and Free; Upper Middle; Lower Middle; and Poor and Not Free.
Globalization, Quality and Systems Thinking: Integrating Global Quality Management and a Systems View
Aviva Bashan, Sigal Kordova
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: globalization; systems thinking; global quality management; global quality system
Online: 27 August 2020 (03:28:46 CEST)
A global approach towards quality management highlights the need for constructing a new body of knowledge that views the field of global quality from a systems perspective. Based on the results of field experiments, and in light of the need to develop new global quality management terminology, the current article presents several key concepts in this field, with emphasis on a systems-oriented rationale and perspective. As such, the article is an important stage in building this body of knowledge, and towards the conceptualization of key variables used in global quality management, from a systems approach that interacts with the fields of international management and strategic management.
Nestedness-Based Measurement of Evolutionarily Stable Equilibrium of Global Production System
Jiaqi Ren, Yu Han, Lizhi Xing, Xianlei Dong
Subject: Social Sciences, Econometrics & Statistics Keywords: global value chain; global economic integration; nestedness; evolutionarily stable equilibrium
Online: 1 July 2021 (14:20:58 CEST)
Nested structure is a structural feature, conducive to system stability, formed by the co-evolution of species in mutualistic ecosystems; it reflects an ecosystem's ability to return to a stable state after being disturbed. The co-opetition relationship and value flow between industrial sectors in the global value chain are similar to a mutualistic ecosystem, and the pattern of the global economic system is always changing in dynamic equilibrium. Nestedness theory is used in this article to define the generalist and specialist sectors in the global value chain and to analyze changes in the global supply pattern. We then study the mechanism by which the global economic system reaches a stable equilibrium and the role of different sectors in the stability of the economic system, so as to provide countermeasures for enhancing the stability of the global economic system. At the end of the article, the domestic trade network, export trade network and import trade network of each country are extracted, and an econometric model is designed to analyze how the microstructure of the production system affects a country's macroeconomic performance, thereby deriving the conclusion that the stability of the international trade network is crucial to a country's economic development.
Update on the SARS-CoV-2 (COVID-19) Outbreak: A Global Pandemic Challenge
Abhishek Kumar Soni
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: COVID-19; global pandemic; global health emergency; SARS-CoV-2
Online: 7 April 2020 (10:08:47 CEST)
The 2019 novel coronavirus (previously 2019-nCoV) or coronavirus infectious disease 2019 (COVID-19) outbreak is summarized as of March 29, 2020. COVID-19 is a highly transmissible and pathogenic viral infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first seen during an outbreak in Wuhan, China and has continued spreading from human to human around the globe. The disease is uncontrolled and the death toll keeps increasing. The world is facing a global challenge to protect human lives from the coronavirus outbreak. The number of infected patients is increasing day by day as COVID-19 becomes a pandemic. The World Health Organization (WHO) declared a global public health emergency on January 30, 2020. The disease had spread to around 201 countries, with 634835 total confirmed cases and 29891 deaths as of March 29, 2020. The goal of this review is to summarize and update the clinical/medical features of, and suggestions for diagnosis of, COVID-19 as a pandemic. A discussion of the various therapeutic algorithms, risk, prevention and control based on the latest reports is provided.
A Perspective for Economic and Social Unfoldings of AI
Hime Oliveira
Subject: Social Sciences, Economics Keywords: Fuzzy Logic; Global optimization; Mechanism design; Game theory; Artificial inference; Global learning
Online: 29 June 2022 (05:22:14 CEST)
This paper aims to introduce an overview of several aspects of so-called Artificial Intelligence, their potential impacts on economic and social dimensions, and suggestions for possible approaches to investment based upon effective and mature techniques. In this fashion, it is important to address everything from educational and academic issues to industrial densities and profiles, relative to a given region, country or continent. Even the etymological adequacy and psychological consequences of the denomination "Artificial Intelligence" need some reflection, and suggestions for a lucid replacement are presented. In addition, we offer suggestions about how specific firms can choose the right type of technique in order to improve profit and organizational efficiency. After all, what are the main transformations needed to amplify gains and structural improvements from the use of higher-level technological mechanisms? Which connections can be established between the pillars of evolutionary economics and this field of knowledge? Which institutional contexts are able to benefit from AI tools, inducing constructive externalities to firms in terms of education and technical and scientific skills upgrading, so as to reach higher levels of employment in the long term and limit unemployment in the short one? Which branches of so-called Artificial Intelligence are best suited to which types of activities? This work aims to contribute to the search for answers to these questions.
Global DNA Methylation in Cord Blood as a Biomarker for Prenatal Lead and Antimony Exposures
Yoshinori Okamoto, Miyuki Iwai-Shimada, Kunihiko Nakai, Nozomi Tatsuta, Yoko Mori, Akira Aoki, Nakao Kojima, Tatsuyuki Takada, Hiroshi Satoh, Hideto Jinno
Subject: Medicine & Pharmacology, Pharmacology & Toxicology Keywords: global DNA methylation; global DNA hydroxymethylation; cord blood DNA; lead; antimony; birth cohort
Online: 3 March 2022 (15:04:56 CET)
DNA methylation is an epigenetic mechanism for gene expression modulation and can be used as a predictor of future disease risks. A prospective birth cohort study was performed to clarify the effects of neurotoxicants on child development, namely, the Tohoku Study of Child Development, in Japan. This study aimed to evaluate the association of prenatal exposure to five toxic metals—arsenic, cadmium, mercury, lead (Pb), antimony (Sb), and polychlorinated biphenyls (PCBs, N = 166)—with global DNA methylation in umbilical cord blood DNA. DNA methylation markers, 5-methyl-2'-deoxycytidine (mC) and 5-hydroxymethyl-2'-deoxycytidine (hmC), were determined using liquid chromatography-tandem mass spectrometry. The mC content in cord blood DNA was positively correlated with Pb and Sb levels (r = 0.442 and 0.288, respectively) but not with cord blood PCBs. We also observed significant positive correlations among Pb levels, maternal age, and hmC content (r = 0.159 and 0.243, respectively). The multiple regression analysis among the potential predictors demonstrated consistent positive associations between Pb and Sb levels and mC and hmC content. Our results suggest that global DNA methylation is a promising biomarker for prenatal exposure to Pb and Sb.
Global Need for Physical Rehabilitation: Systematic Analysis from the Global Burden of Disease Study 2017
Tiago S. Jesus, Michel D. Landry, Helen Hoenig
Subject: Medicine & Pharmacology, Other Keywords: rehabilitation; global health; disability; global burden of disease; health services needs and demand
Online: 8 January 2019 (10:52:39 CET)
Background: To inform global health policies and resources planning, this paper analyzes evolving trends in physical rehabilitation needs, using data on Years Lived with Disability (YLDs) from the Global Burden of Disease Study (GBD) 2017. Methods: Secondary analysis of how YLDs from conditions amenable to physical rehabilitation evolved from 1990 to 2017, for the world and across countries of varying income levels. Linear regression analyses were used. Results: A 66.2% growth was found in estimated YLD Counts amenable to physical rehabilitation: a significant and linear growth of more than 5.1 billion YLDs per year (99%CI: 4.8–5.4; r2 = 0.99). Low-income countries more than doubled (111.5% growth) their YLD Counts amenable to physical rehabilitation since 1990. YLD Rates per 100,000 people and the percentage of YLDs amenable to physical rehabilitation also grew significantly over time, across locations (all p < 0.05). Finally, only in high-income countries did Age-standardized YLD Rates decrease significantly (p < 0.01; r² = 0.86). Conclusions: Physical rehabilitation needs have been growing significantly in absolute terms, per capita and as a percentage of total YLDs, globally and across countries of varying income level. In absolute terms, growth was higher in lower-income countries, wherein rehabilitation is under-resourced.
Gravity-Assist Might be a Solution to Save Earth from Global Warming
Sohrab Rahvar
Subject: Earth Sciences, Space Science Keywords: Dynamics; Solar System; Global Warming
Online: 20 April 2022 (08:53:55 CEST)
Global warming is one of the problems of human civilization, and decarbonization policy is the main solution to this problem. In this work, we propose an alternative method of using gravity assists by asteroids to increase the orbital distance of the Earth from the Sun. We can manipulate the orbits of asteroids in the asteroid belt by solar sailing and propulsion engines to guide them towards the Mars orbit, where a gravitational scattering can put asteroids in a favorable direction to provide an energy-loss scattering from the Earth. The result would be an increase in the orbital distance of the Earth and consequently a cooling of the Earth's temperature. We calculate the increase in the orbital distance of the Earth for each scattering and investigate the feasibility of performing this project.
Sustainable Global Supply Chain: A Systematic Literature Review
Shaoor Ahmed, Nadia Sohail
Subject: Social Sciences, Business And Administrative Sciences Keywords: sustainable; global; supply chain; management
Online: 9 August 2021 (15:17:15 CEST)
This study aims to summarize the literature on sustainable global supply chain management so that researchers and practitioners can see the trends in the area in a single place. A systematic literature review (SLR), or content analysis, is used as the methodology; studies published within the time frame of 2010 to 2020 are included. Dimensions used to analyze each article include: year of publication, research methodology, data collection type, unit of analysis, industry, country in which data was collected, respondent types, the theory used in the study, the dimensions of sustainability, and finally the purpose of the study, whether to test an existing theory or to build a new one. We found that sustainable global supply chain management is an emerging field and there is potential in the area for researchers to explore.
A Sufficient Condition for the Almost Global Stability of Nonlinear Switched Systems with Average Dwell Time
Ferruh İlhan, Ozkan Karabacak, Rafael Wisniewski
Subject: Engineering, Control & Systems Engineering Keywords: almost global stability; switched systems
A sufficient condition for the almost global stability of nonlinear switched systems under an average dwell time restriction is obtained. This condition is derived leaning upon the existence of multiple Lyapunov densities, which are associated to subsystems and satisfy some compatibility conditions. An upper bound for the average dwell time that ensures almost global stability is obtained.
Global Asbestos Disaster
Sugio Furuya, Odgerel Chimed-Ochir, Ken Takahashi, Annette David, Jukka Takala
Subject: Medicine & Pharmacology, Other Keywords: asbestos; ban; global estimates, costs
Background. Asbestos has been used for thousands of years, but on a large industrial scale for about 100–150 years. The first identified disease was asbestosis, a type of incurable pneumoconiosis caused by asbestos dust and fibres. The latest estimate of the global number of asbestosis deaths, from the Global Burden of Disease estimate 2016, is 3495. Asbestos-caused cancer was identified in the late 1930s, but despite today's overwhelming evidence of the strong carcinogenicity of all asbestos types, including chrysotile, asbestos is still widely used globally. Various estimates have been made over time, including those of the WHO and ILO of 107,000–112,000 deaths. Present estimates are radically higher. This special edition of the Journal summarizes key aspects of the past and present of the asbestos problem globally. Methods. Documentation on milestones of asbestos-related diseases (ARDs), their recognition, reporting, compensation and prevention efforts, was examined, in particular from the regulatory and prevention point of view. Estimated global numbers of incidence and mortality of ARDs were considered. Results. Asbestos causes an estimated 257,000 deaths (243,223–270,635) annually according to the latest knowledge. Work-related exposures are responsible for 235,000 (222,322–247,363) of those deaths. In the European Union, USA and other high-income economies (WHO regional classification), the direct costs for sickness, early retirement and death, including production losses, have been estimated to be very high; in the Western European countries and EU, the equivalent of 0.70% of GDP, or 114.9*10^9 USD. Intangible costs could be much higher. When applying the Value of Statistical Life (VSL) of 4 million EUR per cancer death used by the European Commission, we arrived at 410*10^9 USD, while the human suffering and loss of life is impossible to quantify. The numbers and costs are increasing in practically every country and region in the world. Asbestos has been banned in 55 countries but is still widely used today, with some 2,030,000 tons consumed annually according to the latest available consumption data. Every 20 tons of asbestos produced and consumed kills a person somewhere in the world. Buying 1 kg of asbestos in powder form, e.g. in Asia, costs some 0.38 USD, and 20 tons would cost 7600 USD in such a retail market. Conclusions. Present efforts to eliminate this man-made problem (in fact an epidemiological disaster) and to prevent the exposures leading to it are insufficient in most countries of the world. Programmes and policies on the elimination of all kinds of asbestos use (that is, banning new asbestos use and tightly controlling and managing existing structures containing asbestos) need revision and resources. The ILO/WHO Joint Programme for the Elimination of Asbestos-related Diseases needs to be revitalized. Exposure limits do not protect properly against cancer, but for asbestos removal and equivalent exposure-elimination work we propose a limit value of 1000 fibres/m3.
Monkeypox and its Possible Sexual Transmission: Where are we now with its evidence?
Ranjit Sah, Abdelaziz Abdelaal, Abdullah Reda, Basant E. Katamesh, Emery Manirambona, Hanaa Abdelmonem, Rachana Mehta, Ali A. Rabaan, Saad Alhumaid, Wadha Alfouzan, Amer I. Alomar, Faryal Khamis, Fadwa S. Alofi, Maha H. Aljohani, Amal H. Alfaraj, Mubarak Alfaresi, Jumana M. Al-Jishi, Jameela Al-Salman, Ahlam Alynbiawi, Mohammed S. Almogbel, Alfonso J. Rodriguez-Morales
Subject: Life Sciences, Virology Keywords: sexual transmission; monkeypox; emerging; global; epidemic
Online: 21 July 2022 (08:08:40 CEST)
Monkeypox is a rare disease whose incidence has been rising in different countries since the first case in the UK was diagnosed on May 6, 2022, by the United Kingdom (UK) Health Security Agency. Since then, more than 12,500 cases have been identified in over 68 countries as of July 18, 2022. In endemic areas, the monkeypox virus (MPXV) is commonly transmitted through zoonosis, while in non-endemic regions it spreads through human-to-human transmission. Symptoms can include flu-like symptoms and rash or sores on the hands, feet, genitalia, or anus. In addition, people who did not receive the smallpox vaccine are more liable to be affected than others. The exact pathogenesis and mechanisms are still unclear; however, most identified cases are reported in men who have sex with men (MSM). According to the CDC, transmission can happen through any sexual or non-sexual contact with an infected person. However, a recent pooled meta-analysis reported that sexual contact is involved in more than 91% of cases. Also, it is the first time that semen analysis for many patients has shown positive monkeypox virus DNA. Therefore, in this review, we describe transmission routes for MPXV, focusing mainly on potential sexual transmission and associated sexually transmitted infections. We also highlight the preventive measures that can limit the spread of the disease in this regard.
How Accurate are WorldPop-Global-Unconstrained Gridded Population Data at the Cell-Level?: A Simulation Analysis in Urban Namibia
Dana R. Thomson, Douglas R. Leasure, Tomas Bird, Nikos Tzavidis, Andrew J. Tatem
Subject: Social Sciences, Geography Keywords: LMIC; Global South; indicator; Random Forest
Disaggregated population counts are needed to calculate health, economic, and development indicators in Low- and Middle-Income Countries (LMICs), especially in settings of rapid urbanisation. Censuses are often outdated and inaccurate in LMIC settings, and rarely disaggregated at fine geographic scale. Modelled gridded population datasets derived from census data have become widely used by development researchers and practitioners. These datasets are evaluated for accuracy at the spatial scale of the input data, which is often much coarser (e.g. administrative units) than the neighbourhood or cell-level scale of many applications. We simulate a realistic "true" 2016 population in Khomas, Namibia, a majority-urban region, and introduce realistic levels of outdatedness (over 15 years) and inaccuracy in slum, non-slum, and rural areas. We aggregate these simulated realistic populations by census and administrative boundaries (to mimic census data), and generate 32 gridded population datasets that are typical of a LMIC setting using the WorldPop-Global-Unconstrained gridded population approach. We evaluate the cell-level accuracy of these simulated datasets using the original "true" population as a reference. In our simulation, we found large cell-level errors, particularly in slum cells, driven by the use of average population densities in large areal units to determine cell-level population densities. Age, accuracy, and aggregation of the input data also played a role in these errors. We suggest incorporating finer-scale training data into gridded population models generally, and WorldPop-Global-Unconstrained in particular (e.g., from routine household surveys or slum community population counts), and using new building footprint datasets as a covariate to improve cell-level accuracy. It is important to measure the accuracy of gridded population datasets at spatial scales more consistent with how the data are being applied, especially if they are to be used for monitoring key development indicators at neighbourhood scales with relevance to small, dense, deprived areas within larger administrative units.
Global in Time Existence of Strong Solution to 3D Navier-Stokes Equations
Abdelkerim Chaabani
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Navier Stokes; strong solution; global existence
Online: 24 March 2020 (03:33:43 CET)
The purpose of this paper is to bring to light a method through which the global-in-time existence of a strong solution to the 3D periodic Navier-Stokes equations follows for arbitrarily large initial data in \(H^1\). The method consists of subdividing the time interval of existence into smaller, carefully chosen sub-intervals. These sub-intervals are chosen based on the hypothesis that for any wavenumber m, one can find an interval of time on which the energy quantized in the low-frequency components (up to m) of the solution u is less than the energy quantized in the high-frequency components (down to m), or otherwise the opposite. We then associate a suitable number m to each of the intervals, and we prove that the norm \(\|u(t)\|_{H^1}\) is bounded in both mentioned cases. The process can be continued until reaching the maximal time of existence \(T_{max}\), which yields the global-in-time existence of the strong solution.
Working Paper REVIEW
Global Learning for Sustainable Development: A Systematic Review
Birgitta Nordén, Helen Avery
Subject: Earth Sciences, Environmental Sciences Keywords: global learning; global learning for sustainable development; South/North perspectives; sustainability; sustainable development; education for sustainable development
Despite continued efforts by educators, UN declarations and numerous international agreements, progress is still limited in handling major global challenges such as ecosystem collapse, accelerating climate change, poverty and inequity. The capacity to collaborate globally on addressing these issues remains weak. This systematic review of research on global learning for sustainable development (GLSD) aims to clarify the diverse directions research on GLSD has taken, to present the historical development of the research area, and to highlight emerging research issues. The review summarises key findings of the English-language literature in the period 1994-2020 identified with the search terms "global learning" and "sustainable development", sustainability or GLSD, respectively. The review documented a gradually growing knowledge base, mostly authored by scholars located in the global North. Conclusions point to what we might achieve if we could learn from one another in new ways, moving beyond Northern-centric paradigms. It is also time to re-evaluate core assumptions that underlie education for sustainable development more generally, such as a narrow focus on formal learning institutions. The review provides a benchmark for future reviews of research on GLSD, reveals the emerging transformative structure of this transdisciplinary field, and offers reference points for further research.
Biochemical and Behavioural Alterations Induced by Arsenic and Temperature in Hediste Diversicolor of Different Growth Stages
Pedro Valente, Paulo Cardoso, Valéria Giménez, M.S.S. Silva, Carina Sá, Etelvina Figueira, Adília Pires
Subject: Biology, Ecology Keywords: Arsenic; global warming; invertebrates; behavior; oxidative stress
Contamination with arsenic, a toxic metalloid, is increasing in the marine environment. Additionally, global warming can alter metalloid toxicity. Polychaetes are key species in marine environments. By mobilizing sediments, they play vital roles in nutrient and element (including contaminant) cycles. Most studies with marine invertebrates have focused on the effects of metalloids on either adults or larvae. Here we report the effects of temperature increase and arsenic contamination on the polychaete Hediste diversicolor at different growth stages and water temperatures. Feeding activity and biochemical responses (neurotransmission, indicators of cell damage, antioxidant and biotransformation enzymes, and metabolic capacity) were evaluated. Temperature rise combined with As imposed alterations in feeding activity and biochemical endpoints at different growth stages. Small organisms had their antioxidant enzymes increased, avoiding lipid damage. However, larger organisms were the most affected class, due to inhibition of superoxide dismutase, which resulted in protein damage. Oxidative damage was observed in smaller and larger organisms exposed to As and 21 °C, demonstrating higher sensitivity to the combination of temperature rise and As. The observed alterations may have ecological consequences, affecting the cycling of nutrients, sediment oxygenation and the food chains that depend on the bioturbation of this polychaete.
Preprint BRIEF REPORT | doi:10.20944/preprints202208.0399.v1
Almost Global Pullback Attraction In Non-Autonomous Systems
Özkan Karabacak
Subject: Mathematics & Computer Science, Other Keywords: pullback attractors; nonautonomous systems; almost global stability
This short report contains a result that characterizes the almost global pullback attractor of discrete-time non-autonomous systems. Analogously to the multiple Lyapunov functions approach for switched systems, we show here that the existence of multiple Lyapunov densities implies pullback convergence of almost all initial states to the origin for a discrete-time non-autonomous system.
Preprint CASE REPORT | doi:10.20944/preprints202208.0155.v1
Transient Global Amnesia, an Uncommon Diagnosis of Exclusion
Mohamed Sheikh Hasan, Nor Osman Sidow, Nor Adam Mohamed
Subject: Medicine & Pharmacology, Clinical Neurology Keywords: Anterograde Amnesia; repetitive questioning; Transient global amnesia
Transient global amnesia (TGA) is an uncommon clinical syndrome characterized by a loss of short-term memory and disorientation that resolves within twenty-four hours. The etiology is unknown, and the diagnosis is made by exclusion of other possible etiologies that may cause similar patterns and by the reversibility of the condition in less than 24 hours. Here we report a 60-year-old male who presented with sudden onset of disorientation and short-term memory loss early in the morning while at home, and who repeatedly asked where he was and what had happened. He had no history of significant medical or psychiatric disease. There was no history of previous similar episodes. He had no recent history of sleep problems, head trauma, substance abuse, or loss of consciousness. He had no history of seizure disorder or migraine. A neurologic examination revealed a normal state of wakefulness with mild disorientation and short-term memory impairment. He had a score of 18/30 on the mini-mental state examination (which later returned to his normal baseline within 24 hours). Extensive lab investigations did not show any abnormal findings. Brain MRI did not show any acute cerebral pathology. The EEG was negative for any abnormal cerebral activity. His memory improved and returned to normal baseline over the course of 20 hours from onset. After exclusion of potential causes, and once the patient had returned to a normal state of memory, the diagnosis of transient global amnesia was made. At the follow-up visit, the patient was in a state of normal function without a recurrence of memory impairment. We present this interesting case because TGA is a diagnosis of exclusion and is important to keep in mind when evaluating a patient with acute onset of short-term memory impairment, especially when etiological investigation reveals no potential cause.
Working Paper ARTICLE
Grain Quality of Wheat Genotypes Under Heat Stress
Soraya Mahdavi, Ahmad Arzani, S.A.M. Mirmohammady Maibody, Mahdi Kadivar
Keywords: wheat; global warming; flour quality; thermal stress
Heat stress during the grain-filling period is the main abiotic stress factor limiting grain yield and quality in wheat (Triticum aestivum L.). In this study, 64 wheat genotypes were exposed to heat stress during reproduction caused by delayed sowing in two growing seasons. Grain yield, 1000-grain weight (GW), grain hardness (GH), and grain-quality related traits were investigated using wholemeal flour. Heat stress caused a significant decrease in GW through reducing starch content (SC) and a non-compensating rise in protein content (PC), and thereby resulted in lower yield. In addition, significant increases in flour water absorption (WA), Zeleny sedimentation volume (ZT), ash content (AC), lipid content (LC), loaf volume (LV), wet gluten content (WG), dry gluten content (DG), gluten index (GI), and amylopectin content (APC) were found following heat stress. In contrast, decreases in grain moisture content (MC) and amylose content (AMC) induced by heat stress were observed. The heat-tolerant genotypes were superior in grain yield, GW, SC, AMC, and MC, while the sensitive genotypes had higher PC, LV, GI, and APC. A group of wheat genotypes characterized by higher yield, AMC, GW, and SC as well as lower PC, WA, GH, ZT, and LV was found to be the most heat tolerant by principal component analysis. A decrease in the ratio of carbohydrates to proteins induced by heat stress, and lower protein content in normally grown wheat genotypes, were observed. Therefore, lighter and smaller grains produce a smaller starchy endosperm with lower quality (less amylose) and higher grain protein content under heat stress compared to normal conditions. Heat stress caused by delayed sowing improves some baking-quality related traits. Whether this improvement in grain quality attributes will translate into better human health outcomes requires further investigation.
Preprint HYPOTHESIS | doi:10.20944/preprints202108.0114.v1
Serendipity and the Brain, or How We Make Great Discoveries
Daniel Kondziella
Subject: Medicine & Pharmacology, Behavioral Neuroscience Keywords: concept cells; consciousness; Global Neuronal Workspace Hypothesis
Serendipity favors the prepared mind, but how does the brain make that lucky find? Analyzing the cerebral mechanisms behind an exceptional (albeit trivial) discovery, the author suggests that a combination of 'concept cells' and the Global Neuronal Workspace Hypothesis could explain how we make great discoveries.
On Some Damped 2 Body Problems
Alain Haraux
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: gravitation; singular potential; global solutions; spiraling orbit
The usual equation for both the motion of a single planet around the sun and that of electrons in the deterministic Rutherford-Bohr atomic model is conservative with a singular potential at the origin. When a dissipation is added, new phenomena appear; these were investigated thoroughly by R. Ortega and his co-authors between 2014 and 2017. In particular, all solutions are bounded and tend to $0$ for $t$ large, some of them with asymptotically spiraling, exponentially fast convergence to the center. We provide explicit estimates for the bounds in the general case, which we refine under specific restrictions on the initial state, and we give a formal calculation which could be used to determine practically some special asymptotically spiraling orbits. Besides, a related model with exponentially damped central charge or mass gives some explicit exponentially decaying solutions which might help future investigations. An atomic contraction hypothesis, related to the asymptotic dying off of solutions proven for the dissipative model, might give a solution to some intriguing phenomena observed in paleontology, familiar electrical devices and high-scale cosmology.
On a Linearly Damped 2 Body Problem
Subject: Physical Sciences, Mathematical Physics Keywords: gravitation; singular potential; global solutions; spiraling orbit
Online: 16 December 2020 (10:09:41 CET)
The usual equation for both motions of a single planet around the sun and electrons in the deterministic Rutherford-Bohr atomic model is conservative with a singular potential at the origin. When a dissipation is added, new phenomena appear. It is shown that whenever the momentum is not zero, the moving particle does not reach the center in finite time and its displacement does not blow up either, even in the classical context where arbitrarily large velocities are allowed. Moreover, we prove that all bounded solutions tend to $0$ for $t$ large, and some formal calculations suggest the existence of special orbits with an asymptotically spiraling exponentially fast convergence to the center.
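As a rough numerical illustration of the dissipative dynamics these two abstracts describe, the sketch below integrates a linearly damped Kepler-type equation $\ddot u + c\dot u + u/|u|^3 = 0$; the damping coefficient, time span, and initial orbit are illustrative assumptions, not values from the papers.

```python
# A minimal sketch (assumed parameters): integrate the linearly damped
# 2-body equation u'' + c u' + u/|u|^3 = 0 and watch the orbit radius
# shrink as the trajectory spirals toward the center.
import numpy as np
from scipy.integrate import solve_ivp

c = 0.1  # assumed linear damping coefficient

def rhs(t, s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -c * vx - x / r3, -c * vy - y / r3]

# Start on a roughly circular orbit of radius 1.
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 1.0], rtol=1e-9, atol=1e-12)

r = np.hypot(sol.y[0], sol.y[1])
print(f"radius: initial {r[0]:.3f}, final {r[-1]:.3f}")  # radius decays over time
```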
COVID-19 Crisis and Global Healthcare Delivery: Lessons to Be Learned
Chris Oyewole Durojaiye, Robin Morgan
Subject: Medicine & Pharmacology, Other Keywords: COVID-19; Pandemic; Global health; Health inequalities
The COVID-19 crisis has brought unprecedented strain on healthcare systems around the world. It has perhaps taught us some key lessons that are worth considering and addressing to help build more sustainable health systems as well as improve our ability to combat future epidemics.
Assessing Global Frailty Scores: Development of a Global Burden of Disease-Frailty Index (GBD-FI)
Mark O'Donovan, Duygu Sezgin, Zubair Kabir, Aaron Liew, Rónán O'Caoimh
Subject: Keywords: Frailty; Public Health; Global Burden of Disease
Frailty is an important age-associated risk-state. Despite this, many countries lack population estimates, and large heterogeneity exists amongst studies. The Global Burden of Disease (GBD) study provides comparable high-quality population-level data for 195 countries and territories. Frailty has never been measured in the GBD studies. This analysis applies the deficit accumulation model to construct a novel frailty index (FI) using the GBD 2017 dataset. Standard FI criteria were applied to all GBD categories such that selected items were health-related, age-correlated, sufficiently prevalent, did not saturate at an early age, had little redundancy/duplication, covered a range of systems, were plausible, and were available serially for the same population. From all 554 GBD items, 36 were selected, including 26 non-communicable diseases, 3 metabolic risks, 3 biological impairments, infectious diarrheal diseases, protein-energy malnutrition, injurious falls, and low physical activity. Variable face validity was displayed against a selection of established FIs. The mean GBD-FI score for the global population aged ≥70 years in 2017 was 0.16; scores were higher in females than males (0.16 vs 0.15, respectively). Deficits accumulated with age at an estimated rate of 0.026 per year. Adding the mean GBD-FI scores to a regression model including country-level variables for demographics (proportion ≥85 years, proportion female), healthcare quality (HAQ index), and development (SDI) increased the adjusted r² value from 27.0% to 39.6% (p<0.001) for predicting country-level death rates from non-communicable diseases, suggesting that the GBD-FI is a useful predictor of mortality. Further analysis is required to compare the reliability and predictive validity of the GBD-FI with existing frailty tools.
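To make the deficit-accumulation construction concrete, here is a minimal sketch: a frailty index is simply the average score of the deficits considered. The items and scores below are invented for illustration and are not the 36 GBD items used in the study.

```python
# A minimal sketch of the deficit-accumulation model: the frailty index (FI)
# is the average score of the deficits considered. Items and values below
# are illustrative inventions, not the study's GBD items.
def frailty_index(deficits):
    """Each deficit is scored in [0, 1]: 0 = absent, 1 = fully present."""
    return sum(deficits.values()) / len(deficits)

person = {
    "diabetes": 1.0,
    "hearing_loss": 0.5,             # partial deficits are allowed
    "injurious_falls": 0.0,
    "low_physical_activity": 1.0,
    "protein_energy_malnutrition": 0.0,
}
print(f"FI = {frailty_index(person):.2f}")  # 2.5 / 5 = 0.50
```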
Stability and Boundedness Properties of a Rational Exponential Difference Equation
J. Leo Amalraj, M. Maria Susai Manuel, Adem Kılıçman, D. S. Dilip
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: boundedness; equilibrium; global asymptotic stability; Rational Equation
This article aims to discuss the stability and boundedness character of the solutions of the rational equation of the form $$y_{t+1}=\frac{\nu\epsilon^{-y_t}+\delta\epsilon^{-y_{t-1}}}{\mu+\nu y_t+\delta y_{t-1}},\quad t\in N(0).$$ Here, $\epsilon>1$, $\nu,\delta,\mu\in (0,\infty)$, $y_0, y_1$ are arbitrary non-negative reals, and $N(a)=\{a,a+1,a+2,\cdots\}$. Relevant examples are provided to validate our results; their accuracy is checked numerically using MATLAB.
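As a quick numerical companion (all parameter values below are illustrative assumptions, not cases from the article), one can iterate the recurrence directly and watch the orbit settle:

```python
# Iterate y_{t+1} = (nu*eps^{-y_t} + delta*eps^{-y_{t-1}})
#                   / (mu + nu*y_t + delta*y_{t-1})
eps, nu, delta, mu = 2.0, 1.0, 1.0, 1.0  # assumed values with eps > 1
y_prev, y_curr = 0.5, 1.5                # arbitrary non-negative initial values

for t in range(50):
    y_next = (nu * eps ** (-y_curr) + delta * eps ** (-y_prev)) \
             / (mu + nu * y_curr + delta * y_prev)
    y_prev, y_curr = y_curr, y_next

print(f"y_50 ~ {y_curr:.6f}")  # for these values the orbit appears to settle near 0.6
```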
Securing Security in Education: The Role of Public Theology and a Case Study in Global Jihadism
Terence Lovat
Subject: Arts & Humanities, Religious Studies Keywords: Security; Education; Public Theology; Islam; Global Jihadism
The article mounts an argument for public theology as an appropriate, if not vital, adjunct to contemporary education's treatment of security issues, in light of current world events with indisputable religious and arguably quasi-theological foundations. It will briefly expound on the history of thought that has marginalized theology as a public discipline and then move to justify the counter view that the discipline, at least in the form of public theology, has the potential to address matters of such public concern in a unique and helpful way. The article will culminate with an exploration of Global Jihadism as a case study that illustrates the usefulness of public theology in understanding it better, allowing for a response with the potential to be more informed and security-assured than is commonly achieved.
Predictive Optimal Control of Hybrid Line Haul Trucks
Sourav Pramanik, Sohel Anwar
Subject: Engineering, Control & Systems Engineering Keywords: dynamic program; fuel economy; global optimization; predictive control
Online: 17 October 2022 (03:40:06 CEST)
Fuel consumption, subsequent emissions, and safe operation of class 8 vehicles are of prime importance today. It is imperative that the vehicle operate in its true optimal operating region given a variety of constraints such as road grade, load, gear shifts, battery state of charge (for hybrid vehicles), etc. In this paper, a research study is conducted to evaluate the fuel economy and subsequent emission benefits of applying predictive control to a mild hybrid line haul truck. The problem is solved using a combination of dynamic programming with backtracking and model predictive control. The specific fuel-saving features studied in this work are dynamic cruise control, gear shifts, vehicle coasting, and torque management. These features are evaluated predictively, as compared to a reactive behavior, and their predictive behavior is a function of road grade. The results and analysis show significant improvement in fuel savings along with NOx benefits. Of the control features, dynamic (predictive) cruise control and dynamic coasting showed the most benefits, while predictive gear shifts and torque management (by power splitting between battery and engine) for this architecture did not show fuel benefits but provided other benefits in terms of powertrain efficiency.
Certification of Almost Global Phase Synchronization of Coupled Oscillators
Mahmut Kudeyt, Ayşegül Kıvılcım, Elif Köksal Ersöz, Özkan Karabacak
Subject: Engineering, Control & Systems Engineering Keywords: almost global synchronization, coupled phase oscillators, Lyapunov density
Phase synchronization of weakly coupled limit cycle oscillators is related to the stability of the zero solution of the reduced-order dynamics of phase differences, represented by a system of differential equations on a hypertorus. Using Rantzer's density function, a dual form of the Lyapunov function, we propose a method to certify almost global stability of an equilibrium on a hypertorus. We show that the proposed method can certify robustness of phase synchronization of all-to-all and weakly coupled limit cycle oscillators with respect to disturbances in phases. The method leverages sum-of-squares polynomial optimization to construct the certification function.
A Short Study on Minima Distribution
Loc Nguyen
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: global optimization; minima distribution; particle swarm optimization; PSO
Global optimization is an imperative development of local optimization because many problems in artificial intelligence and machine learning require highly accurate solutions over the entire domain. There are many methods for global optimization, which can be classified into three groups: analytic methods (purely mathematical methods), probabilistic methods, and heuristic methods. Heuristic methods such as particle swarm optimization and ant colony and bee colony algorithms especially attract researchers because of their effective and practical techniques, which are easy to implement in computer programming languages. However, these heuristic methods lack a theoretical mathematical foundation. Fortunately, minima distribution establishes a strict mathematical relationship between the optimized target function and its global minima. In this research, I study minima distribution and apply it to explaining the convergence and convergence speed of optimization algorithms. In particular, weak conditions of convergence and monotonicity within minima distribution are drawn so as to be appropriate to practical optimization methods.
Ethical Issues in AI-enabled Disease Surveillance: Perspectives from Global Health
Ann Borda, Andreea Molnar, Cristina Neesham, Patty Kostkova
Subject: Social Sciences, Other Keywords: AI; disease surveillance; pandemics; global public health; ethics
Online: 18 February 2022 (10:36:04 CET)
Infectious diseases, as COVID-19 is proving, pose a global health threat in an interconnected world. In the last 20 years, emerging infectious diseases such as SARS, MERS, H1N1, Ebola, Zika and now COVID-19 have been impacting global health defences, flourishing aggressively amid the rise of global travel, urbanization, climate change and ecological degradation. In parallel, this extraordinary episode in global human health highlights the potential for artificial intelligence (AI)-enabled disease surveillance to collect and analyse vast amounts of unstructured and real-time data to inform epidemiological and public health emergency responses. The uses of AI in these dynamic environments are increasingly complex, challenging the potential for autonomous human decisions. In this context, our study of qualitative perspectives will consider a responsible AI framework and explore its potential application to disease surveillance in a global health context. Thus far, there is a gap in the literature in considering these multiple and interconnected levels of disease surveillance and emergency health management through the lens of a responsible AI framework.
Synthesis of Strategic Games With Multiple Pre-set Nash Equilibria - An Artificial Inference Approach Using Fuzzy ASA
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Nash equilibria; Mechanism design; Artificial inference; Global learning
This paper presents an extension of the results obtained in previous work concerning the application of global optimization techniques to the design of finite strategic games with mixed strategies. In that publication, the Fuzzy ASA global optimization method was applied to many examples of synthesis of strategic games with one previously specified Nash equilibrium, evidencing its ability to find payoff functions whose respective games present those equilibria, possibly among others. That is to say, it was shown that it is possible to establish in advance a Nash equilibrium for a generic finite strategic game and to compute payoff functions that make it feasible to reach the chosen equilibrium, allowing players to converge to the desired profile, considering that it is an equilibrium of the game as well. Going beyond this state of affairs, the present article shows that it is possible to "impose" multiple Nash equilibria on finite strategic games by following the same reasoning as before, but with a fundamental change: using the same fundamental theorem of Richard McKelvey, modifying the originally prescribed objective function and globally minimizing it. The proposed method, in principle, is able to find payoff functions that result in games featuring an arbitrary number of Nash equilibria, paving the way to a substantial number of potential applications.
Improving the Accuracy of Gridded Population Estimates in Cities and Slums to Monitor SDG 11: Evidence from a Simulation Study in Namibia
Dana R. Thomson, Forrest R. Stevens, Robert Chen, Gregory Yetman, Alessandro Sorichetta, Andrea E. Gaughan
Subject: Social Sciences, Accounting Keywords: LMIC; urban; deprivation; informal settlement; poverty; Global South
People living in slums and other deprived areas in low- and middle-income country (LMIC) cities are under-represented in censuses, and subsequently in "top-down" gridded population estimates. Modelled gridded population data are a unique source of disaggregated population information for calculating local development indicators such as the Sustainable Development Goals (SDGs). This study evaluates if, and how, the WorldPop-Global (WPG) Unconstrained and Constrained "top-down" datasets might be improved in a simulated realistic LMIC urban population by incorporating slum profile population counts into model training. We found that the WPG-Unconstrained model, with or without slum training data, grossly underestimated population in urban deprived areas while grossly overestimating population in rural areas. SDG indicator 11.1.1, the percent of population living in slums, for example, was estimated to be 20% or less compared to a "true" value of 29.5%. The WPG-Constrained model, which included building auxiliary datasets, far more accurately estimated the population in all grid cells (including rural areas), and the inclusion of slum training data further improved estimates, such that SDG 11.1.1 was estimated at 27.1% and 27.0%, respectively. Inclusion of building metrics and slum training data in "top-down" gridded population models can substantially improve grid cell-level accuracy in both urban and rural areas.
Asymptomatic COVID-19 Carriers Education App (1)
Peter Chew
Subject: Keywords: COVID-19, Education App, Biochemist, Global issue analyst
Online: 9 June 2021 (22:07:37 CEST)
Background: The World Health Organization (WHO) said the situation in India was a "devastating reminder" of what the coronavirus could do, as COVID-19 cases suddenly spiked across the country. Union Health Minister Harsh Vardhan said that one of the major reasons for the spike in coronavirus cases was people not following COVID-appropriate behaviour, and noted that the sudden rise in cases was largely event-driven, comprising local body elections, grand weddings, and farmers' protests. These events may have enabled asymptomatic COVID-19 carriers to spread the virus widely. Malaysia is also facing a surge in COVID-19, possibly due to spread by asymptomatic carriers. Therefore, it is important to develop an application that can publicize information on asymptomatic COVID-19 carriers. The purpose of this application is to provide sufficient information and scientific research evidence to ensure that prevention strategies for asymptomatic COVID-19 carriers are implemented. The app is also open to anyone who uses it to educate others, so that information can be shared more quickly and other countries can be prevented from becoming a "second India or Malaysia". Method: The homepage of the app shows that asymptomatic COVID-19 carriers may have a lower viral load, the same viral load, or a higher viral load than symptomatic carriers. When the user presses each category, they see information and research evidence about that category. The app also presents the evidence that on January 13, 2021, Malaysian Health Department Director Dr Noor Hisham Abdullah instructed that only close contacts with symptoms be tested, and that the Malaysian Medical Association (MMA) urged the Health Ministry to urgently improve the management of mild COVID-19 cases and revert to its policy of testing all close contacts. In addition, the app "App raise public awareness of the importance of COVID-19 vaccination (version 4)" [Peter Chew, 2021] shows intuitively that countries with high vaccination rates can solve the problem of asymptomatic transmission. Result: The application displays information and research evidence indicating that asymptomatic COVID-19 carriers are the main key to COVID-19 outbreaks. Some countries use symptom-based prevention strategies, testing only the symptomatic close contacts of COVID-19 patients, on the assumption that asymptomatic carriers have only a low viral load and a low transmission rate; this assumption is wrong, as some asymptomatic carriers have high viral loads, and the accumulation of such carriers is a main cause of outbreaks. Conclusion: Three apps have been developed to educate the public about the importance of asymptomatic COVID-19 carriers. The asymptomatic COVID-19 carrier education app (1) provides information and research evidence to educate citizens of any country and to ensure that preventive strategies for asymptomatic carriers are implemented to prevent a national outbreak. The app "Game Base Learning to Prevent Infection from COVID-19 (version 3)" [Peter Chew, 2020] lets anyone see intuitively that, when the second COVID-19 wave arrived, the accumulation of large numbers of asymptomatic carriers in some countries led to high infection rates; this is what is happening in India now. The vaccination-awareness app (version 4) shows that countries with high vaccination rates can solve the problem of asymptomatic transmission; this is what is happening in Israel now.
"Domains of Deprivation Framework" for Mapping Slums, Informal Settlements, and Other Deprived Areas in LMICs to Improve Urban Planning and Policy: A Scoping Review
Ángela Abascal, Natalie Rothwell, Adenike Shonowo, Dana R. Thomson, Peter Elias, Helen Elsey, Godwin Yeboah, Monika Kuffer
Subject: Social Sciences, Accounting Keywords: global south; indicators; urban; city; poverty; neighborhood-level
The majority of urban inhabitants in low- and middle-income country (LMIC) cities live in deprived urban areas. However, statistics and data (e.g., local monitoring of the Sustainable Development Goals - SDGs) are hindered by the unavailability of spatial data at metropolitan, city and sub-city scales. Deprivation is a complex and multidimensional concept, which has been captured in existing literature with a strong focus on household-level deprivation while giving limited attention to area-level deprivation. Within this scoping review, we build on existing literature on household- as well as area-level deprivation frameworks to arrive at a combined understanding of how urban deprivation is defined, with a focus on LMIC cities. The scoping review was enriched with local stakeholder workshops in LMIC cities to arrive at our Domains of Deprivation framework, splitting deprivation into three different scales and nine domains: (1) Socio-Economic Status and (2) Housing (Household scale); (3) Social Hazards & Assets, (4) Physical Hazards & Assets, (5) Unplanned Urbanization and (6) Contamination (Within Area scale); and (7) Infrastructure, (8) Facilities & Services and (9) City Governance (Area Connect scale). The Domains of Deprivation framework provides clear guidance for collecting data on various aspects of deprivation, while providing the flexibility to decide at city level which indicators are most relevant to explain individual domains. The framework provides a conceptual and operational base for the Integrated Deprived Area Mapping System (IDEAMAPS) Project for the creation of a data ecosystem, which facilitates the production of routine, accurate maps of deprived "slum" areas at scale across cities in LMICs. The Domains of Deprivation Framework is designed to support diverse health, poverty, and development initiatives globally to characterize and address deprivation in LMIC cities.
A 'Local-Global' model for Seasonal Diseases: Influenza Subtypes Analysis Case Study
Gal Almogy
Subject: Life Sciences, Biochemistry Keywords: Influenza; epidemiology; spatiotemporal; seasonality; global; transmission; infectious disease
Influenza epidemics in temperate regions display dynamics that are characterized by pronounced seasonal peaks during the winter. The general lack of influenza cases during the off-season may result from the virus physically disappearing at the end of the season, in which case it must be imported annually. Alternatively, it may result from persistent asymptomatic carriers or unnoticed local transmission chains that develop into local epidemics as conditions become conducive. Here I attempt to understand these differing explanations by analyzing the global distribution of the four major subtypes that comprise influenza over a period of 18 years based on FluNet data, the surveillance network and database compiled by the WHO, and the NCBI influenza data resource, a repository of relevant genetic information. Examining the annual proportion of each subtype, I find considerable variations in subtype annual proportions between the regions. Moreover, I find that seasonal influenza subtypes can remain confined to specific temperate regions, without showing measurable global presence. These results indicate that although largely undetected during the off-season, influenza is likely to persist locally, and imply a 'local-global' model where annual influenza epidemics are a mixture of local strains undergoing reactivation together with an influx of global variants.
A general framework of particle swarm optimization
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: global optimization, particle swarm optimization (PSO), exploration, exploitation
Particle swarm optimization (PSO) is an effective algorithm for solving optimization problems in cases where the derivative of the target function does not exist or is difficult to determine. Because PSO has many parameters and variants, I propose a general framework of PSO, called GPSO, which aggregates important parameters and generalizes important variants so that researchers can customize PSO easily. Moreover, the two main properties of PSO are exploration and exploitation. The exploration property aims to avoid premature convergence so as to reach the global optimal solution, whereas the exploitation property aims to motivate PSO to converge as fast as possible. These two aspects are equally important; therefore, GPSO also aims to balance exploration and exploitation. GPSO is expected to support users in tuning parameters, not only to avoid premature convergence but also to achieve fast convergence.
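For readers unfamiliar with the baseline that such a framework generalizes, a minimal canonical PSO (inertia-weight form) looks roughly like the sketch below; the coefficients, bounds, and test function are illustrative assumptions, not the GPSO parameterization itself.

```python
# Minimal inertia-weight PSO sketch: each particle is pulled toward its own
# best position (cognitive term) and the swarm's best (social term).
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            val = f(x)
            if val < pval[i]:
                pbest[i], pval[i] = x[:], val
                if val < gval:
                    gbest, gval = x[:], val
    return gbest, gval

sphere = lambda x: sum(v * v for v in x)
best, val = pso(sphere, dim=3)
print(f"best value ~ {val:.2e}")  # should approach 0 on this convex test function
```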
Study of Essential Sequence Analysis Tools in Bioinformatics: A Brief Overview
Ghafran Ali, Kanza Ashfaq
Subject: Life Sciences, Biochemistry Keywords: Global Alignment; Local Alignment; Heuristic Algorithm; Exhaustive Algorithm
A sequence analysis program is outlined that analyzes and investigates homology between various nucleic acid or protein sequences. The dot matrix technique compares the sequences, and the consensus sequence is obtained by superimposing all the dot matrices on each other. In global alignment, both sequences are aligned from start to end, yielding the best possible alignment over their entire length, whereas local alignment identifies the most similar regions between the two sequences. Global alignment is most appropriate for two closely related sequences of roughly the same length. It may not generate optimal results for divergent or variable-length sequences because it does not recognize highly similar local regions between the two sequences.
COVID-19 Pandemic Burden on Global Economy: A Paradigm Shift
Yusha Araf
Subject: Keywords: COVID-19; Economy; GDP; Global; Impact; Market; Pandemic
Online: 29 May 2020 (12:27:24 CEST)
The pandemic caused by the SARS-CoV-2 virus disrupted the Chinese economy and has expanded to the rest of the world at a rapid pace, affecting at least 215 countries, areas and territories. The advancement of the disease and its economic repercussions are profoundly ambiguous, making it challenging for policymakers to formulate suitable microeconomic and macroeconomic policy responses. The scenarios in this paper illustrate how an outbreak could significantly affect the global economy in the short run. It has been estimated that each additional month of crisis would cost about 2.5-3% of global GDP, and that GDP growth would take a blow of about 3-6%, depending on the country. Scenarios also suggest that GDP can drop by more than 10% and even exceed 15% in some countries. By addressing the economic consequences of COVID-19 in different industries and countries, the paper presents assessments of the likely global economic costs of COVID-19 and the GDP growth of different countries. Economies will be negatively affected because of the high number of jobs at risk. Countries highly dependent on foreign trade are more negatively affected. Given that the disease and its economic influence are highly unpredictable in numerous aspects, the global economy at the moment is the most critically threatened in history.
Hash-Based Hierarchical Caching and Layered Filtering for Interactive Previews in Global Illumination Rendering
Thorsten Roth, Martin Weier, Pablo Bauszat, André Hinkenjann, Yongmin Li
Subject: Mathematics & Computer Science, Other Keywords: global illumination; rendering; filtering; caching; Level-of-Detail
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
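A minimal sketch of the hash-based hierarchical lookup idea follows; the flat dictionary, key layout, and fallback policy here are assumptions for illustration, since the paper's system is built on a linkless octree rather than a plain hash map.

```python
# Sketch: quantize a 3D hit point to an octree-style cell at a given level
# and use the (level, cell) tuple as a constant-time dictionary key; lookups
# fall back to coarser levels when a fine entry is missing.
cache = {}  # maps (level, ix, iy, iz) -> cached diffuse RGB

def cell_key(p, level):
    """Quantize point p (in [0,1)^3) to the integer cell at `level`."""
    n = 1 << level  # 2^level cells per axis
    return (level, int(p[0] * n), int(p[1] * n), int(p[2] * n))

def lookup(p, level):
    """Walk up the hierarchy until a cached value is found (or None)."""
    for lvl in range(level, -1, -1):
        value = cache.get(cell_key(p, lvl))
        if value is not None:
            return value
    return None

cache[cell_key((0.3, 0.7, 0.2), 4)] = [0.8, 0.6, 0.5]  # store diffuse RGB
print(lookup((0.31, 0.69, 0.21), 6))  # falls back to the level-4 entry
```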
Dynamics of a Second-Order System of Nonlinear Difference Equations
Erkan Taşdemir
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equation; stability; global stability; periodicity; eventually periodicity
Online: 31 October 2019 (02:03:14 CET)
In this paper, we investigate the equilibrium points, the stability of the two equilibrium points, convergence to the negative equilibrium point, periodic solutions, and the existence of bounded or unbounded solutions of a system of nonlinear difference equations $$x_{n+1}=x_{n-1}y_n - 1,\qquad y_{n+1}=y_{n-1}x_n - 1,\qquad n=0,1,\ldots,$$ where the initial values are real numbers. Additionally, we present some numerical examples to verify our theoretical results.
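A quick numerical check in the spirit of the paper's examples (the initial values are arbitrary illustrative choices): for the symmetric case $x=y$, the fixed points solve $x = x^2 - 1$, so the negative equilibrium is $(1-\sqrt{5})/2 \approx -0.618$, and the orbit below appears to drift toward it.

```python
# Iterate x_{n+1} = x_{n-1}*y_n - 1 and y_{n+1} = y_{n-1}*x_n - 1
# from arbitrary symmetric initial values and watch the orbit approach
# the negative equilibrium (1 - sqrt(5))/2 ~ -0.618.
x_prev, x_curr = 0.5, -0.5  # x_{-1}, x_0
y_prev, y_curr = 0.5, -0.5  # y_{-1}, y_0

for n in range(30):
    x_next = x_prev * y_curr - 1
    y_next = y_prev * x_curr - 1
    x_prev, x_curr = x_curr, x_next
    y_prev, y_curr = y_curr, y_next

print(f"x_30 ~ {x_curr:.4f}, y_30 ~ {y_curr:.4f}")
```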
The Influence of Intellectual Capital on Economic Progress and Sustainability
Irina Chiriac, Gabriela Ignat, George Ungureanu, Carmen Luiza Costuleanu
Subject: Social Sciences, Economics Keywords: intellectual capital; sustainability; harness; bio-economy; global crisis
Bio-economy is a major area of the strategy that can enable the European Union to achieve growth that is: (i) smart, through the development of knowledge and innovation; and (ii) sustainable, based on a greener economy that is more efficient in resource management. We believe that the progress of the bio-economy cannot be achieved without the harnessing of intellectual capital. Our research aimed to emphasize the benefits of the dynamics of intellectual capital growth on the evolution of the bio-economy. Thus, information published by Eurostat (the European statistical institute) over the years 2011-2018 was used to assess the influence exerted by the harnessing of intellectual capital on sustainability, as well as to report indicators relevant to assessing economic progress and sustainability (renewable waste material, share of renewable energy and energy intensity of the economy). The ultimate goal was the generation of a regression model to see which factor most influences the progress of the bio-economy at the European and Romanian levels. Significant dependency relationships were identified. The results remain robust even after the introduction of certain control variables, such as gross domestic product rate, food production, population growth, urbanization growth and inflation. Our paper sets out to contribute to expanding the specialty literature by highlighting the involvement of intellectual capital as a factor in optimizing sustainability growth and, at a methodological level, by using a multiple regression.
Why Triangular Membership Functions Are So Efficient in F-Transform Applications: A Global Explanation to Supplement the Existing Local One
Olga Kosheleva, Vladik Kreinovich, Thach Ngoc Nguyen
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: F-transform; triangular membership function; optimal global characteristics
The main ideas of F-transform came from representing expert rules. It would therefore be reasonable to expect that the more accurately the membership functions describe human reasoning, the more efficient the corresponding F-transform formulas will be. We know that an adequate description of our reasoning corresponds to complicated membership functions -- however, somewhat surprisingly, the most efficient applications of F-transform use the simplest possible triangular membership functions. There exist some explanations for this phenomenon which are based on the local behavior of the signal. In this paper, we supplement this local explanation with a global one: namely, we prove that triangular membership functions are the only ones that provide an accurate description of appropriate global characteristics of the signal.
On Sliced Spaces: Global Hyperbolicity Revisited
Kyriakos Papadopoulos, Nazli Kurt, Basil K. Papadopoulos
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: sliced spaces; global hyperbolicity; product topology; Alexandrov topology
We give a topological condition for a generic sliced space to be globally hyperbolic, without any hypothesis on the lapse function, shift function and spatial metric.
The Harness of Intellectual Capital for Economic Progress and Sustainability
Irina Chiriac, Gabriela Ignat, George Ungureanu, Dragoş Alexandru Robu, Carmen Luiza Costuleanu
Bio-economy is a major area of the strategy that must enable the European Union to achieve growth that is smart, through the development of knowledge and innovation, and sustainable, based on a greener economy that is more efficient in resource management. We believe that the progress of the bio-economy cannot be achieved without the harnessing of intellectual capital. Our research aimed to emphasize the benefits of the dynamics of intellectual capital growth on the evolution of the bio-economy. The aim of this analysis was to study the link between the Energy Intensity of the Economy (EIE) and a number of factors that can measure intellectual capital, such as the market capitalization of Bitcoin, patent applications listed by the European Patent Office, and turnover from innovation as a proportion of total turnover. The ultimate goal was the generation of a regression model to see which factor most influences the progress of the bio-economy at the European and Romanian levels.
Is Left Ventricular Global Longitudinal Strain by Two-Dimensional Speckle Tracking Echocardiography in Sepsis Cardiomyopathy Ready for Prime Time Use in the ICU?
Venu Madhav Velagapudi, Rahul Pidikiti, Dennis A. Tighe
Subject: Medicine & Pharmacology, Other Keywords: sepsis cardiomyopathy; left ventricular function; global longitudinal strain
Online: 5 November 2018 (02:51:26 CET)
Myocardial deformation imaging (strain imaging) is a technique to directly quantify the extent of myocardial contractility, and it overcomes several of the limitations of ejection fraction. The application of the most commonly used strain imaging method, speckle-tracking echocardiography, to patients with sepsis cardiomyopathy heralds an exciting development for the field. However, the body of evidence and knowledge on the utility, feasibility and prognostic value of left ventricular global longitudinal strain in sepsis cardiomyopathy is still evolving. We conducted a review of the literature on the utility of left ventricular global longitudinal strain in sepsis cardiomyopathy. We discuss the role of left ventricular global longitudinal strain in mortality prediction, and the utility and limitations of the technique in the context of sepsis cardiomyopathy.
Technology Patterns in Nanochemistry Based on GII Indicator
Ahmad Alkhawaldeh
Subject: Chemistry, Analytical Chemistry Keywords: Global Innovation Index; Nanochemistry; Development; GII; and Technology patterns
This paper examines trends in the Global Innovation Index (GII) as a measure of progress in nanochemistry, and provides projections of recent developments in nanochemistry worldwide, using the GII as a predictor for certain Arab countries. The GII is an annual ranking of countries by their ability and performance in innovation, calculated as a simple average of two sub-indexes, the Innovation Input Index (five pillars) and the Innovation Output Index (two pillars). Each pillar represents a trait of creativity and consists of up to five measures, with a weighted-average formula used for ranking. The GII rose from 0.5 in 2008 to 36.3 in 2016. The GII of Arab countries is smaller than the worldwide GII. During the years 2013-2016, the worldwide GII was increasing, while over the same period it declined for Arab countries; this decline could be explained by economic and industrial wars in the Arab region.
Atmospheric CO2 Two Box Model Accurately Tracks 14C and 13C without Requiring the "Revelle Isotopic Exception"
Subject: Earth Sciences, Atmospheric Science Keywords: CO2 turnover time, anthropogenic emissions, CO2 atmospheric flux, global warming
Although the total net CO2 atmospheric flow can be estimated with reasonable accuracy, the contributing gross fluxes between the atmosphere and the earth's surface are poorly understood. This paper presents a method, driven by the objective of simplicity, by which the global outflow and inflow of CO2 between the atmosphere and a globally equivalent "mixing reservoir" can be estimated, using the isotopes 14C and 13C as tracers. It has been asserted that the isotopic carbon in CO2 cannot be directly used as a tracer in flow studies because it is not subject to the Revelle factor; evidence is provided showing that this view is mistaken. The model contains 7 key parameters which are used to create synthetic records of Δ14C and δ13C spanning 200 years or more, including during the period of atmospheric weapons testing and its decay known as the "bomb pulse". By optimising the fit between these computed values and the historical records of δ13C and Δ14C, all seven key parameters are determined. The effective "mixing reservoir" is thereby determined to have a size around six times that of the atmosphere, with global outflux rising from 39.7 GtC/yr in 1750 to 58.9 GtC/yr in 2020, this figure probably not including annually cycled carbon.
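A schematic two-box sketch of this kind of model is given below; all rate constants and reservoir sizes are assumptions chosen only to echo the abstract's roughly six-fold reservoir and ~58.9 GtC/yr outflux, not the paper's fitted parameters.

```python
# Toy two-box carbon model: atmosphere A exchanges with a single "mixing
# reservoir" M about six times its size via first-order fluxes; a pulse
# added to A relaxes as the boxes re-equilibrate. All values assumed.
dt, years = 0.1, 100
A, M = 600.0, 3600.0    # GtC; reservoir ~6x atmosphere, per the abstract
k_out = 58.9 / A        # outflow rate ~ flux / atmospheric stock (assumed)
k_in = k_out * A / M    # return rate chosen so the initial state is steady

A += 100.0  # inject a 100 GtC pulse into the atmosphere
for _ in range(int(years / dt)):
    flux = k_out * A - k_in * M  # net atmosphere-to-reservoir flow
    A -= flux * dt
    M += flux * dt

print(f"after {years} yr: atmosphere {A:.1f} GtC, reservoir {M:.1f} GtC")
```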
Help Me, Symbionts, You're My Only Hope: Approaches to Accelerate Our Understanding of Coral Holobiont Interactions
Colleen Bove, Maria Valadez Ingersoll, Sarah Davies
Subject: Biology, Ecology Keywords: Coral symbiosis; immunity; microbiome; global change; coral holobiont; Symbiodiniaceae
Tropical corals construct the three-dimensional framework for one of the most diverse ecosystems on the planet, providing habitat to a plethora of species across taxa. However, these ecosystem engineers are facing unprecedented challenges, such as increasing disease prevalence and marine heatwaves associated with anthropogenic global change. As a result, major declines in coral cover and health are being observed across the world's oceans, often due to the breakdown of coral-associated symbioses. Here, we review the interactions between the major symbiotic partners of the coral holobiont – the cnidarian host, algae in the family Symbiodiniaceae, and the microbiome – that influence trait variation, including the molecular mechanisms that underlie symbiosis and the resulting physiological benefits of different microbial partnerships. In doing so, we highlight the current framework for the formation and maintenance of cnidarian-Symbiodiniaceae symbiosis, and the role that immunity pathways play in this relationship. We emphasize that understanding these complex interactions is challenging given the vast genetic variation of the cnidarian host and algal symbiont, as well as their highly diverse microbiome, which is also an important player in coral holobiont health. Given the complex interactions between and among symbiotic partners, we propose several research directions and approaches focused on symbiosis model systems and emerging technologies that will broaden our understanding of how these partner interactions may facilitate the prediction of coral holobiont phenotype, especially under rapid environmental change.
Present Climate of Lake Montcortès (Central Pyrenees): Paleoclimatic Relevance and Insights on Future Warming
Valenti Rull, Javier Sigró, Teresa Vegas-Vilarrúbia
Subject: Earth Sciences, Environmental Sciences Keywords: climatology; paleoclimatology; temperature; precipitation; climographs; elevational gradients; global warming
The varved sediments of the Pyrenean Lake Montcortès (Pallars Sobirà, Lleida) embody a unique continuous high-resolution (annual) paleoarchive of the last 3000 years for the circum-Mediterranean region. A variety of paleoclimatic and paleoecological records have been retrieved from these uncommon sediments, which have turned the lake into a regional reference. Present-day geographical, geological, ecological and limnological features of the lake and its surroundings are reasonably well known, but the lack of a local weather station has prevented characterization of the current climate, which is important for developing modern-analog studies for paleoclimatic reconstruction and for forecasting the potential impacts of future global warming. Here, the local climate of the Montcortès area for the period 1955-2020 is characterized using a network of nearby stations situated along an elevational transect in the same river basin as the lake. The finding of statistically significant elevational gradients for annual and monthly average temperature and precipitation has enabled estimation of these parameters and their seasonal regime for the lake site. A representative climograph has been constructed from these data, which can serve as a synthetic descriptive and comparative climatic tool. The same analysis has provided climatic data for modern-analog studies useful for improving the interpretation of sedimentary records in climatic and ecological terms. In addition, the seasonal slope shifting of the climatic elevational gradients has been useful for gaining insights into possible future climatic trends under a warming scenario.
Recent Advances in PGPR and Molecular Mechanisms Involved in Drought Stress Tolerance
Diksha Sati, Veni Pande, Satish Chandra Pandey, Mukesh Samant
Subject: Life Sciences, Biochemistry Keywords: PGPR; Global food security; Sustainable agriculture; Omics techniques; Bioinoculants
The increased severity of droughts due to anthropogenic activities and global warming has imposed a more severe threat on agricultural productivity than ever before. This has further advanced the need for eco-friendly approaches to ensure global food security. In this regard, the application of plant growth-promoting rhizobacteria (PGPR) can be beneficial. Through various mechanisms, viz. osmotic adjustment, increased antioxidant and phytohormone production, regulation of stomatal conductivity, increased nutrient uptake, release of volatile organic compounds (VOCs), and exopolysaccharide (EPS) production, PGPR not only ensure the plant's survival during drought but also augment its growth. This review extensively discusses the various mechanisms of PGPR in drought stress tolerance. We have also summarized the recent molecular and omics-based approaches for elucidating the role of drought-responsive genes. The manuscript presents an in-depth mechanistic approach to combating drought stress and also deals with designing PGPR-based bioinoculants. Lastly, we present a possible sequence of steps for increasing the success rate of bioinoculants.
A Precision Evaluation Method for Remote Sensing Data Sampling Based on Hexagon Discrete Grid
Yue Ma, Guoqing Li, Xiaochuang Yao, Jin Ben, Qianqian Cao, Long Zhao, Lianchong Zhang, Rui Wang
Subject: Earth Sciences, Geoinformatics Keywords: Remote sensing; Global discrete grid; Accuracy evaluation; Hexagon grid
With the rapid development of earth observation, satellite navigation, mobile communication and other technologies, the magnitude of the spatial data we acquire and accumulate is increasing, and higher requirements are being placed on the application and storage of spatial data. Under these circumstances, a new form of spatial data organization has emerged: the global discrete grid. This form of data management can be used for the efficient storage and application of large-scale global spatial data; it is a digital multi-resolution geo-reference model that helps to establish a new model of data association and fusion, and it is expected to make up for the shortcomings in the organization, processing and application of current spatial data. There are different types of grid systems according to the grid division form, including global discrete grids with equal latitude and longitude, global discrete grids with variable latitude and longitude, and global discrete grids based on regular polyhedrons. However, there is as yet no accuracy evaluation index system for remote sensing images expressed on a global discrete grid. This paper is dedicated to finding a suitable way to express remote sensing data on discrete grids, and to establishing a suitable accuracy evaluation system for modeling remote sensing data based on hexagonal grids. The results show that this accuracy evaluation method can evaluate and analyze remote sensing data based on hexagonal grids at multiple levels, and the comprehensive similarity coefficient of the images before and after conversion is greater than 98%, which further demonstrates the usability of hexagonal grid-based representations of remote sensing images. Among the three sampling methods, nearest-neighbour interpolation produced the image with the highest correlation to the original.
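As a toy illustration of the round-trip comparison the abstract describes (the lattice layout, spacing, and similarity measure below are assumptions for illustration, not the paper's index system):

```python
# Resample a square-grid raster onto a hexagonal lattice by nearest-neighbour
# sampling, map it back, and score the round trip with a correlation
# coefficient between the original and reconverted images.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for a remote sensing band

# Hexagonal lattice centres (offset-row layout), spacing s pixels.
s = 1.0
rows = np.arange(0, 64, s * np.sqrt(3) / 2)
centres = [(r, c + (i % 2) * s / 2)
           for i, r in enumerate(rows)
           for c in np.arange(0, 64, s)]

# Forward: each hex centre takes the value of its nearest source pixel.
hex_vals = {rc: img[min(int(round(rc[0])), 63), min(int(round(rc[1])), 63)]
            for rc in centres}

# Backward: each raster pixel takes the value of its nearest hex centre.
pts = np.array(list(hex_vals))
vals = np.array(list(hex_vals.values()))
out = np.empty_like(img)
for y in range(64):
    for x in range(64):
        k = np.argmin((pts[:, 0] - y) ** 2 + (pts[:, 1] - x) ** 2)
        out[y, x] = vals[k]

print(f"round-trip correlation = {np.corrcoef(img.ravel(), out.ravel())[0, 1]:.4f}")
```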
Schools Closures during the COVID-19 Pandemic: A Catastrophic Global Situation
Danilo Buonsenso, Damian Roland, Cristina De Rose, Pablo Vásquez-Hoyos, Bazlin Ramly, Jessica Nandipa Chakakala-Chaziya, Alasdair Munro, Sebastián González-Dambrauskas
Subject: Keywords: COVID-19 pandemic; children; schools; schools closures; global health
School closures (SC) were adopted globally as a COVID-19 pandemic containment strategy. This extreme measure provoked a disruption of the educational system involving hundreds of millions of children worldwide. The return of children to school has been variable and is still an unresolved and contentious issue. Importantly, the process has not been directly correlated to the severity of the pandemic's impact and has fueled the widening of disparities, disproportionately affecting the most vulnerable populations. Available evidence shows SC added little benefit to COVID-19 control, whereas the harms related to SC severely affected children and adolescents. This unresolved issue has put children and young people at high risk of social, economic and health-related harm for years to come, triggering severe consequences during their lifespan. In this article we describe the process of SC and the reopening timetable across the globe. We highlight the data regarding the international state of educational systems around the world, putting emphasis on the rights of children to come back to school.
Global Dynamics of a Higher Order Difference Equation with a Quadratic Term
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equations; global stability; rate of convergence; boundedness; periodicity; semicycle
Online: 23 November 2020 (09:27:44 CET)
In this paper, we investigate the dynamics of the following higher-order difference equation: $$x_{n+1}=A+B\frac{x_{n}}{x_{n-m}^{2}},$$ where $A$, $B$ and the initial conditions are positive numbers, and $m\in\{2,3,\cdots\}$. In particular, we study the boundedness, periodicity, semi-cycles, global asymptotic stability and rate of convergence of solutions of the related higher-order difference equations.
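As a worked step the reader can check (a standard computation for equations of this form, using only the equation as stated above): setting $x_{n+1}=x_n=x_{n-m}=\bar{x}$ gives the positive equilibrium

$$\bar{x}=A+\frac{B}{\bar{x}}\quad\Longrightarrow\quad \bar{x}^{2}-A\bar{x}-B=0\quad\Longrightarrow\quad \bar{x}=\frac{A+\sqrt{A^{2}+4B}}{2},$$

taking the positive root since $A,B>0$.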
Narrative Discourse as an Emergent Phenomenon: Global Semiotic Approach
Inna Livytska
Subject: Arts & Humanities, Linguistics Keywords: narrative; meaning; emergence; subjectivity; telic aspect; global semiotics; Umwelt
This theoretical paper continues a spectrum of research on the sign character of narrative discourse against the background of the modern post-classical theory of narrativity. It aims to uncover the relationships between the meaning of the narrative text and sign signification, assuming an intentional character of narrative discourse governed by telic aspects (global semiotics). The global semiotic approach (Thomas Sebeok, 2001) views narrative discourse as a self-organizing entity with a purposeful (telic) character in all its constituent parts, which turn a static text into a dynamic whole in the process of reading, perception and interpretation. The key notion for the analysis of emergence is the term Umwelt (Jakob von Uexküll), denoting the perceptual world in which an organism (including a human) exists and acts as a subject. Umwelt thus represents a human's perceptual boundary, which modifies the surroundings in accordance with the human's subjective perspective. As Umwelt can be attributed to both biological and abiotic texts, meaning creation in narrative discourse is compared to a semiotic study of comparative Umwelten (Cobley, 2014), where narrative is defined as a modeling device for world creation through embodied subjectivity. It is confirmed that stressing the subjective sphere of information exchange and processing from the position of global semiotics necessitates introducing basic principles of biosemiotics (e.g., semiotic scaffolding) and teleology (i.e., cause, purpose, result) into the analysis of narrative discourse, and this constitutes the perspective for further research in this domain.
Preprint COMMUNICATION | doi:10.20944/preprints202005.0132.v1
Risk Perception and COVID-19
Liliana Cori, Fabrizio Bianchi, Ennio Cadum, Carmen Anthonj
Subject: Life Sciences, Other Keywords: risk perception; coronavirus; covid-19; risk communication; global health
Online: 7 May 2020 (15:12:32 CEST)
The ongoing COVID-19 pandemic is shaking the foundations of public health governance all over the world. Researchers are challenged with informing and supporting authorities on acquired knowledge and its practical implications. This commentary applies established theories of risk perception research to COVID-19 and reflects on the role of risk perceptions in these unprecedented times. Moreover, it calls for utilizing the knowledge on risk perception to improve health risk communication, build trust and contribute to collaborative governance.
Working Paper COMMUNICATION
A Robust Goal Is Needed for Species in the Post-2020 Global Biodiversity Framework
Brooke A. Williams, James E.M. Watson, Stuart H.M. Butchart, Michelle Ward, Thomas M. Brooks, Nathalie Butt, Friederike C. Bolam, Simon N. Stuart, Louise Mair, Philip J. K. McGowan, Richard Gregory, Craig Hilton-Taylor, David Mallon, Ian Harrison, Jeremy S. Simmonds
Subject: Earth Sciences, Environmental Sciences Keywords: Post-2020; Global Biodiversity Framework; Zero draft; Aichi Targets; Convention on Biological Diversity; biodiversity; extinction; conservation; IUCN Red List
In 2010, Parties to the Convention on Biological Diversity (CBD) adopted the Strategic Plan for Biodiversity 2011–2020 to address the loss and degradation of nature. Subsequently, most biodiversity indicators continued to decline. Nevertheless, conservation actions can make a positive difference for biodiversity. The emerging Post-2020 Global Biodiversity Framework has the potential to catalyze efforts to 'bend the curve' of biodiversity loss. Thus, the inclusion of a goal on species, articulated as Goal B in the Zero Draft of the Post-2020 Framework, is essential. However, as currently formulated, this goal is inadequate for preventing extinctions and reversing population declines, both of which are required to achieve the CBD's 2030 mission. We contend it is unacceptable that Goal B could be met while most threatened species deteriorated in status and many avoidable species extinctions occurred. We examine the limitations of the current wording and propose an articulation with a robust scientific basis. A goal for species that strives to end extinctions and recover populations of all species that have experienced population declines, and especially those at risk of extinction, would help to align actors towards the transformative actions and interventions needed for humans to live in harmony with nature.
Bootstrap ARDL Test on the Relationship among Trade, FDI and CO2 Emissions: Based on the Experience of BRICS Countries
Fumei He, Ke-Chiun Chang, Min Li, Xueping Li, Fangjhy Li
Subject: Social Sciences, Economics Keywords: global emission reduction; trade; FDI; BRICS countries; Bootstrap ARDL
We used the Bootstrap ARDL method to test the relationship between the export trade, FDI and CO2 emissions of the BRICS countries. We found that China's foreign direct investment and the one-period lag of CO2 emissions have a cointegrating relationship with exports. South Africa's foreign direct investment and CO2 emissions have a cointegrating relationship with the one-period lag of exports, and South Africa's one-period lag of exports and foreign direct investment have a cointegrating relationship with the one-period lag of CO2 emissions. However, for both China and South Africa, these three variables show no causal relationship in the long term. Among the other BRICS countries, Russia is the only one that exhibited degenerate case #1 as described by McNown et al. When we examined short-term causality, we found that CO2 emissions and export trade show a reverse causal relationship, while the relationship between FDI and carbon emissions is less clear. Export trade has a positive causal relationship with FDI. These relationships differ across situations and countries.
Deducing Earth's Global Energy Flows from a Simple Greenhouse Model
Miklos Zagoni
Subject: Earth Sciences, Atmospheric Science Keywords: global energy budget; simple greenhouse model; infrared-opaque limit
Earth's atmosphere is almost opaque in the infrared: about 374 W/m2 of the 396 W/m2 of surface upward longwave radiation is absorbed by the atmosphere, and only about 22 W/m2 leaves the system unabsorbed through the atmospheric window. This gives rise to the idea of approximating the annual global mean energy flow system with a simple idealized greenhouse model, in which the surface is surrounded by a single-layer shortwave (SW) transparent, longwave (LW) opaque, non-turbulent atmosphere. The energy flows in this geometry can be described by elementary arithmetic relationships. Starting from this model, the realistic Earth atmosphere can be reached by introducing partial atmospheric SW opacity, partial atmospheric LW transparency and turbulent fluxes during the course of the deduction. The resulting global mean energy flow system is then compared to several data sets, such as satellite observations from the CERES mission; estimates using direct surface observations and climate models; global energy and water cycle assessments; and independent detailed clear-sky radiative transfer computations. We find that the deduction from this idealized model approximates the real values in the Earth's energy budget with reasonable accuracy: the deduced fluxes and the observed ones are consistent within the acknowledged error of observations, while fundamental features of the initial geometry, like special ratios and definite relationships between the fluxes, are preserved.
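The "elementary arithmetic" of the idealized starting geometry can be written out explicitly (a standard single-layer result, stated here for orientation rather than taken from the paper): with absorbed solar flux $F$ and an SW-transparent, LW-opaque layer at temperature $T_a$ above a surface at $T_s$, the top-of-atmosphere and surface balances give

$$\sigma T_a^4 = F, \qquad \sigma T_s^4 = F + \sigma T_a^4 = 2F,$$

so in this limit the surface upward longwave flux is exactly twice the absorbed solar flux; the deduction described above then relaxes each idealization in turn.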
Changes in the Geographic Distribution of the Diana Fritillary (Speyeria diana: Nymphalidae) Under Forecasted Predictions of Climate Change
Carrie Wells, David Tonkyn
Subject: Biology, Ecology Keywords: Speyeria diana, butterfly, conservation, fragmentation, global warming, Maxent, WorldClim
Climate change is predicted to alter the geographic distribution of a wide variety of taxa, including butterfly species. Research has focused primarily on high latitude species in North America, with no known studies examining responses of taxa in the southeastern US. The Diana fritillary (Speyeria diana) has experienced a recent range retraction in that region, disappearing from lowland sites and now persisting in two phylogenetically disjunct mountainous regions. These findings are consistent with the predicted effects of a warming climate on numerous taxa, including other butterfly species in North America and Europe. To evaluate how climate change might influence the geographic distribution of this butterfly, we developed ecological niche models using Maxent to predict future changes to the distribution of S. diana under several climate models. We used two global circulation models, CCSM and MIROC, under low and high emissions scenarios to predict the future distribution of S. diana. Models were evaluated using the Receiver Operating Characteristics Area Under Curve test and the True Skill Statistic (mean AUC = 0.91 ± 0.0028 SE, TSS = 0.87 ± 0.0032 SE for RCP 4.5, and mean AUC = 0.87 ± 0.0031 SE, TSS = 0.84 ± 0.0032 SE for RCP 8.5), which both indicate that the models we produced were significantly better than random (0.5). The four modeled climate scenarios resulted in an average loss of 91% of suitable habitat for S. diana by 2050. Populations in the Southern Appalachian Mountains were predicted to suffer the most severe fragmentation and reduction in suitable habitat, threatening an important source of genetic diversity for the species. The geographic and genetic isolation of populations in the west suggests that those populations are equally vulnerable to decline in the future, warranting ongoing conservation of those populations as well. Our results suggest that the Diana fritillary is under threat of decline by 2050 across its entire distribution from climate change, and is likely to be negatively affected by other human-induced factors as well.
Environmental Lead Exposure and Adult Literacy in Myanmar: An Exploratory Study of Potential Associations at the Township Level
Robert C. MacTavish, Liam W. Remillard, Colleen M. Davison
Subject: Medicine & Pharmacology, Other Keywords: lead exposure; adult literacy; global health; environmental health; Myanmar
Environmental lead exposure is a population health concern in many low- and middle-income countries. Lead is found throughout Myanmar, and prior to the 1940s the country was the largest producer of lead worldwide. The aim of this study was to examine any potential association between lead mining and adult literacy rates at the level of the 330 townships in Myanmar. Townships were identified as lead or non-lead mining areas, and 2015 census data were examined, with associations identified using descriptive, analytical and spatial statistical methods. Overall, there does appear to be a significant relationship between lead mining activity and adult literacy levels (P<0.05), both among townships with low access to safe sanitation [OR = 2.701 (1.136-6.421)] and among townships with high access [OR = 18.40 (1.794-188.745)]. LISA cluster maps confirm these findings. This exploratory analysis is a first step in the examination of potential environmental lead exposure and its implications in Myanmar.
Watching the Smoke Rise Up: Thermal Efficiency, Pollutant Emissions and Global Warming Impact of Three Biomass Cookstoves in Ghana
George Y. Obeng, Ebenezer Mensah, George Ashiagbor, Owusu Boahen, Dan Sweeney
Subject: Engineering, Energy & Fuel Technology Keywords: cookstove; emissions; emission factor; efficiency; global warming impact; Ghana
In Ghana, about 73% of households rely on solid fuels for cooking. Over 13,000 annual deaths are attributed to exposure to indoor air pollution from inefficient combustion. In this study, an assessment of the thermal efficiency, emissions, and total global warming impact of three cookstoves commonly used in Ghana was completed using the IWA water boiling test (WBT) protocol. Statistical averages of three replicate tests for each cookstove were computed. Thermal efficiency results were: wood-burning cookstove 12.2% (Tier 0), traditional charcoal cookstove 23.3% (Tier 1-2), and improved charcoal cookstove 30% (Tier 2-3). The wood-burning cookstove emitted more CO, CO2 and PM2.5 than the traditional charcoal cookstove (coalpot) and the improved cookstove. The PM2.5 emission factor and emission rate for the wood-burning cookstove (Tier 0) were over four times higher than those of the traditional charcoal cookstove (Tier 3) and the improved cookstove (Tier 2). On the basis of the WBT, the annual global warming impact potential for emissions is estimated at 4 tonnes of CO2e for the wood-burning cookstove, 1.5 tonnes of CO2e for the charcoal cookstove (coalpot), and 1 tonne of CO2e for the improved cookstove. We conclude that awareness, policy, and incentives are needed to enable end-users to switch to improved cookstoves for increased efficiency and reduced emissions/global warming impact.
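The thermal-efficiency figure in a WBT is the ratio of useful heat (heating plus evaporating water) to the energy in the fuel burned. A back-of-envelope sketch with invented inputs (the constants and the assumed fuel heating value are not from the study, and the full IWA protocol's cold-start/hot-start/simmer phases are not shown):

```python
C_WATER = 4.186e3     # J/(kg*K), specific heat of water
H_FG = 2.26e6         # J/kg, latent heat of vaporization
LHV_WOOD = 16e6       # J/kg, assumed lower heating value of the fuel

def wbt_efficiency(m_water, delta_t, m_evaporated, m_fuel_dry):
    """Useful heat delivered to the pot divided by fuel energy consumed."""
    useful = m_water * C_WATER * delta_t + m_evaporated * H_FG
    return useful / (m_fuel_dry * LHV_WOOD)

# 5 kg of water heated by 75 K, 0.4 kg evaporated, 1.2 kg dry fuel burned
print(f"{wbt_efficiency(5.0, 75, 0.4, 1.2) * 100:.1f} %")
```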
Global Existence and Exponential Decay for a Dynamic Contact Problem of Thermoelastic Timoshenko Beam with Second Sound
Wenjun Liu, Dongqin Chen, Biqing Zhu
Subject: Mathematics & Computer Science, Analysis Keywords: thermoelastic Timoshenko beam; global existence; exponential stability; second sound
In this paper, we study the global existence and exponential decay for a dynamic contact problem between a Timoshenko beam with second sound and two rigid obstacles, in which the heat flux is given by Cattaneo's law instead of the usual Fourier's law. The main difficulties arise from the irregular boundary terms, from the low regularity of the weak solution, and from the weaker dissipative effects of heat conduction induced by Cattaneo's law. By considering related penalized problems, proving some a priori estimates and passing to the limit, we prove the global existence of the solutions. By considering the approximate framework, constructing some new functionals and applying the perturbed energy method, we obtain the exponential decay result for the approximate solution, and then prove the exponential decay rate for the original problem by utilizing weak lower semicontinuity arguments.
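For readers unfamiliar with "second sound", the contrast between the two constitutive laws is as follows (notation assumed: q heat flux, θ temperature, κ > 0 conductivity, τ > 0 relaxation time):

```latex
% Fourier's law (classical heat conduction, infinite propagation speed):
q = -\kappa\,\theta_x
% Cattaneo's law (finite propagation speed, i.e. ``second sound''):
\tau\,q_t + q = -\kappa\,\theta_x
```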
Uncertainty Assessment of Flood Hazard due to Levee Breaching
Cédric Goeury, Vito Bacchi, Fabrice Zaoui, Sophie Bacchi, Sara Pavan, Kamal El kadi Abderrezzak
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: flood hazard; dike breach; Monte Carlo framework; Global Sensitivity Analysis
Water resource management and flood forecasting are crucial societal and financial stakes requiring reliable predictions of flow parameters (depth, velocity), the accuracy of which is often limited by uncertainties in hydrodynamic numerical models. In this study, we assess the effect of two uncertainty sources, namely breach characteristics induced by overtopping and the roughness coefficient, on water elevations and inundation extent. A two-dimensional (2D) hydraulic solver was applied in a Monte Carlo integration framework to a reach of the Loire river (France), involving about 300 physical parameters. Inundation hazard maps for different flood scenarios made it possible to highlight the impact of the breach development chronology. Special attention was paid to proposing a relevant sensitivity analysis to examine the factors influencing the depth and extent of flooding. The spatial analysis of the vulnerable area induced by levee breach width shows that, as the flood discharge increases, the growing influence of this parameter is accompanied by a more localized spatial effect. This argues for a local analysis to allow a clear understanding of the flood hazard. The physical interpretation, highlighted by a global sensitivity analysis, showed precisely the dependence of the flood simulation on the main factors studied, i.e. the roughness coefficients and the characteristics of the breaches.
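A common way to run such a global sensitivity analysis is a variance-based Sobol decomposition. A toy sketch with the SALib package (the two-parameter response below is a cheap stand-in for the 2D hydraulic solver, and all names and bounds are invented):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 2,
    "names": ["breach_width", "roughness"],
    "bounds": [[10, 200], [0.02, 0.05]],
}
X = saltelli.sample(problem, 1024)                     # Saltelli sampling design
# toy water-level response standing in for the hydrodynamic model
Y = 0.01 * X[:, 0] + 80 * X[:, 1] + 0.05 * X[:, 0] * X[:, 1]
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"].round(2))))  # first-order Sobol indices
```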
Factors Associated with the Prevalence of Malnutrition among Haemodialytic Patients: A Two-Centre Study in Jeddah Region, Saudi Arabia
Waad M. Turkistani, Firas S. Azzeh, Mazen M. Ghaith, Lujain A. Bahubaish, Osama A. Kensara, Hussain A. Almasmoum, Abdullah F. Aldairi, Anmar A. Khan, Ahmad A. Alghamdi, Ghalia Shamlan, Maha H. Alhussain, Reham M. Algheshairy, Abdullah M. AlShahrani, Maysoun S. Qutob, Awfa Y. Alazzeh, Haitham M. Qutob
Subject: Medicine & Pharmacology, Nutrition Keywords: Chronic Kidney Disease; Protein-Energy Wasting; Modified-Subjective Global Assessment
Background: Chronic kidney disease, one of the most common diseases in the world, is characterized by irreversible impairment of the kidney's metabolic, excretory, and endocrine functions. During end-stage renal disease, patients require renal replacement therapy, such as hemodialysis (HD). Protein-energy wasting is a common health problem among HD patients. This study therefore aims to assess the nutritional status of HD patients at two HD centers in Jeddah, Saudi Arabia, and to determine its associated factors. Methods: A cross-sectional study of 211 female and male HD patients was conducted at two different dialysis centers in Jeddah, Saudi Arabia. Malnutrition was recognized using the Modified-SGA (M-SGA), comprising two parts: medical history and physical examination. Sociodemographic and health status were also determined for all patients. Patients were classified based on their M-SGA score into two groups: normal and malnourished. Results: Overall, 54.5% of the participants showed malnutrition. Unemployment, low muscle strength and mass, high level of medication use, and high dialysis vintage were positively (P<0.05) associated with malnutrition. Conclusion: The M-SGA score indicates a high prevalence of malnutrition among HD patients. These results show the importance of regular assessment and follow-ups for HD patients to ensure better health and nutritional status.
Farmers' Participatory Evaluation of Alternate Wetting and Drying Irrigation Method on the Greenhouse Gas Emission, Water Productivity and Paddy Yield in Bangladesh
Mohammad Mobarak Hossain, Mohammad Rafiqul Islam
Subject: Biology, Anatomy & Morphology Keywords: methane; nitrous oxide; global warming potential; water productivity; paddy yield
Online: 3 February 2022 (17:02:55 CET)
In dry season paddy farming, alternate wetting and drying (AWD) irrigation improves water productivity and paddy production, and has the potential to decrease greenhouse gas (GHG) emissions such as methane (CH4) and nitrous oxide (N2O) when compared to continuous flooding (CF). However, there is a lack of research in Bangladesh on the effects of water management on CH4 and N2O emissions. During November 2017–April 2018, participatory on-farm trials were conducted in the Feni and Chattogram districts of Bangladesh, involving a total of 105 farmers and 20 hectares of land (62 farmers in Feni and 43 in Chattogram, with 10 hectares at each location). We compared irrigation water use and cost reductions, paddy yield, and CH4 and N2O emissions from paddy fields irrigated using the AWD and CF methods. The CH4 and N2O emissions were determined using the Cool Farm Beta-3 methodology, and the global warming potential (GWP) was estimated using the Intergovernmental Panel on Climate Change 2014 standard approach. The mean results of 30 randomly selected farmers from the two locations (15 from each) showed that AWD markedly decreased irrigation water consumption, by about 24%, and increased water productivity by 224%. We estimated 23% savings in irrigation costs under AWD. AWD also improved paddy production by 3% over CF. AWD irrigation resulted in a 47% reduction in cumulative CH4 emissions, with a lower CH4 emission factor (0.74 kg ha-1 day-1) than CF (1.39 kg ha-1 day-1). There was no obvious difference in N2O emission between AWD and CF. When compared to CF, AWD decreased the overall GWP by 27% and lowered the GHG intensity by 42%. The CH4 and N2O emissions did not differ substantially between Feni and Chattogram.
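The GWP aggregation behind such CO2-equivalent comparisons is simple arithmetic. A sketch using the abstract's CH4 emission factors, IPCC AR5 100-year GWP factors (28 for CH4, 265 for N2O), an assumed 120-day season, and a purely illustrative N2O flux:

```python
GWP_CH4, GWP_N2O = 28, 265        # IPCC AR5 100-year GWP factors

def co2e(ch4_kg_ha, n2o_kg_ha):
    """Aggregate CH4 and N2O fluxes (kg/ha) into kg CO2-equivalent per ha."""
    return ch4_kg_ha * GWP_CH4 + n2o_kg_ha * GWP_N2O

cf  = co2e(1.39 * 120, 0.5)       # CF emission factor from the abstract; 120-day season
awd = co2e(0.74 * 120, 0.5)       # AWD emission factor; the N2O flux (0.5 kg/ha) is invented
# the study's overall figure (27%) uses measured fluxes and the true season length
print(f"illustrative seasonal GWP reduction: {(1 - awd / cf) * 100:.0f}%")
```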
Global Surface HCHO Distribution derived from Satellite Observations with Neural Networks Technique
Jian Guan, Bohan Jin, Yizhe Ding, Wen Wang, Guoxiang Li, Pubu Ciren
Subject: Earth Sciences, Environmental Sciences Keywords: surface formaldehyde; neural network model; interval estimation; TROPOMI; global distribution
Formaldehyde (HCHO) is one of the most important carcinogenic air contaminants. However, the lack of global monitoring of surface HCHO concentrations currently hinders research on outdoor HCHO pollution. Traditional methods are either restricted to small areas or data-demanding for research on a global scale. To alleviate this issue, we adopted neural networks to estimate surface HCHO concentration with confidence intervals in 2019, utilizing HCHO vertical column density data from TROPOMI, and in-situ data from the HAPs (harmful air pollutants) monitoring network and the ATom mission. Our results show that the global average surface HCHO concentration is 2.30 μg/m3. In terms of regions, concentrations in the Amazon Basin, Northern China, South-east Asia, the Bay of Bengal, and Central and Western Africa are among the highest. The results from our study provide a first dataset of global surface HCHO concentration. In addition, the derived confidence interval of surface HCHO concentration adds an extra layer of confidence to our results. As a pioneering work in adopting confidence interval estimation in AI-driven atmospheric pollutant research and the first global HCHO surface distribution dataset, our paper paves the way for rigorous study of global ambient HCHO health risk and economic loss, thus providing a basis for pollutant control policies worldwide.
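One simple way to attach intervals to neural-network estimates is the spread of a small ensemble; this is a stand-in for the paper's (unspecified) interval method, with fully synthetic features and targets:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 4))        # e.g. HCHO column density plus met covariates
y = 2.3 + X @ [0.8, -0.3, 0.5, 0.1] + rng.normal(0, 0.3, 2000)  # surface HCHO, ug/m3

# ten networks differing only in random initialisation
ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=i).fit(X, y) for i in range(10)]
preds = np.stack([m.predict(X[:5]) for m in ensemble])
mean = preds.mean(0)
lo, hi = np.percentile(preds, 2.5, 0), np.percentile(preds, 97.5, 0)  # ~95% band
print(np.round(mean, 2), np.round(hi - lo, 2))
```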
On the Role of Matrix-Weights Elements in Consensus Algorithms for Multi-Agent Systems
Joshua Ogbebor, Xiangyu Meng
Subject: Engineering, Electrical & Electronic Engineering Keywords: matrix-weighted graphs; multi-agent systems; clustered consensus; global consensus
This paper extends the concept of weighted graphs to matrix-weighted graphs. Consensus algorithms dictate that all agents reach consensus when the weighted graph is connected; however, this is not always the case for matrix-weighted graphs. The conditions leading to different types of consensus have been extensively analysed based on the properties of matrix-weighted Laplacians and graph-theoretic methods. In practice, however, there is concern about how to pick matrix weights to achieve some desired consensus, and about how changes to the elements of the matrix weights affect the consensus algorithm. By selecting the elements of the matrix weights, different clusters may be possible. In this paper, we map the roles of the elements of the matrix weights in the system's consensus algorithm. We explore the choice of matrix weights to achieve different types of consensus and clustering. Our results are demonstrated on a network of three agents where each agent has three states.
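A toy simulation of matrix-weighted consensus in the same setting (three agents, three states each), not the paper's example: each agent integrates x_i' = Σ_j W_ij (x_j − x_i). The weight matrix below is positive semi-definite, and its zero eigenvalue illustrates how state components can split into clusters.

```python
import numpy as np

W = np.diag([1.0, 1.0, 0.0])     # PSD matrix weight; the zero entry decouples state 3
edges = {(0, 1): W, (1, 2): W}   # path graph on 3 agents, illustrative weights

x = np.random.default_rng(3).normal(size=(3, 3))  # 3 agents x 3 states
dt = 0.01
for _ in range(5000):
    dx = np.zeros_like(x)
    for (i, j), Wij in edges.items():
        dx[i] += Wij @ (x[j] - x[i])
        dx[j] += Wij @ (x[i] - x[j])
    x += dt * dx
print(x.round(3))   # first two state components agree across agents; the third need not
```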
Performance Comparison of Geo-Referencing a Radar Using Prism Method With Global Positioning System
Ebenhezer Mabotha, Nkateko Mabunda
Subject: Engineering, Automotive Engineering Keywords: Geo-referencing; Surveying radar; Mining; Global positioning system; Slope Monitoring
Monitoring of surface operations using movement and surveying radar (MSR) can prevent loss of life, equipment, production, and even the mine itself. Slope monitoring using MSR is an important aspect of open-pit mining, as it provides real-time deformation data for the slope. It is therefore important that the radar is accurately geo-referenced in order to provide accurate real-time movement data. Geo-referencing is the process of determining an instrument's position (in the form of Easting, Northing, Height) as well as its orientation with respect to the mine's local coordinate system. This yields geo-referenced data points from the radar, each identified by a unique set of coordinates in the mine's coordinate system, which allows the radar to track movement for a specific set of coordinates. In this research, we assess the performance of geo-referencing a radar using the total station method and compare it with the integration of an Advance Navigation Spatial Dual GPS system connected via RS422 on the MSR. This includes using the Spatial Dual navigation coordinate output to calculate the radar's position relative to the mine's local coordinates and mapping the radar's azimuth, elevation and range (Az, El and Rl) values to the measured pit-slope data points. Furthermore, a comparison of key attributes of both geo-referencing methods is performed using a matrix system, giving an overall performance appraisal of both systems. Integrating a navigation system gives the radar an auto geo-referencing functionality that reduces the time spent completing this process. The findings reveal that the GPS obtained a higher score than the total station with prism method on the weighted matrix system. The total station was found to be more accurate than the GPS; however, the deployment time for the GPS is quicker than that of the total station. This is important for different operations, such as strip and open-pit mining, in choosing the preferred method of geo-referencing depending on the level of accuracy required.
Speckle-Tracking Echocardiography With Novel High Frame-Rate Imaging
Kana Fujikura, Mohammed Makkiya, Muhammad Farooq, Yun Xing, Wayne Humphrey, Mohammad Hashim Mustehsan, Mario J. Garcia, Cynthia C. Taub
Subject: Medicine & Pharmacology, Allergology Keywords: echocardiography; speckle-tracking; frame rate; global longitudinal strain; left ventricle
Background: Global longitudinal strain (GLS) measures myocardial deformation and is a sensitive modality for detecting subclinical myocardial dysfunction and predicting cardiac outcomes. The accuracy of speckle-tracking echocardiography (STE) is dependent on temporal resolution. A novel software enables acquisition of relatively high frame rate (Hi-FR, ~200 fps) echocardiographic images, enabling us to investigate the impact of Hi-FR imaging on GLS analysis. The goal of this pilot study was to demonstrate the feasibility of Hi-FR for STE. Methods: In this prospective study, we acquired echocardiographic images using clinical scanners on patients with normal left ventricular systolic function at Hi-FR and at a conventional frame rate (Reg-FR, ~50 fps). GLS values were evaluated on apical 4-, 2- and 3-chamber images acquired at both Hi-FR and Reg-FR. Inter-observer and intra-observer variabilities were assessed for Hi-FR and Reg-FR. Results: 143 resting echocardiograms with normal LVEF were included in this study. The frame rate was 190 ± 25 fps for Hi-FR and 50 ± 3 fps for Reg-FR, and the heart rate was 71 ± 13 bpm. Strain values measured at Hi-FR were significantly higher than those measured at Reg-FR (all p < 0.001). Inter-observer and intra-observer correlations were strong for both Hi-FR and Reg-FR. Conclusions: We demonstrated that strain values were significantly higher using Hi-FR when compared with Reg-FR in patients with normal LVEF. It is plausible that the higher temporal resolution enabled measurement of myocardial strain at the desired time point. The results of this study may inform clinical adoption of this novel technology. Further investigations are necessary to evaluate the value of Hi-FR for assessing myocardial strain in stress echocardiography in the setting of tachycardia.
Dynamics of a System of Higher Order Difference Equations with Quadratic Terms
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Difference equations; global asymptotic stability; boundedness; rate of convergence; oscillation
In this paper we investigate the global asymptotic stability of the following system of higher order difference equations with quadratic terms: x_{n+1}=A+B((y_{n})/(y_{n-m}²)), y_{n+1}=A+B((x_{n})/(x_{n-m}²)), where A and B are positive numbers and the initial values are positive numbers. We also study the boundedness, rate of convergence and oscillation behaviour of the solutions of the related system.
Dynamics of System of Higher Order Difference Equations with Quadratic Terms
Online: 6 October 2020 (16:12:29 CEST)
This paper aims to investigate the global asymptotic stability of the following system of higher order difference equations with quadratic terms: x_{n+1}=A+B((y_{n})/(y_{n-m}²)), y_{n+1}=A+B((x_{n})/(x_{n-m}²)), where A and B are positive numbers and the initial values are positive numbers. We also study the rate of convergence and oscillation behaviour of the solutions of the related system.
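A quick numerical check of this recurrence (the parameters A, B, the delay m, and the initial values below are arbitrary choices, not taken from either preprint):

```python
# simulate x_{n+1} = A + B*y_n/y_{n-m}^2, y_{n+1} = A + B*x_n/x_{n-m}^2
A, B, m = 2.0, 1.0, 1
x = [1.5] * (m + 1)          # positive initial values x_0..x_m
y = [0.8] * (m + 1)
for n in range(m, 200):
    x.append(A + B * y[n] / y[n - m] ** 2)
    y.append(A + B * x[n] / x[n - m] ** 2)
# both sequences settle near the positive equilibrium of t = A + B/t
print(round(x[-1], 6), round(y[-1], 6))
```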
Double-Edged Sword of Global Financial Crisis and COVID-19 Pandemic on Crude Oil Stock Returns
Monday Osagie Adenomon, Ngozi G. Emenogu
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Crude oil; Global financial crisis; COVID-19; Stock; Returns; Persistence
This study investigates the impact of the global financial crisis and the present COVID-19 pandemic on daily and weekly crude oil futures using four variants of ARMA-GARCH models: ARMA-sGARCH, ARMA-eGARCH, ARMA-TGARCH and ARMA-aPARCH, with dummy variables. We also investigated the persistence, half-life and backtesting of the models. This study therefore seeks to contribute to the body of literature on the impact of the global financial crisis and the present COVID-19 pandemic on the crude oil futures market, a topic that has so far received little study. We obtained and analyzed daily and weekly crude oil futures from secondary sources. The daily crude oil futures used in this study cover the period from 4th January 2000 to 27th April 2020, while the weekly crude oil futures cover 2nd January 2000 to 26th April 2020. The global financial crisis period covers 2nd July 2007 to 31st March 2009, and the current COVID-19 pandemic covers 1st January 2020 to 27th April 2020. The study used both Student t and skewed Student t innovations, with AIC, goodness-of-fit tests and backtesting to select the best model. Most of the estimated ARMA-GARCH models are supported by the skewed Student t distribution, and most exhibited high persistence values in the presence of the global financial crisis and the COVID-19 pandemic. Overall, the estimated ARMA(1,0)-eGARCH(2,1) model for daily crude oil futures and the ARMA(1,0)-eGARCH(2,2) model for weekly crude oil futures were significantly impacted by the global financial crisis and the present COVID-19 pandemic, while the preferred estimated models also passed the goodness-of-fit tests and backtesting. This study recommends that shareholders and investors think outside the box, as crude oil futures tend to be affected by global financial crises and the COVID-19 pandemic, and that countries that depend mostly on crude oil diversify their economies in order to survive and be sustained during financial and health crises.
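The daily specification selected above can be written down in a few lines with the `arch` package; a hedged sketch on simulated returns (the crisis dummy regressors of the paper are omitted, and the series is a placeholder, not the oil futures data):

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
returns = 100 * 0.01 * rng.standard_t(5, 5000)   # placeholder for daily futures returns

am = arch_model(returns,
                mean="AR", lags=1,               # ARMA(1,0) mean equation
                vol="EGARCH", p=2, q=1,          # eGARCH(2,1) variance equation
                dist="skewt")                    # skewed Student-t innovations
res = am.fit(disp="off")
print(res.params.round(3))                       # persistence is read off the beta terms
```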
Worldwide Hydrogen Provision Scheme Based on Renewable Energy
Philipp-Matthias Heuser, Thomas Grube, Heidi Heinrichs, Martin Robinius, Detlef Stolten
Subject: Engineering, Energy & Fuel Technology Keywords: hydrogen supply; renewable energy import; global energy infrastructure; hydrogen trade
The threats of climate change and the sustainable supply of clean energy are global challenges that require an international approach to the energy supply. Utilizing the wind and solar energy potential of regions where these renewable sources are especially viable to produce hydrogen by means of water electrolysis represents an attractive option to counter the above-mentioned challenges. Within the scope of this techno-economic analysis of a worldwide hydrogen supply infrastructure based on renewable energy, selected regions are assessed on the basis of their wind or solar energy potential. In contrast to established analyses of hydrogen infrastructures, this paper introduces a worldwide allocation approach for supplying hydrogen from strong wind and solar regions to different demand regions on the premise of a global supply cost minimum. The allocation results show a significant dependence of hydrogen export volumes on the overseas transport distances of potential trading partners. Hence, the transnational trading flows of hydrogen derived from wind and solar energy are concentrated in continental regions.
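A cost-minimising allocation of this kind can be cast as a transportation linear program. A toy two-supplier, three-importer instance (all costs, capacities, and demands invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[2.7, 3.4, 3.9],        # supply regions (rows) -> demand regions (cols)
                 [3.1, 2.9, 3.6]]).ravel()   # EUR/kg, including transport distance
supply_cap = [500, 400]                  # export limits per supply region, kt/yr
demand = [300, 350, 200]                 # required imports per demand region, kt/yr

A_ub = [[1, 1, 1, 0, 0, 0],              # each supplier's shipments <= its capacity
        [0, 0, 0, 1, 1, 1]]
A_eq = [[1, 0, 0, 1, 0, 0],              # each demand region receives exactly its demand
        [0, 1, 0, 0, 1, 0],
        [0, 0, 1, 0, 0, 1]]
res = linprog(cost, A_ub=A_ub, b_ub=supply_cap, A_eq=A_eq, b_eq=demand)
print(res.x.reshape(2, 3), round(res.fun, 1))
```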
Effect of Domestic and Global Environmental Events on Environmental Concern and Environmental Responsibility among University Students
Piyapong Janmaimool, Surapong Chudech
Subject: Social Sciences, Other Keywords: global environmental concerns; domestic environmental concerns; environmental attitudes; environmental responsibility
Recently, both global and domestic environmental events have been occurring more frequently, bringing catastrophic consequences to humans and the environment. These adverse events have caused widespread concern among the general public. In positive terms, these devastating events could potentially enhance people's environmental awareness, which, in turn, could instill a greater sense of environmental responsibility. This study aims to investigate how university students concern themselves with global and domestic catastrophic environmental events and to examine how global and domestic environmental concerns mediate the effect of environmental knowledge and attitudes on university students' environmental responsibility. Students of King Mongkut's University of Technology Thonburi in Bangkok, Thailand, were selected as participants. A simple random sampling technique was applied to select the research participants. Questionnaire surveys with 863 students were carried out during September–October 2019. A path analysis was performed to test how global and local environmental concerns mediate the effect of environmental knowledge and attitudes on university students' environmental responsibility. The results demonstrated that domestic environmental concerns, taken alone, contributed less to the students' sense of environmental responsibility. Domestic environmental concerns had a stronger effect on environmental responsibility when taken together with global environmental concerns. In addition, both domestic and global environmental concerns could help transform environmental knowledge and attitudes into environmental responsibility. Only environmental attitudes had no direct effect on responsibility. These results show that domestic and global catastrophic environmental events could raise students' levels of concern for the environment, and, ultimately, enhance their sense of responsibility to protect the environment.
The Impact of Collaborative Innovation on Ecological Efficiency — Empirical Research Based on China's Regions
Jianqing Zhang, Song Wang, Fei Fan, Peilei Yang
Subject: Social Sciences, Economics Keywords: ecological efficiency; collaborative innovation; global-malmquist; gravity model; system-gmm
Taking capital, manpower, and natural resources as inputs, regional GDP as expected output, and industrial pollution as undesired output, this study measures the ecological efficiency of various regions in China using the Global-Malmquist model. The results show an initial sharp decline in ecological efficiency followed by a gradual increase in the time dimension, but no significant correlation in the spatial dimension. Using the gravity model to quantify the attractiveness of the regions' capital and human resources for collaborative innovation, the study estimates the impact of collaborative innovation on eco-efficiency through the system Generalized Method of Moments (GMM) model. The results show that technological innovation capital in other regions has a negative "U" relationship with local ecological efficiency, while scientific and technological innovation human resources have a positive "U" relationship. In addition, government financial support in science and technology and the ecological efficiency of the previous period serve as promoting factors of the current local ecological efficiency, while the introduction of foreign technological innovation is likely to inhibit improvements in ecological efficiency. Based on these findings, this study puts forward corresponding policy recommendations for local governments to advance their development agendas alongside their environmental priorities in line with their specific circumstances.
On the Solutions of Four Rational Difference Equations Associated to Tribonacci Numbers
İnci Okumuş, Yüksel Soykan
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: difference equations, solution, equilibrium point, tribonacci number, global asymptotic stability
In this study, we investigate the form of the solutions, stability character and asymptotic behavior of the following four rational difference equations x_{n+1} = (1/(x_{n}(x_{n-1}±1)±1)), x_{n+1} = ((-1)/(x_{n}(x_{n-1}±1)∓1)), whose solutions are associated with Tribonacci numbers.
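The Tribonacci numbers referenced above satisfy T_n = T_{n-1} + T_{n-2} + T_{n-3}; the convention T_0 = 0, T_1 = T_2 = 1 is assumed here, and the preprint expresses the solutions x_n in terms of ratios of these numbers. A tiny generator:

```python
def tribonacci(n):
    """Return the Tribonacci numbers T_0..T_n with T_0=0, T_1=T_2=1."""
    t = [0, 1, 1]
    while len(t) <= n:
        t.append(t[-1] + t[-2] + t[-3])
    return t[:n + 1]

print(tribonacci(10))   # [0, 1, 1, 2, 4, 7, 13, 24, 44, 81, 149]
```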
3D Numerical Modelling and Sensitivity Analysis of the Processes Controlling Organic Matter Distribution and Heterogeneity. A Case Study from the Toarcian of the Paris Basin
Benjamin Bruneau, Marc Villié, Mathieu Ducros, Benoit Chauveau, François Baudin, Isabelle Moretti
Subject: Earth Sciences, Geology Keywords: Organic Matter; TOC; Global Sensitivity Analysis; Modelling; Paris Basin, Toarcian
The active debate about the processes governing organic-rich sediment deposition generally involves the relative roles of elevated primary productivity and enhanced preservation related to anoxia. However, other less spotlighted factors could have a strong impact on such deposits, e.g. residence time in the water column (bathymetry), sedimentation rate, and the transport behavior of organo-mineral floccules on the sea floor. These are all strongly interrelated and may be obscured in current conceptual models inspired by the most representative modern analogues (i.e. upwelling zones and stratified basins). To improve our comprehension of organic matter distribution and heterogeneities, we conducted a sensitivity analysis on the processes involved in organic matter production and preservation, simulated within a 3D stratigraphic forward model. The Lower-Middle Toarcian of the Paris Basin was chosen as a case study as it represents one of the best documented examples of marine organic matter accumulation. The relative influence of the critical parameters (bathymetry, diffusive transport, oxygen mixing rate and primary production) on the output parameters (Total Organic Carbon and oxygen level), determined by performing a Global Sensitivity Analysis, shows that, in the context of a shallow epicontinental basin, a moderate primary productivity (> 175 gC·m-2·yr-1) can lead to local anoxia and organic matter accumulation. We argue that, regarding all the processes involved, the presence and distribution of organic-rich intervals is linked as a first-order parameter to the morphology of the basin (e.g. ramp slope, bottom topography). These interpretations are supported by very specific ranges of critical parameters which allowed us to obtain output parameter values in accordance with the data. This quantitative approach and its conclusions open new perspectives on the understanding of the global distribution and preservation of organic-rich sediments.
Control of a DC-DC Buck Converter through Contraction Techniques
David Angulo-Garcia, Fabiola Angulo, Gustavo Osorio, Gerard Olivar
Subject: Engineering, Control & Systems Engineering Keywords: DC-DC buck converter; contraction analysis; global stability; matrix norm
Reliable and robust control of power converters is a key issue in the performance of numerous technological devices. In this paper we present a design technique for the control of a DC-DC buck converter with a switching strategy that guarantees not only good performance but also global stability. We show that, by making use of the contraction theorem in the Jordan canonical form of the buck converter, it is possible to find a switching surface that guarantees stability but is incapable of rejecting load perturbations. To overcome this, we expand the system to include the dynamics of the voltage error, and we demonstrate that the same design procedure is not only able to stabilize the system at the desired operating point but also to reject load, input voltage and reference voltage perturbations.
Child Protection and Social Inequality: Understanding Child Prostitution in Malawi
Pearson Nkhoma, Helen Charnley
Subject: Social Sciences, Sociology Keywords: child prostitution, global inequality, gender inequality, participatory research, capability approach.
This article draws on empirical research seeking to develop more nuanced understandings of child prostitution, previously theorised on the basis of children's rights, feminist, and structure/agency debates, largely ignoring children's own understandings of their involvement in prostitution. Conducted in Malawi, one of the economically poorest countries in the world, the study goes to the heart of questions of inequality and child protection. With careful attention to ethical considerations, a participatory approach was used to enable 19 girls and young women, whose involvement in prostitution began in childhood, to convey their own experiences and understandings of involvement. Data were collected using a range of methods, chosen by participants to match their abilities and interests. Data analysis and interpretation were aided by reference to the capability approach focussing on questions of human rights and social justice for women and girls. Generating rare insights into participants' worlds, the research demonstrates how the persistence of deeply embedded cultural values in contexts of extreme poverty serves to sustain gender inequalities, constraining choices for girls and denying them opportunities to lead valued lives. The article ends by considering the theoretical and methodological implications of the study, policy and practice recommendations and opportunities for further research.
Thermal Regime of a Temperate Deep Lake and Its Response to Climate Change: Lake Kuttara, Japan
Kazuhisa A. Chikita, Hideo Oyagi, Tadao Aiyama, Misao Okada, Hideyuki Sakamoto, Toshihisa Itaya
Subject: Earth Sciences, Environmental Sciences Keywords: non-freezing; temperate lake; heat budget; heat storage; global warming
A temperate deep lake, Lake Kuttara, Hokkaido, Japan (148 m deep at maximum), was completely frozen every winter in the 20th century. However, unfrozen conditions of the lake over winter have occurred four times in the 21st century, probably due to global warming. In order to understand how the thermal regime of the lake responds to climate change, its heat storage change was calculated by estimating the heat budget of the lake and monitoring water temperature at the deepest point from September 2012 to June 2016. As a result, the temporal change of heat storage from the heat budget was very consistent with that from the direct temperature measurement (determination coefficient R2 = 0.827). The 1978–2017 data at a meteorological station near Kuttara indicate significant (at less than the 5% level) long-term trends in air temperature (0.024 °C/yr) and wind speed (−0.010 m/s/yr). A sensitivity analysis for the heat storage from the heat budget estimate and an estimate of return periods for mean air temperature in mid-winter allow us to conclude that the lake could remain unfrozen about once every two years in a decade.
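Heat storage of a lake is the volume integral of heat content over depth, weighted by the area of each depth layer. A hedged sketch (the temperature profile and hypsographic curve below are idealised inventions, not Lake Kuttara measurements):

```python
import numpy as np

RHO, CP = 1000.0, 4186.0                  # kg/m3 and J/(kg*K) for water
depth = np.arange(0, 148, 1.0)            # m, down to the maximum depth
temp = 4 + 18 * np.exp(-depth / 15)       # degC, idealised stratified summer profile
area = 4.7e6 * (1 - depth / 148) ** 0.7   # m2, assumed hypsographic curve

# heat content relative to 0 degC, integrated over depth layers
heat = np.trapz(RHO * CP * temp * area, depth)
print(f"heat storage ~ {heat:.3e} J")
```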
Analysis of Inflection and Singular Points on Parametric Curve with a Shape Factor
Zhi Liu, Chen Li, Jieqing Tan, Xiaoyan Chen
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: shape factor; singular points; inflection points; local convexity; global convexity
The features of a class of cubic curves with a shape factor are analyzed by means of envelope theory and topological mapping. The effects of the shape factor on the cubic curves are made clear. Necessary and sufficient conditions are derived for the curve to have one or two inflection points, a loop or a cusp, or to be locally or globally convex. These conditions are completely characterized by the relative position of the edge vectors of the control polygon and the shape factor. The results are summarized in a shape diagram, which is useful when the cubic parametric curves are used for geometric modeling. Furthermore, we discuss the influence of the shape factor on the shape diagram and the ability to adjust the shape of the curve.
The Fall and Rise of Diopatra in the Brazilian Coast
Paulo Cesar Paiva, Antonia Cecilia Zacagnini Amaral, Victor Corrêa Seixas, Mônica A. Varella Petti, Tatiana Menchini Steiner
Subject: Biology, Ecology Keywords: South Brazilian Bight; biogeography; heatwaves; global warming; range-shifts; alien species
Patches of Diopatra species from Brazilian sandy beaches were followed for ca. 50 years. Data were accessed from papers, gray literature, images and collections to verify changes over time in the South Brazilian Bight (SBB) from 1974-2021. We modeled maximum density over time at 15 beaches, observing very high densities (> 100 ind.m-2) of three species of Diopatra in 1974, followed by a decrease (~ 10 ind.m-2) until 1995 and a strong decline (1996-2002) when populations were almost regionally extinct (0-1 ind.m-2). A slight recovery (3-4 ind.m-2) occurred after 2006 for a single species, D. marinae, associated with warmer northern waters, suggesting a range shift. This pattern was associated with heatwaves linked to an El Niño event (1988) and gradual sea surface temperature (SST) warming of ca. 1 °C since 1974. The use of Diopatra spp. as fishing bait could also be associated with such a reduction. After 2016, D. neapolitana, a likely alien species, became established in the SBB at high densities. Projections based on Species Distribution Modeling (SDM) suggest a potential for invasion over the same range as the known species of the D. cuprea complex along the Brazilian coast, although there are no signs of competition between the two species.
The Effect of Atmospheric Parameters and Climate Changes on the NDVI Index in the Hyrcanian Forests
Farid Rahimi
Subject: Biology, Ecology Keywords: greenhouse gases; climate changes; Hyrcanian forests; global warming; NDVI; remote sensing
The increase in the production and entry of greenhouse gases into the earth's atmosphere has disturbed the balance of the environment. The signs of climate change (caused by global warming) are clearly visible and its effects are tangible. According to the United Nations, if the current trend continues, the global average temperature will increase by 3.2°C by the end of the century, which would have terrible consequences. This research aimed to investigate the effects of greenhouse gases and the resulting climate changes on the Hyrcanian forests. The Hyrcanian forests, one of the oldest forest areas (remaining from the Paleogene era), were studied by remote sensing from 2013 to 2021. The analysis of images taken by the Landsat 8 satellite showed that over 9 years the NDVI index decreased by 0.6 units and the average air temperature increased by 0.5°C. Although the average relative humidity increased by only 2%, the average annual rainfall recorded an increase of 25 mm. Analysis of the statistics showed that the rains occur irregularly and are often torrential. It is therefore predicted that as the average temperature continues to increase, the NDVI index will further decrease; as a result, the forest cover will become weaker, the soil will lose more of its water absorption capacity, and, due to the increase in average rainfall, successive floods will occur. Consequently, soil erosion will increase and the extinction and migration of plant and animal species will increase significantly.
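NDVI is computed per pixel from red and near-infrared reflectance (for Landsat 8, band 4 is red and band 5 is near-infrared); the tiny arrays below are synthetic stand-ins for real rasters:

```python
import numpy as np

red = np.array([[0.08, 0.10], [0.12, 0.09]])   # band-4 reflectance (synthetic)
nir = np.array([[0.45, 0.40], [0.30, 0.42]])   # band-5 reflectance (synthetic)

ndvi = (nir - red) / (nir + red)               # ranges from -1 to +1
print(ndvi.round(2))                           # values near +1 = dense, healthy vegetation
```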
A Mixed-Method National Study of Public Health Core Competencies in Undergraduate Medical Schools in Thailand to Find out the Need for Transformative Changes
Myo Nyein Aung, Vanich Vanapruks, Pornchai Sithisarankul, Pajaree Yenbutra, Suthee Rattanamongkolgul, Krishna Suvarnabhumi, Pongsak Wannakrairot
Subject: Medicine & Pharmacology, Other Keywords: medical education; public health; medical schools; community; global health; human resource
Background: With new challenges to the health system, many new competencies within the scope of teaching public health, such as disaster risk management and health system science, need to be addressed in medical schools' curricula. The aims of this study were to identify the public health competencies needed by medical doctors in Thailand and to assess the level of integration for technical collaboration in teaching public health. Method: A total of 17 out of 21 Thai medical schools participated in the national survey. Qualitative inquiries used focus group interviews with community representatives from ten sample villages and in-depth interviews with representatives of stakeholder organizations, particularly employers. The public health competencies framework recommended by WHO-SEARO was applied. Quantitative analysis applied descriptive analysis using STATA 15, and qualitative findings were validated by interrelating the meaning of themes from Word Clouds created in NVivo 12. Data integration applied a mixed-method quan-qual approach. Results: 17 medical schools returned the questionnaires (80.95% yield). The most common regionally-defined public health competencies (in over 70% of schools) were: Biostatistics, Community Medicine, Epidemiology, Family Medicine, Medical Ethics and Professional Laws, Preventive Medicine, Health Promotion, Holistic Care, and Research. The curriculum of only one medical school lacked Health Economics, whilst Disaster Management was lacking in two other schools. Discipline-based subjects were found to be more prevalent than interdisciplinary competencies. A variety of methods were applied for teaching public health. The majority of the schools used lectures as the main teaching method and multiple-choice questions as the main assessment method. Thai communities expect doctors to get in touch with the community more often, to lead the primary health care team by training health professionals and community health volunteers, and to educate the community for better health. Conclusion: Human resources are the main challenge in addressing interdisciplinary competencies. It is necessary to establish a collaborating mechanism among the big and small medical schools and the faculties of public health to improve the teaching of public health to undergraduate students in medical schools. There is also a need to strengthen health system science and leadership so that future MDs can lead health service delivery according to the needs of their employers, such as the Ministry of Public Health and the Rural Doctors Association. The findings of this study may help to identify a national framework of public health core competencies for medical schools and create a common platform for interdisciplinary collaborations.
A Review of the Geographical Distribution, Indigenous Benefits and Conservation of African Baobab (Adansonia digitata L.) Tree in Sub-Saharan Africa
John Adekunle Adesina, Jiangang Zhu
Subject: Earth Sciences, Environmental Sciences Keywords: agroforestry activities; anthropogenic global warming; conservation policies; forest management; forest products
Indigenous trees have great economic potential and ecological benefits for enhancing environmental prosperity, mostly in forestry and the forest products sector in the developing countries of Sub-Saharan Africa. The baobab (Adansonia digitata L.), known as the African green jewel for both its fruit production and medicinal benefits, is also remarkable for the many forest products exported across the world. Research conducted in the different Sub-Saharan African sub-regions has shown that this iconic tree with a majestic outlook is a priority tree species for local and foreign use and conservation. However, data on the benefits and conservation of baobab trees in Africa, especially the Sub-Saharan countries, are limited. This study aimed to assess the predominant geographical distribution of the tree, the indigenous (cultural, socio-economic, ecological, and medical/health) benefits, and the conservation strategies of the baobab resources in Sub-Saharan Africa. The baobab tree's succulent roots, bulbs, branches, fruit, pods, foliage, and petals are all nourishing. Baobab parts have been used for diverse reasons in Africa, some countries of Asia, and Europe for the past two centuries due to their medicinal well-being properties. In addition, the medicinal applications of the plant parts are discussed. Many authors have highlighted the baobab tree as one of the most important trees to be saved and localized in Africa because of its high indigenous usage and commercial worth. Anthropogenic global warming may induce a drop in baobab species, which could inflict negative impacts on African economies. As a result, it is critical to research the species' likely future distribution and develop conservation policies. Literature was consulted for records of this tree in Western, Central, Eastern, and Southern Africa, and the percentage of the current environment that would remain appropriate in the future was also analyzed. Recent studies suggested that farmers and locals be provided free seeds and seedlings to encourage biological rejuvenation and maximize the plant's potential; people should also be informed about the additional uses of baobab that have been discovered, and individuals must be educated on simple sustainable agroforestry activities that can be performed in plant and forest management.
Decrease of Aflatoxin M1 Concentration in Milk During Cholesterol Removal by β-Cyclodextrin Application
Peter Šimko, Lukáš Kolarič
Subject: Chemistry, Food Chemistry Keywords: aflatoxin M1; milk; dairy; cholesterol; β-cyclodextrin; food safety; global warming
Approximately one-third of mankind is chronically exposed to the carcinogenic aflatoxin M1 contained in milk and dairy products, and there is no ready-to-use procedure for decontamination applicable in milk technology. Since β-cyclodextrin is frequently used in the food industry, its effect on aflatoxin M1 concentration was investigated during cholesterol removal. Milk samples were spiked with aflatoxin M1 at an average level of 0.89 µg/kg, and cholesterol removal was carried out by addition of 2.0% (w/w) β-cyclodextrin. The average cholesterol concentration decreased by 92.3%, while the aflatoxin M1 concentration decreased to 0.53 µg/kg, i.e., by 39.1%, after the treatment. The procedure itself is easy, inexpensive, and ready to use in milk processing on current production lines without additional investment; it is thus fully applicable, with a high potential for complete aflatoxin M1 decontamination of milk, and could considerably strengthen food safety for milk and dairy products at the global level.
Global Well-Posedness for the Fractional Navier-Stokes-Coriolis Equations in Function Spaces Characterized by Semigroups
Xiaochun Sun, Jia Liu, Jihong Zhang
Subject: Mathematics & Computer Science, Analysis Keywords: Cauchy problem; The generalized Navier-Stokes-Coriolis equation; Global well-posedness
We study the initial value problem for the fractional Navier-Stokes-Coriolis equations, obtained by replacing the Laplacian operator in the Navier-Stokes-Coriolis equations by the more general operator $(-\Delta)^\alpha$ with $\alpha>0$. We introduce function spaces of Besov type characterized by the time evolution semigroup associated with the general linear Stokes-Coriolis operator. We then establish the unique existence of global-in-time mild solutions for small initial data belonging to our function spaces characterized by semigroups, in both the scaling-subcritical and critical settings.
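For orientation, the system being studied has the following standard form (notation assumed: u velocity, p pressure, Ω the rotation speed, e_3 the vertical unit vector; α = 1 recovers the classical Navier-Stokes-Coriolis equations):

```latex
\partial_t u + (-\Delta)^{\alpha} u + \Omega\, e_3 \times u
  + (u \cdot \nabla)u + \nabla p = 0,
\qquad \nabla \cdot u = 0
```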
Dependence Structures Between Sovereign Credit Default Swaps and Global Risk Factors in BRICS Countries
Prayer Rikhotso, Beatrice D. Simo-Kengne
Subject: Social Sciences, Finance Keywords: Global risk factors; Credit Default Swaps; Sovereign credit risk; Copulas approach
This study examined the tail dependency structure of sovereign credit risk and three global risk factors in BRICS countries using a copulas approach, which is known for its ability to provide the "true" tail correlation based on the correct marginal distribution. The empirical results show that global market risk sentiment co-moves with sovereign CDS spreads across BRICS countries under extreme market events, with Brazil having the highest co-dependency, followed by China, Russia, and South Africa. Furthermore, oil price volatility is the second biggest risk factor correlated with sovereign CDS spreads for Brazil and South Africa, while exchange rate risk exhibits very small co-dependence with sovereign CDS spreads under extreme market conditions dominated by tail events. On the contrary, exchange rate risk is the second largest risk factor co-moving with China's and Russia's sovereign CDS spreads, while oil price volatility exhibits the lowest co-dependence with CDS in these countries. Between oil price and currency risk, evidence of single risk factor dominance is found for Russia, where exchange rate risk is largely dominant. These results suggest that BRICS policymakers might consider financial sector regulations that mitigate risk spill-overs, such as targeted capital controls when markets are distressed.
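A model-free way to see what such copula-based estimates capture is the empirical upper-tail dependence coefficient, P(V > v_q | U > u_q) for a high quantile q. A sketch on simulated heavy-tailed series (a stand-in for the CDS spreads and risk factors, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(5)
common = rng.standard_t(4, 5000)                     # shared heavy-tailed driver
cds = common + 0.5 * rng.standard_t(4, 5000)         # simulated CDS spread changes
risk = common + 0.5 * rng.standard_t(4, 5000)        # simulated global risk factor

def upper_tail_dep(u, v, q=0.95):
    """Empirical P(V > v_q | U > u_q) at quantile level q."""
    uq, vq = np.quantile(u, q), np.quantile(v, q)
    return np.mean(v[u > uq] > vq)

print(round(upper_tail_dep(cds, risk), 2))           # 0 = tail-independent, 1 = comonotone
```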
The Impact of Short-Term Cross-Cultural Experience on the Intercultural Competence of Participating Students: A Case Study of Australian High School Students
Wendy Nelson, Johannes M. Luetz
Subject: Social Sciences, Accounting Keywords: Intercultural competence; Cross-cultural experiences; Emotional intelligence; Global citizenship; Immersive pedagogy
Over recent years, globalisation has occasioned a dramatic rise in cross-cultural interactions – until this was disrupted by the COVID-19 pandemic (OECD 2018, Nelson & Luetz 2021). The ability to competently engage in a multicultural world is often considered the "literacy of the future" (UNESCO 2013, OECD 2018). Global interconnectedness has brought studies of intercultural competence to centre stage (UNDP 2004, Bissessar 2018, Nelson et al. 2019). This has increased the demand for cross-cultural education experiences that facilitate such learning. However, there is a dearth of empirical research into the issues and effects surrounding short-term cross-cultural educational experiences for adolescents. This mixed-methods study extends previous research by looking specifically into what impact short-term cross-cultural experiences may have on the formation of intercultural competence and emotional intelligence in Australian high school students. The study used two instruments for measuring intercultural competence and emotional intelligence, the GENE Scale and the TEQ, in a pre- and post-test quasi-experimental design (n=14). Moreover, it conducted in-depth post-experience qualitative interviews (n=7) that broadly followed a phenomenological paradigm of inquiry. The findings suggest that fully embodied cross-cultural immersive experiences offer benefits in areas of intercultural competence and emotional intelligence and can offer meaningful application in areas of current affairs. A greater understanding of the linkages between immersive cross-cultural experiences and intercultural competence offers prospects for policy makers, educators, pastoral carers, and other relevant stakeholders who might employ such experiential learning to foster more interculturally and interracially harmonious human relations.
Impact of Anthropogenic Heat Emissions on Global Atmospheric Temperature
Dimitre Karamanev
Subject: Earth Sciences, Atmospheric Science Keywords: Anthropogenic heat emissions; global energy use; atmospheric temperature; carbon dioxide emissions.
The use of different primary energy sources in human society has led to two major polluting emissions in the environment: energy (mostly heat), and chemical substances (mostly carbon dioxide). In this paper, the total global anthropogenic emissions of heat to the atmosphere during the industrial era (years 1850-2018) were determined and their effect on the change of global atmospheric temperature was calculated. The concept of a theoretical three-phase Earth reactor was introduced to estimate global atmospheric temperature increase caused by anthropogenic heat emissions. The resulting calculations closely approximated the actual atmospheric temperature change recorded during the last 170-year period. These results suggest that the temperature change of the atmosphere (global warming) is entirely due to anthropogenic heat emissions.
School Inquiry in Secondary Education: The Experience of the Fiesta de la Historia Youth Congress in Seville
Nicolás De-Alba-Fernández, Elisa Navarro-Medina, Noelia Pérez-Rodríguez
Subject: Social Sciences, Education Studies Keywords: history education; social studies; scholar research; relevant social problems; global citizenship.
Online: 29 March 2021 (14:40:47 CEST)
In Secondary Education, the focus of History teaching must be on the development of global citizenship. The present research is a study contextualized in the Fiesta de la Historia Youth Congress in Seville (Spain). A documentary analysis with a descriptive and interpretive design was carried out on 63 inquiry projects produced by pupils. The main objectives were to assess the incidence of the proposal in terms of participation, and to determine whether the pupils' projects followed a logic of inquiry into socially relevant problems that favours the construction of global citizenship. The results point to a low incidence of schools participating in this initiative. The inquiry projects analysed mostly address themes related to the historical and social heritage of the locality. The proposals are approached as problems of a specific discipline, and are worked on through a method based on a pseudoscientific research process. The findings indicate the need to continue implementing initiatives based on school inquiry that allow the teaching of History to be articulated around relevant social problems, with the objective of developing citizenship skills.
Atmospheric Temperature and CO2: Hen-or-Egg Causality?
Demetris Koutsoyiannis, Zbigniew Kundzewicz
Subject: Keywords: temperature; global warming; greenhouse gases; atmospheric CO2 concentration
It is common knowledge that increasing CO2 concentration plays a major role in the enhancement of the greenhouse effect and contributes to global warming. The purpose of this study is to complement the conventional and established theory, that increased CO2 concentration due to human emissions causes an increase of temperature, by considering the reverse causality. Since increased temperature causes an increase in CO2 concentration, the relationship of atmospheric CO2 and temperature may qualify as belonging to the category of "hen-or-egg" problems, where it is not always clear which of two interrelated events is the cause and which the effect. We examine the relationship of global temperature and atmospheric carbon dioxide concentration at the monthly time step, covering the time interval 1980–2019, in which reliable instrumental measurements are available. While both causality directions exist, the results of our study support the hypothesis that the dominant direction is T → CO2. Changes in CO2 follow changes in T by about six months on a monthly scale, or about one year on an annual scale. We attempt to interpret this mechanism through biochemical reactions, since at higher temperatures soil respiration, and hence CO2 emission, increases.
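The kind of lead-lag evidence described above can be probed with a lagged cross-correlation of the differenced series. A sketch on synthetic monthly data in which CO2 is constructed to lag T by six months (the real analysis uses instrumental records, not this toy):

```python
import numpy as np

rng = np.random.default_rng(6)
T = np.cumsum(rng.normal(0, 0.1, 480))                    # 40 years of monthly T anomaly
co2 = np.concatenate([np.zeros(6), T[:-6]]) + rng.normal(0, 0.2, 480)  # lags T by 6 months

dT, dC = np.diff(T), np.diff(co2)                         # work on differences
for k in range(0, 13, 3):
    r = np.corrcoef(dT[:len(dT) - k], dC[k:])[0, 1]       # corr of dT_t with dC_{t+k}
    print(f"lag {k:2d} months: r = {r:+.2f}")             # peaks near k = 6 by construction
```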
On the Global Asymptotic Stability of a Two-Dimensional System of Difference Equations with Quadratic Terms
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: difference equations; dynamical systems; global stability; rate of convergence; boundedness; oscillation
In this paper, we study the global asymptotic stability of the following system of difference equations with quadratic terms: x_{n+1}=A+B((y_{n})/(y_{n-1}²)), y_{n+1}=A+B((x_{n})/(x_{n-1}²)), where A and B are positive numbers and the initial values are positive numbers. We also investigate the rate of convergence and oscillation behaviour of the solutions of the related system.
Examining the Change of Human Mobility Adherent to Social Restriction Policies and its Effect on COVID-19 Cases in Australia
Siqin Wang, Yan Liu, Tao Hu
Subject: Social Sciences, Geography Keywords: human mobility; COVID-19 spread; global pandemic; social restriction policy; Australia
Policy-induced decline of human mobility has been recognised as effective in controlling the COVID-19 spread, especially in the initial stage of the outbreak, although the relationship among mobility, policy implementation, and virus spread remains contentious. Coupling data on confirmed COVID-19 cases with Google mobility data in Australia, we present a state-level empirical study to: 1) inspect the temporal variation of COVID-19 spread and the change of mobility adherent to social restriction policies; 2) examine the extent to which different types of mobility are associated with COVID-19 spread in eight Australian states/territories; and 3) analyse the time-lag effect of mobility restriction on COVID-19 spread. We find that social restriction policies implemented in the early stage of the pandemic controlled the COVID-19 spread effectively; the restriction of human mobility has a time-lag effect on growth rates, and the strength of the mobility-spread correlation increases for up to seven days after policy implementation but decreases afterwards. The association between mobility and COVID-19 spread varies across space and time and is subject to the type of mobility. Thus, it is important for governments to consider the degree to which lockdown conditions can be eased by accounting for this dynamic mobility-spread relationship.
Towards Achieving Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Jeffrey Amelse
Subject: Earth Sciences, Atmospheric Science Keywords: carbon dioxide; global warming; sequestration; carbon cycle; biomass sequestration; carbon sequestration; CO2
Many corporations aspire to become net zero carbon dioxide by 2030-2050. This paper examines what that will take. It requires understanding where energy is produced and consumed, the magnitude of CO2 generation, and the Carbon Cycle. Reviews are provided of prior technologies for reducing CO2 emissions from fossil fuels, to focus on their limitations and to show that none offers a complete solution. Both biofuels and CO2 sequestration reduce future CO2 emissions from fossil fuels; they will not remove CO2 already in the atmosphere. Planting trees has been proposed as one solution, but trees are a temporary solution: when they die, they decompose and release their carbon as CO2 to the atmosphere. The only way to permanently remove CO2 already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO2 and sequestering biomass carbon. Permanent sequestration of leaves is proposed as a solution. Leaves have a short Carbon Cycle time constant; they renew and decompose every year. Theoretically, sequestering a fraction of the world's tree leaves could get the world to net zero without disturbing the underlying forests. This would be CO2 capture in its simplest and most natural form. Permanent sequestration may be achieved by redesigning landfills to discourage decomposition. In traditional landfills, waste undergoes several stages of decomposition, including rapid initial aerobic decomposition to CO2, followed by slow anaerobic decomposition to methane and CO2; the latter can take hundreds to thousands of years. Understanding landfill chemistry provides clues to disrupting decomposition at each phase.
Warming and Eutrophication Effects on Phytoplankton Community of Two Tropical Systems with Different Trophic States—An Experimental Approach
Andreia Maria Da Anunciação Gomes, Marcelo Manzi Marinho, Marcella Coelho Berjante Mesquita, Ana Carolina Coelho Prestes, Miquel Lürling, Sandra Maria Feliciano De Oliveira E Azevedo
Subject: Biology, Ecology Keywords: global warming; nutrients addition; cyanobacterial blooms; eutrophic systems; oligo-mesotrophic systems
Global warming, as well as eutrophication, is predicted to promote cyanobacterial blooms, but how tropical phytoplankton communities from systems of different trophic states respond to temperature variation is less well known. To further explore the effect of temperature changes and nutrient addition on phytoplankton communities, and to gain insight into possible resistance to these effects, we tested the hypothesis that temperature variation will have a stronger effect on cyanobacteria dominance in eutrophic water than in oligo-mesotrophic water. To this end, we conducted an experiment with phytoplankton communities from two aquatic ecosystems differing in trophic state. Water samples from a eutrophic and an oligo-mesotrophic system were collected and incubated at 25 and 30 °C. Treatments receiving surplus N and P additions were also included, serving as eutrophication treatments. Temperature variation itself did not promote cyanobacteria in water from either the oligo-mesotrophic or the eutrophic system. However, nutrient enrichment of water from the eutrophic system significantly boosted cyanobacteria, and biomass increased 10 times in both the 25 °C and 30 °C treatments. In contrast, eutrophication of water from the oligo-mesotrophic system did not change the relative contribution of phytoplankton groups, and response ratios were much lower than those for water from the eutrophic system. Although a very simple experimental design was used, the results suggest that in eutrophic systems cyanobacteria dominance can be favoured by further addition of nutrients, independently of a direct temperature effect, and that more pristine environments possess some resistance against eutrophication. Since global warming is assumed to intensify eutrophication symptoms indirectly, our study underscores the importance of nutrient control.
Geodiversity of Las Loras UNESCO Global Geopark: Hydrogeological significance of groundwater and landscape interaction. Conceptual model of functioning
África de la Hera-Portillo, Julio López-Gutiérrez, Luis Moreno-Merino, Miguel Llorente-Isidro, Rod Fensham, Mario Fernández, Marwan Ghanem, Karmah Salman, Jose Angel Sánchez-Fabián, Nicolás Gallego-Rojas, Mª del Mar Corral, Elena Galindo, Manuela Chamizo, Nour-Eddine Laftouhi
Subject: Earth Sciences, Geology Keywords: Geodiversity; geosites; springs; Las Loras UNESCO Global Geopark (UGGp); hydrogeology; Ubierna Fault
Las Loras UNESCO Global Geopark (UGGp) is geologically diverse, particularly in relation to water-derived features: springs, karst springs, travertine deposits, waterfalls, and caves. In this work, the interactions between geology, geomorphology, structures and hydrogeology are analyzed. These four components are the fundamentals of geodiversity, and their interactions provide a conceptual model of hydrogeological functioning at Las Loras UGGp. The most plausible hypothesis is that the system is formed by two superimposed aquifer systems, separated by an aquitard formed by Lower Cretaceous material. The deep lower aquifer, formed by the Jurassic limestones, only outcrops on the northern and southern edges of the Geopark and in a small arched band to the south of Aguilar de Campoo. It forms a basement subject to intense deformation. The upper aquifer system, formed by outcropping materials from the Upper Cretaceous, is an unconfined aquifer. It is a multilayered aquifer system that is highly compartmentalized by the very disturbed geomorphology of the landscape, with each moorland and each lora constituting an individualized recharge-discharge system. This model explains the base level of the rivers, the abundant number of existing springs and the permanent nature of some rivers, providing the keys to understanding the basis of the geoconservation of a rich geological heritage linked to the active processes of the water cycle. | CommonCrawl |
January 2011, 10(1): 209-223. doi: 10.3934/cpaa.2011.10.209
Asymptotic behavior of solutions to a model system of a radiating gas
Yongqin Liu 1, and Shuichi Kawashima 1,
Faculty of Mathematics, Kyushu University, Fukuoka 819-0395, Japan
Received January 2010 Revised April 2010 Published November 2010
In this paper we focus on the initial value problem for a hyperbolic-elliptic coupled system of a radiating gas in multi-dimensional space. By using a time-weighted energy method, we obtain the global existence and optimal decay estimates of solutions. Moreover, we show that the solution is asymptotic to the linear diffusion wave which is given in terms of the heat kernel.
Keywords: Radiating gas, initial value problem, asymptotic behavior.
Mathematics Subject Classification: 35B40, 35M20.
Citation: Yongqin Liu, Shuichi Kawashima. Asymptotic behavior of solutions to a model system of a radiating gas. Communications on Pure & Applied Analysis, 2011, 10 (1) : 209-223. doi: 10.3934/cpaa.2011.10.209
| CommonCrawl |
SpaceX Demo-2
Kinematic Study of the Falcon 9 🚀
👥 Rodrigo Alcaraz de la Osa
📆 2022-01-12 ⏱️ 16 min read 📁 Physics 🏷️ blog, motion, gravitation
Credit: NASA/Bill Ingalls
After having to be postponed due to bad weather1, last Saturday, May 30, at 21:22 peninsular time, SpaceX's Falcon 9 rocket launched the second demonstration mission (Demo-2) of Crew Dragon from Launch Complex 39A (LC-39A) at NASA's John F. Kennedy Space Center in Florida.
Approximately 19 hours later, the Crew Dragon spacecraft docked autonomously with the International Space Station, with astronauts Bob Behnken and Doug Hurley on board, which meant that the United States put humans into space again for the first time since 20112.
Demo-2 is the last major test needed for SpaceX's crewed spaceflight system to be certified by NASA for crewed missions to and from the International Space Station.
In this almost 5-hour video published by SpaceX, you can find many more details about the mission (if you only want to see the launch, jump to 4:22:46):
If you wish to read more about this historic mission you can do so at NASA's official website.
The post could have ended with the previous paragraph, but then you wouldn't know whether you were reading Hello! magazine or PhysiChemically's blog 😏.
If you look at the video of the launch, in the bottom left corner you can see the magnitude of the velocity (speed from here on), in km/h, and the altitude, in km, of the rocket in real time as it ascends to approximately 200$\thinspace$km. What did I think when I saw that data? Well, to write the values down3, plot them and carry out a small empirical study of the kinematics of the Falcon 9.
The following plot shows the altitude of the Falcon 9, in km, as a function of the elapsed time, in minutes 4:
The altitude rises rapidly during roughly the first two and a half minutes (until minute 2.6), surpassing 75$\thinspace$km of altitude, when the nine Merlin engines of the Falcon 9 shut down, an instant known as MECO (Main Engine Cutoff) 5.
From that moment the altitude continues to increase reaching 200$\thinspace$km after approximately 5 minutes of flight and remaining constant.
SECO means Second-Stage Engine Cutoff and marks the moment when the Merlin Vacuum engine, the only one driving the second stage of the rocket (to which the Dragon spacecraft itself, carrying the astronauts, is attached), shuts down, which doesn't seem to affect the altitude of the Dragon very much.
The following plot shows the speed of the Falcon 9, in km/h, as a function of the elapsed time, in minutes 6:
The speed increases in a non-linear way, reaching 6724$\thinspace$km/h, more than 5 times the speed of sound in air 7, at MECO. It can be seen that at that moment the speed even decreases, until the Merlin Vacuum engine of the second stage ignites and accelerates the Dragon with a trend similar to the one it had during the first stage.
It's nice to see how at SECO the Dragon stops accelerating, because it no longer has any engine driving it, keeping its speed constant from then on (describing a uniform circular motion).
Orbital Speed
The maximum value of the speed is approximately 27000$\thinspace$km/h. Can we understand this value? Indeed, as of approximately minute 9, the Dragon ship is in an orbit at an altitude of about 200$\thinspace$km. Assuming a circular orbit, the orbital speed is given by the expression:
$$ v_\text{orbital} = \sqrt{\frac{GM_\mathrm T}{r}}, $$ where $G = 6.67\times 10^{-11}\thinspace\mathrm{m^3\thinspace kg^{-1}\thinspace s^{-2}}$, $M_\mathrm T = 5.97\times 10^{24}\thinspace\mathrm{kg}$ is the mass of the Earth and $r = R_\mathrm T + h$ is the distance from the ship to the center of the Earth, with $R_\mathrm T = 6371\thinspace\mathrm{km}$. For an altitude $h = 200\thinspace$km, we have:
\begin{align*} v_\text{orbital} = \sqrt{\frac{GM_\mathrm T}{r}} &= \sqrt{\frac{6.67\times 10^{-11}\cdot 5.97\times 10^{24}}{(6371+200)\times 10^3}} \\ &= 7784.6\thinspace\mathrm{m/s} \approx 28000\thinspace\mathrm{km/h} \end{align*}
which is a relative error of about 3.7$\thinspace$%.
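For readers who want to reproduce the number, here is a minimal Java sketch (an illustrative re-implementation, not the code used to prepare this post) that evaluates the circular-orbit formula and the relative error with respect to the roughly 27000$\thinspace$km/h read from the webcast:

```java
public class OrbitalSpeed {
    public static void main(String[] args) {
        double G = 6.67e-11;   // gravitational constant, m^3 kg^-1 s^-2
        double M = 5.97e24;    // mass of the Earth, kg
        double R = 6371e3;     // radius of the Earth, m
        double h = 200e3;      // orbital altitude, m

        double v = Math.sqrt(G * M / (R + h));  // orbital speed, m/s
        double vKmh = v * 3.6;                  // m/s -> km/h

        double observed = 27000.0;              // approximate value read from the video, km/h
        System.out.printf("v = %.1f m/s (%.0f km/h)%n", v, vKmh);
        System.out.printf("relative error: %.1f %%%n",
                100 * Math.abs(vKmh - observed) / vKmh);
    }
}
```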
From the speed values it is possible to obtain the tangential acceleration of the rocket by means of a numerical derivation8.
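The idea behind the numerical derivation is simply a finite-difference quotient between consecutive samples. A minimal Java sketch of the procedure (the actual processing used MATLAB's diff function, as noted in the footnotes; the sample values here are made up for illustration):

```java
public class FiniteDifference {
    public static void main(String[] args) {
        // Sample times in minutes (every 10 s) and hypothetical speeds in km/h.
        double[] tMin = {0.0, 1.0 / 6.0, 2.0 / 6.0};
        double[] vKmh = {0.0, 300.0, 700.0};

        // Forward-difference estimate of the tangential acceleration.
        for (int i = 0; i < vKmh.length - 1; i++) {
            double dv = (vKmh[i + 1] - vKmh[i]) / 3.6;    // km/h -> m/s
            double dt = (tMin[i + 1] - tMin[i]) * 60.0;   // min -> s
            System.out.printf("a[%d] = %.1f m/s^2%n", i, dv / dt);
        }
    }
}
```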
The following plot shows the acceleration of the Falcon 9, in m/s2, as a function of the elapsed time, in minutes:
It is clear that the acceleration is not constant, increasing until MECO, when it even takes negative values (remember that the speed decreases). Then it increases again to values above 30$\thinspace$m/s2 (more than three times the acceleration of gravity on the surface of the Earth), until SECO, when the tangential acceleration vanishes because there is no longer any engine powering the ship.
What if we assume that the acceleration is constant?
If the acceleration of the rocket were constant, then its ascent could be modelled by a uniformly-varied linear motion (UVLM). Looking at the previous plot it seems crazy to think that it could be like that, but it's worth trying as a mental exercise.
The following plot shows again the empirical acceleration of the rocket, obtained by numerical derivation from its speed, and the constant acceleration that it would have assuming a UVLM, obtained as the arithmetic mean9:
The resulting average value of the acceleration before SECO is 14.1$\thinspace$m/s2, $\approx 1.4$ times higher (in magnitude) than the acceleration of gravity on the Earth's surface (9.8$\thinspace$m/s2). This can be interpreted as the astronauts having spent, on average, almost 9 minutes experiencing something worse than a free fall, and upwards at that 🙃 10.
Once we have our constant acceleration value, we can compare the empirical altitude and speed with those obtained from the UVLM expressions (taking into account that after SECO the acceleration is zero and therefore the ship will move with a uniform linear motion, or ULM).
After four minutes, the rocket is maintained at a practically constant altitude, so the UVLM or ULM expressions are not valid, as they imply an indefinite increase.
Up to approximately 4 minutes, the theoretical altitude is calculated from the expression:
$$ h(t) = h_0 + v_0 t +\frac{1}{2} a t^2, $$ where $h_0 = 0$, $v_0 = 0$ and $a = 14.1\thinspace$m/s2.
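As a sketch, the theoretical curve can be tabulated by evaluating the UVLM expression directly (illustrative Java code, not the original plotting script):

```java
public class UvlmAltitude {
    public static void main(String[] args) {
        // Theoretical UVLM altitude h(t) = h0 + v0*t + a*t^2/2, valid up to ~4 min.
        double h0 = 0.0, v0 = 0.0, a = 14.1;   // SI units; a in m/s^2
        for (double tMin = 0.0; tMin <= 4.0; tMin += 0.5) {
            double t = tMin * 60.0;            // min -> s
            double hKm = (h0 + v0 * t + 0.5 * a * t * t) / 1000.0;
            System.out.printf("t = %.1f min -> h = %.1f km%n", tMin, hKm);
        }
    }
}
```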
The following plot shows both the empirical altitude and the one calculated assuming a UVLM, during the first four minutes of the Falcon 9's ascent:
The theoretical expression is only able to model the movement of the rocket during the first instants of time (already in the first minute of the ascent the theoretical expression has a relative error of almost 130$\thinspace$%).
The theoretical speed is calculated from the expression 11:
$$ v(t) = \begin{cases} v_0 + a t & \text{before SECO (UVLM)} \\ 26734.6 & \text{after SECO (ULM)} \end{cases} $$ where $v_0 = 0$ and $a = 14.1\thinspace$m/s2.
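The same evaluation works for the theoretical speed; a minimal sketch under the same assumptions (the SECO time of roughly 8.8 min is derived from the values above, not taken from the telemetry):

```java
public class UvlmSpeed {
    public static void main(String[] args) {
        // Theoretical speed: UVLM before SECO, constant (ULM) afterwards.
        double a = 14.1;                     // average acceleration before SECO, m/s^2
        double vSeco = 26734.6;              // speed at SECO per the UVLM expression, km/h
        double tSeco = vSeco / 3.6 / a;      // time at which that speed is reached, s (~527 s)
        for (double tMin = 0.0; tMin <= 12.0; tMin += 1.0) {
            double t = tMin * 60.0;
            double vKmh = (t < tSeco) ? a * t * 3.6 : vSeco;   // UVLM, then ULM
            System.out.printf("t = %.0f min -> v = %.0f km/h%n", tMin, vKmh);
        }
    }
}
```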
The following plot shows both the empirical speed and the one calculated assuming a UVLM and subsequent ULM:
It is observed that the theoretical expression overestimates the speed of the ship before SECO (with a maximum relative error of more than 300$\thinspace$%, for $t = 0.2\overline{6}\thinspace$min), and underestimates it slightly afterwards.
Still, it seems that the theoretical expression does not deviate that much from the empirical values, indicating that, at least for estimating speed, it does not seem so far-fetched to model the rocket's ascent by a UVLM (and a subsequent ULM after SECO).
The launch was originally scheduled for Wednesday, May 27, but had to be cancelled with only 17 minutes left because of Tropical Storm Bertha. ↩︎
On July 8, 2011, the 135th and final mission of NASA's Space Shuttle Program took place. ↩︎
I'd love to tell you that I used a fully automated algorithm with optical character recognition (OCR) to read the values in the video, like other people geekier and more capable than me have managed to do. But no, I'm afraid all I did was play the video in 10-second jumps, manually entering the speed and altitude values 🤷♂️. ↩︎
To put this data into perspective, it takes a commercial plane about 10 minutes to reach its cruising altitude, which is about 10$\thinspace$km. In other words, in half the time, the Falcon 9 is capable of reaching an altitude about 20 times higher than the cruising altitude of a commercial plane. ↩︎
One of the distinguishing features of SpaceX's Falcon 9 is that the first stage of the rocket, once separated, is capable of returning to Earth and landing on its own, as shown in this gif 😲:
↩︎
Again to put this data into perspective, it takes a commercial plane about 10 minutes to reach its cruising speed, which is about 900$\thinspace$km/h. In other words, in the same time, the Falcon 9 is capable of reaching a speed about 30 times higher than the cruising speed of a commercial plane. ↩︎
At 20$\thinspace^\circ$C temperature, 50$\thinspace$% humidity and sea level (https://en.wikipedia.org/wiki/Speed_of_sound). ↩︎
Specifically, the acceleration has been obtained using the diff function of MATLAB®. ↩︎
Actually, two different averages have been taken, before and after the SECO, due to the importance and influence that moment has on the ship's movement. ↩︎
Actually it would have been much worse than this 🤦♂️, but as a matter of fact, a skydiver usually reaches terminal velocity (around 180$\thinspace$km/h) in only 12 seconds, after which they stop experiencing the feeling of falling. ↩︎
The value of 26734.6$\thinspace$km/h is the speed the ship has, according to the theoretical expression of the UVLM, right in the SECO. ↩︎
📁 Physics 🏷️ blog motion gravitation
PhD in Physics and Physics and Chemistry Teacher
I have a PhD in Physics and I teach Physics and Chemistry at IES Peñacastillo in Cantabria (Spain).
| CommonCrawl |
Journal of Software Engineering Research and Development
An algorithm for combinatorial interaction testing: definitions and rigorous evaluations
Juliana M. Balera (ORCID: orcid.org/0000-0001-6481-5362)1 &
Valdivino A. de Santiago Júnior1
Journal of Software Engineering Research and Development volume 5, Article number: 10 (2017)
Combinatorial Interaction Testing (CIT) approaches have drawn the attention of the software testing community because they generate sets of smaller, efficient, and effective test cases and have been successful in detecting faults due to the interaction of several input parameters. Recent empirical studies show that greedy algorithms are still competitive for CIT. It is thus interesting to investigate new approaches to address CIT test case generation via greedy solutions and to perform rigorous evaluations within the greedy context.
We present a new greedy algorithm for unconstrained CIT, T-Tuple Reallocation (TTR), to generate CIT test suites specifically via the Mixed-value Covering Array (MCA) technique. The main reasoning behind TTR is to generate an MCA M by creating and reallocating t-tuples into this matrix M, considering a variable called goal (ζ). We performed two controlled experiments addressing cost-efficiency and only cost. Considering both experiments, we carried out 3200 executions related to 8 solutions. In the first controlled experiment, we compared versions 1.1 and 1.2 of TTR in order to check whether there is a significant difference between both versions of our algorithm. In that experiment, we jointly considered cost (size of test suites) and efficiency (time to generate the test suites) from a multi-objective perspective. In the second controlled experiment we confronted TTR 1.2 with five other greedy algorithms/tools for unconstrained CIT: IPOG-F, jenny, IPO-TConfig, PICT, and ACTS. We performed two different evaluations within this second experiment: in the first one we addressed cost-efficiency (multi-objective) and in the second only cost (single objective).
Results of the first controlled experiment indicate that TTR 1.2 is more adequate than TTR 1.1, especially for higher strengths (5, 6). In the second controlled experiment, TTR 1.2 also presents better performance for higher strengths (5, 6), where only in one case it is not superior (in the comparison with IPOG-F). We can explain this better performance of TTR 1.2 by the fact that it no longer generates, at the beginning, the matrix of t-tuples; rather, the algorithm works on a t-tuple by t-tuple creation and reallocation into M.
Considering the metrics we defined in this work and based on both controlled experiments, TTR 1.2 is a better option if we need to consider higher strengths (5, 6). For lower strengths, other solutions, like IPOG-F, may be better alternatives.
The academic community has been making efforts to reduce the cost of the software testing process by decreasing the size of test suites while at the same time aiming at maintaining the effectiveness (ability to detect defects) of such sets of test cases. Hence, several contributions exist for test suite/case minimization (Yoo and Harman 2012; Ahmed 2016; Huang et al. 2016; Khan et al. 2016) where the goal is to decrease the size of a test suite by eliminating redundant test cases, and hence demanding less effort to execute the test cases (Yoo and Harman 2012). One of the approaches to reduce the number of test cases is Combinatorial Interaction Testing (CIT) (Petke et al. 2015), also known as Combinatorial Testing (CT) (Kuhn et al. 2013; Schroeder and Korel 2000), Combinatorial Test Design (CTD) (Tzoref-Brill et al. 2016), or Combinatorial Designs (CD) (Mathur 2008). CIT relates to combinatorial analysis whose objective is to answer whether it is possible to organize elements of a finite set into subsets so that certain balance or symmetry properties are satisfied (Stinson 2004).
There are reports which claim the success of CIT (Dalal et al. 1999; Tai and Lei 2002; Kuhn et al. 2004; Yilmaz et al. 2014; Qu et al. 2007; Petke et al. 2015). Such approaches have drawn the attention of the software testing community to generate sets of smaller (lower cost to run) and effective (greater ability to find faults in the software) test cases, where they have been successful in detecting faults due to the interaction of several input parameters (factors).
CIT approaches to generate test cases can be divided into four main classes: Binary Decision Diagrams (BDDs) (Segall et al. 2011), Satisfiability (SAT) solving (Cohen et al. 1997; Yamada et al. 2015; Yamada et al. 2016), meta-heuristics (Garvin et al. 2011; Shiba et al. 2004; Hernandez et al. 2010), and greedy algorithms (Lei and Tai 1998; Lei et al. 2007). Recent CIT test case generation methods based on BDD and SAT are interesting for constrained problems (where there are restrictions related to parameter interactions), but they perform worse than greedy algorithms/tools in the context of unconstrained problems (where there are no restrictions at all).
To corroborate this claim, in (Segall et al. 2011) a BDD-based approach, implemented in the Focus tool, was better in terms of cost than the greedy solutions Advanced Combinatorial Testing System (ACTS) (Yu et al. 2013), Pairwise Independent Combinatorial Testing (PICT) (Czerwonka 2006), and jenny (Jenkins 2016) in the constrained domain. However, their method was worse than such greedy solutions for unconstrained problems.
A recent SAT-based approach (Yamada et al. 2016), implemented in the Calot tool, performed well in terms of efficiency (time to generate the test suites) and cost (test suite sizes), again comparing with the greedy tools ACTS (Yu et al. 2013) and PICT (Czerwonka 2006). Despite the advantages of the SAT-based approach, ACTS was much faster than Calot for many 3-way test case examples. Moreover, if unconstrained CIT is considered, ACTS was again remarkably faster than Calot for large SUT models and higher-strength test case generation.
In the context of CIT, meta-heuristics such as simulated annealing (Garvin et al. 2011), genetic algorithms (Shiba et al. 2004), and the Tabu Search Approach (TSA) (Hernandez et al. 2010) have been used. Recent empirical studies show that meta-heuristic and greedy algorithms have similar performance (Petke et al. 2015). Hence, early fault detection via a greedy algorithm with constraint handling (implemented in the ACTS tool (Yu et al. 2013)) was no worse than a simulated annealing algorithm (implemented in the CASA tool (Garvin et al. 2011)). Moreover, there was not enough difference between test suites generated by ACTS and CASA in terms of efficiency (runtime) and t-way coverage. All such previous remarks, some of them based on strong empirical evidence, emphasize that greedy algorithms are still very competitive for CIT.
Even if some authors have argued that CIT resides in the constrained domain in real-world applications (Bryce and Colbourn 2006; Cohen et al. 2008; Petke et al. 2015), it is important to mention that unconstrained CIT may be interesting from a practical point of view, especially for critical applications such as satellites, rockets, airplanes, controllers of an unmanned train metro system, etc. For such types of applications, robustness testing is very important. In the context of software systems, robustness testing aims to verify whether the Software Under Test (SUT) behaves correctly in the presence of invalid inputs. Therefore, even though an unconstrained CIT-derived test case may seem pointless or even somewhat difficult to execute, it may still be interesting to see how the software will behave in the presence of inconsistent inputs.
Let us consider that we need to test a communication protocol implemented in several critical embedded systems. If each field of such a protocol is a parameter, it is interesting to impose no restriction (no constraint) on the parameter interactions, so that a certain Protocol Data Unit (PDU) sent from system A to system B may have values not allowed in the combination of the fields (parameters) of the PDU. In other words, if the specification says that when field f_i = 1, possible values of field f_j are between 20 and 70 (20 ≤ f_j ≤ 70), and another field f_k < 5, then a test case where f_i = 1, 1 ≤ f_j ≤ 4, and f_k < 5 is clearly inconsistent because of the value of f_j. But this can be precisely the goal of the test designer, because he/she wants to check how the receiving system (B) will act upon receiving such a PDU from A. This is an example where unconstrained CIT is relevant. It is important to mention that the argument is not that constraints cannot be used for testing critical systems but rather that, for certain types of tests (robustness), constraints are not as relevant.
Based on the context and motivation previously presented, this research relates to greedy algorithms for unconstrained CIT. In (Pairwise 2017), 43 algorithms/tools are presented for CIT and many more not shown there exist. Some of these solutions are variations of the In-Parameter-Order (IPO) algorithm (Lei and Tai 1998) such as IPOG, IPOG-D (Lei et al. 2007), IPOG-F, IPOG-F2 (Forbes et al. 2008), IPOG-C (Yu et al. 2013), IPO-TConfig (Williams 2000), ACTS (where IPOG, IPOG-D, IPOG-F, IPOG-F2 are implemented) (Yu et al. 2013), and CitLab (Cavalgna et al. 2013). All IPO-based proposals have in common the fact that they perform horizontal and vertical growths to construct the final test suite. Moreover, some need two auxiliary matrices which may decrease its performance by demanding more computer memory. Such algorithms accomplish exhaustive comparisons within each horizontal extension which may penalize efficiency.
PICT can be regarded as a baseline tool on which other approaches have been based (PictMaster 2017). The algorithm implemented in this tool works in two phases, the first being the construction of all t-tuples to be covered. This can often be an unattractive solution, since many t-tuples may require large disk space for storage.
Thus, it is interesting to think about a new greedy solution for CIT that does not need, at the beginning, to enumerate all t-tuples (as PICT does) and does not demand many auxiliary matrices to operate (as some IPO-based approaches do). Although we have some recent rigorous empirical evaluations comparing greedy algorithms with meta-heuristic solutions (Petke et al. 2015) and greedy approaches against SAT-based methods (Yamada et al. 2016), there are no rigorous empirical assessments comparing greedy algorithms/tools, representative of the unconstrained CIT domain, with each other.
In this paper, we present a new algorithm, called T-Tuple Reallocation (TTR), to generate CIT test suites specifically via the Mixed-value Covering Array (MCA) technique. The main reasoning behind TTR is to generate an MCA M by creating and reallocating t-tuples into this matrix M, considering a variable called goal (ζ). TTR is a greedy algorithm for unconstrained CIT.
Three versions of the TTR algorithm were developed and implemented in Java. Version 1.0 is the original version of TTR (Balera and Santiago Júnior 2015). In version 1.1 (Balera and Santiago Júnior 2016), we made a change where we do not order the input parameters. In the last version, 1.2, the algorithm no longer generates the matrix of t-tuples (Θ) but rather it works on a t-tuple by t-tuple creation and reallocation into M. Moreover, version 1.2 was also implemented in C.
We performed two controlled experiments addressing cost-efficiency and only cost. Considering both experiments, we performed 3,200 executions related to 8 solutions. In the first controlled experiment, our goal was to compare versions 1.1 and 1.2 of TTR (in Java) in order to check whether there is significant difference between both versions of our algorithm. In such experiment, we jointly considered cost (size of test suites) and efficiency (time to generate the test suites) in a multi-objective perspective. We conclude that TTR 1.2 is more adequate than TTR 1.1 especially for higher strengths (5 and 6).
We then carried out a second controlled experiment where we confronted TTR 1.2 with five other greedy algorithms/tools for unconstrained CIT: IPOG-F (Forbes et al. 2008), jenny (Jenkins 2016), IPO-TConfig (Williams 2000), PICT (Czerwonka 2006), and ACTS (Yu et al. 2013). We performed two evaluations where in the first one we compared TTR 1.2 with IPOG-F and jenny since these were the solutions we had the source code (to precisely measure the time). Hence, a cost-efficiency (multi-objective) assessment was accomplished. In order to address a possible evaluation bias in the time measures due to different programming languages, we compared the implementation of TTR 1.2 (in Java) with IPOG-F (in Java), and the implementation of TTR 1.2 (in C) with jenny (in C). In the second assessment, we did a cost (single objective) evaluation where TTR 1.2 (Java) was compared with PICT, IPO-TConfig, and ACTS. The conclusion is the same as before: TTR 1.2 is better for higher strengths (5 and 6).
In this paper, we extend our previous works where we presented version 1.0 of TTR (Balera and Santiago Júnior 2015), and version 1.1 together with another controlled experiment (Balera and Santiago Júnior 2016). The contributions of this work are:
Even though we considered version 1.1 of TTR in (Balera and Santiago Júnior 2016), we did not detail this version, since the focus of that previous paper was this other controlled experiment. Thus, we highlight the key features of TTR 1.1 here;
We created another version of our algorithm, 1.2, where, at the beginning, TTR does not generate the matrix of t-tuples. Our goal here is trying to avoid an exhaustive combination of t-tuples as might happen with other classical greedy approaches. Moreover, we rely on just one auxiliary matrix which is different from other greedy solutions which require two auxiliary matrices;
We performed two controlled experiments in the unconstrained CIT domain (TTR 1.1 × TTR 1.2; TTR 1.2 × IPOG-F, jenny, IPO-TConfig, PICT, ACTS) with almost three times more participants, in each experiment, than in the previous one (Balera and Santiago Júnior 2016). In addition, we ran each participant (instance) 5 times with different input orders of parameters and values to address the nondeterminism of the solutions. To the best of our knowledge, no previous research presented rigorous empirical evaluations for greedy solutions within the unconstrained CIT domain;
We really accomplished a multi-objective (cost-efficiency) evaluation in both controlled experiments (in the second one, we did it in the first assessment). Previously (Balera and Santiago Júnior 2016), we analyzed cost and efficiency in isolation.
This paper is structured as follows. Section 2 presents an overview of the main concepts related to CIT. In Section 3, we show the main definitions and procedures of versions 1.1 and 1.2 of our algorithm. Section 4 shows all the details of the first controlled experiment when we compare TTR 1.1 against TTR 1.2. In Section 6, the second controlled experiment is presented where TTR is confronted with the other 5 greedy tools. Section 7 presents related work. In Section 8, we show the conclusions and future directions of our research.
In this section we present some basic concepts and definitions (Kuhn et al. 2013; Petke et al. 2015; Cohen et al. 2003) related to CIT. A CIT algorithm receives as input a number of parameters (also known as factors), p, which refer to the input variables. Each parameter can assume a number of values (also known as levels), v. Moreover, t is the strength of the coverage of interactions. For example, in pairwise testing, the degree of interaction is two, so the value of the strength is 2. In t-way testing, a t-tuple is an interaction of parameter values of size equal to the strength. Thus, a t-tuple is a finite ordered list of elements, i.e. it is a sequence of elements.
A Fixed-value Covering Array (CA), denoted by CA(N,p,v,t), is an N×p matrix of entries from the set {0,1,⋯,(v−1)} such that every set of t columns contains each possible t-tuple of entries at least a certain number of times (e.g. once). N is the number of rows of the array (matrix). Note that in a CA, entries are from the same set of v values.
A Mixed-value Covering Array (MCA) is an extension of a CA and is more flexible because it allows parameters to assume values from different sets. Hence, it is represented as MCA$\left(N,v^{p_{1}}_{1}v^{p_{2}}_{2}\cdots v^{p_{m}}_{m}, t\right)$, where N is the number of rows of the matrix, $\sum_{i=1}^{m} p_{i}$ is the number of parameters, each $v_i$ is the number of values for each parameter $p_i$, and t is the strength.
Therefore, in CIT a CA or MCA is a test suite and each row of such matrices is a test case. Suppose that we need to generate a pairwise unconstrained CIT test suite considering the following parameters and their respective values:
$$\begin{array}{*{20}l} OS &= \{macOS, Linux, Windows\},\\ Protocol &= \{IPv4, IPv6\},\\ DBMS &= \{MySQL, PostgreSQL, Oracle\}. \end{array} $$
We can formulate this problem as MCA$(N, 2^{1}3^{2}, 2)$, which is denoted as a model for the CIT problem. In other words, we have one parameter (Protocol) which can assume two values, two parameters (OS, DBMS) which can assume three values, and t=2.
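For instance, a pairwise-covering test suite for this model can be built with N = 9 test cases. One possible suite (hand-built here for illustration; a particular tool may output a different, equally valid array) is: (macOS, IPv4, MySQL), (macOS, IPv6, PostgreSQL), (macOS, IPv4, Oracle), (Linux, IPv6, MySQL), (Linux, IPv4, PostgreSQL), (Linux, IPv4, Oracle), (Windows, IPv4, MySQL), (Windows, IPv4, PostgreSQL), (Windows, IPv6, Oracle). Each of the 21 possible value pairs (6 for OS×Protocol, 9 for OS×DBMS, 6 for Protocol×DBMS) appears in at least one row.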
As we have mentioned in Section 1, CIT is an interesting solution for the test suite minimization problem. As a matter of perspective, let us consider that there are 10 parameters (A,B,⋯,J) and that each parameter has 5 values, i.e. A = {a_1, a_2, ⋯, a_5}, B = {b_1, b_2, ⋯, b_5}, ⋯, J = {j_1, j_2, ⋯, j_5}. If we performed an exhaustive combination, there would be $5^{10} = 9{,}765{,}625$ test cases generated, where each test case is $tc_i = \{a_k, b_k, \cdots, j_k\}$. By using version 1.2 of TTR with t=2, even in an unconstrained context, the test suite reduces to 45 test cases. This gives an idea of the strength of CIT for test suite minimization.
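To make the contrast concrete, a minimal Java sketch (illustrative only, not part of TTR) can count both the exhaustive suite size and the number of 2-tuples that any pairwise suite must cover for this example:

```java
public class SuiteSizes {
    public static void main(String[] args) {
        int p = 10, v = 5, t = 2;                  // 10 parameters, 5 values each, pairwise

        long exhaustive = (long) Math.pow(v, p);   // v^p = 9,765,625 test cases

        // C(p, t) parameter interactions, each with v^t value combinations.
        long interactions = 1;
        for (int i = 0; i < t; i++) {
            interactions = interactions * (p - i) / (i + 1);   // C(10, 2) = 45
        }
        long tuples = interactions * (long) Math.pow(v, t);    // 45 * 25 = 1125

        System.out.println("exhaustive suite: " + exhaustive + " test cases");
        System.out.println("2-tuples to cover: " + tuples);
    }
}
```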
Note that the concepts and definitions we provided in this section are related to the context in which our work is inserted: unconstrained CIT. In case of constrained CIT, constraints must be considered and other definitions can be used (see e.g. (Yamada et al. 2016)).
TTR: a new algorithm for combinatorial interaction testing
In this section we detail versions 1.1 and 1.2 of our algorithm. The three versions (1.0 (Balera and Santiago Júnior 2015), 1.1, and 1.2) of TTR were implemented in Java.
TTR: Version 1.1
Version 1.0 of TTR (Balera and Santiago Júnior 2015) can be summarized as follows: (i) it generates all possible t-tuples that have not yet been covered (the Constructor procedure constructs the matrix Θ); (ii) it generates an initial solution, the matrix M; and (iii) it reallocates the t-tuples from Θ in order to achieve the best final solution (M) via the Main procedure. Then, the final set of test cases is updated in the matrix M. An important point here is that we order the parameters and values that are submitted to the algorithm. In other words, if we submit five parameters A, B, C, D, E with 10, 4, 3, 8, 5 values respectively, TTR orders these five parameters in descending order of number of values: A, D, E, B, C. The goal is to try to be insensitive to the input order of parameters and values.
The same steps described above also exist in TTR 1.1. However, compared with version 1.0 (Balera and Santiago Júnior 2015), in version 1.1 we do not order the parameters and values submitted to our algorithm. The result is that test suites of different sizes may be derived if we submit a different order of parameters and values. The motivation for such a change is that we realized that, in some cases, fewer test cases were created due to the non-ordering of parameters and values.
Let us consider the running example in Fig. 1 with the strength, t, equal to 2. It is important to note that this is at the unit testing level, and hence each one of the parameters of register is an input parameter submitted to TTR. Thus, there are 3 parameters: bank, function and card. We assume that there are two banks (bankA, bankB), two functions (debit, credit), and three types of cards (cardA, cardB, cardC) to deal with. Therefore, there are 2, 2, and 3 values of bank, function and card, respectively, as shown in Table 1.
A running example: register method
Table 1 Example of parameters and values: Fig. 1
A high-level view of version 1.1 of TTR is in Algorithm 1. The main reasoning of TTR 1.1 is to build an MCA M through the reallocation of t-tuples from a matrix Θ to this matrix M, where each reallocated t-tuple should cover the greatest number of t-tuples not yet covered, considering a parameter called the goal (ζ). Also note that P is the submitted set of parameters, V is the set of values of the parameters, and t is the strength. As we have just pointed out, TTR 1.1 follows the same general 3 steps as TTR 1.0.
Before going on with the descriptions of the procedures of our algorithm, we need to define the following operators applied to the structures (set, sequence, matrix) we handle. We also present some examples to better illustrate how such operators work.
Definition 1
Let A be a sequence and B be a set. The addition sequence-set operator, ⊙, is such that A⊙B is a sequence where the elements of B are added after the last position of A. Thus, if |A| is the length of sequence A and |B| is the cardinality of set B, |A⊙B|=|A|+|B|.
Example: Let us consider sequence A={1,2,3} and set B={4,5}. Then, A⊙B={1,2,3,4,5}.
Definition 2

Let A and B be two sequences with the same length, i.e. |A| = |B|. The addition sequence-sequence operator, ⊕, is such that A⊕B is a sequence where the element in position i of A⊕B, ab_i, is a_i, the element of A in position i, or b_i, the element of B in position i. Also note the definition of an "empty" element, λ, within a sequence, which is an element with no value. This operator then assumes that if a_i ≠ λ and b_i ≠ λ then ab_i = a_i = b_i. However, if a_i = λ and b_i ≠ λ then ab_i = b_i. On the other hand, if a_i ≠ λ and b_i = λ then ab_i = a_i. Note that |A⊕B| = |A| = |B|.
Example: Let us consider sequences A={1,2,λ} and B={λ,2,3}. Then, A⊕B={1,2,3}.
Definition 3

Let A and B be two sequences. The removal operator, ⊖, is such that A⊖B is a sequence obtained by "removing" each element of B, b_i, from A. This operator assumes that the original sequences A and B are known, so that A⊖B = A.
Example: Let us consider that originally we have sequences A={1,2,λ}, B={λ,2,3}, and A⊕B={1,2,3}. Then A⊖B=A={1,2,λ}.
Definition 4

Let A and B be two sets. The set difference operator, ∖, is defined as in set theory.
Example: Let us consider we have sets A={1,2,3} and B={2,3}. Then A∖B={1}.
Definition 5

Let A be a matrix and B be a sequence. The concatenation operator, ∙, is such that A∙B is a matrix where a new row (sequence) B is added after the last row of A.
Example: Let us consider the matrix A below and sequence B={10,11,12}. The matrix A∙B is shown below.
$$A = \left[ \begin{array}{lll} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array}\right] $$

$$A \bullet B = \left[ \begin{array}{lll} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ 10 & 11 & 12 \end{array}\right] $$
Definition 6

Let A be a matrix and B be a sequence. The removal from matrix operator, ∘, is such that A∘B is a matrix obtained by removing the entire row (sequence) B from the last row of matrix A. This operator assumes that the original matrix A and sequence B are known, so that A∘B = A.
Example: Let us consider we have matrix A and sequence B presented in the previous example. Then A∘B=A as shown below.
$$A \circ B = A = \left[ \begin{array}{lll} 1 & 2 & 3 \\[0.3em] 4 & 5 & 6 \\[0.3em] 7 & 8 & 9 \end{array}\right] $$
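To make these operators concrete, the following Java sketch (an illustrative re-implementation, not the authors' code; λ is represented by null) shows how ⊙ and ⊕ could be realized over lists:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class Operators {
    // Addition sequence-set (⊙): append the elements of set b after sequence a.
    static <T> List<T> addSet(List<T> a, Set<T> b) {
        List<T> out = new ArrayList<>(a);
        out.addAll(b);
        return out;
    }

    // Addition sequence-sequence (⊕): merge two same-length sequences,
    // where null plays the role of the empty element λ.
    static <T> List<T> addSeq(List<T> a, List<T> b) {
        List<T> out = new ArrayList<>();
        for (int i = 0; i < a.size(); i++) {
            out.add(a.get(i) != null ? a.get(i) : b.get(i));
        }
        return out;
    }

    public static void main(String[] args) {
        // Reproduces the examples above: [1, 2, 3, 4, 5] and [1, 2, 3].
        System.out.println(addSet(Arrays.asList(1, 2, 3),
                new LinkedHashSet<>(Arrays.asList(4, 5))));
        System.out.println(addSeq(Arrays.asList(1, 2, null), Arrays.asList(null, 2, 3)));
    }
}
```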
The constructor procedure
According to the specified input (parameters and values), the Constructor procedure aims to generate all t-tuples that need to be covered. Each t-tuple is in the matrix Θ_{|C|×|P|}, where |C| represents the number of t-tuples, t is the strength, and |P| is the number of parameters.
Each row, θ_i, of Θ is a t-tuple that has not yet been covered, and it has a variable, flag, associated with it whose purpose is to aid in the reallocation process of the t-tuple into the final solution. Note that since the order matters, each t-tuple θ_i is indeed a sequence and not a set. Moreover, flag does not belong to Θ. Table 2 shows the matrix Θ for the example shown in Fig. 1 and t = 2. Note that interactions are made for the values of bank∖function, bank∖card, and function∖card. Then, a t-tuple corresponding to the interaction of factors bank∖function can be written in the form θ_i = {bankA, debit, λ}. Initially, all values of flag are false. Algorithm 2 shows the Constructor procedure.
Table 2 Matrix Θ for the example in Fig. 1
Constructor operates as follows: based on the set of parameters (domain), P, and the strength (t), interactions between the parameters are generated through the enumeration procedure and stored in a set named E (line 1). For example, we have 3 parameters (bank, function and card) and t = 2, thus we know that the enumerator will generate the interactions 2 by 2 (t = 2) between these 3 parameters. Thus E = {I_1, I_2, I_3}, where we have the sets I_1 = {bank, function, λ}, I_2 = {bank, λ, card}, and I_3 = {λ, function, card}. For better understanding, we denote the elements of I_l in this way: bank∖function, bank∖card and function∖card. Then, the interactions (I_l) are selected one at a time (line 2), and during this selection, t-tuples are constructed based on each parameter of that interaction: in line 5, the first parameter of the first interaction, p_1, is selected. Note that each parameter, p_j, is indeed another set composed of values, v_k. Thus, p_1 = bank = {bankA, bankB}, p_2 = function = {debit, credit}, and p_3 = card = {cardA, cardB, cardC}. Therefore, each of the values (v_k) is added to t-tuples (θ_i) (line 6) and also to Θ (line 7). Recall that θ_i is indeed a sequence. From now on, subsequent parameters are selected one by one, and a new t-tuple is generated from the combination of each of the values (v_k) with each of the preexisting t-tuples (θ_i) in Θ (line 16). For example, the algorithm selects the first generated interaction, I_1 = bank∖function, and constructs all t-tuples between these two parameters. After processing each interaction, I_l, the Constructor procedure removes it from the set E (line 21).
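As a rough illustration of this enumeration step (a hypothetical sketch, not the authors' implementation), the t-wise parameter interactions and their value tuples can be generated as follows:

```java
import java.util.ArrayList;
import java.util.List;

public class TupleEnumerator {
    // Generates every t-tuple (as a full-width row with nulls for the
    // unused parameters, i.e. λ) for all C(|P|, t) parameter interactions.
    static List<String[]> enumerate(String[][] values, int t) {
        List<String[]> tuples = new ArrayList<>();
        combine(values, t, 0, 0, new int[t], tuples);
        return tuples;
    }

    // Chooses t parameter indices (an interaction I_l) recursively.
    private static void combine(String[][] values, int t, int start, int depth,
                                int[] idx, List<String[]> out) {
        if (depth == t) {
            fill(values, idx, new String[values.length], 0, out);
            return;
        }
        for (int i = start; i < values.length; i++) {
            idx[depth] = i;
            combine(values, t, i + 1, depth + 1, idx, out);
        }
    }

    // Enumerates the Cartesian product of the chosen parameters' values.
    private static void fill(String[][] values, int[] idx, String[] row,
                             int depth, List<String[]> out) {
        if (depth == idx.length) {
            out.add(row.clone());
            return;
        }
        for (String v : values[idx[depth]]) {
            row[idx[depth]] = v;
            fill(values, idx, row, depth + 1, out);
        }
        row[idx[depth]] = null;
    }

    public static void main(String[] args) {
        String[][] values = {
            {"bankA", "bankB"}, {"debit", "credit"}, {"cardA", "cardB", "cardC"}
        };
        // 4 + 6 + 6 = 16 t-tuples for the running example, matching Table 2.
        System.out.println(enumerate(values, 2).size() + " t-tuples");
    }
}
```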
Note that the main difference between TTR 1.0 and 1.1 is that TTR 1.0 performs the ordering of the domain, P; that is, the parameters are ordered according to the amount of values they have, from the highest to the lowest quantity. For example, consider Fig. 1 and this input order: bank, function, and card. In version 1.0, parameters are stored in an ordered way: the first parameter becomes card (3 values), the second parameter is bank (2 values) and the last parameter is function (2 values). In version 1.1, there is no such ordering, and this explains why bank and function generate the first rows (t-tuples) of Θ (see Table 2).
The initial solution and addition of test cases
The matrix M_{N×(|P|+1)} is the MCA we need to construct, where there are N rows (i.e. test cases) and |P| parameters. The (|P|+1)-th column is not used to represent any parameter but rather to hold the value of the goal (ζ) associated with that test case. There exists an initial solution for the matrix M that is obtained by selecting the parameter interaction I_l that has the largest amount of uncovered t-tuples (line 3 in Algorithm 1). Considering the input order bank, function, card, I_2 = bank∖card is chosen because it has 6 t-tuples and it appears before I_3 = function∖card. All t-tuples derived via I_2 in the initial solution are combined with empty test cases, respecting the order of input of the parameters/values submitted to TTR 1.1, as shown in Table 3 (see t-tuples θ_5 = {bankA, λ, cardA}, θ_6 = {bankA, λ, cardB}, ⋯ from Θ (Table 2) in the initial M).
Table 3 Initial M: example of Fig. 1
In the same way, as existing test cases become insufficient to allocate the remaining t-tuples of the Θ matrix, the same procedure is used to include new test cases in matrix M. In other words, when the reallocation of t-tuples becomes inefficient, it is necessary to include new test cases. Thus, as in the construction of the initial solution, the interaction of factors I_l that has the largest amount of uncovered t-tuples is selected, so that these will become new test cases. This strategy is performed on line 3 of Algorithm 1.
In order to modify the current solution to obtain the final solution, the test suite M, we rely on the variable goal (ζ). For each row of M, i.e. for each test case, there is an associated goal.
As the objective is to address the largest number of uncovered t-tuples, the goal is calculated according to the maximum number of uncovered t-tuples which may potentially be covered when a t-tuple θ_i is moved from Θ to M. This results in a temporary test case τ_r. In order to find ζ, it is necessary to take into account: (i) the disjoint parameters, P_d, covered by the union of the t-tuple θ_i and a test case from M; (ii) the number of parameter interactions, y, which τ_r has already covered; and (iii) the strength t. Therefore:
$$\zeta = \binom{P_{d}}{t} - y. $$
Let us consider again Fig. 1 and t = 2. According to Θ (see Table 2), the initial solution, M, is composed of the t-tuples due to the parameter interaction bank∖card. This is because I_2 = bank∖card has 6 t-tuples, I_3 = function∖card has 6 t-tuples, and I_1 = bank∖function has 4 t-tuples. As bank∖card appears before function∖card and both have 6 t-tuples, the algorithm selects it for reallocation into M.
The number of disjoint parameters, P_d, is equal to 3. As the interaction bank∖card is already contemplated in matrix M, the next parameter interaction providing the largest number of non-addressed t-tuples is function∖card. Then we have all 3 parameters with bank∖function and function∖card, which explains P_d = 3. As t = 2, we have \binom{3}{2} = 3. However, one of the 3 parameter interactions has already been covered during the initial solution (bank∖card), so we need to cover only 2 parameter interactions. Thus, for each t-tuple in the initial solution M, there remains to be covered:
$$\zeta = \binom{3}{2} - 1 = 2. $$
This explains the goal (ζ) in Table 3. It is very important that y is subtracted in order to find ζ. If this is not done, the final goal will never be matched, since there are no uncovered t-tuples that correspond to this interaction.
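To make the calculation tangible, the following sketch of ours (goal and binomial are hypothetical names, not taken from the TTR code) computes ζ = C(P_d, t) − y:

```java
// Hypothetical sketch of the goal calculation: zeta = C(Pd, t) - y.
static long binomial(int n, int k) {
    long result = 1;
    for (int i = 1; i <= k; i++) {
        result = result * (n - k + i) / i; // exact: intermediate values stay integral
    }
    return result;
}

static long goal(int disjointParameters, int strength, int coveredInteractions) {
    return binomial(disjointParameters, strength) - coveredInteractions;
}

// Running example: Pd = 3, t = 2, y = 1  =>  goal(3, 2, 1) == 2
```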
Even considering y, it is also important to note that the expected goals will not always be reached with the current configurations of the M and Θ matrices. In other words, in certain cases, no existing t-tuple will allow the test cases of the M matrix to reach their goals. It is at this point that it becomes necessary to insert new test cases in M. This insertion is done in the same way as the initial solution for M is constructed, as described in the section above.
The Main Procedure
The Main procedure is presented in Algorithm 3. After the construction of the matrix Θ, the initial solution, and the calculation of the goals of all t-tuples, Main sorts Θ so that the elements belonging to the parameter interaction with the greatest number of t-tuples come first (line 1). However, these t-tuples will not be reallocated from Θ to M all at once. This is done gradually, one by one, as goals are reached (lines 7 to 11). Since the matrix M is traversed in the loop (line 4), it is updated every time a t-tuple is combined with one of its test cases (note ⊕ in line 5).
Let us consider Fig. 2. All matrices in this figure represent snapshots of M. The upper left matrix (a) is the initial solution. As long as there exist t-tuples (θ_i) in Θ, the Main procedure keeps running. Thus, Main selects from Θ the interaction with the largest number of uncovered t-tuples. In Table 2, t-tuples were selected from the parameter interaction I_3 = function∖card. Every t-tuple of the function∖card interaction is combined with each test case in M until the t-tuple matches some goal (line 7).
Snapshots of M: a initial solution; b and c intermediate matrices; d final test suite
When an uncovered t-tuple fits into a row of M to complete a test case and this t-tuple is not removed on line 9 of Algorithm 3, it means that the goal for that row of M is reached. Take the first row of the initial M (Table 3), which is a test case (τ_r) originated from θ_5 = {bank_A, λ, card_A}, and the first t-tuple of the function∖card interaction not yet covered in Θ, θ_11 = {λ, debit, card_A}. The addition of θ_11 = {λ, debit, card_A} to M is accepted because ζ = 2 is reached. Note that the initial M, with test cases τ_r, is also an input parameter of this procedure. Hence, in line 5, M is updated through the sequence-sequence addition operator (⊕). In addition, note that τ_r is a sequence, just as θ_i is. In other words, by inserting θ_11 = {λ, debit, card_A}, we obtain a complete test case τ_r = {bank_A, debit, card_A}. In this way, the other two interactions, bank∖function (θ_1 = {bank_A, debit, λ}) and function∖card (θ_11 = {λ, debit, card_A}), are covered, and the goal is achieved. The upper right matrix (b) in Fig. 2 shows the result of this first addition.
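The ⊕ operation can be pictured with the sketch below, a simplified illustration of ours in which λ is represented as null and both τ_r and θ_i are arrays with one slot per parameter (combine is a hypothetical name):

```java
// Hypothetical sketch of the sequence-sequence addition (⊕): fill the empty
// (λ, here null) positions of test case tau with the values of t-tuple theta.
// Returns null when the two disagree on an already-filled position.
static String[] combine(String[] tau, String[] theta) {
    String[] merged = tau.clone();
    for (int p = 0; p < tau.length; p++) {
        if (theta[p] == null) continue;      // λ in the t-tuple: nothing to add
        if (merged[p] == null) {
            merged[p] = theta[p];            // fill an empty slot
        } else if (!merged[p].equals(theta[p])) {
            return null;                     // conflict: cannot combine
        }
    }
    return merged;
}
```

Applied to τ_r = {bank_A, λ, card_A} and θ_11 = {λ, debit, card_A}, this returns the complete test case {bank_A, debit, card_A}.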
After all combinations between t-tuples and test cases are made, that is, when the procedure ends, the new ζ is calculated. The bottom left matrix (c) shows the new values of ζ (see rows 3 and 6). The steps described above are then repeated with the insertion/reallocation of t-tuples into the matrix M. Once an uncovered t-tuple of Θ is included in M and meets the goal, that t-tuple is excluded from Θ (line 7). Note that if a t-tuple does not allow the test case with which it was combined to reach the goal, it is "unbound" (line 9) from this test case so that it can be combined with the next test case. The final test suite is the matrix M shown at the bottom right (d).
It is possible that a certain uncovered t-tuple does not fit into M. Consequently, the flag variable associated with this t-tuple in Θ is set to true so that the Main procedure knows that such a t-tuple can no longer be compared with rows of M. Main continues as long as there are uncovered t-tuples. Table 4 shows part of Θ after the first iteration. Note that t-tuples θ_13 = {λ, debit, card_C} and θ_16 = {λ, credit, card_C} of the function∖card interaction are not inserted into M (see the values true).
Table 4 Part of Θ: unfitness
This exception, illustrated in Table 4 with θ_13 = {λ, debit, card_C} and θ_16 = {λ, credit, card_C}, happens because the tests generated by these t-tuples and the available rows of the matrix M address t-tuples already covered in Θ. Assuming that the test consists of the combination of a t-tuple and row 3 of M, only one t-tuple is covered, since there are no more t-tuples to be covered in bank∖card and bank∖function, as illustrated in Table 4. However, ζ = 2 is not satisfied and these t-tuples cannot be removed from Θ. It is then necessary to recalculate the goals according to the parameter interactions that have already been addressed.
The high-level view of the new version of TTR, 1.2, is in Algorithm 4. This new version no longer uses the Constructor procedure, since t-tuples are generated one at a time as they are reallocated. In other words, there is no longer a Θ, a matrix of t-tuples. What we have now is only φ, a matrix of parameter interactions. TTR 1.2 works as follows: (i) it generates only the parameter interactions (it does not generate the t-tuples yet); (ii) it generates an initial solution, the matrix M; and (iii) the t-tuples are generated from φ in order to obtain the final solution (M) via the Main procedure.
Let us consider the code in Fig. 3, where parameters and values are given in Table 5 and t = 3. It is a method to update information in a company database. TTR 1.2 constructs only the parameter interactions according to the strength and stores the number of corresponding t-tuples (Φ) in a matrix φ. These parameter interactions are I_1 = {status, education, regime, λ, 8}, I_2 = {status, education, λ, working_hours, 8}, I_3 = {status, λ, regime, working_hours, 8}, and I_4 = {λ, education, regime, working_hours, 8}, where the last element of each I_l is the number of t-tuples Φ (in all these cases Φ = 8). Here, each interaction I_l is indeed a sequence, because the algorithm needs to know the exact number of t-tuples and hence position matters. Note that λ is the empty element. No t-tuple corresponding to any parameter/value interaction is constructed, as shown in Table 6. The calculation of Φ is simply done by multiplying the number of values of each parameter in the corresponding interaction (a minimal sketch of this calculation is given below, after Table 6).
A second running example: update method
Table 6 Matrix φ for the example of Fig. 3
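A minimal sketch of this Φ calculation, assuming each parameter is represented only by its number of values (tuplesPerInteraction and combinationsOfSize are our illustrative names), could be:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: Phi of an interaction is the product of the number
// of values of each parameter it contains.
static long tuplesPerInteraction(int[] valueCounts, int[] interaction) {
    long phi = 1;
    for (int p : interaction) phi *= valueCounts[p];
    return phi;
}

// Enumerate all interactions of t parameters out of n (indices 0..n-1).
static List<int[]> combinationsOfSize(int n, int t) {
    List<int[]> result = new ArrayList<>();
    collect(result, new int[t], 0, 0, n, t);
    return result;
}

static void collect(List<int[]> out, int[] cur, int pos, int start, int n, int t) {
    if (pos == t) { out.add(cur.clone()); return; }
    for (int p = start; p <= n - (t - pos); p++) {
        cur[pos] = p;
        collect(out, cur, pos + 1, p + 1, n, t);
    }
}
```

For the running example (four parameters with 2 values each and t = 3), combinationsOfSize(4, 3) yields the four interactions I_1, …, I_4, and tuplesPerInteraction returns 2 × 2 × 2 = 8 for each of them, matching Table 6.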
Initial solution
In this case, the initial solution is nothing more than the construction of the t-tuples due to the parameter interaction with the greatest Φ, and their transformation into test cases. In Table 7, the t-tuples of the parameter interaction I_1 = {status, education, regime, λ, 8} were all transformed into test cases; therefore, for this parameter interaction, Φ becomes 0 and it is no longer considered in the goal (ζ) calculation (Table 8). In fact, we have 4 parameters and t = 3, thus 4 possible parameter interactions are generated: one is already covered, leaving 3 parameter interactions (I_2, I_3, I_4) to be addressed. This justifies ζ = 3 (Table 7).
Table 7 Initial M for the example of Fig. 3
Table 8 Matrix φ for the example of Fig. 3: after the initial solution
The new Main procedure is presented in Algorithm 5. After calculating the parameter interactions, Φ, the initial solution, and the goals of all test cases of M, Main selects the parameter interaction that has the highest number of uncovered t-tuples (line 2) and constructs its t-tuples so that they can be reallocated. However, they are reallocated gradually, one by one, as goals are reached (lines 4 to 13). The procedure combines the t-tuples with the test cases of M in order to match them.
Let us take the second running example (Fig. 3). The parameter interaction with the highest number of non-addressed t-tuples is I_2 = {status, education, λ, working_hours, 8} (Φ = 8; Table 8 after the initial solution): all t-tuples of this interaction are generated and stored in a sequence S (line 3). The first t-tuple, θ_1 = {active, undergraduate, λ, afternoon}, is combined with each test case τ_r in M (line 7). The t-tuple in question fits test case 1, τ_1. At that moment, it is verified whether the t-tuple θ_i makes the test case τ_r reach its goal. This control is done through the goal() function, which receives the test case τ_r and breaks it into t-tuples (line 8) according to the parameter interactions that have Φ other than 0. For example, the test case τ_1 = {active, undergraduate, partial, afternoon} is broken into the t-tuples {{active, undergraduate, partial, λ}, {active, undergraduate, λ, afternoon}, {active, λ, partial, afternoon}, {λ, undergraduate, partial, afternoon}}. It is then verified how many of these t-tuples do not exist in M and, if this amount equals the respective ζ, θ_i is permanently stored in M and one unit is subtracted from the Φ of each of the parameter interactions that have t-tuples covered by this test case (line 12), because this keeps control of the quantity of t-tuples that still have to be covered. Since the matrix M is traversed in the loop (line 6), it is updated every time a t-tuple is combined with one of its test cases (line 7).
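The decomposition performed by goal() can be visualized with the following sketch (breakIntoTuples is our name, not the actual TTR identifier), which reuses the combinationsOfSize helper and java.util imports from the earlier sketch: keeping t positions of the test case and setting every other position to λ (null) yields one t-tuple per parameter interaction:

```java
// Hypothetical sketch of goal()'s decomposition step: break a test case into
// its t-tuples, one per parameter interaction, by keeping the t chosen
// positions and setting every other position to λ (null).
static List<String[]> breakIntoTuples(String[] testCase, int t) {
    List<String[]> tuples = new ArrayList<>();
    for (int[] interaction : combinationsOfSize(testCase.length, t)) {
        String[] tuple = new String[testCase.length]; // all λ by default
        for (int p : interaction) tuple[p] = testCase[p];
        tuples.add(tuple);
    }
    return tuples;
}
```

For τ_1 = {active, undergraduate, partial, afternoon} and t = 3, this yields exactly the four t-tuples listed above.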
This step is repeated for all t-tuples. Each time a t-tuple is reallocated from S into M, the goals are recalculated. For example, when the matrix M permanently receives the 4th t-tuple, the test cases that become complete (with a value for each parameter) have ζ = 0 while the others still have ζ = 3 (Table 9).
Table 9 Intermediate matrix M for the example of Fig. 3
All I_2 t-tuples are reallocated from S in order to achieve the goals of all test cases in M, resulting in the final test suite presented in Table 10. In fact, the Main procedure does not construct new t-tuples from another parameter interaction while the Φ of the current one is not zero: if the parameter interaction I_2 (selected due to the greatest Φ) still has t-tuples, Main will not select another parameter interaction. To this end, the goal of the test cases is decreased, one unit at a time, until all t-tuples of the parameter interaction I_2 make the test cases match ζ.
Table 10 Final matrix M for the example of Fig. 3
Controlled experiment 1: TTR 1.1 × TTR 1.2
This section presents a controlled experiment where we compare versions 1.1 and 1.2 of TTR in order to determine whether there is a significant difference between both versions of our algorithm. We carried out an experiment where we jointly considered cost and efficiency in a multi-objective perspective.
Definition and context
The primary aim of this study is to evaluate cost and efficiency related to CIT test case generation via versions 1.1 and 1.2 of the TTR algorithm (both implemented in Java). The rationale is to determine whether there are significant differences between the two versions of our algorithm.
Regarding the metrics, cost refers to the size of the test suites while efficiency refers to the time to generate the test suites. Although the size of the test suite is used as an indicator of cost, it does not necessarily mean that test execution cost is always less for smaller test suites. However, we assume that this relationship (higher size of test suite means higher execution cost) is generally valid. We should also emphasize that the time we addressed is not the time to run the test suites derived from each algorithm but rather the time to generate them. We jointly analyzed cost and efficiency in a multi-objective way.
The set of samples, i.e. the subjects, are formed by instances that were submitted to both versions of TTR to generate the test suites. We randomly chose 80 test instances/samples (composed of parameters and values) with the strength, t, ranging from 2 to 6. Table 11 shows part of the 80 instances/samples used in this study. Full data obtained in this experiment are presented in (Balera and Santiago Júnior 2017).
Table 11 Samples for the controlled experiment: instances (val = value; par = parameter)
It is important to mention how each instance/sample can be interpreted. Let us consider instance i=1 in Table 11:
$$2^{1} 4^{1} 5^{1} 3^{1} 6^{1}, \quad t=2. $$
In the context of unit test case generation for programs developed according to the Object-Oriented Programming (OOP) paradigm, this instance can be used to generate test cases for a class that has one attribute (parameter) which can take 2 values (2^1), one attribute that can take 4 values (4^1), another attribute that can take 5 values (5^1), ⋯, and one attribute that can take 6 values (6^1). In the system and acceptance testing context, this same sample can be used to identify test scenarios (test objectives) in a model-based test case generation approach (Santiago Júnior 2011; Santiago Júnior and Vijaykumar 2012). In both cases, the test suites must meet the criteria of pairwise testing (t = 2), where each combination of 2 values of all parameters must be covered. Note that these samples were randomly selected and they cover a wide range of combinations of parameters, values, and strengths, suitable for very simple but also more complex case studies at different testing levels (unit, system, acceptance, etc.).
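To make the notation concrete, the sketch below (hypothetical names, again reusing the combinationsOfSize helper from the earlier sketch) encodes an instance such as 2^1 4^1 5^1 3^1 6^1 as an array of value counts and computes the total number of t-tuples it induces:

```java
// Hypothetical sketch: an instance is just an array of value counts, one
// entry per parameter. The total number of t-tuples it induces is the sum
// of Phi over all parameter interactions of size t.
static long totalTuples(int[] valueCounts, int t) {
    long total = 0;
    for (int[] interaction : combinationsOfSize(valueCounts.length, t)) {
        long phi = 1;
        for (int p : interaction) phi *= valueCounts[p];
        total += phi;
    }
    return total;
}

// Instance i = 1 with t = 2:
// totalTuples(new int[]{2, 4, 5, 3, 6}, 2) sums Phi over the 10 pairwise interactions.
```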
Hypotheses and variables
We defined two hypotheses as shown below:
Null Hypothesis, H 0.1 - There is no difference regarding cost-efficiency between TTR 1.1 and TTR 1.2;
Alternative Hypothesis, H 1.1 - There is difference regarding cost-efficiency between TTR 1.1 and TTR 1.2.
Regarding the variables involved in this experiment, we can highlight the independent and dependent variables (Wohlin et al. 2012). The first type are those that can be manipulated or controlled during the trial process and define the causes of the hypotheses. For this experiment, we identified the algorithm/tool for CIT test case generation. The dependent variables allow us to observe the result of the manipulation of the independent ones. For this study, we identified the number of generated test cases and the time to generate each set of test cases, and we considered them jointly.
Description of the experiment
The experiment was conducted by the researchers who defined it. We relied on the experimentation process proposed in (Wohlin et al. 2012), using the R programming language version 3.2.2 (Kohl 2015). Both algorithms/tools (TTR 1.1, TTR 1.2) were subjected to each one of the 80 test instances (see Table 11), one at a time. The output of each algorithm/tool, with the number of test cases and the time to generate them, was recorded.
To measure cost, we simply verified the number of generated test cases, i.e. the number of rows of the final matrix M, for each instance/sample. The efficiency measurement required us to instrument each one of the implemented versions of TTR and measure the current system time before and after the execution of each algorithm. In all cases, we used a computer with an Intel Core(TM) i7-4790 CPU @ 3.60 GHz processor, 8 GB of RAM, running the Ubuntu 14.04 LTS (Trusty Tahr) 64-bit operating system. The goal of this second analysis is to provide an empirical evaluation of the time performance of the algorithms.
To perform the multi-objective cost-efficiency evaluation, we followed two steps. First, we transformed the cost-efficiency (two-dimensional) representation into a one-dimensional one. Then, in a second step, we used statistical tests, such as the t-test or the nonparametric Wilcoxon test (Signed Rank) (Kohl 2015), to compare the two test suites (TTR 1.1 and TTR 1.2). To address the nondeterminism of the algorithms/tools, related to the input ordering of parameters and values, we generated test cases with 5 variations in the order of parameters and values, and took the average of these 5 assessments for the statistical tests. We then obtained points (c_{A_i}, t_{A_i}) that represent the average cost (c_{A_i}) and average time (t_{A_i}) of algorithm A (TTR 1.1, TTR 1.2) for each instance i (1 ≤ i ≤ 80).
We then determined an optimal point in the two-dimensional space, the point (0,0). This point implies a cost close to 0 and a time close to 0. We say "close" because an algorithm is not expected to generate a test suite with exactly 0 test cases, nor does it require 0 units of time to generate the set of test cases. We then used a measure of distance, the Euclidean one, to measure the distance from the optimal point (0,0) to (c_{A_i}, t_{A_i}). Thus, each algorithm is represented by a one-dimensional set, D, where each d_i ∈ D is the Euclidean distance between (0,0) and (c_{A_i}, t_{A_i}) for every instance i. We selected the Euclidean distance because it is one of the most used similarity distance measures. In software testing, the Euclidean distance has been used as a quality indicator in multi-objective test case/data generation (Filho and Vergilio 2015; Santiago Júnior and Silva 2017), to support the automation of test oracles for complex output domains (web applications (Delamaro et al. 2013), text-to-speech systems (Oliveira 2017)), and in many other contexts.
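A minimal sketch of this reduction, assuming each instance already has its average cost and average generation time, is:

```java
// Hypothetical sketch: collapse the (cost, time) pair of each instance into a
// single value, its Euclidean distance from the optimal point (0, 0).
static double[] distances(double[] avgCost, double[] avgTime) {
    double[] d = new double[avgCost.length];
    for (int i = 0; i < d.length; i++) {
        d[i] = Math.hypot(avgCost[i], avgTime[i]); // sqrt(cost^2 + time^2)
    }
    return d;
}
```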
Based on this one-dimensional cost-efficiency representation, we relied on appropriate statistical evaluation to check data normality. Verification of normality was done in three steps: (i) by using the Shapiro-Wilk test (Shapiro and Wilk 1965) with a significance level α = 0.05; (ii) by checking the skewness of the frequency distribution (in this case, −0.1 ≤ skewness ≤ 0.1 for the data to be considered normally distributed); and (iii) by graphical verification by means of a Q-Q plot (Kohl 2015) and a histogram. Thus, we believe we have greater confidence in this conclusion on data normality compared with an approach based only on the Shapiro-Wilk test, considering the bias effects due to the sample size.
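For step (ii), a sketch of the moment-based sample skewness we have in mind is shown below (an illustration under the assumption of the common m3/m2^(3/2) estimator; the actual R routine may differ):

```java
// Hypothetical sketch: moment-based sample skewness, m3 / m2^(3/2).
// Data pass the skewness criterion when -0.1 <= skewness <= 0.1.
static double skewness(double[] x) {
    int n = x.length;
    double mean = 0;
    for (double v : x) mean += v;
    mean /= n;
    double m2 = 0, m3 = 0;
    for (double v : x) {
        double d = v - mean;
        m2 += d * d;
        m3 += d * d * d;
    }
    m2 /= n;
    m3 /= n;
    return m3 / Math.pow(m2, 1.5);
}
```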
If we concluded that the data came from a normally distributed population, then the paired, two-sided t-test was applied with α = 0.05. Otherwise, we applied the nonparametric paired, two-sided Wilcoxon test (Signed Rank) (Kohl 2015), also with α = 0.05. However, if the samples presented ties, we applied a variation of the Wilcoxon test, the Asymptotic paired, two-sided Wilcoxon test (Signed Rank) (Kohl 2015), suitable for treating ties, with significance level α = 0.05.
In order to reject the Null Hypothesis, H 0.1, we checked whether p-value < 0.05 (t-test) or whether both p-value < 0.05 and |z| > 1.96 (Wilcoxon), where z is the z-score. If H 0.1 was rejected, we observed the average of all 80 Euclidean distances due to each algorithm. The algorithm that presented the lowest average of Euclidean distances was chosen as the most adequate. If H 0.1 could not be rejected, then the conclusion was that no statistical difference existed between the algorithms.
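The decision procedure can be summarized as follows (an illustrative sketch only, with hypothetical names; thresholds as stated above):

```java
// Hypothetical sketch of the decision rule: reject H0 when the applied test
// signals a difference, then prefer the algorithm with the smaller average
// Euclidean distance.
static String decide(boolean usedTTest, double pValue, double z,
                     double meanDist1, double meanDist2) {
    boolean reject = usedTTest ? pValue < 0.05
                               : pValue < 0.05 && Math.abs(z) > 1.96;
    if (!reject) return "no statistical difference";
    return meanDist1 < meanDist2 ? "algorithm 1" : "algorithm 2";
}
```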
In this section, we present the results of this first controlled experiment. Based on the cost-efficiency one-dimensional representation (Section 4.3), we considered four evaluation classes as follows:
All strengths. In this case, all 80 instances/samples (Table 11) with all strengths (2, 3, 4, 5, and 6) were taken into account. Our idea here is to perceive the cost-efficiency performance of both algorithms in a context where several different strengths are selected to generate a test suite;
Low strengths. In this case, we selected only the samples with strength equal to 2. Our aim is to note how the algorithms perform for low strengths;
Medium strengths. By selecting samples with strength equal to 3 or 4, we want to evaluate an intermediate strength context;
High strengths. We aim to assess the performance for higher strengths, i.e. t= 5 or 6.
Table 12 presents the Euclidean distances of part of the 80 samples (all strengths class only; complete data are in (Balera and Santiago Júnior 2017)) as well as the average values, \(\overline {x}\), of such distances. We checked data normality, and Table 13 presents the p-value, p, due to the Shapiro-Wilk test, and the skewness. Note that this table shows p and skewness for all four classes above (all, low, medium, and high strengths). Moreover, Sol 1 is TTR 1.1 and Sol 2 is TTR 1.2. Figures 4 and 5 present the Q-Q plots and histograms for all strengths, Figs. 6 and 7 for lower strengths, Figs. 8 and 9 for medium strengths, and Figs. 10 and 11 for higher strengths, respectively.
Experiment 1: Q-Q plots. a TTR1.1; b TTR 1.2 - All Strengths
Experiment 1: Histograms. a TTR1.1; b TTR 1.2 - All Strengths
Experiment 1: Q-Q plots. a TTR1.1; b TTR 1.2 - 2 Strength
Experiment 1: Histograms. a TTR1.1; b TTR 1.2 - 2 Strength
Experiment 1: Q-Q plots. a TTR1.1; b TTR 1.2 - 3 and 4 Strengths
Experiment 1: Histograms. a TTR1.1; b TTR 1.2 - 3 and 4 Strengths
Table 12 Experiment 1 - Results of the analysis of Euclidean Distance (all strengths)
Table 13 Experiment 1 - Results of the analysis of data normality
We can clearly see that these data did not come from a normally distributed population, because p < 0.05 and the skewness is far from 0. Moreover, the Q-Q plots and histograms reinforce this conclusion. Hence, we used the nonparametric paired, two-sided Wilcoxon test (Signed Rank) or its variation (Asymptotic) when ties were detected. Table 14 presents the p-value, p, |z|, and additional information for the classes all and low strengths, while Table 15 shows the results for medium and high strengths.
Table 14 Experiment 1 - Results of the Wilcoxon test
Based on Tables 14 and 15, we could not reject H 0.1 (no difference) for all strengths, but we could do so for the other evaluation classes and hence accept the Alternative Hypothesis, H 1.1. As we have previously pointed out, when there is a difference regarding cost-efficiency, we examine the average values of the Euclidean distances: the smaller the better. TTR 1.1 is better, in terms of cost-efficiency, than TTR 1.2 for lower strengths (t = 2). However, for medium (t = 3, 4) and higher strengths (t = 5, 6), TTR 1.2 surpassed TTR 1.1. This makes sense because TTR 1.2 does not generate the matrix of t-tuples at the beginning, and hence we expect this latest version of our algorithm to handle higher strengths properly.
Therefore, even though we did not find a statistical difference for all strengths and TTR 1.1 was the best for lower strengths, we decided to select TTR 1.2 for the comparison with the other solutions for unconstrained CIT test case generation, because TTR 1.2 performed better than TTR 1.1 for medium and higher strengths.
The conclusion validity has to do with how sure we are that the treatment we used in an experiment is really related to the actual observed outcome (Wohlin et al. 2012). One of the threats to conclusion validity is the reliability of the measures (Campanha et al. 2010). We obtained the measures automatically via the implementations of the algorithms and hence we believe that replication of this study by other researchers will produce similar results. Even if other researchers get different absolute results, especially related to the time to generate the test suites, simply because such results depend on the computer configuration (processor, memory, operating system), we do not expect a different conclusion validity. Moreover, we relied on adequate statistical methods to reason about data normality and about whether we really found a statistical difference between TTR 1.1 and TTR 1.2. Hence, our study has a high conclusion validity.
The internal validity aims to analyze whether the treatment actually caused the outcome (result). Hence, we need to be sure that other parameters, ones that have not been controlled or measured, did not cause the outcome. There are many threats to internal validity, such as testing effects (measuring the participants repeatedly), history (events external to the experiment, or between repeated measures of the dependent variable, may influence the responses of the subjects, e.g. interruption of the treatment), instrument change, maturation (participants might mature during the study or between measurements), selection bias (differences between groups), etc. Note that the participants of our experiment are randomly selected samples composed of parameters, values, and strengths. Hence, we neither had any human/natural/social parameter nor unanticipated events interrupting the collection of the measures once started that could pose a threat to internal validity. Hence, we claim that our experiment has a high internal validity.
In the construct validity, the goal is to ensure that the treatment reflects the construction of the cause, and the result the construction of the effect. This validity is also high because we used the implementations of TTR 1.1 and TTR 1.2 to assess the cause, and the results, supported by the decision-making procedure via statistical tests, clearly provided the basis for the decision to be made between both algorithms.
Threats to external validity compromise the confidence in asserting that the results of the study can be generalized to and between individuals, settings, and under the temporal perspective. Basically, we can divide threats to external validity in two categories: threats to population and ecological threats.
Threats to population refer to how representative the selected samples of the population are. For our study, the ranges of strengths, parameters, and values are the determining points for this threat. Note that, for such a study, the number of possible combinations of strengths and parameters/values is literally infinite. However, we believe that our choice of the set of samples is significant (80, with strengths spanning from 2 to 6). Also, recall that the samples were determined completely randomly (by combining parameters, values, and strengths), and the input order of parameters and values was also random (for the 5 executions addressing nondeterminism). With this, we guarantee one of the basic principles of the sampling process, namely randomness, to avoid selection bias.
Ecological threats refer to the degree to which the results may be generalized between different configurations. Pre-test effects, post-test effects, and the Hawthorne effect (participants simply feeling stimulated by knowing that they are taking part in an innovative experiment) are some of these threats. The participants in our experiment are the instances/samples composed of parameters, values, and strengths and, therefore, this type of threat does not apply to our case.
Controlled experiment 2: TTR 1.2 × other solutions
In this section, we present a second controlled experiment where we compare TTR 1.2 with five other significant greedy approaches for unconstrained CIT test case generation. Many characteristics of this second controlled experiment resemble the first one (Section 4). We emphasize here the main differences and point to that previous section whenever necessary.
The aim of this experiment is to compare TTR 1.2 with five other greedy algorithms/tools for unconstrained CIT: IPOG-F (Forbes et al. 2008), jenny (Jenkins 2016), IPO-TConfig (Williams 2000), PICT (Czerwonka 2006), and ACTS (Yu et al. 2013). These algorithms/tools have been selected due to their relevance for unconstrained CIT via greedy strategies.
The IPO algorithm (Lei and Tai 1998) is the basis for several other solutions such as IPOG, IPOG-D (Lei et al. 2007), IPOG-F, IPOG-F2 (Forbes et al. 2008), IPOG-C (Yu et al. 2013), IPO-TConfig (Williams 2000), ACTS (where several versions of IPO are implemented) (Yu et al. 2013), and CitLab (Cavalgna et al. 2013). Thus, we considered three of its variations: our own implementation of IPOG-F (in Java), IPO-TConfig (in Java), and IPOG-F2 implemented within ACTS (in Java). Note that ACTS is probably one of the most popular CIT tools, used not only in academia but also by industry professionals for various purposes (NIST National Institute of Standards and Technology 2015). A tool implemented in C, jenny (Jenkins 2016), has been used in informal (Pairwise 2017) and more formal (Segall et al. 2011) CIT comparisons. PICT (in C++) can be regarded as a baseline greedy tool on which other tools have been based (PictMaster 2017).
As in Section 4, the metrics are cost, measured as the size of the test suites, and efficiency, which again refers to the time to generate them. However, to properly measure the time to generate the test suites, we should have access to the source code of the tools in order to instrument them and get more precise and accurate measures. We had only the code of the implementation of TTR 1.2, our own implementation of IPOG-F, and jenny. Thus, we could not measure the time to generate the test cases due to IPO-TConfig, PICT, and ACTS (IPOG-F2). Moreover, note that the time measurements may be influenced by the different programming languages within the cost-efficiency evaluation (TTR 1.2, IPOG-F, and jenny). For this reason, we implemented TTR 1.2 not only in Java but also in C, in order to address a possible evaluation bias in the time measures when comparing TTR 1.2 against the other solutions. To sum up, we decided to perform two evaluations:
Cost-Efficiency (multi-objective). Here, we focused on TTR 1.2, IPOG-F, and jenny, since these were the solutions for which we had the source code and could properly measure the time to generate the test suites. Hence, we compared TTR 1.2 (in Java) with IPOG-F (in Java), and TTR 1.2 (in C) with jenny (in C);
Cost (single objective). In this case, we compared TTR 1.2 (only in Java since efficiency is not considered here and thus time does not matter) with PICT, IPO-TConfig, and ACTS.
With respect to the subjects, the same 80 participants of Section 4 were used (Table 11 and full data are in (Balera and Santiago Júnior 2017)).
Hypotheses of this second experiment are:
Null Hypothesis, H 0.2 - There is no difference regarding cost-efficiency between TTR 1.2 (in Java) and IPOG-F (in Java);
Alternative Hypothesis, H 1.2 - There is difference regarding cost-efficiency between TTR 1.2 (in Java) and IPOG-F (in Java);
Null Hypothesis, H 0.3 - There is no difference regarding cost-efficiency between TTR 1.2 (in C) and jenny (in C);
Alternative Hypothesis, H 1.3 - There is difference regarding cost-efficiency between TTR 1.2 (in C) and jenny (in C);
Null Hypothesis, H 0.4 - There is no difference regarding cost between TTR 1.2 (in Java) and PICT;
Alternative Hypothesis, H 1.4 - There is difference regarding cost between TTR 1.2 (in Java) and PICT;
Null Hypothesis, H 0.5 - There is no difference regarding cost between TTR 1.2 (in Java) and IPO-TConfig;
Alternative Hypothesis, H 1.5 - There is difference regarding cost between TTR 1.2 (in Java) and IPO-TConfig;
Null Hypothesis, H 0.6 - There is no difference regarding cost between TTR 1.2 (in Java) and ACTS;
Alternative Hypothesis, H 1.6 - There is difference regarding cost between TTR 1.2 (in Java) and ACTS.
The independent variable is the algorithm/tool for CIT test case generation for both assessments (cost-efficiency, cost). The dependent variables are the number of generated test cases (cost evaluation), and this number of test cases together with the time to generate each set of test cases in a multi-objective perspective, as in the previous section (cost-efficiency evaluation).
The general description of both evaluations (cost-efficiency, cost) of this second study is basically the same as shown in Section 4. Algorithms/tools were subjected to each one of the 80 test instances, one at a time, and the outcome was recorded. Cost is the number of generated test cases, and efficiency was obtained via instrumentation of the source code with the same computer previously mentioned.
For the multi-objective cost-efficiency evaluation (IPOG-F, jenny), we followed the same two steps previously mentioned: transformation of the cost-efficiency (two-dimensional) representation into a one-dimensional one, and usage of statistical tests, such as the t-test or the nonparametric Wilcoxon test (Signed Rank) (Kohl 2015), to compare each pair of test suites (TTR 1.2 and the other solution). To address the nondeterminism of the algorithms/tools, we again generated test cases with 5 variations in the order of parameters and values, and took the average of these 5 assessments for the statistical tests. Hence, we obtained the points (c_{A_i}, t_{A_i}) and calculated the Euclidean distances from the optimal point (0,0) to (c_{A_i}, t_{A_i}). Then, we checked data normality and, based on the result, we used either the paired, two-sided t-test with α = 0.05 (normal data) or the nonparametric paired, two-sided Wilcoxon test (Signed Rank), or its Asymptotic version, with α = 0.05 (non-normal data).
For the evaluation of cost (PICT, IPO-TConfig, ACTS), we did not need to transform from two dimensions into one because it is a single-dimension problem. The optimal point here is the value 0 and the Euclidean distance from 0 to c_{A_i} (average cost of algorithm A for each instance i, 1 ≤ i ≤ 80) is |0 − c_{A_i}| = |c_{A_i}|. We then performed the statistical evaluation just as in the multi-objective case.
Results, discussion and validity
In this section, we present the outcomes of both assessments of our second controlled experiment. As in the first controlled experiment, to compare TTR 1.2 with IPOG-F, jenny, PICT, IPO-TConfig, and ACTS, we considered four evaluation classes: all, low, medium, and high strengths. Table 16 presents the Euclidean distances of part of the 80 samples (all strengths class only; complete data are in (Balera and Santiago Júnior 2017)) and the average values, \(\overline {x}\). Table 17 presents the results of the analysis of data normality (p-value (p) and skewness) for all evaluation classes. In this table, Sol 1 is the other solution and Sol 2 is TTR 1.2. Figures 12 and 13 present the Q-Q plots and histograms for all strengths, Figs. 14 and 15 for lower strengths, Figs. 16 and 17 for medium strengths, and Figs. 18 and 19 for higher strengths, respectively.
Experiment 2: Q-Q plots. a IPOG-F; b jenny; c PICT; d IPO-TConfig; e ACTS - All Strengths
Experiment 2: Histograms. a IPOG-F; b jenny; c PICT; d IPO-TConfig; e ACTS - All Strengths
Experiment 2: Q-Q plots. a ACTS; b IPO-TConfig; c IPOG-F; d jenny; e PICT - Lower Strengths
Experiment 2: Histograms. a ACTS; b IPO-TConfig; c IPOG-F; d jenny; e PICT - Lower Strengths
Experiment 2: Q-Q plots. a ACTS; b IPO-TConfig; c IPOG-F; d jenny; e PICT - Medium Strengths
Experiment 2: Histograms. a ACTS; b IPO-TConfig; c IPOG-F; d jenny; e PICT - Medium Strengths
Experiment 2: Q-Q plots. a ACTS; b IPO-TConfig; c IPOG-F; d jenny; e PICT - Higher Strengths
Experiment 2: Histograms. a ACTS; b IPO-TConfig; c IPOG-F; d jenny; e PICT - Higher Strengths
Again we note that all these data did not come from a normally distributed population. The nonparametric paired, two-sided Wilcoxon test (Signed Rank) or its variation (Asymptotic) were then applied. Table 18 presents the p-value, p, |z|, and additional information for the classes all and low strengths, while Table 19 shows the results for medium and high strengths. We should mention that in 23 instances (3 with strength = 4, 12 with strength = 5, and 8 with strength = 6) jenny was not able to generate test cases, for some input orders of the parameters, due to an out-of-memory issue. Specifically, jenny failed to finish when the test suite size was more than 1,000 test cases. Similar outcomes happened with IPO-TConfig: even after waiting for about 6 hours, it did not produce any output, and hence the tool did not create test cases in 20 instances (3 with strength = 4, 9 with strength = 5, and 8 with strength = 6). In these cases, we adopted a penalty policy: in order to consider these unsuccessful participants, we doubled the respective measure (average value of the Euclidean distance) obtained by TTR 1.2 and assigned it to jenny and IPO-TConfig. We believe that this is a fair decision because TTR 1.2 managed to finish generating test cases for all 80 instances.
Table 18 Results of the Wilcoxon test
Table 19 Results of the Wilcoxon test (medium and high strengths)
As shown in Table 18, for class all strengths, two Null Hypotheses were rejected: H 0.2 (TTR 1.2 × IPOG-F) and H 0.5 (TTR 1.2 × IPO-TConfig). TTR 1.2 was better (lowest average value of Euclidean distances) than IPO-TConfig but it was worse than IPOG-F. There is no difference between TTR 1.2 and jenny, PICT, and ACTS.
As in controlled experiment 1, TTR 1.2 did not demonstrate good performance for low strengths. There is no difference between TTR 1.2 and IPO-TConfig. In all the other comparisons, the Null Hypothesis was rejected and TTR 1.2 was worse than the other solutions. This can be attributed to the fact that the algorithm focuses on test cases that have parameter interactions generating a large number of t-tuples, which is usually seen in test cases with larger strengths. This is confirmed by the fact that the algorithm gives priority to covering the parameter interaction with the greatest number of t-tuples.
For medium strengths, TTR 1.2 had mixed results. The Null Hypothesis H 0.6 (TTR 1.2 × ACTS) could not be rejected, and our algorithm was better than IPO-TConfig; however, IPOG-F, jenny, and PICT surpassed TTR 1.2.
The greatest advantage of TTR 1.2 turned out to be again for higher strengths. Recall that TTR 1.2 does not create the matrix of t-tuples at the beginning, and this can potentially benefit our solution compared with the other five for higher strengths. Note that TTR 1.2 was better than jenny, PICT, IPO-TConfig, and ACTS. The only exception is the comparison against IPOG-F where the Null Hypothesis, H 0.2, could not be rejected and thus there is no statistical difference between both approaches.
In general, we can say that IPOG-F presented the best performance compared with TTR 1.2, because IPOG-F was better for all strengths, as well as for lower and medium strengths. For higher strengths, there was a statistical draw between both approaches. An explanation for the fact that IPOG-F is better than TTR 1.2 is that TTR 1.2 ends up performing more interactions (comparisons) than IPOG-F. In general, we might say that the efficiency of IPOG-F is better than that of TTR 1.2, which influenced the cost-efficiency result. However, if we look at cost in isolation for all strengths, the average test suite size generated via TTR 1.2 (734.50) is better than that of IPOG-F (770.88).
As we have just stated, for higher strengths, TTR 1.2 is better than two IPO-based approaches (IPO-TConfig and ACTS/IPOG-F2) but there is no difference if we consider our own implementation of IPOG-F and TTR 1.2. This can be explained as follows. The way the array that stores all t-tuples is constructed influences the order in which the t-tuples are evaluated by the algorithm. However, it is not described how this should be done in IPOG-F, leaving it to the developer to define the best way. As the order in which the parameters are presented to the algorithms alters the number of test cases generated, as previously stated, the order in which the t-tuples are evaluated can also generate a certain difference in the final result.
The conclusion of the two evaluations of this second experiment is that our solution is better and quite attractive for the generation of test cases considering higher strengths (5 and 6), where it was superior to basically all other algorithms/tools. Certainly, the main factor contributing to this result is the non-creation of the matrix of t-tuples at the beginning, which allows our solution to be more scalable (higher strengths) in terms of cost-efficiency or cost compared with the other strategies. However, for low strengths, other greedy approaches, like IPOG-F, may be better alternatives.
As before, and by making a comparison between pairs of solutions (TTR 1.2 × other), in both assessments (cost-efficiency and cost), we can say that we have a high conclusion, internal, and construct validity. Regarding the external validity, we believe that we selected a significant population for our study. Detailed explanations have been given in Section 5.1 and are valid here.
In this section we present some relevant studies related to greedy algorithms for CIT. The IPO algorithm (Lei and Tai 1998) is a very traditional solution designed for pairwise testing. Several approaches are based on IPO, such as IPOG, IPOG-D (Lei et al. 2007), IPOG-F, IPOG-F2 (Forbes et al. 2008), IPOG-C (Yu et al. 2013), IPO-TConfig (Williams 2000), ACTS (where IPOG, IPOG-D, IPOG-F, and IPOG-F2 are implemented) (Yu et al. 2013), and CitLab (Cavalgna et al. 2013). All IPO-based proposals have in common the fact that they perform horizontal and vertical growth to construct the final test suite. Moreover, some need two auxiliary matrices, which may decrease their performance by demanding more computer memory. Such algorithms also accomplish exhaustive comparisons within each horizontal extension, which may penalize efficiency.
IPOG-F (Forbes et al. 2008) is an adaptation of the IPOG algorithm (Lei et al. 2007). Through two main steps, horizontal and vertical growth, an MCA is built. Both growths work based on an initial solution. The algorithm is supported by two auxiliary matrices, which may decrease its performance by demanding more computer memory. Moreover, the algorithm performs exhaustive comparisons within each horizontal extension, which may cause longer execution. On the other hand, TTR 1.2 only needs one auxiliary matrix to work and it does not generate, at the beginning, the matrix of t-tuples. These features make our solution better for higher strengths (5, 6), even though we did not find a statistical difference when we compared TTR 1.2 with our own implementation of IPOG-F (Section 6.4).
IPO-TConfig is an implementation of IPO in the TConfig tool (Williams 2000). The TConfig tool can generate test cases based on strengths varying from 2 to 6. However, it is not entirely clear whether the IPOG algorithm (Lei et al. 2007) was implemented in the tool or if another approach was chosen for t-way testing. In our empirical evaluation, TTR 1.2 was superior to IPO-TConfig not only for higher strengths (5, 6) but also for all strengths (from 2 to 6). Moreover, IPO-TConfig was unable to generate test cases in 25% of the instances (strengths 4, 5, 6) we selected.
The ACTS tool (Yu et al. 2013) is one of the most used CIT tools to date. Several variations of IPO are implemented in ACTS: IPOG, IPOG-D (Lei et al. 2007), IPOG-F, and IPOG-F2 (Forbes et al. 2008). The implementation of our algorithm performed better in terms of cost, compared with IPOG-F2/ACTS, for higher strengths. However, both solutions performed similarly when we considered all strengths.
IPOG-C (Yu et al. 2013) generates MCAs considering constraints. It is an adaptation of IPOG where constraint handling is provided via a SAT solver. The greatest contributions are three optimizations that seek to reduce the number of calls to the SAT solver. As IPOG-C is based on IPOG, it accomplishes exhaustive comparisons in the horizontal growth, which may lead to a longer execution. Besides, each t-tuple is evaluated to see whether it is valid or not.
The algorithm implemented in the PICT tool (Czerwonka 2006) has two main phases: preparation and generation. In the first phase, the algorithm generates all t-tuples to be covered. In the second phase, it generates the MCA. The generation of all t-tuples beforehand can often be a drawback, since many tuples require a large amount of space for storage. With respect to the application of the tool, it is best applied to strengths of low value (Yamada et al. 2016). Other tools have been created based on PICT (PictMaster 2017).
The jenny tool is implemented in C (Jenkins 2016). It is a light greedy tool, but one of its limitations is the number of parameters it handles: from 2 to 52. In the controlled experiment we performed, TTR 1.2 was superior to jenny for higher strengths (5, 6), but they presented similar performances for all strengths (from 2 to 6). In 27.5% of the samples (strengths 4, 5, 6), jenny could not create test cases, as mentioned before.
Automatic Efficient Test Generator (AETG) (Cohen et al. 1997) is based on algorithms that use ideas of statistical experimental design theory to minimize the number of tests needed for a specific level of test coverage of the input test space. AETG generates test cases by means of Experimental Designs (ED) (Cochran and Cox 1950) which are statistical techniques used for planning experiments so that one can extract the maximum possible information based on as few experiments as possible. It makes use of its greedy algorithms and the test cases are constructed one at a time, i.e. it does not use an initial solution.
In (Cavalgna et al. 2013), a new tool for generating MCAs with constraint handling support is presented: CitLab. Like ACTS, CitLab has several algorithms for test suite generation: AETG, IPO, and others. The bottom line is that test case generation is only one of the characteristics of the tool. Like ACTS, CitLab does not present a new algorithm, as it just implements algorithms proposed in the literature. Hence, the same limitations of the existing proposals also apply here.
The Feedback Driven Adaptive Combinatorial Testing Process (FDA-CIT) algorithm is shown in (Yilmaz et al. 2014). At each iteration of the algorithm, a verification of the masking of potential defects is accomplished, isolating their probable causes and then generating a new configuration which omits such causes. The idea is that masked defects exist and that the proposed algorithm provides an efficient way of dealing with this situation before test execution. However, there is no assessment of the cost of the algorithm to generate MCAs.
In order to better compare the previous studies with our algorithm, TTR 1.2, in Table 20 we show some main characteristics of all the algorithms/tools. In this table, * means that the characteristic is present, - means that it is not present, and empty (blank space) means that either it is not totally evident that the algorithm/tool has such a feature or it is not applicable.
Table 20 Greedy algorithms/tools for CIT
This paper presented a novel CIT algorithm, called TTR, to generate test cases specifically via the MCA technique. TTR produces an MCA M, i.e. a test suite, by creating and reallocating t-tuples into this matrix M, considering a variable called goal (ζ). TTR is a greedy algorithm for unconstrained CIT.
TTR was implemented in Java and C (TTR 1.2) and we developed three versions of our algorithm. In this paper, we focused on the description of versions 1.1 and 1.2 since version 1.0 was detailed elsewhere (Balera and Santiago Júnior 2015).
We carried out two rigorous evaluations to assess the performance of our proposal. In total, we performed 3,200 executions related to 8 solutions (80 instances × 5 variations × 8). In the first controlled experiment, we compared versions 1.1 and 1.2 of TTR in order to know whether there is a significant difference between both versions of our algorithm. In that experiment, we jointly considered cost (size of test suites) and efficiency (time to generate the test suites) in a multi-objective perspective. We conclude that TTR 1.2 is more adequate than TTR 1.1, especially for higher strengths (5, 6). This is explained by the fact that, in TTR 1.2, we no longer generate the matrix of t-tuples (Θ); rather, the algorithm works by creating and reallocating t-tuples into M one at a time. This benefits version 1.2 so that it can properly handle higher strengths.
Having chosen version 1.2, we conducted another controlled experiment where we confronted TTR 1.2 with five other greedy algorithms/tools for unconstrained CIT: IPOG-F (Forbes et al. 2008), jenny (Jenkins 2016), IPO-TConfig (Williams 2000), PICT (Czerwonka 2006), and ACTS (Yu et al. 2013). In this case, we carried out two evaluations where in the first one we compared TTR 1.2 with IPOG-F and jenny since these were the solutions we had the source code (to precisely measure the time). Moreover, to address a possible evaluation bias in the time measures when comparing TTR 1.2 against jenny (developed in C), we also implemented it in C in addition to the standard implementation in Java. Hence, a cost-efficiency (multi-objective) evaluation was performed. In the second assessment, we did a cost (single objective) evaluation where TTR 1.2 was compared with PICT, IPO-TConfig, and ACTS. The conclusion is as previously stated: TTR 1.2 is better for higher strengths (5, 6) where only in one case our solution is not superior (in the comparison with IPOG-F where we have a draw). The fact of not creating the matrix of t-tuples at the beginning explains this result.
Therefore, considering the metrics we defined in this work and based on both controlled experiments, TTR 1.2 is a better option if we need to consider higher strengths (5, 6). For lower strengths, other solutions, like IPOG-F, may be better alternatives.
Thinking about the testing process as a whole, one important metric is the time to execute the test suite, which may eventually be even more relevant than other metrics. Hence, we need to run multi-objective controlled experiments where we execute all the test suites (TTR 1.1 × TTR 1.2; TTR 1.2 × other solutions), probably assigning different weights to the metrics. We also need to investigate the parallelization of our algorithm so that it can perform even better when subjected to a more complex set of parameters, values, and strengths. One possibility is to use the Compute Unified Device Architecture/Graphics Processing Unit (CUDA/GPU) platform (Ploskas and Samaras 2016). We must also develop another multi-objective controlled experiment addressing the effectiveness (ability to detect defects) of our solution compared with the other five greedy approaches.
Despite this classification, some algorithms/tools are both SAT and greedy-based.
Some authors (Kuhn et al. 2013; Cohen et al. 2003) abbreviate a Mixed-Level Covering Array as CA too. However, as we have made an explicit distinction between Fixed-value and Mixed-Level arrays, we prefer to abbreviate it as MCA. Note that an MCA is naturally a Covering Array. We have just used this abbreviation to stress that our work relates to mixed and not fixed arrays.
Θ is a matrix whose order varies. In other words, TTR knows the number of columns beforehand (|f|), but the number of rows (|C|) depends on the t-way interactions of the parameters' values. During the reallocation process, TTR removes rows until Θ is empty.
ACTS:
Advanced combinatorial test system
AETG:
Automatic efficient test generator
CA:
Covering array
CIT:
Combinatorial interaction test
CUDA:
Compute unified device architecture
GA:
Genetic algorithm
IPOG:
In parameter order general
IPO-TConfig:
In parameter order TConfig
MCA:
Mixed-level covering array
MOA:
Mixed-level orthogonal array
OA:
Orthogonal array
OOP:
Object-oriented programming
PICT:
Pairwise independent combinatorial testing
SA:
Simulated annealing
SWPDC:
Software for the payload data handling computer
TSA:
Tabu search approach
TTR:
T-tuple reallocation
Ahmed, BS (2016) "Test case minimization approach using fault detection and combinatorial optimization techniques for configuration-aware structural testing". Eng Sci Technol, Int J 19(2):737–753. http://www.sciencedirect.com/science/article/pii/S2215098615001706.
Balera, JM, Santiago Júnior VA (2015) T-tuple Reallocation: An algorithm to create mixed-level covering arrays to support software test case generation In: 15th International Conference on Computational Science and Its Applications (ICCSA), 503–517.. Springer International Publishing, Berlin, Heidelberg.
Balera, JM, Santiago Júnior VA (2016) "A controlled experiment for combinatorial testing" In: Proceedings of the 1st Brazilian Symposium on Systematic and Automated Software Testing (SAST), 2:1–2:10.. ACM, New York, NY, USA. http://doi.acm.org/10.1145/2993288.2993289.
Balera, JM, Santiago Júnior VA (2017) Data set. https://www.dropbox.com/sh/to3a47ncqpliq5l/AACj34JQ9S1I4fzQJf0xPZfva?dl=0. Accessed 17 Oct 2016.
Bryce, RC, Colbourn CJ (2006) "Prioritized interaction testing for pair-wise coverage with seeding and constraints". Inf Softw Technol 48(10):960–970.
Cochran, WG, Cox GM (1950) "Experimental designs". John, Wiley & Sons, New York; Chichester.
Cohen, MB, Dalal SR, Fredman ML, Patton GC (1997) "The AETG system: an approach to testing based on combinatorial design". IEEE Trans Softw Eng 23(7):437–444.
Cohen, MB, Dwyer MB, Shi J (2008) "Constructing interaction test suites for highly-configurable systems in the presence of constraints: A greedy approach". IEEE Trans Softw Eng 34(5):633–650.
Cohen, MB, Gibbons PB, Mugridge WB, Colbourn CJ, Collofello JS (2003) "A variable strength interaction testing of components" In: Proceedings of 27th Annual Int. Comp. Software and Applic. Conf. (COMPSAC), 413–418.. IEEE, USA.
Campanha, DN, Souza SRS, Maldonado JC (2010) "Mutation testing in procedural and object-oriented paradigms: An evaluation of data structure programs" In: Brazilian Symposium on Software Engineering, 90–99.. IEEE, USA.
Cavalgna, A, Gargantini A, Vavassori P (2013) "Combinatorial interaction testing with CitLab" In: Proceedings of the 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation, 376–382.. IEEE, New York.
Czerwonka, J (2006) "Pairwise testing in the real world: Practical extensions to test-case generators" In: Proceedings 24th Pacific Northwest Software Quality Conference, 285–294.. Academic Press, Portland.
Dalal, SR, Jain A, Karunanithi N, Leaton JM, Lott CM, Patton GC, Horowitz B (1999) "Model-based testing in practice" In: Proceedings 21st International Conference on Software Engineering (ICSE'99), 285–294.. ACM, New York.
Delamaro, ME, de Lourdes dos Santos Nunes F, de Oliveira RAP (2013) "Using concepts of content-based image retrieval to implement graphical testing oracles". Softw Test Verif Reliab 23:171–198. doi:10.1002/stvr.463.
Filho, RAM, Vergilio SR (2015) "A mutation and multi-objective test data generation approach for feature testing of software product lines" In: 29th Brazilian Symposium on Software Engineering, Belo Horizonte.
Forbes, M, Lawrence J, Lei Y, Kacker RN, Kuhn DR (2008) "Refining the in-parameter-order strategy for constructing covering arrays". J Res Natl Inst Stand Technol 113(5):287–297.
Garvin, BJ, Cohen MB, Dwyer MB (2011) "Evaluating improvements to a meta-heuristic search for constrained interaction testing". Empirical Soft Eng 16(1):61–102.
Hernandez, LG, Valdez NR, Jimenez JT (2010) "Construction of mixed covering arrays of variable strength using a tabu search approach". Springer International Publishing, Berlin, Heidelberg.
Huang, CY, Chen CS, Lai CE (2016) "Evaluation and analysis of incorporating fuzzy expert system approach into test suite reduction". Inf Softw Technol 79:79–105. http://www.sciencedirect.com/science/article/pii/S0950584916301197.
Jenkins, B (2016) "Jenny: A pairwise tool". http://burtleburtle.net/bob/math/jenny.html. Accessed 6 June 2016.
Khan, SUR, Lee SP, Ahmad RW, Akhunzada A, Chang V (2016) "A survey on test suite reduction frameworks and tools". Int J Inf Manag 36(6, Part A):963–975. http://www.sciencedirect.com/science/article/pii/S0268401216303437.
Kohl, M (2015) "Introduction to statistical data analysis with R". bookboon.com, London.
Kuhn, DR, Wallace DR, Gallo AM (2004) "Software fault interactions and implications for software testing". IEEE Trans Software Eng 30(6):418–421. http://doi.ieeecomputersociety.org/10.1109/TSE.2004.24.
Kuhn, RD, Kacker RN, Lei Y (2013) "Introduction to Combinatorial Testing". Chapman and Hall/CRC, USA.
Lei, Y, Kacker R, Kuhn DR, Okun V, Lawrence J (2007) "IPOG: A general strategy for t-way software testing".
Lei, Y, Tai K-C (1998) "In-Parameter-Order: A test generation strategy for pairwise testing" In: Proceedings of the IEEE Int. Symp. on High-Assurance Syst. Eng. (HASE), 254–261.. IEEE Computer Society Press, USA.
Mathur, AP (2008) "Foundations of software testing". Dorling Kindersley (India), Pearson Education in South Asia, Delhi, India.
NIST National Institute of Standards and Technology (2015) "Automated combinatorial testing for software (ACTS)". http://csrc.nist.gov/groups/SNS/acts/. Accessed 29 July 2017.
Oliveira, RAP (2017) "Test oracles for systems with complex outputs: the case of TTS systems". PhD Thesis, Univesi-dade de São Paulo, Brazil.
Pairwise (2017) "Pairwise Testing: Combinatorial Test Case Generation". http://www.pairwise.org/tools.asp. Accessed 29 July 2017.
Petke, J, Cohen MB, Harman M, Yoo S (2015) "Practical combinatorial interaction testing: Empirical findings on efficiency and early fault detection". IEEE Trans Softw Eng 41(9):901–924.
PictMaster (2017) "Combinatorial testing tool PictMaster". https://osdn.net/projects/pictmaster/. Accessed 29 July 2017.
Ploskas, N, Samaras N (2016) "GPU Programming in MATLAB". Morgan Kaufmann, Boston. http://www.sciencedirect.com/science/article/pii/B9780128051320099951.
Qu, X, Cohen MB, Woolf KM (2007) "Combinatorial interaction regression testing: A study of test case generation and prioritization" In: Proc. IEEE Int. Conf. Softw. Maintenance, 255–264.. IEEE Computer Society Press, USA.
Santiago Júnior, VA (2011) "Solimva: A methodology for generating model-based test cases from natural language requirements and detecting incompleteness in software specifications". PhD thesis, Instituto Nacional de Pesquisas Espaciais (INPE).
Santiago Júnior, VA, Silva FEC (2017) "From Stat- echarts into Model Checking: A Hierarchy-based Translation and Specification Patterns Properties to Generate Test Cases" In: the 2nd Brazilian Symposium, 2017, Fortaleza. Proceedings of the 2nd, Brazilian Symposium on Systematic and Automated Software Testing - SAST, 10–20.. ACM Press, New York.
Santiago Júnior, VA, Vijaykumar NL (2012) "Generating model-based test cases from natural language requirements for space application software". Softw Qual J 20(1):77–143. doi:10.1007/s11219-011-9155-6.
Schroeder, PJ, Korel B (2000) Black-box test reduction using input-output analysis. In: Harold M (ed)Proceedings of the 2000 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '00), 173–177.. ACM, New York.
Segall, I, Tzoref-Brill R, Farchi E (2011) Using binary decision diagrams for combinatorial test design In: Proceedings of the 2011 International Symposium on Software Testing and Analysis (ISSTA '11), 254–264.. ACM, New York.
Shapiro, SS, Wilk MB (1965) "An analysis of variance test for normality (complete samples)". Biometrika 52(3-4):591.
MathSciNet Article MATH Google Scholar
Shiba, T, Tsuchiya T, Kikuno T (2004) "Using artificial life techniques to generate test cases for combinatorial testing" In: Proceedings 28th Int. Comput. Softw. Appl. Conf., Des. Assessment Trustworthy Softw.-Based Syst, 72–77.. IEEE Computer Society Press, USA.
Stinson, DR (2004) "Combinatorial Designs: Constructions and Analysis". Springer, New York.
Tai, KC, Lei Y (2002) "A test generation strategy for pairwise testing". IEEE Trans Softw Eng 28(1):109–111.
MathSciNet Article Google Scholar
Tzoref-Brill, R, Wojciak P, Maoz S (2016) "Visualization of combinatorial models and test plans" In: Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), 144–154.. IEEE, USA.
Williams, AW (2000) "Determination of test configurations for pairwise interaction coverage" In: Testing of Communicating Systems: Tools and Techniques, IFIP TC6/WG6.1 13th International Conference on Testing Communicating Systems (TestCom 2000), August 29 - September 1, 2000, 59–74, Ottawa, Canada.
Wohlin, C, Runeson P, Host M, Ohlsson MC, Regnell B, Wesslén A (2012) "Experimentation in Software Engineering. Springer-Verlag Berlin Heidelberg, Germany.
Yamada, A, Kitamura T, Artho C, Choi E, Oiwa Y, Biere A (2015) "Optimization of combinatorial testing by incremental SAT solving". IEEE, USA.
Yamada, A, Biere A, Artho C, Kitamura T, Choi EH (2016) "Greedy combinatorial test case generation using unsatisfiable cores" In: Proceedings of 2016 31st IEEE/ACM International, Conference on Automated Software Engineering (ASE), 614–624.. IEEE, USA.
Yilmaz, C, Cohen MB, Porter A (2014) "Reducing masking effects in combinatorial interaction testing: A feedback driven adaptative approach". IEEE Trans Softw Eng:43–66.
Yoo, S, Harman M (2012) "Regression testing minimization, selection and prioritization: A survey". Softw Test Verif Reliab 22(2):67–120. https://dl.acm.org/citation.cfm?id=2284813.
Yu, L, Lei Y, Nourozborazjany M, Kacker RN, Kuhn DR (2013) "An efficient algorithm for constraint handling in combinatorial test generation" In: 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation, 242–251.. IEEE, Nova York.
Yu, L, Lei Y, Kacker RN, Kuhn DR (2013) "ACTS: A combinatorial test generation tool" In: Proceedings on 2013 IEEE Sixth International, Conference on Software Testing, Verification and Validation, 370–375.. IEEE, Nova York.
Acknowledgements
The authors would like to thank the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for supporting this research and Leoni Augusto Romain da Silva for his support in running part of the second controlled experiment.
Funding
This work was partially funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) through a scholarship granted to the first author (JMB).
Availability of data and materials
Full data obtained during the experiments are in (Balera and Santiago Júnior 2017).
Author information
Laboratório Associado de Computação e Matemática Aplicada, Instituto Nacional de Pesquisas Espaciais (INPE), Av. dos Astronautas, 1758, São José dos Campos, SP, Brazil
Juliana M. Balera & Valdivino A. de Santiago Júnior
Contributions
JMB worked on the definitions and implementations of all three versions of the TTR algorithm, and carried out the two controlled experiments. VASJ worked on the definitions of the TTR algorithm, and on the planning, definitions, and executions of the two controlled experiments. All authors contributed to all sections of the manuscript. All authors read and approved the submitted manuscript.
Correspondence to Juliana M. Balera.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Balera, J., Santiago Júnior, V. An algorithm for combinatorial interaction testing: definitions and rigorous evaluations. J Softw Eng Res Dev 5, 10 (2017). https://doi.org/10.1186/s40411-017-0043-z
Keywords: Combinatorial interaction testing · Mixed-value covering array · Controlled experiment
Decitabine enhances targeting of AML cells by NY-ESO-1-specific TCR-T cells and promotes the maintenance of effector function and the memory phenotype
Synat Kang (ORCID: 0000-0001-6898-5289)1, Lixin Wang1, Lu Xu1, Ruiqi Wang2, Qingzheng Kang1, Xuefeng Gao (ORCID: 0000-0002-1904-392X)1,3 & Li Yu (ORCID: 0000-0001-6872-2665)1
Oncogene volume 41, pages 4696–4708 (2022)
Subjects: Acute myeloid leukaemia · Diagnostic markers
Abstract
NY-ESO-1 is a well-known cancer-testis antigen (CTA) with re-expression in numerous cancer types, but its expression is suppressed in myeloid leukemia cells. Patients with acute myeloid leukemia (AML) receiving decitabine (DAC) exhibit induced expression of NY-ESO-1 in blasts; thus, we investigated the effects of NY-ESO-1-specific TCR-engineered T (TCR-T) cells combined with DAC against AML. NY-ESO-1-specific TCR-T cells could efficiently eliminate AML cell lines (including U937, HL60, and Kasumi-1 cells) and primary AML blasts in vitro by targeting the DAC-induced NY-ESO-1 expression. Moreover, the incubation of T cells with DAC during TCR transduction (designated as dTCR-T cells) could further enhance the anti-leukemia efficacy of TCR-T cells and increase the generation of a memory-like phenotype. The combination of DAC with NY-ESO-1-specific dTCR-T cells showed superior anti-tumor efficacy in vivo and prolonged the survival of an AML xenograft mouse model, with three out of five mice showing complete elimination of AML cells over 90 days. This outcome was correlated with enhanced expression of IFN-γ and TNF-α, and an increased proportion of central memory T cells (CD45RO+CD62L+ and CD45RO+CCR7+). Taken together, these data provide preclinical evidence for the combined use of DAC and NY-ESO-1-specific dTCR-T cells for the treatment of AML.
Introduction
Despite therapeutic advances in the treatment of acute myeloid leukemia (AML) in the past few years, the overall survival of AML patients is still poor due to primary and secondary resistance [1]. Allogeneic hematopoietic stem cell transplantation (allo-HSCT) can induce remission in AML patients through the T cell-mediated graft-versus-leukemia effect, but the success of this approach is limited, with relapse in approximately 30–50% of cases depending mainly on the disease status at the time of transplantation [2,3,4]. Furthermore, leukemia cells can also evade the immune system in AML patients after allo-HSCT via several mechanisms [5]. The efforts to reduce toxicity while preserving the anti-tumor efficacy of cytotoxic T cells, as well as to export this therapeutic opportunity beyond the allo-HSCT context, have driven the development of engineered cytotoxic T cells. The redirection of T cells with chimeric antigen receptors (CAR) and T cell receptors (TCR) has been reported to overcome the limitations of allo-HSCT in the treatment of relapsed and refractory leukemia [6,7,8]. CAR-T cells can recognize and kill tumor cells by binding target cell surface antigens in an MHC-unrestricted manner, which is the most common strategy for hematological treatment [9,10,11]. However, the absence of leukemia-specific surface antigens for CAR-T cells to target limits their application in leukemia immunotherapy [12]. Alternatively, TCR-transduced T cells (TCR-T) can recognize intracellular antigens processed and presented by major histocompatibility complex (MHC) molecules, and have yielded encouraging results in preclinical and clinical studies [13,14,15].
A variety of cancer-testis antigens (CTAs) have been recognized as potential targets for cancer immunotherapy, given their restricted expression in somatic tissues and aberrant expression in malignant cells [16, 17]. Among the CTA family, NY-ESO-1 is of particular interest, and the safety and efficacy of NY-ESO-1-specific immunotherapies have been demonstrated in a variety of tumors such as sarcoma, melanoma, myeloma, and non-small cell lung cancer [16, 18,19,20,21]. However, the expression of NY-ESO-1 and many other CTAs is suppressed in myeloid leukemia cells due to promoter hypermethylation [22,23,24], which is the main hurdle for immunotherapy for myeloid malignancies that targets CTAs. Demethylating agents, such as 5-azacytidine (5AC) and 5-aza-2'-deoxycytidine (decitabine, DAC), can inhibit DNA methyltransferases, thereby leading to re-expression of tumor-suppressor genes, and are approved by the FDA to treat myelodysplastic syndromes (MDS) and AML. Our study and others have previously reported that DAC could induce the expression of CTAs, including NY-ESO-1, in circulating myeloid cells and leukemic blasts in MDS/AML patients [23, 25,26,27,28,29,30]. Furthermore, the induction of NY-ESO-1 expression could activate a cytotoxic response from HLA-compatible NY-ESO-1-specific T lymphocytes [25], which has laid the basis for combining CTA-specific immunotherapy with demethylating agents for the treatment of AML. For example, NY-ESO-1 vaccination combined with DAC for the treatment of AML has yielded positive results in a phase I clinical study [27]. Although DAC-induced NY-ESO-1-specific cytotoxic T lymphocytes (CTLs) have been used in clinical settings, CTLs express wild-type TCRs, which have low affinity for their targets under tumor-escape conditions [31, 32].
In this study, we show that the combined use of DAC with high-affinity NY-ESO-1-specific TCR-T cells has a high efficacy against AML. Moreover, the treatment of TCR-T cells with DAC during transduction (designated as dTCR-T) can further enhance their anti-leukemia efficacy in vivo, and increase the memory phenotype with long-term persistence and anti-leukemia durability.
Results
Decitabine induces NY-ESO-1 expression in leukemia cell lines and primary AML blasts by demethylating the DNA promoter region
We treated four AML cell lines, including U937, HL60, Kasumi-1, and THP-1 cells, with various doses (100–1000 nM) of DAC for 72 h and observed an up-regulation of NY-ESO-1 in the U937, HL60, and Kasumi-1 cell lines at both the mRNA and protein levels (Supplementary Fig. S1A–F). However, the expression of NY-ESO-1 did not change in THP-1 cells, suggesting a poor response to DAC (Supplementary Fig. S1G, H). By treating the AML cell lines with DAC at doses ranging from 100 nM to 1000 nM, we observed that the expression level of NY-ESO-1 peaked at 200 nM for the U937, HL60, and Kasumi-1 cell lines. By monitoring NY-ESO-1 expression for 10 days after treatment with 200 nM DAC, the highest level was detected at 3 days for the U937, HL60, and Kasumi-1 cell lines, and significantly elevated levels (relative to untreated cells) were maintained until 10 days after exposure for the U937 and HL60 cell lines (Fig. 1A–C). The THP-1 cell line showed only a moderate increase in the NY-ESO-1 mRNA level at 10 days of DAC treatment (Fig. 1D). The protein level of NY-ESO-1 after 3 days of DAC exposure was consistent with the mRNA results (Fig. 1E). Thus, DAC induces NY-ESO-1 expression in AML cells in a dose- and time-dependent manner.
Fig. 1: Decitabine induces NY-ESO-1 expression in AML cell lines through DNA demethylation.
RT-PCR analysis of NY-ESO-1 levels in the AML cell lines (A) U937, (B) HL60, (C) Kasumi-1, and (D) THP-1 over 10 days after treatment with 200 nM DAC. E Western blot analysis of the NY-ESO-1 protein level in AML lines before and 3 days after DAC treatment. The multiple myeloma cell line U266 was used as a positive control for NY-ESO-1 expression. F RT-PCR analysis of NY-ESO-1 mRNA levels in primary AML blasts before and after DAC treatment. The data in A–D and F are presented as mean ± sd (n = 3). Two-tailed unpaired t tests were used to compare the pre-treatment with post-treatment groups. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001. Western blot analysis of the DNMT3a level in AML cell lines (G) and patient blast samples (H) before and after DAC treatment. The membranes were incubated with anti-DNMT3a, and β-actin was used as a loading control. Bisulfite sequencing analysis of the methylation status of NY-ESO-1 promoters in the AML cell lines (I) U937, (J) HL60, (K) Kasumi-1, and (L) THP-1 treated with 200 nM DAC for 72 h (n = 8).
Furthermore, we assessed NY-ESO-1 expression in primary blasts isolated from the bone marrow of four AML patients who were treated with DAC. Patient characteristics and chemotherapy regimens are presented in Table 1. A significant increase in the NY-ESO-1 mRNA level was detected in the primary blasts of three AML patients (patient nos. 1, 3, and 4; Fig. 1F), but not in patient no. 2, who had achieved complete remission (<5% AML blasts in the bone marrow, normal blood cell counts, and absence of any disease signs or symptoms) after the chemotherapy. In addition, we assessed the safety of combining high-affinity TCR-T cells with DAC using normal human bone marrow and three normal cell lines (HK-2, proximal tubule epithelial cells; NCM460, colon mucosal epithelial cells; and MCF10A, breast epithelial cells), and found that NY-ESO-1 expression was low in these normal cells and not affected by DAC treatment (Supplementary Fig. S2A–H). Thus, the effect of DAC on up-regulating NY-ESO-1 expression is restricted to AML cells.
The DNA methyltransferase gene DNMT3a encodes an epigenetic regulator that mediates de novo methylation of CpG dinucleotides. We found that DNMT3a expression was significantly reduced by DAC treatment in the U937, HL60, and Kasumi-1 cell lines, but not in the THP-1 cell line (Fig. 1G), indicating a differential susceptibility of AML subclones to DAC treatment among these cells. In the bone marrow samples, DNMT3a expression was significantly reduced in the three AML patients (nos. 1, 3, 4) who showed no remission after DAC treatment, but not in patient no. 2 (who achieved complete remission) or in a healthy donor (Fig. 1H). In addition, DAC did not affect DNMT3a expression in normal human bone marrow or the normal cell lines (HK-2, NCM460, and MCF10A) (Supplementary Fig. S2M). Moreover, bisulfite sequencing analysis showed that DAC significantly induced demethylation of the NY-ESO-1 promoter region in the U937, HL60, and Kasumi-1 cell lines (Fig. 1I–K), but barely affected the methylation status of the NY-ESO-1 promoter in THP-1, HK-2, MCF10A, NCM460, and normal bone marrow cells (Fig. 1L; Supplementary Fig. S2I–L).
Taken together, these data reveal that the up-regulation of NY-ESO-1 in AML cells is associated with DAC-induced NY-ESO-1 promoter hypomethylation. Moreover, normal cells and the THP-1 cell line do not respond to the demethylation effect of DAC.
Decitabine enhances NY-ESO-1-specific TCR-T cell-mediated recognition and killing of AML cells in vitro
Given the ability of DAC to induce NY-ESO-1 expression in AML cells, we investigated whether DAC could increase the recognition of AML cells by NY-ESO-1-specific TCR-T cells. TCR expression in CD3+ and CD8+ cells confirmed the high transduction efficiency of NY-ESO-1-specific TCR-T cells (Supplementary Fig. S3).
Our previous study demonstrated that high-affinity 1G4 TCR-T cells (KD = 1.07 μM) showed enhanced killing activity compared with wild-type 1G4 TCR-T cells (KD = 32 μM) [32]. Consistent with that study, we found here that high-affinity TCR-T cells (ha-TCR-T; hereafter referred to as TCR-T in other sections) combined with DAC retained superior killing of AML cells relative to wild-type TCR-T (wt-TCR-T) cells. Cytokine secretion and cytotoxicity were higher for ha-TCR-T than wt-TCR-T cells against U937-A2+, HL60-A2+, and Kasumi-1-A2+ cells, but not against THP-1-A2+ cells (Fig. 2A–L). IFN-γ and TNF-α were secreted by ha-TCR-T and wt-TCR-T cells co-cultured with DAC-treated U937-A2+, HL60-A2+, and Kasumi-1-A2+ cells, whereas low cytokine secretion was detected in the untreated controls. The level of cytokine secretion in response to the targets was higher in ha-TCR-T cells than in wt-TCR-T cells (Fig. 2A–C, E–G). In addition, ha-TCR-T and wt-TCR-T cells secreted higher levels of IFN-γ and TNF-α in response to DAC-treated U937-A2+ cells than in response to DAC-treated HL60-A2+ and Kasumi-1-A2+ cells. Neither ha- nor wt-TCR-T cells cultured with THP-1-A2+ cells showed activation, with only minimal cytokine secretion with or without DAC exposure (Fig. 2D, H). Thus, cytokine production by TCR-T cells correlates with NY-ESO-1 expression by target cells.
Fig. 2: NY-ESO-1-specific high-affinity TCR-T cells kill AML cell lines by recognizing decitabine-induced NY-ESO-1.
A–D ELISpot analysis of IFN-γ secretion by 2 × 10³ NT-T, GFP-T, wt-TCR-T, or ha-TCR-T cells that were co-cultured with either untreated or DAC-treated target cells, including U937-A2+, HL60-A2+, Kasumi-1-A2+, and THP-1-A2+ cells, at an effector-to-target (E:T) ratio of 1:10 for 20 h. E–H ELISA analysis of TNF-α expression by 1 × 10⁵ NT-T, GFP-T, wt-TCR-T, or ha-TCR-T cells that were stimulated by untreated or DAC-treated target cells at an E:T ratio of 5:1 for 20 h. The data in (A–H) are presented as mean ± sd (n = 3). Statistical comparisons between two groups were determined by two-tailed unpaired t tests. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001; ns not significant. I–L Specific lysis of target cells was measured by the LDH-release assay at different E:T ratios. The data are presented as mean ± sd (n = 3).
The LDH assay was performed to evaluate the cytotoxicity of NY-ESO-1-specific ha-TCR-T and wt-TCR-T cells against AML cells at different E:T ratios (Fig. 2I–L). Untransduced T cells (NT-T), wt-TCR-T, and ha-TCR-T cells that were cultured with untreated target cells showed low cytolytic activity. Specific lysis of U937-A2+, HL60-A2+, and Kasumi-1-A2+ cells by NY-ESO-1-specific ha-TCR-T and wt-TCR-T cells was only observed when the target cells were treated with DAC (Fig. 2I–K). In addition, DAC did not trigger specific lysis of THP-1-A2+ cells by either type of NY-ESO-1-specific TCR-T cells (Fig. 2L). Taken together, these data indicate that DAC can promote the recognition and killing of AML cells by NY-ESO-1-specific TCR-T cells in vitro.
Notably, co-culturing NT-T cells with DAC-treated U937-A2+ cells showed minor activation (Fig. 2A, E, I). We speculated that DAC treatment might up-regulate other antigens on AML cells, including NKG2D ligands (NKG2DL) [38,39,40]. A subset of the T cells in bulk NT-T comprised cytotoxic T cells (CD8+, CD4+) and cells bearing NK-cell receptors (CD56+, NKG2D+) (Supplementary Fig. S4A–C). Therefore, the increased lysis of U937-A2+ cells by NT-T cells after DAC treatment may be associated with the up-regulation of NKG2DL (Supplementary Fig. S4D–F), thereby facilitating recognition and killing by T cells bearing the NKG2D receptor.
The enhanced anti-leukemia activity of NY-ESO-1-specific TCR-T is MHC-dependent
To determine whether the anti-leukemia activity of NY-ESO-1-specific TCR-T cells was MHC-dependent, we incubated DAC-treated U937-A2+, HL60-A2+, and Kasumi-1-A2+ cells with an MHC class I mAb (W6/32) or isotype control (IgG2a). A high level of TNF-α secretion was observed for TCR-T cells incubated with DAC-treated target cells without the MHC class I mAb, compared with NT-T cells (Supplementary Fig. S5A–C). After blocking the interaction between the peptide/HLA complex and the TCR with the MHC class I antibody, NY-ESO-1-specific TCR-T cells co-cultured with DAC-treated target cells showed a significant reduction in TNF-α secretion. No significant change in TNF-α secretion was observed in cultures with untreated target cells (Supplementary Fig. S5D–F). Thus, the increased TNF-α production of TCR-T cells induced by DAC depends on TCR recognition of the peptide–MHC complex.
NY-ESO-1-specific TCR-T cells recognize and kill primary leukemia blasts derived from DAC-treated AML patients
We further assessed the cytolytic function of NY-ESO-1-specific donor-derived TCR-T cells against primary AML blasts derived from the bone marrow of four AML patients (Table 1). A NY-ESO-1-specific donor-derived TCR-T cell response was observed against the primary blasts of patient no. 1 with the HLA-A2+ genotype (Fig. 3A). The co-culture of NY-ESO-1-specific donor-derived TCR-T cells with DAC-treated primary blasts produced a significantly higher level of IFN-γ than co-culture with untreated primary blasts. A low TCR-T response was detected against the bone marrow of patient no. 2, who carried the HLA-A2+ genotype and had achieved complete remission (Fig. 3B). Moreover, there was only a weak (non-specific) response against the blasts of patients no. 3 and no. 4, who carried the HLA-A2- genotype (Fig. 3C, D). Thus, the response of NY-ESO-1-specific TCR-T cells to DAC-treated AML cells is restricted by HLA.
Fig. 3: NY-ESO-1-specific donor-derived T cells effectively kill decitabine-treated primary AML blasts in vitro.
IFN-γ ELISpot analysis was used to determine the magnitude of the NY-ESO-1-specific donor-derived T cell response to AML blasts from four patients. Briefly, 2 × 10⁴ AML blasts from (A) patient #1 (HLA-A2+), (B) patient #2 (HLA-A2+; complete remission), (C) patient #3 (HLA-A2-), and (D) patient #4 (HLA-A2-) were co-cultured with either NT-T or TCR-T cells at an effector-to-target (E:T) ratio of 1:10 for 20 h. The data are presented as mean ± sd (n = 3). Statistical comparisons between two groups were determined by two-tailed unpaired t tests. *P < 0.05; ***P < 0.001; ****P < 0.0001; ns not significant.
Decitabine exposure during TCR transduction further enhances the anti-leukemia efficacy of NY-ESO-1-specific TCR-T cells and promotes the development of the memory phenotype
It was recently demonstrated that CAR-T cells treated with low-dose DAC exhibit improved expansion, enhanced cytotoxicity and cytokine production, and reduced exhaustion after antigen exposure [41]. Based on these findings, we investigated whether DAC exposure during TCR transduction could enhance the proliferation, viability, longevity, and anti-leukemia activity of TCR-T cells. To promote long-term TCR expression and increase memory-associated T cells, we treated activated T cells with low-dose DAC together with lentiviral particles encoding the TCR recognizing the NY-ESO-1157-165 peptide. Lentiviral transduction, an efficient method for delivering transgenes into mammalian cells, achieves high transduction efficiency in various cell types, including CTLs and PBLs [42]. On day three of TCR transduction, the cultures were treated with DAC at doses ranging from 10 to 1000 nM (Supplementary Fig. S6A); these cells were designated as dTCR-T cells. Low-dose DAC (10–50 nM) had little or no effect on the proliferation (Supplementary Fig. S6B) and viability of dTCR-T cells (Supplementary Fig. S6C), and no effect on the generation of the TCR-positive population (Supplementary Fig. S6D, E). By contrast, high-dose DAC (200 nM or higher) was toxic, significantly reducing the proliferation and viability of dTCR-T cells. The DNMT3a protein level was significantly reduced in dTCR-T cells compared with TCR-T cells after DAC treatment (Supplementary Fig. S6F), confirming the demethylation effect of DAC. Regarding phenotypic changes, DAC (50 nM) increased the proportions of activated T cells (CD3+CD25+; Fig. 4A), cytotoxic phenotypes (CD8+ T cells and CD4+ T cells; Fig. 4B, C), and central memory-like phenotypes (CD45RO+CD62L+ and CD45RO+CCR7+ cells; Fig. 4D–G). Moreover, dTCR-T cells secreted higher levels of IFN-γ (Supplementary Fig. S6G) and TNF-α (Supplementary Fig. S6H) than TCR-T cells in the absence of target cells, indicating an enhanced non-specific killing capacity.
Fig. 4: dTCR-T cells exhibit an improved memory phenotype under NY-ESO-1-specific stimulation.
Phenotypic representation of (A) activation-associated CD3+CD25+ T cells, cytotoxic (B) CD8+ and (C) CD4+ cells, central memory (D) CD45RO+CD62L+ T cells, and (E) CD45RO+CCR7+ T cells of NT-T, TCR-T, and dTCR-T cells after 12 days of bulk cell culture (three healthy donors). F, G Pie charts showing the proportions of T-cell subsets, including central memory (CM; CD45RO+CD62L+/CCR7+), naive (CD45RO-CD62L+/CCR7+), effector memory (EM; CD45RO+CD62L-/CCR7-), and highly differentiated effector memory (EMRA; CD45RO-CD62L-/CCR7-) cells from panels D and E. The data are presented as mean ± sd (n = 3). Statistical comparisons between the dTCR-T and TCR-T groups were determined by two-tailed unpaired t tests. *P < 0.05; **P < 0.01; ***P < 0.001.
We assessed the in vitro functional activities of TCR-T cells and dTCR-T cells in the presence of DAC (50 nM) by ELISA and ELISpot assays. The production of IFN-γ and TNF-α was further increased in dTCR-T cells cultured with DAC-treated AML cells (except for THP-1-A2+) (Fig. 5A, B; Supplementary Fig. S7). Consistent with the results of the cytokine release assays, dTCR-T cells exhibited enhanced in vitro cytotoxicity against DAC-treated U937-A2+ cells, but not THP-1-A2+ cells (Fig. 5C, D). Notably, dTCR-T cells demonstrated a high level of IFN-γ secretion against DAC-treated U937-A2+ cells, even at low E:T ratios (1:10, 1:20, 1:30, and 1:60), compared with TCR-T cells of equivalent TCR affinity (Fig. 5E). No response was observed against untreated target cells (Fig. 5F). These results indicate that DAC treatment during TCR transduction can enhance the anti-leukemia activity of NY-ESO-1-specific dTCR-T cells against AML and promote the development of memory-like phenotypes.
Fig. 5: dTCR-T cells exhibit enhanced anti-leukemia cytotoxicity under NY-ESO-1-specific stimulation.
ELISA analysis of (A) IFN-γ and (B) TNF-α secretion by the bulk cell culture of NT-T, TCR-T, and dTCR-T cells (from three healthy donors) that were stimulated by 2 × 10⁴ target cells (U937-A2+, DAC-treated or untreated, and THP-1-A2+, DAC-treated or untreated) at an effector-to-target (E:T) ratio of 5:1 for 20 h. C, D Increased cytotoxicity of dTCR-T cells against DAC-treated or untreated AML target cells. A total of 1 × 10⁵ effector cells of the bulk cell culture of NT-T, TCR-T, and dTCR-T were stimulated with DAC-treated or untreated target cells at an E:T ratio of 5:1 for 20 h. E, F Enhanced cytokine secretion of dTCR-T cells against DAC-treated or untreated 2 × 10⁴ U937-A2+ cells at serial E:T ratios (1:10, 1:20, 1:30, and 1:60) for 20 h. The data are presented as mean ± sd (n = 3). Statistical comparisons between the dTCR-T and TCR-T groups were determined by two-tailed unpaired t tests. *P < 0.05; **P < 0.01; ***P < 0.001.
NY-ESO-1-specific dTCR-T cells prolong the survival of AML xenograft mouse model with enhanced effector function and memory phenotype production
To investigate the efficacy of NY-ESO-1-specific TCR-T cells against DAC-treated AML cells in vivo, we established an AML xenograft mouse model using GFP-encoding U937-A2-luciferase+ cells (Fig. 6A). The NY-ESO-1 protein was detected in tumor tissues derived from DAC-treated mice (3 days after DAC treatment), but was undetectable in untreated tissues (Fig. 6B). TCR-positive cells comprised ~57% of the bulk cell cultures of TCR-T cells and dTCR-T cells (Supplementary Fig. S6D). Tumor growth was uncontrolled in mice that received PBS, NT-T, TCR-T, or dTCR-T cells without DAC (Fig. 6C–E). The administration of DAC alone slowed the tumor growth rate. Among the combined treatment regimens, mice treated with NT-T cells plus DAC showed further reduced tumor growth compared with DAC alone. The combination of DAC with TCR-T cells significantly suppressed tumor development, but tumor recurrence appeared around 1 month after treatment (day 43; Fig. 6C–E). The combination of dTCR-T plus DAC showed a superior tumor-suppressive effect compared with the other treatment regimens. On day 61, three out of five mice treated with NY-ESO-1-specific dTCR-T cells plus DAC were tumor-free, as revealed by IVIS imaging, while most mice in the TCR-T plus DAC group exhibited signs of tumor relapse or recurrence (Fig. 6E). Moreover, the survival of mice in the dTCR-T plus DAC group was significantly prolonged compared with the TCR-T plus DAC group and the other control groups (Fig. 6F). Interestingly, three out of five mice in the dTCR-T plus DAC group showed a complete response until the end of the study (over 90 days).
Fig. 6: NY-ESO-1-specific dTCR-T cells exhibit enhanced in vivo anti-tumor activity in the decitabine-treated AML xenograft mouse model.
A In vivo experimental layout. A total of 3 × 10⁶ fluorescence-expressing U937-A2-luciferase+ cells in PBS were s.c. transferred to NCG mice. Tumor growth was monitored by BLI with the depicted regimen. After engraftment was confirmed on day 7, the animals were organized into eight groups according to tumor size (average ~80–100 mm³), including four groups that were treated with PBS (n = 5), NT-T cells (n = 5), TCR-T cells (n = 5), or dTCR-T cells (n = 5), and another four groups that were treated with DAC alone (n = 5) or in combination with adoptive T cells (NT-T plus DAC, n = 5; TCR-T plus DAC, n = 5; and dTCR-T plus DAC, n = 5). DAC was given on days 7–11 via i.p. injection at a dose of 1.0 mg/kg body weight in 100 μL of PBS. On day 13, mice (except for the PBS and DAC groups) were treated by i.v. injection of 1.0 × 10⁷ NT-T cells, TCR-T cells, or dTCR-T cells. Tumor volume was assessed every 3 days using calipers with the formula (length × width²)/2 and weekly by BLI with the IVIS Lumina III System. B Western blot analysis of the expression of NY-ESO-1 protein in tumor tissues 3 days after the last administration of DAC, with the multiple myeloma cell line U266 as the positive control (n = 3). C Mean tumor growth curves of the eight groups (n = 5). D Quantification of the BLI signal from each treatment group over time (n = 5). Data are represented as total flux (photons/second). E BLI images showing tumor burdens in all mice at the indicated time points (n = 5). F Kaplan–Meier survival curves presenting the overall survival of each group (n = 5). The log-rank Mantel–Cox test was used to analyze the survival of each group.
To interpret the enhanced anti-tumor activity of the combined treatment of dTCR-T plus DAC in vivo, flow cytometry was employed to analyze the percentages of GFP-positive cells, TCR-positive cells, and T-cell phenotypes in the peripheral blood of mice 10 days after T-cell treatment (day 23). The number of GFP+ cells in peripheral blood was significantly reduced in the three groups that received T cells plus DAC (Fig. 7A), which corresponded to the tumor size of each group. A higher percentage of TCR-positive cells in peripheral blood was observed in mice treated with TCR-T plus DAC, and this percentage was further increased in those treated with dTCR-T cells plus DAC (Fig. 7B). Mice treated with dTCR-T cells plus DAC also showed higher proportions of activated CD3+CD25+ cells (Fig. 7C), CD8+ cells (Fig. 7D), and CD4+ cells (Fig. 7E) compared with the other groups, suggesting higher anti-leukemia activity.
Fig. 7: The combination of dTCR-T cells with decitabine produces more cytotoxic effector cells and cells with the memory-like phenotype in vivo.
After 10 days of treatment (on day 23), the peripheral blood of five mice from each group was analyzed for (A) GFP+ cells, (B) TCR+ cells, (C) activated CD3+CD25+ T cells, cytotoxic (D) CD8+ and (E) CD4+ T cells, and central memory (F) CD45RO+CD62L+ and (G) CD45RO+CCR7+ T cells by flow cytometry. H, I Pie chart showing the proportion of T-cell subsets, including central memory (CM; CD45RO+CD62L+/CCR7+), naive (CD45RO-CD62L+/CCR7+), effector memory (EM; CD45RO+CD62L-/CCR7-), and highly differentiated effector memory (EMRA; CD45RO-CD62L-/CCR7-) cells from panels F and G. The data are presented as mean ± sd (n = 5). Statistical comparisons between two groups were determined by two-tailed unpaired t tests. *P < 0.05; **P < 0.01; ***P < 0.001; ns not significant.
In addition, we measured the percentages of central memory T cells in the peripheral blood of the treated animals. Mice treated with dTCR-T cells showed 8.75% of CD45RO+CD62L+ T cells, which was significantly higher than that of the other groups (4.82%, TCR-T; 1.52%, NT-T; 0.57%, PBS; Fig. 7F, H), while the percentages of CD45RO+CCR7+ T cells showed no significant differences among dTCR-T, TCR-T, and NT-T treatment groups (Fig. 7G, I). The percentages of both types of central memory T cells were significantly increased in mice that received DAC with dTCR-T, TCR-T, or NT-T cells. Among the four DAC-treated groups, the highest percentages of CD45RO+CD62L+ (13.48%) and CD45RO+CCR7+ T cells (15.49%) were found in mice treated with dTCR-T plus DAC.
Taken together, a combination of DAC with NY-ESO-1-specific dTCR-T cells is superior in suppressing AML xenograft tumor growth and promoting tumor-free survival, possibly by promoting the maintenance of effector function and the memory phenotype.
Discussion
DAC exerts anti-tumor activity via several potential mechanisms associated with demethylation, including cytotoxicity and the DNA damage response [43, 44], re-expression of aberrantly silenced tumor suppressor genes [45, 46], up-regulation of silenced tumor-associated antigens that enhance the anti-tumor immune response [47], and induction of cytosolic sensing of the double-stranded RNA response [48]. Among the genes up-regulated by DAC in AML blasts, NY-ESO-1 attracted our attention owing to its restricted tissue expression and immunogenicity, suggesting it as a potential immunotherapeutic target for AML. In fact, NY-ESO-1-specific TCR-T cells have demonstrated high efficacy against both hematologic malignancies and solid tumors [16, 18, 19, 21]. Therefore, we anticipated that the combination of NY-ESO-1-specific TCR-T cells with DAC might achieve significant anti-leukemia potential in the treatment of AML.
Myeloid leukemia shows an absence or very low level of NY-ESO-1 expression due to dense promoter hypermethylation [22]. Consistent with previous studies [25, 49, 50], our results demonstrated that DAC could induce NY-ESO-1 re-expression in human AML cell lines through DNA promoter demethylation. Nevertheless, DAC-induced expression of NY-ESO-1 protein was not detected in the THP-1 cell line, as reported previously [23]. We postulate that this non-response was caused by the immature morphology of CpG methylation patterns of the THP-1 cell line [51], as it was established from the peripheral blood of a 1-year-old boy with AML. In addition, primary AML blasts derived from patients who received DAC treatment also showed increased NY-ESO-1 expression, consistent with a previous study [25]. Moreover, the bone marrow of an AML patient with a complete response expressed a low level of NY-ESO-1 after DAC treatment, confirming that DAC-induced NY-ESO-1 expression is restricted to AML blasts. Thus, the quantification of NY-ESO-1 expression in patients who receive DAC may predict the clinical benefits of this combination approach.
The present study showed that induced NY-ESO-1 expression in AML cells could be efficiently targeted by NY-ESO-1-specific TCR-T cells, both in vitro and in AML xenograft mouse models. The TCR-transduced T cells specifically recognized NY-ESO-1 in the context of the HLA-A2-restricted NY-ESO-1157-165 peptide, consistent with a previous report [23]. Blocking target cells with an MHC class I monoclonal antibody attenuated TCR activity, with a significant reduction of TNF-α production by the NY-ESO-1-specific TCR-T cells. Our previous study demonstrated the effect of a soluble high-affinity TCR (26 pM) in blocking melanoma and multiple myeloma cell lines, which could significantly reduce IFN-γ secretion [32]. Therefore, these data confirm that the enhanced anti-leukemia capacity of NY-ESO-1-specific TCR-T cells is MHC-dependent.
The combination of DAC with high-affinity NY-ESO-1-specific TCR-T cells revealed significant anti-leukemia efficacy in vivo, with complete tumor eradication and prolonged survival of AML xenograft mouse models. However, tumor relapse was still inevitable, indicating that the TCR-T cells might not persist long enough to maintain long-term remission. It was recently reported that DAC could induce DNA reprogramming of CAR-T cells, thereby promoting sustained cell expansion, cytotoxicity, and cytokine production, while reducing exhaustion after antigen exposure [41]. Based on these findings, we wondered whether TCR-T cells exposed to DAC during transduction could further increase their anti-leukemia capacity, proliferation, and longevity, and promote tumor-free survival. Central memory T cells, which are circulating cells prevalent in lymph nodes, have enhanced longevity and proliferative potential. As expected, treating TCR-T cells with a low dose of DAC (50 nM) during transduction (designated as dTCR-T cells) further enhanced their anti-leukemia capacity and cytokine secretion and promoted the memory phenotype, corresponding to increased proportions of cytotoxic CD4+/CD8+ T cells and central memory CD45RO+CD62L+/CD45RO+CCR7+ T cells, respectively. Compared with TCR-T cells, the dTCR-T cells further prolonged the survival of DAC-treated mice bearing AML xenograft tumors. Consistent with previous studies [41, 52, 53], the development of the memory phenotype can boost anti-tumor immunity and ACT persistence, thereby resulting in superior clinical outcomes. Taken together, the increased anti-leukemia activity of dTCR-T cells could be related to an up-regulation of T-cell cytotoxicity and an increase in the proportion of memory-phenotype T cells owing to DAC-induced DNA demethylation.
A safety concern of this combination is the potential risk of up-regulated CTA expression in normal human tissues induced by demethylating drugs. Several studies have demonstrated that DAC does not induce the expression of antigen targets, including NY-ESO-1, in normal human tissues and cells such as normal skin, colon, bronchial epithelia, bone marrow, peripheral blood, astrocytes, fibroblasts, smooth muscle, and ovaries (Table 2) [54,55,56,57,58,59]. Moreover, our in vitro study demonstrated limited NY-ESO-1 expression in normal bone marrow cells and normal cell lines after treatment with DAC (Supplementary Fig. S2). The co-culture of these cells with high-affinity NY-ESO-1-specific TCR-T cells did not lead to cytotoxicity or cytokine secretion (Supplementary Fig. S8). Equally important, the therapeutic potential and safety of NY-ESO-1 vaccination in combination with DAC have been demonstrated in phase I clinical trials for the treatment of patients with high-risk MDS and ovarian cancer [27, 59]. These data are essential for prospective studies that aim to correlate the induction of NY-ESO-1-specific TCR-T or dTCR-T cells with the clinical response in patients with hematopoietic malignancies such as AML treated with DAC. However, additional clinical trials are necessary to assess the safety of DAC combined with high-affinity TCR-T cells.
Table 1 Characteristics of the enrolled AML patients.
The predominant mechanism of action of DAC is likely dose-dependent, with low doses inducing re-expression of silenced genes and minimal DNA damage, while high doses cause more pronounced DNA damage and apoptosis. In clinical settings, low-dose DAC is recommended for the treatment of MDS and AML to favor hypomethylation over cytotoxicity. It has been reported that a low dose (0.25 mg/kg) of a demethylating agent could exert robust anti-tumor effects on hematological and epithelial tumor cells, and a significant reduction in tumor size was observed in mice bearing established xenografts after three cycles (2 weeks/cycle) of treatment [60]. Data from an in vitro study suggested that the DAC-induced expression of NY-ESO-1 protein occurs in a time- and dose-dependent manner [23]. Thus, the DAC dosage and treatment regimen deserve further investigation to maximize and sustain the expression of NY-ESO-1 protein, thereby optimizing the TCR-T response.
In summary, DAC-induced NY-ESO-1 can be used as an immune target for TCR-T cells to treat AML. High-affinity NY-ESO-1157-165 TCR-T cells showed an efficient anti-AML response. Demethylating TCR-T cells with DAC during transduction (dTCR-T) can significantly increase their cytotoxicity and cytokine secretion after antigen exposure. dTCR-T cells outperformed TCR-T cells in anti-leukemia effect and in preventing recurrence, likely by producing a higher proportion of memory-like phenotypes. There are several limitations in the present study. (1) The safety of combining high-affinity TCR-T cells with DAC should be further evaluated on cells from more critical organs such as the brain, heart, and liver. (2) The AML xenograft mouse model cannot recapitulate the molecular genetics and phenotypic features found in primary human AML. (3) Similar to most clinical studies, we generated the engineered T cell products from PBMCs, which were mixed T cell populations containing both highly functional T cells and less-differentiated phenotypes. In preclinical models, the use of purified, naive T cell subsets for adoptive immunotherapy can enhance persistence and anti-tumor immunity [61]. Thus, the anti-leukemia effects of NY-ESO-1-specific TCR-T cells may be further improved by starting from naive T cells selected from PBMCs.
Materials and methods
Cell lines and culture
The NY-ESO-1157–165 antigen (peptide SLLMWITQV) and HLA-A*02:01 double-negative human acute myeloid leukemia (AML) cell lines (U937, HL60, and Kasumi-1) and human normal cell lines (HK-2, kidney proximal tubule epithelial cells; MCF10A, breast epithelial cells), as well as the NY-ESO-1157–165-negative and HLA-A*02:01-positive cell line THP-1, were purchased from ATCC (Manassas, VA, USA). The human normal colon epithelial cell line NCM460, which is NY-ESO-1157–165 and HLA-A*02:01 double-negative, was purchased from IN CELL (San Antonio, TX, USA). The NY-ESO-1157–165 and HLA-A*02:01 double-positive multiple myeloma (MM) cell line U266 (used as a positive control for NY-ESO-1 expression) was purchased from CBTCCCAS (Shanghai, China). We obtained HLA-A*02:01-positive U937, HL60, Kasumi-1, HK-2, NCM460, and MCF10A cells by transducing the cell lines with HLA-A*02:01 lentiviral particles. AML and MM cell lines were cultured in RPMI 1640 (Gibco Life Technologies, Grand Island, NY, USA) containing 10% fetal bovine serum (FBS; Gibco Life Technologies). HK-2 cells were cultured in Minimum Essential Medium (MEM; Gibco Life Technologies) containing 10% FBS. MCF10A cells were cultured in DMEM-F12 (Procell, Wuhan, China) supplemented with 5% horse serum, 20 ng/mL epidermal growth factor (EGF), 0.01 mg/mL insulin, and 0.5 μg/mL hydrocortisone. 293T cells, a human embryonic kidney cell line, were purchased from ATCC. 293T and NCM460 cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; Gibco Life Technologies) containing 10% FBS.
Patient samples
The use of human materials in this study was approved by the Institutional Review Board of Shenzhen University General Hospital. Mononuclear cells from the bone marrow of AML patients or healthy donors were isolated by Ficoll centrifugation (Ficoll-Paque, GE Healthcare, Uppsala, Sweden) and cryopreserved. The characteristics of the enrolled patients are shown in Table 1.
Table 2 Restricted NY-ESO-1 gene expression in normal cells or tissues treated with decitabine.
Decitabine treatment
5-Aza-2'-deoxycytidine (DAC; Sigma-Aldrich) was dissolved in phosphate buffered saline (PBS, 100 µM stock) and stored at −80 °C. We tested different doses and times of DAC treatment to optimize NY-ESO-1 expression in AML cells. AML cell lines were treated with 100 nM, 200 nM, 500 nM, or 1000 nM DAC in the cell culture medium for 72 h. For the time-dependent study, AML cell lines, normal cell lines (HK-2, NCM460, and MCF10A), and normal bone marrow cells were treated with 200 nM DAC, and NY-ESO-1 expression was measured on days 2, 3, 5, 7, and 10.
RNA extraction, cDNA synthesis, and real-time PCR
Total RNA was isolated with TRIzol Reagent (Invitrogen, Carlsbad, CA, USA), and cDNA was synthesized from 1 μg of total RNA with the TransScript All-in-One First-Strand cDNA Synthesis SuperMix Kit for qPCR (Transgen, Beijing, China) according to the manufacturer's protocol. Quantitative RT-PCR was performed on the 7500 Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA) using the PerfectStart SYBR Green qPCR SuperMix Kit (Transgen). The primers for NY-ESO-1 and β-actin used in this study have been described previously [33]. The NY-ESO-1 primer sequences were as follows: forward primer – 5'-AAAAACACGGGCAGAAAGC-3' and reverse primer – 5'-GCTTCAGGGCTGAATGGAT-3'. The β-actin primer sequences were as follows: forward primer – 5'-CCTCCATGATGCTGCTTACATGTC-3' and reverse primer – 5'-ATGTCTCGCTCCGTGGCCTTAGCT-3'. The TCR primer sequences were as follows: forward primer – 5'-ATGGAGACACTGCTGGGC-3' and reverse primer – 5'-CATGGTGAAGAAGAAGAACAGCTAA-3'. The relative expression of NY-ESO-1 and TCR was normalized to β-actin and calculated using the 2^-ΔΔCt method [34].
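For clarity, the relative-quantification arithmetic can be expressed in a few lines of code. The sketch below is illustrative only: the function name and all Ct values are hypothetical placeholders, not measurements from this study.

```python
# Minimal sketch of the 2^-ΔΔCt relative-expression calculation.
# All Ct values below are hypothetical placeholders.

def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene (e.g., NY-ESO-1) normalized to a
    reference gene (e.g., beta-actin), treated vs. untreated control."""
    delta_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated sample
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt, control sample
    delta_delta_ct = delta_ct_treated - delta_ct_control    # ΔΔCt
    return 2 ** (-delta_delta_ct)

# Example: NY-ESO-1 Ct falls from 35 to 30 after DAC while β-actin stays at 18,
# giving a 2^5 = 32-fold induction.
print(relative_expression(30.0, 18.0, 35.0, 18.0))  # 32.0
```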
Western blot analysis
Cells and tumor tissues were lysed in RIPA buffer containing protease inhibitors (Solarbio, Beijing, China). Proteins were size-fractionated on 10–12% PAGE gels and transferred to Immobilon nitrocellulose transfer membranes (Millipore, Burlington, MA, USA). The membranes were incubated with anti-human NY-ESO-1 (1:250 dilution; Clone SP349; Abcam, Cambridge, MA, USA), anti-DNMT3a (1:2000; Clone EPR18455, Abcam), or anti-β-actin (1:2000 dilution; Clone 13E5; Cell Signaling Technology, Danvers, MA, USA), washed, and incubated with an HRP-labeled secondary antibody. Protein bands were detected with the Chemiluminescent HRP Substrate Kit (Millipore). Densitometry readings/intensity ratios were analyzed with the ChemiDoc XRS+ System (Bio-Rad, Hercules, CA, USA) by comparing the band intensity of NY-ESO-1 or DNMT3a to that of β-actin.
Bisulfite sequencing analysis
Genomic DNA was isolated from DAC-treated AML cell lines and untreated control cells with the Wizard Genomic DNA Purification Kit (Cat. A1120, Promega, Madison, WI, USA). Sodium bisulfite conversion of DNA was performed with the EZ DNA Methylation Kit (Cat. D5001, Zymo Research, Irvine, CA, USA). The methylation status of the NY-ESO-1 promoter region was assessed by sodium bisulfite sequencing as described previously [35]. MethPrimer was used to design the primers [36], and the primer sequences were as follows: forward primer – 5'-GGATGGGATAGGTTGGGTTT-3' and reverse primer – 5'-AACTTAAACCCCTCACCCCTA-3'. The PCR products were purified and cloned into the pGM-T vector (Cat. VT202-01, TianGen, Beijing, China). Individual clones were analyzed using the ABI PRISM 3730 DNA Analyzer (Applied Biosystems, Foster City, CA, USA).
Flow cytometry
Multicolor gated flow cytometry was performed to analyze the expression of cell surface proteins stained with monoclonal antibodies (mAbs) as described previously [32]. The human mAbs used in this study are listed in Supplementary Table 1. CytExpert software (version 1.1.10.0, Beckman Coulter, Brea, CA, USA) and FlowJo software (version 10.0.7, FlowJo LLC, Ashland, OR, USA) were used to analyze the FACS data.
Generation of TCR-T cells
The NY-ESO-1-specific high-affinity 1G4 TCR (KD = 1.07 μM) and wild-type 1G4 TCR (KD = 32 μM) in the context of HLA-A*02:01 were generated and cloned into a lentiviral vector as described previously [32]. The unique clone of the wild-type 1G4 TCR, which was isolated from a melanoma patient, was substituted with dual amino acids in the third complementarity-determining region (CDR3α) to enhance the TCR affinity and improve the antigen-specific reactivity of T cells (designated 1G4-α95:LY TCR) [37]. Lentiviral particle production and generation of NY-ESO-1-specific TCR-T cells were performed as described previously [32]. To produce the lentiviral products, the desired TCR genes (containing α- and β-chains) were cloned into the pCDH lentiviral vector. The desired TCR plasmid and the lentiviral packaging system, consisting of the packaging construct (RRE), Rev expression plasmid (REV), and envelope construct (pG2M.D), were transfected into ~80% confluent 293T cells, which were cultured in DMEM containing 10% FBS. The supernatant was collected after 48 and 72 h of transfection, concentrated with 50 kDa centrifugal filter units (Merck KGaA, Darmstadt, Germany) at 4000 g for 20 min at 4 °C, and stored at −80 °C.
For the generation of NY-ESO-1-specific TCR-T cells (TCR-T), human peripheral blood mononuclear cells (PBMCs) from healthy donors were stimulated with human T-activator CD3/CD28 Dynabeads (Cat. 11161D, Life Technologies, Grand Island, NY, USA) at a bead:cell ratio of 1:1 and cultured in X-VIVO 15 medium (Lonza, Basel, Switzerland) containing 100 IU/mL IL-2 (PeproTech Inc., Beijing). After 48 h of activation, activated T cells (1 × 10⁶) were transduced at a multiplicity of infection of 5 (MOI 5) with lentiviral particles containing the TCR gene at 48 and 72 h.
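As a side note on the MOI 5 transduction step, the volume of viral stock required scales with cell number and titer. The sketch below illustrates this arithmetic only; the titer in the example is a hypothetical placeholder, not a value reported in this study.

```python
# Minimal sketch of the viral-stock volume needed for a target MOI.
# The titer used in the example is a hypothetical placeholder.

def stock_volume_ul(n_cells: float, moi: float, titer_tu_per_ml: float) -> float:
    """Microliters of lentiviral stock delivering `moi` transducing units (TU)
    per cell to `n_cells` cells."""
    transducing_units_needed = n_cells * moi
    return transducing_units_needed / titer_tu_per_ml * 1000.0  # mL -> µL

# Example: 1 × 10^6 activated T cells at MOI 5 with a 1 × 10^8 TU/mL stock.
print(stock_volume_ul(1e6, 5, 1e8))  # 50.0 µL
```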
To examine the effects of DAC on the development of TCR-T cells, serial doses (10 nM, 50 nM, 200 nM, and 1000 nM) of DAC were added after 48 h of PBMC stimulation, simultaneously with the first 24 h of transduction. The T cells were then re-transduced with TCR lentiviral particles without DAC for another 24 h (these cells were designated dTCR-T cells). After transduction, the bulk TCR-T, dTCR-T, and NT-T cells were expanded ex vivo in X-VIVO 15 medium containing IL-2.
The dynabeads were removed before cell analysis. TCR-positive T cells were detected by staining with FITC-, PE-, and APC-conjugated antibodies, including anti-human CD8 (Clone RPA-T8, BioLegend, San Diego, CA, USA), anti-human CD3 (Clone HIT3a, BioLegend), and anti-human TCR vβ13.1 (Clone H131, BioLegend) or anti-mouse TCR β chain (Clone H57-597, BioLegend).
Enzyme-linked immunosorbent assay (ELISA)
Cytokine production was assessed by ELISA (BioGems, Westlake Village, CA, USA) using culture supernatants according to the manufacturer's protocol. Briefly, effector cells were cultured with 2 × 10⁴ target cells at an E:T ratio of 5:1 for 20 h in fresh medium (final volume, 200 μL). Supernatants and standard controls (100 μL, prepared according to the instructions) were transferred to a primary antibody pre-coated (IFN-γ or TNF-α) 96-well strip microplate, incubated at 37 °C for 90 min, and washed. The avidin–biotin–peroxidase complex was added, and the microplate was incubated at 37 °C for 30 min. After additional washes, the color-developing reagent was added, and the microplate was kept in the dark for 30 min. The reaction was stopped, and the absorbance was measured with a Multiskan FC microplate reader (Thermo Fisher Scientific) at 450 nm.
Enzyme-linked immunospot (ELISpot) assay
ELISpot assays were performed according to the manufacturer's protocol (BD Biosciences, Franklin Lakes, NJ, USA; Bio-Techne, Minneapolis, MN, USA). Briefly, 96-well flat-bottom plates were pre-coated overnight with anti-IFN-γ, washed with RPMI 1640 containing 10% FBS, and blocked with culture medium at room temperature for 2 h. The effectors were added at a final concentration of 2 × 10³ cells per well in duplicate in the presence of target cells at serial effector-to-target (E:T) ratios (1:10, 1:20, 1:30, and 1:60) for 20 h. The biotinylated secondary antibody was added, and the plates were incubated at room temperature for 2 h. Streptavidin–horseradish peroxidase was added, followed by incubation for 1 h. After additional washes, 3-amino-9-ethylcarbazole (AEC Substrate Kit; BD Biosciences) was added, followed by incubation at room temperature for 3–5 min. The reaction was stopped by the addition of water, and the plate was air-dried and analyzed using an ELISpot reader (BIOsys ELISpot Reader, Karben, Germany).
Lactate dehydrogenase (LDH) assay
Cytotoxicity was assessed with the CytoTox 96 Non-Radioactive Cytotoxicity Assay (Promega) according to the manufacturer's protocol. Briefly, effector cells were prepared in fresh medium and cultured with 2 × 10⁴ target cells at serial E:T ratios (5:1, 2.5:1, and 1.25:1) in a final volume of 200 μL. Control groups, such as spontaneous release (effector or target cells only), maximum release (target cells with 20 μL of 10× lysis buffer), and medium background (no cells added), were also set up. The plates were incubated at 37 °C for 20 h. After centrifugation, 50 μL of the supernatant was transferred into the wells of a flat-bottom plate, and 50 μL of CytoTox 96 Reagent was added. The plates were incubated at room temperature in the dark for 30 min. The reaction was stopped by adding 50 μL of stop solution, and the absorbance was measured with a Multiskan FC microplate reader at 490 nm. Cytotoxicity was calculated using the following formula:
$$\%\ \text{Cytotoxicity} = \frac{\text{Experimental} - \text{Effector Spontaneous} - \text{Target Spontaneous}}{\text{Target Maximum} - \text{Target Spontaneous}} \times 100$$
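For readers reproducing this calculation, a minimal Python sketch of the formula above follows; variable names are illustrative, and the inputs are assumed to be medium-background-corrected OD490 readings as described in the kit protocol.

```python
def percent_cytotoxicity(experimental, effector_spontaneous,
                         target_spontaneous, target_maximum):
    """CytoTox 96 formula above; inputs are assumed to be
    medium-background-corrected OD490 absorbance values."""
    numerator = experimental - effector_spontaneous - target_spontaneous
    denominator = target_maximum - target_spontaneous
    return numerator / denominator * 100.0
```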
Mouse xenograft models
Animal studies were performed under a standard protocol approved by the IACUC of Peking University Shenzhen Graduate School (Shenzhen, China). Immunodeficient NCG (NOD/ShiLtJGpt-Prkdcem26Cd52Il2rgem26Cd22/Gpt) mice (6 weeks old, male) were purchased from GemPharmatech Co., Ltd. (Nanjing, China). Each NCG mouse received a subcutaneous injection of 3.0 × 10⁶ luciferase- and GFP-expressing U937-A2 cells (U937-A2-GFP-Luci+) in 200 µL of PBS on day 0. Animals were organized into eight groups according to tumor size (average, 80–100 mm³): four groups treated without DAC (PBS, n = 5; NT-T, n = 5; TCR-T, n = 5; and dTCR-T, n = 5) and four groups treated with DAC alone (n = 5) or in combination with adoptive T cells (NT-T plus DAC, n = 5; TCR-T plus DAC, n = 5; and dTCR-T plus DAC, n = 5). DAC was given on days 7 to 11 by intraperitoneal (i.p.) injection at 1.0 mg/kg body weight in 100 μL of PBS. On day 13, mice in the six cell-treatment groups (all except the PBS and DAC-alone groups) received an intravenous (i.v.) injection of 1.0 × 10⁷ NT-T, TCR-T, or dTCR-T cells.
Tumor growth was monitored every 3 days with a caliper and weekly by bioluminescence imaging (BLI) with the IVIS Lumina III In Vivo Imaging System (PerkinElmer, Waltham, MA, USA). Survival was recorded from the first day of treatment until death or until the tumor reached the maximum permitted volume (1800 mm³). Peripheral blood (~200 μL) was collected through the canthus for subsequent flow cytometry analysis of tumor cell infiltration and TCR-T cell persistence.
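The text does not state how caliper readings were converted to volume; the modified ellipsoid formula below is the common convention in xenograft work and is offered only as an assumption.

```python
def tumor_volume_mm3(length_mm, width_mm):
    """V = (L x W^2) / 2 - a widely used xenograft convention,
    not necessarily the exact formula used in this study."""
    return length_mm * width_mm ** 2 / 2.0
```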
Statistical analyses were performed using GraphPad Prism 8.0 (GraphPad Software, Inc., San Diego, CA, USA). Two groups were compared with the unpaired two-tailed Student's t-test, and survival differences between groups were compared with the log-rank (Mantel–Cox) test. P < 0.05 was considered statistically significant.
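A minimal sketch of the two reported tests, assuming the scipy and lifelines packages are available; data containers are illustrative.

```python
from scipy import stats
from lifelines.statistics import logrank_test

def compare_two_groups(group_a, group_b):
    """Unpaired two-tailed Student's t-test (scipy's default is two-sided)."""
    return stats.ttest_ind(group_a, group_b)

def compare_survival(times_a, events_a, times_b, events_b):
    """Log-rank (Mantel-Cox) test; events are 1 = death/endpoint, 0 = censored."""
    result = logrank_test(times_a, times_b,
                          event_observed_A=events_a, event_observed_B=events_b)
    return result.test_statistic, result.p_value
```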
This work was supported by grants from the Chinese National Major Project for New Drug Innovation (2019ZX09201002003), the National Natural Science Foundation of China (82030076, 82070161, 81970151, 81670162, 81870134, and 81900474), the Shenzhen Science and Technology Foundation (JCYJ20190808163601776, JCYJ20200109113810154), the Shenzhen Key Laboratory Foundation (ZDSYS20200811143757022), the Sanming Project of Medicine in Shenzhen (SZSM202111004), the Stability Support Project for Universities of the Shenzhen Science and Technology Innovation Commission (20200830182623001), the Shenzhen Peacock Talent Program Research Start-up Grant (827000644 to XG), and the Natural Science Foundation of Shenzhen University General Hospital (SUGH2019QD012).
Department of Hematology and Oncology, International Cancer Center, Shenzhen Key Laboratory of Precision Medicine for Hematological Malignancies, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University Health Science Center, Shenzhen, 518000, Guangdong, China
Synat Kang, Lixin Wang, Lu Xu, Qingzheng Kang, Xuefeng Gao & Li Yu
School of Medicine, Nankai University, Tianjin, 300071, China
Ruiqi Wang
Central Laboratory, Shenzhen Key Laboratory of Precision Medicine for Hematological Malignancies, Shenzhen University General Hospital, Shenzhen, 518000, Guangdong, China
Xuefeng Gao
Conceptualization: SK, XG, and LY; Data curation: SK, LX, RW, QK, and LX; Formal analysis: SK and XG; Funding acquisition: LY; Investigation: SK, XG, and LW; Methodology: SK, QK, YL, and LX; Resources: LW and LY; Supervision: XG and LY; Writing – original draft: SK; Writing – review & editing: XG and LY.
Correspondence to Xuefeng Gao or Li Yu.
This clinical study was approved by the Ethics Committee of Shenzhen University General Hospital (ethics code 2020-002-02) and conducted in accordance with the principles of the Declaration of Helsinki. Written informed consent was obtained from each participant before specimen collection. The Ethics Committee also provided approval for the mouse study (ethics code 2021004).
Kang, S., Wang, L., Xu, L. et al. Decitabine enhances targeting of AML cells by NY-ESO-1-specific TCR-T cells and promotes the maintenance of effector function and the memory phenotype. Oncogene 41, 4696–4708 (2022). https://doi.org/10.1038/s41388-022-02455-y
Revised: 19 August 2022
Issue Date: 14 October 2022
Peripartum women's perspectives on research study participation in the OneFlorida Clinical Research Consortium during COVID-19 pandemic
Ke Xu, Chu J. Hsiao, Hailey Ballard, Nisha Chachad, Callie F. Reeder, Elizabeth A. Shenkman, Elizabeth Flood-Grady, Adetola F. Louis-Jacques, Erica L. Smith, Lindsay A. Thompson, Janice Krieger, Magda Francois, Dominick J. Lemas
Journal: Journal of Clinical and Translational Science / Volume 7 / Issue 1 / 2023
Published online by Cambridge University Press: 10 October 2022, e24
The COVID-19 pandemic created an unprecedented need for population-level clinical trials focused on the discovery of life-saving therapies and treatments. However, there is limited information on perception of research participation among perinatal populations, a population of particular interest during the pandemic.
Eligible respondents were 18 years or older, were currently pregnant or had an infant (≤12 months old), and lived in Florida within 50 miles of sites participating in the OneFlorida Clinical Research Consortium. Respondents were recruited via Qualtrics panels between April and September 2020. Respondents completed survey items about barriers and facilitators to participation and answered sociodemographic questions.
Of 533 respondents, most were between 25 and 34 years of age (n = 259, 49%) and identified as White (n = 303, 47%) and non-Hispanic (n = 344, 65%). Facebook was the most popular social media platform among our respondents. The most common barriers to research participation included poor explanation of study goals, discomforts to the infant, and time commitment. Recruitment through healthcare providers was perceived as the best way to learn about clinical research studies. When considering research participation, "myself" had the greatest influence, followed by familial ties. Noninvasive biological samples were highly acceptable. Hispanic respondents had more positive perspectives on willingness to participate in a randomized study (p = 0.009), and education (p = 0.007) had a significant effect on willingness to release personal health information.
When recruiting women during the pregnancy and postpartum periods for perinatal studies, investigators should consider protocols that account for common barriers and preferred study information sources. Social media-based recruitment is worthy of adoption.
Three-dimensional properties of the viscous boundary layer in turbulent Rayleigh–Bénard convection
Fang Xu, Lu Zhang, Ke-Qing Xia
Journal: Journal of Fluid Mechanics / Volume 947 / 25 September 2022
Published online by Cambridge University Press: 22 August 2022, A15
We report an experimental study of the viscous boundary layer (BL) properties of turbulent Rayleigh–Bénard convection in a cylindrical cell. The velocity profile with all three components was measured from the centre of the bottom plate by an integrated home-made particle image velocimetry system. The Rayleigh number $Ra$ varied in the range $1.82 \times 10^8 \le Ra \le 5.26 \times 10^9$ and the Prandtl number $Pr$ was fixed at $Pr = 4.34$. The probability density function of the wall-shear stress indicates that using the velocity component in the mean large-scale circulation (LSC) plane alone may not be sufficient to characterise the viscous BL. Based on a dynamic wall-shear frame, we propose a method to reconstruct the measured full velocity profile which eliminates the effects of complex dynamics of the LSC. Various BL properties including the eddy viscosity are then obtained and analysed. It is found that, in the dynamic wall-shear frame, the eddy viscosity profiles along the centre line of the convection cell at different $Ra$ all collapse on a single master curve described by $\nu _t^d / \nu = 0.81 (z / \delta _u^d) ^{3.10 \pm 0.05}$. The Rayleigh number dependencies of several BL quantities are also determined in the dynamic frame, including the BL thickness $\delta _u^d$ ( ${\sim } Ra^{-0.21}$), the Reynolds number $Re^d$ ( ${\sim }Ra^{-0.46}$) and the shear Reynolds number $Re_s^d$ ( ${\sim } Ra^{0.24}$). Within the experimental uncertainty, these scaling exponents are the same as those obtained in the static laboratory frame. Finally, with the measured full velocity profile, we obtain the energy dissipation rate at the centre of the bottom plate $\varepsilon _{w}$, which is found to follow $\langle \varepsilon _{w} \rangle _t \sim Ra^{1.25}$.
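The collapsed eddy-viscosity profile reported above can be evaluated directly; a short sketch (NumPy assumed) follows.

```python
import numpy as np

def eddy_viscosity_ratio(z_over_delta):
    """Master curve from the abstract: nu_t^d / nu = 0.81 * (z / delta_u^d)^3.10."""
    return 0.81 * np.asarray(z_over_delta, dtype=float) ** 3.10
```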
Design and Experiments of Pneumatic Soft Actuators
Liqiang Guo, Ke Li, Guanggui Cheng, Zhongqiang Zhang, Chu Xu, Jianning Ding
Journal: Robotica / Volume 39 / Issue 10 / October 2021
The soft actuator is made of superelastic material with embedded flexible material. In this paper, a soft tube was designed and used to assemble two kinds of pneumatic soft actuators. Experiments and finite element analysis are used to comprehensively analyze and describe the bending, elongation, and torsion deformation of the soft actuators. The results show that the two soft actuators have the best actuation performance when the inner diameter of the soft tube is 4 mm. In addition, when the twisting pitch of the torsional actuator is 24 mm, its torsional performance is optimized. Finally, a device that can be used on a production line was assembled from these soft actuators, and some operation tasks were completed. This work provides some insights for the development of soft actuators with more complex motions in the future.
Epidemiologic characteristics and influencing factors of cluster infection of COVID-19 in Jiangsu Province
Jing Ai, Naiyang Shi, Yingying Shi, Ke Xu, Qigang Dai, Wendong Liu, Liling Chen, Junjun Wang, Qiang Gao, Hong Ji, Ying Wu, Haodi Huang, Ziping Zhao, Hui Jin, Changjun Bao
Published online by Cambridge University Press: 10 February 2021, e48
To understand the characteristics and influencing factors related to cluster infections in Jiangsu Province, China, we investigated case reports to explore transmission dynamics and influencing factors of the scale of cluster infections. The effectiveness of interventions was assessed by changes in the time-dependent reproductive number (Rt). From 25th January to 29th February, Jiangsu Province reported a total of 134 clusters involving 617 cases. Household clusters accounted for 79.85% of the total. The time interval from onset to report of index cases was 8 days, which was longer than that of secondary cases (4 days) (χ² = 22.763, P < 0.001) and was related to the number of secondary cases (correlation coefficient r = 0.193, P = 0.040). The average interval from onset to report differed between family cluster cases (4 days) and community cluster cases (7 days) (χ² = 28.072, P < 0.001). The average time interval from onset to isolation of patients with secondary infection (5 days) was longer than that of patients without secondary infection (3 days) (F = 9.761, P = 0.002). Asymptomatic patients and non-familial clusters had impacts on the size of the clusters. The average reduction in the Rt value in family clusters (26.00%, 0.26 ± 0.22) was lower than that in other clusters (37.00%, 0.37 ± 0.26) (F = 4.400, P = 0.039). Early detection of asymptomatic patients and early reporting of non-family clusters can effectively weaken cluster infections.
Temperature and humidity associated with increases in tuberculosis notifications: a time-series study in Hong Kong
M. Xu, Y. Li, B. Liu, R. Chen, L. Sheng, S. Yan, H. Chen, J. Hou, L. Yuan, L. Ke, M. Fan, P. Hu
Published online by Cambridge University Press: 28 December 2020, e8
Previous studies have revealed associations of meteorological factors with tuberculosis (TB) cases. However, few studies have examined their lag effects on TB cases. This study aimed to analyse the nonlinear lag effects of meteorological factors on the number of TB notifications in Hong Kong. Using 22 years of consecutive surveillance data in Hong Kong, we examined the association of monthly average temperature and relative humidity with the temporal dynamics of the monthly number of TB notifications using a distributed lag nonlinear model combined with Poisson regression. The relative risks (RRs) of TB notifications were >1.15 when monthly average temperatures were between 16.3 and 17.3 °C at a lag of 13–15 months, reaching the peak risk of 1.18 (95% confidence interval (CI) 1.02–1.35) at 16.8 °C at a lag of 14 months. The RRs of TB notifications were >1.05 as relative humidities of 60.0–63.6% at a lag of 9–11 months expanded to 68.0–71.0% at a lag of 12–17 months, reaching the highest risk of 1.06 (95% CI 1.01–1.11) at 69.0% at a lag of 13 months. The nonlinear and delayed effects of average temperature and relative humidity on the TB epidemic were identified, which may provide a practical reference for improving the TB warning system.
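The study fits a distributed lag nonlinear model (DLNM); as a rough illustration only, the sketch below fits a plain Poisson regression with lagged temperature and humidity terms (statsmodels assumed, column names hypothetical), omitting the spline cross-basis that a full DLNM would use.

```python
import pandas as pd
import statsmodels.api as sm

def fit_lagged_poisson(df, max_lag=17):
    """df: monthly data with columns 'tb_cases', 'temperature', 'humidity'."""
    lagged = {}
    for k in range(max_lag + 1):
        lagged[f"temp_lag{k}"] = df["temperature"].shift(k)
        lagged[f"rh_lag{k}"] = df["humidity"].shift(k)
    X = pd.DataFrame(lagged).dropna()   # drop rows without a full lag history
    y = df.loc[X.index, "tb_cases"]
    return sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
```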
Effects of resistant starch on glycaemic control: a systematic review and meta-analysis
Ke Xiong, Jinyu Wang, Tong Kang, Fei Xu, Aiguo Ma
Journal: British Journal of Nutrition / Volume 125 / Issue 11 / 14 June 2021
Published online by Cambridge University Press: 22 September 2020, pp. 1260-1269
Print publication: 14 June 2021
The effects of resistant starch on glycaemic control are controversial. In this study, a systematic review and meta-analysis of results from nineteen randomised controlled trials (RCT) was performed to illustrate the effects of resistant starch on glycaemic control. A literature search was conducted on PubMed, Scopus and Cochrane electronic databases for related publications from inception to 6 April 2020. Key inclusion criteria were: RCT; resistant starch as intervention substances and reporting glucose- and insulin-related endpoints. Exclusion criteria were: using type I resistant starch or a mixture of resistant starch and other functional food ingredients as intervention; using substances other than digestible starch as controls. The effect of resistant starch on fasting plasma glucose was significant (effect size (ES) –0·09 (95 % CI –0·13, −0·04) mmol/l, P = 0·001) compared with digestible starch. Subgroup analyses revealed that the ES was larger when the dosage of resistant starch was more than 28 g/d (ES –0·16 (95 % CI –0·24, –0·08) mmol/l, P < 0·001) or the intervention period was more than 8 weeks (ES –0·12 (95 % CI –0·18, –0·06) mmol/l, P < 0·001). The effect on homoeostatic model assessment (HOMA)-insulin resistance (IR) was significant (ES –0·33 (95 % CI –0·51, –0·14), P = 0·001). However, the effects on other insulin-related endpoints were not significant, including fasting plasma insulin, four endpoints from the frequently sampled intravenous glucose tolerance test (insulin sensitivity index, acute insulin response, disposition index and glucose effectiveness) and HOMA-β. The current study indicated moderate effects of resistant starch on improving glycaemic control.
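For readers unfamiliar with how such per-trial effect sizes are pooled, the sketch below implements the standard DerSimonian–Laird random-effects estimator; whether this exact estimator was used in the study above is an assumption.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size with a 95% CI."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_bar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_bar) ** 2)              # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```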
Evaluation of gray matter reduction in patients with typhoon-related posttraumatic stress disorder using causal network analysis of structural MRI
Hui Juan Chen, Rongfeng Qi, Jun Ke, Jie Qiu, Qiang Xu, Yuan Zhong, Guang Ming Lu, Feng Chen
Journal: Psychological Medicine / Volume 52 / Issue 8 / June 2022
The structural changes in recent-onset posttraumatic stress disorder (PTSD) subjects have rarely been investigated. This study compared the temporal and causal relationships of structural changes in recent-onset PTSD subjects with those in trauma-exposed control (TEC) subjects and non-TEC subjects.
T1-weighted magnetic resonance images of 27 PTSD, 33 TEC and 30 age- and sex-matched healthy control (HC) subjects were studied. The causal network of structural covariance was used to evaluate the causal relationships of structural changes in PTSD patients.
Volumes of bilateral hippocampal and left lingual gyrus were significantly smaller in PTSD patients and TEC subjects than HC subjects. As symptom scores increase, reduction in gray matter volume began in the hippocampus and progressed to the frontal lobe, then to the temporal and occipital cortices (p < 0.05, false discovery rate corrected). The hippocampus might be the primary hub of the directional network and demonstrated positive causal effects on the frontal, temporal and occipital regions (p < 0.05, false discovery rate corrected). The frontal regions, which were identified to be transitional points, projected causal effects to the occipital lobe and temporal regions and received causal effects from the hippocampus (p < 0.05, false discovery rate corrected).
The results offer evidence of localized abnormalities in the bilateral hippocampus and remote abnormalities in multiple temporal and frontal regions in typhoon-exposed PTSD patients.
One Way to Fill All the Concave Region in Grid-Based Map
ZiYing Zhang, Xu Yang, Dong Xu, Ke Geng, YuLong Meng, GuangSheng Feng
Journal: Robotica / Volume 39 / Issue 5 / May 2021
Published online by Cambridge University Press: 10 September 2020, pp. 928-944
The search space of the path planning problem can greatly affect running time and memory consumption; for example, concave obstacles in a grid-based map usually produce invalid search space. In this paper, the filling container algorithm is proposed to alleviate the concave-area problem in 2D map space, inspired by the scenario of pouring water into a cup. With this method, concave areas can be largely excluded by scanning the map repeatedly, and its effectiveness is demonstrated in our experiments.
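The abstract does not give the authors' exact procedure; the sketch below captures the "pouring water" intuition with a simple iterative rule — fill any free cell whose 4-neighbourhood is mostly blocked — which is one plausible reading, not the paper's algorithm.

```python
import numpy as np

def fill_concave_pockets(grid, max_iters=10_000):
    """grid: 2D array, 1 = obstacle, 0 = free. Repeatedly fills free cells
    with >= 3 blocked 4-neighbours until the map stops changing."""
    g = np.array(grid, dtype=int)
    for _ in range(max_iters):
        p = np.pad(g, 1, constant_values=0)   # outside the map counts as free
        blocked = (p[:-2, 1:-1] + p[2:, 1:-1] +
                   p[1:-1, :-2] + p[1:-1, 2:])
        pocket = (g == 0) & (blocked >= 3)
        if not pocket.any():
            break
        g[pocket] = 1
    return g
```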
Increased vegetable and fruit intake is associated with reduced failure rate of tuberculosis treatment: a hospital-based cohort study in China
Lei Xu, Jinyu Wang, Shanliang Zhao, Jianwen Zhang, Ke Xiong, Jing Cai, Qiuzhen Wang, Song Lin, Yan Ma, Aiguo Ma
Journal: British Journal of Nutrition / Volume 125 / Issue 8 / 28 April 2021
Print publication: 28 April 2021
Increased intake of vegetables and fruits has been associated with reduced risk of tuberculosis infection. Vegetables and fruits exert immunoregulatory effects; however, it is not clear whether vegetables and fruits have an adjuvant treatment effect on tuberculosis. Between 2009 and 2013, a hospital-based cohort study was conducted in Linyi, Shandong Province, China. Treatment outcome was ascertained by sputum smear and chest computerised tomography, and dietary intake was assessed by a semi-quantitative FFQ. The dietary questionnaire was conducted at the end of month 2 of treatment initiation. Participants recalled their dietary intake of the previous 2 months. A total of 2309 patients were enrolled in this study. After 6 months of treatment, 2099 patients were successfully treated and 210 were uncured. In multivariate models, higher intake of total vegetables and fruits (OR 0·70; 95 % CI 0·49, 0·99), total vegetables (OR 0·68; 95 % CI 0·48, 0·97), dark-coloured vegetables (OR 0·61; 95 % CI 0·43, 0·86) and light-coloured vegetables (OR 0·67; 95 % CI 0·48, 0·95) were associated with reduced failure rate of tuberculosis treatment. No association was found between total fruit intake and reduced failure rate of tuberculosis treatment (OR 0·98; 95 % CI 0·70, 1·37). High intake of total vegetables and fruits, especially vegetables, is associated with lower risk of failure of tuberculosis treatment in pulmonary tuberculosis patients. The results provide important information for dietary guidelines during tuberculosis treatment.
Associations of early-life exposure to famine with abdominal fat accumulation are independent of family history of diabetes and physical activity
Xiang Hu, Junping Wen, Weihui Yu, Lijuan Yang, Wei Pan, Ke Xu, Xueqin Chen, Qianqian Li, Gang Chen, Xuejiang Gu
The present study aimed to investigate the association of early-life exposure to famine with abdominal fat accumulation and function, and further to evaluate the influence of first-degree family history of diabetes and physical activity on this association. The present work analysed parts of the REACTION study. A total of 3033 women were enrolled. Central obesity was defined as waist circumference (W) ≥ 85 cm. The Chinese visceral adiposity index (CVAI) was used to evaluate visceral adipose distribution and function. Partial correlation analysis showed that BMI, W, glycated Hb and CVAI were associated with early-life exposure to famine (all P < 0·05). Logistic regression showed that the risks of overall overweight/obesity and central obesity in the fetal, early-childhood, mid-childhood and late-childhood exposed subgroups were increased significantly (all P < 0·05). Compared with the non-exposed group, the BMI, W and CVAI of the fetal and early- to late-childhood exposed subgroups were significantly increased both in those with or without a first-degree family history of diabetes and in those classified as physically active or inactive, respectively (all P < 0·05). The associations of BMI, W and CVAI with early-life exposure to famine were independent of their associations with first-degree family history of diabetes (all P < 0·01) or physical activity status (all P < 0·001). Early-life exposure to famine contributed to abdominal fat accumulation and dysfunction, independently of the influence of genetic background and exercise habits. Physical activity could serve as a supplementary intervention for women at high risk of central obesity.
Selective catalytic reduction of NOₓ with NH₃ over cerium–tungsten–titanium mixed oxide catalyst: Synergistic promotional effect of H₂O₂ and Ce⁴⁺
Zhi-bo Xiong, Xiao-ke Qu, Yan-ping Du, Cheng-xu Li, Jing Liu, Wei Lu, Shui-mu Wu
Journal: Journal of Materials Research / Volume 35 / Issue 16 / 28 August 2020
Published online by Cambridge University Press: 06 August 2020, pp. 2218-2229
Print publication: 28 August 2020
A highly active cerium–tungsten–titanium mixed oxide catalyst was synthesized by introducing Ce⁴⁺ and H₂O₂ into the base sample Ce20W10Ti100Oz–Ce³⁺. As a consequence, the NH₃-SCR activity of Ce20W10Ti100Oz–Ce³⁺ is significantly improved, as the Ce⁴⁺ and H₂O₂ additives enlarge the Brunauer–Emmett–Teller (BET) surface area by refining the pore size. Meanwhile, the introduction of Ce⁴⁺ increases the Lewis acid sites of Ce20W10Ti100Oz–Ce³⁺ and decreases its low-temperature Brønsted acid sites. The further addition of H₂O₂ improves the Brønsted acid sites and the dispersion of cerium/tungsten species, thereby enhancing the concentrations of adsorbed oxygen (Oα) and adsorbed oxygen (O′α) owing to the activation of chemisorbed water on the catalyst surface. The addition of Ce⁴⁺ and H₂O₂ shows a synergistic promotional effect, attributable to the largest BET surface area and the highest concentrations of Oα and/or O′α. Ce20W10Ti100Oz–Ce³⁺:Ce⁴⁺ = 17.5:2.5 + H₂O₂ exhibits the highest catalytic activity compared with the conventional catalysts (Fig. 5).
In-situ Observation of Magnetic Skyrmion Crystal Growth from the Conical Phase
Tae-Hoon Kim, Haijun Zhao, Ben Xu, Brandt Jensen, Alexander King, Matthew Kramer, Liqin Ke, Lin Zhou
Damage behavior of heterogeneous magnesium matrix nanocomposites
Xi Luo, Xu He, Jinling Liu, Xinxin Zhu, Song Jiang, Ke Zhao, Linan An
Journal: MRS Communications / Volume 10 / Issue 2 / June 2020
Heterogeneous magnesium matrix nanocomposites (Hetero-Mg-NCs) exhibit excellent strength–toughness synergy, but their damage behavior and toughening mechanism have received little investigation. Here, atomic force microscopy was first employed to characterize the microstructure evolution and damage behavior of the Hetero-Mg-NCs after indentation. The heterogeneous structure, comprising pure Mg areas (soft phase) and Mg nanocomposite areas (hard phase), was revealed by electrostatic force microscopy. Furthermore, the surface morphology and cracks of the deformed area were investigated at high resolution. The results indicate that the soft phase undertook most of the deformation and played an important role in capturing and blunting cracks.
Materials Data Science for Microstructural Characterization of Archaeological Concrete
Daniela Ushizima, Ke Xu, Paulo J.M. Monteiro
Journal: MRS Advances / Volume 5 / Issue 7 / 2020
Published online by Cambridge University Press: 24 February 2020, pp. 305-318
Ancient Roman concrete presents exceptional durability, a low-carbon footprint, and interlocking minerals that add cohesion to the final composition. Understanding the structural characteristics of these materials using X-ray tomography (XRT) is of paramount importance for designing future materials with similarly complex heterogeneous structures. We introduce Materials Data Science algorithms centered on image analysis of XRT that support inspection and quantification of microstructure from ancient Roman concrete samples. By using XRT imaging, we access properties of two concrete samples in terms of three different material phases, as well as estimation of materials fractions, visualization of the porous network, and density gradients. These samples present remarkable durability in comparison with concrete made from Portland cement and nonreactive aggregates. Internal structures and their organization might be the key to construction durability, as these samples come from ocean-submersed archaeological findings dated to about two thousand years ago. These are preliminary results that highlight the advantages of using non-destructive 3D XRT combined with computer vision and machine learning methods for systematic characterization of complex and irreproducible materials such as archaeological samples. One significant impact of this work is the ability to reduce the amount of data so that several computations can run on minimal computational infrastructure, in near real time, and potentially during beamtime while materials scientists are still at the imaging facilities.
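A small sketch of the kind of quantification mentioned (phase volume fractions from a segmented XRT volume); the label values are hypothetical, since the paper's segmentation scheme is not detailed here.

```python
import numpy as np

def phase_fractions(labels, phase_names={0: "pore", 1: "binder", 2: "aggregate"}):
    """Volume fraction of each phase in a 3D integer label volume."""
    total = labels.size
    return {name: np.count_nonzero(labels == value) / total
            for value, name in phase_names.items()}
```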
Influenza activity prediction using meteorological factors in a warm temperate to subtropical transitional zone, Eastern China
Wendong Liu, Qigang Dai, Jing Bao, Wenqi Shen, Ying Wu, Yingying Shi, Ke Xu, Jianli Hu, Changjun Bao, Xiang Huo
Published online by Cambridge University Press: 20 December 2019, e325
Influenza activity is subject to environmental factors. Accurate forecasting of influenza epidemics would permit timely and effective implementation of public health interventions, but it remains challenging. In this study, we aimed to develop random forest (RF) regression models including meteorological factors to predict seasonal influenza activity in Jiangsu Province, China. The coefficient of determination (R²) and the mean absolute percentage error (MAPE) were employed to evaluate the models' performance. Three RF models with optimum parameters were constructed to predict influenza-like illness (ILI) activity and influenza A and B (Flu-A and Flu-B) positive rates in Jiangsu. The models for Flu-B and ILI presented excellent performance, with MAPEs <10%. The predicted values of the Flu-A model also matched the real trend very well, although its MAPE reached 19.49% in the test set. The lagged dependent variables were vital predictors in each model. Seasonality was more pronounced in the models for ILI and Flu-A. The modification effects of the meteorological factors and their lagged terms on prediction accuracy differed across the three models, while temperature always played an important role. Notably, atmospheric pressure made a major contribution to ILI and Flu-B forecasting. In brief, RF models performed well in influenza activity prediction. The impacts of meteorological factors on the predictive models for influenza activity are type-specific.
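A minimal scikit-learn sketch of this setup; the features (lagged activity plus meteorological terms) and hyperparameters are illustrative, not the study's tuned values.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

def fit_and_evaluate(X_train, y_train, X_test, y_test):
    """Returns (R^2, MAPE%) on the test set, the two metrics used above."""
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)
    pred = rf.predict(X_test)
    mape = float(np.mean(np.abs((y_test - pred) / y_test))) * 100.0
    return r2_score(y_test, pred), mape
```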
Multilocus sequence typing and clonal population genetic structure of Cyclospora cayetanensis in humans
JUNQIANG LI, YANKAI CHANG, KE SHI, RONGJUN WANG, KANDA FU, SHAN LI, JINLING XU, LITING JIA, ZHENXIN GUO, LONGXIAN ZHANG
Journal: Parasitology / Volume 144 / Issue 14 / December 2017
To investigate the prevalence of Cyclospora cayetanensis in a longitudinal study and to conduct a population genetic analysis, fecal specimens from 6579 patients were collected during the cyclosporiasis-prevalent seasons in two urban areas of central China in 2011–2015. The overall incidence of C. cayetanensis infection was 1·2% (76/6579): 1·6% (50/3173) in Zhengzhou and 0·8% (26/3406) in Kaifeng (P < 0·05), with infections in all age groups (P > 0·05). All the isolates clustered in the C. cayetanensis clade based on phylogenetic analysis of the small subunit ribosomal RNA gene sequence. There were 45 specimens positive for all five C. cayetanensis microsatellite loci, which formed 29 multilocus genotypes (MLGs). The phylogenetic relationships of 54 distinct MLGs (including 25 known reference MLGs), based on the concatenated multilocus sequences, formed three main clusters. A population structure analysis showed that the 79 isolates (including 34 known reference isolates) of C. cayetanensis formed three distinct subpopulations based on allelic profile data. In conclusion, we determined the frequency of C. cayetanensis infection in humans in Henan Province. The clonal population structure of the human C. cayetanensis isolates showed linkage disequilibrium and three distinct subpopulations.
Effect of high-Z dopant on the laser-driven ablative Richtmyer–Meshkov instability
B. Xu, Y. Ma, X. Yang, W. Tang, S. Wang, Z. Ge, Y. Zhao, Y. Ke
The effects of high-Z dopant on the laser-driven ablative Richtmyer–Meshkov instability (RMI) are investigated by theoretical analysis and radiation hydrodynamics simulations. It is found that the oscillation amplitude of ablative RMI depends on the ablation velocity, the blow-off plasma velocity and the post-shock sound speed. Owing to enhancing the radiation at the plasma corona and increasing the radiation temperature at the ablation front, the high-Z dopant in plastic target can significantly increase the ablation velocity and the blow-off plasma velocity, leading to an increase in oscillation frequency and a reduction in oscillation amplitude of the ablative RMI. The high-Z dopant in plastic target is beneficial to reduce the seed of ablative Rayleigh–Taylor instability. These results are helpful for the design of direct drive inertial confinement fusion capsules.
The Effects of the Addition of Dy, Nb, and Ga on Microstructure and Magnetic Properties of Nd2Fe14B/α-Fe Nanocomposite Permanent Magnetic Alloys
Kezhi Ren, Xiaohua Tan, Heyun Li, Hui Xu, Ke Han
Journal: Microscopy and Microanalysis / Volume 23 / Issue 2 / April 2017
We study the effects of Dy, Nb, and Ga additions on the microstructure and magnetic properties of Nd2Fe14B/α-Fe nanocomposites. Dy, Nb, and Ga additions inhibit the growth of the soft magnetic α-Fe phase. Dy and Nb additions are able to refine the microstructure, whereas Ga addition plays only a minor role in prohibiting crystal growth. The magnetic properties are sensitive to Dy, Nb, and Ga additions. The Dy-containing alloy reaches an enhanced intrinsic coercivity of 872 kA/m because Dy partially replaces Nd, forming (Nd, Dy)2Fe14B. Nb addition refines the microstructure and consequently increases the exchange coupling between magnetic grains. The Nd9.5Fe75.4Co5Zr3B6.5Ga0.6 alloy exhibits the highest remanence (0.92 T) due to Ga addition.
Identification of alleles and genotypes of beta-casein with DNA sequencing analysis in Chinese Holstein cow
Ronghua Dai, Yu Fang, Wenjing Zhao, Shuyun Liu, Jinmei Ding, Ke Xu, Lingyu Yang, Chuan He, Fangmei Ding, He Meng
Journal: Journal of Dairy Research / Volume 83 / Issue 3 / August 2016
The study reported in this Regional Research Communication aimed to analyse the genetic polymorphisms of β-casein in Chinese Holstein cows. β-Casein has received considerable research interest in the dairy industry and in animal breeding in recent years, as a source not only of high-quality protein but also of bioactive peptides that may be linked to health effects. Moreover, the polymorphic nature of β-casein and its association with milk production traits, composition, and quality has attracted several efforts to evaluate the allelic distribution of the β-casein locus as a potential dairy trait marker. However, few data on β-casein variants are available for the Chinese Holstein cow. In the present paper, one hundred and thirty-three Holstein cows were included in the analysis. Results revealed the presence of 5 variants (A1, A2, A3, B and I), a preponderance of the genotype A1A2 (0·353), and predominance of the A1/A2 alleles (0·432 and 0·459, respectively) in the population. Sequence analysis of the β-casein gene in these cows showed four nucleotide changes in exon 7. Our study can provide a reference and guidance for the selection of superior milk for industrial applications and for crossbreeding and genetic improvement programmes.
6 - Data visualization and the DDP process
By Ke Xu, Bristol-Myers-Squibb Inc
Edited by William T. Loging, Mount Sinai School of Medicine, New York
Book: Bioinformatics and Computational Biology in Drug Discovery and Development
Print publication: 17 March 2016, pp 114-136
Data visualization denotes the techniques of visually presenting complex data sets to achieve goals such as displaying multiple data dimensions simultaneously, connecting related data points from data sets, or showing data distribution patterns. They are of great value for data processing, data analysis, and data presentation activities.
Genomics and functional genomics are the major driving forces for the development and utilization of visualization tools in biological fields. Following the completion of the genomic sequencing projects for humans and other model organisms around the beginning of this century, our knowledge of genes has jumped to the tens of thousands per species. Expression profiling microarrays can generate millions of data points per experiment. The challenge of the huge data set size and the need to integrate different data sources in analyses prompted significant research and development work by both academic and industrial bioinformaticians. As a result, many visualization methods, proposals, and tools for biological data have been developed thus far. This chapter will describe the problems and solutions for the visualization of the three most basic and largest (and thus most challenging) genomics/functional genomics data types. More specifically, the first two sections will discuss visualization of sequence data and pathway/gene network data, which are two data types specific to genomics and other biology fields. In the third section, we will review visualization methods for numeric data, such as expression profiling data, proteomic data, and genotyping data. Most of the techniques in that section can also be applied to other areas. However, some topics, such as viewing numeric data in the context of genomes or pathways, are still biology-specific.
Sequence and genomes
The genome is the complete set of genetic material for an organism, which includes genes, regulatory and replication-related sequences, as well as non-functional intergenic regions. For most organisms other than RNA viruses, long linear or circular DNA molecules form the biochemical basis of the genome that stores all the genetic information. Visualization of the genome refers to the visual display of DNA sequences and associated annotations. Depending on the visualization purpose, genome visualization tools can be classified into two categories: sequence viewers, for visualizing sequences and annotations, and genome alignment viewers, for comparing different genomes.
Public Health Microbiology
Chlorine Disinfection of Atypical Mycobacteria Isolated from a Water Distribution System
Corinne Le Dantec, Jean-Pierre Duguet, Antoine Montiel, Nadine Dumoutier, Sylvie Dubrou, Véronique Vincent
Corinne Le Dantec
Laboratoire de Référence des Mycobactéries, Institut Pasteur, 75724 Paris Cedex 15
Jean-Pierre Duguet
Société Anonyme de Gestion des Eaux de Paris, Paris Cedex 14
Antoine Montiel
Nadine Dumoutier
Lyonnaise des Eaux, CIRSEE, 78230 Le Pecq
Sylvie Dubrou
Laboratoire d'Hygiène de la Ville de Paris, 75013 Paris, France
Véronique Vincent
For correspondence: [email protected]
DOI: 10.1128/AEM.68.3.1025-1032.2002
We studied the resistance to chlorine of various mycobacteria isolated from a water distribution system. Chlorine disinfection efficiency is expressed as the coefficient of lethality (liters per milligram per minute) as follows, from most to least resistant: Mycobacterium fortuitum (0.02) > M. chelonae (0.03) > M. gordonae (0.09) > M. aurum (0.19). For a C · t value (product of the disinfectant concentration and contact time) of 60 mg · min · liter⁻¹, frequently used in water treatment lines, chlorine disinfection inactivates over 4 log units of M. gordonae and 1.5 log units of M. fortuitum or M. chelonae. C · t values determined under similar conditions show that even the most susceptible species, M. aurum and M. gordonae, are 100 and 330 times more resistant to chlorine than Escherichia coli. We also investigated the effects of different parameters (medium, pH, and temperature) on chlorine disinfection in a chlorine-resistant M. gordonae model. Our experimental results follow the Arrhenius equation, allowing the inactivation rate to be predicted at different temperatures. Our results show that M. gordonae is more resistant to chlorine in low-nutrient media, such as those encountered in water, and that an increase in temperature (from 4°C to 25°C) and a decrease in pH result in better inactivation.
Mycobacteria other than the tubercle bacilli, which are responsible for tuberculosis, are usually referred to as atypical or nontuberculous mycobacteria (NTM). These mycobacteria were originally considered to be unusual Mycobacterium tuberculosis strains. More than 90 NTM species have now been described. Unlike tubercle bacilli, which are obligate pathogens, NTM are ubiquitous (8, 10, 12, 15, 17, 26, 35, 40). They have been recovered from a wide variety of environmental sources, including water, soil, dust, and aerosols. Most are saprophytic, although some are potential pathogens and may be involved in pulmonary or cutaneous diseases or in lymphadenitis (15, 24, 46). Pulmonary infections occur in immunocompetent patients with predisposing lung conditions, such as smoking, chronic obstructive pulmonary disease, pneumoconiosis, and silicosis. Disseminated infections may occur in immunocompromised patients. Before the introduction of protease inhibitors for antiretroviral therapy, disseminated infections due to NTM, especially M. avium, were frequent in AIDS patients. NTM infection is now one of the criteria used to diagnose AIDS in human immunodeficiency virus-positive patients (20).
Patient-to-patient transmission of mycobacterial infections has not been demonstrated, even in severely immunocompromised patients with advanced AIDS hospitalized in the same wards as patients with severe M. avium infections. Infection is thought to be acquired from the environment by ingestion, inhalation, or inoculation. Recently, there has been increasing evidence that water may be the vehicle by which mycobacteria infect or colonize the human body (4, 5, 17, 25, 30, 39, 43-45, 48). Documented evidence has been provided from hospitals and from cases of nosocomial infections. In a dialysis-related outbreak of M. abscessus, a rapidly growing mycobacterial species, the causative organism was recovered from hospital tap water, and the strains isolated from patients and from tap water were matched by molecular typing (50). Similarly, molecular analyses were used to identify M. avium strains recovered from a hospital hot water system and from cultures of blood from patients treated at that hospital. These data showed that hospital hot water supplies can be a source of nosocomial outbreaks of disseminated M. avium disease (43). A recent outbreak of spinal infections due to M. xenopi in patients who had undergone discovertebral surgery was shown to be related to the presence of M. xenopi in the tap water distribution network of a French surgical center (3). The occurrence of mycobacteria in tap water raises the possibility that aerosols carrying mycobacteria, which are typically generated in showers, constitute a route for pulmonary infections. This hypothesis was suggested for clustered cases of pulmonary infections due to M. xenopi in a hospital in which the water network was heavily contaminated with M. xenopi (9). However, hospitals are not the only places where mycobacterial contamination of tap water systems may be a major health issue. The sources of pulmonary infections due to M. kansasii in coal miners (28) and in the residents of a city apartment complex (36) were traced to tap water used for showers.
Mycobacteria have been isolated from public water distribution systems and from various other sites, including hot and cold water taps, ice machines, heated nebulizers, and shower head sprays (8, 12, 15, 17, 21, 23, 33, 44). NTM found in drinking water distribution systems are residents, able to colonize, survive, persist, and grow in tap water, rather than contaminants from another source (6). The resistance of mycobacteria to common disinfectants and their tolerance of a wide range of pHs and temperatures allow them to persist in drinking water systems (6, 8, 15, 29). The mechanisms responsible for the survival of mycobacteria in drinking water are not well understood. A recent European directive addressed water intended for human consumption, i.e., potable water, including drinking water, water for food preparation, and water for other domestic uses (European Union Council Directive 98/83/EC). Therefore, water used for personal hygiene is included in this definition. Thus, skin contact with contaminated water and the inhalation of aerosols generated from contaminated water may be risk factors legally covered by the directive. The European Union Council directive states, in line with World Health Organization guidelines, that drinking water should not contain pathogenic microorganisms in any quantity or concentration able to harm human health. This regulation means that water must be carefully treated and that the response of pathogens to treatment procedures must be carefully evaluated.
To evaluate the efficiency of drinking water treatment against mycobacteria, it is necessary in particular to evaluate the adequacy of disinfection conditions, such as chlorination. We investigated the chlorination efficiency for several species isolated from the water distribution system in Paris, France. In a preliminary study, water samples were collected in 2000 at 12 points along the treatment lines at two treatment plants and in parts of the distribution system. A wide range of species was identified and included M. fortuitum, M. chelonae, M. aurum, M. peregrinum, and M. gordonae. The chlorination efficiency was estimated for various species grown in standard culture medium. M. gordonae was used as a model to test the different parameters affecting chlorination efficiency, including pH, temperature, and the composition of the medium.
(This work is part of a doctoral thesis by C. Le Dantec.)
Bacterial strains and culture conditions. The M. gordonae strain used in this study was isolated from drinking water sampled at the Laboratoire de Référence des Mycobactéries at the Pasteur Institute. The other mycobacterial strains used were isolated from cold public water supplies in Paris. All strains were identified by phenotypic and genotypic methods, including 16S rRNA and/or hsp65 gene analysis by current techniques (13, 31, 41).
Cells were grown in Middlebrook 7H9 liquid medium (Difco Laboratories, West Molesley, Surrey, United Kingdom) containing 10% (vol/vol) oleic acid-albumin enrichment and 0.05% (vol/vol) Tween 80 at their optimal growth temperature (30 or 37°C, depending on the strain) in a rotary shaker (120 rpm). At the end of the exponential phase (optical density at 600 nm, 0.8), the cells were centrifuged at 3,260 × g and washed twice in an equal volume of 0.05 M chlorine demand-free phosphate buffer at pH 7. This buffer was prepared by mixing 420 ml of 0.05 M KH2PO4 with 580 ml of 0.05 M Na2HPO4. The cell pellet was resuspended in 1,500 ml of the same buffer at a density of 10⁵ bacterial cells per ml, and this suspension was used for the chlorine challenge assays.
Hypochlorous acid challenge conditions. A freshly prepared free chlorine stock solution (150 mg/liter) was added to the bacterial suspension at a final concentration of 0.5 mg/liter. After 0, 10, 20, 30, 40, 60, and 120 min of reaction with chlorine at room temperature with gentle shaking, samples (100 ml) were removed and quenched with 100 μl of sterile 0.5 mM sodium thiosulfate at pH 7.0 to stop the action of chlorine. Cultivable bacteria were assayed by spreading on Middlebrook 7H11 solid medium plates after serial dilution in Middlebrook 7H9 liquid medium. Free chlorine (hypochlorous acid and hypochlorite ions) was measured at each time point by the N,N-diethyl-p-phenylenediamine colorimetric method (18). Colonies were counted after 5 days at 37 or 30°C. The data presented are the averages of a minimum of two replicates.
Influence of temperature, pH, and medium composition on M. gordonae inactivation rates. M. gordonae was used as a model to test the different parameters affecting chlorination efficiency for mycobacteria.
To test the effect of the medium composition on inactivation rates, M. gordonae was grown in tap water previously filtered through a 0.45-μm-pore-size filter and supplemented with 10% Middlebrook 7H9 liquid medium. At the end of the exponential phase, the cells were centrifuged, washed, and resuspended as described above.
To test the effect of temperature on inactivation rates, M. gordonae was inactivated with 0.5 mg of chlorine/liter at 4°C (in ice), at 16°C (in a water bath), and at room temperature (25°C). These experiments were carried out at pH 7.0.
The effect of pH on inactivation rates was studied with sterile chlorine demand-free phosphate buffer at pH 6 and pH 8. The pH 6 buffer contained 889 ml of 0.06 M KH2PO4 and 111 ml of 0.06 M Na2HPO4; the pH 8 buffer contained 37 ml of 0.06 M KH2PO4 and 963 ml of 0.06 M Na2HPO4. These experiments were conducted at 25°C.
Reagents. All chemicals used were analytical grade. N,N-Diethyl-p-phenylenediamine and sodium thiosulfate were purchased from Sigma Chemical Co.
Disinfection kinetics. The Chick-Watson law, as formulated by Chick (7) and modified by Watson (47), was used to define the rate of inactivation of mycobacteria: log10(N0/N) = k · C · t. In this equation, N0 is the initial concentration of microorganisms, N is the concentration remaining at time t, C is the concentration of disinfectant, and k is the susceptibility, or lethality, coefficient of the microorganism.
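The integrated form follows from assuming that inactivation is first order in the number of surviving organisms at constant disinfectant concentration (a standard derivation, added here for clarity; it is implicit in, but not spelled out by, the text):

\[ \frac{dN}{dt} = -k'CN \;\;\Rightarrow\;\; \ln\frac{N}{N_0} = -k'Ct \;\;\Rightarrow\;\; \log_{10}\frac{N_0}{N} = k \cdot C \cdot t, \qquad k = \frac{k'}{\ln 10} \]

so the tabulated k simply absorbs the natural-to-decimal logarithm conversion factor (ln 10 ≈ 2.303).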
Because of reactions with residual organic compounds, the concentration of free chlorine decreased as the experiments proceeded (Fig. 1). The curve represents the free chlorine concentration measured at different times during inactivation, with an initial free chlorine concentration of 1.16 mg liter−1.
Schematic representation of the integration calculation of C · t values. The shaded area was used to estimate the integral term of equation 1 as described in Materials and Methods. The initial chlorine concentration was 1.16 mg/liter.
In order to evaluate C · t values accurately, chlorine decay was integrated as a function of time:

\[ \log_{10}(N/N_0) = -k_i \int_0^t C\,dt \qquad (1) \]

In this equation, ki is the inactivation rate constant and the integral is the chlorine concentration decay integrated as a function of time.
The C · t values are represented by the areas under the curve presented in Fig. 1 and were calculated iteratively by the trapezoid rule: (C · t)n = (C · t)n−1 + [(Cn−1 + Cn)/2] · (tn − tn−1). Linear regressions based on the decimal logarithm of the proportion of the initial concentration of mycobacteria remaining at time t (in minutes) were calculated for each strain and used to obtain the k values.
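To make the calculation concrete, the following short Python sketch computes cumulative C · t values by the trapezoid rule and estimates k by linear regression as described above. The time, chlorine, and survival data are illustrative values only, loosely mimicking Fig. 1, not the authors' own records.

import numpy as np

# Sampling times (min) and residual free chlorine (mg/liter); hypothetical.
t = np.array([0.0, 10, 20, 30, 40, 60, 120])
C = np.array([1.16, 1.00, 0.90, 0.82, 0.76, 0.66, 0.50])
# log10(N/N0) survival data at the same time points; hypothetical.
log_survival = np.array([0.0, -0.3, -0.7, -1.0, -1.4, -2.0, -4.0])

# Cumulative C·t by the trapezoid rule:
# (C·t)_n = (C·t)_{n-1} + [(C_{n-1} + C_n)/2] · (t_n − t_{n-1})
Ct = np.concatenate(([0.0], np.cumsum((C[:-1] + C[1:]) / 2 * np.diff(t))))

# k_i is the negative slope of log10(N/N0) versus the integrated C·t.
slope, intercept = np.polyfit(Ct, log_survival, 1)
k = -slope
print("C.t values (mg.min/liter):", Ct.round(1))
print(f"estimated k = {k:.3f} liter/(mg.min)")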
In this study, mycobacteria were inactivated with free chlorine in a phosphate buffer of a constant ionic strength. This procedure created well-controlled and reproducible experimental conditions.
The results are expressed as C · t values and k values. A C · t value is the product of the disinfectant concentration in milligrams per liter and the contact time in minutes required to inactivate target microorganisms at a given temperature and pH. Inactivation levels are usually expressed in log units, and the C · t values required for different levels of inactivation (50, 90, 99.9%, etc.) can be calculated from the k values.
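As a minimal illustration (assuming ideal Chick-Watson kinetics at constant chlorine concentration; the paper's own extrapolated values in Fig. 2B, read from fitted curves, differ slightly), this conversion is a one-line computation:

# Chick-Watson arithmetic: log10(N0/N) = k · C · t
def ct_required(k: float, log_units: float) -> float:
    """C·t (mg·min/liter) needed for a target log10 reduction
    (e.g., 2 for 99%, 3 for 99.9%), given k in liter/(mg·min)."""
    return log_units / k

# k values (liter/(mg·min)) reported in this study
for species, k in [("M. fortuitum", 0.02), ("M. chelonae", 0.03),
                   ("M. gordonae", 0.09), ("M. aurum", 0.19)]:
    print(f"{species}: C.t for 3-log kill ~ {ct_required(k, 3):.0f} mg.min/liter")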
Effects of chlorine on the growth of atypical mycobacteria. In a preliminary study, the occurrence and distribution of mycobacteria at 12 points along the treatment lines at two treatment plants and in parts of the distribution system related to these plants were investigated. An additional sampling point was the tap in our laboratory at the Institut Pasteur. The majority of the samples were positive for mycobacteria. The most frequently isolated species were M. aurum, M. chelonae, M. fortuitum, and M. peregrinum among the rapidly growing species and M. gordonae and M. nonchromogenicum among the slowly growing species. However, 28% of the isolated cultures could not be assigned to any described species. M. aurum, M. chelonae, and M. fortuitum were selected as representatives of rapidly growing species, and M. gordonae was selected as a representative of slowly growing species, to test the susceptibility of mycobacteria to chlorine.
The resistance of mycobacteria to chlorine, expressed as the k value (liters per milligram per minute), was as follows: M. fortuitum (0.02) > M. chelonae (0.03) > M. gordonae (0.09) > M. aurum (0.19); note that a lower k value indicates higher resistance. Thus, M. chelonae and M. fortuitum were the most resistant mycobacterial species, whereas M. aurum was the most susceptible to chlorine (Fig. 2A). To reduce the number of cells by 2 log units (99%), the C · t value required for M. aurum (15 mg · min · liter−1) was 9.5-fold lower than that required for M. fortuitum; in other words, M. fortuitum was 9.5-fold more resistant than M. aurum.
Inactivation of various mycobacterial species with chlorine. (A) The data presented are the averages of a minimum of two replicates. Linear regressions based on the logarithm of the fraction of the original number of mycobacteria remaining at time t (in minutes) for each strain were calculated as shown in Fig. 1 and used to calculate C · t values. For each species tested, the experimental conditions were pH 7, a temperature of 25°C, and an initial chlorine concentration of 0.5 mg/liter. Cells were grown in Middlebrook 7H9-Tween medium. N0, initial number of CFU; N, number of CFU at the time of the assays. (B) Extrapolation of experimental curves to determine C · t values for 3 log units of cell death. Slopes were calculated as follows: M. aurum, y = 0.19x; M. gordonae, y = 0.09x; M. chelonae, y = 0.03x; and M. fortuitum, y = 0.02x. R2, correlation coefficient values.
In water treatment lines, chlorination conditions are very often 0.5 mg of chlorine liter−1 for 2 h, providing a C · t value of 60 mg · min · liter−1. From the k values, it can be calculated that such chlorination could eliminate over 5 log units of M. aurum and 4 log units of M. gordonae but only 1.5 log units of M. fortuitum or M. chelonae (Fig. 2B).
Effects of medium composition on the susceptibility of M. gordonae to free chlorine. M. gordonae, a frequent contaminant of tap water systems with an intermediate inactivation rate constant (k) among the mycobacterial species tested, was selected as a model to test the various chlorination conditions for mycobacterial inactivation.
To examine the impact of growth conditions on mycobacteria, the chlorine susceptibilities of M. gordonae grown in 7H9-Tween medium and in filtered tap water supplemented with 10% 7H9-Tween medium were compared (Fig. 3). It was not possible to test M. gordonae in 100% tap water because it grew extremely poorly and did not yield enough CFU for a valid analysis.
Effects of culture medium on the susceptibility of M. gordonae to free chlorine. C · t values were calculated as described in the legend to Fig. 1. Experimental conditions were pH 7, a temperature of 25°C, and an initial chlorine concentration of 0.5 mg/liter. N0, initial number of CFU; N, number of CFU at the time of the assays. Symbols: ⧫, water plus 10% Middlebrook 7H9-Tween medium (y = 0.01x); ▪, 100% Middlebrook 7H9-Tween medium (y = 0.09x). R2, correlation coefficient values.
Mycobacteria grown in tap water were more resistant to chlorine than cells grown in culture medium. The k value for M. gordonae grown in water supplemented with 10% 7H9-Tween medium was 0.01 liter · mg−1 · min−1, whereas that for M. gordonae grown in 7H9-Tween medium was 0.09 liter · mg−1 · min−1. Growth of M. gordonae in the low-nutrient solution thus markedly increased the resistance of the microorganism to free chlorine.
Effects of temperature on inactivation rates. The effect of temperature on the bactericidal activity of chlorine was tested with M. gordonae (Fig. 4). For a C · t value of 60 mg · min · liter−1, M. gordonae showed less than 1 log10 unit of inactivation at 4°C and less than 1.5 log10 units at 16°C, whereas the number of CFU decreased by 5.5 log units at 25°C (Fig. 4). Chlorination was therefore more efficient at higher temperatures.
Effects of temperature on chlorine inactivation of M. gordonae. Experimental conditions were pH 7 and an initial chlorine concentration of 0.5 mg/liter. Cells were grown in Middlebrook 7H9-Tween medium. The chlorine susceptibility of M. gordonae was analyzed at 4, 16, and 25°C. N0, initial number of CFU; N, number of CFU at each time point. Symbols: ▪, 4°C (y = 0.01x); ▴, 16°C (y = 0.02x); ⧫, 25°C (y = 0.09x). R2, correlation coefficient values.
To determine whether the temperature dependence of inactivation followed simple reaction kinetics, the validity of the Arrhenius expression was checked. For a simple chemical reaction, the dependence of ki on temperature is expressed by the classical Arrhenius equation:

\[ k_i = A \cdot \exp(-E_a / RT) \qquad (2) \]

where A is the frequency factor (with the same units as ki), Ea is the reaction activation energy in joules per mole, R (8.314 J mol−1 K−1) is the ideal gas constant, and T is the absolute temperature in kelvins. The temperature dependence of ki was consistent with the Arrhenius equation (Fig. 5). Linear regression of the data presented in Fig. 5 yielded an activation energy of 9.14 J/mol. In conclusion, these experimental conditions allowed the determination of the M. gordonae inactivation rate at different temperatures according to the Arrhenius equation.
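The fitting procedure can be sketched as follows, using the k values read from Fig. 4. This is an illustration of the standard linearized Arrhenius fit only; the activation energy it prints follows SI conventions and will not reproduce the 9.14 J/mol quoted above, which appears to rest on a different unit convention in the original analysis.

import numpy as np

# M. gordonae inactivation rate constants at three temperatures (Fig. 4)
T_celsius = np.array([4.0, 16.0, 25.0])
k = np.array([0.01, 0.02, 0.09])        # liter/(mg·min)

R = 8.314                               # J/(mol·K)
inv_T = 1.0 / (T_celsius + 273.15)      # 1/T in K^-1

# Arrhenius: ln k = ln A − Ea/(R·T), so ln k is linear in 1/T.
slope, intercept = np.polyfit(inv_T, np.log(k), 1)
Ea = -slope * R                         # activation energy (J/mol)
A = np.exp(intercept)                   # frequency factor, same units as k
print(f"Ea = {Ea:.3g} J/mol, A = {A:.3g} liter/(mg.min)")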
Arrhenius plot of k values versus temperature. The abscissa values (10³/T, where T is the absolute temperature in kelvins) correspond to the temperatures tested in Fig. 4: 3.36 corresponds to 25°C, 3.46 corresponds to 16°C, and 3.61 corresponds to 4°C. Linear regression of the data yielded an activation energy of 9.14 J/mol and a log frequency factor of 1.10 as defined by the Arrhenius equation (equation 2).
Effects of pH on inactivation rates. Free chlorine exists mainly in two forms in aqueous solution, HOCl and OCl−, and the concentration of each form varies in a nonlinear manner with pH. For Escherichia coli, HOCl is more than 50-fold more effective than OCl− as a disinfectant (32). At pH 6.0, HOCl accounts for 98% of free chlorine, whereas at pH 10.0, OCl− accounts for over 99%. Between pH 7.0 and 8.5, the proportions vary rapidly but not linearly, with HOCl decreasing from 83% at pH 7.0 to 14% at pH 8.5. The equilibrium is temperature dependent, with lower temperatures resulting in slightly higher proportions of HOCl.
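These proportions follow from the acid-base equilibrium HOCl ⇌ H+ + OCl−. The sketch below assumes a textbook pKa of about 7.5 at 25°C; the exact value varies with temperature and ionic strength, so the computed percentages differ slightly from those quoted above.

def hocl_fraction(pH: float, pKa: float = 7.5) -> float:
    """Fraction of free chlorine present as HOCl at a given pH.
    Henderson-Hasselbalch: [OCl-]/[HOCl] = 10**(pH - pKa)."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (6.0, 7.0, 8.0, 8.5, 10.0):
    print(f"pH {pH:4.1f}: {100 * hocl_fraction(pH):5.1f}% HOCl")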
The results showed that chlorine disinfection for M. gordonae was more rapid at pH 6 than at pH 7 or pH 8 (Fig. 6). These results reflect the fact that more HOCl is present at a lower pH. After 10 min in the presence of chlorine at pH 6, the numbers of M. gordonae cells decreased by 0.64 log10 units (C · t, 3.4 mg · min · liter−1), compared to 0.15 log10 units (C · t, 6.5 mg · min · liter−1) at pH 8 for the same exposure (Fig. 6). The k value was sixfold higher at pH 6 and pH 7 than at pH 8. In conclusion, increasing pH decreases mycobacterial inactivation rates, highlighting the susceptibility of M. gordonae to HOCl.
Effects of pH on the rate of inactivation of M. gordonae. Experimental conditions were a temperature of 25°C and an initial chlorine concentration of 0.5 mg/liter. Cells were grown in Middlebrook 7H9-Tween medium. N0, initial number of CFU; N, number of CFU at each time point. Experiments were conducted with different phosphate (0.05 M) buffers within the pH range of 6.0 to 8.0. Slopes were calculated as follows: pH 6, y = 0.11x; pH 7, y = 0.09x; and pH 8, y = 0.02x. R2, correlation coefficient values.
The chlorine resistance of mycobacteria isolated from the Parisian water distribution network and upstream internal networks is reported. Various species were identified, including M. fortuitum, M. chelonae, M. aurum, M. peregrinum, and M. gordonae. Interestingly, M. avium was not recovered. Most of the available data on chlorine disinfection of mycobacteria in the literature are based on the susceptibility of M. avium, a focus of interest of numerous studies due to its high clinical impact (14, 16, 17, 43). However, recent extensive studies of widely dispersed drinking water utilities in the United States showed that the frequencies of recovery and the numbers of M. avium in drinking water samples are low (10, 16). In Europe, M. avium is not frequently found in tap water, as shown by a German study in which 1.7% of the samples were positive for M. avium (30). A recent study in Greece also failed to detect M. avium in drinking water distribution systems (42). However, in these studies, other mycobacterial species, including M. chelonae, M. gordonae, and M. fortuitum, were frequently isolated.
Uncombined chlorine, in the form of hypochlorous acid (HOCl), is an extremely potent bactericidal agent, active against most bacteria and viruses even at concentrations of less than 0.1 mg · liter−1 (23). As a direct consequence, chlorination is one of the most widely used methods for the disinfection of water: it is comparatively inexpensive and easy to use, and chlorine remains active within the system for a considerable length of time. However, the chlorination conditions used in water distribution systems are based on the inactivation of several viruses and bacteria, not on that of mycobacterial species or other pathogens, such as parasites. Information pertaining to the effect of hypochlorous acid on atypical mycobacteria is rather limited.
The results of this study are consistent with those of the study by Carson et al. (6), who found that M. chelonae and M. fortuitum are highly resistant to chlorine. Recently, Taylor et al. showed that M. avium strains have C · t values of 51 to 204 mg · min · liter−1 for 3 log units (99.9%) of cell death at pH 7 and 23°C (40). Stewart and Olson stressed the importance of the conditions in which organisms are grown before disinfection experiments (37). In the present study, conditions similar to those used by Taylor et al. (40) were therefore chosen: tests were performed at pH 7 and room temperature (25°C) with strains grown in Middlebrook 7H9 broth. For 3 log units (99.9%) of inactivation, C · t values of 100 and 135 mg · min · liter−1 were calculated for M. chelonae and M. fortuitum, respectively (Fig. 2B). The similarly high C · t values shown for M. avium by Taylor et al. (51 to 204 mg · min · liter−1) are 580 to 2,300 times higher than those for E. coli (40). Comparison of the C · t values showed that even the most susceptible species in this study, M. aurum and M. gordonae, were still 100 and 330 times more resistant to chlorine than E. coli, respectively (40).
A C · t value of 60 mg · min · liter−1 maintained in a chlorination tank is not sufficient against all mycobacterial species, especially M. chelonae and M. fortuitum, as shown in this study, or M. avium, as shown by others (29, 40). Higher chlorine concentrations can be used for disinfection purposes in a private distribution system, for example, in a hospital in which the water supply is highly contaminated with mycobacteria. Because the resistance of mycobacteria to chlorine is species specific, it is difficult to establish standard chlorine concentrations and contact times (C · t) to reduce or eliminate waterborne mycobacteria. It is therefore important to consider the level of chlorine resistance of the mycobacterial species responsible for the contamination when establishing chlorination conditions for disinfection.
All the strains used in this study were waterborne strains, because previous studies showed that clinical and environmental strains display different levels of chlorine resistance (29). Different parameters (medium, pH, and temperature) relating to the chlorine resistance of waterborne M. gordonae were studied. The results showed that inactivation of atypical mycobacteria was more efficient at high temperatures and low pHs. Moreover, when M. gordonae was grown in filtered water supplemented with 10% 7H9 Middlebrook liquid medium, the inactivation rate decreased by a factor of 10. In distribution networks, mycobacteria live at moderate temperatures (16°C on average) and near-neutral pHs (7 to 7.5). This suggests that the chlorine resistance of mycobacteria in fresh tap water was underestimated in this study.
Hypochlorous acid and hypochlorite ions are present simultaneously in water. Hypochlorous acid is the more active of the two components as a disinfectant. The disinfection efficiency increases with temperature, as the reaction rate of hypochlorous acid with bacterial components increases. Haas proposed a model for the inactivation of microorganisms assuming the existence of an intermediate disinfectant-organism complex that governs the rate of microbial inactivation (19).
Moreover, the lower the pH, the greater the proportion of hypochlorous acid. Once again, these experiments with mycobacteria were in agreement with results obtained for Yersinia enterocolitica (27). Hypochlorous acid is a highly destructive oxidant that reacts with various cellular compounds and affects metabolic processes: it alters membrane permeability, inhibits transport, cleaves proteins, and reacts with nucleotides. It also reacts with unsaturated fatty acids, modifying membrane fluidity and permeability. The peculiar structure of the mycobacterial cell wall skeleton partly explains the high resistance of mycobacteria to chlorination. In mycobacteria, the peptidoglycan is covalently linked through an arabinogalactan bridge to mycolic acids, long-chain fatty acids of up to 90 carbon atoms. Mycolic acids confer acid fastness on bacilli and represent a thick, hydrophobic barrier preventing diffusion and lowering permeability (1). Indeed, the current techniques for isolating pure mycobacterial cultures from specimens that contain other microorganisms rely on the greater resistance of mycobacteria to acids, alkalis, and quaternary ammonium ions. Moreover, the external layer is composed of unique constituents noncovalently linked to the cell wall: peptidoglycolipids, glycolipids, lipopolysaccharides, phospholipids, sulfolipids, and nonlipidic molecules such as proteins and polysaccharides (1, 11). The composition of this outer layer is species specific, a fact which may explain the differential resistance of mycobacterial species to chlorination.
The effect of nutrients on resistance to disinfection is the least well understood of these factors. In various organisms, notably Legionella pneumophila, Flavobacterium, and Klebsiella pneumoniae (22, 38), growth under low-nutrient conditions leads to higher chlorine resistance. Similarly, Taylor et al. showed that water-grown M. avium cells were 10-fold more resistant than medium-grown cells (40), consistent with our results for other mycobacterial species. A tentative explanation comes from previous studies demonstrating that the total lipid content of M. phlei doubles when it is grown in the presence of 1.4% sodium acetate (2). The differences in the resistance of mycobacterial species to chlorine could thus be related to the composition of the cell wall, especially the outer layer, which varies from species to species and with growth conditions. Further studies are needed to identify the molecules responsible for resistance to chlorine.
Elimination of Mycobacterium species in a real water distribution system. Our results confirm that mycobacteria are highly resistant to chlorine disinfection and document the optimal parameters for the inactivation of mycobacterial species. However, the test conditions are not necessarily those encountered in distribution systems: they are valid for mycobacteria freely suspended in the water. Mycobacteria have been shown to be able to replicate in biofilms, and solid-liquid interfaces may be regarded as sites of selective enrichment for these bacteria (34, 35). The high degree of cell wall hydrophobicity probably accounts for the strong adhesive properties displayed by mycobacteria and influences the numbers of mycobacteria in biofilms (16). Additional studies must be performed to test the efficiency of disinfection procedures against mycobacteria in biofilms.
The high C · t values necessary for substantial inactivation of mycobacteria (at least 99.9%) are not always attained in the final chlorination tank and in the distribution system. Careful calculation of the real C · t values achieved in complete water treatment lines and distribution systems is necessary for evaluating the level of inactivation of mycobacteria.
Nevertheless, chlorination is not the only effective process. Filtration, clarification, and ozonation also remove microorganisms efficiently and must be investigated in order to evaluate the efficiencies of different treatments. Copper and silver ions are attractive candidates, as this disinfection technique requires little maintenance and provides residual disinfection throughout the distribution system. Ozone disinfection also has advantages, as it reduces taste, odor, and color and oxidizes organic substances; its main drawback is that low-level chlorination remains necessary to maintain the quality of water in distribution systems. Previous studies showed that the C · t values required for a 99.9% reduction (3 log units of killing) in M. avium viability were 1.4 mg · min · liter−1 with copper and silver ions and 0.1 to 0.7 mg · min · liter−1 with ozone (40, 49). These C · t values are low compared to those for chlorine and highlight the interest of these alternative or complementary disinfection methods for mycobacteria. The effects of copper ions and ozone are currently being tested in our laboratory on waterborne isolates of various mycobacterial species.
This work received support from Lyonnaise des Eaux and Société Anonyme de Gestion des Eaux de Paris.
Accepted 28 November 2001.
Asselineau, C., and J. Asselineau. 1978. Lipides spécifiques des mycobactéries. Ann. Microbiol. 129:46-69.
Asselineau, J., and E. Lederer. 1953. Chimie des lipides bactériens. Prog. Chem. Nat. Org. Subst. 10:170-273.
Astagneau, P., N. Desplaces, V. Vincent, V. Chicheportiche, A. H. Botherel, S. Maugat, K. Lebascle, P. Leonard, J. C. Desenclos, J. Grosset, J. M. Ziza, and G. Brücker. 2001. Mycobacterium xenopi spinal infections after disco-vertebral surgery: investigation and screening of a large outbreak. Lancet 358:747-751.
Bolan, G., A. L. Reingold, L. A. Carson, V. A. Silcox, C. L. Woodley, P. S. Hayes, A. W. Hightower, L. McFarland, J. W. d. Brown, and N. J. Petersen. 1985. Infections with Mycobacterium chelonei in patients receiving dialysis and using processed hemodialyzers. J. Infect. Dis. 152:1013-1019.
Campagnaro, R. L., H. Teichtahl, and B. Dwyer. 1994. A pseudoepidemic of Mycobacterium chelonae: contamination of a bronchoscope and autocleaner. Aust. N. Z. J. Med. 24:693-695.
Carson, L. A., N. J. Petersen, M. S. Favero, and S. M. Aguero. 1978. Growth characteristics of atypical mycobacteria in water and their comparative resistance to disinfectants. Appl. Environ. Microbiol. 36:839-846.
Chick, H. 1908. An investigation of the laws of disinfection. J. Hyg. 8:92-158.
Collins, C. H., J. M. Grange, and M. D. Yates. 1984. Mycobacteria in water. J. Appl. Bacteriol. 57:193-211.
Costrini, A. M., D. A. Mahler, W. M. Gross, J. E. Hawkins, R. Yesner, and N. D. D'Esopo. 1981. Clinical and roentgenographic features of nosocomial pulmonary disease due to Mycobacterium xenopi. Am. Rev. Respir. Dis. 123:104-109.
Covert, T. C., M. R. Rodgers, A. L. Reyes, and G. N. Stelma, Jr. 1999. Occurrence of nontuberculous mycobacteria in environmental samples. Appl. Environ. Microbiol. 65:2492-2496.
Daffe, M., and G. Etienne. 1999. The capsule of Mycobacterium tuberculosis and its implications for pathogenicity. Tuber. Lung Dis. 79:153-169.
Dailloux, M., C. Laurain, M. Weber, and P. Hartemann. 1999. Water and nontuberculous mycobacteria. Water Res. 33:2219-2228.
David, L. H., V. Levy-Frebault, and F. Papa. 1986. Méthodes de laboratoire pour Mycobacteriologie clinique. Unité de la Tuberculose et des Mycobactéries, Institut Pasteur, Paris, France.
du Moulin, G. C., K. D. Stottmeier, P. A. Pelletier, A. Y. Tsang, and J. Hedley-Whyte. 1988. Concentration of Mycobacterium avium by hospital hot water systems. JAMA 260:1599-1601.
Falkinham, J. O., III. 1996. Epidemiology of infection by nontuberculous mycobacteria. Clin. Microbiol. Rev. 9:177-215.
Falkinham, J. O., III, C. D. Norton, and M. W. LeChevallier. 2001. Factors influencing numbers of Mycobacterium avium, Mycobacterium intracellulare, and other mycobacteria in drinking water distribution systems. Appl. Environ. Microbiol. 67:1225-1231.
Goslee, S., and E. Wolinsky. 1976. Water as a source of potentially pathogenic mycobacteria. Am. Rev. Respir. Dis. 113:287-292.
Greenberg, A. E., L. S. Clesceri, and A. D. Eaton. 1992. Standard methods for the examination of water and wastewater, 18th ed. American Public Health Association, Washington, D.C.
Haas, C. N. 1980. A mechanistic kinetic model for chlorine disinfection. Env. Sci. Technol. 14:339-340.
Inderlied, C. B., C. A. Kemper, and L. E. Bermudez. 1993. The Mycobacterium avium complex. Clin. Microbiol. Rev. 6:266-310.
Laussucq, S., A. L. Baltch, R. P. Smith, R. W. Smithwick, B. J. Davis, E. K. Desjardin, V. A. Silcox, A. B. Spellacy, R. T. Zeimis, H. M. Gruft, et al. 1988. Nosocomial Mycobacterium fortuitum colonization from a contaminated ice machine. Am. Rev. Respir. Dis. 138:891-894.
LeChevallier, M. W., C. D. Cawthon, and R. G. Lee. 1988. Factors promoting survival of bacteria in chlorinated water supplies. Appl. Environ. Microbiol. 54:649-654.
Ludovici, P. P., R. A. Phillips, and W. S. Jeter. 1977. Comparative inactivation of bacteria and viruses in tertiary-treated wastewater by chlorination, p. 359-390. In J. D. Johnson (ed.), Disinfection: water and wastewater. Ann Arbor Science Publishers, Ann Arbor, Mich.
Metchock, B. G., F. S. Nolte, and R. J. Wallace, Jr. 1999. Mycobacterium, p.399-437. In P. R. Murray, E. J. Baron, M. A. Pfaller, F. C. Tenover, and R. H. Yolken (ed.), Manual of clinical microbiology, 7th ed. American Society for Microbiology, Washington, D.C.
Nolan, C. M., P. A. Hashisaki, and D. F. Dundas. 1991. An outbreak of soft-tissue infections due to Mycobacterium fortuitum associated with electromyography. J. Infect. Dis. 163:1150-1153.
Pankhurst, C. L., N. W. Johnson, and R. G. Woods. 1998. Microbial contamination of dental unit waterlines: the scientific argument. Int. Dent. J. 48:359-368.
Paz, M. L., M. V. Duaigues, A. Hanashiro, M. D'Aquino, and P. Santini. 1993. Antimicrobial effect of chlorine on Yersinia enterocolitica. J. Appl. Bacteriol. 75:220-225.
Pelikan, M., Z. Mikova, J. Kaustova, and M. Kubin. 1973. Supply water as a probable cause of transfer of air-borne infections by atypical mycobacteria. Ceskoslovenska Hygiena 18:316-323.
Pelletier, P. A., G. C. du Moulin, and K. D. Stottmeier. 1988. Mycobacteria in public water supplies: comparative resistance to chlorine. Microbiol. Sci. 5:147-148.
Peters, M., C. Muller, S. Rusch-Gerdes, C. Seidel, U. Gobel, H. D. Pohle, and B. Ruf. 1995. Isolation of atypical mycobacteria from tap water in hospitals and homes: is this a possible source of disseminated MAC infection in AIDS patients? J. Infect. 31:39-44.
Rogall, T., T. Flohr, and E. C. Bottger. 1990. Differentiation of Mycobacterium species by direct sequencing of amplified DNA. J. Gen. Microbiol. 136:1915-1920.
Scarpino, P. V., G. Berg, L. S. Chang, D. Dahling, and M. Lucas. 1972. A comparative study of the inactivation of viruses in water by chlorine. Water Res. 6:959-965.
Schulze-Robbecke, R., C. Feldmann, R. Fischeder, B. Janning, M. Exner, and G. Wahl. 1995. Dental units: an environmental study of sources of potentially pathogenic mycobacteria. Tuber. Lung Dis. 76:318-323.
Schulze-Robbecke, R., and R. Fischeder. 1989. Mycobacteria in biofilms. Zentbl. Hyg. Umweltmed. 188:385-390.
Schulze-Robbecke, R., B. Janning, and R. Fischeder. 1992. Occurrence of mycobacteria in biofilm samples. Tuber. Lung Dis. 73:141-144.
Slosarek, M., M. Kubin, and M. Jaresova. 1993. Water-borne household infections due to Mycobacterium xenopi. Cent. Eur. J. Public Health 1:78-80.
Stewart, M. H., and B. H. Olson. 1986. Mechanisms of bacterial resistance to inorganic chloramines. Water Qual. Technol. Conf. Proc. 14:577-590.
Stewart, M. H., and B. H. Olson. 1992. Physiological studies of chloramine resistance developed by Klebsiella pneumoniae under low-nutrient growth conditions. Appl. Environ. Microbiol. 58:2918-2927.
Stine, T. M., A. A. Harris, S. Levin, N. Rivera, and R. L. Kaplan. 1987. A pseudoepidemic due to atypical mycobacteria in a hospital water supply. JAMA 258:809-811.
Taylor, R. H., J. O. Falkinham III, C. D. Norton, and M. W. LeChevallier. 2000. Chlorine, chloramine, chlorine dioxide, and ozone susceptibility of Mycobacterium avium. Appl. Environ. Microbiol. 66:1702-1705.
Telenti, A., F. Marchesi, M. Balz, F. Bally, E. C. Bottger, and T. Bodmer. 1993. Rapid identification of mycobacteria to the species level by polymerase chain reaction and restriction enzyme analysis. J. Clin. Microbiol. 31:175-178.
Tsintzou, A., A. Vantarakis, O. Pagonopoulou, A. Athanassiadou, and M. Papapetropoulou. 2000. Environmental mycobacteria in drinking water before and after replacement of the water distribution network. Water Air Soil Pollut. 120:273-282.
Von Reyn, C. F., J. N. Maslow, T. W. Barber, J. O. Falkinham III, and R. D. Arbeit. 1994. Persistent colonisation of potable water as a source of Mycobacterium avium infection in AIDS. Lancet 343:1137-1141.
Wallace, R. J., Jr., B. A. Brown, and D. E. Griffith. 1998. Nosocomial outbreaks/pseudo-outbreaks caused by nontuberculous mycobacteria. Annu. Rev. Microbiol. 52:453-490.
Wallace, R. J., Jr., J. M. Musser, S. I. Hull, V. A. Silcox, L. C. Steele, G. D. Forrester, A. Labidi, and R. K. Selander. 1989. Diversity and sources of rapidly growing mycobacteria associated with infections following cardiac surgery. J. Infect. Dis. 159:708-716.
Wallace, R. J., Jr., J. M. Swenson, V. A. Silcox, R. C. Good, J. A. Tschen, and M. S. Stone. 1983. Spectrum of disease due to rapidly growing mycobacteria. Rev. Infect. Dis. 5:657-679.
Watson, H. E. 1908. A note on the variation of the rate of disinfection with change in the concentration of the disinfectant. J. Hyg. 8:536.
Wenger, J. D., J. S. Spika, R. W. Smithwick, V. Pryor, D. W. Dodson, G. A. Carden, and K. C. Klontz. 1990. Outbreak of Mycobacterium chelonae infection associated with use of jet injectors. JAMA 264:373-376.
Lin, Y. E., R. D. Vidic, J. E. Stout, C. A. McCartney, and V. L. Yu. 1998. Inactivation of Mycobacterium avium by copper and silver ions. Water Res. 32:1997-2000.
Zhang, Y., M. Rajagopalan, B. A. Brown, and R. J. Wallace, Jr. 1997. Randomly amplified polymorphic DNA PCR for comparison of Mycobacterium abscessus strains from nosocomial outbreaks. J. Clin. Microbiol. 35:3132-3139.
Applied and Environmental Microbiology Mar 2002, 68 (3) 1025-1032; DOI: 10.1128/AEM.68.3.1025-1032.2002